AlterEgo is a wearable peripheral neural interface that allows humans to converse in natural language with machines, artificial intelligence assistants, services, and other people by articulating words internally, without any voice or observable movement. Audio feedback via bone conduction makes the system closed-loop without disrupting the user's usual auditory perception. While people with speech disorders are the most immediate user base, AlterEgo can also augment everyday human-machine and human-human interaction for all users.
I am developing sequence-to-sequence deep learning models to expand the vocabulary of the interface, enabling fast, inconspicuous, and high-bandwidth information transfer.
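The sequence-to-sequence structure can be sketched as follows: an encoder folds a sequence of signal-feature frames into a context vector, and a decoder emits word tokens autoregressively. This is a minimal illustrative sketch with random, untrained weights; the vocabulary, dimensions, and recurrent architecture are assumptions for demonstration, not the actual AlterEgo model.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<sos>", "<eos>", "yes", "no", "call", "reply"]  # hypothetical vocabulary
N_FEAT, HIDDEN = 8, 16  # input feature dim, hidden state dim (illustrative)

# Random untrained parameters; a real model would be learned end to end.
W_enc = rng.normal(0, 0.1, (HIDDEN, N_FEAT + HIDDEN))
W_dec = rng.normal(0, 0.1, (HIDDEN, len(VOCAB) + HIDDEN))
W_out = rng.normal(0, 0.1, (len(VOCAB), HIDDEN))

def encode(frames):
    """Fold a (T, N_FEAT) sequence of signal frames into one context vector."""
    h = np.zeros(HIDDEN)
    for x in frames:
        h = np.tanh(W_enc @ np.concatenate([x, h]))
    return h

def decode(h, max_len=5):
    """Greedy autoregressive decoding of word tokens from the context vector."""
    token, out = VOCAB.index("<sos>"), []
    for _ in range(max_len):
        one_hot = np.eye(len(VOCAB))[token]
        h = np.tanh(W_dec @ np.concatenate([one_hot, h]))
        token = int(np.argmax(W_out @ h))
        if VOCAB[token] == "<eos>":
            break
        out.append(VOCAB[token])
    return out

frames = rng.normal(size=(20, N_FEAT))  # stand-in for neuromuscular features
words = decode(encode(frames))
```

The key property this sketch shows is that the output length is decoupled from the input length, which is what lets a seq2seq model map variable-length signal windows to variable-length word sequences.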
I designed a convolutional neural architecture to classify minimally articulated words for multiple sclerosis patients with speech disorders.
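A convolutional classifier of this kind can be sketched as a 1-D convolution over multichannel signal windows, followed by pooling and a softmax. The channel count, window size, filter shapes, and command set below are hypothetical placeholders, not the architecture used in the project; the weights are random and untrained.

```python
import numpy as np

rng = np.random.default_rng(1)

CLASSES = ["yes", "no", "select", "back"]  # hypothetical command set
WIN, CH = 256, 4   # samples per window, electrode channels (assumed)
K, F = 9, 12       # kernel width, number of conv filters (assumed)

W_conv = rng.normal(0, 0.1, (F, CH, K))
W_fc = rng.normal(0, 0.1, (len(CLASSES), F))

def conv1d(x, w):
    """Valid-mode 1-D convolution across time, one output row per filter."""
    T = x.shape[1] - w.shape[2] + 1
    out = np.empty((w.shape[0], T))
    for f in range(w.shape[0]):
        for t in range(T):
            out[f, t] = np.sum(x[:, t:t + w.shape[2]] * w[f])
    return out

def classify(window):
    """Map a (CH, WIN) signal window to class probabilities."""
    feat = np.maximum(conv1d(window, W_conv), 0.0)  # conv + ReLU
    pooled = feat.mean(axis=1)                      # global average pooling
    logits = W_fc @ pooled
    e = np.exp(logits - logits.max())               # stable softmax
    return e / e.sum()

probs = classify(rng.normal(size=(CH, WIN)))
```

Global average pooling is used here so the classifier tolerates small temporal shifts in where the articulation occurs inside the window, a common design choice for physiological-signal classification.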
I designed custom electronics for high-resolution data acquisition, along with signal-preprocessing software that yields cleaner data.
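A typical preprocessing chain for such signals removes DC drift, suppresses mains interference, and normalizes amplitude. The sketch below shows one numpy-only way to do this with a crude FFT-based notch; the sampling rate, notch frequency, and the `preprocess` helper are illustrative assumptions, not the actual AlterEgo software.

```python
import numpy as np

FS = 250.0  # assumed sampling rate in Hz

def preprocess(x, mains_hz=60.0, width_hz=1.0):
    """Clean one channel: remove DC offset, notch out mains hum, normalize."""
    x = x - x.mean()                                   # remove DC offset
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    spec[np.abs(freqs - mains_hz) < width_hz] = 0.0    # crude FFT notch
    x = np.fft.irfft(spec, n=len(x))
    return x / (x.std() + 1e-12)                       # unit variance

# Synthetic test signal: DC offset + 10 Hz content + strong 60 Hz hum.
t = np.arange(1000) / FS
raw = 0.5 + np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 60 * t)
clean = preprocess(raw)
```

In a deployed system an IIR notch plus bandpass filter would usually replace the FFT zeroing, since those run sample-by-sample in real time; the FFT version is used here only because it makes the operation easy to see.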