Personalized Active Learner (PAL) is a closed-loop, context-aware, user-centric wearable device that leverages Artificial Intelligence and Biotechnology for Memory Augmentation, Language Learning, and Behavior Change.
PAL aims to enable people to design their lives (i.e., inspire behavior change) to optimize their cognitive, physical, and emotional well-being, while providing a holistic view of their daily activities and physiology. People's actions deeply influence their internal and external bodily states, which in turn shape their subsequent actions. PAL aims to help people become aware of the correlations between their activities and internal states, so that better behavioral awareness can drive intrinsically motivated behavior change: for example, how a user's sleeping, eating, social, or meditation patterns correlate with their electrophysiology (e.g., heart rate and brain waves), their hormones, or even their epigenetic changes, so that users can be more self-aware and consciously choose the activities that serve them best.
I designed custom hardware for the wearable, housing an unobtrusive camera along with utilities such as audio I/O (a microphone, and a bone-conduction transducer or earphones), embedded electronics, and a battery. I employed a modular, plug-and-play design so the device easily conforms across genders and accommodates varied user preferences. The modular design also allows charging the device independently via a snap-on battery module. I also designed an inconspicuous touch interface in a wearable form factor: a ring.
The hardware is central to Object Detection, Scene Description, Face Recognition, Food Detection, Activity Detection, and Physiological Sensing, and is robust and comfortable enough to carry out in-the-wild, longitudinal experiments, as opposed to restricted in-the-lab settings.
I designed custom electronics with BLE (Bluetooth Low Energy) and Cloud connectivity for:
Real-time on-device Object Detection and Scene Description for Language Learning.
Hardware-accelerated (using a Vision Processing Unit) low-shot machine learning for Face Recognition, training the model on-device and offline, in real time, from a single data point, for 'personalized' Memory Augmentation.
Real-time Activity Detection with zero-shot machine learning using the camera, IMU, GPS, and date and time for Contextual Awareness.
High-Resolution Physiology data acquisition and processing.
A Touch-Ring with BLE connectivity, whose design also allows integrating EDA (Electrodermal Activity) sensing into the same ring.
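To make the Contextual Awareness idea concrete, here is a minimal, illustrative sketch of fusing camera object labels, IMU motion energy, and time of day into a coarse activity label. The function name, labels, and thresholds are all hypothetical stand-ins, not the actual on-device pipeline.

```python
def infer_activity(objects, motion_rms, hour):
    """Toy contextual fusion: map detected object labels, IMU motion
    energy (RMS of accelerometer), and hour of day to an activity label.
    All rules and thresholds here are illustrative only."""
    if "plate" in objects or "fork" in objects:
        return "eating"
    if motion_rms > 1.5:                      # high accelerometer energy
        return "exercising"
    if "laptop" in objects and 9 <= hour < 18:
        return "working"
    if motion_rms < 0.1 and hour >= 22:       # still, late at night
        return "sleeping"
    return "unknown"
```

A real system would replace these hand-written rules with learned models, but the same fusion structure applies: each sensing modality narrows the space of plausible activities.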
Object Detection and Face Recognition:
I developed accelerated object detection and face recognition (using HOG and SVM, with active learning) on ambulatory, real-world data, for language learning and memory augmentation.
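The HOG-plus-SVM recognition step can be sketched as follows. This is a minimal illustration using scikit-image and scikit-learn as stand-ins for the on-device implementation; the images here are synthetic placeholders, not real face crops.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(gray_img):
    """Histogram-of-oriented-gradients descriptor for one grayscale crop."""
    return hog(gray_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Placeholder data: stand-in 64x64 "face crops"; in the real pipeline
# these would be aligned crops from the wearable's camera.
rng = np.random.default_rng(0)
faces = rng.random((6, 64, 64))
labels = ["alice", "alice", "alice", "bob", "bob", "bob"]

X = np.array([hog_features(f) for f in faces])
clf = LinearSVC().fit(X, labels)

# A linear SVM over HOG features is cheap enough that refitting with a
# few newly labeled examples (the active-learning loop) is fast.
```

Active learning then selects the frames the classifier is least confident about and asks the user to label those, so the model improves with minimal annotation effort.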
I built an automated pipeline for signal processing and feature extraction from physiological (SCG and EEG) data. This helps correlate users' physiology with their activities and provide timely feedback, taking a step closer to examining the 'causes' of human health rather than focusing only on the 'cure'. The activity detection pipeline can also confirm the effectiveness of the feedback and facilitate dynamic, adaptive interventions: feedback for the feedback system.
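A representative feature-extraction step for EEG is band power, e.g., alpha-band (8-12 Hz) power as a correlate of relaxed states. The sketch below, assuming SciPy and a synthetic 10 Hz test signal, shows one way such a feature could be computed; it is illustrative, not the project's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def band_power(signal, fs, low, high):
    """Bandpass-filter the signal, then return mean power in [low, high] Hz
    from a Welch power spectral density estimate."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    freqs, psd = welch(filtered, fs=fs, nperseg=fs)  # 1 Hz resolution
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Synthetic "EEG": a 10 Hz alpha oscillation plus noise, sampled at 256 Hz.
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(1).standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)
beta = band_power(eeg, fs, 13, 30)
```

For the synthetic signal above, alpha power dominates beta power, as expected for a 10 Hz oscillation. Features like these, computed per time window, are what get correlated with the detected activities.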
I wrote the firmware for the ring to sense and transmit combinations of touch triggers, such as single, double, or triple taps and short or long presses, over Bluetooth.
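The gesture-classification logic in such firmware can be sketched in a few lines. This Python model of the state machine is illustrative; the function name, event format, and timing thresholds are assumptions, not the ring's actual firmware values.

```python
def classify_touch(events, tap_max=0.3, gap_max=0.4):
    """Classify a burst of touch events into a gesture.

    events: list of (press_time, release_time) tuples, in seconds, from
    the touch sensor. Thresholds are illustrative placeholders."""
    durations = [release - press for press, release in events]
    if len(events) == 1:
        return "long_press" if durations[0] > tap_max else "single_tap"
    # Multiple touches: count them as taps if each touch is short and
    # the gaps between consecutive touches are small.
    gaps = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    if all(d <= tap_max for d in durations) and all(g <= gap_max for g in gaps):
        return {2: "double_tap", 3: "triple_tap"}.get(len(events), "multi_tap")
    return "unknown"
```

On the real device, the classified gesture would be encoded into a BLE characteristic and notified to the paired phone.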
I developed an Android app, which can also be synced with the Touch-Ring, for self-triggered, user-desired interventions. The app lets the user maintain personal self-help libraries of playable quotes, songs, audio recordings, and videos.