Abdelkareem Bedri

For an updated list of publications, see my Google Scholar profile.

Projects

EarBit

Chronic and widespread diseases such as obesity, diabetes, and hypercholesterolemia require patients to monitor their food intake, and food journaling is currently the most common method for doing so. However, food journaling is subject to self-bias and recall errors, and patients adhere to it poorly. As an alternative, we introduce EarBit, a wearable system that detects eating moments. We evaluate the performance of inertial, optical, and acoustic sensing modalities and focus on inertial sensing because of its recognition and usability performance. Using data collected in a simulated home setting with minimal restrictions on participants' behavior, we build our models and evaluate them with an unconstrained outside-the-lab study. For both studies, we obtained video footage as ground truth for participants' activities. Using leave-one-user-out validation, EarBit recognized all the eating episodes in the semi-controlled lab study and achieved an accuracy of 90.1% and an F1-score of 90.9% in detecting chewing instances. In the unconstrained, outside-the-lab evaluation, EarBit obtained an accuracy of 93% and an F1-score of 80.1% in detecting chewing instances. It also accurately recognized all but one of the recorded eating episodes, which ranged from a 2-minute snack to a 30-minute meal.
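The evaluation above hinges on leave-one-user-out cross-validation, where each participant's data is held out in turn. Below is a minimal Python sketch of that protocol; it assumes windowed inertial features have already been extracted, and the random-forest classifier is an illustrative stand-in rather than EarBit's published pipeline.

    # Leave-one-user-out evaluation sketch for chewing detection.
    # Assumes X holds per-window inertial features; the classifier
    # choice is an assumption, not EarBit's exact model.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.metrics import accuracy_score, f1_score

    def evaluate_louo(X, y, user_ids):
        """X: (n_windows, n_features) feature matrix.
        y: 1 if the window contains chewing, else 0.
        user_ids: participant id per window (the CV group)."""
        logo = LeaveOneGroupOut()
        accs, f1s = [], []
        for train_idx, test_idx in logo.split(X, y, groups=user_ids):
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            accs.append(accuracy_score(y[test_idx], pred))
            f1s.append(f1_score(y[test_idx], pred))
        # Average performance over held-out participants.
        return float(np.mean(accs)), float(np.mean(f1s))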

TapSkin

The touchscreen has been the dominant input surface for smartphones and smartwatches. However, a smartwatch's screen is much smaller than a phone's, which limits the richness of the input gestures it can support. We present TapSkin, an interaction technique that recognizes up to 11 distinct tap gestures on the skin around the watch using only the inertial sensors and microphone on a commodity smartwatch. An evaluation with 12 participants shows our system can provide classification accuracies from 90.69% to 97.32% across three gesture families: number pad, d-pad, and corner taps. We discuss the opportunities and remaining challenges for widespread use of this technique to increase input richness on a smartwatch without requiring further on-body instrumentation.
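TapSkin's core idea is fusing inertial and acoustic evidence about each tap. The Python sketch below shows one plausible feature pipeline around a detected tap; the specific features, window handling, and SVM classifier are assumptions for illustration, not TapSkin's exact implementation.

    # Hypothetical sensor-fusion features for classifying tap location
    # from short accelerometer, gyroscope, and microphone windows.
    import numpy as np
    from sklearn.svm import SVC

    def tap_features(accel, gyro, audio):
        """accel, gyro: (n_samples, 3) windows around a tap; audio: 1-D window."""
        feats = []
        for sig in (accel, gyro):
            # Simple per-axis time-domain statistics.
            feats += [sig.mean(axis=0), sig.std(axis=0), np.abs(sig).max(axis=0)]
        # Coarse spectral shape of the tap sound.
        spectrum = np.abs(np.fft.rfft(audio))
        bands = np.array_split(spectrum, 16)
        feats.append(np.array([b.mean() for b in bands]))
        return np.concatenate(feats)

    def train_classifier(features, labels):
        """features: (n_taps, n_features); labels: one of 11 tap gestures."""
        return SVC(kernel="rbf").fit(features, labels)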


Inner Ear Interface - 2014

The Inner Ear Interface (IEI) identifies pre-trained phrases a user articulates without voicing them. Current investigations suggest that an earbud-style sensor measuring air pressure changes and deformation of the ear canal while the user speaks may be sufficient for detecting and distinguishing articulated, but unvoiced, phrases.
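The write-up above does not specify the matching algorithm, so as one plausible baseline, the Python sketch below identifies a phrase by comparing an incoming ear-canal pressure trace against pre-recorded templates with dynamic time warping (DTW); the algorithm choice is an assumption for illustration only.

    # DTW template matching over 1-D ear-canal pressure traces
    # (an illustrative baseline, not necessarily what IEI uses).
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def identify_phrase(signal, templates):
        """templates: dict mapping phrase -> list of recorded traces.
        Returns the phrase whose nearest template is closest under DTW."""
        return min(templates,
                   key=lambda p: min(dtw_distance(signal, t) for t in templates[p]))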

2.5 million Americans use Augmentative and Alternative Communication (AAC) aids to communicate. They include people with cerebral palsy, craniofacial abnormalities, stroke, brain injury, multiple sclerosis, Parkinson's disease, or ALS, as well as members of the Deaf community when communicating with the hearing. AAC devices tend to be significantly slower than speech, and their users may have additional movement disabilities that limit use of the device. IEI may prove a faster and cheaper communication and control alternative for some of these AAC users.

Military personnel may also desire silent speech interfaces for situations that are highly noisy (aircraft cockpits or active ground combat) or require stealth (special operations or underwater warfare). Glass's head-up display and gesture sensing suggest a silent speech interface where a user mouths a command or signal, verifies it on the display, and then executes it with a head motion. While many wearable computer users desire the speed, "natural interaction," and hands-free nature of a speech interface, these same users find voicing in public socially awkward. Unvoiced speech may be a viable alternative if it can be sensed in a reasonable manner, as the IEI suggests.


Automatic Grading System for ASL Proficiency Test - 2013

Ninety percent of Deaf children are born into hearing families. Most of these children do not receive appropriate exposure to sign language or any other formal communication method at an early age, which delays the age at which they acquire their first language. All of the major theories of language acquisition, despite their diversity, acknowledge that such delays cause harm: relations with caregivers are disrupted, cognition is hindered, and the chance for a productive and fulfilling life is diminished. To address this problem and provide appropriate intervention, linguists need to thoroughly assess the language proficiency of Deaf children. Sign language proficiency assessment tools have been developed, but a shortage of highly skilled personnel to carry out the assessments hinders the evaluation process. Employing Automated Sign Language Recognition (ASLR) systems to perform the proficiency assessment is therefore a potential alternative.

In this project, we are designing and implementing a fully automated grading system for the American Sign Language Assessment Instrument (ASLAI). Our system uses the Microsoft Kinect sensor for sign capture and Hidden Markov Models (HMMs) for sign classification. Preliminary results show high potential for this approach, with more than 95% accuracy achieved.
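A common way to apply HMMs to sign classification, sketched below in Python with hmmlearn, is to train one model per sign on sequences of Kinect skeleton features and label a new sequence by the highest-scoring model. The state count, feature layout, and library choice here are illustrative assumptions, not necessarily this project's exact setup.

    # Per-sign Gaussian HMMs: fit one model per sign, classify by
    # maximum log-likelihood. Feature extraction from Kinect skeletons
    # is assumed to have happened upstream.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_sign_models(sequences_by_sign, n_states=5):
        """sequences_by_sign: {sign: [(T_i, n_features) arrays]}."""
        models = {}
        for sign, seqs in sequences_by_sign.items():
            X = np.vstack(seqs)              # stacked observations
            lengths = [len(s) for s in seqs] # per-sequence lengths
            m = GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=20)
            m.fit(X, lengths)
            models[sign] = m
        return models

    def classify(sequence, models):
        # Pick the sign whose HMM assigns the highest log-likelihood.
        return max(models, key=lambda s: models[s].score(sequence))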


2D Haptic Interface Using Texture Differences

Haptic channels can be added to enrich human-machine interaction. In this research, we investigated a novel method that enables users to explore 2D shapes through tactile perception. The method builds on the human ability to sense the boundary between 2D objects that have different textures; we use this ability for contour tracking and shape identification of simulated objects. A tactile display was designed and implemented specifically for this purpose to generate the required textures and compliance levels; it comprises a 2x2 array of voice coils attached to a pantograph mechanism.

Subjects adapted quickly to this method of presenting textures and attained higher tracking accuracy than with combined pressure and slippage force stimulation. The method achieved good results in identifying a number of shapes, although improving recognition, particularly of rounded shapes, would require an increase in the system's resolution. Finally, a preliminary investigation studied the method's capability to represent metaphoric colors: after a training period of up to five minutes, all subjects could reliably detect a set of three colors in six different conditions. These results suggest that the method offers more features than other approaches to 2D shape exploration and can achieve high accuracy, making it a candidate for use in fields such as human-computer interaction, scientific visualization, assistive technology, and gaming.
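As a concrete illustration of the texture-boundary idea, the Python sketch below drives the voice coils at one vibration frequency when the pantograph end-effector is inside a simulated shape and another when it is outside, so the user feels the contour as a texture change. The shape test, frequencies, and drive-signal form are all hypothetical, not the display's actual control code.

    # Hypothetical texture rendering: inside vs. outside a virtual
    # shape maps to two vibration frequencies on the 2x2 coil array.
    import math

    INSIDE_HZ, OUTSIDE_HZ = 250.0, 60.0  # assumed texture frequencies

    def inside_circle(x, y, cx=0.0, cy=0.0, r=0.03):
        """Membership test for a simulated circular shape (meters)."""
        return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

    def coil_drive(x, y, t, amplitude=1.0):
        """Drive signal for all four coils at end-effector position
        (x, y) and time t (seconds): the texture switches at the
        shape boundary, which is what the user tracks."""
        freq = INSIDE_HZ if inside_circle(x, y) else OUTSIDE_HZ
        return amplitude * math.sin(2.0 * math.pi * freq * t)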

