The Stanford-designed app was trained on a database of facial photos of people exhibiting a range of expressions
In a new study, scientists at Stanford University developed an approach that uses Google Glass smart glasses to recognize emotions. A paper on the research was published in the journal npj Digital Medicine in July 2018. Children with Autism Spectrum Disorder (ASD) face challenges in identifying other people's emotions from their facial expressions, which can create communication problems. Conventionally, therapists use flash cards of faces to teach children with ASD to recognize different emotions. As part of the new study, children with autism wore Google Glass headsets, which were wirelessly connected to a machine learning-based app on a smartphone.
The app works by analyzing the view from the glasses' forward-facing camera, gauging the expressions of the people with whom the children are interacting. It recognizes eight core facial expressions: happiness, sadness, anger, disgust, surprise, fear, neutrality and contempt. As part of the study, 14 families with ASD children aged 3 to 17 used the system at home for at least three 20-minute sessions per week, over an average course of 10 weeks. It could be used in a 'free play' mode, in which it simply identified other people's expressions. In another mode, the child would try to guess the expression being shown by a parent, or would try to get the parent to display an emotion by describing the associated expression.
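The classification step described above can be sketched in a few lines. This is a minimal, hypothetical stand-in, not the Stanford team's implementation: the eight emotion labels come from the article, while `classify_frame` and the per-class scores are assumptions standing in for a trained model applied to a single camera frame.

```python
# The eight core expressions the article says the app recognizes.
EMOTIONS = ["happiness", "sadness", "anger", "disgust",
            "surprise", "fear", "neutrality", "contempt"]

def classify_frame(scores):
    """Pick the most likely emotion from per-class scores.

    `scores` is a hypothetical stand-in for the output a trained
    classifier would produce for one frame from the glasses'
    forward-facing camera: one confidence value per emotion.
    """
    if len(scores) != len(EMOTIONS):
        raise ValueError("expected one score per emotion")
    # Return the label whose score is highest (argmax).
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EMOTIONS[best]

# Example: a frame where the "happiness" score dominates.
frame_scores = [0.71, 0.02, 0.01, 0.03, 0.10, 0.02, 0.09, 0.02]
print(classify_frame(frame_scores))  # -> happiness
```

A real pipeline would also need face detection and a trained model; the argmax-over-scores step shown here is only the final labeling stage.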
After the 10 weeks, 12 of the 14 families reported that their children had started making significantly more eye contact. Moreover, based on questionnaires completed by the parents before and after the treatment period, the children showed an average decrease of 7.38 points on the SRS-2 autism traits scale, indicating that their symptoms had become less severe. In fact, six of the children moved down one step in their autism severity classification.