Kristy Johnson: MIT Media Lab Statement of Purpose
Occasionally, we experience moments in life that emerge like pins on a paper map. These moments transform our two-dimensional life into one with structure and character, defining both our past and our future.
One such moment was the day we received the diagnosis that our son had a rare genetic disorder. His birth had been transformative in all the usual ways, but his diagnosis was the paradigm shift: the day I realized I would spend the rest of my life searching for ways to help him and exceptional individuals like him.
Prior to his birth, I had been on a traditional academic path, working toward a Ph.D. in physics. My specialty was nonlinear dynamics – complex systems with unpredictable and often counterintuitive outcomes. I sought to apply this mentality and training to enhance my son’s development, but I became increasingly dismayed with the limited resources available for individuals with severe cognitive disabilities. In an age of bionic legs and optogenetics, crowd-sourced neural maps, and finger rings that can read text to the blind, why do we not yet have dynamic new programs to overcome cognitive deficits?
The world needs a radical new approach to cognitive augmentation and an innovative answer to cognitive disabilities. To this end, I propose a broad initiative for cognitive neurotechnology. I do not mean multiplication tables on a tablet or computer games to enliven rote learning; I am seeking inventive, expansive, revolutionary technology that seamlessly exploits the intersection of the digital and physical worlds. I envision a new educational platform, anchored in work pioneered at the Media Lab, that employs affect to interactively teach skills and fluidly respond to subtle physical cues while dynamically tailoring the learning activities to maximize motivation. By first refining this approach for individuals with significant cognitive challenges, it is my hope that this dynamic and explorative neurotechnology will usher in a new era of interactive learning for every person, with any ability.
With my background in experimental physics, several years spent studying cognitive neuroscience and developmental disorders, and the relentless ambition that perhaps only a parent can possess, I want to ignite this change. The spark will involve uniting three cutting-edge disciplines: interactive cognitive enhancement, open-source digital-physical interfaces, and wearable affective technology. This statement outlines some fundamental principles and potential ideas derived from this unification that I hope to pursue during graduate studies at the Media Lab.
Motivation-Driven Learning
Motivation drives discovery and learning in nearly every sector of society. We are all stimulated to learn by something, and for the past three years, I have studied the role of motivation, particularly for individuals with atypical neurology, such as persons with Autism Spectrum Disorder (ASD) and other developmental disorders. For neurotypical individuals, this “something” is often a desire to please – a parent, a teacher, themselves – and is generally inherent. However, for many children with neuro-differences, intrinsic motivation is nonexistent, or insufficient to overcome the environmental distractions and substantial motor challenges that stand between them and learning or completing a task.
Yet, children with ASD and other developmental disorders often show intense, specific affinities for particular items or topics that can be leveraged to teach skills or ideas. In fact, well-established therapy techniques, such as Applied Behavior Analysis (ABA), pair empirical analysis of a child’s behaviors with highly-motivating personal rewards to achieve ambitious therapeutic goals. In contrast, therapies or games that show negligible efficacy in randomized trials often lack customized motivation. Not every child will drill phonics or trace letters for hours just because they internally wish to improve their speech or literacy, especially if the physical act of doing so is onerous or overwhelming.
In short, motivation must be highly individualized, immediate, and adaptive. Previous projects of the Affective Computing group, such as the auditory desensitization games developed by Rob Morris and the speech enhancement games developed by Ehsan Hoque, have already taken strides to develop a fresh and personalized approach to motivation-driven learning. Below, I outline some possible extensions through interactive games, toys, and devices that could begin to elevate these ideas from their nascent implementations to a fully-integrated educational and therapeutic platform.
Affective Data + HCI Devices = New Cognitive Neurotech
A second facet of my proposed work is to incorporate affective feedback into human-computer interaction (HCI) devices. By integrating the work of Drs. Pattie Maes and Roz Picard, we can create new neurotechnology that responds to a user’s current physiological state, produces customized real-time learning strategies, and provides tailored positive reinforcement for specific tasks. In past research, Picard’s group has demonstrated that skin conductance (hereafter, electrodermal activity or EDA) is a clinically reliable indicator of the body’s physiological arousal, often providing insight into a user's internal state that is otherwise unapparent, even to trained observers. Through my work with Dr. Matthew Goodwin (a former Media Lab post-doc) and the EDA sensors initially developed by Dr. Picard's group and subsequent company (Affectiva’s Q Sensor), I have developed an intimate understanding of skin conductance data and its potential for affect-based technology.
One straightforward example of synergistic EDA-HCI technology would be to program a physical device to indicate the user's state of arousal while simultaneously providing personalized positive reinforcement (e.g., lights, music, or a coveted video clip) for achieving desired arousal states. A more ambitious application of EDA-HCI would be to incorporate arousal information into predictive language capabilities on an individual’s augmentative and alternative communication (AAC) device. These devices enable non-verbal individuals to speak electronically. Coupled with an EDA sensor and predictive algorithms, the AAC could anticipate a user's needs and provide emotionally contextual suggestions. For advanced learners, changing levels of arousal might prompt the user with a question about their current emotions and provide affect-appropriate vocabulary. When paired with the unique expertise and resources of the Media Lab, my experience with biosensors like the Q, AAC equipment, and device fabrication and development could provide significant momentum and direction to these initiatives.
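The first device described above reduces to a simple control loop: smooth the incoming EDA signal, check it against a target arousal band, and fire the reinforcement when the user reaches that band. A minimal sketch of that logic follows; the target band, smoothing window, and reward callback are illustrative placeholders, not calibrated clinical values or an actual sensor API.

```python
# Minimal sketch: threshold-triggered positive reinforcement from EDA samples.
# TARGET_LOW/TARGET_HIGH and WINDOW are hypothetical, illustrative values.
from collections import deque

TARGET_LOW, TARGET_HIGH = 0.2, 0.6   # desired arousal band (microsiemens)
WINDOW = 5                           # number of readings to smooth over

def make_monitor(reward_fn, window=WINDOW):
    """Return a callable that consumes EDA samples and invokes reward_fn
    (e.g., lights, music, or a favorite video clip) whenever the smoothed
    signal sits inside the target band."""
    recent = deque(maxlen=window)

    def on_sample(eda_microsiemens):
        recent.append(eda_microsiemens)
        smoothed = sum(recent) / len(recent)   # moving average over the window
        in_band = TARGET_LOW <= smoothed <= TARGET_HIGH
        if in_band:
            reward_fn()
        return smoothed, in_band

    return on_sample
```

In a real device, the smoothing and band would be tuned per individual, which is precisely where the personalized-motivation principle enters: the reward, not just the threshold, is customized to the user.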
Adaptive HCI: Educational Toys
Therapeutic toys provide another exciting application of HCI to teach and cultivate skills. Indeed, current mass-market toys often need only minor – but specific – modifications to make them functional and educational for individuals with special needs. For instance, a classic shape sorter may hold no innate interest for some children – it requires too much mental effort and advanced fine motor skills to manipulate uninteresting wooden shapes into an unresponsive box. Commercial shape sorters with electronic rewards (e.g., lights or music) are designed in ways that make “cheating” easier than proper use: a child can often trigger the reward without ever sorting a shape.
To overcome these limitations, I have been designing and building Arduino-based toys with individualized, highly motivating rewards triggered via embedded electronics. For example, in my adaptive shape sorter, properly placing the shapes activates a dancing light show with an optional LCD screen and speakers to enable customized rewards. These toys are merely a precursor to fully affect-integrated systems, but their deceptively simple implementation could be used to teach colors, numbers, letters, sequencing – nearly anything. My online portfolio explores additional applications of this motivation-driven neurotechnology.
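The reward logic of the adaptive shape sorter can be sketched in a few lines. On the physical toy this runs as an Arduino loop polling the slot sensors; the version below models it in plain Python, and the slot names and reward routine are illustrative rather than the actual firmware.

```python
# Sketch of the adaptive shape sorter's reward logic. On the toy itself,
# place() would be driven by embedded sensors in each slot; here it is
# called directly so the logic can be exercised in simulation.
class ShapeSorter:
    def __init__(self, slots):
        self.placed = {slot: False for slot in slots}
        self.rewards_fired = 0

    def place(self, slot):
        """Called when a slot's sensor detects a correctly placed shape."""
        if slot not in self.placed:
            return False  # wrong hole: no reward, so "cheating" never pays
        self.placed[slot] = True
        if all(self.placed.values()):
            self.fire_reward()
            self.placed = {s: False for s in self.placed}  # reset for next round
        return True

    def fire_reward(self):
        # In hardware: drive the dancing light show, plus the optional
        # LCD screen and speakers for customized rewards.
        self.rewards_fired += 1
```

Because the reward fires only when every slot reports a correct placement, the toy rewards the target skill itself rather than incidental button-pressing, and swapping `fire_reward` changes the motivator without touching the teaching logic.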
In sum, by coupling affective data with HCI physical devices, we can foster development, expression, and cognitive augmentation. My son already presents a striking demonstration of the potential efficacy: They said he would never walk, but he now happily explores obstacle courses in pursuit of a highly-motivating toy. They said he would never talk, but he uses an AAC device to request favorite songs and activities. He is an example of the hope and promise for the special-needs community, and by drawing our inspiration from children with the most rare or marked special needs, we can build foundational new neurotechnology that transforms the potential of every individual.