
Lama Nachman, Intel Fellow & Director of Anticipatory Computing Lab – Interview Series


Lama Nachman is an Intel Fellow and Director of the Anticipatory Computing Lab. She is best known for her work with Prof. Stephen Hawking, for whom she was instrumental in building the assistive computer system he used to communicate. Today she is helping British roboticist Dr. Peter Scott-Morgan communicate. In 2017, Dr. Scott-Morgan was diagnosed with motor neurone disease (MND), also known as ALS or Lou Gehrig’s disease. MND attacks the brain and nerves and eventually paralyzes all muscles, including those that enable breathing and swallowing.

Dr. Peter Scott-Morgan once stated: “I will continue to evolve, dying as a human, living as a cyborg.”

What attracted you to AI?

I have always been drawn to the idea that technology can be the great equalizer. When developed responsibly, it has the potential to level the playing field, address social inequities and amplify human potential. Nowhere is this truer than with AI. While much of the industry conversation positions the relationship between AI and humans as adversarial, I believe there are unique things machines and people are each good at, so I prefer to view the future through the lens of human-AI collaboration rather than human-AI competition. I lead the Anticipatory Computing Lab at Intel Labs, where, across all our research efforts, we have a singular focus on delivering computing innovation that scales for broad societal impact. Given how pervasive AI already is and its growing footprint in every facet of our lives, I see tremendous promise in the research my team is undertaking to make AI more accessible, more context-aware and more responsible, and ultimately to bring technology solutions at scale to assist people in the real world.

You have worked closely with legendary physicist Prof. Stephen Hawking to create an AI system that assisted him with communicating and with tasks that most of us would consider routine. What were some of these routine tasks?

Working with Prof. Stephen Hawking was the most meaningful and challenging endeavor of my life. It fed my soul and really hit home how profoundly technology can improve people’s lives. He lived with ALS, a degenerative neurological disease that gradually strips away the patient’s ability to perform even the simplest activities. In 2011, we began working with him to explore how to improve the assistive computer system that enabled him to interact with the world. In addition to using his computer for talking to people, Stephen used it the way all of us do: editing documents, surfing the web, giving lectures, reading and writing emails, and so on. Technology enabled Stephen to continue to actively participate in and inspire the world for years after his physical abilities had rapidly diminished. That, to me, is what meaningful impact of technology on somebody’s life looks like!

What are some of the key insights that you took away from working with Prof. Stephen Hawking?

Our computer screen is truly our doorway into the world. If people can control their PC, they can control all aspects of their lives (consuming content, accessing the digital world, controlling their physical environment, navigating their wheelchair, etc.). For people with disabilities who can still speak, advances in speech recognition let them retain full control of their devices (and, to a large degree, their physical environment). However, those who can neither speak nor move are truly limited in how much independence they can exercise. What the experience with Prof. Hawking taught me is that assistive technology platforms need to be tailored to the specific needs of the user. For example, we can’t just assume that a single solution will work for everyone with ALS, because the disease impacts different abilities in different patients. So, we need technologies that can be easily configured and adapted to the individual’s needs. This is why we built ACAT (Assistive Context Aware Toolkit), a modular, open-source software platform that enables developers to innovate and build different capabilities on top of it.
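
As a rough sketch of the kind of modularity Nachman describes (the class and method names below are hypothetical illustrations, not ACAT’s actual API, and ACAT itself is written in C#, not Python), interchangeable input triggers, such as a cheek-muscle switch or gaze dwell, can drive the same scanning keyboard:

```python
# Hypothetical sketch (not ACAT's actual API): interchangeable input
# triggers drive the same row/column scanning keyboard.
from abc import ABC, abstractmethod


class InputTrigger(ABC):
    """Any sensor that can produce a binary 'select' event."""

    @abstractmethod
    def poll(self) -> bool:
        """Return True when the user signals a selection."""


class CheekSwitchTrigger(InputTrigger):
    """Fires when a cheek-muscle sensor crosses its threshold."""

    def __init__(self, sensor, threshold: float = 0.5):
        self.sensor, self.threshold = sensor, threshold

    def poll(self) -> bool:
        return self.sensor.read() > self.threshold


class GazeDwellTrigger(InputTrigger):
    """Fires when the user's gaze dwells on a target long enough."""

    def __init__(self, tracker, dwell_ms: int = 800):
        self.tracker, self.dwell_ms = tracker, dwell_ms

    def poll(self) -> bool:
        return self.tracker.dwell_time_ms() >= self.dwell_ms


class ScriptedTrigger(InputTrigger):
    """Mock trigger for testing: fires on a pre-scripted schedule."""

    def __init__(self, script):
        self.script = iter(script)

    def poll(self) -> bool:
        return next(self.script, False)


def scan_keyboard(trigger: InputTrigger, rows):
    """Pick a row, then a key, using whichever trigger the user can operate."""
    for row in rows:
        if trigger.poll():            # first selection picks a row
            for key in row:
                if trigger.poll():    # second selection picks a key
                    return key
    return None


rows = [list("abc"), list("def")]
print(scan_keyboard(ScriptedTrigger([False, True, False, True]), rows))  # 'e'
```

The point of such a design is that supporting a new physical ability means writing one new trigger class, not rebuilding the keyboard.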

I also learned that it’s important to understand every user’s comfort threshold around giving up control in exchange for more efficiency (and this is not limited to people with disabilities). For example, AI may be capable of taking more control away from the user in order to do a task faster or more efficiently, but every user has a different level of risk aversion. Some are willing to give up more control, while others want to maintain more of it. Understanding those thresholds, and how far people are willing to go, has a big impact on how these systems can be designed. We need to rethink system design in terms of user comfort level rather than only objective measures of efficiency and accuracy.

More recently, you have been working with the famous UK scientist Dr. Peter Scott-Morgan, who has motor neurone disease and the goal of becoming the world’s first full cyborg. What are some of the ambitious goals that Peter has?

One of the issues with AAC (augmentative and alternative communication) is the “silence gap”. Many people with ALS (including Peter) use gaze control to choose letters or words on the screen in order to speak to others. This results in a long silence after the other person finishes their sentence, while the user gazes at their computer and starts formulating the letters and words of a response. Peter wants to reduce this silence gap as much as possible to bring verbal spontaneity back to the conversation. He also wants to preserve his voice and personality, using a text-to-speech system that expresses his unique style of communication (e.g., his quips, his quick-witted sarcasm, his emotions).

British roboticist Dr. Peter Scott-Morgan, who has motor neurone disease, began undergoing a series of operations in 2019 to extend his life using technology. (Credit: Cardiff Productions)

Could you discuss some of the technologies that are currently being used to assist Dr. Peter Scott-Morgan?

Peter is using ACAT (Assistive Context Aware Toolkit), the platform that we built during our work with Prof. Hawking and later released as open source. Unlike Prof. Hawking, who used the muscles in his cheek as an “input trigger” to control the letters on his screen, Peter uses gaze control (a capability we added to ACAT) to speak and to control his PC. ACAT interfaces with a text-to-speech (TTS) solution from a company called CereProc that was customized for him and enables him to express different emotions and emphasis. The system also controls an avatar that was customized for him.

We are currently working on a response generation system for ACAT that will allow Peter to interact with the system at a higher level using AI capabilities. This system will listen to Peter’s conversations over time and suggest responses for him to choose on the screen. The goal is that, over time, the AI system will learn from Peter’s data and enable him to “nudge” it toward the best responses using just a few keywords (similar to how web search works today). Our goal with the response generation system is to reduce the silence gap in communication referenced above and to empower Peter and future users of ACAT to communicate at a pace that feels more “natural.”
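
As a rough illustration of the keyword “nudge” idea (this is not Intel’s actual system; the utterance store and the scoring rule are invented for the example), a first approximation could rank a user’s past utterances by keyword overlap:

```python
# Illustrative sketch: rank past utterances by overlap with the user's
# keyword "nudge". Intel's actual system is not public; this is invented.
from collections import Counter

HISTORY = [
    "Thank you, that is very kind of you.",
    "Could you please adjust my chair?",
    "I would love a cup of tea.",
    "That is a fascinating idea, tell me more.",
]


def suggest(keywords, history=HISTORY, k=3):
    """Return the k stored responses that best match the keyword nudge."""
    nudge = {w.lower() for w in keywords}

    def score(utterance):
        words = Counter(w.strip(".,!?").lower() for w in utterance.split())
        return sum(words[w] for w in nudge)

    return sorted(history, key=score, reverse=True)[:k]


print(suggest(["tea", "cup"], k=1))  # ['I would love a cup of tea.']
```

A production system would of course use a language model adapted to the user’s own data rather than literal keyword matching, but the interaction pattern, a few keywords retrieving full candidate responses, is the same.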

You’ve also spoken about the importance of transparency in AI. How big of an issue is this?

It is a big issue, especially when AI is deployed in decision-making systems or human-AI collaborative systems. For example, in the case of Peter’s assistive system, we need to understand what is causing the system to make its recommendations and how to influence its learning so that it expresses his ideas more accurately.

In the larger context of decision-making systems, whether it is helping with diagnosis based on medical imaging or making recommendations on granting loans, AI systems need to provide human-interpretable information on how they arrived at their decisions: what attributes or features were most impactful on the decision, how much confidence the system has in the inference made, and so on. This increases trust in AI systems and enables better collaboration between humans and AI in mixed decision-making scenarios.
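
For a simple linear scoring model, that kind of human-interpretable report is straightforward to produce. The sketch below (with invented loan features, weights and bias, chosen only to mirror the loan example above) returns the decision together with its confidence and the features that drove it:

```python
# Minimal sketch of interpretable output for a linear loan-scoring model.
# Features, weights and the bias are invented for illustration.
import math

WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "years_employed": 0.5}
BIAS = -0.3


def explain(applicant):
    """Return the decision plus the evidence a human needs to audit it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    confidence = 1 / (1 + math.exp(-score))  # sigmoid -> pseudo-probability
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return {
        "decision": "approve" if confidence >= 0.5 else "decline",
        "confidence": round(confidence, 2),
        "top_factors": ranked,  # most impactful features first
    }


print(explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.4}))
```

Deep models need dedicated attribution techniques to recover the same kind of report, but the contract with the human, decision plus confidence plus ranked evidence, is what matters.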

AI bias, specifically when it comes to racism and sexism, is a huge issue. But how do you identify other types of bias when you have no idea what biases you are looking for?

It is a very hard problem and one that can’t be solved with technology alone.  We need to bring more diversity into the development of AI systems (racial, gender, culture, physical ability, etc.).  This is clearly a huge gap in the population building these AI systems today.  In addition, it is critical to have multi-disciplinary teams engaged in the definition and development of these systems, bringing social science, philosophy, psychology, ethics and policy to the table (not just computer science), and engaging in the inquiry process in the context of the specific projects and problems.

You’ve spoken before about using AI to amplify human potential. What are some areas that show the most promise for this amplification of human potential?

An obvious area is enabling people with disabilities to live more independently, to communicate with loved ones and to continue to create and contribute to society. I see big potential in education: understanding student engagement, personalizing the learning experience to the individual needs and capabilities of each student, empowering teachers with this knowledge and improving learning outcomes. The inequity in education today is so profound, and there is a place for AI to help reduce some of it if we do it right. There are endless opportunities for AI to bring a lot of value by creating human-AI collaborative systems in so many sectors (healthcare, manufacturing, etc.), because what humans and AI bring to the table are very complementary. For this to happen, we need innovation at the intersection of social science, HCI and AI. Robust multi-modal perception, context awareness, learning from limited data, physically situated HCI and interpretability are some of the key challenges we need to focus on to bring this vision to fruition.

You’ve also spoken about how important emotion recognition is to the future of AI. Why should the AI industry focus more on this area of research?

Emotion recognition is a key capability of human/AI systems for multiple reasons.  One aspect is that human emotion offers key human context for any proactive system to understand before it can act.

More importantly, these types of systems need to continue to learn in the wild and adapt based on interactions with users, and while direct feedback is a key signal for learning, indirect signals are very important and they’re free (less work for the user).  For example, a digital assistant can learn a lot from the frustration in a user’s voice and use that as a feedback signal for learning what to do in the future, instead of asking the user for feedback every time.  This information can be used for active learning AI systems to continue to improve over time.
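
A minimal sketch of that implicit-feedback loop is shown below. The frustration detector itself is assumed (here it is just a score in [0, 1]), and the preference-update rule is an invented exponential-average step, not any particular Intel system:

```python
# Sketch: a detected-frustration score becomes a reward that updates the
# assistant's action preferences, so the user never has to be asked.
prefs = {"play_news": 0.5, "play_music": 0.5}


def update(action, frustration, lr=0.2):
    """Map frustration in [0, 1] to reward in [1, -1], nudge the preference."""
    reward = 1.0 - 2.0 * frustration
    prefs[action] += lr * (reward - prefs[action])


update("play_news", frustration=0.9)   # user sounded annoyed
update("play_music", frustration=0.1)  # user sounded content
print(prefs)  # play_news drops, play_music rises
```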

Is there anything else that you would like to share about what you are working on at the Anticipatory Computing Lab or other issues that we have discussed?

When building assistive systems, we really need to think about how to build them responsibly, how to enable people to understand what information is being collected, and how to let them control these systems in a practical way. As AI researchers, we are often fascinated by data and want to have as much of it as possible to improve these systems; however, there is a tradeoff between the type and amount of data we want and the privacy of the user. We really need to limit the data we collect to what is absolutely needed to perform the inference task, make users aware of exactly what data we are collecting, and enable them to tune this tradeoff in meaningful and usable ways.
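
One way to make that tradeoff user-tunable is a tiered collection policy. The sketch below is a hypothetical illustration (the tier names and field sets are invented), where each tier stores only what its inference task strictly needs:

```python
# Hypothetical tiered data-collection policy: each tier stores only the
# fields its inference task strictly needs. Tier contents are invented.
POLICY_TIERS = {
    "minimal":  {"keystrokes"},                    # word prediction only
    "balanced": {"keystrokes", "audio_features"},  # + tone, not content
    "full":     {"keystrokes", "audio_features", "transcripts"},
}


def collect(raw_record: dict, tier: str) -> dict:
    """Drop every field the user's chosen tier does not permit."""
    allowed = POLICY_TIERS[tier]
    return {k: v for k, v in raw_record.items() if k in allowed}


record = {
    "keystrokes": "hello",
    "audio_features": [0.2, 0.7],
    "transcripts": "full conversation text",
}
print(collect(record, "balanced"))  # transcripts are never stored
```

Filtering at the point of collection, rather than after storage, is what makes the guarantee meaningful: data the user has not permitted never exists to be leaked.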

Thank you for the fantastic interview. Readers who wish to learn more about this project should read the article Intel’s Lama Nachman and Peter Scott-Morgan: Two Scientists, One a ‘Human Cyborg’.

Intel’s Anticipatory Computing Lab team that developed Assistive Context-Aware Toolkit includes (from left) Alex Nguyen, Sangita Sharma, Max Pinaroc, Sai Prasad, Lama Nachman and Pete Denman. Not pictured are Bruna Girvent, Saurav Sahay and Shachi Kumar. (Credit: Lama Nachman)

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.