

Elon Musk Demonstrates Neuralink Brain-Computer Interface Device

On August 28, 2020, Elon Musk demonstrated Neuralink's brain-machine interface device. Musk, a co-founder of the company, also announced that the system had received a Breakthrough Device designation from the Food and Drug Administration (FDA), a step toward testing it as an experimental medical device.

The demonstration took place at Neuralink’s headquarters in Fremont, California, and it relied on the help of three pigs. 

The Demonstration

The focal point of the demonstration, which was live-streamed on YouTube, was a pig equipped with the brain-machine interface device. The main pig, named Gertrude, had had the implant for two months at the time of the demo. The device recorded signals from the area of her brain linked to her snout.

Another of the pigs, Joyce, has yet to have the surgery, and she is a healthy and happy pig by all measures. Dorothy, the last of the three, had the device implanted and later removed. Her role was to demonstrate that the surgery is safe and reversible, that the device can be put in or taken out at will, and that the hardware can be upgraded at some point.

Back to the main pig, Gertrude. The demonstration showed that when she smelled and touched things, signals were sent from her snout and recorded by the device. Beeps and dots on a display indicated when neurons were firing; because a large portion of a pig's brain is linked to the snout, it is an extremely sensitive part of the animal's body.

The Device (Image: Neuralink)

Automated Surgical System and Link 0.9 Chip

Neuralink’s automated surgical system, which is responsible for implanting the device into a user’s brain, debuted last year. It will be capable of sewing up to 1,024 electrodes into a person’s brain. The electrodes are extremely thin, at just 5 microns wide. 

The automated surgical system currently goes into the brain’s cortical surface, with the company hoping that it can move deeper in the future. This would allow the device to monitor deep brain functions. 

The device, called the “Link 0.9” chip, is just 23mm x 8mm. It is a sealed unit that is inserted into a small hole, which is created by the automated surgical system in the user’s skull. It can then collect the signals that are picked up by the electrodes.

The device is capable of measuring a patient's temperature, pressure, and movement, which, according to Musk, could help prevent heart attacks or strokes.

The device will be flush with the skull, and data can be transmitted wirelessly. Other features include inductive charging and a full-day battery life, which will allow it to be recharged during sleep. According to Musk, the process to implant the device in a user’s skull will take just under one hour to complete. 

Clinical Trials and AI Symbiosis

The first clinical trials of the device will take place in a small number of patients suffering from spinal cord injuries. Last year, Musk said he hoped human trials would take place in 2020.

One of the eventual goals of the device is to be paired with a second one on a patient’s spine, allowing the patient to regain full motion. 

Beyond medical breakthroughs like the world has never seen, Musk also hopes the device will allow users to achieve "AI symbiosis." The idea, which is being discussed more and more as AI technology evolves, is that the human brain would merge with artificial intelligence.

“Such that the future of the world is controlled by the combined will of the people of Earth — I think that’s obviously gonna be the future we want,” he said during the event.

The company is currently expanding its team to include robotics, electrical, and software engineers in order to continue improving the device and implant procedure. 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.


Lama Nachman, Intel Fellow & Director of Anticipatory Computing Lab – Interview Series

Lama Nachman is an Intel Fellow and Director of the Anticipatory Computing Lab. She is best known for her work with Prof. Stephen Hawking; she was instrumental in building the assistive computer system that enabled him to communicate. Today she is helping British roboticist Dr. Peter Scott-Morgan to communicate. In 2017, Dr. Scott-Morgan received a diagnosis of motor neurone disease (MND), also known as ALS or Lou Gehrig's disease. MND attacks the brain and nerves and eventually paralyzes all muscles, even those that enable breathing and swallowing.

Dr. Peter Scott-Morgan once stated: “I will continue to evolve, dying as a human, living as a cyborg.”

What attracted you to AI?

I have always been drawn to the idea that technology can be the great equalizer. When developed responsibly, it has the potential to level the playing field, address social inequities and amplify human potential. Nowhere is this truer than with AI. While much of the industry conversation around AI and humans positions the relationship between the two as adversarial, I believe that there are unique things machines and people are good at, so I prefer to view the future through the lens of human-AI collaboration rather than human-AI competition. I lead the Anticipatory Computing Lab at Intel Labs, where, across all our research efforts, we have a singular focus on delivering computing innovation that scales for broad societal impact. Given how pervasive AI already is and its growing footprint in every facet of our life, I see tremendous promise in the research my team is undertaking to make AI more accessible, more context-aware and more responsible, and ultimately to bring technology solutions at scale to assist people in the real world.

You have worked closely with legendary physicist Prof. Stephen Hawking to create an AI system that assisted him with communicating and with tasks that most of us would consider routine. What were some of these routine tasks?

Working with Prof. Stephen Hawking was the most meaningful and challenging endeavor of my life. It fed my soul and really hit home how technology can profoundly improve people's lives. He lived with ALS, a degenerative neurological disease that strips away, over time, the patient's ability to perform the simplest of activities. In 2011, we began working with him to explore how to improve the assistive computer system that enabled him to interact with the world. In addition to using his computer for talking to people, Stephen used his computer like all of us do: editing documents, surfing the web, giving lectures, reading and writing emails, etc. Technology enabled Stephen to continue to actively participate in and inspire the world for years after his physical abilities had rapidly diminished. That, to me, is what meaningful impact of technology on somebody's life looks like!

What are some of the key insights that you took away from working with Prof. Stephen Hawking?

Our computer screen is truly our doorway into the world. If people can control their PC, they can control all aspects of their lives (consuming content, accessing the digital world, controlling their physical environment, navigating their wheelchair, etc.). For people with disabilities who can still speak, advances in speech recognition let them have full control of their devices (and, to a large degree, their physical environment). However, those who can't speak and are unable to move are truly limited in how much independence they can exercise. What the experience with Prof. Hawking taught me is that assistive technology platforms need to be tailored to the specific needs of the user. For example, we can't just assume that a single solution will work for everyone with ALS, because the disease impacts different abilities across patients. So we need technologies that can be easily configured and adapted to the individual's needs. This is why we built ACAT (Assistive Context Aware Toolkit), a modular, open-source software platform that enables developers to innovate and build different capabilities on top of it.
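
ACAT has been released as open source, and while the sketch below is not drawn from the actual ACAT codebase, a minimal Python example can illustrate the kind of modularity Nachman describes: the input trigger (a cheek-muscle sensor, a gaze tracker) is swapped per user without changing the rest of the pipeline. All class and method names here are hypothetical.

```python
# Hypothetical sketch of a modular assistive pipeline in the spirit of ACAT.
# None of these names come from the real ACAT codebase; they only illustrate
# how input triggers can be swapped per user without touching downstream logic.

from abc import ABC, abstractmethod


class InputTrigger(ABC):
    """Any signal a user can produce intentionally (cheek muscle, gaze dwell, ...)."""

    @abstractmethod
    def poll(self) -> bool:
        """Return True when the user has issued a selection event."""


class CheekMuscleTrigger(InputTrigger):
    def __init__(self, sensor):
        self.sensor = sensor

    def poll(self) -> bool:
        return self.sensor.read() > 0.5  # threshold on a proximity sensor reading


class GazeDwellTrigger(InputTrigger):
    def __init__(self, tracker, dwell_ms: int = 800):
        self.tracker, self.dwell_ms = tracker, dwell_ms

    def poll(self) -> bool:
        return self.tracker.dwell_time_ms() >= self.dwell_ms


class ScanningKeyboard:
    """Highlights letters in turn; the active trigger confirms a selection."""

    def __init__(self, trigger: InputTrigger, alphabet: str = "ETAOINSHRDLU"):
        self.trigger, self.alphabet = trigger, alphabet

    def next_letter(self):
        for letter in self.alphabet:   # scan through candidate letters
            if self.trigger.poll():    # user confirmed the highlighted letter
                return letter
        return None                    # no selection during this pass


class _FakeSensor:
    def read(self) -> float:
        return 0.9  # pretend the cheek muscle twitched


keyboard = ScanningKeyboard(CheekMuscleTrigger(_FakeSensor()))
print(keyboard.next_letter())  # -> "E"
```

Swapping CheekMuscleTrigger for GazeDwellTrigger is, conceptually, the only change needed to move from a cheek-driven setup like Prof. Hawking's to a gaze-driven one like Dr. Scott-Morgan's.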

I also learned that it's important to understand every user's comfort threshold around giving up control in exchange for more efficiency (this is not limited to people with disabilities). For example, AI may be capable of taking more control away from the user in order to do a task faster or more efficiently, but every user has a different level of risk aversion. Some are willing to give up more control, while other users want to maintain more of it. Understanding those thresholds and how far people are willing to go has a big impact on how these systems can be designed. We need to rethink system design in terms of user comfort level rather than only objective measures of efficiency and accuracy.

More recently, you have been working with the famous UK scientist Dr. Peter Scott-Morgan, who is suffering from motor neurone disease and has the goal of becoming the world's first full cyborg. What are some of the ambitious goals that Peter has?

One of the issues with AAC (augmentative and alternative communication) is the "silence gap." Many people with ALS (including Peter) use gaze control to choose letters and words on the screen to speak to others. This results in a long silence after someone finishes their sentence while the person gazes at their computer and starts formulating the letters and words of their response. Peter wanted to reduce this silence gap as much as possible to bring verbal spontaneity back to the communication. He also wanted to preserve his voice and personality and use a text-to-speech system that expresses his unique style of communication (e.g. his quips, his quick-witted sarcasm, his emotions).

British roboticist Dr. Peter Scott-Morgan, who has motor neurone disease, began undergoing a series of operations in 2019 to extend his life using technology. (Credit: Cardiff Productions)

Could you discuss some of the technologies that are currently being used to assist Dr. Peter Scott-Morgan?

Peter is using ACAT (Assistive Context Aware Toolkit), the platform that we built during our work with Dr. Hawking and later released as open source. Unlike Dr. Hawking, who used the muscles in his cheek as an "input trigger" to control the letters on his screen, Peter uses gaze control (a capability we added to the existing ACAT) to speak and to control his PC. ACAT interfaces with a text-to-speech (TTS) solution from a company called CereProc that was customized for him and enables him to express different emotions and emphasis. The system also controls an avatar that was customized for him.

We are currently working on a response generation system for ACAT that will allow Peter to interact with the system at a higher level using AI capabilities. This system will listen to Peter's conversations over time and suggest responses for Peter to choose on the screen. The goal is that, over time, the AI system will learn from Peter's data and enable him to "nudge" the system toward the best responses using just a few keywords (similar to how searches work on the web today). Our goal with the response generation system is to reduce the silence gap in communication referenced above and empower Peter and future users of ACAT to communicate at a pace that feels more "natural."
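
The interview does not detail how the response generation system works internally, but the "nudge with keywords" idea can be sketched roughly as follows: candidate replies (stood in for here by a hard-coded list, in place of whatever language model produces them) are re-ranked by how well they match the few keywords the user selects. Everything in this snippet is illustrative, not part of ACAT or any published Intel system.

```python
# Illustrative sketch only: re-rank candidate replies by keyword overlap.
# The candidate list stands in for the output of a language model; nothing here
# is taken from ACAT or from Intel's response generation system.

def rank_by_keywords(candidates: list[str], keywords: list[str]) -> list[str]:
    """Order candidate replies so those matching the user's keywords come first."""
    def score(reply: str) -> int:
        return sum(kw.lower() in reply.lower() for kw in keywords)
    return sorted(candidates, key=score, reverse=True)


candidates = [
    "I'd love to, what time?",
    "Sorry, I'm busy tomorrow.",
    "Can we make it next week instead?",
]

# The user nudges the system with a couple of keywords instead of typing a full reply.
print(rank_by_keywords(candidates, keywords=["busy", "tomorrow"]))
# -> ["Sorry, I'm busy tomorrow.", ...]
```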

You've also spoken about the importance of transparency in AI. How big of an issue is this?

It is a big issue, especially when AI is deployed in decision-making systems or human/AI collaborative systems. For example, in the case of Peter's assistive system, we need to understand what is causing the system to make its recommendations and how to influence the system's learning so that it expresses his ideas more accurately.

In the larger context of decision-making systems, whether it is helping with diagnosis based on medical imaging or making recommendations on granting loans, AI systems need to provide human-interpretable information on how they arrived at decisions: what attributes or features were most impactful on that decision, what confidence the system has in the inference made, and so on. This increases trust in AI systems and enables better collaboration between humans and AI in mixed decision-making scenarios.
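
As a small, generic illustration of that kind of information, and not of any Intel system, the sketch below fits a linear model to synthetic loan data and reports both its confidence in a decision and which (made-up) features pushed the decision the most.

```python
# Generic illustration of decision confidence plus per-feature influence.
# The data and feature names are synthetic; a linear model is used only because
# its coefficients give a directly readable notion of what mattered.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.2 * rng.normal(size=200) > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

applicant = np.array([[0.5, 1.2, -0.3]])
confidence = model.predict_proba(applicant)[0, 1]
contributions = model.coef_[0] * applicant[0]  # per-feature push toward approval

print(f"approval confidence: {confidence:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.2f}")
```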

AI bias, specifically when it comes to racism and sexism, is a huge issue, but how do you identify other types of bias when you have no idea what biases you are looking for?

It is a very hard problem and one that can’t be solved with technology alone.  We need to bring more diversity into the development of AI systems (racial, gender, culture, physical ability, etc.).  This is clearly a huge gap in the population building these AI systems today.  In addition, it is critical to have multi-disciplinary teams engaged in the definition and development of these systems, bringing social science, philosophy, psychology, ethics and policy to the table (not just computer science), and engaging in the inquiry process in the context of the specific projects and problems.

You’ve spoken before about using AI to amplify human potential. What are some areas that show the most promise for this amplification of human potential?

An obvious area is enabling people with disabilities to live more independently, to communicate with loved ones and to continue to create and contribute to society. I see big potential in education: in understanding student engagement, personalizing the learning experience to the individual needs and capabilities of the student, empowering teachers with this knowledge and improving learning outcomes. The inequity in education today is so profound, and there is a place for AI to help reduce some of this inequity if we do it right. There are endless opportunities for AI to bring a lot of value by creating human/AI collaborative systems in so many sectors (healthcare, manufacturing, etc.) because what humans and AI bring to the table is very complementary. For this to happen, we need innovation at the intersection of social science, HCI and AI. Robust multi-modal perception, context awareness, learning from limited data, physically situated HCI and interpretability are some of the key challenges that we need to focus on to bring this vision to fruition.

You've also spoken about how important emotion recognition is to the future of AI. Why should the AI industry focus more on this area of research?

Emotion recognition is a key capability of human/AI systems for multiple reasons.  One aspect is that human emotion offers key human context for any proactive system to understand before it can act.

More importantly, these types of systems need to continue to learn in the wild and adapt based on interactions with users, and while direct feedback is a key signal for learning, indirect signals are very important and they’re free (less work for the user).  For example, a digital assistant can learn a lot from the frustration in a user’s voice and use that as a feedback signal for learning what to do in the future, instead of asking the user for feedback every time.  This information can be used for active learning AI systems to continue to improve over time.
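
As a toy illustration of that idea, the sketch below treats detected frustration as an implicit negative reward that down-weights whatever action the assistant just took. The frustration scores and action names are hypothetical stand-ins for a real prosody model and a real assistant.

```python
# Toy sketch: use detected frustration as an implicit (free) feedback signal.
# In practice the frustration score could come from a prosody model run on the
# user's voice; here it is just a number in [0, 1] passed in by hand.

from collections import defaultdict

preferences = defaultdict(float)  # assistant's learned preference per action
LEARNING_RATE = 0.2


def update_from_interaction(action: str, frustration: float) -> None:
    """frustration in [0, 1]: 0 = calm, 1 = very frustrated."""
    implicit_reward = 1.0 - 2.0 * frustration  # map to [-1, +1]
    preferences[action] += LEARNING_RATE * implicit_reward


# The user sounded frustrated after being interrupted mid-sentence, so that
# action's preference drops without any explicit thumbs-down from the user.
update_from_interaction("interrupt_with_suggestion", frustration=0.9)
update_from_interaction("wait_for_pause", frustration=0.1)
print(max(preferences, key=preferences.get))  # -> "wait_for_pause"
```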

Is there anything else that you would like to share about what you are working on at the Anticipatory Computing Lab or other issues that we have discussed?

When building assistive systems, we really need to think about how to build these systems responsibly and how to enable people to understand what information is being collected and how to control these systems in a practical way. As AI researchers, we are often fascinated by data and want to have as much of it as possible to improve these systems; however, there is a tradeoff between the type and amount of data we want and the privacy of the user. We really need to limit the data we collect to what is absolutely needed to perform the inference task, make users aware of exactly what data we are collecting, and enable them to tune this tradeoff in meaningful and usable ways.

Thank you for the fantastic interview. Readers who wish to learn more about this project should read the article Intel's Lama Nachman and Peter Scott-Morgan: Two Scientists, One a 'Human Cyborg'.

Intel’s Anticipatory Computing Lab team that developed Assistive Context-Aware Toolkit includes (from left) Alex Nguyen, Sangita Sharma, Max Pinaroc, Sai Prasad, Lama Nachman and Pete Denman. Not pictured are Bruna Girvent, Saurav Sahay and Shachi Kumar. (Credit: Lama Nachman)



New Approach Could Lead to Thought-Controlled Electronic Prostheses

Current neural implants are capable of recording massive amounts of neural activity, which is then transmitted through wires to a computer. Researchers have attempted to develop wireless brain-computer interfaces to do the same job, but transmitting that much data requires a large amount of power, which generates too much heat and makes the implants unsafe for patients.

Now, a new study out of Stanford aims to resolve this issue. Researchers at the university have long been working on technology that could help paralysis patients regain control of their limbs, specifically technology that would allow these patients to control prostheses and interact with computers using their thoughts.

Brain-Computer Interface

In order to achieve this, the team has focused on improving the brain-computer interface, a device that is implanted on the surface of a patient's brain, just beneath the skull. The implant connects the human nervous system to an electronic device, which could help restore motor control to an individual who has suffered a spinal cord injury or neurological condition.

Current devices record large amounts of neural activity and transmit it through wires to a computer; it is when researchers try to make that link wireless that too much heat is generated.

The team of electrical engineers and neuroscientists, including Krishna Shenoy, PhD, Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, has demonstrated a possible way to build a wireless device that can gather and transmit accurate neural signals while using a tenth of the power required by current systems.

The proposed wireless devices would look more natural than wired ones and would give patients a greater range of motion.

The approach was detailed by graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, in a paper published in Nature Biomedical Engineering.

Isolating Neural Signals

The neuroscientists identified the specific neural signals needed to control a prosthetic device, which could be anything from a robotic arm to a computer cursor.

The electrical engineers then created the circuitry for a wireless brain-computer interface able to process and transmit only those identified neural signals. Isolating the signals reduced the power required, making the device safe to implant on the surface of the brain.

The team tested the approach using previously collected neuronal data from three nonhuman primates and one human clinical-trial participant. As the subjects performed movement tasks, such as positioning a cursor on a computer screen, measurements were recorded, and the team determined that recording only a subset of action-specific brain signals was enough for a wireless interface to control an individual's motion.
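
The published pipeline is more involved than this, but the core idea reported here, decoding movement from a small subset of action-specific signals rather than streaming all raw activity, can be sketched as follows. The channel counts, synthetic data, and ridge-regression decoder are illustrative assumptions, not details taken from the Nature Biomedical Engineering paper.

```python
# Illustrative sketch: decode 2-D cursor velocity from a small subset of channels
# instead of transmitting all raw neural data. Numbers, data, and the decoder are
# made up; they are not the pipeline from the published study.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_channels, n_selected = 2000, 96, 10

spike_band_power = rng.normal(size=(n_samples, n_channels))  # stand-in neural features
cursor_velocity = spike_band_power[:, :2] + 0.1 * rng.normal(size=(n_samples, 2))

# Pick the channels most correlated with movement; only these would need to be
# processed and transmitted, cutting power on the implant side.
corr = np.abs(np.corrcoef(spike_band_power.T, cursor_velocity.T)[:n_channels, n_channels:])
selected = np.argsort(corr.max(axis=1))[-n_selected:]

decoder = Ridge().fit(spike_band_power[:, selected], cursor_velocity)
print("decoding R^2 with", n_selected, "of", n_channels, "channels:",
      round(decoder.score(spike_band_power[:, selected], cursor_velocity), 3))
```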

The main factor separating this device from wired ones is that isolation: wired devices collect brain signals in bulk rather than targeting a task-relevant subset.

The team of researchers will now construct an implant based on the new approach and design.



Artificial Intelligence Used to Analyze Opinions Through Brain Activity

Researchers from the University of Helsinki have developed a new technique that utilizes artificial intelligence (AI) and the brain activity of groups of people in order to analyze opinions and draw conclusions. The researchers termed the technique “brainsourcing,” and it can help classify images or recommend content. 

What is Crowdsourcing?

Crowdsourcing is used whenever a complex task needs to be broken up into smaller, more manageable ones, which are then distributed to large groups of people who solve the problems individually. An example would be asking people whether an object appears in an image and then using the responses to train an image recognition system. Today's top AI-based image recognition systems are still not fully automated; because of this, the opinions of several people on the content of many sample images must be used as training data.
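
A minimal sketch of that aggregation step, in which several people's answers about an image are combined into a single training label by majority vote, might look like the following (the labels and file names are purely illustrative).

```python
# Minimal illustration of the crowdsourcing step described above: several people
# answer whether a cat appears in each image, and the majority answer becomes
# the training label for that image.

from collections import Counter


def aggregate_labels(votes_per_image: dict[str, list[str]]) -> dict[str, str]:
    """Majority vote per image; ties go to whichever label was counted first."""
    return {img: Counter(votes).most_common(1)[0][0]
            for img, votes in votes_per_image.items()}


votes = {
    "img_001.jpg": ["cat", "cat", "no_cat"],
    "img_002.jpg": ["no_cat", "no_cat", "no_cat"],
}
print(aggregate_labels(votes))  # {'img_001.jpg': 'cat', 'img_002.jpg': 'no_cat'}
```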

The researchers wanted to implement crowdsourcing by analyzing individuals' electroencephalograms (EEGs) with AI techniques. This allows the relevant information to be extracted directly from the EEG instead of people having to state their opinions.

Tuukka Ruotsalo is an Academy Research Fellow from the University of Helsinki. 

“We wanted to investigate whether crowdsourcing can be applied to image recognition by utilising the natural reactions of people without them having to carry out any manual tasks with a keyboard or mouse,” says Ruotsalo.

The Study

The study involved 30 volunteers who were shown images of human faces on a computer display. The participants labeled the faces in their minds based on what was in the images, such as whether the person was blond or dark-haired, or smiling or not. The big difference from conventional crowdsourcing tasks was that the participants did not need to take any action beyond observing the images presented to them.

Electroencephalography was then used to collect the brain activity of each participant, and the AI algorithm used this to learn to recognize images relevant to the task, like when an image of a person with certain features appeared on-screen.
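
The study's exact pipeline is not described here, but the core classification step, mapping a short EEG epoch recorded after each image to "relevant" or "not relevant", can be sketched roughly as below. The data are synthetic, and the choice of an LDA classifier is a common assumption for this kind of EEG work, not a detail from the paper.

```python
# Rough sketch of the brainsourcing classification step: one EEG epoch per shown
# image, labeled relevant / not relevant. The data are synthetic, and LDA is only
# a common choice for ERP-style EEG classification, not necessarily what was used.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_epochs, n_eeg_channels, n_timepoints = 300, 32, 64

epochs = rng.normal(size=(n_epochs, n_eeg_channels, n_timepoints))
relevant = rng.integers(0, 2, size=n_epochs)   # did the image match the mental label?
epochs[relevant == 1, :, 30:40] += 0.5         # crude stand-in for an evoked response

X = epochs.reshape(n_epochs, -1)               # flatten channels x time into features
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, relevant, cv=5).mean().round(2))
```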

The researchers found that the computer was capable of interpreting these mental labels directly from the EEG, and that brainsourcing can be used in recognition tasks.

As for the future of this technique, student and research assistant Keith Davis says, “Our approach is limited by the technology available.”

“Current methods to measure brain activity are adequate for controlled setups in a laboratory, but the technology needs to improve for everyday use. Additionally, these methods only capture a very small percentage of total brain activity. As brain imaging technologies improve, it may become possible to capture preference information directly from the brain. Instead of using conventional ratings or like buttons, you could simply listen to a song or watch a show, and your brain activity alone would be enough to determine your response to it.”

The results could be used in interfaces that combine brain and computer activity, such as those relying on lightweight, wearable EEG equipment; lightweight wearables capable of measuring EEG are currently under development.

This type of technology allows AI to extract valuable information with very little effort on the human's part. As it continues to improve, this trend can be expected to continue, and in many cases active participation from the individual will become unnecessary.
