
AI Brings New Potential for Prosthetics with 3D-Printed Hand


A new 3D-printed prosthetic hand paired with AI has been developed by the Biological Systems Engineering Lab at Hiroshima University in Japan. The technology could dramatically change the way prosthetics work, and it is another step toward combining the physical human body with artificial intelligence, a direction we are most definitely heading in.

The 3D-printed prosthetic hand has been paired with a computer interface to create the lightest and cheapest model yet, and the most responsive to motion intent so far. Earlier prosthetic hands were typically made from metal, which made them both heavier and more expensive. The new technology works through a neural network trained to recognize certain combined signals, which the engineers working on the project have named “muscle synergies.”
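The article does not include any code, but a minimal sketch may help illustrate the idea of training a small neural network on combined muscle signals. Everything below, from the number of electrodes to the feature set and motion labels, is an assumption for illustration and is not taken from the Hiroshima University system:

```python
# Illustrative sketch only: a tiny classifier that maps windows of surface-EMG
# features (the combined "muscle synergy" signals) to intended hand motions.
# Channel count, features, labels, and network size are assumptions, not
# details published by the Hiroshima University team.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_CHANNELS = 8          # assumed number of surface electrodes
MOTIONS = ["rest", "grasp", "pinch", "point", "open"]   # assumed motion labels

rng = np.random.default_rng(0)

def extract_features(emg_window: np.ndarray) -> np.ndarray:
    """Reduce one (samples x channels) EMG window to a flat feature vector."""
    mav = np.mean(np.abs(emg_window), axis=0)                        # mean absolute value
    rms = np.sqrt(np.mean(emg_window ** 2, axis=0))                  # root mean square
    zc = np.mean(np.diff(np.sign(emg_window), axis=0) != 0, axis=0)  # zero-crossing rate
    wl = np.sum(np.abs(np.diff(emg_window, axis=0)), axis=0)         # waveform length
    return np.concatenate([mav, rms, zc, wl])

# Random data standing in for recorded, labelled EMG windows.
X = np.stack([extract_features(rng.normal(size=(200, N_CHANNELS))) for _ in range(500)])
y = rng.integers(0, len(MOTIONS), size=500)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)

new_window = rng.normal(size=(200, N_CHANNELS))
print("predicted motion:", MOTIONS[clf.predict([extract_features(new_window)])[0]])
```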

The prosthetic hand has five independent fingers that can make complex movements. Compared to previous models, the fingers have a greater range of motion and can all move at the same time. These developments make it possible to use the hand for tasks like holding items such as bottles and pens. Whenever the user wants to move the hand or fingers in a certain way, they only have to imagine the motion. Professor Toshio Tsuji of the Graduate School of Engineering at Hiroshima University explained how a user moves the 3D-printed hand.

“The patient just thinks about the motion of the hand and then the robot automatically moves. The robot is like a part of his body. You can control the robot as you want. We will combine the human body and machine like one living body.”

The 3D-printed hand works by using electrodes in the prosthetic to measure the electrical signals that travel from the nerves through the skin, much as an ECG measures heart activity. The measured signals are sent to a computer within five milliseconds, at which point the computer recognizes the desired movement and sends the corresponding command back to the hand.
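A rough sketch of that sense-classify-actuate loop, treating the five-millisecond round trip as a per-cycle budget, might look like the following. The function names are placeholders standing in for the real acquisition hardware, trained network, and prosthesis driver, not functions from the actual system:

```python
# Sketch of the sense -> classify -> actuate loop described in the article.
# read_emg_window(), classify_intent(), and send_hand_command() are placeholders.
import time

CYCLE_BUDGET_S = 0.005  # the roughly 5 ms round trip mentioned in the article

def control_loop(read_emg_window, classify_intent, send_hand_command):
    while True:
        start = time.perf_counter()
        window = read_emg_window()          # electrical signals picked up through the skin
        motion = classify_intent(window)    # neural network recognizes the desired movement
        send_hand_command(motion)           # command is sent back to the prosthetic hand
        elapsed = time.perf_counter() - start
        if elapsed > CYCLE_BUDGET_S:
            print(f"warning: cycle took {elapsed * 1000:.2f} ms, over the 5 ms budget")
```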

A neural network named the Cybernetic Interface helps the computer learn the different complex movements. It can differentiate between the five fingers so that each can move individually. Professor Tsuji also spoke on this aspect of the new technology.

“This is one of the distinctive features of this project. The machine can learn simple basic motions and then combine and then produce complicated motions.”
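As a purely illustrative sketch of that idea of composing basic motions, a complex gesture can be thought of as a named combination of one simple motion per finger. The gesture table below is invented; the real Cybernetic Interface learns such combinations from data rather than from a hand-written table:

```python
# Illustration of composing per-finger basic motions into a complex gesture.
# The finger names and gesture definitions are invented for this sketch.
FINGERS = ["thumb", "index", "middle", "ring", "little"]

GESTURES = {
    "hold_bottle": {f: "flex" for f in FINGERS},                      # whole-hand grasp
    "hold_pen":    {"thumb": "flex", "index": "flex", "middle": "flex",
                    "ring": "extend", "little": "extend"},            # tripod-style grip
}

def finger_commands(gesture: str) -> dict:
    """Expand a named complex motion into one basic motion per finger."""
    return GESTURES[gesture]

print(finger_commands("hold_pen"))
```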

The technology was tested with seven people, one of whom was an amputee who had been wearing a prosthesis for 17 years. The participants performed daily tasks and achieved a 95% accuracy rate for single simple motions and a 93% rate for complex movements. The prosthetics used in this test were trained on only five movements per finger; many more complex movements could be supported in the future. With just these five trained movements, the amputee participant was able to pick up and put down items like bottles and notebooks.

There are numerous possibilities for this technology. It could lower costs while providing highly functional prosthetic hands to amputees. Some problems remain, such as muscle fatigue and the software’s limited ability to recognize a large number of complex movements.

This work was completed by the Hiroshima University Biological Systems Engineering Lab along with patients from the Robot Rehabilitation Center at the Hyogo Institute of Assistive Technology in Kobe. The company Kinki Gishi created the socket that was fitted to the arm of the amputee patient.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.


Computer Uses Human Brain Signals to Model Visual Perception


In a first-of-its-kind study, researchers at the University of Helsinki have demonstrated a new technique in which a computer monitors human brain signals in order to model visual perception. In other words, the computer attempts to recreate what a human is picturing in their head. The newly developed technique allows the computer to produce entirely new information, including fictional images that did not exist before.

The new study was published in September in Scientific Reports, an open-access online journal covering multiple disciplines.

The researchers based the technique on a novel brain-computer interface. Traditionally, such interfaces only allow one-way communication from the brain to the computer, resulting in, for example, letters being spelled out or a cursor being moved.

The work was the first to demonstrate the computer’s presentation of information and the brain signals being modelled at the same time using artificial intelligence (AI) methods. Human brain responses and a generative neural network interacted to generate images that represented the visual characteristics the participants were focusing on.

Neuroadaptive Generative Modeling

The method is called neuroadaptive generative modeling, and its effectiveness was tested with 31 participants. The participants were shown hundreds of AI-generated images of a diverse range of people, and their EEG was recorded while they viewed the images.

The participants were told to focus on certain features in the images, such as distinct faces and expressions. They were then rapidly presented with a series of face images while their EEG signals were fed to a neural network, which inferred whether the brain registered an image as matching what the participant was focusing on.

Using this data, the neural network was able to estimate what kind of faces the participants were thinking of, and the resulting computer-generated images were evaluated by the participants. The generated images closely matched what the participants were focusing on, with the experiment reaching an accuracy rate of 83%.
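A highly simplified sketch of that closed loop, assuming a generative network with a latent code and a per-participant EEG classifier (both stand-ins here, not the study’s actual models), might look like this:

```python
# Conceptual sketch of neuroadaptive generative modelling:
# 1) show generated images, 2) classify each EEG response as "matches intent" or not,
# 3) pull the generator's latent estimate toward the matching images, 4) repeat.
# generate_image(), record_eeg(), and eeg_matches_intent() are placeholders for the
# generative network, EEG acquisition, and per-participant brain-response classifier.
import numpy as np

LATENT_DIM = 512   # assumed size of the generator's latent space

def neuroadaptive_loop(generate_image, record_eeg, eeg_matches_intent,
                       n_rounds=10, images_per_round=40, rng=None):
    rng = rng or np.random.default_rng(0)
    estimate = np.zeros(LATENT_DIM)                     # current guess of the intended face
    for _ in range(n_rounds):
        latents = rng.normal(size=(images_per_round, LATENT_DIM)) + estimate
        relevant = []
        for z in latents:
            image = generate_image(z)                   # rapidly presented face image
            if eeg_matches_intent(record_eeg(image)):   # brain response flags a match
                relevant.append(z)
        if relevant:                                    # move estimate toward attended images
            estimate = np.mean(relevant, axis=0)
    return generate_image(estimate)                     # final image matching the intention
```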

Tuukka Ruotsalo is an Academy of Finland Research Fellow at the University of Helsinki, Finland, as well as an Associate Professor at the University of Copenhagen, Denmark.

“The technique combines natural human responses with the computer’s ability to create new information. In the experiment, the participants were only asked to look at the computer-generated images. The computer, in turn, modelled the images displayed and the human reaction toward the images by using human brain responses. From this, the computer can create an entirely new image that matches the user’s intention,” says Ruotsalo.

Other Potential Benefits

Besides generating images of the human face, this new study demonstrated how computers could augment human creativity.

“If you want to draw or illustrate something but are unable to do so, the computer may help you to achieve your goal. It could just observe the focus of attention and predict what you would like to create,” Ruotsalo says. However, the researchers believe that the technique may be used to gain understanding of perception and the underlying processes in our mind.

“The technique does not recognise thoughts but rather responds to the associations we have with mental categories. Thus, while we are not able to find out the identity of a specific ‘old person’ a participant was thinking of, we may gain an understanding of what they associate with old age. We, therefore, believe it may provide a new way of gaining insight into social, cognitive and emotional processes,” says Senior Researcher Michiel Spapé.

Spapé also believes that these results could be used within psychology.

“One person’s idea of an elderly person may be very different from another’s. We are currently uncovering whether our technique might expose unconscious associations, for example by looking if the computer always renders old people as, say, smiling men.”

 



Elon Musk Demonstrates Neuralink Brain-Computer Interface Device


Image: Neuralink

On August 28, Elon Musk demonstrated Neuralink’s brain-machine interface device. Musk, who is a co-founder of the company, also announced that the system has received a Breakthrough Device designation from the Food and Drug Administration (FDA).

The demonstration took place at Neuralink’s headquarters in Fremont, California, and it relied on the help of three pigs. 

The Demonstration

The focal point of the demonstration, which was live streamed on YouTube, was a pig equipped with the brain-machine interface device. The main pig, named Gertrude, had had the implant for two months at the time of the demo. The device recorded signals from the area of her brain linked to her snout.

Another of the pigs, Joyce, had yet to have the surgery, and she was a healthy and happy pig by all measures. Dorothy, the last of the three, had the surgery to implant the device, but it was later removed to demonstrate the procedure’s safety. Dorothy’s role was to show that the device can be put in or taken out at will, which also allows the hardware to be upgraded at some point.

Back to the main pig, Gertrude. The demonstration showed that when she smelled and touched things, signals were sent from her snout and recorded by the device, with sounds and dots indicating when neurons were firing. Because a large portion of a pig’s brain is linked to the snout, it is an extremely sensitive part of the animal’s body.

The Device (Image: Neuralink)

Automated Surgical System and Link 0.9 Chip

Neuralink’s automated surgical system, which is responsible for implanting the device into a user’s brain, debuted last year. It will be capable of sewing up to 1,024 electrodes into a person’s brain. The electrodes are extremely thin, at just 5 microns wide. 

The automated surgical system currently goes into the brain’s cortical surface, with the company hoping that it can move deeper in the future. This would allow the device to monitor deep brain functions. 

The device, called the “Link 0.9” chip, measures just 23 mm x 8 mm. It is a sealed unit that is inserted into a small hole created in the user’s skull by the automated surgical system, where it collects the signals picked up by the electrodes.

The device is capable of measuring a patient’s temperature, pressure, and movement, which, according to Musk, can help prevent heart attacks or strokes.

The device will be flush with the skull, and data can be transmitted wirelessly. Other features include inductive charging and a full-day battery life, which will allow it to be recharged during sleep. According to Musk, the process to implant the device in a user’s skull will take just under one hour to complete. 
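Neuralink has not published the device’s data format, so the following is only a hypothetical illustration of how one telemetry frame covering 1,024 channels plus the auxiliary sensors might be represented; every field name and size here is an assumption:

```python
# Hypothetical layout for one telemetry frame from a 1,024-channel implant.
# Field names and sizes are illustrative assumptions, not Neuralink's actual format.
from dataclasses import dataclass, field
from typing import List

N_CHANNELS = 1024   # electrodes threaded by the automated surgical system

@dataclass
class TelemetryFrame:
    timestamp_us: int                                                       # sample time, microseconds
    spikes: List[int] = field(default_factory=lambda: [0] * N_CHANNELS)     # per-channel spike counts
    temperature_c: float = 37.0                                             # auxiliary sensors mentioned
    pressure_kpa: float = 0.0                                               # in the article
    motion_g: float = 0.0

    def size_bytes(self) -> int:
        """Rough payload size if each spike count fits in one byte (an assumption)."""
        return 8 + N_CHANNELS + 4 + 4 + 4

frame = TelemetryFrame(timestamp_us=0)
print(frame.size_bytes())
```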

Clinical Trials and AI Symbiosis

The first clinical trials of the device will take place in a small number of patients suffering from spinal cord injuries. Musk said last year that he had hoped for human trials to take place in 2020.

One of the eventual goals of the device is to be paired with a second one on a patient’s spine, allowing the patient to regain full motion. 

Besides medical breakthroughs like the world has never seen, Musk also hopes for the device to allow users to achieve “AI symbiosis.” This idea is one that is being increasingly talked about with the evolution of AI technology, and it is based on the human brain merging with artificial intelligence. 

“Such that the future of the world is controlled by the combined will of the people of Earth — I think that’s obviously gonna be the future we want,” he said during the event.

The company is currently expanding its team to include robotics, electrical, and software engineers in order to continue improving the device and implant procedure. 




Lama Nachman, Intel Fellow & Director of Anticipatory Computing Lab – Interview Series



Lama Nachman is an Intel Fellow and Director of the Anticipatory Computing Lab. She is best known for her work with Prof. Stephen Hawking, for whom she was instrumental in building the assistive computer system that enabled him to communicate. Today she is helping British roboticist Dr. Peter Scott-Morgan to communicate. In 2017, Dr. Peter Scott-Morgan received a diagnosis of motor neurone disease (MND), also known as ALS or Lou Gehrig’s disease. MND attacks the brain and nerves and eventually paralyzes all muscles, even those that enable breathing and swallowing.

Dr. Peter Scott-Morgan once stated: “I will continue to evolve, dying as a human, living as a cyborg.”

What attracted you to AI?

I have always been drawn to the idea that technology can be the great equalizer. When developed responsibly, it has the potential to level the playing field, address social inequities and amplify human potential. Nowhere is this truer than with AI. While much of the industry conversation around AI and humans positions the relationship between the two as adversarial, I believe that there are unique things machines and people are good at, so I prefer to view the future through the lens of Human-AI collaboration rather than human-AI competition.  I lead the Anticipatory Computing Lab at Intel Labs where—across all our research efforts—we have a singular focus on delivering computing innovation that scales for broad societal impact. Given how pervasive AI already is and its growing footprint in every facet of our life, I see tremendous promise in the research my team is undertaking to make AI more accessible, more context-aware, more responsible and ultimately bringing technology solutions at scale to assist people in the real world.

You have worked closely with legendary physicist Prof. Stephen Hawking to create an AI system that assisted him with communicating and with tasks that most of us would consider routine. What were some of these routine tasks?

Working with Prof. Stephen Hawking was the most meaningful and challenging endeavor of my life. It fed my soul and really hit home how technology can profoundly improve people’s lives. He lived with ALS, a degenerative neurological disease that strips away, over time, the patient’s ability to perform the simplest of activities. In 2011, we began working with him to explore how to improve the assistive computer system that enabled him to interact with the world. In addition to using his computer for talking to people, Stephen used his computer like all of us do, editing documents, surfing the web, giving lectures, reading/writing emails, etc. Technology enabled Stephen to continue to actively participate in and inspire the world for years after his physical abilities diminished rapidly. That—to me—is what meaningful impact of technology on somebody’s life looks like!

What are some of the key insights that you took away from working with Prof. Stephen Hawking?

Our computer screen is truly our doorway into the world. If people can control their PC, they can control all aspects of their lives (consuming content, accessing the digital world, controlling their physical environment, navigating their wheelchair, etc). For people with disabilities who can still speak, advances in speech recognition let them have full control of their devices (and to a large degree, their physical environment). However, those who can’t speak and are unable to move are truly impaired in not being able to exercise much independence. What the experience with Prof. Hawking taught me is that assistive technology platforms need to be tailored to the specific needs of the user. For example, we can’t just assume that a single solution will work for people with ALS, because the disease impacts different abilities across patients. So, we need technologies that can be easily configured and adapted to the individual’s needs. This is why we built ACAT (Assistive Context Aware Toolkit), a modular, open-source software platform that can enable developers to innovate and build different capabilities on top of it.

I also learned that it’s important to understand every user’s comfort threshold around giving up control in exchange for more efficiency (this is not limited to people with disabilities). For example, AI may be capable of taking away more control from the user in order to do a task faster or more efficiently, but every user has a different level of risk averseness. Some are willing to give up more control, while other users want to maintain more of it. Understanding those thresholds and how far people are willing to go has a big impact on how these systems can be designed. We need to rethink system design in terms of user comfort level rather than only objective measures of efficiency and accuracy.
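ACAT itself is an open-source toolkit, but the snippet below is not its actual API. It is only a hypothetical Python illustration of the kind of swappable “input trigger” abstraction Nachman describes, where a cheek switch, gaze dwell, or any other sensor can drive the same scanning-and-selection interface; every class and threshold is invented:

```python
# Hypothetical illustration only: a swappable "input trigger" abstraction of the kind
# described in the interview. This is NOT ACAT's actual API; names are invented.
from abc import ABC, abstractmethod

class InputTrigger(ABC):
    """Anything that can signal 'select the currently highlighted item'."""
    @abstractmethod
    def triggered(self) -> bool: ...

class CheekSwitchTrigger(InputTrigger):
    def __init__(self, sensor_read):
        self.sensor_read = sensor_read        # e.g. a proximity sensor watching the cheek muscle
    def triggered(self) -> bool:
        return self.sensor_read() > 0.8       # threshold is an arbitrary placeholder

class GazeDwellTrigger(InputTrigger):
    def __init__(self, dwell_time_read, dwell_threshold_s=1.0):
        self.dwell_time_read = dwell_time_read
        self.dwell_threshold_s = dwell_threshold_s
    def triggered(self) -> bool:
        return self.dwell_time_read() >= self.dwell_threshold_s

def scan_and_select(items, highlight, trigger: InputTrigger):
    """Step through items, selecting the highlighted one when the trigger fires."""
    while True:
        for item in items:
            highlight(item)
            if trigger.triggered():
                return item
```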

More recently, you have been working with the famous UK scientist Dr. Peter Scott-Morgan, who is suffering from motor neurone disease and has the goal of becoming the world’s first full cyborg. What are some of the ambitious goals that Peter has?

One of the issues with AAC (Assistive and Augmentative Communication) is the “silence gap.” Many people with ALS (including Peter) use gaze control to choose letters/words on the screen to speak to others. This results in a long silence after someone finishes their sentence while the person gazes at their computer and starts formulating the letters and words to respond. Peter wanted to reduce this silence gap as much as possible to bring verbal spontaneity back to the communication. He also wants to preserve his voice and personality and use a text-to-speech system that expresses his unique style of communication (e.g. his quips, his quick-witted sarcasm, his emotions).

British roboticist Dr. Peter Scott-Morgan, who has motor neurone disease, began in 2019 to undergo a series of operations to extend his life using technology. (Credit: Cardiff Productions)

Could you discuss some of the technologies that are currently being used to assist Dr. Peter Scott-Morgan?

Peter is using ACAT (Assistive Context Aware Toolkit), the platform that we built during our work with Dr. Hawking and later released to open source. Unlike Dr. Hawking, who used the muscles in his cheek as an “input trigger” to control the letters on his screen, Peter is using gaze control (a capability we added to the existing ACAT) to speak to and control his PC, which interfaces with a Text-to-Speech (TTS) solution from a company called CereProc that was customized for him and enables him to express different emotions/emphasis. The system also controls an avatar that was customized for him.

We are currently working on a response generation system for ACAT that will allow Peter to interact with the system at a higher level using AI capabilities. This system will listen to Peter’s conversations over time and suggest responses for Peter to choose on the screen. The goal is that over time the AI system will learn from Peter’s data and enable him to “nudge” the system to provide the best responses using just a few keywords (similar to how searches work on the web today). Our goal with the response generation system is to reduce the silence gap in communication referenced above and empower Peter and future users of ACAT to communicate at a pace that feels more “natural.”
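As a rough, hypothetical sketch of the keyword “nudging” idea (not the actual Intel system, which would draw its candidates from a learned response-generation model), candidate responses could simply be re-ranked by how well they overlap with the keywords the user selects:

```python
# Hypothetical sketch of keyword "nudging": re-rank candidate responses by keyword overlap.
# In a real system the candidates would come from a learned response-generation model;
# here they are hard-coded for illustration.
def rank_responses(candidates, keywords):
    keywords = {k.lower() for k in keywords}
    def score(response):
        return len(set(response.lower().split()) & keywords)
    return sorted(candidates, key=score, reverse=True)

candidates = [
    "Yes, let's do that tomorrow afternoon.",
    "I would rather stay home and read.",
    "That sounds wonderful, thank you.",
]
print(rank_responses(candidates, keywords=["tomorrow", "afternoon"]))
```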

You’ve also spoken about the importance of transparency in AI. How big of an issue is this?

It is a big issue, especially when AI is deployed in decision-making systems or human/AI collaborative systems. For example, in the case of Peter’s assistive system, we need to understand what is causing the system to make these recommendations and how to influence the system’s learning so that it more accurately expresses his ideas.

In the larger context of decision making systems, whether it is helping with diagnosis based on medical imaging or making recommendations on granting loans, AI systems need to provide human interpretable information on how they arrived at decisions, what attributes or features were most impactful on that decision, what confidence does the system have in the inference made, etc.  This increases trust in the AI systems and enables better collaboration between humans and AI in mixed decision-making scenarios.

AI bias, specifically when it comes to racism and sexism, is a huge issue, but how do you identify other types of bias when you have no idea what biases you are looking for?

It is a very hard problem and one that can’t be solved with technology alone.  We need to bring more diversity into the development of AI systems (racial, gender, culture, physical ability, etc.).  This is clearly a huge gap in the population building these AI systems today.  In addition, it is critical to have multi-disciplinary teams engaged in the definition and development of these systems, bringing social science, philosophy, psychology, ethics and policy to the table (not just computer science), and engaging in the inquiry process in the context of the specific projects and problems.

You’ve spoken before about using AI to amplify human potential. What are some areas that show the most promise for this amplification of human potential?

An obvious area is enabling people with disabilities to live more independently, to communicate with loved ones and to continue to create and contribute to the society.  I see a big potential in education, in understanding student engagement and personalizing the learning experience to the individual needs and capabilities of the student to improve engagement, empower teachers with this knowledge and improve learning outcomes.  The inequity in education today is so profound and there is a place for AI to help reduce some of this inequity if we do it right.  There are endless opportunities for AI to bring a lot of value by creating human/AI collaborative systems in so many sectors (healthcare, manufacturing, etc) because what humans and AI bring to the table are very complementary. For this to happen, we need innovation at the intersection of social science, HCI and AI.  Robust multi-modal perception, context awareness, learning from limited data, physically situated HCI and interpretability are some of the key challenges that we need to focus on to bring this vision to fruition.

You’ve also spoken about how important emotion recognition is to the future of AI. Why should the AI industry focus more on this area of research?

Emotion recognition is a key capability of human/AI systems for multiple reasons.  One aspect is that human emotion offers key human context for any proactive system to understand before it can act.

More importantly, these types of systems need to continue to learn in the wild and adapt based on interactions with users, and while direct feedback is a key signal for learning, indirect signals are very important and they’re free (less work for the user).  For example, a digital assistant can learn a lot from the frustration in a user’s voice and use that as a feedback signal for learning what to do in the future, instead of asking the user for feedback every time.  This information can be used for active learning AI systems to continue to improve over time.
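A toy sketch of that idea, with every name and number invented for illustration, might use a detected frustration score as an implicit negative reward that shifts which action a simple assistant prefers, instead of asking for explicit feedback each time:

```python
# Hypothetical sketch: treat detected frustration as an implicit negative reward
# and update which action a simple assistant prefers. All names and numbers are invented.
import random

preferences = {"set_timer": 0.0, "play_music": 0.0, "read_news": 0.0}
LEARNING_RATE = 0.1

def update_from_interaction(action: str, frustration_score: float):
    """frustration_score in [0, 1], e.g. estimated from the user's tone of voice."""
    reward = 1.0 - 2.0 * frustration_score                      # frustrated -> negative reward
    preferences[action] += LEARNING_RATE * (reward - preferences[action])

def choose_action(epsilon=0.1):
    if random.random() < epsilon:                               # occasionally explore
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)                # otherwise pick the preferred action

update_from_interaction("play_music", frustration_score=0.9)    # user sounded annoyed
print(choose_action())
```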

Is there anything else that you would like to share about what you are working on at the Anticipatory Computing Lab or other issues that we have discussed?

When building assistive systems, we really need to think about how to build these systems responsibly and how to enable people to understand what information is being collected and how to control these systems in a practical way.  As AI researchers, we are often fascinated by data and wanting to have as much data as possible to improve these systems, however, there is a tradeoff between the type and amount of data we want and the privacy of the user.  We really need to limit the data we collect to what is absolutely needed to perform the inference task, make the users aware of exactly what data we are collecting and enable them to tune this tradeoff in meaningful and usable ways.

Thank you for the fantastic interview. Readers who wish to learn more about this project should read the article Intel’s Lama Nachman and Peter Scott-Morgan: Two Scientists, One a ‘Human Cyborg’.

Intel’s Anticipatory Computing Lab team that developed Assistive Context-Aware Toolkit includes (from left) Alex Nguyen, Sangita Sharma, Max Pinaroc, Sai Prasad, Lama Nachman and Pete Denman. Not pictured are Bruna Girvent, Saurav Sahay and Shachi Kumar. (Credit: Lama Nachman)
