

AI Brings New Potential for Prosthetics with 3D-Printed Hand


A new 3D-printed prosthetic hand paired with AI has been developed by the Biological Systems Engineering Lab at Hiroshima University in Japan. The technology could dramatically change the way prosthetics work, and it is another step toward combining the physical human body with artificial intelligence.

The 3D-printed prosthetic hand has been paired with a computer interface to create the lightest, cheapest model yet, and the most responsive to motion intent so far. Earlier prosthetic hands were typically made from metal, which made them both heavier and more expensive. The new technology works through a neural network trained to recognize certain combined signals, which the engineers working on the project have named “muscle synergies.”
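
To make the idea concrete, here is a minimal, hypothetical sketch (in Python, using synthetic data) of how a small neural network might map combined muscle-signal features to motion classes. It is not the Hiroshima team’s actual model; the channel counts, motion classes, and data below are illustrative assumptions only.

```python
# A minimal, hypothetical sketch: a small neural network maps combined EMG
# features ("muscle synergies") to motion classes. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_channels, n_motions, samples_per_motion = 8, 5, 200

# Each motion class is assumed to have its own characteristic activation pattern
# across the EMG channels (its "synergy"); real data would come from electrodes.
synergies = rng.uniform(0.2, 1.0, size=(n_motions, n_channels))
X = np.vstack([s + 0.1 * rng.standard_normal((samples_per_motion, n_channels))
               for s in synergies])
y = np.repeat(np.arange(n_motions), samples_per_motion)

# A small neural network learns to map a synergy pattern to one of five motions.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)

# A new burst of muscle activity is classified into one of the trained motions.
new_sample = synergies[2] + 0.1 * rng.standard_normal(n_channels)
print("predicted motion:", clf.predict(new_sample.reshape(1, -1))[0])
```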

The prosthetic hand has five independent fingers that can make complex movements. Compared to previous models, the fingers have a greater range of motion and can all move at the same time. These developments make it possible for the hand to be used for tasks like holding items such as bottles and pens. Whenever the user wants to move the hand or fingers in a certain way, they only have to imagine it. Professor Toshio Tsuji of the Graduate School of Engineering at Hiroshima University explained how a user moves the 3D-printed hand.

“The patient just thinks about the motion of the hand and then the robot automatically moves. The robot is like a part of his body. You can control the robot as you want. We will combine the human body and machine like one living body.”

The 3D-printed hand works through electrodes in the prosthetic that measure electrical signals coming from nerves through the skin, much as an electrocardiogram (ECG) measures heart activity. The measured signals are sent to a computer within five milliseconds, at which point the computer recognizes the desired movement and sends a control signal back to the hand.
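
The measure-classify-actuate loop described above might look something like the following sketch. The functions read_emg_window, classify_motion, and send_to_hand are hypothetical placeholders for the device-specific hardware and the trained model; only the roughly five-millisecond timing figure comes from the article.

```python
# A hypothetical sketch of the measure -> classify -> actuate loop described above.
import time

LOOP_BUDGET_S = 0.005  # the article reports signals reach the computer within ~5 ms

def read_emg_window():
    """Hypothetical placeholder: return the latest window of electrode samples."""
    return [0.0] * 8

def classify_motion(window):
    """Hypothetical placeholder: run the trained network, return a motion label."""
    return "grasp"

def send_to_hand(motion):
    """Hypothetical placeholder: forward the recognized motion to the hand's motors."""
    pass

for _ in range(1000):                       # a bounded run instead of an endless loop
    start = time.perf_counter()
    send_to_hand(classify_motion(read_emg_window()))
    # Sleep off whatever remains of the budget to keep a steady control rate.
    time.sleep(max(0.0, LOOP_BUDGET_S - (time.perf_counter() - start)))
```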

A neural network, named the Cybernetic Interface, helps the computer learn the different complex movements. It can differentiate between the five fingers so that each can move individually. Professor Tsuji also spoke about this aspect of the new technology.

“This is one of the distinctive features of this project. The machine can learn simple basic motions and then combine and then produce complicated motions.”

The technology was tested with seven people, one of whom was an amputee who had been wearing a prosthesis for 17 years. The participants performed daily tasks and achieved a 95% accuracy rate for single simple motions and a 93% rate for complex movements. The prosthetics used in this test were trained on only five different movements for each finger; many more complex movements could be supported in the future. With just these five trained movements, the amputee patient was able to pick up and put down items such as bottles and notebooks.

There are numerous possibilities for this technology. It could lower costs while providing highly functional prosthetic hands to amputee patients. Challenges remain, however, including muscle fatigue and the software’s ability to recognize a larger set of complex movements.

This work was completed by the Hiroshima University Biological Systems Engineering Lab along with patients from the Robot Rehabilitation Center at the Hyogo Institute of Assistive Technology in Kobe. The company Kinki Gishi created the socket used on the arm of the amputee patient.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.


Artificial Intelligence Used to Analyze Opinions Through Brain Activity


Researchers from the University of Helsinki have developed a new technique that utilizes artificial intelligence (AI) and the brain activity of groups of people in order to analyze opinions and draw conclusions. The researchers termed the technique “brainsourcing,” and it can help classify images or recommend content. 

What is Crowdsourcing?

Crowdsourcing is used when a complex task needs to be broken up into smaller, more manageable ones that are then distributed to large groups of people, who solve the problems individually. For example, people might be asked whether an object appears in an image, and their responses would then be used to train an image recognition system. Even today’s top AI-based image recognition systems are not yet fully automated, so the opinions of several people on the content of many sample images must be used as training data.
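
As a simple illustration of that crowdsourcing step, the sketch below aggregates several people’s yes/no answers into a single training label by majority vote. The image IDs and answers are made up for illustration and are not from the study.

```python
# A minimal sketch of crowdsourced labeling: several people answer "does the
# object appear in this image?", and a majority vote becomes the training label.
from collections import Counter

# image_id -> individual yes/no answers collected from the crowd (illustrative)
crowd_answers = {
    "img_001": ["yes", "yes", "no", "yes"],
    "img_002": ["no", "no", "yes", "no"],
}

def majority_label(answers):
    """Return the most common answer; ties fall back to 'no'."""
    counts = Counter(answers)
    return "yes" if counts["yes"] > counts["no"] else "no"

training_labels = {img: majority_label(ans) for img, ans in crowd_answers.items()}
print(training_labels)   # {'img_001': 'yes', 'img_002': 'no'}
```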

The researchers wanted to implement crowdsourcing by analyzing the electroencephalograms (EEGs) of individuals with AI techniques. This allows the information to be extracted directly from the EEG rather than requiring people to report their opinions.

Tuukka Ruotsalo is an Academy Research Fellow from the University of Helsinki. 

“We wanted to investigate whether crowdsourcing can be applied to image recognition by utilising the natural reactions of people without them having to carry out any manual tasks with a keyboard or mouse,” says Ruotsalo.

The Study

The study involved 30 volunteers who were shown human faces on a computer display. The participants labeled the faces in their minds based on what was in the images, such as whether an individual was blond or dark-haired, or smiling or not. The big difference from conventional crowdsourcing tasks was that the participants did not need to take any action beyond observing the images presented to them.

Electroencephalography was then used to collect the brain activity of each participant, and the AI algorithm used this to learn to recognize images relevant to the task, like when an image of a person with certain features appeared on-screen.
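
A hedged sketch of this idea is shown below: a standard classifier (here scikit-learn’s linear discriminant analysis, a common choice for EEG work, though not necessarily what the Helsinki group used) learns to separate “relevant” from “non-relevant” trials using synthetic EEG-like feature vectors.

```python
# Illustrative only: a classifier learns from synthetic per-trial EEG features
# which displayed images were "relevant" to the mental label.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_features = 300, 32   # e.g. per-trial EEG amplitudes over a time window

relevant = rng.integers(0, 2, n_trials)        # 1 = the image matched the mental label
# Assume relevant images evoke a slightly larger response on some of the features.
X = rng.standard_normal((n_trials, n_features)) \
    + 0.8 * relevant[:, None] * (np.arange(n_features) > 20)

clf = LinearDiscriminantAnalysis().fit(X[:200], relevant[:200])
print("held-out accuracy:", clf.score(X[200:], relevant[200:]))
```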

The researchers found that the computer was capable of interpreting these mental labels directly from the EEG, demonstrating that brainsourcing can be used in recognition tasks.

As for the future of this technique, student and research assistant Keith Davis says, “Our approach is limited by the technology available.”

“Current methods to measure brain activity are adequate for controlled setups in a laboratory, but the technology needs to improve for everyday use. Additionally, these methods only capture a very small percentage of total brain activity. As brain imaging technologies improve, it may become possible to capture preference information directly from the brain. Instead of using conventional ratings or like buttons, you could simply listen to a song or watch a show, and your brain activity alone would be enough to determine your response to it.”

The results could be applied in brain-computer interfaces, particularly those built around lightweight, wearable EEG equipment. Lightweight wearables capable of measuring EEG are currently under development.

This type of technology allows AI to extract valuable information with very little effort on the human’s part. As it continues to improve, active participation from the individual may become unnecessary in many cases.

 



Chris Aimone, Co-Founder and Chief Technology Officer at Muse – Interview Series



Chris Aimone co-founded Muse with an ethos to create technology that expands our perspective of ourselves and the world around us.

An artist and inventor at heart, Chris’ creative and design practice has spanned many fields, including architecture, augmented reality, computer vision, music and robotics. Looking to bring innovative new experiences to life, Chris has built installations for the Ontario Science Centre and contributed to major technology art projects featured around the world (including Burning Man).

Can you share with us how your love of Robotics and Brain-Machine Interfaces (BMI) began?

When I was very young, instead of playing with popular/trendy children’s toys, I was interested in tools – so much so, that my favorite book was actually a catalogue of tools (at 18 months) and I wanted a sewing machine for Christmas when I was 3.

I was interested in what tools could do – how they could extend my reach into the impossible, and my love for robotics and BMI was simply an extension of that. I was so curious about what lay just beyond the limits of my body’s capabilities, just beyond the range of my senses.  It makes a lot of sense in a way, as I believe we humans love to figure things out whether it’s through our senses or through applying our knowledge and our tools together to explore and make sense of our experiences.

I didn’t start building robots or BMIs until much later; I’m pretty sure it was just a question of access. Computers weren’t so affordable (or approachable) in the 80s. I learned to program on a Commodore 64, but I didn’t want my creations to only live in a computer. I learned to wire things into the parallel port, but it was frustrating and tedious. There was no Arduino, no Raspberry Pi, no next-day deliveries from DigiKey.

The coolest thing I built back then was a mask with some computer-controlled flashing lights that I could pulsate into my eyes at different frequencies. I had noticed that my perception got a little weird looking at flickering LEDs in my tinkering, so I was curious about what would happen if I affected my entire vision that way. Clearly I had a latent interest in consciousness and the brain-machine interface. I’m really curious about what I might have built if I had access to Muse or other hackable technologies of today back then!

 

What were some of the first robots that you worked on?

I built a really cool wall-climbing robot with a couple of friends. It had four vacuum cups for hands and a big vacuum belly. The only use we could think of for it was autonomous window cleaning.  It was a super fun project enabled by the kindness of automation vendors who gave us parts when we cold-called them with a crazy idea… but it actually worked! The project also taught us a lot about electromagnetic interference and the strength of the drywall in the house.

Following that, I built a painting robot one summer that painted on a huge 6×8 wall canvas using a brush mounted to a mutant Commodore 64 printer. It was a monstrosity that used every bit of tech junk I could find, including a barbecue tank, computer mice and my old rollerblades. It had a webcam from the mid-90s and attempted to draw what it saw. It was so ridiculous… I still miss its patient, humorous personality.

When I was doing my master’s, I built a similarly whimsical robot with some friends that was the size of a house. We were interested in what would happen if a building changed shape and personality in response to the people who were in it. It was super cool… and the building felt alive! It moved and made noise. You became so aware of yourself, it felt like being in an empty cathedral.

 

For over a decade you essentially became a cyborg. Can you share your story of how this journey began?

By the time I finished my undergraduate degree, computers had become pretty capable. I could afford a computer that could do simple processing of video at 15 frames per second, and Linux was almost installable by the uninitiated. I loved the memory and speed of computers, and it led me to ask: what if I had similar abilities?

I met this professor at UofT named Steve Mann, a wild inventor who is still a member of the InteraXon advisory board today. He walked around with a computer on his head and sent laser images into his eyes. It was exactly what I was looking for! If you love tools, what better thing to do than encrust yourself with them?

Steve and I started working a lot together. We were both interested in extending our overall perception. We worked a lot with computer vision and built very early augmented reality devices. In many ways, they still amaze me more than the AR that’s available today. Steve had invented a way of creating perfect optical alignment between computer graphics and your natural view of the world.  This allowed us to do beautiful things like melding information from a far-infrared camera seamlessly into your vision. Walking around and being able to see heat is really interesting.

 

You scaled back your cyborg ambitions, as it caused you to distance yourself from others. Could you share some details about this transition in your mindset?

I had imagined a deep and seamless integration with computing technology: Information always available, instant communication, AI assistants, and extended-sensory abilities.  I really believed in technology always being there so I could have it when needed.

Things changed for me when I started broadcasting images to a website. A local telecom company donated a bunch of mobile phones with serial data connections to our lab at the university.  We could slowly upload images, about one every few seconds at low fidelity. We started a challenge to see who could stream the most. It was a super interesting experiment. I wore computers for months streaming my life to the internet, making sure to post every few seconds whenever I was doing something interesting — living my life through a camera view.

The truth is, it was exciting to feel like I wasn’t alone, posting to an imagined audience.  Sound familiar? We all got a taste of present-day social media, 20 years ago. And what did I learn?

Being stuck in a computer, trying to connect with others by broadcasting a virtual life, kept me from being present with others… and I found myself feeling more alone than ever. Woah.

I walked around in constant information overload, with a computer terminal in front of my face signalling any time an email came in; when an image was uploaded, a text web browser would open with something I was researching – it was a lot.

Though I was interested in computers helping me solve problems, I began to experience less freedom of thought. I felt constantly interrupted, being triggered by what was bubbling up through cyberspace. I discovered the challenge of staying in touch with who you are and the loss of ability to tune into your spark of creativity when you are always in a state of information overload.

I was interested in technology that made me feel expansive, creative, and unfettered, but somehow, I painted myself into a corner with much of the opposite.

 

You did a really remarkable societal experiment in which users across Canada could control lights on the CN Tower and at Niagara Falls using their minds. Could you describe this?

This was a special opportunity we had early on in the journey of Muse, at the 2010 Winter Olympics, in an effort to connect the various parts of Canada to the global event.

While it’s not yet fully understood, we know that our brainwaves synchronize in interesting ways, especially when we do things in close relationship with one another – when we communicate with each other, when we dance or when we make music. What happens when you project the brain activity of an individual in a way that lets it be experienced by many?

We created an experience where people attending the games on the west coast of Canada could affect the experience of thousands of people 3,000 miles away. By wearing a brain-sensing device, participants connected their consciousness to a huge real-time lighting display that illuminated Niagara Falls, downtown Toronto via the CN Tower, and the Canadian parliament buildings in Ottawa.

You sat in front of a huge screen with a real-time view of the light displays so you could see the live effect of your mind in this larger than life experience. People would call up friends in Toronto and get them to watch as the patterns of activity in their brain lit up the city with a dramatic play of light.

 

You’ve described Muse as a ‘happy accident’. Could you share the details behind this happy accident, and what you learned from the experience?

I often forget the beauty of tinkering as building tech can be really tedious. You have to get rigid, but so much great stuff happens when you can break out the patch cables, plug a bunch of random stuff together and just see what happens… just like how Muse was created!

The first seed of Muse was planted when we wrote some code to connect to an old medical EEG system and streamed the data over a network. We had to find a computer chassis that supported ISA cards and we made a makeshift headband. We wanted to get EEG data feeding into our wearable computers. Could we upload images automatically when we saw something interesting?  We had heard that when you closed your eyes your alpha brainwaves would become larger… could this be how we sense if we were interested in what we saw?

We hacked together some signal processing with some basic FFT spectral analysis and hooked up the result to a simple graphic that was like one of those vertical light dimmer sliders. Simple idea,  but it was a pretty elaborate setup. What happened next was super interesting. We took turns wearing the device, closing and opening our eyes.  Sure enough, the slider went up and down, but it would wander around in curious ways. When we closed our eyes it went up, but not all the way up and still wandered around… What was happening?

We spent hours playing with it, trying to understand what made it wander and whether we could control it. We hooked the output to an audible sound so we could hear it go up and down when we had our eyes closed. I remember sitting there for ages, eyes closed, exploring my consciousness and the sound.

I soon discovered I could focus my consciousness in different ways, changing the sound,  but also changing my experience, my perception and the way I felt. We invited other people into the lab and the same thing happened to them. They would close their eyes and go into a deep inner exploration (sounds kind of like meditation doesn’t it?!). It was wild – we completely forgot about our original idea as this was so much more interesting. That was the happy accident – I can say I discovered meditation and mindfulness through technology, by accident!
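
For readers curious what that early setup might have looked like, below is a minimal sketch of the kind of processing Chris describes: basic FFT spectral analysis of a chunk of EEG, with the relative alpha-band (8-12 Hz) power mapped onto a slider-like value. The sample rate and the synthetic signal are assumptions; this is not InteraXon’s original code.

```python
# A minimal sketch: FFT spectral analysis of one second of EEG, with relative
# alpha power mapped onto a 0..1 "slider". Synthetic signal for illustration.
import numpy as np

FS = 256                          # sample rate in Hz (an assumed value)
t = np.arange(FS) / FS            # one second of data
# Synthetic "eyes closed" EEG: a 10 Hz alpha rhythm plus broadband noise.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(FS)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), d=1 / FS)

alpha_power = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
total_power = spectrum[(freqs >= 1) & (freqs <= 40)].sum()

slider = alpha_power / total_power   # rough 0..1 value to drive the on-screen "dimmer"
print(f"slider position: {slider:.2f}")
```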

 

Can you explain some of the technology that enables Muse to detect brainwaves?

The brain has billions of neurons, and each individual neuron connects (on average) to thousands of others. Communication happens between them through small electrical currents that travel along the neurons and throughout enormous networks of brain circuits. When all these neurons are activated they produce electrical pulses – visualize a wave rippling through the crowd at a sports arena –  this synchronized electrical activity results in a “brainwave”.

When many neurons interact in this way at the same time, this activity is strong enough to be detected even outside the brain. By placing electrodes on the scalp, this activity can be amplified, analyzed, and visualized. This is electroencephalography, or EEG – a fancy word that just means electric brain graph. (Encephalon, the brain, is derived from the ancient Greek “enképhalos,” meaning within the head.)

Muse has been tested and validated against EEG systems that are exponentially more expensive, and it’s used by neuroscientists around the world in real-world neuroscience research inside and outside the lab. Using 7 finely calibrated sensors – 2 on the forehead and 2 behind the ears, plus 3 reference sensors – Muse is a next-generation, state-of-the-art EEG system that uses advanced algorithms to train beginner and intermediate meditators at controlling their focus. It teaches users how to manipulate their brain states and how to change the characteristics of their brains.

The Muse algorithm technology is more complex than traditional neurofeedback.  In creating the Muse app, we started from these brainwaves and then spent years doing intensive research on higher-order combinations of primary, secondary and tertiary characteristics of raw EEG data and how they interact with focused-attention meditation.
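
As a generic illustration of the kind of band analysis EEG makes possible (and emphatically not Muse’s proprietary algorithm), the sketch below band-pass filters a synthetic scalp signal into the classic delta, theta, alpha, and beta bands and reports the energy in each. The band edges and sample rate are common conventions, not values taken from the interview.

```python
# Illustrative only: split a synthetic EEG signal into the classic frequency bands.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256                                             # assumed sample rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

t = np.arange(4 * FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))   # synthetic signal

for name, (lo, hi) in BANDS.items():
    # Fourth-order Butterworth band-pass filter, applied forwards and backwards.
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    band = filtfilt(b, a, eeg)
    print(f"{name:5s} RMS amplitude: {np.sqrt(np.mean(band ** 2)):.3f}")
```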

 

What are some of the noticeable meditative or mental improvements that you have personally noticed from using Muse?

My attention is more agile and it’s stronger.  It sounds simple, but I know how to relax. I understand my emotions better and I’m more in tune with others.  It’s truly life changing.

 

Outside of people that meditate, what other segments of the population are avid users of Muse?

There are a lot of biohackers and scientists – some of whom have done some really awesome things. Prof. Krigolson from UVic has been using Muse in the Mars habitat, and he’s done experiments on Mount Everest with the monks who live in the monasteries on the mountain. There are also some awesome folks at the MIT Media Lab who are using Muse while sleeping to affect dreams. So cool.

 

Is there anything else that you would like to share about Muse?

Entering the world of sleep with our latest product release Muse S has been infinitely interesting from a product and research perspective, and very exciting when it comes to the positive applications Muse can have for so many people who are looking to get a better night’s sleep.

Also, I personally love how Muse can render your brain activity as sound. From years of studying biosignals, something I’ve never grown tired of is the beauty in these waves that flow within us.   Like the waves of the ocean, they are infinitely complex, yet simple and familiar. I love that we are beautiful inside, and I love the challenge of bringing that out and celebrating it as sound and music.

Thank you for the great interview; I look forward to getting my hands on the Muse. Anyone who wishes to learn more or to order a unit should visit the Muse website.



Researchers Demonstrate Flexible Brain Interfaces


A new project led by a team of researchers has demonstrated how an ultrathin, flexible neural interface can be implanted into the brain. The interface consists of thousands of electrodes and can last over six years. 

The results were published last month in the journal Science Translational Medicine. The team of researchers includes Jonathan Viventi, an assistant professor of biomedical engineering at Duke University; John Rogers, the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery at Northwestern University; and Bijan Pesaran, a professor of neural science at NYU. 

Challenges Surrounding Sensors in the Brain

Viventi spoke about the difficulty of getting sensors to work in the brain. 

“Trying to get these sensors to work in the brain is like tossing your foldable, flexible smartphone in the ocean and expecting it to work for 70 years,” said Viventi. “Except we’re making devices that are much thinner and much more flexible than the phones currently on the market. That’s the challenge.”

There are many difficult challenges when it comes to introducing foreign objects into the brain. They have to survive in a corrosive, salty environment while the surrounding tissue and the immune system attack the object.

The difficulty is increased even more when talking about electrical devices. Most long-term implantable devices are hermetically sealed with laser-welded titanium casings. 

“Building water-tight, bulk enclosures for such types of implants represents one level of engineering challenge,” Rogers said. “We’re reporting here the successful development of materials that provide similar levels of isolation, but with thin, flexible membranes that are one hundred times thinner than a sheet of paper.”

Because of the layout of the human brain, space and flexibility are extremely important. The human brain consists of tens of billions of neurons, but existing neural interfaces can only sample around a hundred sites. This specific challenge has led the team of researchers to develop new approaches. 

“You need to move the electronics to the sensors themselves and develop local intelligence that can handle multiple incoming signals,” said Viventi. “This is how digital cameras work. You can have tens of millions of pixels without tens of millions of wires because many pixels share the same data channels.”

The researchers were able to come up with flexible neural devices that are 25 micrometers thick, consisting of 360 electrodes. 
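
The camera analogy can be made concrete with a small sketch: if the electrode sites are arranged in a grid and read out one row at a time, only one output wire per column is needed rather than one per electrode. The grid dimensions below (18 by 20, giving 360 sites) are an illustrative assumption, not the device’s actual layout.

```python
# Illustrative sketch of shared-channel readout, the way a camera reads millions
# of pixels over a handful of wires. Numbers are illustrative only.
import numpy as np

n_rows, n_cols = 18, 20                               # 360 electrode sites in a grid
electrode_voltages = np.random.randn(n_rows, n_cols)  # stand-in for neural signals

def read_out_frame(grid):
    """Scan the array one row at a time so only n_cols shared output lines are needed."""
    frame = []
    for row in range(grid.shape[0]):   # select one row of electrodes at a time
        frame.append(grid[row, :])     # the whole row shares the same column wires
    return np.vstack(frame)

frame = read_out_frame(electrode_voltages)
print(frame.shape)   # (18, 20): 360 samples captured over 20 shared channels
```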

“We tried a bunch of strategies before. Depositing polymers as thin as is required resulted in defects that caused them to fail, and thicker polymers didn’t have the flexibility that was required,” said Viventi. “But we finally found a strategy that outlasts them all and have now made it work in the brain.”

Layer of Silicon Dioxide

The paper demonstrates how a thermally grown layer of silicon dioxide less than a micrometer thick can protect the device from the harsh environment within the brain. The layer degrades at a rate of 0.46 nanometers per day, but the small amounts that dissolve into the body do not create any problems.
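
A quick back-of-the-envelope check shows how that dissolution rate squares with the reported lifetime; the exact layer thickness below is an assumption, since the text only states it is under a micrometer.

```python
# Rough lifetime check for the silicon dioxide barrier described above.
layer_thickness_nm = 1000            # "less than a micrometer" -- assumed ~1,000 nm
dissolution_rate_nm_per_day = 0.46   # degradation rate reported in the article

lifetime_days = layer_thickness_nm / dissolution_rate_nm_per_day
print(f"~{lifetime_days / 365:.1f} years")   # roughly 6 years, matching the reported lifetime
```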

The researchers also demonstrated how the electrodes within the device can use capacitive sensing to detect neural activity. 

These new developments are just early steps in advancing the technology. The team is now working on scaling the prototype from 1,000 electrodes to over 65,000.

“One of our goals is to create a new type of visual prosthetic that interacts directly with the brain that can restore at least some sight capacity for people with damaged optic nerves,” said Viventi. “But we can also use these types of devices to control other types of prosthetics or in a wide range of neuroscience research projects.”

 
