Facial Expressions Of Mice Analyzed With Artificial Intelligence

According to Nature, a team of researchers has recently employed artificial intelligence to analyze and interpret the facial expressions of mice. Mice are among the most commonly used laboratory animals, yet little is known about how they express themselves with their faces. The research could also help scientists understand which neurons impact specific facial expressions in humans.

The study of animal expressions is an old idea but a relatively new discipline. Darwin initially hypothesized that animal facial expressions might grant us insight into their emotions, but only recently have science and technology advanced to the point where it is possible to study such expressions and emotions.

David Anderson, a neuroscientist at the California Institute of Technology in Pasadena, explained that the study was an important step in demystifying how the brain manifests certain emotions and how those emotions might be expressed in facial muscles. Meanwhile, Nadine Gogolla, a neuroscientist at the Max Planck Institute of Neurobiology in Germany, explained the rationale behind the study. Gogolla led the study and was inspired by a 2014 paper written by Anderson and colleagues. In that paper, Anderson and colleagues hypothesized that emotions and other brain states should display certain measurable attributes, theorizing that the strength of the stimulus should impact the intensity of the emotion and that emotions should be persistent, continuing for a while even after the stimulus responsible for them has ended.

As Inverse explained, Gogolla and the other researchers filmed the faces of mice as they were exposed to a variety of stimuli, both pleasant and unpleasant. For instance, the mice were given either bitter or sweet fluids. The researchers stated that mice can shift their expressions by moving facial features such as the nose, eyes, ears, and cheeks. However, there wasn’t a method of easily linking different facial expressions to different emotions. The research team dealt with this problem by taking the videos of the mice’s faces and splitting them into short clips, which were then fed into a machine learning algorithm.
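
The study’s actual pipeline is not spelled out here, so the snippet below is only a rough, hypothetical sketch of the general idea: each short clip is assumed to have already been reduced to a feature vector describing the face (the features, labels, and data are all placeholders), and an off-the-shelf scikit-learn classifier is trained to map clips to the stimulus that was presented.

```python
# Hypothetical sketch only: classify short facial-video clips by the stimulus
# that was presented, using pre-computed per-clip feature vectors.
# This is not the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 600 clips, each summarized as a 256-dimensional feature
# vector (e.g., pooled image descriptors of the face), labeled by stimulus.
X = rng.normal(size=(600, 256))
y = rng.integers(0, 3, size=600)          # 0 = sweet, 1 = bitter, 2 = neutral

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real clip features in place of the random placeholders, the held-out accuracy indicates how reliably expressions can be linked to the stimuli that evoked them.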

Camilla Bellone, of the University of Geneva in Switzerland, says that the AI-driven method of examining facial expressions is valuable “because it avoids any biases of the experimenter”.

The AI algorithm was reportedly able to recognize the various facial expressions of the mice, as movements of different facial muscles are correlated with different emotions. A mouse shows that it is experiencing pleasure by pulling its jaw and ears forward and pulling the tip of its nose down toward its mouth. Moreover, when analyzing how the expressions manifested in response to stimuli, the research team found the expressions were both persistent and correlated with stimulus strength, just as Anderson and colleagues theorized.

The team of researchers then used a technique called optogenetics to try to determine which brain cells are responsible for these emotions. The research team examined the individual neural circuits associated with certain emotions in animals. When these circuits were stimulated, the mice made the corresponding facial expressions.

The research team also utilized a technique referred to as two-photon calcium imaging, which can track individual neurons. Using this technique, they identified neurons in the brains of the mice that activated only when certain facial expressions, and therefore emotions, were observed. Gogolla theorized that these neurons might represent part of an encoding for emotions in the brain, an encoding possibly conserved throughout the evolutionary history of mammals, meaning mice and humans may share some common features of this encoding.

Chris Aimone, Co-Founder and Chief Technology Officer at Muse - Interview Series

Chris Aimone co-founded Muse with an ethos to create technology that expands our perspective of ourselves and the world around us.

An artist and inventor at heart, Chris’ creative and design practice has spanned many fields, including architecture, augmented reality, computer vision, music and robotics. Looking to bring innovative new experiences to life, Chris has built installations for the Ontario Science Centre and contributed to major technology art projects featured around the world (including Burning Man).

Can you share with us how your love of Robotics and Brain-Machine Interfaces (BMI) began?

When I was very young, instead of playing with popular/trendy children’s toys, I was interested in tools – so much so, that my favorite book was actually a catalogue of tools (at 18 months) and I wanted a sewing machine for Christmas when I was 3.

I was interested in what tools could do – how they could extend my reach into the impossible, and my love for robotics and BMI was simply an extension of that. I was so curious about what lay just beyond the limits of my body’s capabilities, just beyond the range of my senses.  It makes a lot of sense in a way, as I believe we humans love to figure things out whether it’s through our senses or through applying our knowledge and our tools together to explore and make sense of our experiences.

I didn’t start building robots or BMIs until much later; I’m pretty sure it was just a question of access. Computers weren’t so affordable (or approachable) in the 80s. I learned to program on a Commodore 64, but I didn’t want my creations to only live in a computer. I learned to wire things into the parallel port, but it was frustrating and tedious. There was no Arduino, no Raspberry Pi, no next-day deliveries from Digikey.

The coolest thing I built back then was a mask with some computer-controlled flashing lights that I could pulsate into my eyes at different frequencies. I had noticed that my perception got a little weird looking at flickering LEDs in my tinkering, so I was curious about what would happen if I affected my entire vision that way. Clearly I had a latent interest in consciousness and the brain-machine interface. I’m really curious about what I might have built if I had access to Muse or other hackable technologies of today back then!

 

What were some of the first robots that you worked on?

I built a really cool wall-climbing robot with a couple of friends. It had four vacuum cups for hands and a big vacuum belly. The only use we could think of for it was autonomous window cleaning.  It was a super fun project enabled by the kindness of automation vendors who gave us parts when we cold-called them with a crazy idea… but it actually worked! The project also taught us a lot about electromagnetic interference and the strength of the drywall in the house.

Following that, I built a painting robot one summer that painted on a huge 6×8 wall canvas using a brush mounted to a mutant Commodore 64 printer. It was a monstrosity that used every bit of tech junk I could find, including a barbecue tank, computer mice and my old rollerblades. It had a webcam from the mid-90s and attempted to draw what it saw. It was so ridiculous… I still miss its patient, humorous personality.

When I was doing my masters, I built a similarly whimsical robot with some friends that was the size of a house. We were interested in what would happen if a building changed shape and personality in response to the people who were in it. It was super cool…and the building felt alive!  It moved and made noise. You became so aware of yourself, it felt like being in an empty cathedral.

 

For over a decade you essentially became a cyborg. Can you share your story of how this journey began?

By the time I finished my undergraduate degree, computers had become pretty capable. I could afford a computer that could do simple processing of video at 15 frames per second, and Linux was almost installable by the uninitiated. I loved the memory and speed of computers, and it led me to ask: What if I had similar abilities?

I met this professor at UofT named Steve Mann, a wild inventor who is still a member of the InteraXon advisory board today. He walked around with a computer on his head and sent laser images into his eyes. It was exactly what I was looking for! If you love tools, what better thing to do than encrust yourself with them?

Steve and I started working a lot together. We were both interested in extending our overall perception. We worked a lot with computer vision and built very early augmented reality devices. In many ways, they still amaze me more than the AR that’s available today. Steve had invented a way of creating perfect optical alignment between computer graphics and your natural view of the world.  This allowed us to do beautiful things like melding information from a far-infrared camera seamlessly into your vision. Walking around and being able to see heat is really interesting.

 

You scaled back your cyborg ambitions, as it caused you to distance yourself from others. Could you share some details about this transition in your mindset?

I had imagined a deep and seamless integration with computing technology: Information always available, instant communication, AI assistants, and extended-sensory abilities.  I really believed in technology always being there so I could have it when needed.

Things changed for me when I started broadcasting images to a website. A local telecom company donated a bunch of mobile phones with serial data connections to our lab at the university.  We could slowly upload images, about one every few seconds at low fidelity. We started a challenge to see who could stream the most. It was a super interesting experiment. I wore computers for months streaming my life to the internet, making sure to post every few seconds whenever I was doing something interesting — living my life through a camera view.

The truth is, it was exciting to feel like I wasn’t alone, posting to an imagined audience.  Sound familiar? We all got a taste of present-day social media, 20 years ago. And what did I learn?

Being stuck in a computer, trying to connect with others by broadcasting a virtual life, kept me from being present with others… and I found myself feeling more alone than ever. Woah.

I walked around in constant information overload: a computer terminal in front of my face signalled any time an email came in or an image was uploaded, and a text web browser would be open with something I was researching – it was a lot.

Though I was interested in computers helping me solve problems, I began to experience less freedom of thought. I felt constantly interrupted, being triggered by what was bubbling up through cyberspace. I discovered the challenge of staying in touch with who you are and the loss of ability to tune into your spark of creativity when you are always in a state of information overload.

I was interested in technology that made me feel expansive, creative, and unfettered, but somehow, I painted myself into a corner with much of the opposite.

 

You did a really remarkable societal experiment, where users across Canada could use their minds to control lights on the CN Tower and Niagara Falls. Could you describe this?

This was a special opportunity we had early on in the journey of Muse, at the 2010 Winter Olympics, in an effort to connect the various parts of Canada to the global event.

While it’s not yet fully understood, we know that our brainwaves synchronize in interesting ways, especially when we do things in a close relationship – when we communicate with each other, when we dance or when we make music. What happens when you project the brain activity of an individual in a way that can be experienced by many?

We created an experience where people attending the games on the west coast of Canada could affect the experience of thousands of people 3,000 miles away. By wearing a brain-sensing device, participants connected their consciousness to a huge real-time lighting display that illuminated Niagara Falls, downtown Toronto via the CN Tower, and the Canadian parliament buildings in Ottawa.

You sat in front of a huge screen with a real-time view of the light displays so you could see the live effect of your mind in this larger than life experience. People would call up friends in Toronto and get them to watch as the patterns of activity in their brain lit up the city with a dramatic play of light.

 

You’ve described Muse as a ‘happy accident’. Could you share the details behind this happy accident, and what you learned from the experience?

I often forget the beauty of tinkering as building tech can be really tedious. You have to get rigid, but so much great stuff happens when you can break out the patch cables, plug a bunch of random stuff together and just see what happens… just like how Muse was created!

The first seed of Muse was planted when we wrote some code to connect to an old medical EEG system and streamed the data over a network. We had to find a computer chassis that supported ISA cards and we made a makeshift headband. We wanted to get EEG data feeding into our wearable computers. Could we upload images automatically when we saw something interesting?  We had heard that when you closed your eyes your alpha brainwaves would become larger… could this be how we sense if we were interested in what we saw?

We hacked together some signal processing with some basic FFT spectral analysis and hooked up the result to a simple graphic that was like one of those vertical light dimmer sliders. Simple idea, but it was a pretty elaborate setup. What happened next was super interesting. We took turns wearing the device, closing and opening our eyes. Sure enough, the slider went up and down, but it would wander around in curious ways. When we closed our eyes it went up, but not all the way up, and it still wandered around… What was happening?
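
As a rough, hypothetical sketch of that kind of setup (not InteraXon’s actual code), the snippet below estimates relative alpha-band (8–12 Hz) power in a short window of single-channel EEG with a basic FFT and uses it as a 0–1 “slider” value; the sampling rate, window length, and test signal are assumptions for illustration.

```python
# Hypothetical sketch: estimate alpha-band (8-12 Hz) power from one EEG channel
# with a basic FFT and map it to a 0-1 "dimmer slider" value.
import numpy as np

FS = 256            # assumed sampling rate in Hz
WINDOW_SEC = 2      # assumed length of the analysis window

def alpha_slider(eeg_window: np.ndarray, fs: int = FS) -> float:
    """Return a 0-1 value that grows with relative alpha power."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
    total = spectrum[(freqs >= 1) & (freqs <= 40)].sum() + 1e-12
    return float(alpha / total)   # relative alpha power, already in [0, 1]

# Fake 2-second window: background noise plus a 10 Hz "eyes closed" rhythm.
t = np.arange(FS * WINDOW_SEC) / FS
eeg = np.random.randn(len(t)) + 3 * np.sin(2 * np.pi * 10 * t)
print(f"slider value: {alpha_slider(eeg):.2f}")
```

On real data the value would rise when the wearer closes their eyes and alpha activity strengthens, and wander with attention the rest of the time, which is exactly the behaviour described here.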

We spent hours playing with it, trying to understand what made it wander and whether we could control it. We hooked the output to an audible sound so we could hear it go up and down when we had our eyes closed. I remember sitting there for ages, eyes closed, exploring my consciousness and the sound.

I soon discovered I could focus my consciousness in different ways, changing the sound, but also changing my experience, my perception and the way I felt. We invited other people into the lab and the same thing happened to them. They would close their eyes and go into a deep inner exploration (sounds kind of like meditation doesn’t it?!). It was wild – we completely forgot about our original idea as this was so much more interesting. That was the happy accident – I can say I discovered meditation and mindfulness through technology, by accident!

 

Can you explain some of the technology that enables Muse to detect brainwaves?

The brain has billions of neurons, and each individual neuron connects (on average) to thousands of others. Communication happens between them through small electrical currents that travel along the neurons and throughout enormous networks of brain circuits. When many of these neurons are activated together they produce electrical pulses – visualize a wave rippling through the crowd at a sports arena – and this synchronized electrical activity results in a “brainwave”.

When many neurons interact in this way at the same time, this activity is strong enough to be detected even outside the brain. By placing electrodes on the scalp, this activity can be amplified, analyzed, and visualized. This is electroencephalography, or EEG – a fancy word that just means electric brain graph. (Encephalon, the brain, is derived from the ancient Greek “enképhalos,” meaning within the head.)

Muse has been tested and validated against EEG systems that are far more expensive, and it’s used by neuroscientists around the world in real-world neuroscience research inside and outside the lab. Using 7 finely calibrated sensors – 2 on the forehead and 2 behind the ears, plus 3 reference sensors – Muse is a next-generation, state-of-the-art EEG system that uses advanced algorithms to train beginner and intermediate meditators to control their focus. It teaches users how to manipulate their brain states and how to change the characteristics of their brains.

The Muse algorithm technology is more complex than traditional neurofeedback.  In creating the Muse app, we started from these brainwaves and then spent years doing intensive research on higher-order combinations of primary, secondary and tertiary characteristics of raw EEG data and how they interact with focused-attention meditation.

 

What are some of the noticeable meditative or mental improvements that you have personally noticed from using Muse?

My attention is more agile and it’s stronger.  It sounds simple, but I know how to relax. I understand my emotions better and I’m more in tune with others.  It’s truly life changing.

 

Outside of people that meditate, what other segments of the population are avid users of Muse?

There are a lot of biohackers and scientists – some of whom have done some really awesome things. Prof. Krigolson from UVic has been using Muse in the Mars habitat, and he’s done experiments on Mount Everest with the monks who live in the monasteries on the mountain. There are also some awesome folks at the MIT Media Lab who are using Muse during sleep to affect dreams. So cool.

 

Is there anything else that you would like to share about Muse?

Entering the world of sleep with our latest product release Muse S has been infinitely interesting from a product and research perspective, and very exciting when it comes to the positive applications Muse can have for so many people who are looking to get a better night’s sleep.

Also, I personally love how Muse can render your brain activity as sound. From years of studying biosignals, something I’ve never grown tired of is the beauty in these waves that flow within us.   Like the waves of the ocean, they are infinitely complex, yet simple and familiar. I love that we are beautiful inside, and I love the challenge of bringing that out and celebrating it as sound and music.

Thank you for the great interview; I look forward to getting my hands on the Muse. Anyone who wishes to learn more or to order a unit should visit the Muse website.

Researchers Demonstrate Flexible Brain Interfaces

A new project led by a team of researchers has demonstrated how an ultrathin, flexible neural interface can be implanted into the brain. The interface consists of thousands of electrodes and can last over six years. 

The results were published last month in the journal Science Translational Medicine. The team of researchers includes Jonathan Viventi, an assistant professor of biomedical engineering at Duke University; John Rogers, the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery at Northwestern University; and Bijan Pesaran, a professor of neural science at NYU. 

Challenges Surrounding Sensors in the Brain

Viventi spoke about the difficulty of getting sensors to work in the brain. 

“Trying to get these sensors to work in the brain is like tossing your foldable, flexible smartphone in the ocean and expecting it to work for 70 years,” said Viventi. “Except we’re making devices that are much thinner and much more flexible than the phones currently on the market. That’s the challenge.”

There are many difficult challenges when it comes to introducing foreign objects into the brain. They have to be able to exist in a corrosive, salty environment, and the surrounding tissues and the immune system attack the object. 

The difficulty is increased even more when talking about electrical devices. Most long-term implantable devices are hermetically sealed with laser-welded titanium casings. 

“Building water-tight, bulk enclosures for such types of implants represents one level of engineering challenge,” Rogers said. “We’re reporting here the successful development of materials that provide similar levels of isolation, but with thin, flexible membranes that are one hundred times thinner than a sheet of paper.”

Because of the layout of the human brain, space and flexibility are extremely important. The human brain consists of tens of billions of neurons, but existing neural interfaces can only sample around a hundred sites. This specific challenge has led the team of researchers to develop new approaches. 

“You need to move the electronics to the sensors themselves and develop local intelligence that can handle multiple incoming signals,” said Viventi. “This is how digital cameras work. You can have tens of millions of pixels without tens of millions of wires because many pixels share the same data channels.”

The researchers were able to come up with flexible neural devices that are 25 micrometers thick, consisting of 360 electrodes. 

“We tried a bunch of strategies before. Depositing polymers as thin as is required resulted in defects that caused them to fail, and thicker polymers didn’t have the flexibility that was required,” said Viventi. “But we finally found a strategy that outlasts them all and have now made it work in the brain.”

Layer of Silicon Dioxide

The paper demonstrates how a thermally grown layer of silicon dioxide less than a micrometer thick can shield the device from the harsh environment within the brain. The layer degrades at a rate of 0.46 nanometers per day, but the small amounts that dissolve into the body do not create any problems. 
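
As a quick back-of-the-envelope check, assuming a nominal layer thickness of about one micrometer, that dissolution rate is consistent with the multi-year device lifetime mentioned above:

```python
# Back-of-the-envelope lifetime of the silicon dioxide barrier, assuming a
# nominal thickness of ~1 micrometer (1,000 nm) and the reported 0.46 nm/day.
thickness_nm = 1_000        # assumed nominal layer thickness
rate_nm_per_day = 0.46      # dissolution rate reported in the paper

days = thickness_nm / rate_nm_per_day
print(f"~{days:.0f} days, i.e. ~{days / 365:.1f} years")   # ~2174 days, roughly 6 years
```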

The researchers also demonstrated how the electrodes within the device can use capacitive sensing to detect neural activity. 

The new developments are just one of the beginning steps to furthering this technology. The team is now working on increasing the prototype from 1,000 electrodes to over 65,000. 

“One of our goals is to create a new type of visual prosthetic that interacts directly with the brain that can restore at least some sight capacity for people with damaged optic nerves,” said Viventi. “But we can also use these types of devices to control other types of prosthetics or in a wide range of neuroscience research projects.”

 

Brain-Computer Interface Technology Restores Sensation to Hand of Individual with Spinal Cord Injury

A team of researchers at Battelle and the Ohio State University Wexner Medical Center report that through the use of Brain-Computer Interface technology, they successfully restored sensation to the hand of an individual with a severe spinal cord injury. This development comes as researchers all around the world are working on technology that is capable of restoring limb function to individuals paralyzed through injury or disease. 

The research was published in the journal Cell on April 23. 

Artificial Sensory Feedback

The new technology relies on minuscule neural signals that are too small to perceive; these signals are enhanced and sent back to the individual as artificial sensory feedback. This method results in a great increase in motor function. 

Patrick Ganzer is first author and a principal research scientist at Battelle. 

“We’re taking subperceptual touch events and boosting them into conscious perception,” says Ganzer. “When we did this, we saw several functional improvements. It was a big eureka moment when we first restored the participant’s sense of touch.”

The Participant

The participant was a 28-year-old man who was involved in a driving accident in 2010, which resulted in a severe spinal cord injury. The participant, whose name is Ian Burkhart, has been involved with a project named NeuroLife since 2014 in order to restore function to his right arm. 

The newly developed device uses a system of electrodes placed on the skin and a small computer chip implanted in the motor cortex. Wires route movement signals from the brain to the muscles, allowing the spinal cord injury to be bypassed. With the device, Burkhart is able to control his arm and complete actions such as lifting a coffee mug, swiping a credit card, and playing video games that require the use of the hands and arms. 

“Until now, at times Ian has felt like his hand was foreign due to lack of sensory feedback,” Ganzer says. “He also has trouble with controlling his hand unless he is watching his movements closely. This requires a lot of concentration and makes simple multitasking like drinking a soda while watching TV almost impossible.”

Whenever the researchers stimulated his skin, there was no sensation, but a neural signal was still present in his brain. The problem was that the neural signal was so small that it could not be perceived. The researchers were able to boost the signal so that the brain was capable of responding. 

Through the use of haptic feedback, the subperceptual touch signals were artificially sent back to Burkhart, which allowed him to perceive them. 
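
Purely as a hypothetical illustration of the general idea (not Battelle’s actual decoding pipeline), the sketch below rescales the amplitude of a weak touch-related signal into a 0–1 haptic intensity once it clears an assumed noise floor; all names and values are made up.

```python
# Purely hypothetical illustration: rescale a weak, subperceptual touch signal
# into a 0-1 haptic intensity once it clears an estimated noise floor.
# This is not Battelle's actual system.
def haptic_intensity(signal_amplitude: float,
                     noise_floor: float,
                     full_scale: float) -> float:
    """Map amplitudes in (noise_floor, full_scale] linearly to (0, 1]."""
    if signal_amplitude <= noise_floor:
        return 0.0
    return min(1.0, (signal_amplitude - noise_floor) / (full_scale - noise_floor))

# Placeholder amplitudes in arbitrary units.
for amp in (0.8, 1.5, 3.0):
    print(amp, "->", round(haptic_intensity(amp, noise_floor=1.0, full_scale=3.0), 2))
```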

With the new method, Burkhart was able to detect things solely through touch. Another breakthrough was that the system is the first BCI to allow restoration of movement and touch at once, which provides a greater sense of control. Lastly, the method allows the BCI system to sense the right amount of pressure for handling an object. 

The team of researchers wants to create a BCI system that can be used in the home. A next-generation sleeve is currently being developed, containing electrodes and sensors that can be put on and taken off. They are also working on a system capable of being controlled by a tablet instead of a computer. 

“It has been amazing to see the possibilities of sensory information coming from a device that was originally created to only allow me to control my hand in a one-way direction,” Burkhart says.

 
