Chris Aimone co-founded Muse with an ethos to create technology that expands our perspective of ourselves and the world around us.
An artist and inventor at heart, Chris’ creative and design practice has spanned many fields, including architecture, augmented reality, computer vision, music and robotics. Looking to bring innovative new experiences to life, Chris has built installations for the Ontario Science Centre and contributed to major technology art projects featured around the world (including Burning Man).
Can you share with us how your love of Robotics and Brain-Machine Interfaces (BMI) began?
When I was very young, instead of playing with popular children’s toys, I was interested in tools – so much so that my favorite book at 18 months was actually a catalogue of tools, and at 3 I wanted a sewing machine for Christmas.
I was interested in what tools could do – how they could extend my reach into the impossible, and my love for robotics and BMI was simply an extension of that. I was so curious about what lay just beyond the limits of my body’s capabilities, just beyond the range of my senses. It makes a lot of sense in a way, as I believe we humans love to figure things out whether it’s through our senses or through applying our knowledge and our tools together to explore and make sense of our experiences.
I didn’t start building robots or BMIs until much later; I’m pretty sure it was just a question of access. Computers weren’t so affordable (or approachable) in the 80s. I learned to program on a Commodore 64, but I didn’t want my creations to only live in a computer. I learned to wire things into the parallel port, but it was frustrating and tedious. There was no Arduino, no Raspberry Pi, no next-day deliveries from Digi-Key.
The coolest thing I built back then was a mask with some computer-controlled flashing lights that I could pulsate into my eyes at different frequencies. I had noticed that my perception got a little weird looking at flickering LEDs in my tinkering, so I was curious about what would happen if I affected my entire vision that way. Clearly I had a latent interest in consciousness and the brain-machine interface. I’m really curious about what I might have built if I had access to Muse or other hackable technologies of today back then!
What were some of the first robots that you worked on?
I built a really cool wall-climbing robot with a couple of friends. It had four vacuum cups for hands and a big vacuum belly. The only use we could think of for it was autonomous window cleaning. It was a super fun project enabled by the kindness of automation vendors who gave us parts when we cold-called them with a crazy idea… but it actually worked! The project also taught us a lot about electromagnetic interference and the strength of the drywall in the house.
Following that, I built a painting robot one summer that painted on a huge 6×8 wall canvas using a brush mounted to a mutant Commodore 64 printer. It was a monstrosity that used every bit of tech junk I could find, including a barbecue tank, computer mice and my old rollerblades. It had a webcam from the mid-90s and attempted to draw what it saw. It was so ridiculous… I still miss its patient, humorous personality.
When I was doing my master’s, I built a similarly whimsical robot with some friends that was the size of a house. We were interested in what would happen if a building changed shape and personality in response to the people who were in it. It was super cool… and the building felt alive! It moved and made noise. You became so aware of yourself, it felt like being in an empty cathedral.
For over a decade you essentially became a cyborg. Can you share your story of how this journey began?
By the time I finished my undergraduate degree, computers had become pretty capable. I could afford a computer that could do simple processing of video at 15 frames per second, and Linux was almost installable by the uninitiated. I loved the memory and speed of computers, and it led me to ask: What if I had similar abilities?
I met this professor at UofT named Steve Mann who was a wild inventor, and still a member of the InteraXon advisory board today. He walked around with a computer on his head and sent laser images into his eyes. It was exactly what I was looking for! If you love tools, what better thing to do than encrust yourself with them?
Steve and I started working a lot together. We were both interested in extending our overall perception. We worked a lot with computer vision and built very early augmented reality devices. In many ways, they still amaze me more than the AR that’s available today. Steve had invented a way of creating perfect optical alignment between computer graphics and your natural view of the world. This allowed us to do beautiful things like melding information from a far-infrared camera seamlessly into your vision. Walking around and being able to see heat is really interesting.
You scaled back your cyborg ambitions, as it caused you to distance yourself from others. Could you share some details about this transition in your mindset?
I had imagined a deep and seamless integration with computing technology: Information always available, instant communication, AI assistants, and extended-sensory abilities. I really believed in technology always being there so I could have it when needed.
Things changed for me when I started broadcasting images to a website. A local telecom company donated a bunch of mobile phones with serial data connections to our lab at the university. We could slowly upload images, about one every few seconds at low fidelity. We started a challenge to see who could stream the most. It was a super interesting experiment. I wore computers for months streaming my life to the internet, making sure to post every few seconds whenever I was doing something interesting — living my life through a camera view.
The truth is, it was exciting to feel like I wasn’t alone, posting to an imagined audience. Sound familiar? We all got a taste of present-day social media, 20 years ago. And what did I learn?
Being stuck in a computer, trying to connect with others by broadcasting a virtual life, kept me from being present with others… and I found myself feeling more alone than ever. Woah.
I walked around in constant information overload, with a computer terminal in front of my face signalling every time an email came in; whenever an image was uploaded, a text web browser would open with something I was researching. It was a lot.
Though I was interested in computers helping me solve problems, I began to experience less freedom of thought. I felt constantly interrupted, being triggered by what was bubbling up through cyberspace. I discovered the challenge of staying in touch with who you are and the loss of ability to tune into your spark of creativity when you are always in a state of information overload.
I was interested in technology that made me feel expansive, creative, and unfettered, but somehow, I painted myself into a corner with much of the opposite.
You ran a really remarkable societal experiment in which users across Canada could use their minds to control lights on the CN Tower and at Niagara Falls. Could you describe this?
This was a special opportunity we had early on in the journey of Muse, at the 2010 Winter Olympics, in an effort to connect the various parts of Canada to the global event.
While the mechanism isn’t yet fully understood, we know that our brainwaves synchronize in interesting ways, especially when we interact closely – when we communicate with each other, dance, or make music together. What happens when you project the brain activity of an individual in a way that can be experienced by many?
We created an experience where people attending the games on the west coast of Canada could affect the experience of thousands of people 3000 miles away. By wearing a brain-sensing device, participants connected their consciousness to a huge real-time lighting display that illuminated Niagara Falls, downtown Toronto via the CN Tower, and the Canadian parliament buildings in Ottawa.
You sat in front of a huge screen with a real-time view of the light displays, so you could see the live effect of your mind in this larger-than-life experience. People would call up friends in Toronto and get them to watch as the patterns of activity in their brain lit up the city with a dramatic play of light.
You’ve described Muse as a ‘happy accident’. Could you share the details behind this happy accident, and what you learned from the experience?
I often forget the beauty of tinkering, as building tech can be really tedious and you have to get rigid. But so much great stuff happens when you can break out the patch cables, plug a bunch of random stuff together and just see what happens… just like how Muse was created!
The first seed of Muse was planted when we wrote some code to connect to an old medical EEG system and streamed the data over a network. We had to find a computer chassis that supported ISA cards and we made a makeshift headband. We wanted to get EEG data feeding into our wearable computers. Could we upload images automatically when we saw something interesting? We had heard that when you closed your eyes your alpha brainwaves would become larger… could this be how we sense if we were interested in what we saw?
We hacked together some signal processing with some basic FFT spectral analysis and hooked up the result to a simple graphic that was like one of those vertical light dimmer sliders. Simple idea, but it was a pretty elaborate setup. What happened next was super interesting. We took turns wearing the device, closing and opening our eyes. Sure enough, the slider went up and down, but it would wander around in curious ways. When we closed our eyes it went up, but not all the way up and still wandered around… What was happening?
We spent hours playing with it, trying to understand what made it wander and whether we could control it. We hooked the output to an audible sound so we could hear it go up and down while we had our eyes closed. I remember sitting there for ages, eyes closed, exploring my consciousness and the sound.
I soon discovered I could focus my consciousness in different ways, changing the sound, but also changing my experience, my perception and the way I felt. We invited other people into the lab and the same thing happened to them. They would close their eyes and go into a deep inner exploration (sounds kind of like meditation doesn’t it?!). It was wild – we completely forgot about our original idea as this was so much more interesting. That was the happy accident – I can say I discovered meditation and mindfulness through technology, by accident!
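The eyes-closed alpha detection described above can be sketched in a few lines. This is a hypothetical illustration only – the sampling rate, band limits, and synthetic signals are my assumptions, not InteraXon's actual pipeline:

```python
import numpy as np

FS = 256  # assumed EEG sampling rate in Hz (Muse's actual rate may differ)

def alpha_fraction(eeg_window):
    """Return the fraction of 1-40 Hz spectral power that falls in the
    alpha band (8-12 Hz) for a 1-D EEG window, via a basic FFT."""
    windowed = eeg_window * np.hanning(len(eeg_window))  # taper edges
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    alpha = power[(freqs >= 8) & (freqs <= 12)].sum()
    total = power[(freqs >= 1) & (freqs <= 40)].sum()
    return alpha / total if total > 0 else 0.0

# Synthetic demo: "eyes closed" = strong 10 Hz rhythm plus noise,
# "eyes open" = noise alone. Real EEG is far messier than this.
t = np.arange(FS * 2) / FS
rng = np.random.default_rng(0)
eyes_closed = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))
eyes_open = 0.3 * rng.standard_normal(len(t))
print(alpha_fraction(eyes_closed), alpha_fraction(eyes_open))
```

Driving a "slider" graphic or an audio tone from `alpha_fraction` on a sliding window is then just a matter of mapping the value to a position or pitch; the wandering the team observed corresponds to the fraction fluctuating from moment to moment rather than sitting at a fixed level.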
Can you explain some of the technology that enables Muse to detect brainwaves?
The brain has billions of neurons, and each individual neuron connects (on average) to thousands of others. Communication happens between them through small electrical currents that travel along the neurons and throughout enormous networks of brain circuits. When all these neurons are activated they produce electrical pulses – visualize a wave rippling through the crowd at a sports arena – this synchronized electrical activity results in a “brainwave”.
When many neurons interact in this way at the same time, this activity is strong enough to be detected even outside the brain. By placing electrodes on the scalp, this activity can be amplified, analyzed, and visualized. This is electroencephalography, or EEG – a fancy word that just means electric brain graph. (Encephalon, the brain, is derived from the ancient Greek “enképhalos,” meaning within the head.)
Muse has been tested and validated against EEG systems that are orders of magnitude more expensive, and it’s used by neuroscientists around the world in real-world neuroscience research inside and outside the lab. Using 7 finely calibrated sensors – 2 on the forehead and 2 behind the ears, plus 3 reference sensors – Muse is a state-of-the-art EEG system that uses advanced algorithms to train beginner and intermediate meditators to control their focus. It teaches users how to shift their brain states and how to change the characteristics of their brains.
The Muse algorithm technology is more complex than traditional neurofeedback. In creating the Muse app, we started from these brainwaves and then spent years doing intensive research on higher-order combinations of primary, secondary and tertiary characteristics of raw EEG data and how they interact with focused-attention meditation.
What are some of the noticeable meditative or mental improvements that you have personally noticed from using Muse?
My attention is more agile and it’s stronger. It sounds simple, but I know how to relax. I understand my emotions better and I’m more in tune with others. It’s truly life changing.
Outside of people that meditate, what other segments of the population are avid users of Muse?
There are a lot of biohackers and scientists – some of whom have done some really awesome things. Prof. Krigolson from UVic has been using Muse in the Mars habitat, and he’s done experiments on Mount Everest with the monks who live in the monasteries on the mountain. There are also some awesome folks at the MIT Media Lab who are using Muse while sleeping to affect dreams. So cool.
Is there anything else that you would like to share about Muse?
Entering the world of sleep with our latest product release Muse S has been infinitely interesting from a product and research perspective, and very exciting when it comes to the positive applications Muse can have for so many people who are looking to get a better night’s sleep.
Also, I personally love how Muse can render your brain activity as sound. From years of studying biosignals, something I’ve never grown tired of is the beauty in these waves that flow within us. Like the waves of the ocean, they are infinitely complex, yet simple and familiar. I love that we are beautiful inside, and I love the challenge of bringing that out and celebrating it as sound and music.
Thank you for the great interview, I look forward to getting my hands on the Muse, anyone who wishes to learn more or to order a unit should visit the Muse website.