Afshin Mehin, Founder of Card79 – Interview Series


Afshin Mehin is the founder of Card79 (previously known as WOKE), a creative studio specializing in product experiences that blur the boundaries between our digital and physical lives. Card79 had the privilege of partnering with Elon Musk to design Neuralink – the world’s first brain wearable device. The studio designed the Link, the part of the system that a person would wear daily.

You started your studies as an engineer. How did you pivot your career towards designing for future tech?

Design was always on my radar. As a teenager I discovered the field of industrial design as a possible career and thought it could be a good fit for me, since I loved creating new products and solutions for everyday problems. But as is the case with many first-generation immigrant families, design was not a familiar career path. So I did the next best thing and completed my bachelor’s degree in Mechanical Engineering at the University of British Columbia in Vancouver. That education ended up being one of the best things I did, since it gave me an appreciation of the hard problems that need to get solved for new technical advancements to be brought into the world. After I completed my engineering studies, I reoriented myself back to my passion for design, the side of me that was more interested in the human experience of that technology, and pursued further education in Human Computer Interaction and Industrial Design Engineering, completing a master’s at the Royal College of Art in London and an internship at M.I.T. Media Lab Europe in Dublin. After my education was complete, I moved to San Francisco and began working for different design companies such as IDEO and Whipsaw.

You were approached by the team at Neuralink in 2019 to bring design to their brain-machine interface. Could you discuss this initial engagement?

We got a call from the President of Neuralink. We'd worked on head-worn wearables before, so we were comfortable with the challenges of designing something that could be worn on the head. What we didn’t expect was that we would also be designing something that went inside the head as well. This was the first time we had worked on a project where we’d be sitting in a room with electrical and mechanical engineers, along with neurosurgeons and neural engineers who could explain how to operate and interface with the brain. We not only worked on defining the form factor – something discreet, so as not to attract unwanted attention – but also discussed possible locations of the wearable and implantable with the Neuralink team. We eventually designed a wearable device that would be worn behind the ear and would transfer data and power to a wireless receiver that would be implanted underneath the scalp behind the ear as well. The wearable was designed to be easily hot-swapped, since battery life for the first generation was estimated to be not much more than a couple of hours. Our second engagement was to help develop the outer enclosure design (the Industrial Design) for the surgical robot, to get it ready for use in clinical trials. After these two engagements, our curiosity was sparked around what the potential user experience of a BMI could be. The idea of using our thoughts to control things was such a new and exciting concept that we wanted to explore it further.

What are the different components of the Neuralink that were designed by Card79?

At our core, we are a design studio and our expertise and value is in understanding how to create a desirable and appealing product. This is sometimes achieved by making a product more visually appealing, other times by making a product easier to use, and other times by exposing more capabilities. With our work for Neuralink, we helped with two of the main devices: the first-generation Link wearable as well as the R1 Neuralink Surgical Robot. Our contribution on both projects was to understand how to make the product as suitable for its human context as possible. For the Link, it was important to solve problems around ergonomics in order to make sure that the device fit different people’s heads and was comfortable and discreet to wear. For the R1 robot, it was critical that the robot could be easily maintained in the operating room and was safe for staff and surgeons to work with.

Could you describe the approach to designing a user experience for a Brain Computer Interface?

There are two user experiences that will be important to consider. Firstly, there's the physical user experience – how easily the technology can be maintained, recharged, and upgraded as an augmentation of our bodies.

Then there's the digital user experience, which we break into two different camps.

The first camp is the UX that is driven by the present state of the art. This involves understanding the technical capabilities of the sensing technology, the model training, the variation in neuroanatomy and psychology that impacts the robustness of the BMI experience, and the function or use case that is intended to be addressed. Depending on whether the UX is for research purposes or for a shipping product, the priorities would shift. Also, if it’s an invasive BMI, the complexity of the surgery and the limited access to those patients make it trickier to carry out user testing to validate the proposed UX.

The second camp is designing user experiences for BMIs that aren’t technically possible yet but could have huge societal implications if achieved. We attempt to follow the science up to where we are and then start to make educated guesses about what feel like potentially amazing or disastrous applications that could arise if the high-speed, high-bandwidth future scenario comes to fruition. We hope that by continuing to chip away at these future UX scenarios we will be armed with design proposals if and when that future arrives.

What are some technical challenges behind designing for a brain-machine interface?

So, there are a lot of challenges. Getting a good signal is one of the hardest things. To get a really high signal-to-noise ratio you need to get invasive with the sensing technologies. There are lots of great noninvasive technologies that are safer and less risky to use, but they suffer from a lack of signal quality. Without a good signal, it’s like talking to Alexa through a muffled microphone or trying to use a mouse with a broken laser that jumps erratically: it’s just not reading you at the level of detail you want.

The other challenge from a UX standpoint is the neuroanatomical and psychological variation over time within an individual and across individuals. That basically means that every time the same user or a new user wants to start using a BMI, they need to go through a calibration session, which in itself is often frustrating and demotivating for users. There are UX opportunities to simplify and streamline that calibration process, but the long-term hope is that how much and how often a system needs to be calibrated can be reduced.
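To make that calibration workflow concrete, here is a minimal illustrative sketch, not Neuralink's or any specific system's code: it assumes a simplified setup where a short block of labeled trials (simulated here) is collected at the start of each session, a linear decoder is retrained on those trials, and the UX only hands over live control if the estimated accuracy clears a threshold. The feature layout, class labels, trial counts, and threshold are all invented for illustration.

```python
# Minimal calibration sketch for a motor-imagery style BMI decoder.
# All numbers, features, and thresholds are illustrative placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def collect_calibration_trials(n_trials=40, n_features=16):
    """Stand-in for a cued calibration block: the user imagines 'left' or 'right'
    while features are recorded. Here we simply simulate two noisy classes."""
    labels = np.repeat([0, 1], n_trials // 2)        # balanced cues: 0 = left, 1 = right
    X = rng.normal(size=(n_trials, n_features))
    X[labels == 1, :4] += 0.8                        # weak, simulated class separation
    return X, labels

# A new session (or a new user) starts with a fresh calibration block.
X, y = collect_calibration_trials()

decoder = LinearDiscriminantAnalysis()
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"Estimated calibration accuracy: {accuracy:.0%}")

# Only deploy the decoder for live control if calibration quality is acceptable;
# otherwise the UX should guide the user through another (short) calibration block.
if accuracy >= 0.75:
    decoder.fit(X, y)    # decoder is ready for online use this session
else:
    print("Calibration below threshold - prompt the user to repeat the block.")
```

The UX opportunity described above lives around exactly this loop: how the cues are presented, how long the block takes, and how gracefully the system asks for a repeat when the numbers come back low.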

Also, with BCI systems driven by users' intentional motor imagery (MI), the way that you prompt a user to imagine the motor movement can impact the ability of the machine learning model to effectively decipher the intended movement. Great research published in 2021 by Frank Willett et al. prompted paralysis patients to imagine they were handwriting (as opposed to moving a cursor or typing keys on a keyboard). That input technique was able to outperform other previously tested techniques, partially because the task of handwriting was an easy one for users to imagine, and partially because the ML could effectively distinguish between different handwritten characters – very much like when the Palm Pilot first introduced the “Graffiti” handwriting language back in the 1990s.
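As a toy illustration of the general idea (and emphatically not the Willett et al. pipeline), the sketch below maps fixed-length windows of simulated neural features to characters with a simple classifier and stitches the predictions back into text. The tiny alphabet, window size, feature dimension, and classifier choice are all assumptions made for the example.

```python
# Toy sketch: decode imagined handwriting into text, one character window at a time.
# This is NOT the published method; it only shows the shape of the problem:
# per-character neural feature windows -> predicted letters -> reassembled text.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
CHARS = list("abcde")          # tiny illustrative alphabet
N_FEATURES = 64                # e.g. a flattened firing-rate window per character

# Simulate training data: each character gets its own noisy neural "template".
templates = {c: rng.normal(size=N_FEATURES) for c in CHARS}

def simulate_window(char, noise=0.8):
    """Simulated neural features recorded while the user imagines writing `char`."""
    return templates[char] + rng.normal(scale=noise, size=N_FEATURES)

X_train = np.array([simulate_window(c) for c in CHARS for _ in range(50)])
y_train = np.array([c for c in CHARS for _ in range(50)])

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Online" decoding: the user imagines writing a word, character by character.
intended = "badcab"
windows = np.array([simulate_window(c) for c in intended])
decoded = "".join(decoder.predict(windows))
print(f"intended: {intended}\ndecoded:  {decoded}")
```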

Could you describe how brain-machine interfaces will be able to use visual or other types of thinking modalities outside of simply thinking in words?

As UX designers working in this rapidly evolving field, we’re trying to follow the science closely to see where it takes us. When we’ve envisioned some of our future scenarios, we’ve tried to lean on research that is both near and long term. In the near term, there has been a lot of progress made developing BMIs that leverage intended motor imagery, where someone imagines they are moving an object in order to manipulate some form of technology. This modality allows for direct manipulation of objects with thoughts.

At a more ambitious level, the ability to control voice and create words that symbolize an object is one level of control more advanced. This research has been coming out of Edward Chang’s lab at UCSF and has started to inspire a lot of the types of interactions we were imagining, whether it’s a person being able to ask their AI assistant something via their thoughts or two people being able to converse back and forth with their thoughts.

The visual cortex is a more advanced system than voice or movement. Early research indicates that there is a high level of consistency in the way that the visual cortex functions between individuals. One paper published back in 2004 indicates that when the researchers showed the same visual input to different people, there was a “striking level of voxel-by-voxel synchronization between individuals”. There was also a project from researchers at Kyoto University who found that activity in higher-order brain regions could accurately predict the content of participants’ dreams. Supporting visual thinking has huge potential, enabling people to augment their power of imagination.

At the end of the day, which of these new inputs succeed will depend on the ease with which they can be learned, how robustly they work, and how much they benefit the end user, whether it’s by allowing people to do things they haven’t been able to do before or to do things faster than they’ve ever been able to do them.

Could you discuss how brain-machine interfaces will be able to understand a person’s emotional state?

Emotions can presently be captured with EEGs at a macroscopic level and categorized into the big emotional buckets such as anger, sadness, happiness, disgust and fear. There are two ways that we could see the emotional state of a person impacting future BMIs. Firstly, they could inspire actual features, informing a meditation app or informing a therapist of their client’s emotional history since their last appointment. Alternatively, because this information is more macroscopic and qualitative than other BMI controls that capture movement, language or visuals, it would make sense to use that data to change the “flavour” of an interface, adjusting a specific BMI to take the person’s emotions into account, similar to how “Night Mode” adjusts a screen’s brightness depending on the time of day.
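As a rough sketch of that “flavour” idea, the example below computes classic EEG band powers with Welch’s method and maps a coarse, made-up state estimate to a UI theme. The thresholds, the single-channel setup, and the beta/alpha heuristic are invented for illustration; real emotion decoding from EEG is far noisier and more involved than this.

```python
# Sketch: map coarse EEG band-power features to an interface "flavour".
# The heuristic and thresholds are illustrative placeholders, not a validated model.
import numpy as np
from scipy.signal import welch

FS = 256                              # sampling rate in Hz (illustrative)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg_channel, fs=FS):
    """Average power in each classic EEG band for one channel, via a Welch PSD."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def choose_ui_flavour(powers):
    """Toy rule: high beta relative to alpha -> 'stressed' -> serve a calmer theme."""
    if powers["beta"] > 1.5 * powers["alpha"]:
        return "calm-theme"           # e.g. muted colours, fewer notifications
    return "default-theme"

# Simulate 10 seconds of a single EEG channel (pure noise, just for the sketch).
rng = np.random.default_rng(2)
signal = rng.normal(size=FS * 10)
powers = band_powers(signal)
print(powers, "->", choose_ui_flavour(powers))
```

The point is that this kind of signal is probably better suited to slow, ambient adjustments of an interface than to precise moment-to-moment control.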

What are some of the use cases for brain-machine interfaces that most excite you?

I am first and foremost fascinated to learn more about how the brain actually works. It feels like we have a lot of different efforts to try to understand the brain’s inner workings but no holistic model yet. That’s why applying UX principles to this topic is so exciting for me! What comes out of that will ideally be a high-bandwidth, high-speed UX that actually improves people’s lives. The idea of accelerating what we do as a species sounds amazing and is what gets me super excited about this topic. On the flip side, having our humanity and our independence challenged is haunting and needs to be approached with the utmost vigilance.

What is your vision for the future of brain-machine interfaces?

One where people are benefitting from the technology and in control of it, but at the same time able to connect with others and with information in ways that we’re presently unable to imagine. The idea of being networked in a way that puts our humanity first. One of the risks that we are all aware of is the fear that our thoughts won’t be private anymore, or that we’ll all become walking zombies under mind control. With the way that Web 2.0 has had to compromise on people’s privacy in order to sustain itself, it’s no wonder people are skeptical! Despite the fact that the science is very far away from ever making that a reality, I want to play an active role in making sure it never heads in that direction. Knowing that there are so many stakeholders, from governments to venture capitalists, there’s no guarantee that it won’t head in a dark direction. That’s why, as a UX designer, I feel it’s SO critical to get in there early and start getting some stakes in the ground around what is in the best interests of the people who will actually be using this technology.

Thank you for the great interview, readers who wish to learn more should visit Card79 or Neuralink.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.