Bruno Zamborlin, CEO and Chief Scientist at HyperSurfaces – Interview Series
Bruno Zamborlin, PhD is an Italian AI researcher and entrepreneur based in London, UK.
A visiting researcher at Goldsmiths, University of London, Bruno pioneered the concept of transforming physical objects into touch-sensitive, interactive surfaces using vibration sensors and Artificial Intelligence.
He is the founder of Mogees Limited, a London- and Los Angeles-based start-up whose products enable users to transform everyday objects into musical instruments and games using a vibration sensor and a mobile phone (more than 100,000 units sold worldwide).
He recently founded HyperSurfaces, a technology platform which converts objects of any material, shape and form into data-enabled, interactive surfaces using just a vibration sensor and a coin-sized chipset.
Your journey as an entrepreneur evolved from your passion for music. Could you share the story of how you came up with the concept of your first startup, Mogees?
I’ve always been passionate about the idea of creating technologies that are site-specific, capable of leveraging and altering the environments around us rather than creating something from scratch. I often do so using ‘Interactive Machine Learning’, a branch of AI that focuses on enabling end users to program the algorithms themselves as they please, instead of using AI as a black box with hardcoded rules. It has been the common thread in most of my work.
Mogees aims to democratise this process for sound creation. It effectively enables anyone to alter the acoustic properties of the physical objects around us to make them musical. It consists of a tiny vibration sensor, which you place on the object you want to play, and a smartphone app, which turns the vibrations into musical sound. The app lets users alter the acoustic parameters of the physical object they play, as well as recognise specific gestures. Anyone from professional performers to primary school kids can reprogram the world around them to make it sound as they please.
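As an illustrative sketch only (not Mogees’ actual algorithm), the first step in turning an object into an instrument is detecting when, and how hard, it was struck. A minimal threshold-based onset detector over a vibration signal might look like this:

```python
import numpy as np

def detect_onsets(signal, sample_rate, threshold=0.2, min_gap_s=0.05):
    """Find moments where the vibration amplitude jumps above a threshold.

    A toy stand-in for the onset detection a contact-sensor instrument
    needs before it can trigger a sound. Returns (time, intensity) pairs;
    min_gap_s suppresses duplicate triggers from one physical hit.
    """
    envelope = np.abs(signal)
    min_gap = int(min_gap_s * sample_rate)
    onsets, last = [], -min_gap
    for i, amp in enumerate(envelope):
        if amp >= threshold and i - last >= min_gap:
            onsets.append((i / sample_rate, float(amp)))
            last = i
    return onsets

# Synthetic vibration: two taps on a silent background
sr = 1000
sig = np.zeros(sr)
sig[100], sig[600] = 0.8, 0.5
print(detect_onsets(sig, sr))  # → [(0.1, 0.8), (0.6, 0.5)]
```

Each detected onset could then trigger a synthesiser note, with the intensity mapped to loudness.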
Can you discuss the genesis story behind your second startup, HyperSurfaces?
Technology has revolutionized many aspects of our lives, from the way we communicate with each other to the way we shop, drive, study and so on.
The physical world around us, however, hasn’t really evolved at the same pace. Think about your kitchen table, a school classroom, a park; they are still pretty much the same as they were 30 years ago.
HyperSurfaces is a technology that can transform any surface, made of any rigid material, of any shape and size, into a data-enabled HyperSurface, capable of understanding any event that happens on its surface and reacting to it accordingly, at the right time. It does so thanks to a tiny vibration sensor and a chipset on which an AI algorithm runs locally. Think about surfaces that are aware of when they are touched, swiped, moved or hit, when a liquid is dropped onto them, and so on, and that react to such events accordingly.
What’s more, with our cloud platform users can program such surfaces themselves in less than an hour, without having to write a single line of code. HyperSurfaces is in some ways a natural extension of Mogees. There are commonalities, namely Interactive Machine Learning and vibrations, although these are taken to the next level.
Why is it so important to augment different surfaces to respond to human gestures?
If you think about the way our bodies are designed, it is really unnatural to spend all day in front of a touchscreen or a keyboard, communicating with technology using just our fingers. Imagine if technology could be spread ubiquitously around us to simplify our interactions with the real world. Imagine a floor that knows when someone falls and gets the voice assistant to ask whether to call for help, or a window that knows when someone breaks in, or a kitchen that tracks your cooking actions (placing pots, mixing, water boiling, etc.) and controls appliances accordingly. Or the city of the future, capable of monitoring vehicles and pedestrians without having to deploy invasive cameras and microphones. Now scale this to an entire forest, with solar-powered hyper-trees capable of communicating when there is a fire or poaching. Or much more artistic applications, like trees in a park that light up differently based on how many people hugged them that day. The examples are endless.
Could you elaborate on the machine learning technology that is used to instantly interpret vibrational patterns such as human gestures and convert them into any digital command?
We use standard, inexpensive hardware: a coin-sized vibration sensor and a chipset, which creators can place on or underneath the surface they want to augment. When an event happens on that surface, the corresponding vibration is captured and sent to the chipset, where our algorithm interprets it. If it corresponds to one of the events the algorithm has been pre-trained on, a corresponding message is generated. Such messages can be used either locally, for example when connected to the central system of a vehicle, a smart assistant, or an ad-hoc Raspberry Pi, or sent to the cloud.
Designers can use our cloud-based platform to define any number of events they want; the platform automatically generates the firmware, which is then loaded onto the chipset. Not a single line of code needs to be written.
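The pipeline described above, extracting features from a captured vibration and matching it against a set of designer-defined events, can be sketched as a toy nearest-centroid classifier. The event names and features below are assumptions for illustration, not HyperSurfaces’ actual model:

```python
import numpy as np

def features(vibration):
    """Crude handcrafted features: RMS energy, zero-crossing rate,
    and spectral centroid of the vibration window."""
    rms = float(np.sqrt(np.mean(vibration ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(vibration)))) / 2)
    spectrum = np.abs(np.fft.rfft(vibration))
    centroid = float(np.sum(spectrum * np.arange(len(spectrum)))
                     / (np.sum(spectrum) + 1e-9))
    return np.array([rms, zcr, centroid])

class NearestCentroid:
    """Tiny stand-in for a pre-trained on-chip event classifier."""
    def fit(self, examples):  # examples: {label: [vibration, ...]}
        self.centroids = {lbl: np.mean([features(v) for v in vs], axis=0)
                          for lbl, vs in examples.items()}
        return self
    def classify(self, vibration):
        f = features(vibration)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(f - self.centroids[lbl]))

# Synthetic training data: a "tap" is a sharp impulse, a "swipe" a slow rub
N = 256
def impulse(pos):
    v = np.zeros(N); v[pos] = 1.0; return v
def rub(freq):
    return 0.3 * np.sin(2 * np.pi * freq * np.arange(N) / N)

model = NearestCentroid().fit({"tap": [impulse(50), impulse(120)],
                               "swipe": [rub(3), rub(5)]})
print(model.classify(impulse(200)))  # → tap
```

When a classified event is recognised, the resulting label would be emitted as a message to the host system or the cloud, as described in the answer above.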
What are some of the use cases for this technology in the retail space?
Despite the current situation, there is a lot of attention on the future of retail right now. It needs to be somehow different from e-commerce, capable of offering a real experience.
Designers have been experimenting with HyperSurfaces to create interactive products that display digital content according to how visitors interact with a product, offering a physical and a digital experience at the same time.
HyperSurfaces could be used in various applications. What are some of the applications that you personally believe have a high chance of becoming popular?
Given the current situation, there is a lot of attention on smart-home applications. If those are camera-free and microphone-free, and data are processed locally, even better.
But there are many other applications we will see very soon.
What type of data can be collected from HyperSurfaces?
Quite a lot. We can divide it into three broad categories: human interactions (like touching something), acoustic events (like water boiling) and fault prevention (like detecting the sound of an engine before it breaks). For each of these categories, HyperSurfaces can detect multiple properties at the same time, from the intensity of the event to the type of material used to provoke the interaction, and much more.
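A minimal sketch of what such an event message might look like as a data structure, with the three categories above. The field names and JSON schema here are hypothetical, not HyperSurfaces’ actual format:

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json
import time

class Category(Enum):
    HUMAN_INTERACTION = "human_interaction"  # e.g. touch, swipe
    ACOUSTIC_EVENT = "acoustic_event"        # e.g. water boiling
    FAULT_PREVENTION = "fault_prevention"    # e.g. engine wear

@dataclass
class SurfaceEvent:
    category: Category
    label: str         # e.g. "touch", "water_boiling", "bearing_wear"
    intensity: float   # normalised 0..1
    timestamp: float   # seconds since epoch

    def to_json(self):
        d = asdict(self)
        d["category"] = self.category.value  # Enum -> plain string
        return json.dumps(d)

evt = SurfaceEvent(Category.HUMAN_INTERACTION, "touch", 0.42, time.time())
print(evt.to_json())
```

A message like this could be consumed locally by a host device or forwarded to the cloud.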
How can AI models use this data to detect specific events as they happen in real-time?
We developed a real ‘universe’ of vibrational events recorded across hundreds of surfaces of all kinds. Vibrations are an incredibly rich type of information to work with.
When using our platform, users need to record only a very small number of observations for each event they want to define, because our AI can extract so much information from them thanks to this ‘universe’ we have built.
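This few-shot approach, classifying new events against prototypes computed in an embedding space learned from a large corpus, can be sketched as follows. Here a fixed random projection stands in for a model pre-trained on the vibration ‘universe’; everything below is purely illustrative, not HyperSurfaces’ actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "embedding" standing in for a network pre-trained on a large
# corpus of vibration recordings (a random projection, for illustration).
PROJECTION = rng.normal(size=(256, 16))

def embed(vibration):
    v = vibration @ PROJECTION
    return v / (np.linalg.norm(v) + 1e-9)  # unit-length embedding

def make_prototypes(support):  # support: {label: [vibration, ...]}
    """Average the few user-recorded examples per event into a prototype."""
    return {lbl: np.mean([embed(v) for v in vs], axis=0)
            for lbl, vs in support.items()}

def classify(vibration, prototypes):
    """Pick the event whose prototype is most similar (cosine)."""
    e = embed(vibration)
    return max(prototypes, key=lambda lbl: float(e @ prototypes[lbl]))

# Only three noisy observations per event -- the "few recordings" a user makes
sine = lambda f: np.sin(2 * np.pi * f * np.arange(256) / 256)
support = {
    "knock":   [sine(4)  + 0.05 * rng.normal(size=256) for _ in range(3)],
    "scratch": [sine(40) + 0.05 * rng.normal(size=256) for _ in range(3)],
}
protos = make_prototypes(support)
print(classify(0.8 * sine(4) + 0.05 * rng.normal(size=256), protos))
```

Because the embedding already separates vibration types, a handful of examples per event is enough to place a usable prototype, which is the point being made above.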
Finally, could you tell us why you chose to study computer science?
I have been fascinated by math since I was 4. For me it’s the main tool for describing the world we live in, free from the biases that language entails. When I bought my first computer, I learnt that through Computer Science it was possible to use math to modify that world in different ways. I see sensors as a pair of eyes that capture what’s in the world, and algorithms as a brush that extracts something new from it.
Thank you for the great interview, readers who wish to learn more should visit HyperSurfaces.