
Brain Machine Interface

Elon Musk, Neuralink, and Brain-Machine Interfaces


On Tuesday night, thousands of people watched Neuralink's live-streamed presentation on the internet. The three-hour event was the company's first public presentation, during which Musk and his team showed different aspects of the world-changing technology. A direct brain-AI connection is something scientists have pursued for years, but while it has always been strictly research, it will now be tested in humans.

Neuralink was founded in 2016 by Elon Musk with the hope of one day integrating humans and AI through implantable brain-machine interfaces (BMIs). The company has hired many high-profile neuroscientists and researchers, many of them recruited from universities. By July 2019, the company had raised $158 million in funding, much of it coming from Elon Musk himself, and it currently has about 90 employees.

The new Neuralink chip will collect signals from the brain through many thin wires. The company has produced what it calls a safe and extremely small interface that can be implanted into the brain, small enough that it should not cause damage or trauma. Before Neuralink, brain-computer interface research had already shown results, most notably paralyzed individuals moving robotic limbs with their minds, but those systems were complicated, involved bulky wiring, and had to be supervised by a scientist. Neuralink aims to make the technology safe, small, and usable without supervision.

The processor is a very small computer chip that takes the noisy electrical activity of neurons and turns it into clear digital signals. Because the chip has only one job, it is very efficient and uses a small amount of energy; there is no need to change batteries or anything of the kind, and it can last a long time. According to Andrew Hires, assistant professor of biological sciences at the University of Southern California, Neuralink “has taken a bunch of cutting-edge stuff and put it together.”
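To make that signal-processing step concrete, here is a minimal sketch of the kind of pipeline such a chip performs: bandpass filtering into the spike band followed by threshold-based spike detection. The sampling rate, band edges, and threshold below are illustrative assumptions; Neuralink has not published its on-chip algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_spikes(raw, fs=20000.0, low=300.0, high=3000.0, thresh_sd=4.5):
    """Toy spike detection: bandpass to the spike band, then threshold.

    raw: 1-D array of voltage samples from one electrode.
    fs: sampling rate in Hz (hypothetical).
    Returns indices of samples where a spike is detected.
    """
    # Bandpass filter to isolate the spike band (~300 Hz - 3 kHz).
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)

    # Estimate the noise level robustly (median absolute deviation).
    noise = np.median(np.abs(filtered)) / 0.6745
    threshold = -thresh_sd * noise  # spikes appear as negative deflections

    # A spike is a sample that crosses the threshold downward.
    crossings = np.where((filtered[1:] < threshold) & (filtered[:-1] >= threshold))[0]
    return crossings

# Example: 1 second of synthetic noise with a few injected "spikes".
rng = np.random.default_rng(0)
signal = rng.normal(0, 5e-6, 20000)
signal[[4000, 9000, 15000]] -= 60e-6
print(detect_spikes(signal))
```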

The company's scientists and engineers created electrodes made from extremely thin, flexible polymer threads, which will be implanted into the brain. In tests on rats, Neuralink was able to record from about 1,000 neurons, far more than is needed for tasks such as moving a cursor on a computer screen with your mind.

This new technology will be interfaced with the human brain by a state-of-the-art neurosurgery robot developed by Neuralink. The robot inserts the threads into the brain with high precision and maps out where all of the blood vessels are so that none of them is pierced, avoiding damage or trauma. It can implant six threads per minute.

One of the first areas where this technology will be tested is in paralyzed individuals. Neuralink's president, Max Hodak, wants to try the new technology with five paralyzed people, who will initially attempt to type on a computer with their minds. Experiments of this kind have been done before, but Neuralink does not intend to stop there: the goal is for individuals to eventually regain control of paralyzed limbs, and for people who are unable to speak to access the part of the brain responsible for speech.
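As a rough illustration of how "typing with your mind" or cursor control can work, the sketch below fits a simple linear decoder that maps recorded firing rates to cursor velocity. This is a generic textbook-style example, not Neuralink's decoder; the neuron count and calibration procedure are assumptions.

```python
import numpy as np

# Toy linear cursor decoder: map a vector of neural firing rates to a 2-D
# cursor velocity. Real BMI decoders (e.g. Kalman-filter based) are more
# sophisticated; this is only a generic illustration.

rng = np.random.default_rng(1)
n_neurons = 100

# Calibration data: firing rates recorded while the intended velocity is known.
intended_velocity = rng.normal(size=(500, 2))             # (samples, [vx, vy])
tuning = rng.normal(size=(n_neurons, 2))                   # each neuron's preferred direction
firing_rates = intended_velocity @ tuning.T + rng.normal(scale=0.5, size=(500, n_neurons))

# Fit the decoder with least squares: velocity ~= firing_rates @ W
W, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# At run time, each new bin of firing rates becomes a cursor step.
new_rates = rng.normal(size=(1, n_neurons))
vx, vy = (new_rates @ W)[0]
print(f"cursor step: dx={vx:.3f}, dy={vy:.3f}")
```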

In the presentation, Musk said that they want the technology to be controlled by an app on your smartphone. This was a big point for the company: if someone had to visit a lab full of scientists every time they used it, that would defeat one of the main purposes, which is giving people immediate access to brain-integrated AI.

According to Neuralink, the procedure will be nothing like the image most people have of brain surgery. There will be no clamps on the skull and no need to be put to sleep; the device can be implanted while the individual receives only a local anesthetic, avoiding heavy anesthesia and the complications and side effects that sometimes follow it. There will also be no need to shave the individual's hair, and the robot will work through only a small opening that is easily covered up.

After the long-awaited announcement that Elon Musk had alluded to, everyone can now see what Neuralink has achieved. The company is looking to test the new technology with paralyzed volunteers by the end of 2020, and Musk wants to open it up to the rest of the public after that.


Artificial General Intelligence

Noah Schwartz, Co-Founder & CEO of Quorum – Interview Series


Noah is an AI systems architect. Prior to founding Quorum, Noah spent 12 years in academic research, first at the University of Southern California and most recently at Northwestern as the Assistant Chair of Neurobiology. His work focused on information processing in the brain and he has translated his research into products in augmented reality, brain-computer interfaces, computer vision, and embedded robotics control systems.

Your interest in AI and robotics started as a little boy. How were you first introduced to these technologies?

The initial spark came from science fiction movies and a love for electronics. I remember watching the movie Tron as an 8-year-old, followed by Electric Dreams, Short Circuit, DARYL, War Games, and others over the next few years. Although it was presented through fiction, the very idea of artificial intelligence blew me away. And even though I was only 8 years old, I felt this immediate connection and an intense pull toward AI that has never diminished in the time since.

 

How did your passions for both evolve?

My interest in AI and robotics developed in parallel with a passion for the brain. My dad was a biology teacher and would teach me about the body, how everything worked, and how it was all connected. Looking at AI and looking at the brain felt like the same problem to me – or at least, they had the same ultimate question, which was, How is that working? I was interested in both, but I didn’t get much exposure to AI or robotics in school. For that reason, I initially pursued AI on my own time and studied biology and psychology in school.

When I got to college, I discovered the Parallel Distributed Processing (PDP) books, which was huge for me. They were my first introduction to actual AI, which then led me back to the classics such as Hebb, Rosenblatt, and even McCulloch and Pitts. I started building neural networks based on neuroanatomy and what I learned from biology and psychology classes in school. After graduating, I worked as a computer network engineer, building complex, wide-area-networks, and writing software to automate and manage traffic flow on those networks – kind of like building large brains. The work reignited my passion for AI and motivated me to head to grad school to study AI and neuroscience, and the rest is history.

 

Prior to founding Quorum, you spent 12 years in academic research, first at the University of Southern California and most recently at Northwestern as the Assistant Chair of Neurobiology. At the time your work focused on information processing in the brain. Could you walk us through some of this research?

In a broad sense, my research was trying to understand the question: How does the brain do what it does using only what it has available? For starters, I don’t subscribe to the idea that the brain is a type of computer (in the von Neumann sense). I see it as a massive network that mostly performs stimulus-response and signal-encoding operations. Within that massive network there are clear patterns of connectivity between functionally specialized areas. As we zoom in, we see that neurons don’t care what signal they’re carrying or what part of the brain they’re in – they operate based on very predictable rules. So if we want to understand the function of these specialized areas, we need to ask a few questions: (1) As an input travels through the network, how does that input converge with other inputs to produce a decision? (2) How does the structure of those specialized areas form as a result of experience? And (3) how do they continue to change as we use our brains and learn over time? My research tried to address these questions using a mixture of experimental research combined with information theory and modeling and simulation – something that could enable us to build artificial decision systems and AI. In neurobiology terms, I studied neuroplasticity and microanatomy of specialized areas like the visual cortex.

 

You then translated your work into augmented reality, and brain-computer interfaces. What were some of the products you worked on?

Around 2008, I was working on a project that we would now call augmented reality, but back then, it was just a system for tracking and predicting eye movements, and then using those predictions to update something on the screen. To make the system work in realtime, I built a biologically-inspired model that predicted where the viewer would look based on their microsaccades – tiny eye movements that occur just before you move your eye. Using this model, I could predict where the viewer would look, then update the frame buffer in the graphics card while their eyes were still in motion. By the time their eyes reached that new location on the screen, the image was already updated. This ran on an ordinary desktop computer in 2008, without any lag. The tech was pretty amazing, but the project didn't get through to the next round of funding, so it died.
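For readers unfamiliar with gaze-contingent rendering, here is a highly simplified sketch of the general idea: detect that a rapid eye movement has begun and extrapolate where it will land so the display can be updated before the eye arrives. The velocity threshold, lead time, and linear extrapolation are placeholder assumptions and are much cruder than the biologically-inspired model described above.

```python
import numpy as np

# Toy gaze-contingent update loop: when eye velocity suggests a saccade has
# started, extrapolate the landing point and update the display ahead of the
# eye. Thresholds and the linear extrapolation are placeholder assumptions.

SACCADE_ONSET_DEG_PER_S = 30.0  # hypothetical velocity threshold

def predict_landing(prev_pos, curr_pos, dt, lead_time=0.05):
    """Linearly extrapolate gaze position `lead_time` seconds ahead."""
    velocity = (curr_pos - prev_pos) / dt
    if np.linalg.norm(velocity) < SACCADE_ONSET_DEG_PER_S:
        return None  # no saccade in progress
    return curr_pos + velocity * lead_time

def render_loop(gaze_samples, dt=0.002):
    """gaze_samples: array of (x, y) gaze positions sampled every dt seconds."""
    for prev_pos, curr_pos in zip(gaze_samples[:-1], gaze_samples[1:]):
        target = predict_landing(prev_pos, curr_pos, dt)
        if target is not None:
            # A real system would redraw the frame buffer around `target`
            # before the eye finishes moving; here we just report it.
            print(f"pre-render around ({target[0]:.1f}, {target[1]:.1f})")

render_loop(np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 1.0], [10.0, 2.0]]))
```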

In 2011, I made a more focused effort at product development and built a neural network that could perform feature discovery on streaming EEG data that we measured from the scalp. This is the core function of most brain-computer interface systems. The project was also an experiment in how small of a footprint could we get this running on? We had a headset that read a few channels of EEG data at 400Hz that were sent via Bluetooth to an Android phone for feature discovery and classification, then sent to an Arduino-powered controller that we retrofitted into an off-the-shelf RC car. When in use, an individual who was wearing the EEG headset could drive and steer the car by changing their thoughts from doing mental math to singing a song. The algorithm ran on the phone and created a personalized brain “fingerprint” for each user, enabling them to switch between a variety of robotic devices without having to retrain on each device. The tagline we came up with was “Brain Control Meets Plug-and-Play.”
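A minimal sketch of this kind of EEG pipeline appears below: compute bandpower features from a short multi-channel window and classify the user's mental state, which a controller could then map to a drive command. The two mental states (mental math vs. singing) and the 400 Hz sampling rate follow the description above, but the channel count, frequency bands, feature set, and classifier are assumptions rather than Quorum's actual algorithm.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

def bandpower_features(window, fs=400.0):
    """Average power in a few standard EEG bands for each channel.

    window: array of shape (n_channels, n_samples).
    Returns a flat feature vector of length n_channels * n_bands.
    """
    bands = [(4, 8), (8, 13), (13, 30)]        # theta, alpha, beta
    freqs, psd = welch(window, fs=fs, nperseg=min(256, window.shape[-1]))
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

# Train a simple classifier on labeled calibration windows
# (0 = mental math -> drive forward, 1 = singing -> turn).
rng = np.random.default_rng(2)
X = np.stack([bandpower_features(rng.normal(size=(4, 400))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = LogisticRegression().fit(X, y)

# Classify a new window and map the mental state to a drive command.
new_window = rng.normal(size=(4, 400))
command = "forward" if clf.predict([bandpower_features(new_window)])[0] == 0 else "turn"
print(command)
```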

In 2012, we extended the system so it operated in a much more distributed manner on smaller hardware. We used it to control a multi-segment, multi-joint robotic arm in which each segment was controlled by an independent processor that ran an embedded version of the AI. Instead of using a centralized controller to manipulate the arm, we allowed the segments to self-organize and reach their target in a swarm-like, distributed manner. In other words, like ants forming an ant bridge, the arm segments would cooperate to reach some target in space.

We continued moving in this same direction when we first launched Quorum AI – originally known as Quorum Robotics – back in 2013. We quickly realized that the system was awesome because of the algorithm and architecture, not the hardware, so in late 2014, we pivoted completely into software. Now, 8 years later, Quorum AI is coming full-circle, back to those robotics roots by applying our framework to the NASA Space Robotics Challenge.

 

Quitting your job as a professor to launch a start-up had to have been a difficult decision. What inspired you to do this?

It was a massive leap for me in a lot of ways, but once the opportunity came up and the path became clear, it was an easy decision. When you’re a professor, you think in multi-year timeframes and you work on very long-range research goals. Launching a start-up is the exact opposite of that. However, one thing that academic life and start-up life have in common is that both require you to learn and solve problems constantly. In a start-up, that could mean trying to re-engineer a solution to reduce product development risk or maybe studying a new vertical that could benefit from our tech. Working in AI is the closest thing to a “calling” as I’ve ever felt, so despite all the challenges and the ups and downs, I feel immensely lucky to be doing the work that I do.

 

You’ve since then developed Quorum AI, which develops realtime, distributed artificial intelligence for all devices and platforms. Could you elaborate on what exactly this AI platform does?

The platform is called the Environment for Virtual Agents (EVA), and it enables users to build, train, and deploy models using our Engram AI Engine. Engram is a flexible and portable wrapper that we built around our unsupervised learning algorithms. The algorithms are so efficient that they can learn in realtime, as the model is generating predictions. Because the algorithms are task-agnostic, there is no explicit input or output to the model, so predictions can be made in a Bayesian manner for any dimension without retraining and without suffering from catastrophic forgetting. The models are also transparent and decomposable, meaning they can be examined and broken apart into individual dimensions without losing what has been learned.

Once built, the models can be deployed through EVA to any type of platform, ranging from custom embedded hardware up to the cloud. EVA (and the embeddable host software) also contains several tools to extend the functionality of each model. A few quick examples: Models can be shared between systems through a publication/subscription system, enabling distributed systems to achieve federated learning over both time and space. Models can also be deployed as autonomous agents to perform arbitrary tasks, and because the model is task-agnostic, the task can be changed during runtime without retraining. Each individual agent can be extended with a private “virtual” EVA, enabling the agent to simulate models of other agents in a scale-free manner. Finally, we’ve created some wrappers for deep learning and reinforcement learning (Keras-based) systems to enable these models to operate on the platform, in concert with more flexible Engram-based systems.

 

You’ve previously described the Quorum AI algorithms as “mathematical poetry”. What did you mean by this?

When you’re building a model, whether you’re modeling the brain or you’re modeling sales data for your enterprise, you start by taking an inventory of your data, then you try out known classes of models to try and approximate the system. In essence, you are creating rough sketches of the system to see what looks best. You don’t expect things to fit the data very well, and there’s some trial and error as you test different hypotheses about how the system works, but with some finesse, you can capture the data pretty well.

As I was modeling neuroplasticity in the brain, I started with the usual approach of mapping out all the molecular pathways, transition states, and dynamics that I thought would matter. But I found that when I reduced the system to its most basic components and arranged those components in a particular way, the model got more and more accurate until it fit the data almost perfectly. It was like every operator and variable in the equations were exactly what they needed to be, there was nothing extra, and everything was essential to fitting the data.

When I plugged the model into larger and larger simulations, like visual system development or face recognition, for instance, it was able to form extremely complicated connectivity patterns that matched what we see in the brain. Because the model was mathematical, those brain patterns could be understood through mathematical analysis, giving new insight into what the brain is learning. Since then, we’ve solved and simplified the differential equations that make up the model, improving computational efficiency by multiple orders of magnitude. It may not be actual poetry, but it sure felt like it!

 

Quorum AI’s platform toolkit enables devices to connect to one another to learn and share data without needing to communicate through cloud-based servers. What are the advantages of doing it this way versus using the cloud?

We give users the option of putting their AI anywhere they want, without compromising the functionality of the AI. The status quo in AI development is that companies are usually forced to compromise security, privacy, or functionality because their only option is to use cloud-based AI services. If companies do try to build their own AI in-house, it often requires a lot of money and time, and the ROI is rarely worth the risk. If companies want to deploy AI to individual devices that are not cloud-connected, the project quickly becomes impossible. As a result, AI adoption becomes a fantasy.

Our platform makes AI accessible and affordable, giving companies a way to explore AI development and adoption without the technical or financial overhead. And moreover, our platform enables users to go from development to deployment in one seamless step.

Our platform also integrates with and extends the shelf-life of other “legacy” models like deep learning or reinforcement learning, helping companies repurpose and integrate existing systems into newer applications. Similarly, because our algorithms and architectures are unique, our models are not black boxes, so anything that the system learns can be explored and interpreted by humans, and then extended to other areas of business.

 

It’s believed by some that Distributed Artificial Intelligence (DAI), could lead the way to Artificial General Intelligence (AGI). Do you subscribe to this theory?

I do, and not just because that’s the path we’ve set out for ourselves! When you look at the brain, it’s not a monolithic system. It’s made up of separate, distributed systems that each specialize in a narrow range of brain functions. We may not know what a particular system is doing, but we know that its decisions depend significantly on the type of information it’s receiving and how that information changes over time. (This is why neuroscience topics like the connectome are so popular.)

In my opinion, if we want to build AI that is flexible and that behaves and performs like the brain, then it makes sense to consider distributed architectures like those that we see in the brain. One could argue that deep learning architectures like multi-layer networks or CNNs can be found in the brain, and that’s true, but those architectures are based on what we knew about the brain 50 years ago.

The alternative to DAI is to continue iterating on monolithic, inflexible architectures that are tightly coupled to a single decision space, like those that we see in deep learning or reinforcement learning (or any supervised learning method, for that matter). I would suggest that these limitations are not just a matter of parameter tweaking or adding layers or data conditioning – these issues are fundamental to deep learning and reinforcement learning, at least as we define them today, so new approaches are required if we’re going to continue innovating and building the AI of tomorrow.

 

Do you believe that achieving AGI using DAI is more likely than reinforcement learning and/or deep learning methods that are currently being pursued by companies such as OpenAI and DeepMind?

Yes, although from what they’re blogging about, I suspect OpenAI and DeepMind are using more distributed architectures than they let on. We’re starting to hear more about multi-system challenges like transfer learning or federated/distributed learning, and coincidentally, about how deep learning and reinforcement learning approaches aren’t going to work for these challenges. We’re also starting to hear from pioneers like Yoshua Bengio about how biologically-inspired architectures could bridge the gap! I’ve been working on biologically-inspired AI for almost 20 years, so I feel very good about what we’ve learned at Quorum AI and how we’re using it to build what we believe is the next generation of AI that will overcome these limitations.

 

Is there anything else that you would like to share about Quorum AI?

We will be previewing our new platform for distributed and agent-based AI at the Federated and Distributed Machine Learning Conference in June 2020. During the talk, I plan to present some recent data on several topics, including sentiment analysis as a bridge to achieving empathic AI.

I would like to give a special thank you to Noah for these amazing answers, and I would recommend that you visit the Quorum AI website to learn more.


AI 101

What is Big Data?


“Big Data” is one of the most commonly used buzzwords of our current era, but what does it really mean?

Here’s a quick, simple definition of big data. Big data is data that is too large and complex to be handled by traditional data processing and storage methods. While that’s a quick definition you can use as a heuristic, it would be helpful to have a deeper, more complete understanding of big data. Let’s take a look at some of the concepts that underlie big data, like storage, structure, and processing.

How Big Is Big Data?

It isn’t as simple as saying “any data over size X is big data”; the environment in which the data is being handled is an extremely important factor in determining what qualifies as big data. The size that data needs to be in order to be considered big data depends on the context, or the task the data is being used for. Two datasets of vastly different sizes can be considered “big data” in different contexts.

To be more concrete, if you try to send a 200-megabyte file as an email attachment, you would not be able to do so. In this context, the 200-megabyte file could be considered big data. In contrast, copying a 200-megabyte file to another device within the same LAN may not take any time at all, and in that context, it wouldn’t be regarded as big data.

However, let’s assume that 15 terabytes worth of video need to be pre-processed for use in training computer vision applications. In this case, the video files take up so much space that even a powerful computer would take a long time to process them all, and so the processing would normally be distributed across multiple computers linked together in order to decrease processing time. These 15 terabytes of video data would definitely qualify as big data.
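To make the distributed-processing idea concrete, the sketch below splits a large batch of video files into independent jobs and processes them in parallel. A local process pool stands in for a real cluster here; at 15-terabyte scale the same pattern would typically run across many machines with a framework such as Spark or Dask. The directory name and the preprocessing step are hypothetical.

```python
from multiprocessing import Pool
from pathlib import Path

def preprocess_video(path):
    # Placeholder for real work: decode frames, resize, extract features...
    return f"processed {path}"

if __name__ == "__main__":
    # Hypothetical directory of raw clips; each file is an independent job.
    video_files = sorted(Path("videos/").glob("*.mp4"))
    with Pool(processes=8) as pool:
        # Chunks are farmed out to workers and results collected as they finish.
        for result in pool.imap_unordered(preprocess_video, video_files):
            print(result)
```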

Types Of Big Data Structures

Big data comes in three different categories of structure: unstructured, semi-structured, and structured data.

Unstructured data is data that possesses no definable structure, meaning the data essentially sits in one large pool. An example of unstructured data would be a database full of unlabeled images.

Semi-structured data is data that doesn't have a formal structure but does exist within a loose structure. For example, email data might count as semi-structured, because you can refer to the data contained in individual emails even though formal data patterns have not been established.

Structured data is data that has a formal structure, with data points categorized by different features. One example of structured data is an Excel spreadsheet containing contact information such as names, emails, phone numbers, and websites.
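The toy snippet below illustrates the three categories side by side; the field names and values are made up purely for illustration.

```python
# Unstructured: a pool of raw items with no defined fields.
unstructured = ["photo_001.jpg raw bytes...", "photo_002.jpg raw bytes..."]

# Semi-structured: loosely organized, self-describing records (e.g. email
# metadata) without a fixed schema enforced across all records.
semi_structured = [
    {"from": "alice@example.com", "subject": "Report", "attachments": 2},
    {"from": "bob@example.com", "body": "See you tomorrow."},  # different fields
]

# Structured: every record follows the same schema, as in a spreadsheet table.
structured = [
    {"name": "Alice", "email": "alice@example.com", "phone": "555-0100"},
    {"name": "Bob",   "email": "bob@example.com",   "phone": "555-0101"},
]
```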


Metrics For Assessing Big Data

Big data can be analyzed in terms of three different metrics: volume, velocity, and variety.

Volume refers to the size of the data, and the average size of datasets is continually increasing. For example, the largest hard drive in 2006 was 750 GB; today, Facebook is thought to generate over 500 terabytes of data in a single day, and the largest consumer hard drive available is 16 terabytes. What qualifies as big data in one era may not be big data in another. More data is generated today because more and more of the objects around us are equipped with sensors, cameras, microphones, and other data collection devices.

Velocity refers to how fast data is moving, or to put that another way, how much data is generated within a given period of time. Social media streams generate hundreds of thousands of posts and comments every minute, while your own email inbox will probably have much less activity. Big data streams are streams that often handle hundreds of thousands or millions of events in more or less real-time. Examples of these data streams are online gaming platforms and high-frequency stock trading algorithms.
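As a small illustration of velocity, the sketch below counts how many events arrived within a sliding one-minute window, the kind of measurement a streaming system might track; the class and window length are hypothetical.

```python
from collections import deque
import time

class RateMonitor:
    """Count events seen in the last `window_seconds` seconds."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.timestamps = deque()

    def record_event(self, now=None):
        now = now if now is not None else time.time()
        self.timestamps.append(now)
        self._evict(now)

    def events_per_window(self, now=None):
        self._evict(now if now is not None else time.time())
        return len(self.timestamps)

    def _evict(self, now):
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()

monitor = RateMonitor()
for _ in range(5):
    monitor.record_event()
print(monitor.events_per_window())  # 5 events in the current window
```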

Variety refers to the different types of data contained within the dataset. Data can be made up of many different formats, like audio, video, text, photos, or serial numbers. In general, traditional databases are formatted to handle one, or just a couple, types of data. To put that another way, traditional databases are structured to hold data that is fairly homogenous and of a consistent, predictable structure. As applications become more diverse, full of different features, and used by more people, databases have had to evolve to store more types of data. Unstructured databases are ideal for holding big data, as they can hold multiple data types that aren’t related to each other.

Methods Of Handling Big Data

There are a number of different platforms and tools designed to facilitate the analysis of big data. Big data pools need to be analyzed to extract meaningful patterns from the data, a task that can prove quite challenging with traditional data analysis tools. In response to the need for tools to analyze large volumes of data, a variety of companies have created big data analysis tools. Big data analysis tools include systems like ZOHO Analytics, Cloudera, and Microsoft BI.


Artificial Neural Networks

AI Used To Recreate Human Brain Waves In Real Time


Recently, a team of researchers created a neural network that is able to recreate human brain waves in real time. As reported by Futurism, the research team, comprised of researchers from the Moscow Institute of Physics and Technology (MIPT) and the Neurobotics corporation, was able to visualize a person's brain waves by translating them with a computer vision neural network and rendering them as images.

The results of the study were published in bioRxiv, and a video was posted alongside the research paper, which showed how the network reconstructed images. The MIPT research team hopes that the study will help them create post-stroke rehabilitation systems that are controlled by brain waves. In order to create rehabilitative devices for stroke victims, neurobiologists have to study the processes the brain uses to encode information. A critical part of understanding these processes is studying how people perceive video information. According to ZME Science, the current methods of extracting images from brain waves typically analyze the signals originating from the neurons, through the use of implants, or extract images using functional MRI.

The research team from Neurobotics and MIPT utilized electroencephalography, or EEG, which records brain waves collected from electrodes placed on the scalp. In such studies, people wear devices that track their neural signals while they watch a video or look at pictures. The analysis of this brain activity yielded input features that could be used in a machine learning system, which was able to reconstruct the images a person saw and render them on a screen in real time.

The experiment was divided into multiple parts. In the first phase, the researchers had the subjects watch 10-second clips of YouTube videos for around 20 minutes. The videos were divided into five categories: motorsports, human faces, abstract shapes, waterfalls, and moving mechanisms. Each category could contain a variety of objects; for example, the motorsports category contained clips of snowmobiles and motorcycles.

The research team analyzed the EEG data that was collected while the participants watched the videos. The EEGs displayed specific patterns for each of the different video clips, and this meant that the team could potentially interpret what content the participants were seeing on videos in more or less real-time.

In the second phase of the experiment, three categories were selected at random and two neural networks were created to work with them. The first network generated random images belonging to one of the three categories, creating them out of random noise that was refined into an image. Meanwhile, the other network generated similar noise based on the EEG scans. The outputs of the two networks were compared, and the randomly generated images were updated based on the EEG-derived noise until the generated images became similar to the images the test subjects were seeing.
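The sketch below caricatures that two-network loop with placeholder functions: one stand-in maps an EEG segment to a latent "noise" vector, another turns a latent vector into an image, and the latent input is nudged toward the EEG-derived vector over several steps. The dimensions, update rule, and stand-in functions are assumptions for illustration only; the actual study used trained deep networks.

```python
import numpy as np

LATENT_DIM = 64

def image_generator(latent):
    """Stand-in for the network that turns a latent noise vector into an image."""
    rng = np.random.default_rng(int(abs(latent.sum()) * 1e6) % (2**32))
    return rng.random((64, 64))  # fake 64x64 image

def eeg_to_latent(eeg_window):
    """Stand-in for the network that maps an EEG segment to a latent vector."""
    return np.resize(eeg_window.mean(axis=0), LATENT_DIM)

def reconstruct(eeg_window, steps=50, step_size=0.1):
    latent = np.random.default_rng(0).normal(size=LATENT_DIM)  # start from random noise
    target = eeg_to_latent(eeg_window)
    for _ in range(steps):
        # Nudge the generator's input toward the EEG-derived latent so the
        # generated image drifts toward what the subject is seeing.
        latent += step_size * (target - latent)
    return image_generator(latent)

frame = reconstruct(np.random.default_rng(3).normal(size=(32, 400)))
print(frame.shape)
```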

After the system had been designed, the researchers tested the program’s ability to visualize brain waves by showing the test subjects videos they hadn’t yet seen from the same categories. The EEGs generated during the second round of viewings were given to the networks, and the networks were able to generate images that could be easily placed into the right category 90% of the time.

The researchers noted that the results of their experiment were surprising because for a long time it was assumed that there wasn’t sufficient information in an EEG to reconstruct the images observed by people. However, the results of the research team proved that it can be done.

Vladimir Konyshev, the head of the Neurorobotics Lab at MIPT, explained that although the research team is currently focused on creating assistive technologies for people with disabilities, the technology they are working on could eventually be used to create neural control devices for the general population. Konyshev explained to TechXplore:

“We’re working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an exoskeleton arm for neurorehabilitation purposes, or paralyzed patients to drive an electric wheelchair, for example. The ultimate goal is to increase the accuracy of neural control for healthy individuals, too.”
