
Artificial Neural Networks

AI Developed to Translate Brain Activity into Words 


Researchers at the University of California, San Francisco have developed artificial intelligence (AI) that can translate brain activity into text. The system works on neural patterns detected while someone is speaking, but experts hope it can eventually be used by individuals who are unable to speak, such as people with locked-in syndrome. 

Dr. Joseph Makin was a co-author of the research. 

“We are not there yet but we think this could be the basis of a speech prosthesis,” said Makin.

The research was published in the journal Nature Neuroscience.

Testing the System

Joseph Makin and his team relied on deep learning algorithms to study the brain signals of four women as they spoke. All of the women had epilepsy and already had electrodes attached to their brains to monitor seizures. 

After the electrodes were attached, each woman read aloud from a set of 50 sentences, including “Tina Turner is a pop singer” and “Those thieves stole 30 jewels,” while her brain activity was measured. The vocabulary across the sentences contained at most 250 unique words. 

The brain activity data was then fed to a neural network algorithm, which was trained to identify regularly occurring patterns. These patterns could then be linked to repeated aspects of speech, such as vowels or consonants, and were fed to a second neural network that attempted to convert them into words to form a sentence. 
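The two-stage idea can be sketched in outline. The following is a minimal, untrained illustration in Python with NumPy; the array shapes, the mean-pooling step, and the tiny vocabulary are all assumptions for demonstration, not the study's actual recurrent architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for illustration only.
N_ELECTRODES = 16   # channels of cortical activity (assumed)
T_STEPS = 100       # time samples per spoken sentence (assumed)
N_FEATURES = 8      # learned speech-related features (vowel/consonant-like)
VOCAB = ["tina", "turner", "is", "a", "pop", "singer"]

def encode(brain_activity, W_enc):
    """Stage 1: map the raw neural time series to recurring feature patterns."""
    return np.tanh(brain_activity @ W_enc)          # (T_STEPS, N_FEATURES)

def decode(features, W_dec):
    """Stage 2: map pooled features to a distribution over candidate words."""
    pooled = features.mean(axis=0)                  # crude temporal pooling
    logits = pooled @ W_dec                         # (len(VOCAB),)
    return VOCAB[int(np.argmax(logits))]

# Untrained random weights: a real system would fit these to recorded data.
W_enc = rng.normal(size=(N_ELECTRODES, N_FEATURES))
W_dec = rng.normal(size=(N_FEATURES, len(VOCAB)))

activity = rng.normal(size=(T_STEPS, N_ELECTRODES))  # one simulated recording
word = decode(encode(activity, W_enc), W_dec)
print(word)  # some word from VOCAB (the weights here are untrained)
```

In the actual study both stages were recurrent networks trained jointly on each participant's recordings; this sketch only shows how the output of a pattern-finding stage becomes the input of a word-producing stage.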

Each woman was asked to repeat the sentences at least twice, with the final repetition not making it into the training data. This allowed the researchers to test the system. 

“Memorising the brain activity of these sentences wouldn’t help, so the network instead has to learn what’s similar about them so that it can generalise to this final example,” says Makin.

Results

The first results from the system were sentences that did not make sense, but it improved as it compared each predicted sequence of words with the sentence that was actually read aloud. 

The team then tested the system by generating written text only from the brain activity during speech. 

There were a lot of mistakes in the translation, but the accuracy rate was still very impressive and much better than previous approaches. Accuracy varied from person to person, but for one individual only 3% of each sentence on average needed corrections. 
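A per-sentence correction rate like the 3% figure is conventionally computed as a word error rate: the word-level edit distance between the decoded sentence and the reference, divided by the reference length. A minimal sketch (whether the study used exactly this metric is an assumption here):

```python
def word_error_rate(reference, hypothesis):
    """Fraction of reference words needing insertion, deletion, or
    substitution to obtain the hypothesis (word-level edit distance)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("those thieves stole thirty jewels",
                      "those thieves stole thirty tools")
print(wer)  # 0.2 — one of five words needs correction
```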

The team also found that pre-training the algorithm on one individual’s data meant the final user needed to supply far less training data of their own. 

According to Dr. Christian Herff of Maastricht University, who was not involved in the study, it is impressive that the system required less than 40 minutes of training data for each participant and a limited collection of sentences, compared to the millions of hours normally required. 

“By doing so they achieve levels of accuracy that haven’t been achieved so far,” he said.

“Of course this is fantastic research but those people could just use ‘OK Google’ as well,” he said. “This is not translation of thought [but of brain activity involved in speech].”

Another challenge could be that people with speech disabilities might have different brain activity. 

“We want to deploy this in a patient with an actual speech disability,” Makin says, “although it is possible their brain activity may be different from that of the women in this study, making this more difficult.”

There is still a long way to go before brain signal data can be translated comprehensively. Humans use a massive number of words, and the study relied on a very restricted set of speech. 

 



Researchers Develop Method for Artificial Neuronal Networks to Communicate with Biological Ones


A group of researchers has developed a way for artificial neuronal networks to communicate with biological neuronal networks. The new development is a big step forward for neuroprosthetic devices, which replace damaged neurons with artificial neuronal circuitry. 

The new method relies on converting artificial electrical spiking signals into a visual pattern, which is then used, via optogenetic stimulation, to entrain the biological neurons. 

The article titled “Toward neuroprosthetic real-time communication from in silico to biological neuronal network via patterned optogenetic stimulation” was published in Scientific Reports.

Neuroprosthetic Technology

An international team led by Ikerbasque researcher Paolo Bonifazi of the Biocruces Health Research Institute in Bilbao, Spain, set out to create neuroprosthetic technology. He was joined by Timothée Levi of the Institute of Industrial Science at the University of Tokyo.

One of the biggest challenges for this technology is that neurons in the brain communicate with extreme precision, whereas the electrical output of an artificial neural network cannot target specific neurons. 

To get around this, the team of researchers converted the electrical signals to light. 

According to Levi, “advances in optogenetic technology allowed us to precisely target neurons in a very small area of our biological neuronal network.”

Optogenetics

Optogenetics is a technology that relies on light-sensitive proteins found in algae and other organisms. When these proteins are inserted into neurons, shining light onto a neuron can make it active or inactive, depending on the type of protein. 

In this project, the researchers used proteins that are activated by blue light. The first step was to convert the electrical output of the spiking neuronal network into a checkered pattern made up of blue and black squares. This pattern was then projected down onto a 0.8 by 0.8 mm square of the biological neural network, which was growing in a dish. Only the neurons hit by light from the blue squares were activated. 
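The spike-to-light conversion step can be sketched very simply. In this illustration, each artificial neuron that fires lights up one square of the projected pattern; the grid size and one-neuron-per-square mapping are assumptions for demonstration, not the paper's exact encoding.

```python
import numpy as np

GRID = 8  # an 8 x 8 = 64-square pattern (grid size is an assumption)

def spikes_to_pattern(spiking):
    """Convert a binary spike vector (length GRID*GRID) into a 2-D
    blue/black mask: 1 = blue square (stimulate), 0 = black (dark)."""
    spiking = np.asarray(spiking, dtype=int)
    assert spiking.size == GRID * GRID
    return spiking.reshape(GRID, GRID)

rng = np.random.default_rng(1)
spikes = (rng.random(GRID * GRID) < 0.3).astype(int)  # ~30% of units fire
pattern = spikes_to_pattern(spikes)
# Only the cultured neurons sitting under a blue (1) square receive
# optogenetic stimulation when this mask is projected onto the dish.
print(pattern.sum(), "of", GRID * GRID, "squares lit")
```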

Synchronous activity is produced in cultured neurons whenever there is spontaneous activity. This results in a type of rhythm that is based on the way the neurons are connected together, the different types of neurons, and how they adapt and change. 

“The key to our success,” says Levi, “was understanding that the rhythms of the artificial neurons had to match those of the real neurons. Once we were able to do this, the biological network was able to respond to the ‘melodies’ sent by the artificial one. Preliminary results obtained during the European Brainbow project helped us to design these biomimetic artificial neurons.”

After tuning the artificial neural network to different rhythms, the researchers eventually found the best match and were able to identify changes in the global rhythms of the biological network.

“Incorporating optogenetics into the system is an advance towards practicality,” says Levi. “It will allow future biomimetic devices to communicate with specific types of neurons or within specific neuronal circuits.”

Future prosthetic devices developed with the system could replace damaged brain circuits or restore communication between different regions of the brain, leading to an impressive new generation of neuroprostheses. 

 



Engineers Develop Energy-Efficient “Early Bird” Method to Train Deep Neural Networks


Engineers at Rice University have developed a new method for training deep neural networks (DNNs) with a fraction of the energy normally required. DNNs are the form of artificial intelligence (AI) that plays a key role in the development of technologies such as self-driving cars, intelligent assistants, facial recognition, and other applications.

Early Bird was detailed in a paper presented on April 29 by researchers from Rice and Texas A&M University at the International Conference on Learning Representations (ICLR 2020). 

The study’s lead authors were Haoran You and Chaojian Li from Rice’s Efficient and Intelligent Computing (EIC) Lab. In one study, they demonstrated how the method could train a DNN to the same level of accuracy as today’s methods while using 10.7 times less energy. 

The research was led by EIC Lab director Yingyan Lin, Rice’s Richard Baraniuk, and Texas A&M’s Zhangyang Wang. Other co-authors include Pengfei Xu, Yonggan Fu, Yue Wang, and Xiaohan Chen. 

“A major driving force in recent AI breakthroughs is the introduction of bigger, more expensive DNNs,” Lin said. “But training these DNNs demands considerable energy. For more innovations to be unveiled, it is imperative to find ‘greener’ training methods that both address environmental concerns and reduce financial barriers of AI research.”

Expensive to Train DNNs

It can be very expensive to train the world’s best DNNs, and the price tag continues to increase. A 2019 study led by the Allen Institute for AI in Seattle found that the number of computations needed to train a top-flight deep neural network increased 300,000-fold between 2012 and 2018. Another 2019 study, led by researchers at the University of Massachusetts Amherst, found that training a single elite DNN releases about as much carbon dioxide as five U.S. automobiles emit over their lifetimes. 

DNNs consist of millions of artificial neurons, which is what enables them to perform their highly specialized tasks. By observing large numbers of examples, they can learn to make decisions, sometimes outperforming humans, without explicit programming. 

Prune and Train

Lin is an assistant professor of electrical and computer engineering in Rice’s Brown School of Engineering. 

“The state-of-the-art way to perform DNN training is called progressive prune and train,” Lin said. “First, you train a dense, giant network, then remove parts that don’t look important — like pruning a tree. Then you retrain the pruned network to restore performance because performance degrades after pruning. And in practice you need to prune and retrain many times to get good performance.”

This method works because not all of the artificial neurons are needed to complete the specialized task: training strengthens some connections between neurons, while others can be discarded. Pruning cuts computational costs and reduces model size, which makes fully trained DNNs more affordable. 
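One standard way to decide which connections to discard is magnitude pruning: zero out the weights with the smallest absolute values. A minimal NumPy sketch of that idea (the paper's exact pruning criterion is not specified here, so magnitude pruning is an assumption):

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of weights and return
    the pruned weights plus the binary keep-mask."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    threshold = np.sort(flat)[k] if k > 0 else 0.0
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))           # a stand-in dense layer
W_pruned, mask = magnitude_prune(W, 0.8)  # drop the smallest 80%
print(mask.mean())  # fraction of weights kept, roughly 0.2
```

In progressive prune-and-train, a step like this alternates with retraining passes that restore the accuracy lost to pruning.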

“The first step, training the dense, giant network, is the most expensive,” Lin said. “Our idea in this work is to identify the final, fully functional pruned network, which we call the ‘early-bird ticket,’ in the beginning stage of this costly first step.”

The researchers were able to discover these early-bird tickets by looking for key network connectivity patterns, which allowed them to speed up DNN training. 

Early Bird in the Beginning Phase of Training

Lin and the other researchers found that early-bird tickets could appear one-tenth or less of the way through the beginning phase of training. 

“Our method can automatically identify early-bird tickets within the first 10% or less of the training of the dense, giant networks,” Lin said. “This means you can train a DNN to achieve the same or even better accuracy for a given task in about 10% or less of the time needed for traditional training, which can lead to more than one order of magnitude savings in both computation and energy.”
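One way to detect when the ticket has emerged is to watch the pruning mask itself: once the mask computed at successive training steps stops changing, the key connectivity pattern has stabilized. The simulation below illustrates that idea with synthetic weights and a Hamming-distance check; it is a sketch of the detection principle, not the authors' implementation.

```python
import numpy as np

def prune_mask(weights, keep_fraction):
    """Binary mask keeping the largest-magnitude `keep_fraction` of weights."""
    flat = np.abs(weights).ravel()
    k = int((1 - keep_fraction) * flat.size)
    threshold = np.sort(flat)[k] if k > 0 else 0.0
    return (np.abs(weights) >= threshold).astype(int)

def mask_distance(m1, m2):
    """Normalized Hamming distance between two pruning masks."""
    return float(np.mean(m1 != m2))

# Simulated training: weights change a lot early on, then settle down.
rng = np.random.default_rng(0)
W = rng.normal(size=(50, 50))
distances = []
prev_mask = prune_mask(W, 0.2)
for epoch in range(10):
    step = rng.normal(size=W.shape) * (0.5 if epoch < 3 else 0.01)
    W = W + step
    mask = prune_mask(W, 0.2)
    distances.append(mask_distance(prev_mask, mask))
    prev_mask = mask

# Once successive masks barely change, the "early-bird ticket" is drawn
# and the rest of training can proceed on the small pruned network.
EPSILON = 0.05
emerged = next((i for i, d in enumerate(distances) if d < EPSILON), None)
print("ticket found at epoch", emerged)
```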

Besides making training faster and more energy-efficient, the researchers have a strong focus on environmental impact. 

“Our goal is to make AI both more environmentally friendly and more inclusive,” she said. “The sheer size of complex AI problems has kept out smaller players. Green AI can open the door enabling researchers with a laptop or limited computational resources to explore AI innovations.”

The research received support from the National Science Foundation. 

 



AI Models Struggle To Predict People’s Irregular Behavior During Covid-19 Pandemic


Retail and service companies around the world make use of AI algorithms to predict customer behaviors, take stock of inventory, estimate marketing impacts, and detect possible instances of fraud. The machine learning models used to make these predictions are trained on patterns derived from the normal, everyday activity of people. Unfortunately, our day-to-day activity has changed during the coronavirus pandemic, and as MIT Technology Review reported, current machine learning models are being thrown off as a result. The severity of the problem differs from company to company, but many models have been negatively impacted by the sudden change in people’s behavior over the course of the past few weeks.

When the coronavirus pandemic hit, people’s purchasing habits shifted dramatically. Prior to its onset, the most commonly purchased items were things like phone cases, phone chargers, headphones, and kitchenware. After the start of the pandemic, Amazon’s top 10 search terms became things like Clorox wipes, Lysol spray, paper towels, hand sanitizer, face masks, and toilet paper. Over the last week of February, the top Amazon searches all became related to products people needed to shelter from Covid-19. The correlation between Covid-19-related product searches and purchases and the spread of the disease is so reliable that it can be used to track the pandemic across different geographical regions. Yet machine learning models break down when a model’s input data is too different from the data used to train it.
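This kind of breakdown can be caught before predictions degrade by monitoring for distribution shift between training-time and live inputs. A common drift score is the population stability index (PSI); the sketch below applies it to made-up search-category shares shaped like the shift described above (the numbers and thresholds are illustrative assumptions, not data from any company mentioned here).

```python
import numpy as np

def population_stability_index(train_freqs, live_freqs, eps=1e-6):
    """PSI: compares a feature's training-time distribution with its live
    distribution. Common rules of thumb: < 0.1 stable, 0.1-0.25 moderate
    shift, > 0.25 model likely unreliable."""
    p = np.asarray(train_freqs, dtype=float) + eps
    q = np.asarray(live_freqs, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum((q - p) * np.log(q / p)))

# Illustrative search-category shares before and during the pandemic.
categories = ["phone cases", "chargers", "headphones", "sanitizer", "masks"]
before = [0.30, 0.25, 0.25, 0.10, 0.10]
during = [0.05, 0.05, 0.10, 0.40, 0.40]

psi = population_stability_index(before, during)
print(round(psi, 2))  # well above 0.25: retrain or re-weight the model
```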

The volatility of the situation has made automating supply chains and inventories difficult. Rael Cline, the CEO of London-based consultancy Nozzle, explained that companies are trying to optimize for last week’s demand for toilet paper, while “this week everyone wants to buy puzzles or gym equipment.”

Other companies have their own share of problems. One company provides investment recommendations based on the sentiment of various news articles, but because the sentiment of news articles at the moment is often more pessimistic than usual, the investing advice could be heavily skewed toward the negative. Meanwhile, a streaming video company that uses recommendation algorithms to suggest content to viewers found that, as many people suddenly subscribed to the service, its recommendations began to miss the mark. Yet another company, responsible for supplying retailers in India with condiments and sauces, discovered that bulk orders broke its predictive models.

Different companies are handling the problems caused by pandemic behavior patterns in different ways. Some companies are simply revising their estimates downward. People still continue to subscribe to Netflix and purchase products on Amazon, but they have cut back on luxury spending, postponing purchases on big-ticket items. In a sense, people’s spending behaviors can be conceived of as a contraction of their usual behavior.

Other companies have had to get more hands-on with their models, having engineers make important tweaks to the model and its training data. For example, Phrasee is an AI firm that utilizes natural language processing and generation models to create copy and advertisements for a variety of clients. Phrasee always has engineers check what text the model generates, and the company has begun manually filtering out certain phrases in its copy. Phrasee has decided to ban the generation of phrases that might encourage dangerous activities during a time of social distancing, like “party wear”. It has also decided to restrict terms that could provoke anxiety, like “brace yourself”, “buckle up”, or “stock up”.

The Covid-19 crisis has demonstrated that freak events can throw off even highly-trained models that are typically reliable, as things can get much worse than the worst-case scenarios that are typically included within training data. Rajeev Sharma, CEO of AI consultancy Pactera Edge, explained to MIT Technology Review that machine learning models could be made more reliable by being trained on freak events like the Covid-19 pandemic and the Great Depression, in addition to the usual upwards and downwards fluctuations.
