
Natural Language Processing

Researchers Train An AI To Predict The Smell Of Chemicals



A recent paper from researchers at Google Brain demonstrates how they trained an AI to predict the smell of a molecule from its chemical structure. As reported by Wired, the researchers are hopeful that their work could help unravel some of the mysteries surrounding the human sense of smell, which is poorly understood in comparison to our other senses.

The differences between smells are complex and a single atom being changed in a molecule can change a smell from pleasant to unpleasant. It’s difficult for researchers to understand the patterns that cause chemical structures to be interpreted by our olfactory senses as pleasant or aversive. In contrast, the patterns of the electromagnetic spectrum that appear as color to our eyes are much more easily quantifiable, with scientists being able to make precise measurements that will tell them what certain wavelengths of light will look like.

Machine learning algorithms excel at finding patterns within data, and for this reason, AI researchers have attempted to use machine learning to gain better insight into how smells are interpreted by the human brain. Attempts to utilize machine learning algorithms to quantify smell include the DREAM Olfaction Prediction Challenge carried out in 2015. Several studies took the data from the challenge and tried to generate natural language descriptions of mono-molecular odorants.

The recent study, published on arXiv, catalogs the Google Brain researchers’ attempts to quantify smell using neural networks. The researchers utilized a Graph Neural Network, or GNN. Graph Neural Networks are capable of interpreting graph data: data structures composed of nodes and edges. Graphs are commonly used to represent networks or relationships between individual data points. In the context of a social network, each person in the network would be represented by a node, or vertex. Such graphs are used by social media companies to identify people on the periphery of your current network and suggest new friends.
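To make the graph idea concrete, here is a minimal sketch of how a molecule can be encoded as the node-and-edge structure a GNN consumes. The atom labels and adjacency-list format are illustrative choices, not the paper's actual input encoding.

```python
# Illustrative sketch: a molecule as a graph. Nodes are atoms, edges are
# covalent bonds. Ethanol's heavy atoms (C-C-O) serve as a tiny example.
atoms = ["C1", "C2", "O"]            # node labels
bonds = [("C1", "C2"), ("C2", "O")]  # edges (single bonds)

# Build an adjacency list -- the structure a GNN's message-passing
# step walks when each node aggregates features from its neighbors.
adjacency = {atom: [] for atom in atoms}
for a, b in bonds:
    adjacency[a].append(b)
    adjacency[b].append(a)

print(adjacency)
# {'C1': ['C2'], 'C2': ['C1', 'O'], 'O': ['C2']}
```

The middle carbon has two neighbors, so during message passing it would aggregate features from both, which is how structural patterns like functional groups become visible to the network.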

For the purposes of interpreting smells, the researchers trained the network on thousands of molecules, each matched with a natural language descriptor. The GNN was able to interpret the data and pick up on patterns in the structure of the molecules. The descriptors used by the researchers were terms like “sweet”, “smoky”, or “woody”. Approximately two-thirds of the over 5,000 molecules compiled by the researchers were used to train the model, while the remaining third was used to test it.
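The two-thirds/one-third split described above can be sketched as follows. The molecule IDs and odor labels here are made up for illustration; only the dataset size and split ratio come from the article.

```python
# Sketch of a two-thirds train / one-third test split over ~5,000
# labeled molecules. IDs and labels are placeholders, not real data.
import random

random.seed(0)
dataset = [(f"molecule_{i}", random.choice(["sweet", "smoky", "woody"]))
           for i in range(5000)]

random.shuffle(dataset)                # randomize before splitting

split = (2 * len(dataset)) // 3        # two-thirds for training
train_set, test_set = dataset[:split], dataset[split:]

print(len(train_set), len(test_set))   # 3333 1667
```

Holding out the final third lets the researchers check whether the model generalizes to molecules it never saw during training, rather than memorizing the training set.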

The model worked so well that its very first iteration already matched the peak performance achieved by other research groups that had tried to assign natural language labels to chemical structures.

Alex Wiltschko, one of the researchers who worked on the project, acknowledges that there are a couple of limitations to the current approach. For one, the AI may distinguish between chemical structures that humans would describe as smelling the same, classifying two chemicals differently even though people would call both of them “earthy” or “woody”. Another issue is that the classifier doesn’t distinguish between chiral pairs, molecules that are mirror images of each other. Their different orientations mean they have different smells, but the model currently doesn’t see them as different.

The research team aims to address these limitations in their future work. The research still has a long way to go, but it is a step towards understanding what features of a molecule correspond with our perception of certain smells. The Google Brain team isn’t the only research team to be working on applications of AI aimed at recognizing scents. Other AI experiments involving scent include IBM’s experiments with AI-generated perfumes and an experiment by Russian scientists to detect potentially toxic mixtures of gas.


Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.

Deep Learning

New Research Shows How AI Can Act as Mediators


New research out of Cornell University shows how artificial intelligence (AI) can play a role in mediating conversations. This comes during a time of social distancing and remote conversations due to a pandemic. 

According to the new study, humans trusted artificial intelligence systems more than the actual people they were talking to when having difficult conversations. The AI systems in question were “smart” reply suggestions in text messages.

The new study is titled “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust.” It was published online in the journal Computers in Human Behavior.

Jess Hohenstein, a doctoral student in the field of information science, is the paper’s first author.

“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” said Hohenstein. “This introduces a potential to take AI and use it as a mediator in our conversations.”

Detect When Things Go Bad 

During a conversation, the algorithm can analyze language to detect the moment when things are going bad. It can then suggest certain conflict-resolution strategies, according to Hohenstein.
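The study does not publish its detection code, but the shape of the idea can be sketched as follows. This is a toy illustration only: the keyword list, threshold, and canned suggestions are all invented here, and a real mediator would rely on a trained language model rather than keyword matching.

```python
# Toy sketch (NOT the Cornell system): score a message for conflict
# cues and, past a threshold, surface a de-escalating suggested reply.
from typing import Optional

CONFLICT_CUES = {"never", "always", "ridiculous", "whatever", "fault"}
SUGGESTIONS = ("I see your point -- can you tell me more?",
               "Let's take a step back for a second.")

def suggest_reply(message: str) -> Optional[str]:
    """Offer a conflict-resolution reply once two or more cues appear."""
    words = set(message.lower().split())
    if len(words & CONFLICT_CUES) >= 2:
        return SUGGESTIONS[0]
    return None

print(suggest_reply("You never listen and it's always my fault"))
```

Even this crude version shows the mediation pattern the researchers describe: the system watches the language of the exchange and intervenes only when the conversation turns sour.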

The study’s main goal was to look at the subtle but significant ways that AI systems like smart replies can alter how humans interact. According to the researchers, something as small as selecting a suggested reply that is not completely accurate can drastically change aspects of a conversation. That language is often selected to save time typing, yet it can have a direct effect on relationships.

Malte Jung is co-author of the study and assistant professor of information science. He is also director of the Robots in Groups lab, which studies how robots change group dynamics. 

“Communication is so fundamental to how we form perceptions of each other, how we form and maintain relationships, or how we’re able to accomplish anything working together,” said Jung.

“This study falls within the broader agenda of understanding how these new AI systems mess with our capacity to interact,” Jung continued. “We often think about how the design of systems affects how we interact with them, but fewer studies focus on the question of how the technologies we develop affect how people interact with each other.”

Better Understanding of Human Interaction

The study can help researchers understand the ways in which people perceive and interact with computers. It can also help improve human communication through subtle guidance and AI reminders.

Hohenstein and Jung wanted to find out whether the AI system could, like a car’s crumple zone, absorb the “crash” of a conversation.

“There’s a physical mechanism in the front of the car that’s designed to absorb the force of the impact and take responsibility for minimizing the effects of the crash,” Hohenstein said. “Here we see the AI system absorb some of the moral responsibility.”

The research was supported in part by the National Science Foundation. 

 


Deep Learning

Uber’s Fiber Is A New Distributed AI Model Training Framework



According to VentureBeat, AI researchers at Uber have recently posted a paper on arXiv outlining a new platform intended to assist in the creation of distributed AI models. The platform is called Fiber, and it can be used to drive both reinforcement learning tasks and population-based learning. Fiber is designed to make large-scale parallel computation more accessible to non-experts, letting them take advantage of the power of distributed AI algorithms and models.

Fiber has recently been made open source on GitHub. It is compatible with Python 3.6 or above and requires Kubernetes running on a Linux system in a cloud environment. According to the team of researchers, the platform is capable of easily scaling up to hundreds or thousands of individual machines.

The team of researchers from Uber explains that many of the most recent and relevant advances in artificial intelligence have been driven by larger models and by algorithms trained using distributed techniques. However, creating population-based models and reinforcement learning models remains a difficult task for distributed training schemes, as they frequently have issues with efficiency and flexibility. Fiber makes the distributed system more reliable and flexible by combining cluster management software with dynamic scaling, letting users move their jobs from one machine to a large number of machines seamlessly.

Fiber is made up of three different components: an API, a backend, and a cluster layer. The API layer enables users to create things like queues, managers, and processes. The backend layer lets the user create and terminate jobs that are being managed by different clusters, and the cluster layer manages the individual clusters themselves along with their resources, which greatly reduces the number of items that Fiber has to keep tabs on.

Fiber enables jobs to be queued and run remotely on one local machine or many different machines, utilizing the concept of job-backed processes. Fiber also makes use of containers to ensure that things like input data and dependent packages are self-contained. The framework even includes built-in error handling so that if a worker crashes it can be quickly revived. Fiber is able to do all of this while interacting with cluster managers, letting Fiber apps run as if they were normal apps running on a given computer cluster.
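The worker-pool pattern described above can be sketched with the standard library so the code runs anywhere; this uses Python's thread-backed `multiprocessing.dummy.Pool` as a stand-in rather than Fiber itself, and the task function is a made-up placeholder.

```python
# Minimal sketch of the pool-of-workers pattern (a stand-in for Fiber,
# using the standard library's thread-backed Pool with the same interface).
from multiprocessing.dummy import Pool

def simulate_worker(task_id: int) -> int:
    # Placeholder for one unit of work, e.g. a single RL rollout.
    return task_id * task_id

with Pool(4) as pool:                       # four parallel workers
    results = pool.map(simulate_worker, range(8))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

With a framework like Fiber, the same map-over-tasks shape would instead dispatch each task as a job-backed process across a cluster, which is what lets the code scale from one laptop to thousands of machines without restructuring.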

Experimental results showed that Fiber’s average response time was a few milliseconds and that it scaled better than baseline techniques when built with 2,048 processor cores (workers). The time required to complete jobs decreased gradually as the number of workers increased. With 512 workers available, IPyParallel completed 50 iterations of training in approximately 1,400 seconds, while Fiber completed the same 50 iterations in approximately 50 seconds.

The coauthors of the Fiber paper explain that Fiber is able to achieve multiple goals, such as dynamically scaling algorithms and leveraging large volumes of computing power:

“[Our work shows] that Fiber achieves many goals, including efficiently leveraging a large amount of heterogeneous computing hardware, dynamically scaling algorithms to improve resource usage efficiency, reducing the engineering burden required to make [reinforcement learning] and population-based algorithms work on computer clusters, and quickly adapting to different computing environments to improve research efficiency. We expect it will further enable progress in solving hard [reinforcement learning] problems with [reinforcement learning] algorithms and population-based methods by making it easier to develop these methods and train them at the scales necessary to truly see them shine.”


Deep Learning

Researchers Develop Computer Algorithm Inspired by Mammalian Olfactory System


Researchers from Cornell University have created a computer algorithm inspired by the mammalian olfactory system. Scientists have long sought to explain how mammals learn and identify smells. The new algorithm provides insight into the workings of the brain, and applying it to a computer chip allows the chip to learn patterns more quickly and reliably than current machine learning models.

Thomas Cleland is a professor of psychology and senior author of the study titled “Rapid Learning and Robust Recall in a Neuromorphic Olfactory Circuit,” published in Nature Machine Intelligence on March 16.

“This is a result of over a decade of studying olfactory bulb circuitry in rodents and trying to figure out essentially how it works, with an eye towards things we know animals can do that our machines can’t,” Cleland said. 

“We now know enough to make this work. We’ve built this computational model based on this circuitry, guided heavily by things we know about the biological systems’ connectivity and dynamics,” he continued. “Then we say, if this were so, this would work. And the interesting part is that it does work.”

Intel Computer Chip

Cleland was joined by co-author Nabil Imam, a researcher at Intel, and together they applied the algorithm to an Intel computer chip. The chip is called Loihi, and it is neuromorphic, which means it is inspired by the functions of the brain. The chip has digital circuits that mimic the way in which neurons learn and communicate. 

The Loihi chip relies on parallel cores that communicate via discrete spikes, and each one of these spikes has an effect that can change depending on local activity. This requires different strategies for algorithm design than what is used in existing computer chips. 

Through the use of neuromorphic computer chips, machines could work a thousand times faster than a computer’s central or graphics processing units at identifying patterns and carrying out certain tasks. 

The Loihi research chip can also run certain algorithms while using around a thousand times less power than traditional methods. This is well-suited for the algorithm, which can accept input patterns from various different sensors, learn patterns quickly and sequentially, and identify each of the meaningful patterns even with strong sensory interference. The algorithm is capable of successfully identifying odors, and it can do so when the pattern is an astounding 80% different from the pattern originally learned by the computer. 
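Recovering a stored pattern from a badly degraded input can be illustrated with a classical Hopfield network. This is not the neuromorphic model from the study (and uses a milder 30% corruption than the 80% figure above); it is just a textbook way to see associative recall in action, with all sizes and seeds chosen arbitrarily.

```python
# Classical Hopfield-network sketch (NOT the Cornell/Intel model):
# store one +/-1 pattern, corrupt a copy, and recover the original.
import random

N = 32
random.seed(1)
pattern = [random.choice([-1, 1]) for _ in range(N)]  # stored pattern

# Hebbian weights for a single stored pattern (zero diagonal).
weights = [[0 if i == j else pattern[i] * pattern[j] / N
            for j in range(N)] for i in range(N)]

# Corrupt 30% of the entries by flipping their sign.
noisy = pattern[:]
for i in random.sample(range(N), k=int(0.3 * N)):
    noisy[i] = -noisy[i]

# One synchronous update: each unit takes the sign of its weighted input.
recovered = [1 if sum(w * x for w, x in zip(row, noisy)) >= 0 else -1
             for row in weights]

print(recovered == pattern)  # True: the stored pattern is recovered
```

The network pulls the corrupted input back to the nearest stored pattern, which is the same qualitative behavior the article describes: the signal is substantially destroyed, yet the system recovers it.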

“The pattern of the signal has been substantially destroyed,” Cleland said, “and yet the system is able to recover it.”

The Mammalian Brain

The brain of a mammal identifies and remembers smells extremely well, with thousands of olfactory receptors and complex neural networks working to analyze the patterns associated with odors. One thing mammals do better than artificial intelligence systems is retain what they’ve learned even after acquiring new knowledge. In deep learning approaches, the network must be presented with everything at once, since new information can alter or even destroy what the system previously learned.

“When you learn something, it permanently differentiates neurons,” Cleland said. “When you learn one odor, the interneurons are trained to respond to particular configurations, so you get that segregation at the level of interneurons. So on the machine side, we just enhance that and draw a firm line.”

Cleland spoke about how the team came up with new experimental approaches. 

“When you start studying a biological process that becomes more intricate and complex than you can just simply intuit, you have to discipline your mind with a computer model,” he said. “You can’t fuzz your way through it. And that led us to a number of new experimental approaches and ideas that we wouldn’t have come up with just by eyeballing it.”

 
