Natural Language Processing

Multimodal Learning Is Becoming Prominent Among AI Developers

VentureBeat (VB) devoted one of its weekly reports to the advantages of multimodal learning in the development of artificial intelligence, prompted by a report on the subject from ABI Research.

The key concept lies in the fact that “data sets are fundamental building blocks of AI systems,” and that without data sets, “models can’t learn the relationships that inform their predictions.” The ABI report predicts that “while the total installed base of AI devices will grow from 2.69 billion in 2019 to 4.47 billion in 2024, comparatively few will be interoperable in the short term.”

This could represent a considerable waste of time, energy, and resources: “rather than combine the gigabytes to petabytes of data flowing through them into a single AI model or framework, they’ll work independently and heterogeneously to make sense of the data they’re fed.”

To overcome this, ABI proposes multimodal learning, a methodology that could consolidate data “from various sensors and inputs into a single system. Multimodal learning can carry complementary information or trends, which often only become evident when they’re all included in the learning process.”

VB presents an illustrative example involving images and text captions: “If different words are paired with similar images, these words are likely used to describe the same things or objects. Conversely, if some words appear next to different images, this implies these images represent the same object. Given this, it should be possible for an AI model to predict image objects from text descriptions, and indeed, a body of academic literature has proven this to be the case.”
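Neither VB nor ABI provides code, but the idea of learning a shared image-text representation can be sketched briefly. Below is a minimal, illustrative dual-encoder in PyTorch; the layer sizes and the contrastive objective are assumptions for illustration, not details from the report.

```python
# Minimal sketch of image-caption alignment (illustrative only, not the systems
# in the ABI report). Two encoders project image features and caption features
# into a shared space; matching pairs should end up close together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, shared_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, shared_dim)  # e.g. CNN features in
        self.text_proj = nn.Linear(text_dim, shared_dim)    # e.g. sentence embeddings in

    def forward(self, image_feats, text_feats):
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img @ txt.T  # similarity matrix: [n_images, n_captions]

def contrastive_loss(similarity, temperature=0.07):
    # Matching image/caption pairs sit on the diagonal of the similarity matrix.
    labels = torch.arange(similarity.size(0))
    return (F.cross_entropy(similarity / temperature, labels) +
            F.cross_entropy(similarity.T / temperature, labels)) / 2
```

Trained on paired images and captions, a model like this can score how well a text description matches an image, which is the pattern the academic literature VB mentions builds on.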

Despite the possible advantages, ABI notes that even tech giants like IBM, Microsoft, Amazon, and Google continue to focus predominantly on unimodal systems, in part because of the challenges such a switch would represent.

Still, the ABI researchers anticipate that “the total number of devices shipped will grow from 3.94 million in 2017 to 514.12 million in 2023, spurred by adoption in the robotics, consumer, health care, and media and entertainment segments.” Among the examples of companies already implementing multimodal learning, they cite Waymo, which is using such approaches to build “hyper-aware self-driving vehicles,” and Intel Labs, where the company’s engineering team is “investigating techniques for sensor data collation in real-world environments.”

Intel Labs principal engineer Omesh Tickoo explained to VB: “What we did is, using techniques to figure out context such as the time of day, we built a system that tells you when a sensor’s data is not of the highest quality. Given that confidence value, it weighs different sensors against each other at different intervals and chooses the right mix to give us the answer we’re looking for.”
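Tickoo's description amounts to weighting each sensor by a per-interval confidence score before combining readings. A simplified sketch of that kind of fusion follows; the sensor names and confidence values are hypothetical, not Intel Labs' implementation.

```python
# Simplified sketch of confidence-weighted sensor fusion (hypothetical values;
# not Intel Labs' actual system). Each reading carries a confidence score, and
# the fused estimate down-weights low-quality sensors.
def fuse_readings(readings):
    """readings: list of (value, confidence) pairs, confidence in [0, 1]."""
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        return None  # no trustworthy sensor at this interval
    return sum(value * conf for value, conf in readings) / total_weight

# Example: a camera degraded by low light gets a low confidence at night.
estimate = fuse_readings([(2.4, 0.9),   # lidar, high confidence
                          (3.1, 0.2),   # camera, poor lighting
                          (2.6, 0.7)])  # radar
print(estimate)  # weighted toward the lidar and radar readings
```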

VB notes that unimodal learning will remain predominant in domains where it is highly effective, such as image recognition and natural language processing. At the same time, it predicts that “as electronics become cheaper and compute more scalable, multimodal learning will likely only rise in prominence.”

Former diplomat and translator for the UN, currently freelance journalist/writer/researcher, focusing on modern technology, artificial intelligence, and modern culture.

Natural Language Processing

AI Opens Up New Ways To Fight Illegal Opioid Sales And Other Cybercrime

The US Department of Health and Human Services (HHS) and the National Institute on Drug Abuse (NIDA) are investing in the use of AI to curb the illegal sale of opioids and, hopefully, reduce drug abuse. As Vox reported, NIDA’s AI tool will endeavor to track illegal internet pharmaceutical markets, but the approaches it uses could easily be applied to other forms of cybercrime.

One of the researchers responsible for the development of the tool, Timothy Mackey, recently spoke to Vox and explained that the AI algorithms used to track the illegal sale of opioids could also be used to detect other forms of illegal sales, such as counterfeit products and illegal wildlife trafficking.

NIDA’s AI tool must be able to distinguish between general discussion of opioids and attempts to negotiate the sale of opioids. According to Mackey, only a relatively small percentage of tweets referencing opioids are actually related to illegal sales: out of approximately 600,000 tweets referencing one of several different opioids, only about 2,000 actually marketed those drugs in any way. The AI tool must also be robust enough to keep up with changes in the language used to illegally market opioids. People who sell drugs illegally frequently use coded language and non-obvious keywords, and they change strategies quickly. Mackey explains that misspelled aliases for drug names are common and that images of things other than the drugs in question are often used to create listings on websites like Instagram.

While Instagram and Facebook ban the marketing of drugs and encourage users to report instances of abuse, the illegal content can be very difficult to catch, precisely because drug sellers tend to change strategies and code words quickly. Mackey explained that these coded posts and hashtags on Instagram typically contain information about how to contact the dealer and purchase illegal drugs from them. He also explained that some illegal sellers represent themselves as legitimate pharmaceutical companies and link to e-commerce platforms. While the FDA has often tried to crack down on these sites, they remain an issue.

In designing AI tools to detect illegal drug marketing, Mackey and the rest of the research team utilized a combination of deep learning and topic modeling. The team designed a deep learning model based on a Long Short-Term Memory network trained on the text of Instagram posts, with the goal of creating a text classifier that could automatically flag posts potentially related to illegal drug sales. They also made use of topic modeling, letting their model discern keywords associated with opioids like fentanyl and Percocet. This makes the model more robust and sophisticated, able to match topics and conversations rather than just single words. The topic modeling helped the research team reduce a dataset of around 30,000 tweets regarding fentanyl to just a handful that appeared to be marketing it.
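The team's exact architecture isn't given in the article, but the general pattern it describes, an LSTM-based binary text classifier, can be sketched roughly as follows (assuming TensorFlow/Keras; the vocabulary size, layer widths, and training data are placeholders, not the NIDA team's settings).

```python
# Rough sketch of an LSTM text classifier for flagging posts (illustrative
# hyperparameters and placeholder data; not the NIDA team's actual model).
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 100        # assumed maximum tokens per post

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),      # token ids -> dense vectors
    layers.LSTM(64),                        # Long Short-Term Memory over the token sequence
    layers.Dense(1, activation="sigmoid"),  # probability the post markets drugs
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train: integer-encoded posts padded to MAX_LEN tokens;
# y_train: 1 = suspected sale, 0 = general discussion.
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```

A topic model (Latent Dirichlet Allocation is one common choice, though the article does not name the team's method) would then be applied separately to surface drug-related keywords and narrow the candidate set, as described above.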

Mackey and the rest of the research team may have developed their AI application for use by NIDA, but social media companies like Facebook, Twitter, Reddit, and YouTube are also investing heavily in the use of AI to flag content that violates their policies. According to Mackey, he has been in talks with Twitter and Facebook about such applications before, but right now the focus is on creating a commercially available application based on his research for NIDA, one he hopes could be used by social media platforms, regulators, and more.

Mackey explained that the approach developed for the NIDA research could be generalized to fight other forms of cybercrime, such as the trafficking of animals or the illegal sale of firearms. Instagram has had problems with illegal animal trafficking before, banning the advertising of all animal sales in 2017 in response. The company also tries to remove any posts related to animal trafficking as soon as they pop up, but despite this there is a continued black market for exotic pets, and advertisements for them still show up in Instagram searches.

There are some ethical issues that will have to be negotiated if the NIDA tool is to be implemented. Drug policy experts warn that it could enable the over-criminalization of low-level drug sellers and give the false impression that the problem is being solved, even though such AI tools may not reduce the overall demand for the substance. Nonetheless, if properly used, the AI tools could help law enforcement agencies establish links between online sellers and offline supply chains, helping them quantify the scope of the problem. In addition, techniques similar to those used by NIDA could be used to help combat opioid addiction by directing people towards rehabilitative resources when relevant searches are made. As with any innovation, there are both risks and opportunities.


Big Data

Ricky Costa, CEO of Quantum Stat – Interview Series

Ricky Costa is the CEO of Quantum Stat, a company that offers business solutions for NLP and AI initiatives.

What initially got you interested in artificial intelligence?

Randomness. I was reading a book on probability when I came across a famous theorem. At the time, I naively wondered if I could apply this theorem to a natural language problem I was attempting to solve at work. As it turns out, the algorithm already existed, unbeknownst to me: it was called Naïve Bayes, a very famous and simple generative model used in classical machine learning. That theorem was Bayes’ theorem. I felt this coincidence was a clue, and it planted a seed of curiosity to keep learning more.
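For readers unfamiliar with it, Naïve Bayes applies Bayes’ theorem with a strong feature-independence assumption and remains a standard baseline for text classification. A toy sketch with scikit-learn (the example texts and labels below are made up purely for illustration):

```python
# Minimal Naïve Bayes text classifier (toy data; purely illustrative).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, works perfectly", "terrible, broke after a day",
         "loved it, highly recommend", "awful experience, do not buy"]
labels = ["positive", "negative", "positive", "negative"]

# Bag-of-words counts feed a multinomial Naïve Bayes model.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["works great, would recommend"]))  # -> ['positive']
```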


You’re the CEO of Quantum Stat, a company which offers solutions for Natural Language Processing. How did you find yourself in this position?

When there’s a revolution in a new technology, some companies are more hesitant than others when facing the unknown. I started my company because pursuing the unknown is fun to me. I also felt it was the right time to venture into the field of NLP given all of the amazing research that has arrived in the past two years. The NLP community now has the capacity to achieve a lot more with a lot less, given the advent of new NLP techniques that require less data to scale performance.


For readers who may not be familiar with this field, could you share with us what Natural Language Processing does?

NLP is a subfield of AI and analytics that attempts to understand natural language in text, speech, or multimodal data (text and images/video) and process it to the point where you are driving insight and/or providing a valuable service. Value can arrive from several angles, from information retrieval in a company’s internal file system, to classifying sentiment in the news, to a GPT-2 Twitter bot that helps with your social media marketing (like the one we built a couple of weeks ago).
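As an illustration of that last example, generating draft social-media copy with a pretrained GPT-2 takes only a few lines with the Hugging Face transformers library; the prompt below is invented, and this is not Quantum Stat’s actual bot.

```python
# Tiny sketch of GPT-2 text generation (illustrative prompt; not Quantum Stat's bot).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
draft = generator("Excited to announce our new NLP toolkit:",
                  max_length=40, num_return_sequences=1)
print(draft[0]["generated_text"])
```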


You have a Bachelor of Arts from Hunter College in Experimental Psychology. Do you feel that understanding the human brain and human psychology is an asset when it comes to understanding and expanding the field of Natural Language Processing?

This is contrarian, but unfortunately, no. The analogy of neurons and deep neural networks is simply for illustration and instilling intuition. One can probably learn a lot more from complexity science and engineering. The difficulty with understanding how the brain works is that we are dealing with a complex system. “Intelligence” is an emergent phenomenon of the brain’s complexity interacting with its environment, and it is very difficult to pin down. Psychology and other social sciences, which depend on “reductionism” (top-down), don’t work under this complex paradigm. Here’s the intuition: imagine someone attempting to reduce the Beatles’ song “Let It Be” to the C major scale. There’s nothing about that scale that predicts “Let It Be” will emerge from it. The same follows for someone attempting to reduce behavior to neural activity in the brain.


Could you share with us why Big Data is so important when it comes to Deep Learning and more specifically Natural Language Processing?

As it stands, because deep learning models interpolate data, the more data you feed into a model, the fewer edge cases it will see when making an inference in the wild. This architecture “incentivizes” feeding large datasets to models in order to increase the accuracy of their output. However, if we want AI models to achieve more intelligent behavior, we need to look beyond how much data we have and more towards how we can improve a model’s ability to reason efficiently, which, intuitively, shouldn’t require lots of data. From a complexity perspective, the cellular automata experiments conducted in the past century by John von Neumann and Stephen Wolfram show that complexity can emerge from simple initial conditions and rules. What those conditions and rules should be with regard to AI is what everyone’s hunting for.
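The cellular-automata point is easy to reproduce: an elementary one-dimensional automaton such as Wolfram’s Rule 30 produces intricate, hard-to-predict patterns from a trivially simple local rule. A minimal sketch:

```python
# Elementary cellular automaton (Wolfram Rule 30): complex patterns emerge from
# a simple local rule, starting from a single live cell.
RULE = 30
WIDTH, STEPS = 63, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1  # single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # Next state of each cell is the RULE bit indexed by its (left, self, right) neighborhood.
    row = [(RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```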


You recently launched the ‘Big Bad NLP Database’. What is this database and why does it matter to those in the AI industry?

This database was created to give NLP developers seamless access to all the pertinent datasets in the industry. It indexes datasets, which has the nice secondary effect of making them queryable by users. Preprocessing data takes the majority of time in the deployment pipeline, and this database attempts to mitigate that problem as much as possible. In addition, it’s a free platform for anyone, whether you are an academic researcher, a practitioner, or an independent AI guru who wants to get up to speed with NLP data.


Quantum Stat currently offers end-to-end solutions. What are some of these solutions?

We help companies facilitate their NLP modeling pipeline by offering development at any stage. We can cover a wide range of services, from data cleaning in the preprocessing stage all the way up to model server deployment in production (these services are also highlighted on our homepage). Not all AI projects come to fruition, due to the unknown nature of how your specific data and project architecture will work with a state-of-the-art model. Given this uncertainty, our services give companies a chance to iterate on their project at a fraction of the cost of hiring a full-time ML engineer.


What recent advancement in AI do you find the most interesting?

The most important advancement of late is the transformer model; you may have heard of it in the form of BERT, RoBERTa, ALBERT, T5, and so on. These transformer models are very appealing because they allow researchers to achieve state-of-the-art performance with smaller datasets. Prior to transformers, a developer would need a very large dataset to train a model from scratch. Since these transformers come pretrained on billions of words, they allow for faster iteration of AI projects, and that’s what we are mostly involved with at the moment.
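To make concrete what “coming pretrained” buys in practice, here is a hedged sketch of fine-tuning a pretrained BERT checkpoint on a small labeled dataset with the Hugging Face transformers library; the dataset, subset sizes, and hyperparameters are illustrative and not Quantum Stat’s workflow.

```python
# Sketch of fine-tuning a pretrained transformer on a small labeled dataset
# (illustrative settings; any small domain-specific dataset would do).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")  # stand-in for a small labeled text dataset
encoded = dataset.map(lambda x: tokenizer(x["text"], truncation=True,
                                          padding="max_length", max_length=128),
                      batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),  # small subset
    eval_dataset=encoded["test"].select(range(500)),
)
# trainer.train()  # starts from pretrained weights, so a few thousand examples suffice
```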


Is there anything else that you would like to share about Quantum Stat?

We are working on a new project dealing with financial market sentiment analysis that will be released soon. We have leveraged multiple transformers to give unprecedented insight into how financial news unfolds in real time. Stay tuned!

To learn more, visit Quantum Stat or read our article on the Big Bad NLP Database.


Natural Language Processing

Quantum Stat Releases “Big Bad NLP Database”

Quantum Stat has released their “Big Bad NLP Database” in what is a big step forward for natural language processing (NLP). The database contains hundreds of different datasets for machine learning developers to utilize. 

According to the company, they provide solutions for NLP and AI initiatives. They do this through services ranging from preprocessing to web app development, a multi-faceted approach that includes machine learning and deep neural networks, chatbot and dialogue management, and their new NLP database.

The company also conducts primary and secondary research to help individuals analyze developments within the industry.

Central Hub of NLP Data

The decision to create the database, which is the world’s largest data library in natural language processing, came out of the need for a central hub to hold NLP data. The company aimed to make it more easily accessible and searchable than the alternative, which often requires researchers to search through multiple third-party libraries. 

The company has been developing the database for a number of weeks and currently has around 200 datasets. These span a wide variety, not just classics such as CommonCrawl and Penn Treebank, though those are included as well.

Along with the range of datasets come different NLP tasks. There are datasets focused on classification and question answering, but there are also datasets for text-to-SQL, speech recognition, and multimodal learning.

Quantum Stat wants the database to be community-driven with contributions from users. The company has opened its doors for anyone to send a new dataset or recommend changes. 

Another focus is to add datasets that diversify language, moving away from being strictly English. Their goal is to make the library more global and accessible to others. 

Upon entering the “Big Bad NLP Database,” a user is greeted by a clean and organized layout. The name of each dataset is listed, followed by its language and a detailed description, along with the number of instances, format, task, year created, and creator. Each dataset has a download link.

Various Databases

One will encounter datasets such as the Historical Newspapers Daily World Time Series dataset, containing daily contents of newspapers in the US and UK from 1836 to 1922; the SciQ Dataset, containing 13,679 crowdsourced science exam questions in physics, biology, and chemistry; CommonCrawl, containing data from 25 billion web pages; and MovieLens, a dataset containing 22,000,000 ratings and 580,000 tags for 33,000 movies by 240,000 users.

Quantum Stat’s impressive database comes at a time when researchers require larger and more diverse datasets due to advances in deep learning. Because of the massive amount of data contained within human language, each unique dataset makes it a little easier to process. The advancement of NLP relies on these databases, and Quantum Stat has contributed to quickening that advancement by gathering so many datasets in one space. 

NLP will be important in many aspects of society. It can help predict diseases based on electronic health records and a patient’s speech, help companies find out what customers are saying about a product, and identify fake news in a world where it runs rampant. 

The technology is advancing extremely rapidly, and it will not be long before it is capable of tackling these complex applications. 
