
Thought Leaders

Expert Predictions For AI’s Trajectory In 2020


VentureBeat recently interviewed five leading experts in the AI field and asked them to predict where AI is heading over the coming year. The individuals interviewed for their predictions were:

  • Soumith Chintala, creator of PyTorch.
  • Celeste Kidd, professor of psychology at the University of California, Berkeley.
  • Jeff Dean, chief of Google AI.
  • Anima Anandkumar, machine learning research director at Nvidia.
  • Dario Gil, IBM Research director.

Soumith Chintala

Chintala, the creator of PyTorch, arguably the most popular machine learning framework at the moment, predicted that 2020 will bring a greater need for neural network hardware accelerators and for methods of boosting model training speeds. He expected the next couple of years to bring an increased focus on using GPUs optimally and on compiling automatically for new hardware. Beyond this, Chintala expected the AI community to pursue other ways of quantifying AI performance more aggressively, placing less importance on pure accuracy. Factors for consideration include the amount of energy needed to train a model, how AI can be used to build the sort of society we want, and how a network's output can be intuitively explained to human operators.

Celeste Kidd

Celeste Kidd has spent much of her recent career advocating for more responsibility on the part of the designers of algorithms, tech platforms, and content recommendation systems. Kidd has often argued that systems designed to maximize engagement can end up seriously influencing how people form their opinions and beliefs. More and more attention is being paid to the ethical use of AI algorithms and systems, and Kidd predicted that 2020 will bring an increased awareness of how tech tools and platforms influence people's lives and decisions, as well as a rejection of the idea that tech tools can be genuinely neutral in design.

“We really need to, as a society and especially as the people that are working on these tools, directly appreciate the responsibility that that comes with,” Kidd said.

Jeff Dean

Jeff Dean, the current head of Google AI, predicted that 2020 will bring progress in multimodal learning and multitask learning. Multimodal learning trains AI on multiple types of media at once, while multitask learning aims to let AI train on multiple tasks at once. Dean also expected further progress on natural language processing models based on the Transformer architecture, such as Google's BERT and the other models that topped the GLUE leaderboards. Finally, Dean said he would like to see less emphasis on chasing state-of-the-art performance and more on creating models that are robust and flexible.
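
To make the multitask idea concrete, here is a minimal sketch in PyTorch (the framework Chintala created): one shared encoder feeds two task-specific heads, so gradients from both tasks shape a single representation. The architecture and sizes are illustrative assumptions, not anything Google has described.

```python
import torch
import torch.nn as nn

# Minimal multitask sketch (illustrative only): one shared encoder feeds
# two task-specific heads, so gradients from both tasks update the
# shared representation.
class MultitaskNet(nn.Module):
    def __init__(self, in_dim=128, hidden=64, n_classes=10):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classify_head = nn.Linear(hidden, n_classes)  # task A: classification
        self.regress_head = nn.Linear(hidden, 1)           # task B: regression

    def forward(self, x):
        h = self.shared(x)
        return self.classify_head(h), self.regress_head(h)

model = MultitaskNet()
x = torch.randn(32, 128)                       # dummy batch of inputs
logits, value = model(x)
# The two task losses are summed, so both tasks train the shared encoder.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,))) \
     + nn.functional.mse_loss(value.squeeze(-1), torch.randn(32))
loss.backward()
```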

Anima Anandkumar

Anandkumar expected that the AI community will have to grapple with many challenges in 2020, especially the need for more diverse datasets and the need to ensure people’s privacy when training on data. Anandkumar explained that while face recognition often gets the most attention, there are many areas where people’s privacy can be violated and that these issues may come to the forefront of discussion during 2020.

Anandkumar also expected further advances in Transformer-based natural language processing models.

“We are still not at the stage of dialogue generation that’s interactive, that can keep track and have natural conversations. So I think there will be more serious attempts made in 2020 in that direction,” she said.

Finally, Anandkumar expected that the coming year will see further development of iterative algorithms and self-supervision. These training methods allow AI systems to train themselves to some extent, potentially helping to create models that improve by training on unlabeled data.

Dario Gil

Gil predicted that 2020 will bring progress toward creating AI in a more computationally efficient manner, as the way deep neural networks are currently trained is inefficient in many respects. Because of this, Gil expected this year to bring progress on reduced-precision architectures and on training more efficiently in general. Much like some of the other experts interviewed, Gil predicted that in 2020 researchers will begin to focus on metrics beyond accuracy. Gil also expressed an interest in neuro-symbolic AI, as IBM is examining ways to create probabilistic programming models using neuro-symbolic approaches. Finally, Gil emphasized the importance of making AI more accessible to those interested in machine learning and dispelling the perception that only geniuses can work with AI and do data science.

“If we leave it as some mythical realm, this field of AI, that’s only accessible to the select PhDs that work on this, it doesn’t really contribute to its adoption,” Gil said.



Artificial Neural Networks

AI Model Can Take Blurry Images And Enhance Resolution By 60 Times


Researchers from Duke University have developed an AI model capable of taking highly blurry, pixelated images and rendering them with high detail. According to TechXplore, the model can take relatively few pixels and scale the images up to create realistic-looking faces with approximately 64 times the resolution of the original image. The model hallucinates, or imagines, features that fall between the lines of the original image.

The research is an example of super-resolution. As Cynthia Rudin of Duke University's computer science team explained to TechXplore, the project sets a record for super-resolution, as never before have images been created with such fidelity from so small a sample of starting pixels. The researchers were careful to emphasize that the model doesn't actually recreate the face of the person in the original, low-quality image. Instead, it generates new faces, filling in details that weren't there before. For this reason, the model couldn't be used for anything like security systems, as it wouldn't be able to turn out-of-focus images into an image of a real person.

Traditional super-resolution techniques operate by guessing which pixels are needed to produce a high-resolution image, based on images the model has seen beforehand. Because the added pixels are the result of guesses, not all of them will match their surrounding pixels, and certain regions of the image may look fuzzy or warped. The Duke researchers trained their AI model differently. Their model operates by taking low-resolution images and adding detail over time, referencing high-resolution AI-generated faces as examples. It searches among AI-generated faces for ones that resemble the target image when scaled down to the target's size.
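
The core trick, searching for a generated face whose downscaled version matches the low-resolution input, can be sketched in a few lines of PyTorch. This is a hedged illustration of the idea, not the PULSE code; `generator` is a hypothetical stand-in for a pretrained face generator.

```python
import torch
import torch.nn.functional as F

# Sketch of the search idea: optimize a latent code so that the generated
# face, when scaled down, matches the low-res target. `generator` is assumed
# to be a pretrained network mapping a latent vector to a high-res face.
def search_latent(generator, lr_image, steps=500, step_size=0.1):
    z = torch.randn(1, 512, requires_grad=True)     # latent code to optimize
    opt = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        opt.zero_grad()
        sr = generator(z)                           # candidate high-res face
        # Only require that the candidate *scales down* to the input image.
        down = F.interpolate(sr, size=lr_image.shape[-2:],
                             mode='bicubic', align_corners=False)
        loss = F.mse_loss(down, lr_image)
        loss.backward()
        opt.step()
    return generator(z).detach()
```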

The research team created a Generative Adversarial Network (GAN) model to handle the creation of new images. GANs are actually two neural networks, both trained on the same dataset and pitted against one another. One network is responsible for generating fake images that mimic the real images in the training dataset, while the second network is responsible for distinguishing the fake images from the genuine ones. The first network is notified whenever its images are identified as fake, and it improves until the fake images are, ideally, indistinguishable from the genuine ones.
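
A toy version of that adversarial loop, with assumed sizes and nothing taken from the Duke model, looks like this in PyTorch:

```python
import torch
import torch.nn as nn

# Toy GAN loop: G fabricates images from noise, D scores real vs. fake.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 784)        # stand-in for a batch of real images
    fake = G(torch.randn(32, 64))
    # Discriminator: push real toward label 1, generated toward label 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: improve until D scores its fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```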

The researchers have dubbed their super-resolution model PULSE, and it consistently produces high-quality images even when given images so blurry that other super-resolution methods can't create high-quality images from them. The model is even capable of producing realistic-looking faces from images where the features are almost indistinguishable. For instance, given a face image at 16 x 16 resolution, it can create a 1024 x 1024 image. More than a million pixels are added in the process, filling in details like strands of hair, wrinkles, and even lighting. When the researchers had people rate 1,440 PULSE-generated images against images generated by other super-resolution techniques, the PULSE-generated images consistently scored the best.
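
The pixel arithmetic behind those figures is straightforward:

```python
# 16 x 16 -> 1024 x 1024 is a 64x increase per dimension,
# and fills in over a million new pixels.
scale = 1024 // 16                   # 64
added = 1024 * 1024 - 16 * 16        # 1,048,576 - 256 = 1,048,320
print(scale, added)
```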

While the researchers applied their model to images of people's faces, the same techniques could be applied to almost any object. Low-resolution images of various objects could be used to create high-resolution images of those objects, opening up possible applications across industries and fields including microscopy, satellite imagery, education, manufacturing, and medicine.


Artificial Neural Networks

New Research Suggests Artificial Brains Could Benefit From Sleep


New research coming from Los Alamos National Laboratory suggests that artificial brains, like living brains, almost certainly benefit from periods of rest.

The research will be presented at the Women in Computer Vision Workshop in Seattle on June 14. 

Yijing Watkins is a Los Alamos National Laboratory computer scientist. 

“We study spiking neural networks, which are systems that learn much as living brains do,” said Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

Solving Instability in Network Simulations

Watkins and the team found that continuous periods of unsupervised learning led to instability in the network simulations. However, once the team exposed the networks to states analogous to the waves living brains experience during sleep, stability was restored.

“It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

The team made the discovery while working on neural networks based on how humans and other biological systems learn to see. They faced challenges in stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without having previous examples to use for comparison.
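
For readers unfamiliar with the term, conventional (non-spiking) dictionary learning can be sketched with scikit-learn: the model learns a set of basis "atoms" and sparse codes that reconstruct unlabeled inputs. This is only an illustration of the general technique, not the Los Alamos spiking implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

X = np.random.rand(500, 64)          # stand-in for unlabeled image patches
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)        # sparse coefficients per sample
atoms = dico.components_             # learned dictionary atoms
print(codes.shape, atoms.shape)      # (500, 32), (32, 64)
```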

Garrett Kenyon is a computer scientist at Los Alamos and study coauthor.

“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”

Sleep as a Last Resort Solution

According to the researchers, exposing the networks to an artificial analog of sleep was their last resort for stabilizing them. They experimented with various types of noise, roughly similar to the static between stations on a radio, and found the best results came from waves of Gaussian noise, which spans a wide range of frequencies and amplitudes.
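
The paper's title points to sinusoidally modulated noise as the sleep surrogate, so a plausible sketch of the "sleep" input, with all parameters assumed, is Gaussian noise under a slow sinusoidal envelope:

```python
import numpy as np

# Hypothetical sketch (not the Los Alamos code): during a "sleep" phase the
# network is driven by Gaussian noise whose amplitude follows a slow wave.
def sleep_input(n_neurons, n_steps, base_sigma=1.0, wave_hz=1.0, dt=0.001):
    t = np.arange(n_steps) * dt
    envelope = 0.5 * (1 + np.sin(2 * np.pi * wave_hz * t))  # slow wave in [0, 1]
    noise = np.random.randn(n_steps, n_neurons)
    return noise * (base_sigma * envelope)[:, None]

drive = sleep_input(n_neurons=100, n_steps=2000)  # fed in place of training data
```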

The researchers hypothesized that the noise mimics the input biological neurons receive during slow-wave sleep. The results suggest that slow-wave sleep could play a role in ensuring that cortical neurons maintain their stability and do not hallucinate.

The team will now work on implementing the algorithm on Intel's Loihi neuromorphic chip, hoping that sleep-like states will help it process information from a silicon retina camera stably and in real time. If the research determines that artificial brains benefit from sleep, the same is likely true for androids and other intelligent machines.

Source: "Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model," CVPR Women in Computer Vision Workshop, June 14, 2020, Seattle, Washington, United States.


Artificial Neural Networks

AI Model Used To Map Dryness Of Forests, Predict Wildfires


A new deep learning model designed by researchers from Stanford University maps moisture levels across 12 western states to help predict wildfires and to help fire management teams get ahead of potentially destructive blazes.

Fire management teams aim to predict where the worst blazes might occur so that preventive measures like prescribed burns can be carried out. Predicting wildfires' points of origin and spreading patterns requires information on fuel amounts and moisture levels across the target region. Collecting and analyzing this data at the speed required to be useful to wildfire management teams is difficult, but deep learning models could help automate these critical processes.

As Futurity recently reported, the Stanford researchers collected climate data and designed a model that renders detailed maps of moisture levels across 12 western states, including the Pacific Coast states, Texas, Wyoming, Montana, and the Southwest. According to the researchers, although the model is still being refined, it is already capable of revealing areas at high risk of forest fires where the landscape is unusually dry.

The typical method of collecting fuel and moisture data for a region is painstaking: researchers collect vegetation samples from trees and weigh them, then dry the samples out and reweigh them. Comparing the dry weight to the wet weight reveals the amount of moisture in the vegetation. This process is long and complex, and it is viable only in certain areas and for certain species of vegetation. However, the data collected through decades of this work has been used to build the National Fuel Moisture Database, comprising over 200,000 records. A region's fuel moisture content is well known to be linked to wildfire risk, though it is still unclear how much its influence varies between ecosystems and from one plant species to another.
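
The wet-versus-dry comparison reduces to a simple calculation, moisture expressed as a percentage of the sample's dry weight (a standard definition of live fuel moisture content):

```python
# Fuel moisture content: water mass relative to the sample's dry weight.
def fuel_moisture_percent(wet_weight_g, dry_weight_g):
    return 100.0 * (wet_weight_g - dry_weight_g) / dry_weight_g

print(fuel_moisture_percent(12.0, 8.0))  # a 12 g sample drying to 8 g -> 50.0
```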

Krishna Rao, a PhD student in earth system science at Stanford and lead author of the new study, explained to Futurity that machine learning gives researchers the ability to test assumptions about the links between live fuel moisture and weather across different ecosystems. Rao and colleagues trained a recurrent neural network model on data from the National Fuel Moisture Database. The model was then tested by estimating fuel moisture levels from measurements collected by space-borne sensors. The data included signals from synthetic aperture radar (SAR), which uses microwave radar signals that penetrate to the surface, as well as visible light bouncing off the planet's surface. The training and validation data consisted of three years of measurements, starting in 2015, for approximately 240 sites across the western US.
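
A hedged sketch of that kind of recurrent setup, with feature counts and sizes as placeholder assumptions rather than the Stanford model's actual configuration:

```python
import torch
import torch.nn as nn

# An LSTM reads a time series of satellite measurements for one site
# (e.g., SAR backscatter and visible reflectance) and regresses the
# current fuel moisture percentage. All dimensions are illustrative.
class MoistureRNN(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])     # predict from the last time step

model = MoistureRNN()
seq = torch.randn(4, 36, 8)              # 4 sites, 36 time steps, 8 features
pred = model(seq)                         # (4, 1) estimated moisture values
loss = nn.functional.mse_loss(pred.squeeze(-1), torch.rand(4) * 200)
loss.backward()
```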

The researchers ran analyses on various types of land cover, including sparse vegetation, grasslands, shrublands, needleleaf evergreen forests, and broadleaf deciduous forests. The model's predictions were most accurate, matching NFMD measurements most reliably, in shrubland regions. This is fortunate, as shrublands comprise approximately 45% of the ecosystems found throughout the US West. Shrublands, particularly chaparral, are often uniquely susceptible to fire, as seen in many of the fires that have burned throughout California in recent years.

The predictions generated by the model have been used to create an interactive map that fire management agencies could one day use to prioritize regions for fire control and discern other relevant patterns. The researchers believe that, with further training and refinement, the model could become a practical tool for fire prediction.

As Alexandra Konings, assistant professor of earth systems science at Stanford, explained to Futurity:

“Creating these maps was the first step in understanding how this new fuel moisture data might affect fire risk and predictions. Now we’re trying to really pin down the best ways to use it for improved fire prediction.”
