AI Teaches Itself Laws of Physics

In what is a monumental moment in both AI and physics, a neural network has “rediscovered” that Earth orbits the Sun. The new development could be critical in solving quantum-mechanics problems, and the researchers hope that it can be used to discover new laws of physics by identifying patterns within large data sets. 

The neural network, named SciNet, was fed measurements showing how the Sun and Mars appear from Earth. Scientists at the Swiss Federal Institute of Technology then tasked SciNet with predicting where the Sun and Mars would be at different times in the future. 

The research will be published in Physical Review Letters. 

Designing the Algorithm

The team, including physicist Renato Renner, set out to make an algorithm capable of distilling large data sets into basic formulae, the same way physicists do when deriving equations. To do this, the researchers based the algorithm on a neural network, a system modeled loosely on the human brain.

The formulas that were generated by SciNet placed the Sun at the center of our solar system. One of the remarkable aspects of this research was that SciNet did this similarly to how astronomer Nicolaus Copernicus discovered heliocentricity. 

The team highlighted this in a paper published on the preprint repository arXiv. 

“In the 16th century, Copernicus measured the angles between a distant fixed star and several planets and celestial bodies and hypothesized that the Sun, and not the Earth, is in the centre of our solar system and that the planets move around the Sun on simple orbits,” the team wrote. “This explains the complicated orbits as seen from Earth.”

The team tried to get SciNet to predict the movements of the Sun and Mars in the simplest way possible, so SciNet uses two sub-networks that send information back and forth. One network analyzes the data and learns from it, and the other makes and tests predictions based on that knowledge. Because the two networks are connected by just a few links, the information passing between them must be compressed, keeping communication simple.
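The published paper gives SciNet's actual architecture; the toy sketch below is only a loose illustration of the bottleneck idea described above, with every function name and the simple circular orbit invented for the example. An "encoder" compresses a history of observations into a tiny latent state, and a "decoder" predicts future observations from that state alone.

```python
import math

# Toy sketch of SciNet's two-part design (illustrative only, not the
# published code): an encoder compresses observations into a small latent
# "bottleneck", and a decoder predicts future observations from it. For a
# uniform circular orbit, two latent numbers (phase and angular speed)
# are enough to predict the angle at any future time.

def observe(t, omega=0.1, phase=0.5):
    """Angle of a body on a circular orbit at time t (the 'measurement')."""
    return (phase + omega * t) % (2 * math.pi)

def encoder(obs_t0, obs_t1):
    """Compress two consecutive observations into a 2-number latent state."""
    phase = obs_t0
    omega = (obs_t1 - obs_t0) % (2 * math.pi)  # angular speed per time step
    return (phase, omega)

def decoder(latent, t):
    """Predict the observation at a future time t from the latent state."""
    phase, omega = latent
    return (phase + omega * t) % (2 * math.pi)

# The two parts communicate only through the tiny latent state, which
# forces the learned representation to be as compact as possible.
latent = encoder(observe(0), observe(1))
prediction = decoder(latent, t=10)
print(abs(prediction - observe(10)) < 1e-9)  # → True
```

In SciNet both halves are learned networks rather than hand-written functions, but the constraint is the same: whatever passes through the few links between them must summarize the physics of the system.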

Conventional neural networks learn to identify and recognize objects from huge data sets, generating features that are then encoded in mathematical ‘nodes,’ considered the artificial equivalent of neurons. Unlike the formulas physicists derive, however, the representations a neural network learns are unpredictable and difficult to interpret.

Artificial Intelligence and Scientific Discoveries 

One of the tests involved giving the network simulated data about the movements of Mars and the Sun, as seen from Earth. From this vantage point, Mars’s path appears erratic and periodically reverses its course. In the 1500s, Nicolaus Copernicus realized that much simpler formulas predict the movements of the planets if they are taken to orbit the Sun.

When the neural network “discovered” similar formulas for Mars’s trajectory, it rediscovered one of the most important pieces of knowledge in history.

Mario Krenn is a physicist at the University of Toronto in Canada, and he works on using artificial intelligence to make scientific discoveries. 

SciNet rediscovered “one of the most important shifts of paradigms in the history of science,” he said. 

According to Renner, humans are still needed to interpret the equations and determine how they are connected to the movement of the planets around the Sun. 

Hod Lipson is a roboticist at Columbia University in New York City. 

“This work is important because it is able to single out the crucial parameters that describe a physical system,” he says. “I think that these kinds of techniques are our only hope of understanding and keeping pace with increasingly complex phenomena, in physics and beyond.”


Amazon Creates New Tool To Engineer AI Models With Just A Few Lines Of Code

As efforts to make machine learning easier and more accessible increase, different companies are creating tools to simplify the creation and optimization of deep learning models. As VentureBeat reports, Amazon has launched a new tool designed to help create and modify machine learning models in just a few lines of code.

Carrying out machine learning on a dataset is often a long, complex task. The data must be transformed and preprocessed, and then the proper model must be created and customized. Tweaking the hyperparameters of a model and then retraining can take a long time, and to help solve issues like this Amazon has launched AutoGluon. AutoGluon is an attempt to automate much of the overhead that typically comes with the creation of a machine learning system. For instance, not only do machine learning engineers have to decide on an appropriate architecture, they also need to experiment with the hyperparameters of the model. AutoGluon endeavors to make both the creation of the neural net architecture and the selection of appropriate hyperparameters easier.

AutoGluon is based on work begun by Microsoft and Amazon in 2017. The original Gluon was a machine learning interface designed to let developers mix and match optimized components to create their own models, whereas AutoGluon creates a model end-to-end based on the desires of the user. AutoGluon is reportedly capable of producing a model and selecting its hyperparameters, within a range of specified choices, with as few as three lines of code. The developer only has to provide a few arguments, such as a desired training completion time, and AutoGluon will produce the best model it can within the specified runtime and the available computational resources.
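AutoGluon's real API is documented by Amazon; the toy sketch below illustrates only the general idea described above, a search over candidate hyperparameters that respects a user-specified time budget. All names and the stand-in "loss" function are invented for the illustration.

```python
import time

# Illustrative sketch (not AutoGluon's actual API) of a time-budgeted
# automated search: try candidate hyperparameter settings, keep the best
# one found so far, and stop when the user's time limit is reached.

def toy_loss(lr):
    """Stand-in for 'train a model and return its validation loss'."""
    return (lr - 0.01) ** 2  # pretend the best learning rate is 0.01

def budgeted_search(candidates, time_limit_s):
    deadline = time.monotonic() + time_limit_s
    best_lr, best_loss = None, float("inf")
    for lr in candidates:
        if time.monotonic() > deadline:
            break  # respect the user's runtime budget
        loss = toy_loss(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr

print(budgeted_search([0.1, 0.01, 0.001], time_limit_s=1.0))  # → 0.01
```

Real systems like AutoGluon search far larger spaces (architectures as well as hyperparameters) and use smarter strategies than exhaustive trial, but the contract is the same: the user supplies a budget, and the tool returns the best model it found within it.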

AutoGluon is currently capable of creating models for image classification, text classification, object detection, and tabular prediction. AutoGluon’s API is also intended to allow more experienced developers to be able to customize the auto-generated model and improve performance. At the moment, AutoGluon is only available for Linux and it requires Python 3.6 or 3.7.

Jonas Mueller, part of the AutoGluon development team, explained the reasoning behind the creation of AutoGluon:

“We developed AutoGluon to truly democratize machine learning, and make the power of deep learning available to all developers. AutoGluon solves this problem as all choices are automatically tuned within default ranges that are known to perform well for the particular task and model.”

AutoGluon is the latest in a long line of tools intended to reduce the expertise and time needed to train machine learning models. Software libraries like Theano automated the calculation of gradient vectors, while Keras let developers easily specify certain desired hyperparameters. Amazon believes there is still more ground to cover in democratizing machine learning, such as making data pre-processing and hyperparameter tuning simpler.

The creation of AutoGluon seems to be part of an effort by Amazon to make training and deploying machine learning systems easier and more accessible. Amazon has also made machine learning-centric changes to its AWS suite. For example, upgrades have been made to the AWS SageMaker toolkit, which lets developers train and deploy models to the cloud. SageMaker comes with a variety of tools that let developers automatically choose algorithms, train and validate models, and improve model accuracy.

Expert Predictions For AI’s Trajectory In 2020

VentureBeat recently interviewed five leading experts in the AI field and asked them to predict where AI is heading over the coming year. The individuals interviewed were:

  • Soumith Chintala, creator of PyTorch.
  • Celeste Kidd, AI professor at the University of California.
  • Jeff Dean, chief of Google AI.
  • Anima Anandkumar, machine learning research director at Nvidia.
  • Dario Gil, IBM Research director.

Soumith Chintala

Chintala, the creator of PyTorch, arguably the most popular machine learning framework at the moment, predicted that 2020 will see a greater need for neural network hardware accelerators and for methods of boosting model training speeds. Chintala expected that the next couple of years will bring an increased focus on using GPUs optimally and on compiling automatically for new hardware. Beyond this, Chintala expected the AI community to pursue other ways of quantifying AI performance, placing less importance on pure accuracy. Factors for consideration include the amount of energy needed to train a model, how AI can be used to build the sort of society we want, and how a network’s output can be intuitively explained to human operators.

Celeste Kidd

Celeste Kidd has spent much of her recent career advocating for more responsibility on the part of designers of algorithms, tech platforms, and content recommendation systems. Kidd has often argued that systems designed to maximize engagement can have serious impacts on how people form their opinions and beliefs. More and more attention is being paid to the ethical use of AI algorithms and systems, and Kidd predicted that 2020 will bring increased awareness of how tech tools and platforms influence people’s lives and decisions, as well as a rejection of the idea that tech tools can be genuinely neutral in design.

“We really need to, as a society and especially as the people that are working on these tools, directly appreciate the responsibility that that comes with,” Kidd said.

Jeff Dean

Jeff Dean, the current head of Google AI, predicted that 2020 will bring progress in multimodal learning and multitask learning. Multimodal learning is when AI is trained on multiple types of media at once, while multitask learning aims to let AI train on multiple tasks at once. Dean also expected further progress on natural language processing models based on the Transformer, such as Google’s BERT and the other models that topped the GLUE leaderboards. Dean added that he would like to see less emphasis on chasing state-of-the-art performance and more on creating models that are robust and flexible.

Anima Anandkumar

Anandkumar expected that the AI community will have to grapple with many challenges in 2020, especially the need for more diverse datasets and the need to ensure people’s privacy when training on data. Anandkumar explained that while face recognition often gets the most attention, there are many areas where people’s privacy can be violated and that these issues may come to the forefront of discussion during 2020.

Anandkumar also expected that further advancements will be made regarding Transformer based natural language processing models.

“We are still not at the stage of dialogue generation that’s interactive, that can keep track and have natural conversations. So I think there will be more serious attempts made in 2020 in that direction,” she said.

Finally, Anandkumar expected the coming year to see more development of iterative algorithms and self-supervision. These training methods allow AI systems to train themselves in some respects and can help create models that improve by training on unlabeled data.

Dario Gil

Gil predicted that in 2020 there will be more progress towards creating AI in a more computationally efficient manner, as the way deep neural networks are currently trained is inefficient in many ways. Because of this, Gil expected that this year will see progress in terms of creating reduced-precision architectures and generally training more efficiently. Much like some of the other experts who were interviewed, Gil predicted that in 2020 researchers will start to focus more on metrics aside from accuracy. Gil expressed an interest in neural symbolic AI, as IBM is examining ways to create probabilistic programming models using neural symbolic approaches. Finally, Gil emphasized the importance of making AI more accessible to those interested in machine learning and getting rid of the perception that only geniuses can work with AI and do data science.

“If we leave it as some mythical realm, this field of AI, that’s only accessible to the select PhDs that work on this, it doesn’t really contribute to its adoption,” Gil said.

What are Convolutional Neural Networks?

Perhaps you’ve wondered how Facebook or Instagram is able to automatically recognize faces in an image, or how Google lets you search the web for similar photos just by uploading a photo of your own. These features are examples of computer vision, and they are powered by convolutional neural networks (CNNs). Yet what exactly are convolutional neural networks? Let’s take a deep dive into the architecture of a CNN and understand how they operate.

Defining Neural Networks

Before we begin talking about convolutional neural networks, let’s take a moment to define regular neural networks. There’s another article on the topic of neural networks available, so we won’t go too deep into them here. Briefly, they are computational models inspired by the human brain. A neural network operates by taking in data and manipulating it by adjusting “weights”, which encode assumptions about how the input features are related to each other and to the object’s class. As the network is trained, the weights are adjusted, ideally converging on values that accurately capture the relationships between the features.
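To make the "adjusting weights" idea concrete, here is a deliberately tiny sketch (invented for this article, not taken from any library): a one-weight model is nudged toward data generated with a true weight of 2.0 by repeatedly stepping against the error gradient.

```python
# Minimal sketch of training by weight adjustment: the model y = w * x
# is fitted to data generated with a true weight of 2.0.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs with y = 2x

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate
for _ in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the weight against the gradient

print(round(w, 3))  # → 2.0
```

A real network adjusts millions of weights at once, but each one is updated by the same principle shown here.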

This is how a feed-forward neural network operates. A CNN is composed of two halves: a group of convolutional layers and a feed-forward neural network.

What’s A Convolution?

What are the “convolutions” that happen in a convolutional neural network? A convolution is a mathematical operation that applies a set of weights across the image, essentially creating a representation of parts of the image. This set of weights is referred to as a kernel or filter. The filter is smaller than the entire input image, covering just a subsection of it at a time. The values in the filter are multiplied with the image values they cover, the filter is then moved over a new part of the image, and the process is repeated until the entire image has been covered.

Another way to think about this is to imagine a brick wall, with the bricks representing the pixels in the input image. A “window” is being slid back and forth along the wall, which is the filter. The bricks that are viewable through the window are the pixels having their value multiplied by the values within the filter. For this reason, this method of creating weights with a filter is often referred to as the “sliding windows” technique.

The output from the filters being moved around the entire input image is a two-dimensional array representing the whole image. This array is called a “feature map”.
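The sliding-window process above can be sketched in a few lines of plain Python (a bare-bones illustration, not how libraries implement it in practice): the filter is multiplied element-wise against each patch it covers, and the sums form the feature map.

```python
# Sketch of the sliding-window convolution described above.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Multiply the filter against the patch it currently covers
            total = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
            row.append(total)
        feature_map.append(row)
    return feature_map

# A vertical-edge filter applied to a 3x4 image with a bright right half:
# the feature map lights up exactly where the dark-to-bright edge sits.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, kernel))  # → [[0, 18, 0], [0, 18, 0]]
```

Production libraries vectorize this heavily and add options like padding and stride, but the arithmetic is the same.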

Why Convolutions?

What is the purpose of creating convolutions anyway? Convolutions are necessary because a neural network has to be able to interpret the pixels in an image as numerical values. The function of the convolutional layers is to convert the image into numerical values that the neural network can interpret and then extract relevant patterns from. The job of the filters in the convolutional network is to create a two-dimensional array of values that can be passed into the later layers of a neural network, those that will learn the patterns in the image.

Filters And Channels

Photo: cecebur via Wikimedia Commons, CC BY-SA 4.0

CNNs don’t use just one filter to learn patterns from the input images. Multiple filters are used, as the different arrays created by the different filters lead to a more complex, rich representation of the input image. Common numbers of filters for CNNs are 32, 64, 128, and 512. The more filters there are, the more opportunities the CNN has to examine the input data and learn from it.

A CNN analyzes the differences in pixel values in order to determine the borders of objects. In a grayscale image, the CNN would only look at the differences in black and white, light-to-dark terms. When the images are color images, not only does the CNN take dark and light into account, but it has to take the three different color channels – red, green, and blue – into account as well. In this case, the filters possess 3 channels, just like the image itself does. The number of channels that a filter has is referred to as its depth, and the number of channels in the filter must match the number of channels in the image.
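The channel-matching rule above can be made concrete with a small sketch (invented for this article): a filter for an RGB image carries one 2D grid of weights per channel, and the products from all three channels are summed into a single feature-map value.

```python
# Sketch of filter depth matching image depth: each of the filter's
# channels is applied to the corresponding image channel, and the
# products from all channels are summed into one feature-map value.

def apply_filter_at(image, filt, i, j):
    """One feature-map value at position (i, j)."""
    channels = len(filt)  # filter depth must equal image depth
    assert channels == len(image)
    kh, kw = len(filt[0]), len(filt[0][0])
    return sum(
        image[c][i + a][j + b] * filt[c][a][b]
        for c in range(channels)
        for a in range(kh)
        for b in range(kw)
    )

# A 3-channel (R, G, B) 2x2 image and a matching depth-3 2x2 filter
image = [
    [[1, 0], [0, 0]],  # red channel
    [[0, 1], [0, 0]],  # green channel
    [[0, 0], [1, 0]],  # blue channel
]
filt = [
    [[2, 0], [0, 0]],
    [[0, 3], [0, 0]],
    [[0, 0], [4, 0]],
]
print(apply_filter_at(image, filt, 0, 0))  # → 2 + 3 + 4 = 9
```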

The Architecture Of A Convolutional Neural Network

Let’s take a look at the complete architecture of a convolutional neural network. A convolutional layer is found at the beginning of every convolutional network, as it’s necessary to transform the image data into numerical arrays. However, convolutional layers can also come after other convolutional layers, meaning that these layers can be stacked on top of one another. Having multiple convolutional layers means that the outputs from one layer can undergo further convolutions and be grouped together in relevant patterns. Practically, this means that as the image data proceeds through the convolutional layers, the network begins to “recognize” more complex features of the image.
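One practical consequence of stacking convolutional layers is that the spatial size of the data changes as it flows through them. As a small illustrative sketch (assuming unpadded "valid" convolutions; real CNNs often add padding to counteract this shrinkage), each layer with a k x k filter reduces an n x n feature map to (n - k + 1) x (n - k + 1):

```python
# Sketch of how stacked, unpadded convolutional layers shrink the
# spatial size of the feature maps.

def output_size(n, kernel_sizes):
    for k in kernel_sizes:
        n = n - k + 1  # valid (unpadded) convolution
    return n

# A 28x28 input passing through three stacked 3x3 convolutional layers
print(output_size(28, [3, 3, 3]))  # → 22
```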

The early layers of a ConvNet are responsible for extracting the low-level features, such as the pixels that make up simple lines. Later layers of the ConvNet will join these lines together into shapes. This process of moving from surface-level analysis to deep-level analysis continues until the ConvNet is recognizing complex shapes like animals, human faces, and cars.

After the data has passed through all of the convolutional layers, it proceeds into the densely connected part of the CNN. The densely connected layers look like a traditional feed-forward neural network: a series of nodes arranged into layers that are connected to one another. The data proceeds through these layers, which learn the patterns extracted by the convolutional layers, and in doing so the network becomes capable of recognizing objects.
