Artificial Neural Networks

Newly Developed Artificial Neural Network Set To Quickly Solve A Physics Problem

Ever since Sir Isaac Newton, the so-called three-body problem has confounded mathematicians and physicists. As ScienceAlert explains, “the three-body problem involves calculating the movement of three gravitationally interacting bodies – such as the Earth, the Moon, and the Sun, for example – given their initial positions and velocities.”
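
For reference, the dynamics behind the problem are nothing more than Newtonian gravity applied to each pair of bodies. In standard notation (a textbook statement, not taken from the paper itself), the acceleration of body i is

\[
\ddot{\mathbf{r}}_i \;=\; G \sum_{j \neq i} \frac{m_j \left( \mathbf{r}_j - \mathbf{r}_i \right)}{\lVert \mathbf{r}_j - \mathbf{r}_i \rVert^{3}}, \qquad i = 1, 2, 3.
\]

These three coupled equations have no general closed-form solution, which is why trajectories normally have to be integrated numerically step by step.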

On the surface, the problem seems a simple one, but in reality it is extremely hard to tackle. One historical consequence was the adoption of marine chronometers to calculate positions at sea, rather than solving the three-body problem to determine a ship’s position from the Moon and the stars.

As the study of the Universe has advanced, the three-body problem has become central for researchers attempting to figure out “how black hole binaries might interact with single black holes, and from there how some of the most fundamental objects of the Universe interact with each other.”

To make these calculations feasible in a reasonable time, researchers turned to deep artificial neural networks (ANNs). The new system was developed by a team comprising researchers from the University of Edinburgh and the University of Cambridge in the UK, the University of Aveiro in Portugal, and Leiden University in the Netherlands.

The ANN this team developed was trained on a database of existing three-body problems as well as a selection of solutions that scientists had previously worked out.

The results were more than promising: the trained ANN can reportedly find solutions “100 million times faster than existing techniques.”

The resulting research paper, “Newton vs the machine: solving the chaotic three-body problem using deep neural networks,” states that “A trained ANN can replace existing numerical solvers, enabling fast and scalable simulations of many-body systems to shed light on outstanding phenomena such as the formation of black-hole binary systems or the origin of the core-collapse in dense star clusters.”

ScienceAlert notes that “the researchers simplified the process to only include three equal-mass particles in a plane, all starting with zero velocity, and then ran an existing three-body problem solver called Brutus 10,000 times over (9,900 for training and 100 for validation).”
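
As a rough illustration of that setup (and emphatically not the authors’ actual architecture or code), the sketch below trains a small feed-forward network to imitate a numerical solver’s output. The layer sizes, the training loop, and the placeholder arrays brutus_inputs and brutus_positions are all assumptions made for the example.

```python
# Minimal sketch: train a feed-forward network to emulate a three-body solver.
# Assumes pre-generated (hypothetical) arrays of Brutus-style trajectories:
#   brutus_inputs:    shape (N, 3) -> one particle's initial (x, y) plus a time t
#   brutus_positions: shape (N, 4) -> (x, y) of two particles at time t
# (With equal masses, zero initial velocity, and a fixed centre of mass, the
#  third particle's position follows from the other two.)
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),          # predicted (x, y) for particles 1 and 2
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(brutus_inputs, brutus_positions, epochs=100):
    x = torch.as_tensor(brutus_inputs, dtype=torch.float32)
    y = torch.as_tensor(brutus_positions, dtype=torch.float32)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # match the numerical solver's output
        loss.backward()
        optimizer.step()
    return model
```

Once trained on enough solver runs, evaluating the network for a new initial condition is a single forward pass, which is where the reported speed-up over step-by-step numerical integration comes from.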

After training, the new ANN produced impressive results: given 5,000 new scenarios to work with, it matched the results Brutus achieved almost exactly.

While the study has yet to be peer-reviewed and is still more of a proof of concept at this stage, it certainly shows that trained neural networks “might be able to work alongside Brutus and similar systems, jumping in when three-body calculations become too complex for our current models to cope with.”

As this team of researchers concludes in their paper, “Eventually, we envision, that network may be trained on richer chaotic problems, such as the 4 and 5-body problem, reducing the computational burden even more.”

 

Artificial Neural Networks

New Technique Lets AI Intuitively Understand Some Physics

Artificial intelligence has been able to develop an understanding of physics through reinforcement learning for some time now, but a new technique developed by researchers at MIT could help engineers design models that demonstrate an intuitive understanding of physics.

Psychological research has shown that, to some extent, humans have an intuitive understanding of the laws of physics. Infants have expectations about how objects should interact and move, and they react with surprise when these expectations are violated. The research conducted by the MIT team has the potential not only to drive new applications of artificial intelligence but also to help psychologists understand how infants perceive and learn about the world.

The model designed by the MIT team is called ADEPT, and it functions by making predictions about how objects should behave in a physical space. The model observes objects and keeps track of a “surprise” metric as it does so. If something unexpected happens, the model responds by increasing its surprise value. Unexpected and seemingly impossible actions, such as an object teleporting or vanishing altogether, produce a dramatic rise in surprise.

The goal of the research team was to get their model to register the same levels of surprise that humans register when they see objects behaving in implausible ways.

ADEPT has two major components: a physics engine and an inverse graphics module. The physics engine is responsible for predicting how an object will move, generating future representations of the object across a range of possible states. The inverse graphics module, meanwhile, is responsible for creating the object representations that are fed into the physics engine.

The inverse graphics module tracks several attributes of an object, such as velocity, shape, and orientation, extracting this information from video frames. It focuses only on the most salient details, ignoring anything that won’t help the physics engine interpret the object and predict new states. By concentrating on the most important details, the model is better able to generalize to new objects. The physics engine then takes these object descriptions and simulates more complex physical behavior, like fluidity or rigidity, in order to make predictions about how the object should behave.

After this intake process, the model observes the actual next frame in the video, which it uses to recalculate its probability distribution over possible object behaviors. The surprise is inversely proportional to the probability that an event will occur, so the model only registers great surprise when there is a major mismatch between what it believes should happen next and what actually happens next.
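
Concretely, this kind of surprise signal can be written as the negative log-probability of the observed outcome under the model’s belief. The snippet below is only an illustration of that idea, not the ADEPT implementation; the Gaussian belief over an object’s next 2D position is an assumption made for the example.

```python
# Sketch: surprise as the negative log-probability of the observed outcome
# under the model's predicted belief. Not the ADEPT codebase; the Gaussian
# belief over the object's next 2D position is assumed for illustration.
import numpy as np

def surprise(predicted_mean, predicted_cov, observed_position):
    """Return -log p(observation | belief): low probability -> high surprise."""
    diff = observed_position - predicted_mean
    inv_cov = np.linalg.inv(predicted_cov)
    _, log_det = np.linalg.slogdet(predicted_cov)
    k = len(predicted_mean)
    log_prob = -0.5 * (diff @ inv_cov @ diff + log_det + k * np.log(2 * np.pi))
    return -log_prob

# An observation close to the prediction barely registers; a "teleport" far
# from the predicted region produces a much larger surprise value.
belief_mean = np.array([1.0, 0.5])
belief_cov = 0.05 * np.eye(2)
print(surprise(belief_mean, belief_cov, np.array([1.02, 0.48])))  # low surprise
print(surprise(belief_mean, belief_cov, np.array([4.00, -2.00])))  # high surprise
```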

The research team needed some way to compare the surprise of their model to the surprise of people observing the same object behavior. In developmental psychology, researchers often test infants by showing them two different videos. In one video, an object behaves as you would expect objects to in the real world, not spontaneously vanishing or teleporting. In the other video, an object violates the laws of physics in some fashion. The research team took these same basic concepts and had 60 adults watch 64 different videos of both expected and unexpected physical behavior. The participants were then asked to rate their surprise at various moments in the video on a scale of 1 to 100.

Analysis of the model’s performance demonstrated that it performed quite well on videos where an object was moved behind a wall and had disappeared when the wall was removed, typically matching human surprise levels in these cases. The model also appeared to be surprised by videos where humans didn’t demonstrate surprise but arguably should have. For example, in order for an object moving behind a wall at a given speed to immediately come out on the other side, it must have either teleported or experienced a dramatic increase in speed.

When compared to the performance of traditional neural networks that are capable of learning from observation but do not explicitly log the representation of an object, researchers found that the ADEPT network was much more accurate at discriminating between surprising and unsurprising scenes and that ADEPT’s performance aligned with human reactions more closely.

The MIT research team is aiming to do more research and gain deeper insight into how infants observe the world around them and learn from these observations, incorporating their findings into new versions of the ADEPT model.

Artificial Neural Networks

Google Creates New Explainable AI Program To Enhance Transparency and Debuggability

Google recently announced the creation of a new cloud platform intended to make it easier to gain insight into how an AI program renders decisions, simplifying debugging and enhancing transparency. As reported by The Register, the cloud platform is called Explainable AI, and it marks a major investment by Google in AI explainability.

Artificial neural networks are employed in many, perhaps most, of the major AI systems in use around the world today. The neural networks that run major AI applications can be extraordinarily large and complex, and as a system’s complexity grows it becomes harder and harder to intuit why a particular decision has been made. As Google explains in its white paper, as AI systems become more powerful they also become more complex, and hence harder to debug. Transparency is also lost, which means that biased algorithms can be difficult to recognize and address.

The fact that the reasoning which drives the behavior of complex systems is so hard to interpret often has drastic consequences. In addition to making it hard to combat AI bias, it can make it extraordinarily difficult to tell spurious correlations from genuinely important and interesting correlations.

Many companies and research groups are exploring how to address the “black box” problem of AI and create a system that adequately explains why certain decisions have been made by an AI. Google’s Explainable AI platform represents its own bid to tackle this challenge. Explainable AI consists of three different tools. The first tool is a system that describes which features have been selected by an AI and displays an attribution score representing the amount of influence a particular feature has on the final prediction. Google’s report on the tool gives an example of predicting how long a bike ride will last based on variables like rainfall, current temperature, day of the week, and start time. After the network renders a decision, feedback is given that displays which features had the most impact on the prediction.
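
Google’s tooling is a managed cloud service, so the sketch below does not use its actual API; it only illustrates the general idea of per-feature attribution scores, using a simple gradient-times-input heuristic on a hypothetical bike-duration model. Every name in it is made up for the example.

```python
# Sketch of per-feature attribution (gradient x input), illustrating the idea
# behind attribution scores rather than Google's actual Explainable AI API.
import torch
import torch.nn as nn

feature_names = ["rainfall_mm", "temperature_c", "day_of_week", "start_hour"]

# Stand-in model for predicting ride duration; in practice this would be the
# trained network whose decisions need explaining.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

def attribution_scores(model, x):
    x = x.clone().requires_grad_(True)
    prediction = model(x)            # predicted ride duration
    prediction.sum().backward()      # gradient of the prediction w.r.t. inputs
    return (x.grad * x).detach()     # gradient x input, one score per feature

example = torch.tensor([2.5, 14.0, 3.0, 8.0])  # one hypothetical ride
for name, score in zip(feature_names, attribution_scores(model, example).tolist()):
    print(f"{name}: {score:+.4f}")
```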

How does this tool provide such feedback in the case of image data? In this case, the tool produces an overlay that highlights the regions of the image that weighed most heavily in the rendered decision.

Another tool in the toolkit is the “What-If” tool, which displays potential fluctuations in model performance as individual attributes are manipulated. Finally, the last tool can be set up to give sample results to human reviewers on a consistent schedule.
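
A very crude approximation of that “what-if” behaviour is to sweep a single feature while holding the others fixed and watch how the prediction moves. The sketch below does exactly that for the same hypothetical bike-duration model; it is not the actual What-If Tool, which is an interactive visual interface.

```python
# Crude "what-if" sweep: vary one feature while holding the others fixed and
# record how the prediction changes. Illustrative only; the model below is a
# made-up stand-in, not part of Google's tooling.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
example = torch.tensor([2.5, 14.0, 3.0, 8.0])  # rainfall, temperature, weekday, hour

def what_if_sweep(model, example, feature_index, values):
    results = []
    for v in values:
        probe = example.clone()
        probe[feature_index] = v       # perturb a single attribute
        with torch.no_grad():
            results.append((float(v), model(probe).item()))
    return results

# Sweep rainfall from 0 mm to 20 mm and watch the predicted duration shift.
for value, prediction in what_if_sweep(model, example, 0, torch.linspace(0, 20, 5)):
    print(f"rainfall = {value:4.1f} mm -> predicted duration {prediction:.2f}")
```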

Dr. Andrew Moore, Google’s chief scientist for AI and machine learning, described the inspiration for the project. Moore explained that around five years ago the academic community started to become concerned about the harmful byproducts of AI use and that Google wanted to ensure its systems were only being used in ethical ways. Moore described an incident where the company was trying to design a computer vision program to alert construction workers if someone wasn’t wearing a helmet, but became concerned that the monitoring could be taken too far and become dehumanizing. Moore said similar reasoning lay behind Google’s decision not to release a general face recognition API: the company wanted more control over how its technology was used and to ensure it was only used in ethical ways.

Moore also highlighted why it is so important for an AI’s decisions to be explainable:

“If you’ve got a safety critical system or a societally important thing which may have unintended consequences if you think your model’s made a mistake, you have to be able to diagnose it. We want to explain carefully what explainability can and can’t do. It’s not a panacea.”

Artificial Neural Networks

AI Can Avoid Specific Unwanted Behaviors With New Algorithms

As artificial intelligence algorithms and systems become more sophisticated and take on bigger responsibilities, it becomes more and more important to ensure that AI systems avoid dangerous, unwanted behavior. Recently, a team of researchers from the University of Massachusetts Amherst and Stanford published a paper demonstrating how specific AI behaviors can be avoided, using a technique that lets users give precise mathematical instructions for adjusting the behavior of an AI.

According to TechXplore, the research was predicated on the assumption that unfair or unsafe behaviors can be defined with mathematical functions and variables. If this is true, it should be possible to train systems to avoid those specific behaviors. The research team aimed to develop a toolkit that users of an AI could employ to specify which behaviors they want the AI to avoid, and that would enable AI engineers to reliably train a system that avoids unwanted actions when used in real-world scenarios.

Philip Thomas, the first author on the paper and an assistant professor of computer science at the University of Massachusetts Amherst, explained that the research team aims to demonstrate that designers of machine learning algorithms can make it easier for users of AI to describe unwanted behaviors and have a high likelihood that the AI system will avoid them.

The research team tested their technique by applying it to a common problem in data science: gender bias. The team aimed to make the algorithms used to predict college student GPAs fairer by reducing gender bias. Using an experimental dataset, they instructed their AI system to avoid creating models that systematically underestimated or overestimated GPAs for one gender. As a result of these instructions, the algorithm created a model that better predicted student GPAs and had substantially less systemic gender bias than previously existing models. Previous GPA prediction models suffered from bias because bias-reduction techniques were often too limited to be useful, or no bias reduction was applied at all.
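
At the core of this kind of approach is a high-confidence safety test: a candidate model is only accepted if, on held-out data, the unwanted behavior can be bounded with high probability. The sketch below illustrates that idea for a gender-bias constraint using a Hoeffding-style bound; the constraint definition, the tolerance, and the data layout are assumptions made for the example, not the paper’s exact formulation.

```python
# Sketch of a Seldonian-style safety test: accept a candidate GPA model only
# if, with high confidence, its mean prediction error does not differ between
# genders by more than a tolerance. Bound, tolerance, and inputs are assumed
# for illustration; this is not the authors' exact formulation.
import numpy as np

def safety_test(errors_a, errors_b, tolerance=0.2, delta=0.05, bound=4.0):
    """Pass only if |mean error gap| <= tolerance with confidence about 1 - delta.

    errors_a, errors_b: (prediction - true GPA) on held-out students from each
    group; individual errors are assumed to lie within [-bound, bound].
    """
    gap = abs(np.mean(errors_a) - np.mean(errors_b))
    # Hoeffding-style confidence width for each group's mean error.
    width = lambda n: 2 * bound * np.sqrt(np.log(2 / delta) / (2 * n))
    return gap + width(len(errors_a)) + width(len(errors_b)) <= tolerance

# Toy usage with simulated held-out errors for two groups of students.
# A Seldonian algorithm refuses to return a model when this test fails.
rng = np.random.default_rng(0)
errors_female = rng.normal(0.01, 0.2, size=50_000)
errors_male = rng.normal(-0.01, 0.2, size=50_000)
print("candidate passes safety test:", safety_test(errors_female, errors_male))
```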

A different algorithm was also developed by the research team. This algorithm was implemented in an automated insulin pump and was intended to balance both performance and safety. Automated insulin pumps need to decide how large a dose of insulin a patient should be given. After eating, the pump should ideally deliver a dose of insulin just large enough to keep blood sugar levels in check; the doses that are delivered must be neither too large nor too small.

Machine learning algorithms are already proficient at identifying patterns in an individual’s response to insulin doses, but existing analysis methods can’t let doctors specify outcomes that should be avoided, such as low blood sugar crashes. In contrast, the research team was able to develop a method that could be trained to deliver insulin doses that stay between the two extremes, preventing either underdosing or overdosing. While the system isn’t ready for testing in real patients just yet, a more sophisticated AI based on this approach could improve quality of life for people with diabetes.

In the research paper, the researchers refer to the algorithm as a “Seldonian” algorithm, a nod to the science-fiction author Isaac Asimov, whose first law of robotics states that a robot “may not injure a human being or, through inaction, allow a human being to come to harm.” The research team hopes that their framework will allow AI researchers and engineers to create a variety of algorithms and systems that avoid dangerous behavior. Emma Brunskill, senior author of the paper and Stanford assistant professor of computer science, explained to TechXplore:

“We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems.”
