Artificial Neural Networks

Scientists Use Artificial Intelligence to Estimate Dark Matter in the Universe

Scientists from the Department of Physics and the Department of Computer Science at ETH Zurich are using artificial intelligence to learn more about our universe, contributing to the methods used to estimate the amount of dark matter present. The group developed machine learning algorithms similar to those used by Facebook and other social media companies for facial recognition, and applied them to analyze cosmological data. The new research and results were published in the scientific journal Physical Review D.

Tomasz Kacprzak, a researcher from the Institute of Particle Physics and Astrophysics, explained the link between facial recognition and estimating dark matter in the universe. 

“Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy,” he explained. 

Dark matter cannot be seen directly in telescope images, but it bends the path of light rays traveling to Earth from other galaxies. This effect, called weak gravitational lensing, distorts the images of those galaxies.

Scientists exploit this distortion to build mass maps of the sky, which show where dark matter is located. They then compare theoretical predictions of where dark matter should be against the built maps, looking for the predictions that best match the data.
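The matching step can be sketched in a few lines. The toy example below (all maps and numbers are invented for illustration, not taken from the study) computes a simple summary statistic of an “observed” mass map and picks the simulated candidate whose statistic lies closest:

```python
import numpy as np

# Toy illustration: compare a summary statistic of an "observed" mass map
# against the same statistic computed from several simulated predictions,
# and pick the prediction that matches best.
rng = np.random.default_rng(0)

observed_map = rng.normal(loc=0.3, scale=1.0, size=(64, 64))

# Candidate predictions, each a mock simulated map for a different cosmology.
candidates = {
    "model_A": rng.normal(loc=0.0, scale=1.0, size=(64, 64)),
    "model_B": rng.normal(loc=0.3, scale=1.0, size=(64, 64)),
    "model_C": rng.normal(loc=0.6, scale=1.0, size=(64, 64)),
}

def statistic(m):
    # A simple human-designed statistic: the mean of the map.
    return m.mean()

obs_stat = statistic(observed_map)
best = min(candidates, key=lambda k: abs(statistic(candidates[k]) - obs_stat))
print(best)
```

In the real analysis the summary statistics are far richer than a single mean, which is exactly the limitation the neural-network approach is meant to overcome.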

Traditionally, this map comparison is done using human-designed statistics that describe how parts of the maps relate to one another. The problem with this method is that it is not well suited to detecting the complex patterns present in such maps.

“In our recent work, we have used a completely new methodology…Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job,” Alexandre Refregier said. 

Aurelien Lucchi and his team from the Data Analytics Lab at the Department of Computer Science, along with Janis Fluri, a PhD student in Refregier’s group and the lead author of the study, used machine learning algorithms to build deep artificial neural networks that learn to extract as much information as possible from the dark matter maps.

The group first fed the neural network computer-generated data that simulated the universe. The network eventually taught itself which features to look for and how to extract large amounts of information.
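The basic operation such a network performs can be illustrated with a single convolution filter sliding over a map to detect local patterns, much as facial-recognition networks scan for eyes or mouths. This is a minimal sketch of the idea with a hand-picked filter and a random mock map, not the study’s actual trained network:

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over every position of the image (no padding).
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(1)
mass_map = rng.normal(size=(32, 32))        # mock mass map
edge_filter = np.array([[1.0, 0.0, -1.0],   # responds to vertical edges;
                        [1.0, 0.0, -1.0],   # in a real network many such
                        [1.0, 0.0, -1.0]])  # filters are learned from data

features = np.maximum(convolve2d(mass_map, edge_filter), 0.0)  # ReLU
print(features.shape)
```

A deep network stacks many layers of learned filters like this one, which is how it discovers the tell-tale signatures of dark matter on its own.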

These neural networks outperformed the human-made analysis, proving 30% more accurate than traditional methods based on human-designed statistics. To achieve the same accuracy without these algorithms, cosmologists would have to dedicate at least twice the observation time.

After these methods were established, the scientists then used them to create dark matter maps based on the KiDS-450 dataset. 

“This is the first time such machine learning tools have been used in this context, and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications,” Fluri said. 

The scientists now want to apply the method to bigger image sets such as the Dark Energy Survey, where the neural networks can begin to extract new information about dark matter.

 


Legal Tech Company Seeks To Bring AI To Lawyers


Artificial intelligence programs are being used in more applications and more industries all the time. The legal field is an area that could substantially benefit from AI, due to the massive amount of documents that must be reviewed for any given case. As reported by the Observer, one company is aiming to bring AI to the legal field, with its co-founder seeing a wide variety of uses for the technology.

Lane Lillquist is the co-founder and CTO of InCloudCounsel, a legal tech firm. Lillquist believes that AI can help lawyers be more efficient and accurate in their jobs. For instance, the massive amount of data that lawyers must process is often better handled by a machine learning algorithm, and the insights generated could make tasks like contract review more accurate. In this sense, the role for AI in the legal space is much like the various other tech tools we use all the time, such as automatic spelling correction and document searching.

Because of the narrow role Lillquist expects AI to take, he doesn’t see much need to worry that AI will replace lawyers at their jobs, at least not anytime soon. For the near future, he expects most of the tasks done by AI to be high-volume, repetitive ones, such as data extraction and categorization, that currently prevent lawyers from focusing their attention on more important work. Human lawyers will have more time and bandwidth to focus on more complex tasks and different forms of work. Essentially, AI could make lawyers more impactful at their jobs, not less.

Lillquist has made some predictions about the role of AI in the near future of the legal field, such as automatically filling in certain forms or searching documents for specific terms and phrases relevant to a case.

One example of an application that fills in legal documents comes from the company DoNotPay, which promises to help users of the platform “fight corporations and beat bureaucracy” with just a few button presses. The app has a chatbot ascertain the legal problems of its users, then generates and submits paperwork based on the answers provided. While the app is impressive, Lillquist doesn’t think that apps like DoNotPay will replace lawyers for a long time.

Lillquist makes a comparison to how ATMs affected the banking industry: because it became much easier for banks to open small branches in more remote locations, the number of tellers employed by banks actually increased.

Lillquist does think that AI will keep changing the nature of the legal profession, requiring lawyers to possess a more varied skill set to make use of AI-enabled technologies and stay competitive in the job market. Other kinds of jobs, in positions adjacent to legal ones, could also be created. For example, the number of data analysts who can analyze legal and business-related datasets and propose plans to improve law practice might increase.

Lillquist explained to the Observer:

“We’re already seeing a rise of legal technology companies providing alternative legal services backed by AI and machine learning that are enhancing how lawyers practice law. Law firms will begin building their own engineering departments and product teams, too.”

While Lillquist isn’t worried that AI will put lawyers out of jobs, he is somewhat worried about how AI can be misused, particularly that legal AI could be employed by people who don’t fully understand the law, putting themselves at legal risk.


Biomedical Engineers Apply Machine Learning to Biological Circuits


Biomedical engineers at Duke University have figured out a way to use machine learning to model interactions between complex variables in engineered bacteria. Traditionally, this type of modeling has been too difficult to complete, but the new algorithms can be applied to many different types of biological systems.

The new research was published in the journal Nature Communications on September 25. 

The biomedical researchers looked at a biological circuit embedded in a bacterial culture and were able to predict its circular patterns. The new approach was dramatically faster than traditional methods: specifically, 30,000 times faster than the existing computational model.

To improve accuracy, the researchers retrained the machine learning model multiple times and compared the answers. They then applied it to a second biological system that was computationally different from the first, showing the algorithm wasn’t limited to one set of problems.

Lingchong You is a professor of biomedical engineering at Duke. 

“This work was inspired by Google showing that neural networks could learn to beat a human in the board game Go,” he said.

“Even though the game has simple rules, there are far too many possibilities for a computer to calculate the best next option deterministically,” You said. “I wondered if such an approach could be useful in coping with certain aspects of biological complexity confronting us.”

The study used 13 different bacterial variables, including rates of growth, diffusion, protein degradation, and cellular movement. A single computer would need at least 600 years to work through six values per parameter, but the new machine learning system can do it in hours.
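The arithmetic behind that claim is simple: six trial values for each of 13 parameters multiply out to an enormous number of simulations.

```python
# Six trial values for each of the 13 model parameters gives
# 6**13 distinct parameter combinations to simulate.
combinations = 6 ** 13
print(combinations)  # 13060694016, i.e. over 13 billion simulations
```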

“The model we use is slow because it has to take into account intermediate steps in time at a small enough rate to be accurate,” said Lingchong You. “But we don’t always care about the intermediate steps. We just want the end results for certain applications. And we can (go back to) figure out the intermediate steps if we find the end results interesting.”

Postdoctoral associate Shangying Wang used a deep neural network that makes predictions much faster than the original model. The network takes the model variables as input, assigns random weights and biases, and then predicts the pattern the bacterial colony will follow.

The first result isn’t correct, but the network slightly adjusts its weights and biases as it is given new training data. Once it has seen enough training data, its predictions become accurate and stay that way.
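The adjustment process described above is gradient descent. A minimal sketch with a one-layer linear model (a stand-in for the deep network in the study, with invented data) shows how repeated small corrections to a random weight and bias recover the rule behind the training data:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Training data" generated from a known rule: y = 2x - 1.
true_w, true_b = 2.0, -1.0
x = rng.uniform(-1, 1, size=200)
y = true_w * x + true_b

w, b = rng.normal(), rng.normal()  # random initial weight and bias
lr = 0.1                            # learning rate
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # Nudge w and b down the gradient of the mean squared error.
    w -= lr * np.mean(err * x)
    b -= lr * np.mean(err)

print(round(w, 3), round(b, 3))  # approaches 2.0 and -1.0
```

The real network has millions of such adjustable numbers, but each one is updated by the same kind of small corrective step.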

Four different neural networks were trained and their answers compared. The researchers discovered that whenever the networks made similar predictions, those predictions were close to the correct answer.
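The agreement check can be sketched as follows, with four mock “models” standing in for the trained networks (the tolerance value here is invented for illustration):

```python
import numpy as np

def ensemble_predict(models, x, tolerance=0.1):
    # Trust the ensemble's mean prediction only when the individual
    # predictions cluster tightly together ("wisdom of the crowd").
    preds = np.array([m(x) for m in models])
    agree = preds.std() < tolerance
    return preds.mean(), agree

# Four mock models that have all roughly learned the same rule.
models = [
    lambda x: 2.0 * x + 0.01,
    lambda x: 2.0 * x - 0.02,
    lambda x: 2.0 * x + 0.03,
    lambda x: 2.0 * x,
]

mean_pred, trusted = ensemble_predict(models, 1.5)
print(trusted)  # True: the four predictions agree closely
```

When the networks disagree beyond the tolerance, the prediction would instead be flagged for checking against the slower standard model.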

“We discovered we didn’t have to validate each answer with the slower standard computational model,” said You. “We essentially used the ‘wisdom of the crowd’ instead.”

After the machine learning model was sufficiently trained, the biomedical researchers used it on a biological circuit. The neural network was trained on 100,000 data simulations. Out of all of those, only one produced a bacterial colony with three rings, yet the researchers were also able to identify which variables were important to producing it.

“The neural net was able to find patterns and interactions between the variables that would have been otherwise impossible to uncover,” said Wang.

To close out the study, the researchers tested the approach on a biological system that behaves randomly. Traditionally, modeling such a system requires repeating a computation with certain parameters many times to identify the most probable outcome. The new system handled this as well, showing that it can be applied to a variety of complex biological systems.

The biomedical researchers have now turned to more complex biological systems, and they are working on developing the algorithm to become even more efficient. 

“We trained the neural network with 100,000 data sets, but that might have been overkill,” said Wang. “We’re developing an algorithm where the neural network can interact with simulations in real-time to help speed things up.”

“Our first goal was a relatively simple system,” said You. “Now we want to improve these neural network systems to provide a window into the underlying dynamics of more complex biological circuits.”

 


The Use of Artificial Intelligence In Music Is Getting More And More Sophisticated


The application of artificial intelligence in music has been increasing for a few years now. As Kumba Sennaar explains, the three current applications of AI in the music industry lie in music composition, music streaming, and music monetization, where AI platforms help artists monetize their music content based on data from user activity.

It all started way back in 1957, when Lejaren Hiller and Leonard Isaacson programmed the Illiac I to produce the “Illiac Suite for String Quartet,” the first work completely written by artificial intelligence. Sixty years on, this turned into complete albums, such as the Taryn Southern album produced with Amper Music in 2017. Currently, Southern has over 452 thousand subscribers on YouTube, and “Lovesick,” a song from the album, has been viewed more than 45,000 times.

But since then, the application of AI in this field has grown more sophisticated and branched out further. OpenAI has created MuseNet, as the company explains, “a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.”

Meanwhile, as GeekWire, among others, reports, Dr. Mick Grierson, a computer scientist and musician from Goldsmiths, University of London, was recently commissioned by the Italian car manufacturer Fiat to use algorithms to produce a list of the 50 most iconic pop songs. His analytical software was used to “determine what makes the songs noteworthy, including key, the number of beats per minute, chord variety, lyrical content, timbral variety, and sonic variance.”

According to his results, the song with the best cocktail of the set parameters was Nirvana’s “Smells Like Teen Spirit,” ahead of U2’s “One” and John Lennon’s “Imagine.” Nirvana’s song was then used by Fiat to promote its new Fiat 500 model. Grierson explained that the algorithms showed that “the sounds these songs use and the way they are combined is highly unique in each case.”

Another application is the musicnn library, which uses deep convolutional neural networks to automatically tag songs. The models included “achieve the best scores in public evaluation benchmarks.” musicnn (pronounced “musician”) and its best models have been released as an open-source library. The project was developed by the Music Technology Group of the Universitat Pompeu Fabra in Barcelona, Spain.

In his analysis of the application, Jordi Pons used musicnn to analyze and tag another iconic song, Queen’s “Bohemian Rhapsody.” He noticed that Freddie Mercury’s singing voice was tagged as a female voice, while the other predictions were quite accurate. Making musicnn available as open source allows the tagging process to be refined further.

Reporting on the use of AI in music streaming, Digital Music News concludes that “the introduction of artificial intelligence and machine learning technologies has greatly improved the way we listen to music.  Thanks to rapid advances in the AI and similar technologies, we are most likely going to see plenty of futuristic improvements in the upcoming years.”
