
Artificial Neural Networks

Dragonflies and Missile Defense Systems


Dragonflies have extremely fast reflexes despite limited depth perception. They react to moving prey in about 50 milliseconds, roughly the time it takes information to cross three neurons. Sandia National Laboratories is researching how dragonfly brains work and how they are able to calculate complex interception trajectories.

The research is led by computational neuroscientist Frances Chance, who developed the algorithms and will present the work at the International Conference on Neuromorphic Systems in Knoxville, Tennessee. The research has already been presented at the Annual Meeting of the Organization for Computational Neurosciences in Barcelona, Spain.

Frances Chance specializes in replicating biological neural networks such as brains, particularly neurons and the way they relay information through the nervous system. Brains are in many respects more capable than conventional computers: they are more energy efficient, and they learn and adapt faster.

“I try to predict how neurons are wired in the brain and understand what kinds of computations those neurons are doing, based on what we know about the behavior of the animal or what we know about the neural responses,” Chance said. 

The research conducted by Sandia National Laboratories involved creating a simple simulated environment populated with computer-generated dragonflies. Algorithms drove the simulated dragonflies to catch prey the way their real-life counterparts do, processing visual information while hunting. This showed that programming in this manner is possible, and it could be applied in many different sectors.
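The core of such a simulated environment is a predator-prey loop. The sketch below is a toy pure-pursuit chase, not Sandia's algorithm (real dragonflies are thought to use predictive interception rather than simple pursuit); all function names and numbers here are invented for illustration.

```python
import math

def simulate_pursuit(prey_pos, prey_vel, pred_speed, dt=0.01, max_steps=2000):
    """Toy chase loop: the predator always steers straight at the prey's
    current position (pure pursuit) until it gets within a capture radius."""
    px, py = 0.0, 0.0            # predator starts at the origin
    qx, qy = prey_pos
    vx, vy = prey_vel
    for step in range(max_steps):
        dx, dy = qx - px, qy - py
        dist = math.hypot(dx, dy)
        if dist < 0.05:          # capture radius
            return step * dt     # time to intercept, in seconds
        px += pred_speed * dx / dist * dt
        py += pred_speed * dy / dist * dt
        qx += vx * dt
        qy += vy * dt
    return None                  # prey escaped within the time horizon

# A faster predator chasing slower, straight-flying prey is caught quickly:
t = simulate_pursuit(prey_pos=(1.0, 1.0), prey_vel=(0.5, 0.0), pred_speed=2.0)
```

Because the predator moves faster than the prey, the distance shrinks every step and the chase ends in a capture; a predictive strategy like a real dragonfly's would aim at where the prey will be, not where it is.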

The new research is already being applied to the missile defense sector, where the same approach used with the simulated dragonflies could improve existing systems. Missile defense systems work much like a dragonfly targeting and catching prey: they intercept an object in flight the way a dragonfly intercepts prey in its environment. Dragonflies are among the most successful predators in the world, catching roughly 95% of the prey they target.

With these new developments, researchers are trying to make the on-board computers in missile defense systems smaller while keeping them fast and accurate. Current systems rely on established intercept techniques that carry a heavy computational load, and this is one area where a model based on dragonflies and their prey can help.

The new technology and research could help improve missile defense systems in many ways including reducing the size, weight, and power needs of onboard computers. Then, interceptors could become smaller and lighter which will make it much easier for them to move around. The new systems could also learn new ways to intercept moving targets like hypersonic weapons. Unlike ballistic missiles, these targets do not follow a similar predictive trajectory or pattern. Finally, the system could be able to use simpler sensors rather than the complex ones used now to intercept a target. 

One problem with this idea is that missiles and dragonflies travel at very different speeds, which could introduce discrepancies when transferring the model.

Outside of missile defense, the computational model of dragonfly brains could also help develop better machine learning and artificial intelligence. As this kind of technology grows, it is finding its way into more and more sectors, and the defense sector is using it to become more efficient and grow rapidly. This research shows how we can build complex systems modeled on ones that already exist in nature, such as dragonflies and their brains; our new technology allows us to model them and create improved versions.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.


Legal Tech Company Seeks To Bring AI To Lawyers


Artificial intelligence programs are being used in more applications and more industries all the time. The legal field is an area that could substantially benefit from AI, given the massive number of documents that have to be reviewed for any given case. As reported by the Observer, one company is aiming to bring AI to the legal field, with its CEO seeing a wide variety of uses for it.

Lane Lillquist is the co-founder and CTO of InCloudCounsel, a legal tech firm. Lillquist believes that AI can be used to help lawyers be more efficient and accurate in their jobs. For instance, the massive amount of data that has to be processed by lawyers is usually better processed by a machine learning algorithm, and the insights generated by the AI could be used to make tasks like contract review more accurate. In this sense, the role for AI in the legal space is much like the various other tech tools that we use all the time, things like automatic spelling correction and document searching.

Because of the narrow role Lillquist expects AI to take, he doesn’t see much need to worry that AI will end up replacing lawyers, at least not anytime soon. Lillquist expects that for the near future, AI will mostly automate high-volume, repetitive tasks, such as data extraction and categorization, that keep lawyers from focusing on more important work. Human lawyers will then have more time and bandwidth for complex tasks and different forms of work. Essentially, AI could make lawyers more impactful at their jobs, not less.

Lillquist has made some predictions about the near-term role of AI in the legal field. He sees AI accomplishing tasks like automatically filling in certain forms or searching documents for specific terms and phrases relevant to a case.

One example of an application that fills in legal documents is the company DoNotPay, which promises to help users of the platform “fight corporations and beat bureaucracy” with just a few button presses. The app operates by having a chatbot ascertain the legal problems of its users, and it then generates and submits paperwork based on the provided answers. While the app is impressive, Lillquist doesn’t think that apps like DoNotPay will end up replacing lawyers for a long time.

Lillquist makes a comparison to how ATMs affected the banking industry: because it became much easier for banks to open small branches in more remote locations, the number of tellers employed by banks actually increased.

Lillquist does think AI will keep changing the nature of the legal profession, requiring lawyers to possess a more varied skill set to make use of AI-enabled technologies and stay competitive in the job market. Adjacent kinds of jobs could also be created; for example, the number of data analysts who can analyze legal and business datasets and propose ways to improve law practice might increase.

Lillquist explained to the Observer:

“We’re already seeing a rise of legal technology companies providing alternative legal services backed by AI and machine learning that are enhancing how lawyers practice law. Law firms will begin building their own engineering departments and product teams, too.”

While Lillquist isn’t worried that AI will put lawyers out of jobs, he is somewhat worried about how AI can be misused, in particular that legal AI could be employed by people who don’t fully understand the law, thereby putting themselves at legal risk.



Biomedical Engineers Apply Machine Learning to Biological Circuits


Biomedical engineers at Duke University have figured out a way to use machine learning to model the interactions that take place between complex variables in engineered bacteria. Traditionally, this type of modeling has been too difficult to complete, but the new algorithms can be applied across many different types of biological systems.

The new research was published in the journal Nature Communications on September 25. 

The biomedical researchers looked at a biological circuit embedded in a bacterial culture and were able to predict circular patterns. This new modeling approach was dramatically faster than traditional methods; specifically, it was 30,000 times faster than the current computational model.

To improve accuracy, the researchers retrained the machine learning model multiple times, compared its answers, and then applied it to a second biological system. Because the second system was computationally different from the first, the algorithm was shown not to be limited to a single class of problems.

Lingchong You is a professor of biomedical engineering at Duke. 

“This work was inspired by Google showing that neural networks could learn to beat a human in the board game Go,” You said.

“Even though the game has simple rules, there are far too many possibilities for a computer to calculate the best next option deterministically,” You said. “I wondered if such an approach could be useful in coping with certain aspects of biological complexity confronting us.”

The study used 13 different bacterial variables, including rates of growth, diffusion, protein degradation, and cellular movement. A single computer would need at least 600 years to sweep six values per parameter, but the new machine learning system can complete the task in hours.
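The scale behind that 600-year figure is easy to check: six candidate values for each of the 13 parameters means an exhaustive sweep of 6^13 simulations.

```python
# Six candidate values for each of 13 parameters: the exhaustive sweep size.
combinations = 6 ** 13
print(f"{combinations:,}")  # 13,060,694,016 parameter combinations
```

Over 13 billion slow simulations is what the trained surrogate network lets the researchers avoid.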

“The model we use is slow because it has to take into account intermediate steps in time at a small enough rate to be accurate,” said Lingchong You. “But we don’t always care about the intermediate steps. We just want the end results for certain applications. And we can (go back to) figure out the intermediate steps if we find the end results interesting.”

Postdoctoral associate Shangying Wang used a deep neural network that can make predictions much faster than the original model. The network takes the model variables as input, assigns random weights and biases, and then makes a prediction about the pattern the bacterial colony will form.

The first results aren’t correct, but the network gradually adjusts its weights and biases as it is given more training data. With enough training data, the predictions become accurate and stay that way.
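This predict-compare-adjust loop is the standard supervised-learning recipe. A minimal sketch in plain NumPy follows; the data is a toy stand-in (a smooth function of three inputs, not the bacterial model) and the tiny architecture is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the simulation data: inputs are parameter vectors,
# targets are a smooth function of them.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X.sum(axis=1, keepdims=True))

# One hidden layer, initialized with random weights and biases.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)        # error of the untrained network

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                       # prediction error on training data
    # Backpropagate and nudge weights/biases toward lower error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred1 = forward(X)
loss1 = np.mean((pred1 - y) ** 2)        # error after training
```

The first predictions (loss0) are poor; after repeated small weight adjustments the error (loss1) drops substantially, which is the behavior the paragraph describes.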

Four different neural networks were trained and their answers compared. The researchers discovered that whenever the neural networks made similar predictions, those predictions were close to the correct answer.

“We discovered we didn’t have to validate each answer with the slower standard computational model,” said You. “We essentially used the ‘wisdom of the crowd’ instead.”
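This “wisdom of the crowd” check amounts to trusting only the cases where independently trained networks agree. A hypothetical sketch (the function name and the numbers are invented, not the study’s code):

```python
def crowd_filter(predictions, tol=0.1):
    """predictions: one list of per-case outputs per model.
    Returns, per case, whether all models agree within tol, and the mean."""
    cases = list(zip(*predictions))             # group outputs by case
    trusted = [max(c) - min(c) < tol for c in cases]
    means = [sum(c) / len(c) for c in cases]
    return trusted, means

# Four hypothetical networks' outputs for three simulation cases:
preds = [[0.90, 0.40, 0.71],
         [0.92, 0.10, 0.69],
         [0.89, 0.55, 0.70],
         [0.91, 0.30, 0.72]]
trusted, means = crowd_filter(preds)
# Cases 0 and 2 agree closely and can be trusted; case 1 would be
# re-checked with the slow standard computational model.
```

Only the disputed cases need the expensive simulator, which is where the speedup over validating every answer comes from.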

After the machine learning model was sufficiently trained, the researchers applied it to a biological circuit. The neural network was trained on 100,000 simulations; out of all of those, only one produced a bacterial colony with three rings, but the network was also able to identify which variables were important.

“The neural net was able to find patterns and interactions between the variables that would have been otherwise impossible to uncover,” said Wang.

To close out the study, the researchers tested the approach on a biological system that behaves stochastically. Traditionally, they would have to run a computational model repeatedly with the same parameters until it identified the most probable outcome. The new system handled this as well, showing that it can be applied to a wide range of complex biological systems.

The biomedical researchers have now turned to more complex biological systems, and they are working to make the algorithm even more efficient.

“We trained the neural network with 100,000 data sets, but that might have been overkill,” said Wang. “We’re developing an algorithm where the neural network can interact with simulations in real-time to help speed things up.”

“Our first goal was a relatively simple system,” said You. “Now we want to improve these neural network systems to provide a window into the underlying dynamics of more complex biological circuits.”

 



The Use of Artificial Intelligence in Music Is Getting More and More Sophisticated


The application of artificial intelligence in music has been increasing for a few years now. As Kumba Sennaar explains, the three current applications of AI in the music industry lie in music composition, music streaming, and music monetization, where AI platforms help artists monetize their music content based on data from user activity.

It all started back in 1957, when Lejaren Hiller and Leonard Isaacson programmed the ILLIAC I computer to produce the “Illiac Suite for String Quartet,” often called the first work composed entirely by a computer. Sixty years on, this has turned into complete albums, such as the Taryn Southern album produced with Amper Music in 2017. Currently, Southern has over 452 thousand subscribers on YouTube, and “Lovesick,” a song from the album, has been viewed more than 45,000 times.

But since then, the application of AI in this field has both grown more sophisticated and branched out further. OpenAI has created MuseNet, as the company explains, “a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.”
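The “predict the next token” objective can be shown in miniature. MuseNet itself is a large transformer; the sketch below is only a toy bigram frequency model over an invented symbolic “note” sequence, illustrating the training target rather than the architecture.

```python
from collections import Counter, defaultdict

# Invented toy "note" sequence standing in for tokenized MIDI:
sequence = ["C", "E", "G", "C", "E", "G", "C", "F", "A", "C", "E", "G"]

# Count which token follows which (a bigram table).
counts = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Most frequently observed successor of `token` in the training data.
    return counts[token].most_common(1)[0][0]

print(predict_next("C"))  # "E" follows "C" most often in the toy data
```

A transformer like MuseNet learns a vastly richer version of this conditional distribution, over long contexts and hundreds of thousands of real MIDI files.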

On the other hand, as GeekWire, among others, reports, Dr. Mick Grierson, a computer scientist and musician from Goldsmiths, University of London, was recently commissioned by the Italian car manufacturer Fiat to produce a list of the 50 most iconic pop songs using algorithms. His analytical software was used to “determine what makes the songs noteworthy, including key, the number of beats per minute, chord variety, lyrical content, timbral variety, and sonic variance.”

According to his results, the song with the best cocktail of the set parameters was Nirvana’s “Smells Like Teen Spirit,” ahead of U2’s “One” and John Lennon’s “Imagine.” Nirvana’s song was then used by Fiat to promote its new Fiat 500 model. Grierson explained that the algorithms showed that “the sounds these songs use and the way they are combined is highly unique in each case.”

Another application is the musicnn library, which uses deep convolutional neural networks to automatically tag songs. The models included “achieve the best scores in public evaluation benchmarks.” musicnn (pronounced like “musician”) and its best models have been released as an open-source library. The project was developed by the Music Technology Group of the Universitat Pompeu Fabra in Barcelona, Spain.

In his analysis of the application, Jordi Pons used musicnn to analyze and tag another iconic song, Queen’s “Bohemian Rhapsody.” He noticed that Freddie Mercury’s singing voice was tagged as a female voice, while the other predictions were quite accurate. Making musicnn available as open source makes it possible to further refine the tagging process.

Reporting on the use of AI in music streaming, Digital Music News concludes that “the introduction of artificial intelligence and machine learning technologies has greatly improved the way we listen to music.  Thanks to rapid advances in the AI and similar technologies, we are most likely going to see plenty of futuristic improvements in the upcoming years.”
