The application of artificial intelligence in music has been growing for several years now. As Kumba Sennaar explains, the three main current applications of AI in the music industry lie in music composition, music streaming, and music monetization, where AI platforms help artists monetize their content based on data from user activity.
It all started back in 1957, when Lejaren Hiller and Leonard Isaacson programmed the ILLIAC I computer to produce the "Illiac Suite for String Quartet," the first work completely written by artificial intelligence. Sixty years on, AI was producing complete albums, such as Taryn Southern's 2017 album made with Amper Music. Southern currently has over 452,000 subscribers on YouTube, and "Lovesick," a song from the album, has been viewed more than 45,000 times.
Since then, the application of AI in this field has grown more sophisticated and branched out further. OpenAI has created MuseNet, which the company describes as "a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text."
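"Learning to predict the next token" is the core idea behind MuseNet's training. The following is a minimal, purely illustrative sketch of next-token prediction using a bigram frequency model over a toy stream of note tokens; it is not MuseNet's actual code, and the token names are invented.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens most often follow it."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequent successor of `token` seen in training."""
    return follows[token].most_common(1)[0][0]

# Toy "MIDI-like" token stream: note events as strings.
sequence = ["C4", "E4", "G4", "C4", "E4", "G4", "C4", "E4", "A4"]
model = train_bigram(sequence)
print(predict_next(model, "C4"))  # E4 -- C4 was always followed by E4
```

A transformer like MuseNet replaces these raw counts with a learned, context-aware probability distribution, but the training objective is the same: guess what comes next.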
On the other hand, as GeekWire, among others, reports, Dr. Mick Grierson, a computer scientist and musician from Goldsmiths, University of London, was recently commissioned by the Italian car manufacturer Fiat to use algorithms to produce a list of the 50 most iconic pop songs. His analytical software was used to "determine what makes the songs noteworthy, including key, the number of beats per minute, chord variety, lyrical content, timbral variety, and sonic variance."
According to his results, the song with the best combination of the measured parameters was Nirvana's "Smells Like Teen Spirit," ahead of U2's "One" and John Lennon's "Imagine." Fiat then used Nirvana's song to promote its new Fiat 500 model. Grierson explained that the algorithms showed that "the sounds these songs use and the way they are combined is highly unique in each case."
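One simple way to turn "highly unique" into a number is to represent each song as a feature vector and score its distance from the corpus average. The sketch below does exactly that; the feature values are invented for illustration and are not Grierson's data or method.

```python
import math

# Hypothetical feature vectors (bpm, chord variety, timbral variety, sonic
# variance), loosely inspired by the parameters Grierson's software measured.
# All values are invented for illustration.
songs = {
    "Smells Like Teen Spirit": [117, 4, 0.9, 0.8],
    "One":                     [91, 6, 0.5, 0.4],
    "Imagine":                 [76, 5, 0.3, 0.3],
}

def normalize(vectors):
    """Scale each feature to [0, 1] so no single unit dominates the distance."""
    cols = list(zip(*vectors))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1 for c in cols]
    return [[(v - l) / s for v, l, s in zip(vec, lo, span)] for vec in vectors]

def uniqueness(vectors):
    """Distance of each song from the corpus mean: a crude 'how unusual' score."""
    mean = [sum(c) / len(c) for c in zip(*vectors)]
    return [math.dist(vec, mean) for vec in vectors]

vecs = normalize(list(songs.values()))
scores = dict(zip(songs, uniqueness(vecs)))
best = max(scores, key=scores.get)
print(best)  # with these invented numbers, "Smells Like Teen Spirit" scores highest
```

With a real corpus, the same distance-from-the-mean idea would be computed over far richer audio features.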
Another application is the musicnn library, which uses deep convolutional neural networks to automatically tag songs. The models it includes "achieve the best scores in public evaluation benchmarks." musicnn (pronounced "musician") and its best models have been released as an open-source library. The project was developed by the Music Technology Group of the Universitat Pompeu Fabra in Barcelona, Spain.
In his analysis of the application, Jordi Pons used musicnn to analyze and tag another iconic song, Queen's "Bohemian Rhapsody." He noticed that Freddie Mercury's singing voice was tagged as a female voice, while the other predictions were quite accurate. Making musicnn available as open source makes it possible to further refine the tagging process.
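Auto-taggers of this kind typically score tags on short consecutive segments of a song and then aggregate those scores into song-level tags. The sketch below shows that aggregation step with invented per-segment probabilities; it is an illustration of the general approach, not musicnn's actual code.

```python
# Hypothetical per-segment tag probabilities, the kind of output an auto-tagger
# such as musicnn produces for consecutive chunks of a song (values invented).
segments = [
    {"rock": 0.9, "male vocal": 0.4, "female vocal": 0.6, "piano": 0.7},
    {"rock": 0.8, "male vocal": 0.3, "female vocal": 0.6, "piano": 0.5},
    {"rock": 0.7, "male vocal": 0.5, "female vocal": 0.5, "piano": 0.6},
]

def top_tags(segments, n=2):
    """Average each tag's probability across segments and keep the n strongest."""
    tags = segments[0].keys()
    avg = {t: sum(s[t] for s in segments) / len(segments) for t in tags}
    return sorted(avg, key=avg.get, reverse=True)[:n]

print(top_tags(segments))  # ['rock', 'piano']
```

Note how "female vocal" can edge out "male vocal" on averaged scores even for a male singer, which is the kind of confusion Pons observed with Mercury's voice.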
Reporting on the use of AI in music streaming, Digital Music News concludes that “the introduction of artificial intelligence and machine learning technologies has greatly improved the way we listen to music. Thanks to rapid advances in the AI and similar technologies, we are most likely going to see plenty of futuristic improvements in the upcoming years.”
Biomedical Engineers Apply Machine Learning to Biological Circuits
Biomedical engineers at Duke University have figured out a way to use machine learning to model the interactions that take place between complex variables in engineered bacteria. Traditionally, this type of modeling has been too difficult to complete, but the new algorithms can be applied across many different types of biological systems.
The new research was published in the journal Nature Communications on September 25.
The biomedical researchers looked at a biological circuit embedded in a bacterial culture and used the model to predict the circular patterns it would form. The new approach was dramatically faster than traditional methods: specifically, 30,000 times faster than the existing computational model.
To improve accuracy, the researchers retrained the machine learning model multiple times and compared the answers. They then applied it to a second biological system that was computationally distinct from the first, showing that the algorithm is not limited to a single class of problems.
Lingchong You, a professor of biomedical engineering at Duke, explained the inspiration for the work.
"This work was inspired by Google showing that neural networks could learn to beat a human in the board game Go," he said.
“Even though the game has simple rules, there are far too many possibilities for a computer to calculate the best next option deterministically,” You said. “I wondered if such an approach could be useful in coping with certain aspects of biological complexity confronting us.”
The study used 13 different bacterial variables, including rates of growth, diffusion, protein degradation, and cellular movement. A single computer would need at least 600 years to work through just six values per variable, but the new machine learning system can complete the task in hours.
“The model we use is slow because it has to take into account intermediate steps in time at a small enough rate to be accurate,” said Lingchong You. “But we don’t always care about the intermediate steps. We just want the end results for certain applications. And we can (go back to) figure out the intermediate steps if we find the end results interesting.”
Postdoctoral associate Shangying Wang used a deep neural network that can make predictions much faster than the original model. The network takes the model variables as input, assigns random weights and biases, and then predicts the pattern the bacterial colony will follow.
The first result is not correct, but the network gradually adjusts its weights and biases as it is given new training data. With enough training data, the predictions become accurate and stay that way.
Four different neural networks were trained and their answers compared. The researchers discovered that whenever the neural networks made similar predictions, those predictions were close to the correct answer.
“We discovered we didn’t have to validate each answer with the slower standard computational model,” said You. “We essentially used the ‘wisdom of the crowd’ instead.”
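That "wisdom of the crowd" check amounts to accepting an answer when independently trained models agree closely and falling back to the slow simulation when they disagree. Here is a minimal sketch of that decision rule; the prediction values and tolerance are invented.

```python
def ensemble_agrees(predictions, tolerance=0.1):
    """Accept an answer only if all model predictions fall within `tolerance`."""
    return max(predictions) - min(predictions) <= tolerance

# Four hypothetical networks predicting, e.g., a ring radius for one parameter set.
confident = [3.02, 3.05, 2.98, 3.01]   # tight spread: accept without the slow model
uncertain = [3.0, 4.1, 2.2, 3.6]       # wide spread: fall back to full simulation

print(ensemble_agrees(confident))  # True
print(ensemble_agrees(uncertain))  # False
```

The payoff is that the expensive validation step only runs on the small fraction of inputs where the ensemble disagrees.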
After the machine learning model was sufficiently trained, the biomedical researchers applied it to the biological circuit. Of the 100,000 simulations used to train the neural network, only one produced a bacterial colony with three rings, but the model was also able to identify which variables were important to that outcome.
“The neural net was able to find patterns and interactions between the variables that would have been otherwise impossible to uncover,” said Wang.
To close out the study, the researchers tested the approach on a biological system that behaves randomly. Traditionally, they would have to run a computer model that repeats the same parameters many times to identify the most probable outcome. The new system handled this as well, showing that it can be applied to a wide variety of complex biological systems.
The biomedical researchers have now turned to more complex biological systems, and they are working on developing the algorithm to become even more efficient.
“We trained the neural network with 100,000 data sets, but that might have been overkill,” said Wang. “We’re developing an algorithm where the neural network can interact with simulations in real-time to help speed things up.”
“Our first goal was a relatively simple system,” said You. “Now we want to improve these neural network systems to provide a window into the underlying dynamics of more complex biological circuits.”
Scientists Use Artificial Intelligence to Estimate Dark Matter in the Universe
Scientists from the Department of Physics and the Department of Computer Science at ETH Zurich are using artificial intelligence to learn more about our universe, contributing to the methods used to estimate the amount of dark matter present. The group developed machine learning algorithms similar to those used by Facebook and other social media companies for facial recognition, and applied them to analyze cosmological data. The new research and results were published in the scientific journal Physical Review D.
Tomasz Kacprzak, a researcher from the Institute of Particle Physics and Astrophysics, explained the link between facial recognition and estimating dark matter in the universe.
“Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy,” he explained.
Dark matter cannot be seen directly in telescope images, but it bends the path of light rays traveling to Earth from other galaxies. This effect, called weak gravitational lensing, distorts the images of those galaxies.
Scientists then exploit that distortion: they build mass maps of the sky that show where the dark matter is. They compare theoretical predictions of the location of dark matter against these maps, looking for the predictions that best match the data.
This map-based method is traditionally carried out using human-designed statistics that describe how parts of the maps relate to one another. The problem is that such statistics are not well suited to detecting the complex patterns present in the maps.
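A classic example of such a human-designed statistic is counting peaks, local maxima above some threshold, in a mass map. The sketch below computes a peak count on a toy grid; the map values are invented, and real weak-lensing analyses work on far larger maps derived from survey data.

```python
# A hand-designed summary statistic of the kind the article contrasts with
# deep networks: count local peaks above a threshold in a toy "mass map".
mass_map = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.3, 0.1],
    [0.1, 0.3, 0.2, 0.8],
    [0.0, 0.1, 0.7, 0.6],
]

def peak_count(grid, threshold):
    """Count cells above `threshold` that exceed all of their 4-neighbours."""
    rows, cols = len(grid), len(grid[0])
    peaks = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] <= threshold:
                continue
            neighbours = [grid[x][y]
                          for x, y in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                          if 0 <= x < rows and 0 <= y < cols]
            if all(grid[i][j] > n for n in neighbours):
                peaks += 1
    return peaks

print(peak_count(mass_map, 0.5))  # → 3
```

A statistic like this compresses the whole map into one number chosen by a human; the ETH approach instead lets a neural network learn which patterns in the map carry information.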
“In our recent work, we have used a completely new methodology…Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job,” Alexandre Refregier said.
Aurelien Lucchi and his team from the Data Analytics Lab at the Department of Computer Science, along with Janis Fluri, a PhD student from Refregier’s group and the lead author of the study, worked together using machine learning algorithms. They used them to establish deep artificial neural networks that are able to learn to extract as much information from the dark matter maps as possible.
The group of scientists first gave the neural network computer-generated data that simulated the universe. The neural network eventually taught itself which features to look for and to extract large amounts of information.
These neural networks outperformed the human-made analysis. In total, they were 30% more accurate than the traditional methods based on human-made statistical analysis. If cosmologists wanted to achieve the same accuracy rate without using these algorithms, they would have to dedicate at least twice the amount of observation time.
After these methods were established, the scientists then used them to create dark matter maps based on the KiDS-450 dataset.
“This is the first time such machine learning tools have been used in this context, and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications,” Fluri said.
The scientists now want to apply the method to bigger image sets, such as the Dark Energy Survey, so the neural networks can extract new information about dark matter.
Researchers Create First-Of-Its-Kind Artificial Neural Network
Researchers have created a multilayer all-optical artificial neural network, something that had not been successfully demonstrated until now. There is strong interest in practical optical artificial neural networks because they are faster and consume far less power than networks run on traditional computers. These new developments could enable parallel computation with light.
The researchers from The Hong Kong University of Science and Technology, Hong Kong, laid out their two-layer all-optical neural network in Optica, The Optical Society’s journal that includes high-impact research. The researchers also showed how they could apply the network to complex classification tasks.
“Our all-optical scheme could enable a neural network that performs optical parallel computation at the speed of light while consuming little energy,” said Junwei Liu, a member of the research team. “Large-scale, all-optical neural networks could be used for applications ranging from image recognition to scientific research.”
These all-optical networks operate differently from the conventional hybrid optical neural networks in use today. In hybrid networks, optical components are typically used for the linear operations, while the nonlinear activation functions, which simulate the way neurons in the human brain respond, are implemented electronically. This is because nonlinear optics usually require high-power lasers, which are difficult to implement in an optical neural network.
To get around this, the researchers used cold atoms with electromagnetically induced transparency to perform the nonlinear functions.
Shengwang Du, a member of the research team, spoke about the new developments.
“This light-induced effect can be achieved with very weak laser power,” he said. “Because this effect is based on nonlinear quantum interference, it might be possible to extend our system into a quantum neural network that could solve problems intractable by classical methods.”
To test their new approach, the team created a two-layer, fully connected all-optical neural network with 16 inputs and two outputs. They used the network to classify the order and disorder phases of a statistical model of magnetism, and concluded that the all-optical neural network was just as accurate as a trained computer-based neural network.
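To make the classification task concrete, here is a conventional-software stand-in for such a two-layer, 16-input, two-output network, with hand-picked rather than trained weights. It is not the optical implementation: the tanh nonlinearity here plays the role the cold-atom medium plays optically, and the spin configurations are invented examples.

```python
import math

def forward(spins):
    """Two-layer, fully connected 16-input / 2-output network (software sketch).
    The hidden unit measures overall magnetization; the outputs score the
    ordered vs. disordered phase of a toy magnetism model."""
    # Layer 1: one hidden unit summing the 16 spins, tanh nonlinearity.
    h = math.tanh(sum(spins) / len(spins) * 4)
    # Layer 2: two outputs; "ordered" wins when |magnetization| is large.
    order, disorder = abs(h), 1 - abs(h)
    return "ordered" if order > disorder else "disordered"

aligned = [1] * 16       # all spins up: strongly magnetized, ordered phase
mixed = [1, -1] * 8      # zero net magnetization: disordered phase

print(forward(aligned))  # ordered
print(forward(mixed))    # disordered
```

A real network, optical or electronic, would learn its weights from labeled spin configurations instead of having them set by hand, but the layered linear-then-nonlinear structure is the same.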
The next step for the research team is to expand this to large-scale all-optical deep neural networks. These can have complex architectures that are designed for specific applications like image recognition. By doing this, they can demonstrate that this system works at much bigger scales.
“Although our work is a proof-of-principle demonstration, it shows that it may become possible in the future to develop optical versions of artificial intelligence,” said Du.
“The next generation of artificial intelligence hardware will be intrinsically much faster and exhibit lower power consumption compared to today’s computer-based artificial intelligence,” added Liu.
To see more of these kinds of developments in science and technology, The Optical Society (OSA) provides publications, meetings, membership initiatives, research, and dedicated resources. It has an extensive network of experts in the optics and photonics field, and supports the scientists, engineers, students, and business leaders responsible for scientific discoveries and their applications. Its website provides various news and research updates.