A group of scientists from Linköping University has demonstrated how a quantum computer works by simulating its properties on a classical computer.
“Our results should be highly significant in determining how to build a quantum computer,” Professor Jan-Åke Larsson said.
Sweden, the rest of Europe, and other parts of the world are investing substantial resources in research to create superfast quantum computers. A Swedish quantum computer is expected to be built within ten years, and the EU has designated quantum technology as one of its flagship projects.
Currently, few useful algorithms exist for quantum computers. Even so, the technology is expected to be extremely important for simulating biological, chemical, and physical systems, many of which are too complex for even the most powerful computers we have now. In a classical computer, a bit takes the value one or zero, while a quantum bit can also exist in a superposition of both values at once. This means that quantum computers need far fewer operations per calculation.
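To make that difference concrete, here is a minimal Python sketch (illustrative only, not tied to the Linköping work) of how a qubit’s state is described by two amplitudes rather than a single value, and how measurement turns that state back into an ordinary bit:

```python
import numpy as np

# A classical bit is just 0 or 1.
classical_bit = 1

# A qubit's state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Here: an equal superposition of |0> and |1>.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

# Measuring collapses the qubit to 0 or 1 with probabilities |a|^2 and |b|^2.
probabilities = np.abs(qubit) ** 2
outcome = np.random.choice([0, 1], p=probabilities)
print(f"P(0)={probabilities[0]:.2f}, P(1)={probabilities[1]:.2f}, measured: {outcome}")
```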
Professor Jan-Åke Larsson and his doctoral student Niklas Johansson, from the Division for Information Coding at the Department of Electrical Engineering, Linköping University, have figured out much of why a quantum computer is more powerful than a classical one. They have also looked into what happens inside a quantum computer.
The results of the research have been published in the scientific journal Entropy.
“We have shown that the major difference is that quantum computers have two degrees of freedom for each bit. By simulating an additional degree of freedom in a classical computer, we can run some of the algorithms at the same speed as they would achieve in a quantum computer,” says Jan-Åke Larsson.
The team has created a simulation tool called Quantum Simulation Logic, or QSL, which allows them to simulate the operation of a quantum computer on a classical computer. QSL has the one specific property, and only that property, that a quantum computer has and a classical computer lacks: one extra degree of freedom for each bit involved in the calculation.
“Thus, each bit has two degrees of freedom: it can be compared with a mechanical system in which each part has two degrees of freedom — position and speed. In this case, we deal with computation bits — which carry information about the result of the function, and phase bits — which carry information about the structure of the function,” Jan-Åke Larsson explains.
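As a toy illustration of this “two degrees of freedom per bit” idea, the sketch below gives each bit a computation value and a phase value and shows a gate acting on both. It is loosely inspired by the description above, not a reproduction of the actual QSL toolkit:

```python
# Toy model of a QSL-style bit: each bit carries two classical degrees of
# freedom, a computation value and a phase value (both 0 or 1).
class QSLBit:
    def __init__(self, computation=0, phase=0):
        self.computation = computation  # "result of the function"
        self.phase = phase              # "structure of the function"

def not_gate(bit):
    # NOT flips the computation value; the phase value is untouched.
    bit.computation ^= 1

def cnot(control, target):
    # The computation value propagates from control to target ...
    target.computation ^= control.computation
    # ... while the phase value propagates back from target to control.
    control.phase ^= target.phase

a, b = QSLBit(computation=1), QSLBit(phase=1)
cnot(a, b)
print(a.computation, a.phase, b.computation, b.phase)  # 1 1 1 1
```

Note the symmetry: information flows forward in the computation values and backward in the phase values, which is one way a single gate can touch both degrees of freedom.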
The team has used the QSL tool to study some of the quantum algorithms that handle the structure of the function. Many of these algorithms run just as fast in the simulation as they would on a quantum computer.
“The result shows that the higher speed in quantum computers comes from their ability to store, process and retrieve information in one additional information-carrying degree of freedom. This enables us to better understand how quantum computers work. Also, this knowledge should make it easier to build quantum computers, since we know which property is most important for the quantum computer to work as expected,” says Jan-Åke Larsson.
The team has also built a physical version using electronic components. Its gates are similar to those used in quantum computers, and the toolkit simulates how a quantum computer works. It can help students and others simulate and understand quantum cryptography and quantum teleportation, among other aspects of quantum computing.
This new research can add to the increasing crossover between quantum computing and artificial intelligence. One of these crossovers is feature mapping. Other research conducted by IBM Research, MIT, and Oxford scientists has shown that as quantum computers become more powerful, they will be able to perform feature mapping on highly complex data structures, something classical computers can’t do. Feature mapping is important within machine learning, and it can lead to more effective AI that could identify patterns in data that classical computers are unable to detect.
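Feature mapping is easiest to see in its classical form: data that cannot be separated in its original space often becomes separable after being lifted into a higher-dimensional feature space, and quantum feature maps promise to perform such liftings in state spaces too large for classical machines. A minimal classical illustration (not quantum, just the underlying idea):

```python
import numpy as np

# Points inside vs. outside a circle are not linearly separable in 2D ...
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# ... but the feature map phi(x1, x2) = (x1, x2, x1^2 + x2^2) lifts them
# into 3D, where a flat plane (z = 0.5) separates the classes perfectly.
phi = np.column_stack([X, X[:, 0] ** 2 + X[:, 1] ** 2])
separable = ((phi[:, 2] < 0.5) == y.astype(bool)).all()
print("Linearly separable after the feature map:", separable)
```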
As more and more research takes place in these fields, the crossover between these two important areas will only grow.
The Use of Artificial Intelligence in Music Is Getting More and More Sophisticated
The application of artificial intelligence in music has been increasing for a few years now. As Kumba Sennaar explains, the three current applications of AI in the music industry lie in music composition, music streaming, and music monetization, where AI platforms help artists monetize their music content based on data from user activity.
It all started back in 1957, when Lejaren Hiller and Leonard Isaacson programmed the Illiac I computer to produce the “Illiac Suite for String Quartet,” the first work composed entirely by a computer. Sixty years on, this has turned into complete albums, such as the Taryn Southern album produced with Amper Music in 2017. Currently, Southern has over 452 thousand subscribers on YouTube, and “Lovesick,” a song from the album, has been viewed more than 45,000 times.
But since then, the application of AI in this field has grown more sophisticated and branched out further. OpenAI has created MuseNet, which the company describes as “a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.”
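MuseNet itself is a large transformer, but the underlying objective, predicting the next token given the tokens so far, can be illustrated with a toy bigram model over a hypothetical vocabulary of note tokens:

```python
from collections import Counter, defaultdict

# Toy next-token model over a made-up vocabulary of note tokens.
# MuseNet uses a large transformer; the training objective is the same
# idea: predict the next token given the tokens seen so far.
sequence = "C E G C E G C F A C E G".split()

counts = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Return the most frequent continuation seen in the training data.
    return counts[token].most_common(1)[0][0]

print(predict_next("C"))  # 'E' (C is followed by E three times, F once)
```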
Meanwhile, as GeekWire, among others, reports, Dr. Mick Grierson, a computer scientist and musician from Goldsmiths, University of London, was recently commissioned by the Italian car manufacturer Fiat to use algorithms to produce a list of the 50 most iconic pop songs. His analytical software was used to “determine what makes the songs noteworthy, including key, the number of beats per minute, chord variety, lyrical content, timbral variety, and sonic variance.”
According to his results, the song with the best cocktail of the set parameters was Nirvana’s “Smells Like Teen Spirit,” ahead of U2’s “One” and John Lennon’s “Imagine.” Nirvana’s song was then used by Fiat to promote its new Fiat 500 model. Grierson explained that the algorithms showed that “the sounds these songs use and the way they are combined is highly unique in each case.”
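Grierson’s software has not been released, but a few of the named features can be approximated with the open-source librosa library. A sketch, assuming a local audio file at the placeholder path “song.mp3”:

```python
import librosa

# Sketch: approximate some of the features named above with librosa.
# "song.mp3" is a placeholder path; Grierson's actual software is not public.
y, sr = librosa.load("song.mp3")

# Beats per minute.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

# Timbral variety: spread of the MFCC (timbre) coefficients over time.
mfcc = librosa.feature.mfcc(y=y, sr=sr)
timbral_variety = mfcc.std(axis=1).mean()

# Sonic variance: variation in the overall spectral content.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
sonic_variance = centroid.std()

print(f"tempo={float(tempo):.1f} BPM, timbral variety={timbral_variety:.2f}, "
      f"sonic variance={sonic_variance:.2f}")
```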
Another application is the musicnn library, which uses deep convolutional neural networks to automatically tag songs. According to its documentation, the included models “achieve the best scores in public evaluation benchmarks.” musicnn (pronounced “musician”) and its best models have been released as an open-source library. The project was developed by the Music Technology Group at Universitat Pompeu Fabra in Barcelona, Spain.
In his analysis of the application, Jordi Pons used musicnn to tag another iconic song, Queen’s “Bohemian Rhapsody.” He noticed that Freddie Mercury’s singing voice was tagged as a female voice, while the model’s other predictions were quite accurate. Making musicnn open source makes it possible to further refine the tagging process.
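musicnn exposes a simple interface for this kind of tagging. A usage sketch based on the library’s documented top_tags helper, with a placeholder file path:

```python
from musicnn.tagger import top_tags

# Tag an audio file with the pretrained MTT_musicnn model.
# "bohemian_rhapsody.mp3" is a placeholder path.
tags = top_tags("bohemian_rhapsody.mp3", model="MTT_musicnn", topN=5)
print(tags)  # genre/instrument/vocal tags such as "rock", "male", "female"
```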
Reporting on the use of AI in music streaming, Digital Music News concludes that “the introduction of artificial intelligence and machine learning technologies has greatly improved the way we listen to music. Thanks to rapid advances in the AI and similar technologies, we are most likely going to see plenty of futuristic improvements in the upcoming years.”
Scientists Use Artificial Intelligence to Estimate Dark Matter in the Universe
Scientists from the Department of Physics and the Department of Computer Science at ETH Zurich are using artificial intelligence to learn more about our universe, improving the methods used to estimate the amount of dark matter it contains. The group developed machine learning algorithms similar to those used by Facebook and other social media companies for facial recognition, and applied them to the analysis of cosmological data. The research was published in the scientific journal Physical Review D.
Tomasz Kacprzak, a researcher from the Institute of Particle Physics and Astrophysics, explained the link between facial recognition and estimating dark matter in the universe.
“Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy,” he explained.
Dark matter cannot be seen directly in telescope images, but it bends the path of light rays traveling toward Earth from distant galaxies. This effect, called weak gravitational lensing, distorts the images of those galaxies.
Scientists exploit this distortion to build mass maps of the sky that show where dark matter is located. They then compare theoretical predictions of where dark matter should be with these maps, looking for the predictions that best match the data.
Traditionally, this comparison is done using human-designed statistics that describe how different parts of the maps relate to one another. The problem with such statistics is that they are not well suited to detecting the complex patterns present in the maps.
“In our recent work, we have used a completely new methodology…Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job,” Alexandre Refregier said.
Aurelien Lucchi and his team from the Data Analytics Lab at the Department of Computer Science, together with Janis Fluri, a PhD student in Refregier’s group and the lead author of the study, used machine learning algorithms to build deep artificial neural networks that learn to extract as much information as possible from the dark matter maps.
The group first fed the neural network computer-generated data that simulated the universe. The network eventually taught itself which features to look for in the maps and how to extract the most information from them.
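The ETH Zurich code is not reproduced here, but the overall training setup can be sketched: a small convolutional network is trained on simulated mass maps to regress the cosmological parameters that generated them. Everything below (shapes, architecture, parameter choices) is illustrative:

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the ETH Zurich code): a small CNN that regresses
# two cosmological parameters from simulated 64x64 mass maps.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # e.g. Omega_m and sigma_8
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for simulated training data: maps plus the parameters
# that generated them.
maps = torch.randn(8, 1, 64, 64)
params = torch.rand(8, 2)

for step in range(3):  # real training would run far longer
    optimizer.zero_grad()
    loss = loss_fn(model(maps), params)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```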
These neural networks outperformed the human-designed analysis, proving around 30% more accurate than traditional methods based on human-made statistics. To achieve the same accuracy without these algorithms, cosmologists would need to dedicate at least twice the observation time.
After these methods were established, the scientists then used them to create dark matter maps based on the KiDS-450 dataset.
“This is the first time such machine learning tools have been used in this context, and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications,” Fluri said.
The scientists now want to apply the method to larger image sets, such as the Dark Energy Survey, from which the neural networks should extract even more information about dark matter.
Element AI Closes Series B – Raises $151 Million To Bring AI To More Companies
The Canadian startup Element AI, based in Montreal, has recently completed its series B funding round, raising $151 million to fund its AI expansion goals. Element AI aims to bring the power of AI to companies that wouldn’t typically use it, making the technology available to those without deep expertise in AI and computing.
Element AI was founded in 2016, and it aims to dramatically expand the use of AI outside of the traditional fields like retail and security. Element AI hopes to “turn research and industry expertise into software solutions that exponentially learn and improve”, focusing specifically on the supply chain and financial services sectors.
According to VentureBeat, Element AI’s series B round raised $151.3 million from both returning and new investors. The startup plans to invest the money in marketing its current product line and developing new AI solutions. In a recent press release, Element AI CEO Jean-François Gagné said the company is excited to start working with new partners who wish to explore the potential of AI in non-traditional markets. According to Gagné, Element AI remains fully committed to operationalizing AI, despite it being “the industry’s toughest challenge.”
Although AI is frequently in the headlines, AI applications remain concentrated in a few specific fields. Element AI was founded on the idea that AI will be the next major transformative technology, even though not every business is equipped to take advantage of it. This creates a substantial divide between technology companies positioned to exploit AI and companies that lack the experience to use it properly; Element AI wants to bring AI algorithms to the latter.
Element AI set out to achieve this by providing consultation to companies that could potentially benefit from AI, helping them identify areas where they could implement AI solutions. The company has since expanded its range of services and now offers products tailored to specific industries, including retail/logistics, financial services, manufacturing, and insurance. With the substantial increase in funding, that list of specialized products is likely to grow.
Element AI is not the only company trying to operationalize AI; others, such as UiPath, have created tools designed to let companies automate repetitive tasks. Element AI, however, has been among the most successful at bringing AI to a wider section of industry.
As reported by Crunchbase, Element AI has worked with many different companies, including Gore Mutual, Bank of Canada, National Bank, and LG. Many of its series A backers have returned to support the company a second time, including Real Ventures, BDC Capital, Hanwha Asset Management, and DCVC. New investors include the Gouvernement du Québec and McKinsey & Company.
According to TechCrunch, although the management consultancy McKinsey might at first glance look like a competitor, it appears instead to be funneling customers to Element AI. Many systems integrators lack the AI experience needed to identify the best uses for the technology, while Element AI has deep experience with emerging technologies and computing. QuantumBlack, McKinsey’s AI and advanced analytics division, has also opened offices in Montreal, where it will collaborate with Element AI on projects.
Element AI also stated in its press release that it will use the newly acquired funds to expand operations across the globe. The company currently has approximately 500 employees in offices in Singapore, Seoul, London, and Toronto.
Element AI isn’t the only sign of momentum in Canadian AI: CDPQ, Quebec’s institutional fund manager, recently launched its own AI funding initiative intended to advance the commercialization of AI platforms throughout Quebec.