
AI News

AI Used To Create Drug Molecule That Could Fight Fibrosis


Creating new medical drugs is a complex process that can take years of research and billions of dollars. Yet it’s also an important investment to make for people’s health. Artificial intelligence could potentially make the discovery of new drugs easier and substantially quicker if the recent work of the startup Insilico Medicine continues to make progress. As reported by SingularityHub, the AI startup has recently utilized AI to design a molecule that could combat fibrosis.

Given how complex and time-consuming the process of discovering new molecules for a drug is, scientists and engineers are constantly looking for ways to expedite it. The idea of using computers to help discover new drugs is nothing new, as the concept has existed for decades. However, progress on this front has been slow, with engineers struggling to find the right algorithms for drug creation.

Deep learning has started to make AI-driven drug discovery more viable, with pharmaceutical companies investing heavily in AI startups over the past few years. Insilico Medicine used AI to design a molecule that could combat fibrosis, taking only 46 days to dream up a molecule resembling existing therapeutic drugs. The company combined two different deep learning techniques to achieve this result: reinforcement learning and generative adversarial networks (GANs).

Reinforcement learning is a machine learning method that guides a model toward certain decisions by giving it feedback on the choices it makes. The model is punished for undesirable choices and rewarded for desirable ones. Through this combination of negative and positive reinforcement, the model trends toward decisions that minimize punishment and maximize reward.
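That feedback loop can be sketched with a toy example. Everything below is illustrative (a simple bandit-style learner, not Insilico Medicine's actual system): the agent tries actions, receives rewards or punishments from a hidden environment, and its value estimates drift toward the choices that maximize reward.

```python
import random

# Toy reinforcement-learning loop. The action names and reward values are
# invented for illustration; the agent cannot see REWARDS directly.
REWARDS = {"a": 1.0, "b": -1.0, "c": 0.2}  # environment feedback, hidden from the agent

def train(steps=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {a: 0.0 for a in REWARDS}  # the agent's reward estimates
    counts = {a: 0 for a in REWARDS}
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.choice(list(REWARDS))    # occasionally explore
        else:
            action = max(values, key=values.get)  # otherwise exploit the best estimate
        reward = REWARDS[action]                  # positive or negative feedback
        counts[action] += 1
        # incremental average: nudge the estimate toward the observed reward
        values[action] += (reward - values[action]) / counts[action]
    return values

values = train()
print(max(values, key=values.get))  # the agent settles on the best-rewarded action
```

With deterministic rewards the estimates converge quickly; in drug discovery the reward would instead come from chemistry-aware scoring of generated molecules.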

Meanwhile, generative adversarial networks are “adversarial” because they consist of two neural networks pitted against one another. The two networks are given examples of objects to train on, frequently images. The job of one network is to create a counterfeit object, something sufficiently similar to the real objects that it can be confused for the genuine article. The job of the second network is to detect counterfeits. Each network tries to outperform the other, and this virtual arms race pushes the counterfeiting network to generate objects that are nearly indistinguishable from the real thing.

By combining GANs and reinforcement learning algorithms, the researchers were able to have their models produce new drug molecules extremely similar to existing therapeutic drugs.
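As a loose illustration of that generate-then-score pattern (not the paper's actual method), the sketch below has a random "generator" propose candidate strings standing in for molecules, scores each against a known reference compound, and keeps the most promising few. The alphabet, reference string, and similarity measure are all invented:

```python
import random

# Loose illustration of generate-then-score; not Insilico's actual pipeline.
ALPHABET = "CNOSH"        # stand-in atom symbols
REFERENCE = "CCONCS"      # stand-in for a known therapeutic molecule

def generate(rng, length=6):
    """Propose a random candidate 'molecule' as a string."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def reward(candidate):
    """Crude similarity score: fraction of positions matching the reference."""
    return sum(a == b for a, b in zip(candidate, REFERENCE)) / len(REFERENCE)

def top_candidates(n_generated=30000, keep=6, seed=0):
    """Generate many candidates, then keep only the highest-scoring ones."""
    rng = random.Random(seed)
    pool = {generate(rng) for _ in range(n_generated)}   # dedupe candidates
    return sorted(pool, key=reward, reverse=True)[:keep]

best = top_candidates()
```

The 30,000-generated / 6-kept numbers mirror the counts reported later in the article; everything else is placeholder.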

The results of Insilico Medicine’s experiments with AI drug discovery were recently published in the journal Nature Biotechnology. In the paper, the researchers discuss how the deep learning models were trained. The researchers took representations of molecules already used in drugs that target proteins involved in idiopathic pulmonary fibrosis, or IPF. These molecules were used as the basis for training, and the combined models generated around 30,000 possible drug molecules.

The researchers then sorted through the 30,000 candidate molecules and selected the six most promising for lab testing. These six finalists were synthesized and put through a series of tests tracking their ability to target the IPF protein. One molecule in particular seemed promising, delivering the kind of results desired in a medical drug.

It’s important to note that the fibrosis target in the experiment has already been extensively researched, and multiple effective drugs for it already exist. The research team could reference these drugs, which gave them a leg up: a substantial amount of data to train their models on. This doesn’t hold true for many other diseases, so for those treatments there is a larger gap to close.

Another important fact is that the company’s current drug development model only handles the initial discovery process, and the molecules it generates will still require many tweaks and optimizations before they could be used in clinical trials.

According to Wired, Insilico Medicine’s CEO Alex Zhavoronkov acknowledges that their AI-designed drug isn’t ready for field use, with the current study being just a proof of concept. The goal of the experiment was to see how quickly a drug could be designed with the assistance of AI systems. Still, Zhavoronkov notes that the researchers designed a potentially useful molecule much faster than they could have with conventional drug discovery methods.

Despite the caveats, Insilico Medicine’s research still represents a notable advancement in the usage of AI to create new drugs. The refinement of the techniques used in the study could substantially shorten the amount of time required to develop a new drug. This could prove especially useful in an era where antibiotic-resistant bacteria are proliferating and many previously effective drugs are losing their potency.


The Use of Artificial Intelligence in Music Is Getting More and More Sophisticated


The application of artificial intelligence in music has been increasing for a few years now. As Kumba Sennaar explains, the three current applications of AI in the music industry lie in music composition, music streaming, and music monetization, where AI platforms help artists monetize their music content based on data from user activity.

It all started back in 1957, when Lejaren Hiller and Leonard Isaacson programmed the Illiac I to produce “Illiac Suite for String Quartet,” the first work completely written by artificial intelligence. Sixty years on, AI was producing complete albums, such as the Taryn Southern album made with Amper Music in 2017. Currently, Southern has over 452 thousand subscribers on YouTube, and “Lovesick,” a song from the album, has been viewed more than 45,000 times.

But since then, the application of AI in this field has both grown more sophisticated and branched out further. OpenAI has created MuseNet, as the company explains, “a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.”

On the other hand, as GeekWire, among others, reports, Dr. Mick Grierson, a computer scientist and musician from Goldsmiths, University of London, was recently commissioned by the Italian car manufacturer Fiat to use algorithms to produce a list of the 50 most iconic pop songs. His analytical software was used to “determine what makes the songs noteworthy, including key, the number of beats per minute, chord variety, lyrical content, timbral variety, and sonic variance.”

According to his results, the song with the best cocktail of the set parameters was Nirvana’s “Smells Like Teen Spirit,” ahead of U2’s “One” and John Lennon’s “Imagine.” Nirvana’s song was then used by Fiat to promote its new Fiat 500 model. Grierson explained that the algorithms showed that “the sounds these songs use and the way they are combined is highly unique in each case.”
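A ranking like this boils down to scoring each song on a set of features and sorting by the result. The sketch below is hypothetical: the feature names echo the article, but the songs, values, and weights are invented placeholders, not Grierson's data or algorithm.

```python
# Hypothetical composite scoring; songs, values, and weights are placeholders.
WEIGHTS = {"chord_variety": 0.3, "timbral_variety": 0.4, "sonic_variance": 0.3}

songs = {
    "song_a": {"chord_variety": 0.9, "timbral_variety": 0.8, "sonic_variance": 0.7},
    "song_b": {"chord_variety": 0.5, "timbral_variety": 0.6, "sonic_variance": 0.9},
}

def score(features):
    # weighted sum of features, each assumed normalized to [0, 1]
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

# sort song names from highest composite score to lowest
ranking = sorted(songs, key=lambda name: score(songs[name]), reverse=True)
```

Real analytical software would first have to extract these features from audio; the interesting design question is how to weight them, which is exactly what such commissioned studies tune.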

Another application is the musicnn library, which uses deep convolutional neural networks to automatically tag songs. The models it includes “achieve the best scores in public evaluation benchmarks.” musicnn (pronounced like “musician”) and its best models have been released as an open-source library. The project has been developed by the Music Technology Group of the Universitat Pompeu Fabra in Barcelona, Spain.

In his analysis of the application, Jordi Pons used musicnn to analyze and tag another iconic song, Queen’s “Bohemian Rhapsody.” He noticed that the singing voice of Freddie Mercury was tagged as a female voice, while the model’s other predictions were quite accurate. Making musicnn open source makes it possible to further refine the tagging process.

Reporting on the use of AI in music streaming, Digital Music News concludes that “the introduction of artificial intelligence and machine learning technologies has greatly improved the way we listen to music.  Thanks to rapid advances in the AI and similar technologies, we are most likely going to see plenty of futuristic improvements in the upcoming years.”


Scientists Use Artificial Intelligence to Estimate Dark Matter in the Universe


Scientists from the Department of Physics and the Department of Computer Science at ETH Zurich are using artificial intelligence to learn more about our universe. They are contributing to the methods used to estimate the amount of dark matter present. The group developed machine learning algorithms similar to those used by Facebook and other social media companies for facial recognition, and these algorithms help analyze cosmological data. The new research and results were published in the scientific journal Physical Review D.

Tomasz Kacprzak, a researcher from the Institute of Particle Physics and Astrophysics, explained the link between facial recognition and estimating dark matter in the universe. 

“Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy,” he explained. 

Dark matter cannot be seen directly in telescope images, but it does bend the path of light rays coming to Earth from other galaxies. This effect, called weak gravitational lensing, distorts the images of those galaxies.

Scientists then put that distortion to use. They build mass maps of the sky that show where dark matter is located, then compare theoretical predictions of dark matter's distribution against those maps, looking for the predictions that best match the data.
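The matching step can be sketched in miniature: treat each map as a grid of mass values and pick the theoretical prediction closest to the observed map. The maps and the squared-error measure below are invented for illustration, not the study's actual statistics.

```python
# Toy sketch of matching theoretical predictions to an observed mass map.
def sse(a, b):
    """Sum of squared differences between two flattened mass maps."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

observed = [0.2, 0.8, 0.5, 0.1]             # stand-in for a measured mass map
predictions = {                              # stand-ins for theoretical maps
    "model_low_dm": [0.1, 0.3, 0.2, 0.1],
    "model_mid_dm": [0.2, 0.7, 0.5, 0.2],
    "model_high_dm": [0.6, 0.9, 0.9, 0.5],
}

# pick the prediction whose map is closest to the observation
best = min(predictions, key=lambda name: sse(predictions[name], observed))
```

A hand-crafted statistic like this squared error is exactly what the article says is too crude for complex map patterns, which is the gap the neural networks are meant to fill.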

The described method with maps is traditionally done by using human-designed statistics, which help explain how parts of the maps relate to one another. The problem that arises with this method is that it is not well suited for detecting the complex patterns that are present in such maps. 

“In our recent work, we have used a completely new methodology…Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job,” Alexandre Refregier said. 

Aurelien Lucchi and his team from the Data Analytics Lab at the Department of Computer Science, along with Janis Fluri, a PhD student from Refregier’s group and the lead author of the study, worked together using machine learning algorithms. They used them to establish deep artificial neural networks that are able to learn to extract as much information from the dark matter maps as possible. 

The group of scientists first fed the neural network computer-generated data simulating the universe. The neural network eventually taught itself which features to look for in order to extract large amounts of information.

These neural networks outperformed the human-designed analysis, proving 30% more accurate than traditional methods based on human-made statistics. To achieve the same accuracy without these algorithms, cosmologists would have to dedicate at least twice the observation time.

After these methods were established, the scientists then used them to create dark matter maps based on the KiDS-450 dataset. 

“This is the first time such machine learning tools have been used in this context, and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications,” Fluri said. 

The scientists now want to apply this method to bigger image sets such as the Dark Energy Survey, feeding the neural networks new information about dark matter.

 


Element AI Closes Series B – Raises $151 Million To Bring AI To More Companies


The Canadian startup Element AI, based in Montreal, has recently completed its series B round of funding, raising $151 million to fund its AI expansion goals. Element AI’s goal is to bring the power of AI to companies that aren’t typically likely to use it, making AI available to those who aren’t savvy regarding AI and computer technologies.

Element AI was founded in 2016, and it aims to dramatically expand the use of AI outside of the traditional fields like retail and security. Element AI hopes to “turn research and industry expertise into software solutions that exponentially learn and improve”, focusing specifically on the supply chain and financial services sectors.

According to VentureBeat, Element AI’s successful series B funding accrued over $151.3 million from both old and new investors. The startup plans to invest this money in the marketing of its current product line as well as in the development of new AI solutions. The CEO of Element AI, Jean-François Gagné, put out a press release remarking that the company is excited to start working with new partners who wish to explore the potential of AI in non-traditional market areas. According to Gagné, Element AI remains fully committed to operationalizing AI, despite it being “the industry’s toughest challenge”.

Although AI is frequently in the headlines, AI applications are primarily found in a few specific fields. Element AI was founded on the idea that AI will be the next major transformative technology, even though not every business is equipped to take advantage of it. This creates a substantial divide between technology companies positioned to take advantage of AI and non-tech companies that can’t. Element AI wants to bring AI algorithms to companies that lack the experience to properly utilize them.

Element AI set out to achieve this by providing consultation to companies that could potentially benefit from utilizing AI, helping them identify areas where they could implement AI solutions. The company has since expanded to offering other services, offering products tailored to specific industries like retail/logistics, financial services, manufacturing, and insurance. The list of specialized products that the company offers is likely to grow, thanks to the substantial increase in funding the company has received.

Element AI is not the only company trying to operationalize AI; other companies, like UiPath, have created tools designed to let companies automate repetitive tasks. However, Element has been among the most successful at bringing AI to a wider section of society.

As reported by Crunchbase, Element AI has worked with many different companies, including Gore Mutual, Bank of Canada, National Bank, LG, and others. In terms of investors, many of their supporters from the series A round have returned to back the company a second time, including Real Ventures, BDC Capital, Hanwha Asset Management, and DCVC. New investors in the company include the Gouvernement du Québec and McKinsey & Company.

According to TechCrunch, McKinsey is a management consultancy company, and though at first glance the company seems like a competitor to Element AI, McKinsey seems to be funneling customers to Element. Many system integrators don’t have the experience with AI needed to ascertain the best uses for the technology, while Element AI has experience with emerging technologies and computing. QuantumBlack, the AI and advanced analytics division of McKinsey, has also established its own offices in Montreal, where they will be collaborating on projects with Element AI.

Element AI also stated in its press release that it would use the newly acquired funds to expand its operations across the globe. Currently, the company has approximately 500 employees located in offices in Singapore, Seoul, London, and Toronto.

Element AI isn’t the only sign of momentum in Canadian AI. The institutional investor CDPQ recently launched its own AI funding initiative intended to advance the commercialization of AI platforms throughout Quebec.
