Super-Compressible Material Developed Through AI

A new super-compressible material developed through AI by researchers at TU Delft could transform many of our everyday objects while remaining strong. The researchers created the material using only artificial intelligence and machine learning, without conducting any experimental tests.

Miguel Bessa is the first author of the publication, which appeared in Advanced Materials on October 14.

“AI gives you a treasure map, and the scientist needs to find the treasure,” he said. 

Transforming Everyday Objects

Miguel Bessa, an assistant professor in materials science at TU Delft, got the inspiration for this material while spending time at the California Institute of Technology. It was there, at the Space Structures Lab, that he observed a satellite structure able to unfold long solar sails from a small package.

After seeing this, Bessa wanted to know whether it was possible to design a material that is super-compressible yet strong, one that could be compressed into a small fraction of its volume.

“If this was possible, everyday objects such as bicycles, dinner tables and umbrellas could be folded into your pocket,” he said. 

The Next Generation of Materials 

Bessa believes it is important that the next generation of materials be adaptive, multi-purpose, and alterable. The way to achieve this is through structure-dominated materials: metamaterials that exploit new geometries to deliver properties and functionalities that did not exist before.

“However, metamaterial design has relied on extensive experimentation and a trial-and-error approach,” Bessa says. “We argue in favor of inverting the process by using machine learning for exploring new design possibilities, while reducing experimentation to an absolute minimum.”

“We follow a computational data-driven approach for exploring a new metamaterial concept and adapting it to different target properties, choice of base materials, length scales and manufacturing processes.”
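
The TU Delft code itself is not shown in this article, but the workflow Bessa describes (train a cheap surrogate model on simulation data, then let it screen thousands of candidate geometries before anything is built) can be sketched in a few lines. In the toy example below, the two geometry parameters, the stand-in simulator, and the scoring rule are all hypothetical, not details from the paper:

```python
# A minimal sketch of a data-driven design loop: fit a cheap surrogate
# model to simulation data, then use it to screen thousands of candidate
# geometries so that only a handful need validation. The parameters and
# the stand-in "simulator" below are hypothetical, not from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical design parameters: shell thickness (mm) and lattice angle (deg).
X_train = rng.uniform([0.1, 10.0], [1.0, 80.0], size=(50, 2))

def simulate_recoverability(x):
    """Stand-in for an expensive finite-element simulation."""
    thickness, angle = x
    return np.exp(-thickness) * np.sin(np.radians(angle))

y_train = np.array([simulate_recoverability(x) for x in X_train])

# The surrogate is cheap to evaluate once trained.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# Score 10,000 candidate designs without running a single new simulation,
# then keep the most promising few for (minimal) physical validation.
candidates = rng.uniform([0.1, 10.0], [1.0, 80.0], size=(10_000, 2))
pred, std = gp.predict(candidates, return_std=True)
top = candidates[np.argsort(pred + std)[-5:]]  # optimistic picks
print(top)
```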

New Possibilities

Using machine learning, Bessa developed two designs at different length scales for the super-compressible material. They transformed brittle polymers into lightweight, recoverable metamaterials. The most impressive aspect of these new metamaterials is that they are super-compressible: the macro-scale design is optimized for maximum compressibility, while the micro-scale design is optimized for high strength and stiffness.

Bessa argues that the most important part of the work is not the material itself but the new way of designing it through machine learning and artificial intelligence, which could open up possibilities that were unknown before.

“The important thing is that machine learning creates an opportunity to invert the design process by shifting from experimentally guided investigations to computationally data-driven ones, even if the computer models are missing some information. The essential requisites are that ‘enough’ data about the problem of interest is available, and that the data is sufficiently accurate.”

Bessa believes in data-driven research in materials science and its ability to transform our way of life.

“Data-driven science will revolutionize the way we reach new discoveries, and I can’t wait to see what the future will bring us.”

Taking Over From Start to Finish

These new developments show that AI and machine learning can transform areas that are not yet well known. While it is widely expected that artificial intelligence will revolutionize machines, technologies, and almost every other aspect of society, it is less often acknowledged that AI can also develop these things on its own. There will be a point at which machine learning and AI take over the design and development process from start to finish. It will be up to humans to instill mechanisms in these technologies so that they remain compatible with our ways of life.

 

Artificial Intelligence Used to Prevent Icebergs from Disrupting Shipping

Experts at the University of Sheffield have developed a combination of control systems and artificial intelligence (AI) forecasting models to prevent icebergs from drifting into busy shipping regions. 

Through the use of a recently published control systems model, experts were able to predict the movement of icebergs. In 2020, between 479 and 1,015 icebergs are expected to drift into waters south of 48°N, an area with heavy shipping traffic between Europe and north-east North America. Last year, a total of 1,515 icebergs were observed in the same area.

The team relied on experimental artificial intelligence analysis to independently support the predicted iceberg numbers. The analysis also indicated a rapid early rise in the number of icebergs present in the area during the ice season, which runs from January to September.

The findings are supplied to the International Ice Patrol (IIP), which uses the information to decide how best to deploy resources for ice forecasts during the season. According to the seasonal forecast, ships in the north-west Atlantic will be less likely to encounter an iceberg than last year.

Icebergs cause serious problems and shipping risks in the north-west Atlantic. Records show that there have been collisions and sinkings dating back to the 17th century. The IIP was established in 1912 after the sinking of the Titanic, and its job is to observe sea ice and conditions in the north-west Atlantic and warn of potential dangers.

The risk icebergs pose to shipping changes each year. One year can see no icebergs crossing the area, while another can see over 1,000. This makes prediction difficult, but in general, higher numbers have been detected since the 1980s.

2020 is the first year that artificial intelligence is being used to forecast iceberg numbers in the area, as well as their rate of change across the season.

The model was developed by a team led by Professor Grant Bigg at the University of Sheffield and was funded by insurance firm AXA XL’s Ocean Risk Scholarships Programme. It combines a control systems model with two machine learning tools.

The models analyze data on the surface temperature of the Labrador Sea, variations in atmospheric pressure in the North Atlantic, and the surface mass balance of the Greenland ice sheet.

The foundational control systems approach achieved 80 percent accuracy when tested against data on iceberg numbers for the seasons between 1997 and 2016.
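
The Sheffield models themselves are not published in this article, but the general shape of such a forecast, regressing past seasonal iceberg counts on the three predictors named above, can be sketched. Every number below is an invented placeholder, not Sheffield data:

```python
# A toy regression in the spirit of the forecast described above: learn
# the relationship between past ice seasons' conditions and the number of
# icebergs crossing 48N. All values are invented placeholders; the real
# Sheffield system combines a control systems model with two ML tools.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# One row per hypothetical past season:
# [Labrador Sea surface temperature anomaly (deg C),
#  North Atlantic pressure variation index,
#  Greenland ice sheet surface mass balance anomaly (Gt)]
X = np.array([
    [-0.4,  1.2, -150.0],
    [ 0.3, -0.8,   60.0],
    [-0.1,  0.5,  -20.0],
    [ 0.6, -1.1,  110.0],
])
y = np.array([1014, 212, 687, 95])  # observed iceberg counts (invented)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Forecast the coming season from this year's observed conditions.
this_year = np.array([[-0.3, 0.9, -120.0]])
print(f"Forecast iceberg count: {model.predict(this_year)[0]:.0f}")
```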

Some of Professor Bigg’s earlier research attributed the variation in the number of icebergs drifting into the region to variable calving rates from Greenland. However, the regional climate and ocean currents turn out to be the biggest factors: higher numbers of icebergs appear when sea surface temperatures are colder and northwesterly winds are stronger.

Grant Bigg is a Professor of Earth System Science at the University of Sheffield.

“We have issued seasonal ice forecasts to the IIP since 2018, but this year is the first time we have combined the original control system model with two artificial intelligence approaches to specific aspects of the forecast. The agreement in all three approaches gives us the confidence to release the forecast for low iceberg numbers publicly this year—but it is worth remembering that this is just a forecast of iceberg conditions, not a guarantee, and that collisions between ships and icebergs do occur even in low ice years.”

According to Mike Hicks of the International Ice Patrol, “The availability of a reliable prediction is very important as we consider the balance between aerial and satellite reconnaissance methods.”

Dr. John Wardman is a Senior Science Specialist in the Science and Natural Perils team at AXA XL. 

“The impact of sea level rise on coastal exposure and a potential increase in Arctic shipping activity will require a greater number and diversity of risk transfer solutions through the use of re/insurance products and other ‘soft’ mitigation strategies. The insurance industry is keeping a keen eye on the Arctic, and this model is an important tool in helping the industry identify how or when the melting Greenland Ice Sheet will directly impact the market.”

 

Cerebras Has the “World’s Fastest AI Computer”

According to the startup Cerebras Systems, its CS-1 is the world’s most powerful AI computer system. It is the latest attempt to build the ultimate supercomputer, and it has been accepted into the U.S. federal government’s supercomputing program.

The CS-1 uses an entire silicon wafer instead of a single chip, with many small cores spread across the wafer. One wafer carries over 1.2 trillion transistors across its cores, far more than the tens of billions found on even the largest conventional processors. Cerebras calls the wafer a Wafer Scale Engine.

Cerebras’ first CS-1 was sent to the U.S. Department of Energy’s Argonne National Laboratory. The 400,000 cores will be used to work on extremely difficult AI computing problems like studying cancer drug interactions. The Argonne National Lab is one of the world’s top buyers of supercomputers. 

The CS-1

The CS-1 is programmable with the Cerebras Software Platform and can be used with existing infrastructure, according to the startup. The Wafer Scale Engine (WSE) has more silicon area than the biggest graphics processing unit, and its 400,000 Sparse Linear Algebra Compute (SLAC) cores are flexible, programmable, and optimized for neural networks.

The CS-1 has a copper-colored block, or cold plate, that conducts heat away from the giant chip. Pipes of cold water are responsible for cooling, and fans blow cold air to carry heat away from the pipes. 

According to many, the big breakthrough is the software. Argonne has long worked on spreading a neural net over large numbers of individual chips, a notoriously difficult programming task on other supercomputer-class machines such as Google’s TPU Pods.

The Cerebras CS-1 is essentially one giant, self-contained chip on which the neural network can be placed. A program has been developed to optimize the way a neural network’s math operations are spread across the WSE’s circuits.

According to Rick Stevens, Argonne’s associate laboratory director for computing, environment, and life sciences, “We have tools to do this but nothing turnkey the way the CS-1 is, [where] it’s all done automatically.”
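
Cerebras has not published its placement algorithm, but the kind of problem its software automates can be illustrated with a deliberately simplified sketch: divide a fixed budget of cores among a network’s layers in proportion to each layer’s compute cost, so no single layer becomes the bottleneck. The layer names and FLOP counts below are invented for illustration:

```python
# A deliberately simplified sketch of the placement problem: split the
# CS-1's 400,000 cores among a network's layers in proportion to each
# layer's compute cost, so no layer becomes a bottleneck. The layer
# names and FLOP counts are invented; Cerebras's real placer is not public.
TOTAL_CORES = 400_000

layer_flops = {  # hypothetical per-layer multiply-accumulate counts
    "conv1": 1.2e9,
    "conv2": 3.4e9,
    "conv3": 2.1e9,
    "fc":    0.3e9,
}

total = sum(layer_flops.values())
allocation = {
    name: max(1, round(TOTAL_CORES * flops / total))
    for name, flops in layer_flops.items()
}

for name, cores in allocation.items():
    print(f"{name}: {cores} cores")
```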

Built From the Ground Up

According to Cerebras, it is the only startup to build a dedicated AI system from the ground up. To achieve this performance, Cerebras optimized every aspect of the CS-1’s chip design, system design, and software, allowing it to complete AI tasks that normally take months in minutes.

The supercomputer machine also greatly reduces training time, and single image classification can be completed in microseconds. 

In an interview with the technology website VentureBeat, Cerebras CEO Andrew Feldman said, “This is the largest square that you can cut out of a 300 millimeter wafer.” He continued, “Even though we have the largest and fastest chip, we know that an extraordinary processor is not necessarily sufficient to deliver extraordinary performance. If you want to deliver really fast performance, you need to build a system. And you can’t take a Ferrari engine and put it in a Volkswagen to get Ferrari performance. What you do is you move the bottlenecks if you want to get a 1,000 times performance gain.”

With the introduction of the CS-1 system, Cerebras has positioned itself as one of the leaders in the supercomputer industry. Its contribution will undoubtedly have a major impact on solving some of the world’s most pressing AI challenges, drastically decreasing the time it takes to tackle many problems.

Microsoft Partners with Startup Graphcore to Develop AI Chips

Microsoft hopes that its Azure cloud platform will catch up in popularity with Amazon’s and Google’s, so, as Wired reports, it has partnered with British startup Graphcore to develop a new computer chip capable of supporting the latest artificial intelligence workloads.

As Wired notes, the Bristol, UK startup Graphcore “has attracted considerable attention among AI researchers—and several hundred million dollars in investment—on the promise that its chips will accelerate the computations required to make AI work.” This is the first time since its founding in 2016 that the company has publicly shared its chips and test results.

Microsoft invested in Graphcore in December 2018 “as a part of a $200 million funding round,” as it wants to encourage the use of its cloud services by the growing number of customers that run AI applications.

Graphcore itself designed its chips from scratch “to support the calculations that help machines to recognize faces, understand speech, parse language, drive cars, and train robots.” The company expects that its chips will be used by “companies running business-critical operations on AI, such as self-driving car startups, trading firms, and operations that process large quantities of video and audio, as well as those working on next-generation AI algorithms.”

According to the benchmarks published by Microsoft and Graphcore on November 13, 2019, “the chip matches or exceeds the performance of the top AI chips from Nvidia and Google using algorithms written for those rival platforms. Code written specifically for Graphcore’s hardware may be even more efficient.”

The two companies also stated that “certain image-processing tasks work many times faster on Graphcore’s chips,” and that “they were able to train a popular AI model for language processing, called BERT, at rates matching those of any other existing hardware.”

Moor Insights AI chip specialist Karl Freund is of the opinion that the results show the new chip is “cutting-edge but still flexible,” and that “they’ve done a good job making it programmable,” an extremely hard thing to do.

Wired further adds that Nigel Toon, co-founder and CEO of Graphcore, says the companies began working together a year after his company’s launch, through Microsoft Research Cambridge in the UK. He also told the publication that his company’s chips are especially well-suited to tasks involving very large AI models or temporal data. One customer in finance reportedly saw a 26-fold performance boost in an algorithm used to analyze market data thanks to Graphcore’s hardware.

Several other, smaller companies used the occasion to announce that “they are working with Graphcore chips through Azure.” These include Citadel, which will use the chips to analyze financial data, and Qwant, a European search engine that wants the hardware to run an image-recognition algorithm known as ResNeXt.
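
Qwant’s actual deployment is not public, but the workload the article names, image recognition with ResNeXt, looks roughly like the following on ordinary hardware, here using torchvision’s pretrained ResNeXt-50 variant (the input file name is hypothetical):

```python
# A minimal sketch of ResNeXt-based image recognition using torchvision's
# pretrained ResNeXt-50. The input file name is hypothetical; Qwant's
# actual Azure/Graphcore deployment is not public.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnext50_32x4d(weights="IMAGENET1K_V1")
model.eval()

# Standard ImageNet preprocessing for torchvision classification models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1).item())  # predicted ImageNet class index
```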
