
Manufacturing

Engineers Create AI From Only a Sheet of Glass


A group of engineers and scientists at the University of Wisconsin-Madison has developed an AI made from nothing but a piece of glass, with no sensors, circuits, or power source. It performs the same kind of image recognition that powers facial recognition in today’s smartphones, but in a far simpler way. Zongfu Yu, a professor of electrical and computer engineering, commented on the new research, which has been published in Photonics Research. 

“We’re using optics to condense the normal setup of cameras, sensors, and deep neural networks into a single piece of thin glass.” 

As of now, AI consumes considerable computational resources and energy for tasks like facial recognition. With this new technology, a simple sheet of glass could potentially do the same work with no power source at all. 

The new AI consists of pieces of glass that look like small translucent squares. Bubbles and other small impurities are distributed at carefully chosen locations within the glass; they bend incoming light so that the glass can recognize particular images. Instead of running code, the glass performs the computation as a purely analog material. 

“We’re accustomed to digital computing, but this has broadened our view…The wave dynamics of light propagation provide a new way to perform analog artificial neural computing,” says Zongfu Yu. 

Since this new type of AI glass relies on no power, circuits, or internet connection, it can last far longer than conventional hardware. According to the engineers and scientists, there is no reason to believe a single piece of glass couldn’t last for thousands of years.

According to Yu, “We could potentially use the glass as a biometric lock, tuned to recognize only one person’s face…Once built, it would last forever without needing power or internet, meaning it could keep something safe for you even after thousands of years.” 

If implemented in something like a smartphone, this technology would drastically improve battery life, since the phone would no longer have to dedicate large amounts of energy to tasks like facial recognition. To top it all off, the piece of glass is inexpensive to produce. 

The engineers and researchers at the University of Wisconsin-Madison demonstrated the approach by using pieces of glass to identify handwritten numbers. Light from an image of a written number entered one side of the glass. On the other side were nine spots corresponding to different digits, and the light focused on one of them as it moved through the glass. The team tuned the glass by making many small adjustments to the locations of the bubbles and impurities. After thousands of iterations, the glass could even detect when a handwritten 3 was altered to become an 8. 
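The tuning procedure described here (nudge an impurity, keep the change if recognition improves) is, in spirit, a stochastic hill climb. The sketch below is a loose toy analogy only, not the researchers’ actual method: the “glass” is just a weight matrix, the “digits” are random vectors, and every name is hypothetical; the real work optimizes a simulated wave-propagation model.

```python
import random

random.seed(0)

# Toy stand-in for the glass: a weight matrix mapping a flattened
# "image" to light intensity at output spots (one spot per digit).
N_PIXELS, N_SPOTS = 16, 9
glass = [[random.uniform(-1, 1) for _ in range(N_PIXELS)] for _ in range(N_SPOTS)]

# A handful of toy (image, label) pairs standing in for handwritten digits.
data = [([random.random() for _ in range(N_PIXELS)], random.randrange(N_SPOTS))
        for _ in range(20)]

def brightest_spot(glass, image):
    # The predicted digit is the output spot receiving the most light.
    sums = [sum(w * p for w, p in zip(row, image)) for row in glass]
    return sums.index(max(sums))

def accuracy(glass):
    return sum(brightest_spot(glass, img) == lbl for img, lbl in data) / len(data)

best = accuracy(glass)
for _ in range(5000):
    # "Move a bubble": nudge one entry, keep the change only if it helps.
    i, j = random.randrange(N_SPOTS), random.randrange(N_PIXELS)
    old = glass[i][j]
    glass[i][j] += random.uniform(-0.2, 0.2)
    new = accuracy(glass)
    if new >= best:
        best = new
    else:
        glass[i][j] = old

print(f"accuracy after tuning: {best:.2f}")
```

The accept-only-if-not-worse rule is the whole trick: thousands of tiny, individually cheap adjustments add up to a medium that routes light where it should go.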

The engineers now want to push this further and try to get it to work with things like facial recognition. This technology could really change the way AI operates, and it could play a big role in the development of more complex systems in the future. 

According to Ming Yuan, a Professor of Statistics at Columbia University who worked with the researchers, “The true power of this technology lies in its ability to handle much more complex classification tasks instantly without an energy consumption…These tasks are the key to create artificial intelligence; to teach driverless cars to recognize a traffic signal, to enable voice control in consumer devices, among numerous other examples.” 

If this technology keeps being developed, it could genuinely change how some of our AI operates. The possibilities are wide open when a simple sheet of glass can perform tasks as complex as facial recognition. 

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Manufacturing

“Artificial Chemist” Performs Chemical Reactions


Artificial intelligence (AI) is making its way into every field, including chemistry. In the latest innovation, researchers from North Carolina State University and the University at Buffalo have developed a new technology called “Artificial Chemist.” It combines artificial intelligence with an automated system to perform chemical reactions, accelerating the research, development, and manufacturing of commercial materials. 

The paper, titled “Artificial Chemist: An Autonomous Quantum Dot Synthesis Bot,” is published in Advanced Materials. 

Proof of Concept Experiments

In proof-of-concept experiments, the researchers demonstrated that the tool can identify and produce the best possible quantum dots for any color, and can do so within 15 minutes or less. Quantum dots are colloidal semiconductor nanocrystals used in applications such as LED displays.

According to the researchers, the Artificial Chemist can also identify the best material for a variety of measurable properties, not just quantum dots. 

Milad Abolhasani is an assistant professor of chemical and biomolecular engineering at NC State and a corresponding author of the paper. 

“Artificial Chemist is a truly autonomous system that can intelligently navigate through the chemical universe,” says Abolhasani. “Currently, Artificial Chemist is designed for solution-processed materials – meaning it works for materials that can be made using liquid chemical precursors. Solution-processed materials include high-value materials such as quantum dots, metal/metal oxide nanoparticles, metal organic frameworks (MOFs), and so on.

“The Artificial Chemist is similar to a self-driving car, but a self-driving car at least has a finite number of routes to choose from in order to reach its pre-selected destination. With Artificial Chemist, you give it a set of desired parameters, which are the properties you want the final material to have. Artificial Chemist has to figure out everything else, such as what the chemical precursors will be and what the synthetic route will be, while minimizing the consumption of those chemical precursors.

“The end result is a fully autonomous materials development technology that not only helps you find the ideal solution-processed material more quickly than any techniques currently in use, but it does so using tiny amounts of chemical precursors. That significantly reduces waste and makes the materials development process much less expensive.”

The “Body” and “Brain” 

The Artificial Chemist is able to perform experiments and sense the experimental results, as well as record the data and determine the next experiment. 

The Artificial Chemist’s “body” incorporated and automated two specific flow-synthesis platforms in its proof-of-concept testing: Nanocrystal Factory and NanoRobo. The technology ran 500 quantum dot synthesis experiments per day, though Abolhasani believes that number could reach 1,000. 

The “brain” of the Artificial Chemist is an AI program that is capable of characterizing the materials that are being synthesized by the body. The data is then used to make autonomous decisions about experimental conditions for the next experiment. Those decisions revolve around what is the most efficient path to achieving the best material compositions. 

The Artificial Chemist improves its capability to identify the right material over time by storing data that is generated from every request it receives. 
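The decide, run, record cycle described above can be sketched as a minimal closed loop. This is an illustration only, under stated assumptions: the objective function, parameter names, and simple explore/exploit policy here are all made up, standing in for the real flow reactors, in-line characterization, and more sophisticated decision policies.

```python
import random

random.seed(1)

# Hypothetical stand-in for "run a synthesis and measure the result";
# in the real system this is a flow reactor plus in-line spectroscopy.
def run_experiment(temp, flow_rate):
    target_temp, target_flow = 70.0, 0.4   # unknown optimum to discover
    return -((temp - target_temp) ** 2 + 100 * (flow_rate - target_flow) ** 2)

history = []   # every (conditions, score) pair is kept, mirroring how
               # the system accumulates knowledge across requests

def next_conditions(history, explore=0.3):
    # Simple policy: usually perturb the best-known conditions,
    # occasionally try something completely new.
    if not history or random.random() < explore:
        return random.uniform(20, 120), random.uniform(0.1, 1.0)
    (best_t, best_f), _ = max(history, key=lambda h: h[1])
    return best_t + random.gauss(0, 5), best_f + random.gauss(0, 0.05)

for _ in range(200):                 # the closed loop: decide, run, record
    cond = next_conditions(history)
    history.append((cond, run_experiment(*cond)))

(best_t, best_f), best_score = max(history, key=lambda h: h[1])
print(f"best conditions found: T={best_t:.1f}, flow={best_f:.2f}")
```

Because `history` is never discarded, later requests start from accumulated data rather than from scratch, which is the same reason the real system gets faster over time.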

When it comes to the AI deciding what the next experiment will be, the researchers tested nine different policies. Through a series of requests, the Artificial Chemist was asked to identify the best quantum dot material for three different output parameters. 

The results showed that the system was able to identify the best quantum dot within about an hour and a half. Once it could draw on prior knowledge, that time dropped to 10 to 15 minutes. 

“I believe autonomous materials R&D enabled by Artificial Chemist can re-shape the future of materials development and manufacturing,” Abolhasani said. “I’m now looking for partners to help us transfer the technique from the lab to the industrial sector.”

 


Interviews

Ludovic Larzul, Founder and CEO of Mipsology – Interview Series



Ludovic Larzul is the founder and CEO of Mipsology, a groundbreaking startup focused on state-of-the-art acceleration of deep learning inference. The company has devised technology that accelerates the computation of inference in neural networks while concealing the hardware accelerator from AI users. Mipsology’s Zebra is the first commercial accelerator that encapsulates this technology to provide high performance and ease of use.

What first got you interested in AI and microchips?

I worked in the design of a specific type of super-computer for about 20 years with my previous company EVE, before it was acquired by Synopsys in 2012. Those computers, also called ASIC emulators, are used by many companies designing ASICs around the world. I quite enjoyed the complexity and diversity of that work. To succeed, you have to (a) understand electronics, software, complex algorithms, how people design chips and how to make sure they work fine, chip architecture, power, and more deep tech, (b) correctly predict the needs of customers a few years in advance, (c) innovate continuously, and (d) as a startup, defeat the competition with far fewer resources. After 20 years of success, I was looking for a new challenge. This was the time when AI had started to come back into the spotlight. AlexNet had made a leap forward into understanding images (and looking back, it was still in its infancy). Deep learning was brand new but promising (Who remembers when it took days to get a result on a simple network?). I found that quite “fun”, but recognized there were many challenges.

 

What was the inspiration behind launching Mipsology?

I don’t know if I would use the word “inspiration.” It was initially more like: “Can we do something that would be different and better?” It started with assumptions of what AI people would like and do, and the next few years were spent finding ever-better solutions based on that. I guess more than inspiration, I would say that the people I work with like to be the best at what they create, in a positive attitude of competition. That makes a strong team that can solve problems others fail to solve adequately.

 

Mipsology uses FPGA boards instead of GPUs. Can you describe what FPGAs are?

FPGAs are electronic components that can be programmed at the hardware level. You can imagine one as a set of Legos — a few million of them. Each little block performs a simple operation like keeping a value, or a slightly more complex operation like addition. By grouping all these blocks, it is possible to create a specific behavior after the chip is manufactured. This is the opposite of GPUs and almost all other chips, which are designed for a specific function and cannot be changed afterwards.

Some, like CPUs and GPUs, can be programmed, but they are not as parallel as FPGAs. At any given moment, an FPGA performs a few million simplistic operations, and this can happen six to seven hundred million times a second. Because they are programmable, what they do can be changed at any time to adapt to different problems, so the extraordinary computing power can be used effectively. FPGAs are already almost everywhere, including base stations for mobile phones, networks, satellites, cars, etc. People don’t know them well, though, because they are not as visible as the CPU in your laptop.
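Taking the figures just quoted at face value gives a rough sense of scale. The numbers in this back-of-the-envelope sketch (2 million parallel operations, 650 MHz) are illustrative round values picked from within the ranges mentioned, not vendor specifications.

```python
# Rough peak-throughput estimate from the figures quoted above:
# "a few million" simple operations in parallel, "six to seven
# hundred million times a second". 2 million ops and 650 MHz are
# illustrative round numbers within those ranges.
parallel_ops = 2_000_000          # simple operations per cycle
cycles_per_second = 650_000_000   # clock rate

peak_ops_per_second = parallel_ops * cycles_per_second
print(f"peak simple ops/sec: {peak_ops_per_second:.1e}")  # about 1.3e15
```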

 

What makes these FPGA boards the superior solution to the more popular GPUs?

FPGAs are superior in many aspects. Let’s just focus on a couple important ones.

GPUs are designed for rendering images, mainly for games. They have been found to match well with some computations in AI because of the similarity of the operations. But they remain primarily dedicated to games, which means they come with constraints that do not fit well with neural networks.

Their programming is also limited to the instructions that were decided two or three years before they are available. The problem is that neural networks are advancing more quickly than the design of ASICs, and GPUs are ASICs. So, it is like trying to predict the future: it’s not simple to be right. You can see trends, but the details are what really impact the results, like performance. In contrast, because FPGAs are programmable at the hardware level, we can more easily keep up with the progress of AI. This allows us to deliver a better product with higher performance, and meet the customer’s needs without having to wait for the next silicon generation.

Furthermore, GPUs are designed to be consumer products. Their lifespan is intentionally short, because the companies designing GPUs want to sell new ones a few years later to gamers. This does not work well in electronic systems that need to be reliable for many years. FPGAs are designed to be robust and used 24/7 for many years.

Other well-known advantages of FPGAs include:

  • There are many options that can fit in specific areas like networking or video processing
  • They work as well in data centers as at the edge or in embedded
  • They do not require specific cooling (much less water cooling like big GPUs)

One major drawback is that FPGAs are difficult to program. It requires specific knowledge. Even though companies selling FPGAs have put great effort into bridging the complexity gap, it is still not as simple as a CPU. In truth, GPUs are not simple either, but the software that hides their programming for AI makes that knowledge unnecessary. That is the problem Mipsology is the first to solve: removing the need for people computing AI to program, or have any knowledge of, FPGAs.

 

Are there any current limitations to FPGA boards?

Some FPGA boards are like some GPU boards: they can be plugged into a computer’s PCIe slots. One well-known advantage, on top of the lifespan I mentioned before, is that their power consumption is typically lower than that of GPUs. A less well-known one is that there is a larger selection of FPGA boards than GPU boards. There are more FPGAs for more markets, which leads to more boards that fit different areas of those markets. This simply means there are more possibilities for computing neural networks everywhere at lower cost. GPUs are more limited; they fit in data centers, but not much else.

 

Mipsology’s Zebra is the first commercial accelerator that encapsulates FPGA boards to provide high performance and ease of use. Can you describe what Zebra is?

For those who are familiar with AI and GPUs, the easiest description is that Zebra is to FPGAs what CUDA/cuDNN is to GPUs. It is a software stack that completely hides the FPGA behind usual frameworks like PyTorch or TensorFlow. We are primarily targeting inference for images and videos. Zebra starts with a neural network that was typically trained in floating point and, without any manual user effort or proprietary tool, makes it run on any FPGA-based card. It is as simple as: plug in the FPGA board, load the driver, source the Zebra environment, and launch the same inference application as the one running on CPUs or GPUs. We have our own quantization that retains the accuracy, and performance is out of the box. There is no proprietary tool the user must learn, and it doesn’t take hours of engineering time to get high throughput or low latency. That simply means quick transitions, which also reduces cost and time to market.

 

What are the different types of applications that Zebra is best designed for?

Zebra is a very generic acceleration engine, so it can accelerate the computation for any application that needs to compute neural networks, with a primary focus on images and video because the computing needs are larger for this kind of data. We have requests from very different markets, but they are all similar when it comes to computing the neural networks. They all typically require classification, segmentation, super resolution, body positioning, etc.

As Zebra runs on top of FPGAs, any kind of boards can be used. Some have high throughput and are typically used in data centers. Others are more appropriate for use at the Edge or embedded. Our vision is that, if an FPGA can fit, users can use Zebra to accelerate their neural network computations right away. And if GPUs or CPUs are used, Zebra can replace them and reduce the costs of the AI infrastructure. Most of the companies we talk to are having similar issues: they could deploy more AI-based applications, but the cost is limiting them.

 

For a company that wishes to use Zebra, what is the process?

Simply let us know at support@mipsology.com and we’ll get you started.

 

Is there anything else that you would like to share about Mipsology?

We are very encouraged by the feedback we get from the AI community on our Zebra solution. Specifically, we are told that it is probably the best accelerator on the market. After only a few months, we continue to add to a growing ecosystem of interested partners including Xilinx, Dell, Western Digital, Avnet, TUL and Advantech, to name a few.

I really enjoyed learning about this groundbreaking technology. Readers who wish to learn more should visit Mipsology.


Manufacturing

New Study Suggests Robots in the Workforce Increase Income Inequality, With Impact Varying Greatly by Region


There have been many predictions about the future of work with artificial intelligence and automation, ranging from massive unemployment to the creation of many new jobs due to the technology. Now, a new study co-authored by an MIT professor has been released, providing some more insight into the replacement of workers by robots. 

The paper is titled “Robots and Jobs: Evidence from U.S. Labor Markets” and was authored by MIT economist Daron Acemoglu and Pascual Restrepo Ph.D. ‘16, who is an assistant professor of economics at Boston University. It can be found in the Journal of Political Economy. 

One of the findings of the study is that the impact of robots, specifically in the United States, will greatly depend on the industry and region. It also found that income inequality can be dramatically increased due to the technology. 

According to Acemoglu, “We find fairly major negative employment effects.” However, Acemoglu also said that the impact could be overstated. 

The study found that between 1990 and 2007, the addition of one robot per 1,000 workers reduced the national employment-to-population ratio by approximately 0.3 percent. It also found that this number differs across areas of the U.S., with some regions affected far more than others. 

In other terms, an average of 3.3 workers nationally were replaced for each additional robot that was added in manufacturing. 
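The two headline figures are consistent with each other, as a quick back-of-the-envelope check shows; 0.33 percentage points is used below as a round value for the reported “approximately 0.3 percent.”

```python
# One robot per 1,000 workers lowers the employment-to-population
# ratio by roughly 0.33 percentage points. Applied to those 1,000
# workers, that is the ~3.3 jobs per robot the study reports.
workers = 1_000
ratio_drop = 0.33 / 100           # 0.33 percentage points

jobs_lost_per_robot = workers * ratio_drop
print(f"jobs lost per robot (national average): {jobs_lost_per_robot:.1f}")  # 3.3
```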

Another key finding of the study was that during the same time period, wages were lowered by about 0.4 percent due to the increased use of robots in the workplace. 

“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.

Data Used in the Study

The study was conducted with data from 19 different industries, which was compiled by the International Federation of Robotics (IFR). The IFR is an industry group based in Frankfurt that gathers detailed data on robot deployments around the world. The data was then combined with more from the U.S., which was based on population, employment, business, and wages. The U.S. data was taken from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics. 

One of the other methods used in the study was to compare robot deployment in the U.S. to that in other countries, and the researchers found that the U.S. lags Europe in this regard. Compared to Europe’s 1.6 new robots introduced per 1,000 workers between 1993 and 2007, U.S. firms introduced only one new robot per 1,000 workers. 

“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.

Hardest Hit Areas in the U.S. 

By analyzing 722 different commuting zones in the continental U.S., as well as the impact of robots on them, the study found that there are dramatic differences in the usage of robots based on geographical location. 

One of the areas most affected by this technology is the automobile industry, and some of the major hubs of that industry, including Detroit, Lansing, and Saginaw, are some of the hardest-hit areas. 

“Different industries have different footprints in different places in the U.S.,” Acemoglu says. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”

Each robot replaces about 6.6 jobs locally in the commuting zones where robots enter the workforce. One of the more interesting findings of the study is that whenever robots are added in manufacturing, other industries and regions around the country benefit, through effects like a lower cost of goods. This is why the study arrived at a lower national figure of 3.3 jobs replaced per robot added across the entire U.S. 

The researchers also found that income inequality is directly affected by the introduction of robots. This is largely due to the fact that in the areas where many of these jobs are replaced, there is a lack of other good employment opportunities. 

“There are major distributional implications,” Acemoglu says. “The burden falls on the low-skill and especially middle-skill workers. That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”

“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu continues. “But it does imply that automation is a real force to be grappled with.”

 
