
Manufacturing

Intel’s New Neuromorphic Chips Are 1,000 Times Faster Than Normal CPUs

Intel’s new system, codenamed Pohoiki Beach, will be at the Consumer Electronics Show (CES) in Las Vegas. The device is built from 64 Loihi research chips, and the goal is for it to mimic the human brain’s learning ability and energy efficiency. These neuromorphic chips model a simplified version of the way neurons and synapses function in the brain. 

Rich Uhlig, managing director of Intel Labs, spoke on the new technology. 

“We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems.” 

The new AI neuromorphic chip can perform data-crunching tasks 1,000 times faster than normal processors like CPUs and GPUs while using a lot less power. 

Basing hardware on brain neurons is not entirely new. Many AI algorithms already simulate neural networks in software, using parallel processing to recognize objects in images and words in speech. Neuromorphic chips build these neural networks directly into silicon. While they are less flexible and less powerful than the best general-purpose chips, they shine when specialized for specific tasks, and the new AI chip from Intel is 10,000 times more efficient than general-purpose processors. 

Because they are so energy efficient, the technology is well suited to mobile devices, vehicles, industrial equipment, cybersecurity, and smart homes. AI researchers have already begun using the system for tasks such as improving prosthetic limbs so that they adapt better to uneven ground, and creating digital maps for self-driving cars. 
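To make the idea of putting neurons and synapses into silicon more concrete, the sketch below simulates a leaky integrate-and-fire neuron, the kind of simplified spiking model that neuromorphic hardware implements directly. It is a textbook illustration in Python with arbitrary parameters, not a representation of Loihi’s actual circuits.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the kind of simplified
# neuron model neuromorphic chips implement in silicon.
# Illustrative only -- parameters and inputs are arbitrary, not Loihi's.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current   # integrate input, leak charge
        if potential >= threshold:               # fire when threshold is crossed
            spikes.append(t)
            potential = 0.0                      # reset after the spike
    return spikes

if __name__ == "__main__":
    # A constant drip of input current produces periodic spikes.
    print(simulate_lif([0.3] * 20))
```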

Chris Eliasmith, co-CEO of Applied Brain Research and a professor at the University of Waterloo, is one of several researchers using the new technology. 

“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT interface hardware…Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time,” Chris Eliasmith said. 

Konstantinos Michmizos is a professor at Rutgers University, and his lab’s work on simultaneous localization and mapping (SLAM) will be presented at the International Conference on Intelligent Robots and Systems (IROS) in November. 

“Loihi allowed us to realize a spiking neural network that imitates the brain’s underlying neural representations and behavior. The SLAM solution emerged as a property of the network’s structure. We benchmarked the Loihi-run network and found it to be equally accurate while consuming 100 times less energy than a widely used CPU-run SLAM method for mobile robots,” he said. 

As of right now, Pohoiki Beach is an 8-million-neuron system. Rich Uhlig believes the company will be able to create a system that simulates 100 million neurons by the end of 2019. Researchers will be able to apply the technology to a wide range of problems, such as improving robotic arms, and these developments are paving the way toward the commercialization of neuromorphic technology. 

According to the company, “Later this year, Intel will introduce an even larger Loihi system named Pohoiki Springs, which will build on the Pohoiki Beach architecture to deliver an unprecedented level of performance and efficiency for scaled-up neuromorphic workloads.” 

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Manufacturing

“Artificial Chemist” Performs Chemical Reactions

Artificial intelligence (AI) is making its way into every field, including chemistry. In the latest innovation, researchers from North Carolina State University and the University at Buffalo have developed a new technology called “Artificial Chemist.” It uses artificial intelligence and an automated system to perform chemical reactions, accelerating the research, development, and manufacturing of commercial materials. 

The paper, titled “Artificial Chemist: An Autonomous Quantum Dot Synthesis Bot,” is published in Advanced Materials. 

Proof of Concept Experiments

In proof-of-concept experiments, the researchers demonstrated that the tool can identify and produce the best possible quantum dots for any color in 15 minutes or less. Quantum dots are colloidal semiconductor nanocrystals used in applications such as LED displays.

According to the researchers, the Artificial Chemist can also identify the best material for a variety of measurable properties, not just for quantum dots. 

Milad Abolhasani is an assistant professor of chemical and biomolecular engineering at NC State and a corresponding author of the paper. 

“Artificial Chemist is a truly autonomous system that can intelligently navigate through the chemical universe,” says Abolhasani. “Currently, Artificial Chemist is designed for solution-processed materials – meaning it works for materials that can be made using liquid chemical precursors. Solution-processed materials include high-value materials such as quantum dots, metal/metal oxide nanoparticles, metal organic frameworks (MOFs), and so on.

“The Artificial Chemist is similar to a self-driving car, but a self-driving car at least has a finite number of routes to choose from in order to reach its pre-selected destination. With Artificial Chemist, you give it a set of desired parameters, which are the properties you want the final material to have. Artificial Chemist has to figure out everything else, such as what the chemical precursors will be and what the synthetic route will be, while minimizing the consumption of those chemical precursors.

“The end result is a fully autonomous materials development technology that not only helps you find the ideal solution-processed material more quickly than any techniques currently in use, but it does so using tiny amounts of chemical precursors. That significantly reduces waste and makes the materials development process much less expensive.”

The “Body” and “Brain” 

The Artificial Chemist is able to perform experiments and sense the experimental results, as well as record the data and determine the next experiment. 

The Artificial Chemist’s “body” incorporated and automated two flow synthesis platforms, the Nanocrystal Factory and NanoRobo, in its proof-of-concept testing. While the technology was able to run 500 quantum dot synthesis experiments per day, Abolhasani believes that number could reach 1,000. 

The “brain” of the Artificial Chemist is an AI program that characterizes the materials being synthesized by the body. That data is then used to make autonomous decisions about the conditions for the next experiment, choosing the most efficient path toward the best material composition. 
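The article does not detail the decision policy itself, so the sketch below shows, in hedged form, how such a closed loop can be organized: propose synthesis conditions, run (here, simulate) an experiment, score the result against the requested target, and keep the best candidate. Every name and the toy emission model are invented for illustration; the real system uses its own flow-synthesis hardware and learned decision policies rather than the naive random search shown here.

```python
import random

# Hypothetical sketch of an autonomous experiment-selection loop:
# propose conditions, "run" the experiment, score the outcome against
# the requested target, and keep the best candidate so far. The names
# and the toy emission model are invented; this is not the Artificial
# Chemist's actual software, which replaces random search with
# data-driven decision policies.

def run_experiment(conditions):
    """Stand-in for a real synthesis and characterization step."""
    # Pretend the measured peak emission (nm) depends on one knob.
    return 400 + 200 * conditions["precursor_ratio"] + random.gauss(0, 5)

def autonomous_search(target_emission_nm, budget=50):
    best = None
    for _ in range(budget):
        conditions = {"precursor_ratio": random.uniform(0.0, 1.0)}
        measured = run_experiment(conditions)
        error = abs(measured - target_emission_nm)
        if best is None or error < best[0]:
            best = (error, conditions, measured)
    return best

if __name__ == "__main__":
    error, conditions, measured = autonomous_search(target_emission_nm=520)
    print(f"best ratio {conditions['precursor_ratio']:.2f} -> "
          f"{measured:.1f} nm (off by {error:.1f} nm)")
```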

The Artificial Chemist improves its capability to identify the right material over time by storing data that is generated from every request it receives. 

When it comes to the AI deciding what the next experiment will be, the researchers tested nine different policies. Through a series of requests, the Artificial Chemist was asked to identify the best quantum dot material for three different output parameters. 

The results showed that it was able to identify the best quantum dot within one and a half hours, and that time dropped to 10 to 15 minutes once the system had prior knowledge to draw on. 

“I believe autonomous materials R&D enabled by Artificial Chemist can re-shape the future of materials development and manufacturing,” Abolhasani said. “I’m now looking for partners to help us transfer the technique from the lab to the industrial sector.”

 


Interviews

Ludovic Larzul, Founder and CEO of Mipsology – Interview Series

Ludovic Larzul is the founder and CEO of Mipsology, a groundbreaking startup focused on state-of-the-art acceleration for deep learning inference. The company has devised technology that accelerates the computations of inference neural networks while hiding the hardware accelerator from AI users. Mipsology’s Zebra is the first commercial accelerator to encapsulate this technology, providing high performance and ease of use.

What first got you interested in AI and microchips?

I worked in the design of a specific type of super-computer for about 20 years with my previous company EVE, before it was acquired by Synopsys in 2012. Those computers, also called ASIC emulators, are used by many companies designing ASICs around the world. I quite enjoyed the complexity and diversity of that work. To succeed, you have to (a) understand electronics, software, complex algorithms, how people design chips and how to make sure they work fine, chip architecture, power, and more deep tech, (b) correctly predict the needs of customers a few years in advance, (c) innovate continuously, and (d) as a startup, defeat the competition with far fewer resources. After 20 years of success, I was looking for a new challenge. This was the time when AI had started to come back into the spotlight. AlexNet had made a leap forward into understanding images (and looking back, it was still in its infancy). Deep learning was brand new but promising (Who remembers when it took days to get a result on a simple network?). I found that quite “fun”, but recognized there were many challenges.

 

What was the inspiration behind launching Mipsology?

I don’t know if I would use the word “inspiration.” It was initially more like: “Can we do something that would be different and better?” It started with assumptions of what AI people would like and do, and the next few years were spent finding ever-better solutions based on that. I guess more than inspiration, I would say that the people I work with like to be the best at what they create, in a positive attitude of competition. That makes a strong team that can solve problems others fail to solve adequately.

 

Mipsology uses FPGA boards instead of GPUs. Can you describe what FPGAs are?

FPGAs are electronic components that can be programmed at the hardware level. You can imagine one as a set of Legos, a few million of them. Each little block performs a simple operation, like keeping a value, or a slightly more complex operation, like addition. By grouping all these blocks, it is possible to create a specific behavior after the chip is manufactured. This is the opposite of GPUs and almost all other chips, which are designed for a specific function and cannot be changed afterwards.

Some chips, like CPUs and GPUs, can be programmed, but they are not as parallel as FPGAs. At any given moment, an FPGA performs a few million simplistic operations. And this can happen six to seven hundred million times a second. Because they are programmable, what they do can be changed at any time to adapt to different problems, so the extraordinary computing power can be used effectively. FPGAs are already almost everywhere, including base stations of mobile phones, networks, satellites, cars, etc. People don’t know them well though, because they are not as visible as a CPU like the one in your laptop.
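Taking those rough figures at face value, a quick back-of-envelope calculation shows the scale of that parallelism. The numbers below are order-of-magnitude placeholders based on the interview’s own wording, not a benchmark of any particular FPGA.

```python
# Back-of-envelope arithmetic using the rough figures quoted above:
# "a few million" simple operations happening at once, repeated
# "six to seven hundred million times a second". Order of magnitude
# only; not a measurement of any specific FPGA.

parallel_ops_per_cycle = 2_000_000    # "a few million" blocks active at once
cycles_per_second = 650_000_000       # roughly 650 MHz

peak_simple_ops_per_second = parallel_ops_per_cycle * cycles_per_second
print(f"~{peak_simple_ops_per_second:.1e} simple operations per second")  # ~1.3e+15
```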

 

What makes these FPGA boards the superior solution to the more popular GPUs?

FPGAs are superior in many aspects. Let’s just focus on a couple important ones.

GPUs are designed for rendering images, mainly for games. They have been found to match well with some computations in AI because of the similarity of the operations. But they remain primarily dedicated to games, which means they come with constraints that do not fit well with neural networks.

Their programming is also limited to the instructions that were decided two or three years before they are available. The problem is that neural networks are advancing more quickly than the design of ASICs, and GPUs are ASICs. So, it is like trying to predict the future: it’s not simple to be right. You can see trends, but the details are what really impact the results, like performance. In contrast, because FPGAs are programmable at the hardware level, we can more easily keep up with the progress of AI. This allows us to deliver a better product with higher performance, and meet the customer’s needs without having to wait for the next silicon generation.

Furthermore, GPUs are designed to be consumer products. Their lifespan is intentionally short, because the companies designing GPUs want to sell new ones a few years later to gamers. This does not work well in electronic systems that need to be reliable for many years. FPGAs are designed to be robust and used 24/7 for many years.

Other well-known advantages of FPGAs include:

  • There are many options that can fit in specific areas like networking or video processing
  • They work as well in data centers as at the edge or in embedded systems
  • They do not require specific cooling (much less water cooling like big GPUs)

One major drawback is that FPGAs are difficult to program. It requires specific knowledge. Even though companies selling FPGAs have put great effort into bridging the complexity gap, it is still not as simple as a CPU. In truth, GPUs are not simple either, but the software that hides their programming for AI makes that knowledge unnecessary. That is the problem Mipsology is the first to solve: removing the need for AI users to program or have any knowledge of FPGAs.

 

Are there any current limitations to FPGA boards?

Some FPGA boards are like GPU boards: they can be plugged into a computer’s PCIe slots. One well-known advantage, on top of the lifespan I mentioned before, is that their power consumption is typically lower than that of GPUs. A less-known one is that there is a larger selection of FPGA boards than GPU boards. There are more FPGAs for more markets, which leads to more boards that fit different areas of those markets. This simply means there are more possibilities for computing neural networks everywhere at lower cost. GPUs are more limited; they fit in data centers, but not much else.

 

Mipsology’s Zebra is the first commercial accelerator that encapsulates FPGA boards to provide high performance and ease of use. Can you describe what Zebra is?

For those who are familiar with AI and GPUs, the easiest description is that Zebra is to FPGA what CUDA/cuDNN is to GPU. It is a software stack that completely hides the FPGA behind usual frameworks like PyTorch or TensorFlow. We are primarily targeting inference for images and videos. Zebra starts with a neural network that was typically trained in floating point, and without any manual user effort or proprietary tool, makes it run on any FPGA-based card. It is as simple as: plug in the FPGA board, load the driver, source the Zebra environment, and launch the same inference application as the one running on CPUs or GPUs. We have our own quantization that retains the accuracy, and performance is out of the box. There is no proprietary tool that the user must learn, and it doesn’t take hours of engineering time to get high throughput or low latency. This simply means quick transitions, which also reduces cost and time to market.
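The article does not reproduce Zebra’s own setup commands, so the sketch below only illustrates the framework-level claim: an ordinary PyTorch inference script like this one is meant to run unchanged whether the computation lands on a CPU, a GPU, or an FPGA-backed stack sitting underneath the framework. The model and input here are generic placeholders.

```python
import torch
from torchvision import models

# An ordinary PyTorch inference script. The point in the answer above is
# that an acceleration stack like Zebra sits underneath the framework, so
# a script like this should not need to change when the computation moves
# to an FPGA board. Generic sketch only; it does not show Zebra's setup.

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

batch = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image

with torch.no_grad():
    logits = model(batch)

print(logits.argmax(dim=1).item())    # predicted ImageNet class index
```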

 

What are the different types of applications that Zebra is best designed for?

Zebra is a very generic acceleration engine, so it can accelerate the computation for any application that needs to compute neural networks, with a primary focus on images and video because the computing needs are larger for this kind of data. We have requests from very different markets, but they are all similar when it comes to computing the neural networks. They all typically require classification, segmentation, super resolution, body positioning, etc.

As Zebra runs on top of FPGAs, any kind of board can be used. Some have high throughput and are typically used in data centers. Others are more appropriate for use at the edge or in embedded systems. Our vision is that, if an FPGA can fit, users can use Zebra to accelerate their neural network computations right away. And if GPUs or CPUs are used, Zebra can replace them and reduce the costs of the AI infrastructure. Most of the companies we talk to are having similar issues: they could deploy more AI-based applications, but the cost is limiting them.

 

For a company that wishes to use Zebra, what is the process?

Simply let us know at support@mipsology.com and we’ll get you started.

 

Is there anything else that you would like to share about Mipsology?

We are very encouraged about the feedback we get from the AI community for our Zebra solution. Specifically, we are told that this is probably the best accelerator on the market. After only a few months, we continue to add to a growing ecosystem of interested partners including Xilinx, Dell, Western Digital, Avnet, TUL and Advantech, to name a few.

I really enjoyed learning about this groundbreaking technology. Readers who wish to learn more should visit Mipsology.


Manufacturing

New Study Suggests Robots in the Workforce Increase Income Inequality, With Impact Depending Greatly on Region

There have been many predictions about the future of work with artificial intelligence and automation, ranging from massive unemployment to the creation of many new jobs due to the technology. Now, a new study co-authored by an MIT professor has been released, providing some more insight into the replacement of workers by robots. 

The paper is titled “Robots and Jobs: Evidence from U.S. Labor Markets” and was authored by MIT economist Daron Acemoglu and Pascual Restrepo Ph.D. ‘16, who is an assistant professor of economics at Boston University. It can be found in the Journal of Political Economy. 

One of the findings of the study is that the impact of robots, specifically in the United States, will greatly depend on industry and region. It also found that the technology can dramatically increase income inequality. 

According to Acemoglu, “We find fairly major negative employment effects.” However, Acemoglu also said that the impact could be overstated. 

The study found that between 1990 and 2007, the addition of one robot per 1,000 workers reduced the national employment-to-population ratio by approximately 0.3 percent. It also found that the effect varies across areas of the U.S., with some regions affected far more than others. 

In other words, an average of 3.3 workers nationally were replaced for each additional robot added in manufacturing. 

Another key finding of the study was that during the same time period, wages were lowered by about 0.4 percent due to the increased use of robots in the workplace. 

“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.

Data Used in the Study

The study was conducted with data on 19 industries compiled by the International Federation of Robotics (IFR), an industry group based in Frankfurt that gathers detailed data on robot deployments around the world. The robot data was then combined with U.S. data on population, employment, business activity, and wages, taken from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics. 

One of the other methods used in the study was to compare robot deployment in the U.S. to other countries, and the researchers found that the U.S. is behind Europe in this regard. Compared to Europe’s 1.6 new robots introduced per 1,000 workers between 1993 and 2007, U.S. firms only introduced one new robot per 1,000 workers. 

“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.

Hardest Hit Areas in the U.S. 

By analyzing 722 different commuting zones in the continental U.S., as well as the impact of robots on them, the study found that there are dramatic differences in the usage of robots based on geographical location. 

The automobile industry is one of the industries most affected by this technology, and some of its major hubs, including Detroit, Lansing, and Saginaw, are among the hardest-hit areas. 

“Different industries have different footprints in different places in the U.S.,” Acemoglu says. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”

Each robot replaces about 6.6 jobs locally in the commuting zones where robots are put to work. One of the more interesting findings of the study is that whenever robots are added in manufacturing, other industries and regions around the country benefit, due to effects such as a lower cost of goods. That is why the study arrives at a net figure of 3.3 jobs replaced per robot for the U.S. as a whole. 
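To see how the local and national coefficients differ in practice, the short sketch below applies the study’s reported per-robot figures to a hypothetical commuting zone; the deployment size is invented purely for illustration.

```python
# Illustrative arithmetic with the per-robot figures reported above.
# The deployment size is hypothetical, chosen only to contrast the
# local effect with the smaller net national effect.

JOBS_LOST_LOCALLY_PER_ROBOT = 6.6     # within the commuting zone adding robots
JOBS_LOST_NATIONALLY_PER_ROBOT = 3.3  # net effect once gains elsewhere are counted

robots_added = 1_000                  # hypothetical deployment in one zone

print(f"local jobs displaced:   {robots_added * JOBS_LOST_LOCALLY_PER_ROBOT:,.0f}")
print(f"net national jobs lost: {robots_added * JOBS_LOST_NATIONALLY_PER_ROBOT:,.0f}")
```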

The researchers also found that income inequality is directly affected by the introduction of robots. This is largely due to the fact that in the areas where many of these jobs are replaced, there is a lack of other good employment opportunities. 

“There are major distributional implications,” Acemoglu says. “The burden falls on the low-skill and especially middle-skill workers. That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”

“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu continues. “But it does imply that automation is a real force to be grappled with.”

 
