Alibaba announced on Tuesday that it has released its first artificial intelligence (AI) chip, the Hanguang 800. The chip accelerates certain AI computing tasks, and its release highlights China’s push into the semiconductor and artificial intelligence industries.
According to the company, the chip can complete certain tasks in five minutes that previously took one hour. Alibaba is using it within its own business operations in areas such as product search, automatic translation, and advertising, bringing a new level of efficiency to these computationally intensive tasks. After this internal rollout, the chip will eventually be made available to Alibaba Cloud customers.
According to Jeff Zhang, Alibaba’s chief technology officer, “In the near future, we plan to empower our clients by providing access through our cloud business to the advanced computing that is made possible by the chip, anytime and anywhere.”
The new chip can be critical in reducing Chinese companies’ dependence on U.S. technology. This comes at a time of heightened tensions between the United States and China. The current trade war is complicating relationships and business partnerships between tech companies from each nation.
The new chip could also play a role in expanding Alibaba Cloud into markets beyond China, where it is the current leader. Elsewhere, including in parts of the Asia-Pacific region, companies such as Microsoft, Amazon, and Google lead the market.
The Hanguang 800 was created by T-Head, a group within the Alibaba DAMO Academy, the company’s global research and development initiative, into which Alibaba has invested more than $15 billion.
“The launch of Hanguang 800 is an important step in our pursuit of next-generation technologies, boosting computing capabilities that will drive both our current and emerging businesses while improving energy-efficiency,” Jeff Zhang said.
As reported by the Financial Times, Alibaba is now joining the chip-making industry as a non-traditional creator. Others in that group include Chinese tech companies like Baidu and Tencent, along with Google and Facebook. All of these companies are in a worldwide race to develop the most powerful chips. These will play a huge role in increasing computing power and the new AI developments which are coming.
According to He Wei, a researcher at Tsinghua University’s department of precision instruments, “There’s a trend now for non-traditional chip companies to start developing chips, especially for AI chips, where there isn’t a clear leader.” He continued to say, “There will of course be some encouragement from the [Chinese] government for companies doing this.”
China is not on par with the United States in the development of high-end processor chips, but the gap is narrower for AI chips. The Chinese government has been directing resources toward designing and manufacturing chips domestically, with the goal of semiconductor self-sufficiency.
One of the big forces behind these developments is the use of open-source chip architectures such as RISC-V. Open architectures lower costs because chipmakers do not need to pay large licensing fees to the companies that design proprietary architectures.
These new developments show that China is going to be a major player in the chip industry. With the increasing tensions brought on by the trade war, one can expect more developments like these in the future. Whichever companies are able to develop the most powerful computing chips are going to have the upper hand in an array of industries such as artificial intelligence.
“Artificial Chemist” Performs Chemical Reactions
Artificial intelligence (AI) is making its way into every field, including chemistry. In the latest innovation, researchers from North Carolina State University and the University at Buffalo have developed a technology called the “Artificial Chemist,” which combines AI with an automated system to perform chemical reactions, accelerating the research, development, and manufacturing of commercial materials.
The paper, titled “Artificial Chemist: An Autonomous Quantum Dot Synthesis Bot,” is published in Advanced Materials.
Proof of Concept Experiments
In proof-of-concept experiments, the researchers demonstrated that the tool can identify and produce the best possible quantum dots for any color within 15 minutes or less. Quantum dots are colloidal semiconductor nanocrystals used in applications such as LED displays.
According to the researchers, the Artificial Chemist can also identify the best material for a variety of measurable properties, not just quantum dots.
Milad Abolhasani is an assistant professor of chemical and biomolecular engineering at NC State and a corresponding author of the paper.
“Artificial Chemist is a truly autonomous system that can intelligently navigate through the chemical universe,” says Abolhasani. “Currently, Artificial Chemist is designed for solution-processed materials – meaning it works for materials that can be made using liquid chemical precursors. Solution-processed materials include high-value materials such as quantum dots, metal/metal oxide nanoparticles, metal organic frameworks (MOFs), and so on.
“The Artificial Chemist is similar to a self-driving car, but a self-driving car at least has a finite number of routes to choose from in order to reach its pre-selected destination. With Artificial Chemist, you give it a set of desired parameters, which are the properties you want the final material to have. Artificial Chemist has to figure out everything else, such as what the chemical precursors will be and what the synthetic route will be, while minimizing the consumption of those chemical precursors.
“The end result is a fully autonomous materials development technology that not only helps you find the ideal solution-processed material more quickly than any techniques currently in use, but it does so using tiny amounts of chemical precursors. That significantly reduces waste and makes the materials development process much less expensive.”
The “Body” and “Brain”
The Artificial Chemist is able to perform experiments and sense the experimental results, as well as record the data and determine the next experiment.
In its proof-of-concept testing, the Artificial Chemist’s “body” incorporated and automated two flow synthesis platforms, the Nanocrystal Factory and NanoRobo. The technology ran 500 quantum dot synthesis experiments per day, and Abolhasani believes that number could reach 1,000.
The “brain” of the Artificial Chemist is an AI program that characterizes the materials being synthesized by the body. That data is then used to make autonomous decisions about the conditions for the next experiment, choosing the most efficient path toward the best material compositions.
The Artificial Chemist improves its ability to identify the right material over time by storing the data generated from every request it receives.
To govern how the AI decides what the next experiment will be, the researchers tested nine different decision policies. Through a series of requests, the Artificial Chemist was asked to identify the best quantum dot material for three different output parameters.
The results showed that it could identify the best quantum dot within an hour and a half, a time that dropped to 10 to 15 minutes once the system had prior knowledge to draw on.
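The closed loop described above — run an experiment, characterize the result, decide the next conditions — can be sketched as a toy in Python. Everything here is an assumption for illustration: the `emission_peak` stand-in, the condition ranges, and the random-search policy are placeholders, not the paper’s actual chemistry or any of its nine decision policies.

```python
import random

def emission_peak(temp_c, dose_ml):
    """Toy stand-in for the real synthesis + characterization step:
    maps reaction conditions to a measured emission-peak wavelength (nm)."""
    return 450 + 1.2 * (temp_c - 100) + 8.0 * dose_ml

def run_campaign(target_nm, n_experiments=50, seed=0):
    """Closed loop: propose conditions, 'measure', keep the best so far.
    A plain random-search policy; the real system uses smarter ones."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_experiments):
        temp = rng.uniform(80, 180)    # candidate reaction temperature (C)
        dose = rng.uniform(0.0, 5.0)   # candidate precursor dose (mL)
        peak = emission_peak(temp, dose)
        err = abs(peak - target_nm)    # distance to the requested property
        if best is None or err < best[0]:
            best = (err, temp, dose, peak)
    return best

best = run_campaign(target_nm=520)
print(f"best peak {best[3]:.1f} nm (target 520), error {best[0]:.2f} nm")
```

The point of the sketch is the structure, not the search method: more experiments (or a smarter policy, or stored prior knowledge) can only improve the best result found.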
“I believe autonomous materials R&D enabled by Artificial Chemist can re-shape the future of materials development and manufacturing,” Abolhasani said. “I’m now looking for partners to help us transfer the technique from the lab to the industrial sector.”
Ludovic Larzul, Founder and CEO of Mipsology – Interview Series
Ludovic Larzul is the founder and CEO of Mipsology, a groundbreaking startup focused on state-of-the-art acceleration of deep learning inference. The company has devised technology that accelerates the computation of inference neural networks while concealing the hardware accelerator from AI users. Mipsology’s Zebra is the first commercial accelerator to encapsulate this technology, providing high performance and ease of use.
What first got you interested in AI and microchips?
I worked in the design of a specific type of super-computer for about 20 years with my previous company EVE, before it was acquired by Synopsys in 2012. Those computers, also called ASIC emulators, are used by many companies designing ASICs around the world. I quite enjoyed the complexity and diversity of that work. To succeed, you have to (a) understand electronics, software, complex algorithms, how people design chips and how to make sure they work fine, chip architecture, power, and more deep tech, (b) correctly predict the needs of customers a few years in advance, (c) innovate continuously, and (d) as a startup, defeat the competition with far fewer resources. After 20 years of success, I was looking for a new challenge. This was the time when AI had started to come back into the spotlight. AlexNet had made a leap forward into understanding images (and looking back, it was still in its infancy). Deep learning was brand new but promising (Who remembers when it took days to get a result on a simple network?). I found that quite “fun”, but recognized there were many challenges.
What was the inspiration behind launching Mipsology?
I don’t know if I would use the word “inspiration.” It was initially more like: “Can we do something that would be different and better?” It started with assumptions of what AI people would like and do, and the next few years were spent finding ever-better solutions based on that. I guess more than inspiration, I would say that the people I work with like to be the best at what they create, in a positive attitude of competition. That makes a strong team that can solve problems others fail to solve adequately.
Mipsology uses FPGA boards instead of GPUs. Can you describe what FPGAs are?
FPGAs are electronic components that can be programmed at the hardware level. You can imagine an FPGA as a set of Legos — a few million of them. Each little block performs a simple operation, like holding a value, or a slightly more complex one, like addition. By grouping these blocks, it is possible to create a specific behavior after the chip is manufactured. This is the opposite of GPUs and almost all other chips, which are designed for a specific function and cannot be changed afterwards.
Some chips, like CPUs and GPUs, can be programmed, but they are not as parallel as FPGAs. At any given moment, an FPGA performs a few million simple operations, and this can happen six to seven hundred million times a second. Because they are programmable, what they do can be changed at any time to adapt to different problems, so that extraordinary computing power can be put to effective use. FPGAs are already almost everywhere, including mobile-network base stations, satellites, and cars. People don’t know them well, though, because they are not as visible as the CPU in a laptop.
What makes these FPGA boards the superior solution to the more popular GPUs?
FPGAs are superior in many respects. Let’s focus on a couple of important ones.
GPUs are designed for rendering images, mainly for games. They have been found to match well with some computations in AI because of the similarity of the operations. But they remain primarily dedicated to games, which means they come with constraints that do not fit well with neural networks.
Their programming is also limited to the instructions that were decided two or three years before they are available. The problem is that neural networks are advancing more quickly than the design of ASICs, and GPUs are ASICs. So, it is like trying to predict the future: it’s not simple to be right. You can see trends, but the details are what really impact the results, like performance. In contrast, because FPGAs are programmable at the hardware level, we can more easily keep up with the progress of AI. This allows us to deliver a better product with higher performance, and meet the customer’s needs without having to wait for the next silicon generation.
Furthermore, GPUs are designed to be consumer products. Their lifespan is intentionally short, because the companies designing GPUs want to sell new ones a few years later to gamers. This does not work well in electronic systems that need to be reliable for many years. FPGAs are designed to be robust and used 24/7 for many years.
Other well-known advantages of FPGAs include:
- There are many options that can fit in specific areas like networking or video processing
- They work as well in data centers as at the edge or in embedded
- They do not require special cooling (let alone the water cooling that big GPUs need)
One major drawback is that FPGAs are difficult to program; it requires specific knowledge. Even though the companies selling FPGAs have put great effort into bridging the complexity gap, programming one is still not as simple as programming a CPU. In truth, GPUs are not simple either, but the software that hides their programming for AI makes that knowledge unnecessary. That is the problem Mipsology is the first to solve: removing the need for AI users to program, or have any knowledge of, FPGAs.
Are there any current limitations to FPGA boards?
Some FPGA boards are like GPU boards: they can be plugged into a computer’s PCIe slots. One well-known advantage, on top of the lifespan I mentioned before, is that their power consumption is typically lower than that of GPUs. A less-known one is that there is a larger selection of FPGA boards than GPU boards. There are more FPGAs for more markets, which leads to more boards that fit different parts of those markets. This simply means there are more possibilities for computing neural networks everywhere, at lower cost. GPUs are more limited; they fit in data centers, but not much else.
Mipsology’s Zebra is the first commercial accelerator that encapsulates FPGA boards to provide high performance and ease of use. Can you describe what Zebra is?
For those who are familiar with AI and GPUs, the easiest description is that Zebra is to FPGAs what CUDA/cuDNN is to GPUs. It is a software stack that completely hides the FPGA behind the usual frameworks like PyTorch or TensorFlow. We are primarily targeting inference for images and videos. Zebra starts with a neural network that was typically trained in floating point and, without any manual user effort or proprietary tool, makes it run on any FPGA-based card. It is as simple as: plug in the FPGA board, load the driver, source the Zebra environment, and launch the same inference application as the one running on CPUs or GPUs. We have our own quantization that retains accuracy, and performance comes out of the box. There is no proprietary tool the user must learn, and it doesn’t take hours of engineering time to get high throughput or low latency. This means quick transitions, which also reduces cost and time to market.
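The claim that the application itself does not change can be illustrated with an ordinary PyTorch inference script. This is a generic sketch, not anything Zebra-specific: the tiny model below is a placeholder for a user’s trained network, and nothing in the script refers to the accelerator — which is precisely the point being made in the interview.

```python
import torch
import torch.nn as nn

# Placeholder for the user's trained network; in practice this would be a
# floating-point model loaded from a checkpoint (e.g. a trained ResNet).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

batch = torch.randn(1, 3, 224, 224)   # a dummy input image batch
with torch.no_grad():                 # inference only, no gradients
    logits = model(batch)
predicted_class = int(logits.argmax(dim=1))
print(predicted_class)
```

Per the interview, this same script is what would be launched after plugging in the FPGA board and sourcing the Zebra environment; the accelerator stays hidden behind the framework.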
What are the different types of applications that Zebra is best designed for?
Zebra is a very generic acceleration engine, so it can accelerate the computation for any application that needs to compute neural networks, with a primary focus on images and video because the computing needs are larger for this kind of data. We have requests from very different markets, but they are all similar when it comes to computing the neural networks. They all typically require classification, segmentation, super resolution, body positioning, etc.
As Zebra runs on top of FPGAs, any kind of boards can be used. Some have high throughput and are typically used in data centers. Others are more appropriate for use at the Edge or embedded. Our vision is that, if an FPGA can fit, users can use Zebra to accelerate their neural network computations right away. And if GPUs or CPUs are used, Zebra can replace them and reduce the costs of the AI infrastructure. Most of the companies we talk to are having similar issues: they could deploy more AI-based applications, but the cost is limiting them.
For a company that wishes to use Zebra, what is the process?
Simply let us know at firstname.lastname@example.org and we’ll get you started.
Is there anything else that you would like to share about Mipsology?
We are very encouraged by the feedback we get from the AI community on our Zebra solution. Specifically, we are told that this is probably the best accelerator on the market. After only a few months, we continue to add to a growing ecosystem of interested partners including Xilinx, Dell, Western Digital, Avnet, TUL and Advantech, to name a few.
I really enjoyed learning about this groundbreaking technology. Readers who wish to learn more should visit Mipsology.
New Study Suggests Robots in the Workforce Increase Income Inequality, With Impact Varying Greatly by Region
There have been many predictions about the future of work with artificial intelligence and automation, ranging from massive unemployment to the creation of many new jobs due to the technology. Now, a new study co-authored by an MIT professor has been released, providing some more insight into the replacement of workers by robots.
The paper is titled “Robots and Jobs: Evidence from U.S. Labor Markets” and was authored by MIT economist Daron Acemoglu and Pascual Restrepo Ph.D. ‘16, who is an assistant professor of economics at Boston University. It can be found in the Journal of Political Economy.
One of the findings of the study is that the impact of robots, specifically in the United States, will greatly depend on the industry and region. It also found that the technology can dramatically increase income inequality.
According to Acemoglu, “We find fairly major negative employment effects.” However, he also noted that the impact could be overstated.
The study found that between 1990 and 2007, the addition of one robot per 1,000 workers reduced the national employment-to-population ratio by approximately 0.3 percent. It also found that this figure varies across different areas of the U.S., with some regions affected far more than others.
In other words, an average of 3.3 workers nationally were replaced for each additional robot added in manufacturing.
Another key finding of the study was that during the same time period, wages were lowered by about 0.4 percent due to the increased use of robots in the workplace.
“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.
Data Used in the Study
The study drew on data covering 19 industries, compiled by the International Federation of Robotics (IFR), a Frankfurt-based industry group that gathers detailed data on robot deployments around the world. This was combined with U.S. data on population, employment, business activity, and wages, taken from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics.
The researchers also compared robot deployment in the U.S. with other countries and found that the U.S. lags Europe in this regard: between 1993 and 2007, European firms introduced 1.6 new robots per 1,000 workers, compared with one new robot per 1,000 workers at U.S. firms.
“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.
Hardest Hit Areas in the U.S.
By analyzing 722 commuting zones in the continental U.S. and the impact of robots on each, the study found dramatic geographic differences in robot usage.
One of the areas most affected by this technology is the automobile industry, and some of the major hubs of that industry, including Detroit, Lansing, and Saginaw, are some of the hardest-hit areas.
“Different industries have different footprints in different places in the U.S.,” Acemoglu says. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”
Each robot replaces about 6.6 jobs locally in the commuting zones where robots are put to work. One of the study’s more interesting findings is that when robots are added in manufacturing, other industries and areas around the country benefit, for example through lower goods prices. That is why the national net figure comes out to 3.3 jobs replaced per robot, rather than 6.6.
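As a back-of-the-envelope illustration of the local-versus-national gap, the article’s per-robot figures (6.6 jobs locally, 3.3 nationally) can be scaled by a hypothetical number of robots. The constants and the 100-robot scenario below are just the article’s numbers applied arithmetically, not anything from the underlying study.

```python
def jobs_displaced(robots_added, per_robot_local=6.6, per_robot_national=3.3):
    """Scale the study's headline per-robot figures (as reported in this
    article) by a hypothetical number of robots added in one area."""
    return robots_added * per_robot_local, robots_added * per_robot_national

# Hypothetical: 100 robots added in a single commuting zone.
local, national = jobs_displaced(100)
print(round(local), round(national))  # local jobs lost vs. net national
```

The gap between the two numbers is the offsetting benefit that other regions capture through effects like cheaper goods.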
The researchers also found that income inequality is directly affected by the introduction of robots. This is largely due to the fact that in the areas where many of these jobs are replaced, there is a lack of other good employment opportunities.
“There are major distributional implications,” Acemoglu says. “The burden falls on the low-skill and especially middle-skill workers. That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”
“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu continues. “But it does imply that automation is a real force to be grappled with.”