David Lindell, a graduate student in electrical engineering at Stanford University, and his team have developed a camera that can watch moving objects around corners. To test the new technology, Lindell wore a high-visibility tracksuit and moved around an empty room while a camera was aimed at a blank wall away from him. Using a high-powered laser, the team was able to watch all of his movements: single particles of laser light reflected off the walls around Lindell, and the camera's advanced sensors and a processing algorithm reconstructed the images from those photons.
Gordon Wetzstein, assistant professor of electrical engineering at Stanford, spoke about the newly developed technology.
“People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that,” he said. “We want to see things in 3D, around corners and beyond the visible light spectrum.”
The camera system that was tested will be presented at the SIGGRAPH 2019 conference on August 1.
The team has developed similar around-the-corner cameras in the past, but this one captures more light from more surfaces, sees wider and farther, and can monitor out-of-sight movement. The researchers hope these “superhuman vision systems” can be used in autonomous cars and robots so that they operate more safely than when controlled by a human.
One of the team’s main goals is to keep the system practical. They use hardware, scanning and image-processing speeds, and styles of imaging that are already used in autonomous car vision systems. One difference is that the new system can capture light bouncing off a variety of surfaces with different textures. Previous systems for seeing outside a camera’s line of sight worked only with objects that reflected light evenly and strongly.
One of the developments that enabled this technology was a laser 10,000 times more powerful than the one the team used last year. It scans a wall opposite the point of interest; the light bounces off the wall, hits the objects in the scene, and returns to the wall and the camera sensors. The sensor picks up small specks of the laser light and sends them to an algorithm, also developed by the team, that deciphers the specks to reconstruct the images.
“When you’re watching the laser scanning it out, you don’t see anything,” Lindell said. “With this hardware, we can basically slow down time and reveal these tracks of light. It almost looks like magic.”
The new system can scan at four frames per second and, with a computer graphics processing unit enhancing its capabilities, reconstruct scenes at up to 60 frames per second.
The team drew inspiration from other fields, such as seismic imaging, in which soundwaves bounced off underground layers of the Earth reveal what’s beneath the surface. They reconfigured their algorithm to similarly decipher light that bounces off hidden objects.
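To give some intuition for how an algorithm can recover hidden objects from recorded light echoes, here is a minimal time-of-flight backprojection sketch. It is a simplified stand-in, not the team's actual method (which is more sophisticated and seismic-migration-inspired); the grid sizes, time bins, and scene are all invented for illustration.

```python
import math

C = 3e8  # speed of light in m/s

def tof_bin(wall_pt, scene_pt, bin_width):
    """Time-of-flight histogram bin for a wall point / scene point pair."""
    d = math.dist(wall_pt, scene_pt)
    return int(round(2 * d / C / bin_width))  # round trip: wall -> object -> wall

def backproject(histograms, wall_points, voxels, bin_width):
    """Sum each recorded light echo into every voxel consistent with its delay."""
    intensity = [0.0] * len(voxels)
    for wall_pt, hist in zip(wall_points, histograms):
        for i, v in enumerate(voxels):
            b = tof_bin(wall_pt, v, bin_width)
            if 0 <= b < len(hist):
                intensity[i] += hist[b]
    return intensity

# Toy demo: one hidden point at (0, 0, 1), five laser spots on the wall (z = 0).
bin_width = 1e-10  # 0.1 ns time bins
wall = [(-0.5, 0, 0), (0, 0, 0), (0.5, 0, 0), (0, -0.5, 0), (0, 0.5, 0)]
hidden = (0.0, 0.0, 1.0)
histograms = []
for w in wall:
    h = [0.0] * 200
    h[tof_bin(w, hidden, bin_width)] = 1.0  # a single photon echo
    histograms.append(h)

candidates = [(0, 0, 0.8), (0.0, 0.0, 1.0), (0, 0, 1.2), (0.3, 0, 1.0)]
scores = backproject(histograms, wall, candidates, bin_width)
# The candidate matching the true hidden point accumulates the most echoes.
```

The key idea is that every wall point "votes" for all scene positions consistent with the delay it observed; only the true hidden position collects votes from every wall point.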
Matthew O’Toole, assistant professor at Carnegie Mellon University and previous postdoctoral fellow in Wetzstein’s lab, spoke about the new technology.
“There are many ideas being used in other spaces — seismology, imaging with satellites, synthetic aperture radar — that are applicable to looking around corners,” he said. “We’re trying to take a little bit from these fields and we’ll hopefully be able to give something back to them at some point.”
The team’s next step is testing the system on autonomous research cars. They also want to explore other applications, such as medical imaging, and to help drivers see through conditions such as fog, rain, sandstorms, and snow.
“Artificial Chemist” Performs Chemical Reactions
Artificial intelligence (AI) is making its way into every field, including chemistry. In the latest innovation, researchers from North Carolina State University and the University at Buffalo have developed a new technology called “Artificial Chemist.” It uses artificial intelligence and an automated system to perform chemical reactions, accelerating the research, development, and manufacturing of commercial materials.
The paper, titled “Artificial Chemist: An Autonomous Quantum Dot Synthesis Bot,” is published in Advanced Materials.
Proof of Concept Experiments
The researchers demonstrated in proof-of-concept experiments that the tool can identify and produce the best possible quantum dots for any color within 15 minutes or less. Quantum dots are colloidal semiconductor nanocrystals used in applications like LED displays.
According to the researchers, the Artificial Chemist can also identify the best material for any set of measurable properties, not only quantum dots.
Milad Abolhasani is an assistant professor of chemical and biomolecular engineering at NC State and a corresponding author of the paper.
“Artificial Chemist is a truly autonomous system that can intelligently navigate through the chemical universe,” says Abolhasani. “Currently, Artificial Chemist is designed for solution-processed materials – meaning it works for materials that can be made using liquid chemical precursors. Solution-processed materials include high-value materials such as quantum dots, metal/metal oxide nanoparticles, metal organic frameworks (MOFs), and so on.
“The Artificial Chemist is similar to a self-driving car, but a self-driving car at least has a finite number of routes to choose from in order to reach its pre-selected destination. With Artificial Chemist, you give it a set of desired parameters, which are the properties you want the final material to have. Artificial Chemist has to figure out everything else, such as what the chemical precursors will be and what the synthetic route will be, while minimizing the consumption of those chemical precursors.
“The end result is a fully autonomous materials development technology that not only helps you find the ideal solution-processed material more quickly than any techniques currently in use, but it does so using tiny amounts of chemical precursors. That significantly reduces waste and makes the materials development process much less expensive.”
The “Body” and “Brain”
The Artificial Chemist is able to perform experiments and sense the experimental results, as well as record the data and determine the next experiment.
The Artificial Chemist’s “body” incorporated and automated two specific flow synthesis platforms in its proof-of-concept testing: Nanocrystal Factory and NanoRobo. While the technology ran 500 quantum dot synthesis experiments per day, Abolhasani believes that number could reach 1,000.
The “brain” of the Artificial Chemist is an AI program that characterizes the materials being synthesized by the body. That data is then used to make autonomous decisions about the conditions for the next experiment, choosing the most efficient path to the best material compositions.
The Artificial Chemist improves its capability to identify the right material over time by storing data that is generated from every request it receives.
When it comes to the AI deciding what the next experiment will be, the researchers tested nine different policies. Through a series of requests, the Artificial Chemist was asked to identify the best quantum dot material for three different output parameters.
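As a toy illustration of what such a decision policy looks like (not one of the nine policies the researchers actually tested), the closed loop can be sketched as an adaptive search: run a pair of experiments, rule out part of the parameter range, repeat. The single "dose" parameter, the fake response function, and the budget below are all invented for illustration.

```python
def run_experiment(dose):
    """Stand-in for one automated synthesis + measurement cycle: returns the
    error between the produced material and the requested target."""
    return (dose - 0.62) ** 2  # optimum at dose = 0.62, unknown to the loop

def plan_experiments(lo, hi, budget):
    """Simple adaptive policy (ternary search): each pair of experiments
    discards one third of the remaining range of the synthesis parameter."""
    used = 0
    while used + 2 <= budget:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if run_experiment(m1) < run_experiment(m2):
            hi = m2  # the optimum cannot lie above m2
        else:
            lo = m1  # the optimum cannot lie below m1
        used += 2
    return (lo + hi) / 2

best_dose = plan_experiments(0.0, 1.0, budget=30)
```

After 30 simulated experiments the remaining interval has shrunk by a factor of (2/3)^15, so the estimate lands within a few thousandths of the true optimum. Real policies must also cope with measurement noise and multiple parameters, which is where more sophisticated (e.g., Bayesian) strategies come in.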
The results showed that it was able to identify the best quantum dot within one and a half hours. That time was reduced to 10 to 15 minutes once the system had prior knowledge to draw on.
“I believe autonomous materials R&D enabled by Artificial Chemist can re-shape the future of materials development and manufacturing,” Abolhasani said. “I’m now looking for partners to help us transfer the technique from the lab to the industrial sector.”
Ludovic Larzul, Founder and CEO of Mipsology – Interview Series
Ludovic Larzul is the founder and CEO of Mipsology, a groundbreaking startup focused on state-of-the-art acceleration for deep learning inference. They’ve devised technology to accelerate the computations of inference neural networks and conceal the hardware accelerator from AI users. Mipsology’s Zebra is the first commercial accelerator that encapsulates such technology to provide high performance and ease of use.
What first got you interested in AI and microchips?
I worked in the design of a specific type of super-computer for about 20 years with my previous company EVE, before it was acquired by Synopsys in 2012. Those computers, also called ASIC emulators, are used by many companies designing ASICs around the world. I quite enjoyed the complexity and diversity of that work. To succeed, you have to (a) understand electronics, software, complex algorithms, how people design chips and how to make sure they work fine, chip architecture, power, and more deep tech, (b) correctly predict the needs of customers a few years in advance, (c) innovate continuously, and (d) as a startup, defeat the competition with far fewer resources. After 20 years of success, I was looking for a new challenge. This was the time when AI had started to come back into the spotlight. AlexNet had made a leap forward into understanding images (and looking back, it was still in its infancy). Deep learning was brand new but promising (Who remembers when it took days to get a result on a simple network?). I found that quite “fun”, but recognized there were many challenges.
What was the inspiration behind launching Mipsology?
I don’t know if I would use the word “inspiration.” It was initially more like: “Can we do something that would be different and better?” It started with assumptions of what AI people would like and do, and the next few years were spent finding ever-better solutions based on that. I guess more than inspiration, I would say that the people I work with like to be the best at what they create, in a positive attitude of competition. That makes a strong team that can solve problems others fail to solve adequately.
Mipsology uses FPGA boards instead of GPUs. Can you describe what FPGAs are?
FPGAs are electronic components that can be programmed at the hardware level. You can imagine one as a set of Legos — a few million of them. Each little block performs a simple operation like keeping a value, or a slightly more complex operation like addition. By grouping all these blocks, it is possible to create a specific behavior after the chip is manufactured. This is the opposite of GPUs and almost all other chips, which are designed for a specific function and cannot be changed afterwards.
Some chips, like CPUs and GPUs, can be programmed, but they are not as parallel as FPGAs. At any given moment, an FPGA performs a few million simple operations, and this can happen six to seven hundred million times a second. Because they are programmable, what they do can be changed at any time to adapt to different problems, so their extraordinary computing power can be applied effectively. FPGAs are already almost everywhere, including base stations of mobile phones, networks, satellites, cars, etc. People don’t know them well, though, because they are not as visible as a CPU like the one in your laptop.
What makes these FPGA boards the superior solution to the more popular GPUs?
FPGAs are superior in many aspects. Let’s focus on a couple of important ones.
GPUs are designed for rendering images, mainly for games. They have been found to match well with some computations in AI because of the similarity of the operations. But they remain primarily dedicated to games, which means they come with constraints that do not fit well with neural networks.
Their programming is also limited to the instructions that were decided two or three years before they are available. The problem is that neural networks are advancing more quickly than the design of ASICs, and GPUs are ASICs. So, it is like trying to predict the future: it’s not simple to be right. You can see trends, but the details are what really impact the results, like performance. In contrast, because FPGAs are programmable at the hardware level, we can more easily keep up with the progress of AI. This allows us to deliver a better product with higher performance, and meet the customer’s needs without having to wait for the next silicon generation.
Furthermore, GPUs are designed to be consumer products. Their lifespan is intentionally short, because the companies designing GPUs want to sell new ones a few years later to gamers. This does not work well in electronic systems that need to be reliable for many years. FPGAs are designed to be robust and used 24/7 for many years.
Other well-known advantages of FPGAs include:
- There are many options that can fit in specific areas like networking or video processing
- They work as well in data centers as at the edge or in embedded
- They do not require special cooling (much less the water cooling that big GPUs need)
One major drawback is that FPGAs are difficult to program; it requires specific knowledge. Even though companies selling FPGAs have put great effort into bridging the complexity gap, it is still not as simple as programming a CPU. In truth, GPUs are not simple either, but the software that hides their programming for AI makes that knowledge unnecessary. That is the problem that Mipsology is the first to solve: removing the need for AI users to program, or have any knowledge of, FPGAs.
Are there any current limitations to FPGA boards?
Some FPGA boards are like GPU boards: they can be plugged into a computer’s PCIe slots. One well-known advantage, on top of the lifespan I mentioned before, is that their power consumption is typically lower than GPUs’. Another, less well known, is that there is a larger selection of FPGA boards than GPU boards. There are more FPGAs for more markets, which leads to more boards that fit different areas of those markets. This simply means there are more possibilities for computing neural networks everywhere at lower cost. GPUs are more limited; they fit in data centers, but not much else.
Mipsology’s Zebra is the first commercial accelerator that encapsulates FPGA boards to provide high performance and ease of use. Can you describe what Zebra is?
For those who are familiar with AI and GPUs, the easiest description is that Zebra is to FPGAs what CUDA/cuDNN is to GPUs. It is a software stack that completely hides the FPGA behind usual frameworks like PyTorch or TensorFlow. We are primarily targeting inference for images and videos. Zebra starts with a neural network that was typically trained in floating point and, without any manual user effort or proprietary tool, makes it run on any FPGA-based card. It is as simple as: plug in the FPGA board, load the driver, source the Zebra environment, and launch the same inference application as the one running on CPUs or GPUs. We have our own quantization that retains the accuracy, and performance is out of the box. There is no proprietary tool that the user must learn, and it doesn’t take hours of engineering time to get high throughput or low latency. This simply means quick transitions, which also reduces cost and time to market.
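Zebra's quantization scheme is proprietary, but the general idea the answer alludes to is standard: mapping trained floating-point weights onto low-precision integers, with a scale and zero point, so fixed-point hardware such as an FPGA can do the arithmetic. The sketch below shows generic 8-bit affine quantization; the bit width and sample weights are illustrative assumptions, not Mipsology's actual method.

```python
def quantize(weights, bits=8):
    """Affine post-training quantization: w ~ scale * (q - zero_point).
    NOTE: generic textbook scheme, not Mipsology's proprietary one."""
    qmin, qmax = 0, 2 ** bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid zero scale for constant weights
    zero_point = round(qmin - lo / scale)     # integer that represents w = lo
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map the integers back to approximate floating-point values."""
    return [scale * (qi - zero_point) for qi in q]

w = [-1.5, -0.3, 0.0, 0.7, 1.2]          # toy trained weights
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)
# Each reconstructed weight is within half a quantization step (s / 2) of the original.
```

Retaining accuracy, as Zebra claims to do automatically, is about choosing these scales per layer (or per channel) so that the half-step rounding error stays negligible relative to the network's sensitivity.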
What are the different types of applications that Zebra is best designed for?
Zebra is a very generic acceleration engine, so it can accelerate the computation for any application that needs to compute neural networks, with a primary focus on images and video because the computing needs are larger for this kind of data. We have requests from very different markets, but they are all similar when it comes to computing the neural networks. They all typically require classification, segmentation, super resolution, body positioning, etc.
As Zebra runs on top of FPGAs, any kind of boards can be used. Some have high throughput and are typically used in data centers. Others are more appropriate for use at the Edge or embedded. Our vision is that, if an FPGA can fit, users can use Zebra to accelerate their neural network computations right away. And if GPUs or CPUs are used, Zebra can replace them and reduce the costs of the AI infrastructure. Most of the companies we talk to are having similar issues: they could deploy more AI-based applications, but the cost is limiting them.
For a company that wishes to use Zebra, what is the process?
Simply let us know at firstname.lastname@example.org and we’ll get you started.
Is there anything else that you would like to share about Mipsology?
We are very encouraged by the feedback we get from the AI community for our Zebra solution. Specifically, we are told that this is probably the best accelerator on the market. After only a few months, we continue to add to a growing ecosystem of interested partners including Xilinx, Dell, Western Digital, Avnet, TUL and Advantech, to name a few.
I really enjoyed learning about this groundbreaking technology. Readers who wish to learn more should visit Mipsology.
New Study Suggests Robots in the Workforce Increase Income Inequality, with Effects Depending Greatly on Region
There have been many predictions about the future of work with artificial intelligence and automation, ranging from massive unemployment to the creation of many new jobs due to the technology. Now, a new study co-authored by an MIT professor has been released, providing some more insight into the replacement of workers by robots.
The paper is titled “Robots and Jobs: Evidence from U.S. Labor Markets” and was authored by MIT economist Daron Acemoglu and Pascual Restrepo Ph.D. ‘16, who is an assistant professor of economics at Boston University. It can be found in the Journal of Political Economy.
One of the findings of the study is that the impact of robots, specifically in the United States, will greatly depend on the industry and region. It also found that the technology can dramatically increase income inequality.
According to Acemoglu, “We find fairly major negative employment effects.” However, Acemoglu also said that the impact could be overstated.
The study found that between 1990 and 2007, the addition of one robot per 1,000 workers reduced the national employment-to-population ratio by approximately 0.3 percent. It also found that this number differs across areas of the U.S., with some areas far more affected than others.
In other words, an average of 3.3 workers nationally were replaced for each additional robot added in manufacturing.
Another key finding of the study was that during the same time period, wages were lowered by about 0.4 percent due to the increased use of robots in the workplace.
“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.
Data Used in the Study
The study was conducted with data on 19 industries compiled by the International Federation of Robotics (IFR), an industry group based in Frankfurt that gathers detailed data on robot deployments around the world. This was combined with U.S. data on population, employment, business activity, and wages, taken from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics.
One of the other methods used in the study was to compare robot deployment in the U.S. to other countries, and the researchers found that the U.S. is behind Europe in this regard. Compared to Europe’s 1.6 new robots introduced per 1,000 workers between 1993 and 2007, U.S. firms introduced only one new robot per 1,000 workers.
“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.
Hardest Hit Areas in the U.S.
By analyzing 722 commuting zones in the continental U.S. and the impact of robots on each, the study found dramatic differences in the usage of robots by geographic location.
One of the industries most affected by this technology is the automobile industry, and some of its major hubs, including Detroit, Lansing, and Saginaw, are among the hardest-hit areas.
“Different industries have different footprints in different places in the U.S.,” Acemoglu says. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”
Each robot replaces about 6.6 jobs locally in the commuting zone where it is put into the workforce. One of the more interesting findings of the study is that whenever robots are added in manufacturing, other industries and areas around the country benefit, through effects like a lower cost of goods. Those offsetting gains are why the study concludes that a net total of 3.3 jobs are replaced per robot added across the entire U.S.
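The local and national figures can be reconciled with simple per-robot accounting. The "jobs gained elsewhere" value below is back-calculated from the article's two numbers, not a figure reported directly in the paper.

```python
# Per-robot job accounting implied by the figures above (illustrative only).
jobs_lost_locally = 6.6          # in the commuting zone where the robot is added
net_jobs_lost_nationally = 3.3   # net effect across the entire U.S.

# The gap between the two must be jobs created in other regions and
# industries, e.g., through cheaper goods.
jobs_gained_elsewhere = jobs_lost_locally - net_jobs_lost_nationally
```

On these numbers, roughly half of the local displacement is offset by gains elsewhere in the economy.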
The researchers also found that income inequality is directly affected by the introduction of robots. This is largely due to the fact that in the areas where many of these jobs are replaced, there is a lack of other good employment opportunities.
“There are major distributional implications,” Acemoglu says. “The burden falls on the low-skill and especially middle-skill workers. That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”
“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu continues. “But it does imply that automation is a real force to be grappled with.”