
Manufacturing

Intel’s New Neuromorphic Chips are 1,000 Times Faster Than Normal CPUs


Intel’s new system, codenamed Pohoiki Beach, will be shown at the Consumer Electronics Show (CES) in Las Vegas. The device is built from 64 Loihi research chips, and the goal is for it to approach the human brain’s learning ability and energy efficiency. Its neuromorphic chips implement a simplified version of the way neurons and synapses function in the brain.

Rich Uhlig, managing director of Intel Labs, commented on the new technology:

“We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems.” 

The new neuromorphic AI chip can perform data-crunching tasks up to 1,000 times faster than conventional processors such as CPUs and GPUs, while using far less power.

Basing hardware on brain-like neurons is not entirely new. Many AI algorithms already simulate neural networks in software, using parallel processing to recognize objects in images and words in speech. Neuromorphic chips instead build these neural networks directly into silicon. While they are less flexible and less powerful than the best general-purpose chips, they excel when specialized for specific tasks; Intel says the new AI chip is up to 10,000 times more efficient than general-purpose processors. Because they are so energy efficient, the technology is well suited to mobile devices, vehicles, industrial equipment, cybersecurity, and smart homes. AI researchers have already begun using the system for applications such as prosthetic limbs that adapt better to uneven ground and digital maps for self-driving cars.
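The article does not detail Loihi’s neuron model, but the general idea of simulating spiking neurons can be illustrated with a minimal leaky integrate-and-fire sketch in Python; this is a generic textbook model with made-up parameters, not Intel’s implementation.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: injected current per timestep (arbitrary, illustrative units).
    Returns the membrane-potential trace and the timesteps at which spikes fired.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks back toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:      # threshold crossed: the neuron emits a spike
            spikes.append(t)
            v = v_reset           # and its potential resets
        trace.append(v)
    return np.array(trace), spikes

# A constant drive above threshold produces regular, periodic spiking.
trace, spikes = simulate_lif(np.full(300, 1.5))
print(f"{len(spikes)} spikes, first at timestep {spikes[0]}")
```

Networks of such neurons communicate through sparse spikes rather than dense matrix multiplications, which is where much of the claimed energy saving comes from.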

Chris Eliasmith, co-CEO of Applied Brain Research and professor at the University of Waterloo, is one of several researchers using the new technology.

“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT interface hardware…Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time,” Chris Eliasmith said. 

Konstantinos Michmizos is a professor at Rutgers University, and his lab’s work on simultaneous localization and mapping (SLAM) will be presented at the International Conference on Intelligent Robots and Systems (IROS) in November.

“Loihi allowed us to realize a spiking neural network that imitates the brain’s underlying neural representations and behavior. The SLAM solution emerged as a property of the network’s structure. We benchmarked the Loihi-run network and found it to be equally accurate while consuming 100 times less energy than a widely used CPU-run SLAM method for mobile robots,” he said. 

At present, Pohoiki Beach is an 8-million-neuron system. Uhlig expects the company to build a system capable of simulating 100 million neurons by the end of 2019. Researchers will be able to apply the technology to a wide range of problems, such as improving robotic arms, and these developments are paving the way toward the commercialization of neuromorphic technology.

According to the company, “Later this year, Intel will introduce an even larger Loihi system named Pohoiki Springs, which will build on the Pohoiki Beach architecture to deliver an unprecedented level of performance and efficiency for scaled-up neuromorphic workloads.” 

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Interviews

Sanchit Mullick, Assoc. Vice President for AI & Automation at Infosys – Interview Series



Sanchit Mullick is associate vice president and global head of Sales for AI and Automation Services at Infosys. In this role, he leads worldwide sales, marketing and alliances for AI and Automation Services and partners with customers to help them chart their roadmap across the automation spectrum, leveraging everything from robotic automation to cognitive services. Mullick has worked across the U.S., Australia, the UK and India, having played roles spanning sales, consulting and delivery.

Infosys enables enterprise companies to integrate robotic process automation (RPA). What are some of the reasons why companies should consider these options?

RPA enables businesses to unlock operational efficiencies in the form of capacity release, cost avoidance through flattening the work curve, higher quality through repeated robotic actions, improved employee experience through removal of mundane, boring tasks, and improved customer experience through consistent behavior and better average handling time.

In the early stages of adoption, RPA can play a pivotal role in uplifting organization productivity. However, as adoption spreads across the enterprise and with low code / no code platforms driving citizen development in some enterprises, we see RPA as an effective tool to uplift individual productivity.

 

What are some examples of RPA tasks Infosys can assist with?

Infosys can assist any business across any industry that has repeatable, rule-based routine tasks being performed by humans. Most RPA adoption programs start in enterprise functions such as Finance, Accounting, HR, Procurement amongst others. This is because these functions lend themselves to cross-industry portability of use cases and automation ideas. Increasingly, however, there are instances of RPA being used to drive efficiency and effectiveness in core business processes. For example, we are working with a large banking client of ours to reimagine their Treasury operations using automation and AI.

An early limitation of RPA platforms was that they could only work off structured data as input. However, most RPA platforms have now evolved to the point where they can first convert semi-structured or unstructured data into structured data, thus increasing the pipeline of opportunities for automation and, in doing so, transitioning into Intelligent Process Automation.
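As a rough sketch of that structuring step, the snippet below pulls named fields out of a hypothetical invoice-style email using hand-written patterns; the field names and patterns are invented for illustration and are not an Infosys API.

```python
import re

# Hypothetical semi-structured input, e.g. the body of a supplier email.
raw_text = """
Invoice No: INV-2041
Vendor: Acme Industrial Supplies
Amount Due: USD 12,450.00
Due Date: 2020-03-15
"""

# Simple patterns that map free text onto named, structured fields a bot can act on.
patterns = {
    "invoice_id": r"Invoice No:\s*(\S+)",
    "vendor":     r"Vendor:\s*(.+)",
    "amount":     r"Amount Due:\s*([A-Z]{3} [\d,.]+)",
    "due_date":   r"Due Date:\s*(\d{4}-\d{2}-\d{2})",
}

record = {field: (m.group(1).strip() if (m := re.search(rx, raw_text)) else None)
          for field, rx in patterns.items()}

print(record)
# {'invoice_id': 'INV-2041', 'vendor': 'Acme Industrial Supplies',
#  'amount': 'USD 12,450.00', 'due_date': '2020-03-15'}
```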

 

Does Infosys also offer AI Automation training?

We have developed an always-on learning approach and have democratized learning through our internal training platform, Lex. We have created learning paths with different levels of certification that allow our employees to become “Automation Professionals” or “AI Professionals” or both. The content for these learning paths has been put together with inputs from our software product partners, as well as our academic partnerships.

We offer these training programs and extend this learning value chain to our clients as they seek to reskill / upskill their employees. We have also created micro capsules around specific RPA platforms which can be leveraged to drive quick enablement amongst business users, which in turn can facilitate effective ideation. This is extremely important in the automation and AI domain, because the first step to driving success in automation adoption is to identify the best use cases and in most cases (dare I say all) this comes through effective participation from business.

 

Infosys offers a ‘Technology Rethink,’ so that the fundamental changes that are needed in the application landscape and infrastructure can be assessed. Could you walk us through an example of how this would be applied?

Think of how the process is being done, how the process will change over time, and the way the process interacts with other units within the organization: these are the considerations we take before starting an RPA project. To rethink how we use technology ultimately helps us rethink how we do work. At Infosys, we strive to bring this philosophy with us every day, and apply it to how we collaborate with our clients. With the democratization of AI and RPA, organizations have the opportunity to take leaps and bounds with the frameworks we’ve developed to undertake their digital transformation initiatives.

A vast majority of traditional organizations are sitting on applications and infrastructure assets that are dated. Legacy Modernization is one of Infosys’ core offerings to help our clients renew and refresh their offerings to the market. Purely from an AI & Automation standpoint, our process discovery approaches encourage business teams to rethink how they should refresh their business processes according to changing market dynamics. For example, our AI-driven IT operations approach draws deep insights from the application and infrastructure landscape through analysis of service tickets, alerts and exceptions, helping develop AI-driven humanless service desks, self-healing, and preventative maintenance approaches that reduce manual effort across clients’ IT operations.

 

Infosys enables the creation of a smart workforce with the Infosys conversational AI solution. Could you share with us some details regarding the natural language processing that is used by this chatbot, and why it is superior to competing solutions?

Our internally developed Nia chatbot solution was designed to help all business units within Infosys apply conversational AI in their departments, and with this experience we are now able to serve our clients using the same solution. Features such as our FAQ extractor, utterance creator, and easy-to-use admin panel help with fast chatbot deployment.

Additionally, our ability to integrate with communication platforms such as Facebook, Skype and WhatsApp allows our clients to interface with their customers and vendors in a unified way. Combining RPA with conversational AI allows action bots to complete tasks and common requests. Active learning can be applied so that the bots get smarter over time with human intervention. We leverage open-source NLP algorithms as well as proprietary offerings from Google, Azure, AWS and Watson that we have integrated into the Nia chatbot. This allows us to continually improve based on progress made in the industry and also to leverage any investments already made by our clients.

 

Is a decision tree always used with this chatbot, or is this on a case-by-case basis?

The decision tree is a fundamental component that helps the chatbot understand the workflow requests of the users it interacts with. Other techniques can be used to enhance the bot’s accuracy and user interaction.
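As a rough illustration of such a decision-tree flow, here is a toy dialog tree in Python; the nodes, intents, and actions are invented for this example and are not part of the Nia chatbot.

```python
# Each node maps a user choice to either another node or a final action.
dialog_tree = {
    "start": {
        "prompt": "Do you need help with 'billing' or 'technical' issues?",
        "billing": "billing_node",
        "technical": "technical_node",
    },
    "billing_node": {
        "prompt": "Is this about a 'refund' or an 'invoice'?",
        "refund": "ACTION: open refund ticket",
        "invoice": "ACTION: resend latest invoice",
    },
    "technical_node": {
        "prompt": "Is the problem with 'login' or 'performance'?",
        "login": "ACTION: trigger password reset bot",
        "performance": "ACTION: escalate to support engineer",
    },
}

def run_dialog(choices):
    """Walk the tree for a scripted sequence of user choices."""
    node = "start"
    for choice in choices:
        print(dialog_tree[node]["prompt"])
        nxt = dialog_tree[node].get(choice, "start")
        if nxt.startswith("ACTION:"):
            return nxt          # a leaf hands the request to an action bot
        node = nxt
    return "no action reached"

print(run_dialog(["technical", "login"]))
# -> ACTION: trigger password reset bot
```

In practice, an intent classifier would map free-form user text onto these choices before the tree is walked, which is where the additional techniques mentioned above come in.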

 

What are some interesting new products or solutions that Infosys is working on?

First, there is Bot Factory, a repository of prebuilt ‘micro bots’ that users can stitch together for rapid RPA deployment.

Then there’s MiniChat, which has been developed on NLP algorithms for quick and accurate responses to the user. The solution needs only a fixed set of FAQs for initial training, and it uses the utterances in that training data to answer user queries. It reduces the human effort needed to resolve frequently occurring problems and issues, and MiniChat can be deployed on any webpage within two to three hours.
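A minimal sketch of the FAQ-matching idea such a bot relies on, using TF-IDF similarity over a tiny hand-written FAQ set, might look like the following; this is a generic scikit-learn approach, not the MiniChat implementation itself.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical FAQ set: each question is paired with its canned answer.
faq = [
    ("How do I reset my password?",      "Use the 'Forgot password' link on the login page."),
    ("Where can I download my invoice?", "Invoices are under Billing > Documents."),
    ("How do I contact support?",        "Email support@example.com or use the in-app chat."),
]

vectorizer = TfidfVectorizer().fit([q for q, _ in faq])
faq_matrix = vectorizer.transform([q for q, _ in faq])

def answer(user_query, threshold=0.2):
    """Return the answer of the closest FAQ, or a fallback below the threshold."""
    sims = cosine_similarity(vectorizer.transform([user_query]), faq_matrix)[0]
    best = sims.argmax()
    return faq[best][1] if sims[best] >= threshold else "Let me route you to a human agent."

print(answer("I forgot my password, what should I do?"))
# -> Use the 'Forgot password' link on the login page.
```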

Finally, there is the email workbench. Organizations across the globe deal with a myriad of unstructured emails from clients, whether for buying a product, requesting a change to their details, or submitting complaints and feedback. Infosys has developed an NLP-driven, model-based solution to extract usable information and automate an action based on the trigger.

These are only some examples of solutions that we constantly develop as part of every project that we execute and then make available to our clients worldwide to drive improved delivery.

 

Is there anything else that you would like to share about Infosys?

There has been an enormous amount of coverage around automation and AI. In all of this, it is important to remain focused on solving real-world problems. Infosys is addressing this by focusing on both problem finding and problem solving. We are driving this by creating an ecosystem that brings together consulting, technology and operations, and by treating the entire spectrum of tools as a continuum. From the very beginning we have housed automation and AI capabilities under a single umbrella, and by doing so we bring various technology interventions together to solve a business problem in the best way possible.

Our work has also been recognized by industry leading analysts. Infosys has been rated as a Leader in IDC’s Intelligent Automation Services MarketScape 2019, in our first ever rating in this space. This rating is based on a comprehensive and rigorous framework evaluating IA players on criteria such as strategy, delivery, marketing and client satisfaction. The study highlights the factors expected to be the most influential for success in the market, both in the short and the long term.

HFS Research recently published a viewpoint on why Enterprise AI implementation initiatives should have a business-first approach where HFS called out Infosys’ well-known consulting capabilities and business outcome-focused approaches, our approach to assuming the role of a trusted advisor while deploying AI into a client’s environment, as well as our ability to maintain an end-to-end journey view with a value-creation paradigm.

To learn more, visit Infosys.


Manufacturing

Artificial Intelligence Used to Prevent Icebergs from Disrupting Shipping


Experts at the University of Sheffield have developed a combination of control systems and artificial intelligence (AI) forecasting models to prevent icebergs from drifting into busy shipping regions. 

Through the use of a recently published control systems model, experts were able to predict the movement of icebergs. In 2020, between 479 and 1,015 icebergs are expected to drift into waters south of 48°N, an area with heavy shipping traffic between Europe and north-east North America. Last year, a total of 1,515 icebergs were observed in that same area.

The team relied on experimental artificial intelligence analysis to independently corroborate the predicted number of icebergs. They also identified a rapid early rise in the number of icebergs present in the area during the ice season, which runs from January to September.

The findings are supplied to the International Ice Patrol (IIP), which uses the information to determine the best use of resources for improving ice forecasts during the season. According to the seasonal forecast, ships in the north-west Atlantic will be less likely to encounter an iceberg than last year.

Icebergs cause serious problems and shipping risks in the north-west Atlantic. Records show that there have been collisions and sinkings dating back to the 17th century. The IIP was established in 1912 after the sinking of the Titanic, and its job is to observe sea ice and conditions in the north-west Atlantic and warn of potential dangers.

The risk of icebergs to shipping changes each year. One year can see no icebergs crossing the area, while another year can see over 1,000. This makes it difficult to predict, but in general, there has been a higher amount detected since the 1980s. 

2020 is the first year that artificial intelligence is being used to forecast the icebergs in the area, as well as the rate of change across the season.

The model was developed by a team led by Professor Grant Bigg at the University of Sheffield and was funded by insurance firm AXA XL’s Ocean Risk Scholarships Programme. It combines a control systems model with two machine learning tools.

Data related to the surface temperature of the Labrador Sea is analyzed, as well as variations in atmospheric pressure in the North Atlantic and the surface mass balance of the Greenland ice sheet.

The foundational control systems approach achieved 80 percent accuracy when tested against iceberg counts for the seasons between 1997 and 2016.

Some of Professor Bigg’s earlier research attributed the variation in the number of icebergs drifting into the region to variable calving rates from Greenland; however, regional climate and ocean currents are the biggest factors. Higher numbers of icebergs appear when sea surface temperatures are colder and northwesterly winds are stronger.
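The article does not describe the models themselves, but the general shape of such a seasonal forecast can be sketched as a simple regression from the inputs named above to an iceberg count. Every number in the example below is fabricated purely for illustration and does not reflect the Sheffield team’s data or code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative (fabricated) training rows: one per past ice season.
# Features: mean Labrador Sea surface temperature (°C),
#           a North Atlantic pressure index, Greenland surface mass balance (Gt).
X_train = np.array([
    [3.1, -0.5, 250.0],
    [2.4,  1.2, 180.0],
    [3.8, -1.1, 310.0],
    [2.0,  0.8, 150.0],
    [2.9,  0.1, 220.0],
])
# Target: icebergs observed south of 48°N that season (also fabricated).
y_train = np.array([600, 1200, 350, 1450, 800])

model = LinearRegression().fit(X_train, y_train)

# Predict a count for the coming season from hypothetical forecast inputs.
season_features = np.array([[2.7, 0.4, 200.0]])
print(f"Predicted iceberg count: {model.predict(season_features)[0]:.0f}")
```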

Grant Bigg is a Professor of Earth System Science at the University of Sheffield.

“We have issued seasonal ice forecasts to the IIP since 2018, but this year is the first time we have combined the original control system model with two artificial intelligence approaches to specific aspects of the forecast. The agreement in all three approaches gives us the confidence to release the forecast for low iceberg numbers publicly this year—but it is worth remembering that this is just a forecast of iceberg conditions, not a guarantee, and that collisions between ships and icebergs do occur even in low ice years.”

According to Mike Hicks of the International Ice Patrol,  “The availability of a reliable prediction is very important as we consider the balance between aerial and satellite reconnaissance methods.”

Dr. John Wardman is a Senior Science Specialist in the Science and Natural Perils team at AXA XL. 

“The impact of sea level rise on coastal exposure and a potential increase in Arctic shipping activity will require a greater number and diversity of risk transfer solutions through the use of re/insurance products and other ‘soft’ mitigation strategies. The insurance industry is keeping a keen eye on the Arctic, and this model is an important tool in helping the industry identify how or when the melting Greenland Ice Sheet will directly impact the market.”

 


Manufacturing

Cerebras Has the “World’s Fastest AI Computer”


According to the startup Cerebras Systems, its CS-1 is the world’s most powerful AI computer system. It is the latest attempt to build a best-in-class supercomputer, and it has been accepted into the U.S. federal government’s supercomputing program.

Rather than a single chip, the CS-1 is built around an entire silicon wafer, with hundreds of thousands of small cores spread across it. The wafer carries over 1.2 trillion transistors across those cores, far more than are found on a typical processor chip. Cerebras calls this wafer-sized processor the Wafer Scale Engine.

Cerebras’ first CS-1 was sent to the U.S. Department of Energy’s Argonne National Laboratory. The 400,000 cores will be used to work on extremely difficult AI computing problems like studying cancer drug interactions. The Argonne National Lab is one of the world’s top buyers of supercomputers. 

The CS-1

The CS-1 is programmable with the Cerebras Software Platform and can be used with existing infrastructure, according to the startup. The Wafer Scale Engine (WSE) has more silicon area than the biggest graphics processing unit, and its 400,000 Sparse Linear Algebra Compute (SLAC) cores are flexible, programmable, and optimized for neural networks.

The CS-1 has a copper-colored block, or cold plate, that conducts heat away from the giant chip. Pipes of cold water are responsible for cooling, and fans blow cold air to carry heat away from the pipes. 

According to many, the big breakthrough is the dashboard. Argonne has long worked on spreading a neural net over large numbers of individual chips, and the CS-1 makes this easier to program than other supercomputer machines such as Google’s Pod.

The Cerebras CS-1 is basically one giant, self-contained chip where the neural network can be placed. A program has been developed to optimize the way math operations of a neural network are spread across the WSE’s circuits. 
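That placement software is not described in detail, but the basic idea of giving each part of a network a share of the wafer roughly proportional to its compute cost can be sketched in a few lines; the layer names and costs below are hypothetical, and only the 400,000-core figure comes from the article.

```python
# Hypothetical per-layer compute costs (e.g., multiply-accumulates) for a small network.
layers = {
    "conv1": 2.0e8,
    "conv2": 6.0e8,
    "conv3": 9.0e8,
    "fc":    1.0e8,
}
TOTAL_CORES = 400_000   # core count quoted for the Wafer Scale Engine

total_cost = sum(layers.values())

# Give each layer a share of cores proportional to its share of the compute.
placement = {name: max(1, round(TOTAL_CORES * cost / total_cost))
             for name, cost in layers.items()}

for name, cores in placement.items():
    print(f"{name}: {cores} cores")
```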

According to Rick Stevens, Argonne’s associate laboratory director for computing, environment, and life sciences, “We have tools to do this but nothing turnkey the way the CS-1 is, [where] it’s all done automatically.”

Built From the Ground Up

According to Cerebras, it is the only startup to have built a dedicated AI system from the ground up. To achieve its performance, Cerebras optimized every aspect of the CS-1’s chip design, system design, and software. This allows the CS-1 to complete, in minutes, AI tasks that normally take months.

The supercomputer machine also greatly reduces training time, and single image classification can be completed in microseconds. 

In an interview with the technology website VentureBeat, Cerebras CEO Andrew Feldman said, “This is the largest square that you can cut out of a 300 millimeter wafer.” He continued, “Even though we have the largest and fastest chip, we know that an extraordinary processor is not necessarily sufficient to deliver extraordinary performance. If you want to deliver really fast performance, you need to build a system. And you can’t take a Ferrari engine and put it in a Volkswagen to get Ferrari performance. What you do is you move the bottlenecks if you want to get a 1,000 times performance gain.”

With the introduction of the CS-1 system, Cerebras has positioned itself as one of the leaders in the supercomputer industry. Its contribution will undoubtedly have a major impact on solving some of the world’s most pressing AI challenges, and these systems are drastically decreasing the time it takes to tackle many problems.
