
Startups

California Start-Up Cerebras Has Developed World’s Biggest Chip For AI


California start-up Cerebras has developed the world’s biggest computer chip, built to train AI systems. The chip is set to be unveiled after four years in development.

Contrary to the industry’s usual push toward ever-smaller chips, the new Cerebras chip has a surface area bigger than an iPad. It is more than 80 times larger than any competing chip, and it draws a large amount of electricity.

The development reflects the astounding amount of computing power now being poured into AI, including the $1bn investment in OpenAI that Microsoft announced last month. OpenAI is trying to develop an artificial general intelligence (AGI), a leap forward that would change much of what we know.

Cerebras is unique in this field because of the enormous size of its chip. Other companies work relentlessly to make their chips as small as possible, and most advanced systems today are assembled from many of those small chips. According to Patrick Moorhead, a US chip analyst, Cerebras has essentially put an entire computing cluster on a single chip.

Cerebras joins the likes of Intel, Habana Labs, and the UK start-up Graphcore, all of which are building a new generation of specialized AI chips. The field is approaching its biggest milestone yet, as these companies begin delivering their first chips to customers by the end of the year. Among them, Cerebras is aiming to become the go-to supplier for the massive computing tasks run by the largest internet companies.

Many more companies and start-ups are involved in this space, including Graphcore, Wave Computing, and the China-based start-up Cambricon. They are all looking to develop specialized AI chips for inference: taking a trained AI system and running it in real-world scenarios.

It normally takes a long time for development to finish and actual products to ship to people and companies. According to Linley Group, a US chip research firm, there are many time-consuming technical issues to resolve. Despite the long development cycles, interest in these companies remains high. Cerebras has raised over $200m in venture capital and, as of late last year, was valued at about $1.6bn. Global revenue from deep learning chipsets is projected to grow substantially.

These companies are focusing on this type of processor because of the huge amounts of data needed to train neural networks, the networks behind the deep-learning systems responsible for tasks such as image recognition.

The Cerebras chip is a single device built from a 300mm-diameter circular wafer, the largest silicon disc that current chip factories can produce. The norm is to cut these wafers into many individual chips rather than keep them as one giant one; earlier attempts at wafer-scale chips ran into problems laying circuitry across something so large. Cerebras got past this by linking the different sectors on the wafer so that they can communicate with each other and behave as one big processor.

Looking ahead, Cerebras will link cores in a matrix pattern so they can communicate with each other, aiming to connect 400,000 cores while keeping all of the processing on a single chip.

It will be exciting to see these developments move forward with Cerebras and other companies continuing to advance our AI systems. 

 


Interviews

Mike Lahiff, CEO at ZeroEyes – Interview Series



Mike is the CEO of ZeroEyes, an AI-powered security company. Led by former Navy SEALs, ZeroEyes offers software that monitors camera systems to detect weapons. The system notifies authorities of possible active shooters and reduces response time, with the goal of keeping schools and other public spaces safe.

Can you explain what ZeroEyes is, and how implementing this system can save lives?

ZeroEyes is an AI weapons detection platform that helps identify threats at first sight. Founded by a team of Navy SEALs and military veterans dedicated to ending mass shootings, our platform integrates with an organization’s existing IP security cameras as one component of its overall security process, providing security personnel and first responders with the real-time information needed to keep people safe. ZeroEyes focuses only on the essential information needed to stop a threat, closing the critical seconds between when a gun is first spotted and when it is fired in order to save lives.

 

Can you discuss the process for integrating ZeroEyes into an existing video camera infrastructure?

ZeroEyes’ AI weapons detection platform is one component of an organization’s multi-tiered security approach. Our software integrates with an organization’s existing camera systems and video analytics to detect weapons in real time. If ZeroEyes detects a gun, an alert with the image of the weapon goes to the ZeroEyes monitoring team. Once positively identified, an alert is sent to a local emergency dispatch (such as a 911 call center), onsite security staff, police and school administrators (via mobile and desktop). This process takes three to five seconds and bypasses the traditional dispatch process.

ZeroEyes’ software uses AI and computer vision, integrating with existing 3D satellite maps of a building so that as a visible weapon passes a camera, the map lights up. This allows first responders to know the precise location of a threat. By seeing exactly where a shooter(s) is in real time, security personnel can lock doors, move people to safety and enact other aspects of their security process, while first responders can go towards the shooter much faster with the knowledge of how many and what kinds of weapons the person has.
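To make the flow concrete, here is a minimal, hypothetical sketch in Python of the detect-verify-dispatch pattern described above; the class, field and channel names are illustrative assumptions, not ZeroEyes’ actual API.

```python
# A minimal, hypothetical sketch of the detect-verify-dispatch flow described
# above; class and channel names are illustrative, not ZeroEyes' actual API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class WeaponAlert:
    camera_id: str
    camera_location: str        # where on the building map the camera sits
    weapon_label: str           # e.g. "pistol"
    frame_path: str             # image with a bounding box around the detection
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def handle_detection(alert, monitoring_team, dispatch_channels):
    """Route a raw detection through human verification, then fan the alert out."""
    if not monitoring_team.verify(alert):      # human-in-the-loop check
        return False                           # false positive: handled separately
    for channel in dispatch_channels:          # 911 dispatch, onsite security, police, admins
        channel.notify(alert)
    return True
```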

 

How much of a weapon needs to be visible for the system to correctly identify it as a weapon?

This can be dependent on multiple variables such as type of camera, height of camera, angle, lens, field of view, lighting, distance, and type of weapon. If a human eye can detect the gun on a camera feed, our system will detect the same gun.

 

How much of an issue are false positives and how is this minimized?

We are always looking to minimize false positives and are constantly improving our deep learning models based on data collected. In customer installations, we incorporate time upfront to collect data and custom tune the parameters for each camera, which allows us to more effectively filter out false positives. If a false positive happens, an alert gets sent to our team and we vet the threat in real-time. We then respond accordingly and let the customer know that it isn’t a serious threat.

 

Your initial focus was on installing this in schools. What are some other markets that ZeroEyes is targeting?

We sell to a broad list of decision makers, including school resource officers, school district administration, corporate security directors, chief security officers and chief risk officers. Our technology can be used in education (including K-12 schools, universities and training facilities; it is already in use at Rancocas Valley Regional High School in New Jersey and South Pittsburg High School in Tennessee), in commercial settings (including office buildings, malls and stadiums), and in military/government installations (force protection). We partner closely with both our customers and local first responders to ensure that they have the additional layer of security to identify and stop threats.

 

Can you discuss the ZeroEyes app and how threat notifications work?

If a true weapon is detected, an alert is sent to ZeroEyes’ security monitoring team. Once positively identified, it is then sent to a local emergency dispatch (such as a 911 call center), onsite security staff, police and school administrators. This process takes three to five seconds and bypasses the traditional dispatch process. We include details such as the location of the camera, bounding box identifying the detected object and detection label.

The image lets first responders know the type of weapon (e.g. pistol or machine gun), which dictates response tactics and indicates how much damage a shooter could cause. It also tells us the total number of shooters and weapons, so those responding to the alert are properly informed of the situation.

 

What type of relationship does ZeroEyes have with different law enforcement agencies, and how are they set up to receive dispatch alerts?

ZeroEyes works with local law enforcement to help decrease critical response time to serious threats to public safety like active shooter situations. If a threat is detected and verified, the alert is sent to a local emergency dispatch.

ZeroEyes provides real-time information to help first responders understand the situation at hand, allows security to quickly enact security protocols, and dramatically reduces response time which can mean the difference in saving lives.

 

Facial recognition capabilities are built into the system, but facial redaction is used to protect patrons’ privacy. Can you discuss these capabilities? For example, is ZeroEyes able to identify specific individuals such as teachers and principals in a school?

We do not use facial recognition; we focus solely on weapons detection. Our technology sits on top of existing IP security cameras, which could also have facial recognition software installed by the organization. We pursued weapons detection because we want to reduce mass shootings and active shooter threats, and security personnel should know where and when weapons are present regardless of who is carrying them.

 

Is there anything else that you would like to share about ZeroEyes?

Our mission is to detect a threat before it happens. We firmly believe that if this happens, we can reduce the number of mass shootings and save lives.

Thank you for the interview. Readers who wish to learn more should visit ZeroEyes.


Commerce

Eugene Terekhov, CEO of AiBUY – Interview Series



Eugene Terekhov is the CEO of AiBUY.

In one sentence, can you tell us what service AiBUY provides?

AiBUY is a content commerce platform that allows online retailers, advertisers and entertainers to sell products natively within their videos or images.

You recently finished an accelerator with Salesforce. Can you tell us about your experience and one invaluable thing that you learned during the accelerator?

Salesforce is a major player in the customer relationship and online marketing software industry, and the Salesforce Accelerate program that we are a part of is designed to fast-track unique solutions through the integration and partner process. We are excited that Salesforce identified AiBUY for this program and believe that content commerce at scale, specifically video commerce, has huge potential to disrupt the eCommerce space.

Can you talk about some of the technologies used by AiBUY?

AiBUY has a strong belief in using the right technologies for the job. This means we have a robust collection of technologies in use today, including but not limited to Kotlin, PHP, Python, Node.js, TensorFlow, Keras, and a variety of prebuilt and homegrown neural networks. We are proud to be working on the bleeding edge, pushing the limits of current technology.

Which platforms is AiBUY integrated with?

Currently AiBUY is integrated with eCommerce platforms such as Salesforce (formerly Demandware), Magento Commerce by Adobe, and Shopify. We also are finalizing partnership discussions with 5 other enterprise ecommerce platforms along with other innovative technology companies within the social media, visual marketing, data visualization and customer personalization industries.

In addition, we have a very exciting partnership we’re working on right now through our AiBUY Labs program. This is our innovation center where we test out new and innovative ideas that have potential to radically transform content commerce. Look out for more info on that to come – it’s really going to be a game changer for an industry that is expected to reach $80 Billion by 2022.

You’ll be launching a new product soon called BUYLiVE, which is currently supported for YouTube Live. Can you tell the readers a bit more about how BUYLiVE works and which future platforms you see it integrating well with?

Yes, we are very excited about BUYLiVE and the future of AiBUY’s proprietary technology. BUYLiVE was a natural extension of our current shoppable video technology for live events. In today’s culture, consumer attention spans are short, and they expect so much more from brands and experiences. We are merging entertainment with an opportunity for live events to provide deeper engagement and increase revenue. Whether it’s a concert, sporting event or a new product release, companies will be able to activate a shopping experience or interaction without the consumer ever having to leave the content. Consumers can buy merchandise or future event tickets all while watching the event – imagine the impact that will have on consumer purchasing behavior!

I also want to touch on interactions and what I mean by that. Of course we can connect consumers for selling opportunities, but we can also integrate with customer data tools that allow for consumer personalization. Imagine watching a sporting event and you’re a super fan of an athlete or entertainer. We can distribute data to that super fan to deepen the engagement. Additionally, we may notice that a fan loves a particular team or a specific athlete and have the ability to promote products based on that knowledge – completing that customer journey like never before over video.


Aside from shoppable videos and live streams, how else can AiBUY be applied?

As I mentioned before, AiBUY has a division called AiBUY Labs. It’s our innovation stream, where we test out new and innovative ideas for potential solutions that radically transform enterprise clients’ content commerce strategies with startup speed and enterprise scale. We also work strategically with the largest companies to leverage our technology for a much broader corporate strategy. The future of commerce is content commerce, and AiBUY is powering it.

AiBUY uses an existing product catalogue to recognize items on the screen, can you talk about how the product catalogue is updated and expanded, and how comprehensive the catalogue is currently? 

We can’t go into too much detail because it is proprietary, but we have built our platform to work very efficiently with product catalogues of any size. By integrating directly with the retailer’s ecommerce platform, we import and synchronize the product information and ensure data like inventory levels and product variance selections are maintained.

It does not matter if you are a smaller retailer with a few hundred SKUs or a large retailer with millions. Our system will process the product images and store the vectorized images to be used during video and image analysis to identify product matches.
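As a rough illustration of what such a pipeline can look like, here is a minimal sketch that embeds catalogue images with a generic pretrained Keras backbone and matches frame crops by cosine similarity; the model choice, image sizes and threshold are assumptions for illustration, and AiBUY’s actual pipeline is proprietary.

```python
# Minimal sketch of catalogue-style visual matching, assuming a generic
# pretrained Keras backbone as the feature extractor; the model, sizes and
# threshold are illustrative, not AiBUY's proprietary pipeline.
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))

def embed(images):
    """Turn a batch of RGB images (N, 224, 224, 3) into L2-normalised vectors."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.asarray(images, dtype="float32"))
    vecs = backbone.predict(x, verbose=0)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# Offline: vectorise the product catalogue once (placeholder images here).
catalogue_images = np.random.rand(100, 224, 224, 3) * 255.0
catalogue_vectors = embed(catalogue_images)            # shape (num_skus, D)

def match(crop, threshold=0.8):
    """Embed one frame crop and return the closest SKU index, or None."""
    query = embed(crop[None, ...])[0]
    scores = catalogue_vectors @ query                  # cosine similarity
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```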

What additional services do you envision AiBUY offering in the future?

The future really is very interesting as our patent coverage has a broad range of opportunities. Our interactive and shoppable media tech covers all sorts of mediums like mobile, web, OTT, AVOD, TVOD, and SVOD and potentially other future media types that might not even be available today. We’re looking forward to what is to come.

Do you have anything else you would like to share with the readers?         

We’re looking forward to defining the future of Commerce.

You can find out more about AiBUY at its website.


Autonomous Vehicles

Vincent Scesa – Autonomous Vehicle Program Manager, EasyMile – Interview Series



Vincent Scesa is the Autonomous Vehicle Program Manager at EasyMile.

EasyMile is a pioneer in driverless technology and smart mobility solutions. The fast-growing start-up develops software to automate transportation platforms without the need for dedicated infrastructure. EasyMile’s cutting-edge technology is revolutionizing passenger and goods transportation, offering completely new mobility options. It has already deployed over 210 driverless projects with more than 320,000 people transported over 250,000 km.

What was it that initially attracted you to AI and robotics?

I’ve always been passionate about all forms of intelligence. I was always curious as a child and still am. My father is an engineer and my mother a psychosociologist, which got me interested. I realised that human intelligence is still much more advanced than artificial intelligence, so managing the people who create the AI is also a great challenge. It’s the combination that gets me: managing the people and teams who create and manage the AI.

For as long as I can remember, I have always been fascinated by intelligence, whatever its form: computational, gestural, mechanical, emotional, collective, strategic, human or artificial. Robotics is a field that brings together so much of this. It attracted me at a very early age and I quickly oriented my path in that direction. I love that these machines have computing capacities that enable them to analyse complex situations and, combined with adapted mechanical assemblies, can provide answers and act by showing impressive behaviour.

This is what led me to couple my engineering background with a PhD in Robotics and Artificial Intelligence. I was able to work on decision making in complex articulated machines (bipedal robots), inspired by algorithms reproducing the processes found in the brains of living beings.

I then wanted to continue my professional career in this field, trying to find concrete applications of these technologies that would solve problems. But at the time there were still relatively few applications for robotics, so I threw myself into setting up companies three times. The first focused on AI for video games and robots, the second on robotics for monitoring industrial sites, and the last on cleaning robots for professionals.

 

You have been working with autonomous vehicles since 2015, what drew you to the space?

The more I learned about robots and machines, the more what interested me even beyond AI and robotics… was human intelligence! It is still far beyond anything we can imagine doing with computers, and I am fascinated by the combination.

So, I was drawn to working with engineers and PhDs who are experts in robotics and AI.

This is what pushed me to join EasyMile in 2015, to manage human experts in the creation of artificial intelligence and robots in order to create, organize and monitor autonomous robotics and vehicle projects that solve everyday problems.

 

You’re the Project Manager at EasyMile, what does your average day look like?

My days are usually pretty busy 😉

My work is based on four different aspects:

  • Management of my team (the technical team who are the interface with our vehicle manufacturer partners): discussions on load plans, daily management, resolution of technical situations, facilitation of the work, training new staff, reviewing and optimizing our processes.
  • Relationship management with other teams: we interact with all the other teams from safety to fleet management through navigation and perception and AI algorithms. I work hand in hand with other managers to ensure that exchanges between teams are as efficient and optimal as possible. This is so that we can incorporate solutions into the vehicles that are best able to respond to each platform, while maintaining overall consistency.
  • Responsibility for the program to create new platforms (monitoring and reporting): I am in charge of making sure with the project managers that the various projects we have are in line with schedules and budgets and that our partners are satisfied with our work. I then report back to management on progress and status, and I make sure to present any blockages so that strategic decisions can be taken to resolve them.
  • Pre-sales for future projects: more recently, I regularly present our solutions to future partners, imagining new opportunities and building the project plans that allow us to help them meet their needs.

 

Can you discuss the sensor set and the computer vision technology that is used in EasyMile autonomous vehicles?

Taking a conservative approach to the sensor suite, EasyMile uses devices from a number of market-leading suppliers but is not committed to any particular technology or supplier, and regularly implements updates every four to six months, which can involve sensor changes. The current set of LiDARs integrated into our EZ10 autonomous passenger shuttle, for example, comes from Velodyne, Valeo and SICK; indeed, the entire sensor set and the computer suite are new in the vehicle. The purpose of this change was to be able to see further and in greater detail.

For example, the move to our next generation of vehicles included a change in the model of Velodyne LiDAR from the Puck VLP-16 to the Ultra Puck VLP-32, and its position shifted from just below the headlights to the roof, expanding the envelope of protection that it provides. The Ultra Puck offers a 120 m range, a 360-degree horizontal and 40-degree vertical field of view, a 0.33-degree vertical resolution and advanced features designed to minimise false positives. Another addition is a set of Scala LiDARs from Valeo mounted on the corners of the vehicle and at the front, down at valance level.

Our new sensor suite also features stereo instead of mono cameras, adding passive 3D depth perception through binocular vision. The company has also integrated IMUs from a variety of sources including Continental and XSens.

We are testing several sensor sets and the market is evolving fast. All our vehicles are based on the same kind of sensors, but depending on the size, the dynamics of the platform and the use cases addressed, we make some small adjustments.

For now we are using what we think gives us the best information on every part of our environment, both close to the vehicle and at longer ranges.

Complementing the LiDARs, the stereo cameras provide input to EasyMile’s deep learning effort, which is centred at its Singapore office; this separate team adds another redundancy layer in terms of software development. EasyMile’s own programmers write the algorithms that interpret sensor data and apply deep learning techniques to them.

 

The EasyMile autonomous vehicles are equipped with cybersecurity software, how important of an issue is cybersecurity?

Thinking of the vehicles we work with as equivalent to small, mobile enterprise computing systems, or even as data centres on wheels, makes the importance of cyber security obvious. With the main vehicle computer running the autonomous systems, the sensor suite, the communication and navigation systems, for example, there can be 20 or so computing instances connected over an Ethernet bus. Then there are the automotive components such as batteries, inverters and motors, controllers to open doors etc, all of which run software, and each vehicle is connected to the cloud. This makes for a potential “attack surface” that must be protected.

There are many different components on different networks – Ethernet, CAN bus etc – and some off-the-shelf components come with wifi capability.

You have to make sure that the network traffic looks nice, with no strange messages. For example, if your LiDAR is supposed to send messages at a frequency of 50 Hz or so, but you start to receive messages at 100 Hz, there is something fishy going on.
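As a simple illustration of that kind of check, here is a hedged sketch of a message-rate monitor; the thresholds, window size and class names are assumptions for illustration, not EasyMile’s actual implementation.

```python
# Minimal sketch of the kind of message-rate check described above, assuming
# sensor traffic is observed as timestamped events; names and thresholds are
# illustrative, not EasyMile's actual implementation.
from collections import deque
import time

class RateMonitor:
    """Flags a sensor whose message frequency drifts far from its expected rate."""

    def __init__(self, expected_hz, tolerance=0.5, window=100):
        self.expected_hz = expected_hz
        self.tolerance = tolerance          # allowed fractional deviation
        self.timestamps = deque(maxlen=window)

    def observe(self, t=None):
        """Record one message; return True if the observed rate is suspicious."""
        self.timestamps.append(time.monotonic() if t is None else t)
        if len(self.timestamps) < 2:
            return False
        span = self.timestamps[-1] - self.timestamps[0]
        observed_hz = (len(self.timestamps) - 1) / span if span > 0 else float("inf")
        return abs(observed_hz - self.expected_hz) > self.tolerance * self.expected_hz

# A LiDAR expected at ~50 Hz; a stream arriving at 100 Hz would trip this check.
lidar_monitor = RateMonitor(expected_hz=50.0)
```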

What’s more, sensors are not allowed to talk to each other; they are only permitted to communicate with the main computer.

Only the central computer has the right to speak to everyone, which means that you have to secure this device very thoroughly. It is the brain of the vehicle and is where we put most security. It’s what we call a minimised attack surface because we close every possible service that is not useful. We deactivate USB ports and wifi routers, for example. We make sure it is very, very hard for someone to connect to our computer.

Passwords and penetration tests

With many computer-run devices on every vehicle and a growing fleet of vehicles that have to be maintained by engineers and technicians, there are a lot of passwords that have to be managed securely and applied in conjunction with other means of authenticating people who need physical access to vehicles deployed around the world.

Security is the main reason why EasyMile does not yet install software upgrades to its vehicles over the internet, for the moment sending one of their technicians with a secure computer to load the new software at the operator’s facility instead.

It’s like upgrading your brain. It has to be very, very secure, and we prefer to approach it step by step. First you have to prove that the code you want to run on your vehicle is the same code that was written by EasyMile developers, then you have to prove that this code was compiled by EasyMile on our servers and so on, so you have electronic signatures and certificates.  You have to have this layer of assurance just to make sure that when you inject a new system you are 100% sure that it is the right system.

To make sure that all these measures actually result in a secure vehicle and ecosystem, EasyMile regularly employs white hat hackers to conduct penetration tests.

Environment hacks

In addition to the familiar cyber threats and countermeasures, there are new ones emerging that target services and sensors. The availability of GPS jamming and spoofing devices is well known, and attacks on the system are increasing, but hackers are also targeting sensors such as cameras and, through them, the AI and machine learning algorithms by subtly altering some aspects of the environment.

Last year, for example, a team from McAfee Advanced Threat Research managed to trick two Teslas equipped with Mobileye camera systems by altering a speed limit sign using electrical tape so that it appeared to read 85 mph instead of 35 mph. Tested in an off-road environment and with Traffic Aware Cruise Control engaged, both cars accelerated automatically in response to the sign before the drivers applied the brakes.

Some white hat hacking research teams are also looking into how to attack LiDARs.

The answer to this type of threat is never to rely on just a single sensor or subsystem for safety critical functions. EasyMile takes this further by mixing LiDARs from different suppliers. With three different brands on the vehicle, an attacker would have to be able to hack different LiDARs that don’t work in the same way; they use different wavelengths, for example. So redundancy is part of the security.

 

Can you also discuss the monitoring and blackbox technology?

Originally, the satellite navigation portion of our navigation and localisation system used only GPS, but the latest iteration is a multi-GNSS system that processes GLONASS as well and will soon add Galileo and BeiDou. The system’s precision is enhanced by Real Time Kinematic (RTK) processing. The GNSS position is also used in conjunction with information from the 3G or 4G network, which we use heavily for corrections.

The overall navigation and positioning system is accurate to a few cms, enabled by the combination of GNSS, the LiDARs, cameras, inertial system and odometry, which also provide redundancy and graceful degradation in case the system loses the GNSS signal.
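As a toy illustration of graceful degradation, the sketch below simply falls back to the highest-priority localisation source still available; real fusion of GNSS/RTK, LiDAR, camera, inertial and odometry data is far more involved, and the source names and coordinates here are assumptions.

```python
# A minimal, hypothetical sketch of graceful degradation across localisation
# sources; the real system fuses GNSS/RTK, LiDAR, cameras, IMU and odometry,
# while this toy simply falls back to the best source still available.
def fused_position(sources, priority=("gnss_rtk", "lidar_slam", "visual", "odometry")):
    """Return the highest-priority position estimate that is currently available."""
    for name in priority:
        estimate = sources.get(name)
        if estimate is not None:
            return name, estimate
    raise RuntimeError("no localisation source available")

# If the GNSS signal drops out, the call falls back to LiDAR-based localisation.
print(fused_position({
    "gnss_rtk": None,                    # signal lost
    "lidar_slam": (43.6045, 1.4440),     # still available (example coordinates)
    "visual": None,
    "odometry": (43.6044, 1.4441),
}))
```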

Our vehicles communicate with EasyMile’s cloud-based supervision centre via the 3G/4G network. With a view to implementing 5G, the company is working with a number of providers around the world including SFR in France, Verizon in the US, Ericsson in Scandinavia and Saudi Telecom. In the short to medium term, 5G promises faster feedback from deployed EZ10 fleets, boosting both machine learning and R&D, the ability to update vehicles faster with large data sets, enhanced video surveillance through simultaneous streaming of multiple high-quality video feeds, and infotainment for passengers.

To communicate with the road infrastructure, our vehicles can exploit technologies provided by V2X suppliers, through an onboard unit that talks to roadside units, providing information on the state of traffic lights, for example.

If communications with the supervision centre are lost for longer than 3 to 5 seconds, the vehicle will continue to the next planned stop and wait for communication with the EasyMile server to be restored so it can receive its next set of instructions.
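A hedged sketch of that fallback logic is below; the timeout value, method names and overall structure are illustrative assumptions, not EasyMile’s actual software.

```python
# Toy sketch of the comms-loss fallback described above; timings, names and
# structure are illustrative, not EasyMile's actual implementation.
import time

COMMS_TIMEOUT_S = 5.0   # tolerate roughly 3-5 s without the supervision centre

class MissionController:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.holding_at_stop = False

    def on_supervision_message(self, instructions=None):
        """Any message from the supervision centre refreshes the link."""
        self.last_heartbeat = time.monotonic()
        self.holding_at_stop = False
        if instructions:
            self.execute(instructions)

    def tick(self):
        """Called periodically by the vehicle's main loop."""
        if time.monotonic() - self.last_heartbeat > COMMS_TIMEOUT_S and not self.holding_at_stop:
            # Link lost: continue to the next planned stop, then wait for the
            # server connection to come back before taking new instructions.
            self.drive_to_next_planned_stop()
            self.holding_at_stop = True

    def execute(self, instructions): ...
    def drive_to_next_planned_stop(self): ...
```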

 

EasyMile has multiple autonomous vehicles on the road. Could you give us some details on these?

Fully fabless, EasyMile licenses its software technology and sells/rents fully equipped driverless vehicles. It outsources production to blue-chip manufacturers.

EasyMile has developed a complete technology stack for autonomous vehicles that can be used for each of its use cases. The technology is vehicle/platform agnostic.

The flagship vehicle this is found in is the EZ10, the most deployed autonomous passenger shuttle in the world. EZ10s carry passengers at speeds of up to 15 miles per hour and operate on a specified route. They are used around the world to show how cutting-edge technology can deliver huge benefits for communities. They improve public transport by connecting hubs and, in many areas, provide a shared transport service where there otherwise wasn’t one. They also offer a powerful fleet management and supervision system, one of the first to be deployed with real-world autonomous vehicles.

Its rising star is the TractEasy, a fully electric autonomous tow-tractor that allows 24/7 ground transportation of goods on industrial sites and logistics centres. It optimizes supply chains with a new, highly automated capability: crossing from indoor to complex outdoor environments, unlike existing automated guided vehicles (AGVs).

EasyMile is also working on other heavy-duty vehicle applications including buses, trams and trucks. My team is in charge of this program and I would say that working on EasyMile’s future AV vehicles is really challenging and motivating!

With more than 250 deployments in over 30 countries, EasyMile’s technology has powered 600,000km of autonomous driving to date.

 

What are some of the different cities or municipalities that you are currently working with?

In the USA, our EZ10s are involved in demonstration projects in 16 American cities, carrying tens of thousands of passengers. Most of these are run by organizations like Departments of Transportation, airports, universities and transit agencies in collaboration with US-based EasyMile Inc.

We have a very strong presence in Germany and France as well as other projects around Europe.

These include business parks, hospitals, universities, cities and towns, and communities.

In Australia, the focus is on shared mobility, with recent projects including a retirement village and connecting a ferry service to the centre of a small island.

We are also working on a number of projects in Asia.

 

Is there anything else that you would like to share about EasyMile?

It was such a wonderful opportunity for me because, at the time I was looking for an opening in this area, the industry was still in its infancy.

I love that EasyMile is serious – we are really industrialising our products and services and this is still quite unique in this space. We’re not just playing in a garage with robots, we’re delivering real, measurable, services that deliver tangible benefits and outcomes for our clients.

Thank you for the fantastic interview. I really enjoyed learning more about EasyMile, easily one of the most underrated startups in the autonomous vehicle space. Readers who wish to learn more should visit EasyMile.
