Deep Learning

Using Deep Learning, Autonomous Research Vessel ‘Mayflower’ Sets Sail In September 2020

Next year marks the 400th anniversary of the original ‘Mayflower’ setting sail across the Atlantic Ocean. It is thought to have taken 30 crew members to ensure that ship crossed the ocean safely, but as TechCrunch, CBROnline, and other sources report, this version of the ‘Mayflower’ will be autonomous, powered by artificial intelligence, and will conduct extensive research.

The ship is the result of a global collaboration involving the University of Plymouth, marine research firm Promare, and technology giant IBM, which, along with technical support, will supply its PowerAI Vision technology backed by its Power Systems servers.

The new, autonomous Mayflower, as reported, will be decked out with solar panels, as well as diesel and wind turbines, to provide its propulsion power as it attempts the 3,220-mile journey from Plymouth in England to Plymouth, Massachusetts in the U.S. For its part, Promare hopes that a successful crossing and mission, which would be “the first for full-size seafaring vessels navigating the Atlantic on their own,” will open the door to other research-focused applications of autonomous seagoing ships.

The researchers and academics from the University of Plymouth will develop research pods that are supposed to tackle various experiments in areas such as “maritime cybersecurity, sea mammal monitoring and even addressing the challenges of ocean-borne microplastics.”

The IBM technology, based on the deep learning models that were developed in partnership with Promare and using radars, LIDAR sensors, automated identification systems, and optical cameras, is designed to “help with the avoidance of obstacles and hazards at sea.” On-board servers are then supposed to process the data to allow the ship “to determine the best course forward at an optimal speed.”
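To make this decision loop concrete, here is a minimal illustrative sketch of how fused hazard detections might feed a course-and-speed decision. This is not the IBM/Promare system; the function name, thresholds, and the assumption that detections arrive as (bearing, range) pairs are all hypothetical.

```python
# Illustrative only: a toy hazard-avoidance step, not the actual PowerAI pipeline.
# Assumes detections from radar, LIDAR, AIS, and cameras are already fused into
# dicts of the form {"bearing_deg": ..., "range_m": ...}.

def plan_next_leg(heading_deg, speed_kn, hazards, safe_range_m=500.0, cruise_kn=8.0):
    """Return an adjusted (heading, speed) that steers around nearby hazards."""
    threats = [h for h in hazards if h["range_m"] < safe_range_m]
    if not threats:
        return heading_deg, cruise_kn                    # open water: hold course

    closest = min(threats, key=lambda h: h["range_m"])   # react to the nearest hazard
    relative = (closest["bearing_deg"] - heading_deg) % 360
    turn = -30.0 if relative < 180 else 30.0             # steer toward the clearer side
    return (heading_deg + turn) % 360, min(speed_kn, cruise_kn / 2)  # slow while turning
```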

CBR adds that the “AI model is being trained in Plymouth Sound, a bay on the English Channel, where it is being fed real-world data and images to help it determine risks and the best avoidance measures.”

It is envisioned that the system will use “both local and remote processing, meaning devices on the ship will be able to operate without connection at the edge, and then check back in periodically with HQ when conditions allow for processing via nodes located at either shore.”

During its voyage, “IoT and edge devices will be constantly collecting data and storing it on-board till the ship encounters edge nodes that are located onshore. Once in range, the ship will transmit its data to these nodes, which will then upload the data to the IBM cloud.”
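The store-and-forward behavior described here can be pictured with a short sketch. The buffer class, node interface, and batch size below are assumptions for illustration, not details of the actual on-board software.

```python
import time
from collections import deque

class EdgeDataBuffer:
    """Illustrative store-and-forward buffer for an off-grid vessel (hypothetical)."""

    def __init__(self, max_samples=1_000_000):
        self.buffer = deque(maxlen=max_samples)   # oldest samples dropped if storage fills

    def record(self, sensor_id, value):
        """Called continuously by on-board IoT/edge devices during the voyage."""
        self.buffer.append({"sensor": sensor_id, "value": value, "ts": time.time()})

    def sync(self, shore_node):
        """Upload buffered data in batches once a shore-side edge node is in range."""
        if not shore_node.in_range():
            return 0                               # stay offline and keep collecting
        uploaded = 0
        while self.buffer:
            batch = [self.buffer.popleft() for _ in range(min(500, len(self.buffer)))]
            shore_node.upload_to_cloud(batch)      # node relays the batch to the cloud
            uploaded += len(batch)
        return uploaded
```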

Brett Phaneuf, Founding Board Member of ProMare, wrote in a blog that: “Putting a research ship to sea can cost tens of thousands of dollars or pounds a day and is limited by how much time people can spend onboard – a prohibitive factor for many of today’s marine scientific missions. With this project, we are pioneering a cost-effective and flexible platform for gathering data that will help safeguard the health of the ocean and the industries it supports.”

Autonomous Vehicles

William Santana Li, CEO of Knightscope – Interview Series

Knightscope is a leader in developing autonomous security capabilities, with a vision of one day being able to predict and prevent crime, disrupting the $500 billion security industry. The technology is a profound combination of self-driving technology, robotics, and artificial intelligence.

William Santana Li is the Chairman and CEO of Knightscope. He is a seasoned entrepreneur, intrapreneur, and former corporate executive at Ford Motor Company, as well as the Founder and COO of GreenLeaf, which became the world’s second-largest automotive recycler.

Knightscope was launched in 2013, which was very forward-thinking for the time. What was the inspiration behind launching this company?

A professional and a personal motivation. The professional answer: as a former automotive executive, I believe deeply that autonomous self-driving technology is going to turn the world upside down – I just don’t agree on how to commercialize the technology. Over $80 billion has been invested in autonomous technology, with something like 200 companies working on it – for years. Yet no one has shipped anything commercially viable. I believe Knightscope is literally the only company in the world operating fully autonomously 24/7/365 across an entire country, without human intervention, generating real revenue, with real clients, in the real world. Our crawl, walk, run approach is likely more suitable for this extremely complicated and execution-intensive technology. My personal motivation: someone hit my town on 9/11 and I’m still furious – and I am dedicating the rest of my life to better securing our country. You can learn more about why we built Knightscope here.

 

Knightscope offers clients a Machine-as-a-Service (MaaS) subscription which aggregates data from the robots, analyzes it for anything out of the ordinary and serves that information to clients. What type of data is being collected?

Today we can read 1,200 license plates per minute, detect a person, run a thermal scan, and check for rogue mobile devices… it adds up to over 90 terabytes of data a year that no human could ever process. So our clients utilize our state-of-the-art browser-based user interface to interact with the machines. You can get a glimpse of it here – we call it the KSOC (Knightscope Security Operations Center). In the future, our desire is to have the machines be able to ‘see, feel, hear and smell’ and do 100 times more than a human could ever do – giving law enforcement and security professionals ‘superpowers’ – so they can do their jobs much more effectively.

 

The K1 is a stationary machine that is ideal for entry and exit points. What capabilities does this machine offer?

Yes, the K1 operates primarily at ingress/egress points for humans and/or vehicles. All our machines have the same suite of technologies – but at this time the K1 does have facial recognition capabilities, which have proven to be quite useful in securing a location.

The K3 is an indoor autonomous robot, and the K5 is an outdoor autonomous robot, both capable of autonomous recharging and of having conversations with humans. What else can you tell us about these robots, and is there anything else that differentiates the two robots from each other?

The K3 is the smaller version capable of handling much smaller and dynamic indoor environments.

Obviously the K5 is weatherproof and can even go up ramps for vehicles – one of our clients is a 9-story parking structure – and the robot patrols autonomously on multiple levels on its own, which is a bit of a technical feat.

 

Your robots have been tested in multiple settings including shopping malls and parking lots. What are some other settings or use cases which are ideal for these robots?

Basically, anywhere outdoors or indoors you may see a security guard.  Commercial real estate, corporate campuses, retail, warehouses, manufacturing plants, healthcare, stadiums, airports, rail stations, parks, data centers – the list is massive.  Usually we do well when the client has a genuine crime problem and/or budget challenges.

 

Could you share with us some of the noteworthy clients which are currently using the robots in a commercial setting?

Ten of the Fortune 1000 major corporations are clients: Samsung, Westfield Malls, the Sacramento Kings, the City of Hayward, the City of Huntington Park, Citizens Bank, XPO Logistics, Faurecia, Dignity Health, and Houston Methodist Hospital are just a few that come to mind. We operate across 4 time zones in the U.S. only. You can check them out on our homepage at www.knightscope.com.

 

The K7 is a multi-terrain autonomous robot which is currently under development. The pictures of this robot look very impressive. What can you tell us about the future capabilities of the K7?

The K7 is technically challenging but is intended to handle much more difficult terrain and much larger environments – with gravel, dirt, sand, grass, etc.  It is the size of a small car.

 

Knightscope is currently fundraising on StartEngine. What are the investment terms for investors?

We are celebrating our 7th anniversary and have raised over $40 million since inception to build all this technology from scratch. We design, engineer, build, deploy and support it. Made in the USA – and we are backed by over 7,000 investors and 4 major corporations; you can learn about our investor base here. We are now raising $50 million in growth capital to scale the Company up to profitability – we can accept accredited and unaccredited investors, as well as domestic and international investors, from $1,000 to $10M completely online. You can learn more about the terms and buy shares here: www.startengine.com/knightscope

 

Is there anything else that you would like to share about Knightscope?

As I write this response, we are in complete lockdown in Silicon Valley due to the global pandemic.  The crazy thing is that our clients are ‘essential services’ (law enforcement agencies, hospitals, security teams) so we must continue to operate 24/7/365.  You can read more about why I think you should consider investing in Knightscope here – but these days the important thing to remember is that robots are immune!

Thank you for sharing information about your amazing startup. Readers who wish to learn more may visit Knightscope or the StartEngine investment page.


Autonomous Vehicles

Dr. Leilei Shinohara, Vice President of R&D at RoboSense – Interview Series

Dr. Leilei Shinohara is Vice President of R&D at RoboSense. With more than a decade of experience developing LiDAR systems, Dr. Shinohara is one of the most accomplished experts in this field. Prior to joining RoboSense, Dr. Shinohara worked at Valeo as the Technical Lead for the world’s first automotive grade LiDAR, SCALA®. He was responsible for multiple programs, including automotive LiDAR and sensor fusion projects. Dr. Shinohara managed an international sensor product development team for the development and implementation of systems, software, hardware, mechanics, testing, validation, and functional safety to build the first automotive grade LiDAR product.

Prior to joining RoboSense as Vice President of R&D, you had more than a decade of experience developing LiDAR, including working on Valeo’s SCALA® LiDAR project. What was it that attracted you to joining RoboSense?

RoboSense is Asia’s No. 1 LiDAR company, with amazing development speed.

Prior to joining RoboSense, I was impressed by the company’s innovation capabilities and technical acumen. RoboSense is aiming to be the top smart LiDAR sensor provider to the automotive market, which involves not only the LiDAR hardware but also the AI perception algorithm. This goal fits well with my vision for the future smart sensor approach. At CES 2019, RoboSense exhibited its latest MEMS solid-state LiDAR, which has superior performance to its peer products. At CES 2020, RoboSense made huge progress and announced that the solid-state LiDAR RS-LiDAR-M1 is ready for sale at a price of $1,898.

With RoboSense’s leading technology and my previous experience in the automotive industry, I am confident that together, we can greatly accelerate the development of automotive-grade LiDAR products that can be mass-produced to make highly automated driving a reality.

 

It’s important to understand the benefits of LiDAR technology for autonomous vehicles versus regular camera systems. Could you walk us through some of these benefits?

Cameras and radar have their limitations. For example, cameras don’t work well under bad ambient light conditions, and radar has limitations in detecting stationary obstacles. Compared to the camera, LiDAR’s biggest advantage lies in higher accuracy and precision. It is not affected by ambient light conditions, such as night, bright sunlight or the oncoming headlights of other cars, and it is able to work in various complex traffic conditions.

Recently there has been quite a lot of news about Tesla Autopilot accidents. As we know, Tesla’s Autopilot system relies only on cameras and radar. Those accidents also prove that LiDAR is critical to guaranteeing safety and compensating for the weaknesses that currently exist in conventional sensors.

Both Audi’s A8 (a Level 3 mass-produced autonomous vehicle) and the Waymo One (an autopilot ride-hailing service) have used LiDAR, which is an important industry indicator. Level 3 autonomous passenger vehicles using LiDAR will gradually become the industry standard.

 

One of the common complaints that we hear about LiDAR is that it’s too expensive for the bulk of consumer vehicles. Do you feel that the price will eventually drop to make it more competitive?

As we all know, high cost is one of the major limits preventing traditional LiDAR systems from reaching mass production, so it is an inevitable trend that LiDAR prices will eventually drop to meet consumer autonomous vehicles’ needs. Currently, our MEMS LiDAR, which uses a MEMS micromirror to steer the laser beam for scanning, can feasibly be made small and at a lower cost, which makes it more competitive than overpriced mechanical LiDAR.

The RoboSense MEMS-based LiDAR M1 uses 905nm lasers and offers low cost, automotive grade, and compact size. The number of parts has been reduced from hundreds to dozens in comparison to traditional mechanical LiDARs, greatly reducing the cost and shortening production time – achieving a breakthrough in manufacturability. The coin-sized optical module delivers the optical-mechanical performance needed to meet autonomous driving and mass-production requirements.

The M1 prototype, with a 200m detection range and the equivalent of 125 layers, now sells at a price of $1,898, in comparison with conventional 128-layer mechanical LiDARs, which cost around ten thousand dollars. Furthermore, as we move to volume mass production, the sensor cost can drop to the range of $200.

 

In December 2019, RoboSense announced the launch of a complete LiDAR perception solution for Robo-Taxis, the RS-Fusion-P5. What is this solution?

The RS-Fusion-P5 is equipped with RoboSense’s flagship mechanical LiDAR model, the RS-Ruby, and four short-range blind-spot LiDARs, the RS-Bpearl. This multi-LiDAR fusion perception solution was developed to further accelerate the development of Robo-Taxis.

The RS-Fusion-P5 has excellent perception capabilities. It is able to reach a 200m detection range for a 10% reflectivity target, with up to 0.1° high-precision resolution, full coverage, and zero blind spots in the sensing zone. In addition, through its advanced AI perception algorithms, multi-sensor fusion, and synchronization interfaces, vehicles are able to identify surrounding obstacles and position themselves easily and precisely, empowering Level 4 and above autonomous vehicles with full-stack perception capabilities.

The four embedded RS-Bpearls form hemispherical FOV coverage of 90° × 360° (or 180° × 180°), which can not only precisely identify objects around the vehicle body – such as pets, children, and roadbeds, as well as other details of the near-field ground area – but also detect actual height information in particular scenarios such as bridge tunnels and culverts, further supporting autonomous vehicles in driving decision-making and greatly improving safety.

 

The RS-Fusion-P5 has zero blind spots in the sensing zone. How is this achieved?

To cover the blind-spot zone, four RS-Bpearl units are integrated on the four sides of the vehicle.

The Bpearl is a mechanical-type LiDAR based on the same platform as the 16/32/Ruby LiDARs, but specially designed for blind-spot area detection.

 

RoboSense’s LiDAR production line recently obtained the IATF 16949 Letter of Conformity. This is a huge milestone for the company. Can you explain the importance of this letter and what it means for the company?

IATF 16949 is the most widely used global quality management standard for the automotive industry, and it emphasizes various product reliability metrics. RoboSense has obtained the IATF 16949 certificate in the automotive field, which now fully qualifies it to supply automotive customers. It has also accelerated partnerships with major OEMs and Tier 1s for serial production of automotive-grade LiDAR. Moreover, it represents global industry experts’ recognition of RoboSense’s product design, development, and production processes, and indicates that RoboSense has achieved a new milestone of complete readiness for serial mass production of automotive LiDARs, including the latest solid-state smart LiDAR, the RS-LiDAR-M1.

 

RoboSense won this year’s CES 2020 Innovation Award for the first MEMS-based smart LiDAR sensor, the RS-LiDAR-M1. What sets it apart from competing solutions?

Since opening to partners in 2017, the Smart LiDAR Sensor’s built-in perception algorithm, the RS-LiDAR-Algorithm, has been in a leading position in the automotive LiDAR industry. The RoboSense RS-LiDAR-M1 Smart LiDAR is the world’s first and smallest MEMS-based smart LiDAR sensor incorporating LiDAR sensors, AI algorithms, and IC chipsets. It transforms overpriced traditional LiDAR systems (essentially just information collectors) into a full data analysis and comprehension system, providing rich and reliable 3D point cloud data and structured, semantic environmental perception results in real time, for faster autonomous vehicle decision-making than ever before. It fully supports Level 3 to Level 5 advanced automated driving with the highest level of ASIL-D-relevant perception safety, which greatly distinguishes us from other LiDAR companies.

In addition, the RS-LiDAR-M1, based on solid-state MEMS technology, meets automotive requirements. The RS-LiDAR-M1 has a field of view of 120° × 25°, the largest field of view among MEMS solid-state LiDAR products released worldwide. RoboSense uses low-cost, automotive-grade, small 905nm lasers instead of expensive 1550nm lasers.

 

How does the LiDAR industry in China compare to North America?

The LiDAR industry in China started later than in North America, but it has quickly become one of the fastest-growing markets in terms of autonomous driving. In 2018, RoboSense won a strategic investment of over $45 million USD from Alibaba Cainiao Network, SAIC, and BAIC, setting the record for the largest single financing in China’s LiDAR industry. Along with this strategic investment, and powered by RoboSense’s MEMS solid-state LiDAR M1, Alibaba Cainiao announced an unmanned logistics vehicle, which is accelerating LiDAR adoption in the logistics market. Meanwhile, Robo-Taxi applications have also been speeding up the LiDAR market in China since last year.

In conclusion, the current market size in China is smaller than in the US, but I also see fast growth in applications for autonomous driving, MaaS, logistics, and robotics.

 

When do you believe we will see fully operational Level 5 autonomous vehicles on the road?

I think it will still take a long time before we see fully automated (L5) vehicles on the road. There will be step-by-step growth in autonomous vehicles. There are already vehicles equipped with L3 systems, and some of our partners and customers are developing L4 systems with a potential start of production in five years. But for fully automated L5 vehicles, the biggest concerns are always safety and public acceptance. If we are not able to prove that fully automated vehicles are safer than human drivers, they will have difficulty becoming popular. Currently, I do see the industry moving in this direction step by step, but I don’t think there will be fully automated vehicles within 10 years.

 

Is there anything else that you would like to share about RoboSense?

RoboSense has received numerous awards, including the CES 2020 and 2019 Innovation Awards, the 2019 AutoSens Award, and the 2019 Stevie Gold Award. Our partners include the world’s major autonomous driving technology companies, OEMs, and Tier 1s, among them the world’s leading automaker, China’s FAW (First Automobile Works), which will use the RoboSense RS-LiDAR-M1 LiDAR in FAW’s proprietary next-generation autonomous driving system.

RoboSense will focus on developing the solid-state M1 product into an automotive-grade, mass-produced product as its first priority. We are developing not only the hardware but also the software, as a comprehensive smart sensor system. The delivery of our automotive-grade MEMS LiDAR in 2020 will be one of our biggest milestones.

In addition, safety is the biggest challenge we will tackle. To ensure safety, fusion with different sensors is needed. Furthermore, AD-friendly infrastructure, such as an intelligent vehicle-infrastructure cooperative system (IVICS), also supports autonomous driving. Therefore, the development of short-range Blind Spot Detection (BSD) LiDAR, multi-sensor fusion projects, and IVICS projects to provide high-precision perception systems is also our focus in 2020.

Thank you for this fascinating interview. Anyone who wishes to learn more should visit RoboSense.


Autonomous Vehicles

Swarm Robots Help Self-Driving Cars Avoid Collisions

The top priority for companies developing self-driving vehicles is that they can safely navigate and avoid crashing or causing traffic jams. Northwestern University has brought that reality one step closer with the development of the first decentralized algorithm with a collision-free, deadlock-free guarantee. 

The algorithm was tested by the researchers in a simulation of 1,024 robots, as well as on a swarm of 100 real robots in the lab. The robots were able to reliably, safely, and efficiently converge to form a predetermined shape in less than a minute.

Northwestern’s Michael Rubenstein led the study. He is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern’s McCormick School of Engineering. 

“If you have many autonomous vehicles on the road, you don’t want them to collide with one another or get stuck in a deadlock,” said Rubenstein. “By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other.”

The paper is set to be published in the journal IEEE Transactions on Robotics later this month. 

There is an advantage to using a swarm of small robots compared to one large robot or a swarm led by a single robot: the absence of centralized control. Centralized control can become a major point of failure, and Rubenstein’s decentralized algorithm acts as a fail-safe.

“If the system is centralized and a robot stops working, then the entire system fails,” Rubenstein said. “In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task.”

In order to avoid collisions and jams, the robots coordinate with each other. The ground beneath the robots acts as a grid for the algorithm, and each robot is aware of its position on the grid due to technology similar to GPS. 

Before moving from one spot to another, each robot relies on sensors to communicate with the others. By doing this, it is able to determine whether other spaces on the grid are vacant or occupied.

“The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot,” Rubenstein said. “They are careful and reserve a space ahead of time.”

The robots are able to communicate with each other in order to form a shape, and this is possible due to the near-sightedness of the robots. 

“Each robot can only sense three or four of its closest neighbors,” Rubenstein explained. “They can’t see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information.”
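Roughly, the reservation rule Rubenstein describes could look like the sketch below. This is a simplified illustration, not the algorithm from the paper: the robot interface is hypothetical, and real deadlock-freedom requires tie-breaking between simultaneous claims that this sketch omits.

```python
# Simplified sketch of reservation-based movement on a shared grid (illustrative only).

def try_step(robot, target_cell):
    """Advance one grid cell only if it is free and unclaimed by any visible neighbor."""
    for other in robot.sense_neighbors():        # only the 3-4 nearest robots are visible
        if other.position == target_cell:        # the cell is physically occupied
            return False
        if other.reserved_cell == target_cell:   # another robot already claimed the cell
            return False
    robot.reserved_cell = target_cell            # claim the cell before moving
    robot.broadcast_reservation(target_cell)     # let nearby robots see the claim
    robot.move_to(target_cell)
    robot.reserved_cell = None                   # release the claim after arriving
    return True
```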

One hundred robots can coordinate to form a shape within a minute, compared to the hour it took with some previous approaches. Rubenstein wants his algorithm to be used in both driverless vehicles and automated warehouses.

“Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab,” he said. “They need to make sure their robots don’t collide but do move as quickly as possible to reach the spot where they eventually give an object to a human.”

 
