It seems that venture capitalists are really picking up the tab for artificial intelligence startups, not only in the US but around the world. VentureBeat recently presented the Q3 2019 data from the National Venture Capital Association. According to that information, “965 AI-related companies in the U.S. have raised $13.5 billion in venture capital through the first 9 months of this year. That should eclipse the 1,281 companies that raised $16.8 billion in 2018, according to the 3Q 2019 PitchBook-NVCA Venture Monitor.”
NVCA noted that the sector is still in its youth and that “the average deal size and valuations still tend to fluctuate quite a bit from quarter to quarter.” Still, as they also point out, AI startups “have clearly moved well past early, experimental stages, which tend to attract lots of little bets as investors explore the terrain.”
At the same time, TechWorld cites a 2018 report by the McKinsey Global Institute that AI “could contribute an additional global economic activity worth around $13 trillion by 2030, by which point around 70 percent of companies will have adopted at least one form of AI”. TechWorld adds that eight of the top 20 European universities and 40% of European tech unicorns currently reside in the UK, while AI venture capital firm Asgard estimates that the UK has the largest “ecosystem of AI startups” in Europe.
TechWorld also lists 36 AI startups that deserve attention, and the following are among its top choices:
– Distributed, an on-demand workforce platform that helps businesses deliver digital outcomes faster and to a higher standard.
– SenSat, which builds digital copies of physical environments where artificial intelligence models can be deployed to understand the parameters of that environment and provide valuable feedback. The one-line mission statement is: “To teach computers how to understand the physical world we all live in.”
– Phrasee, which uses AI to create marketing copy for customers including The Times, SuperDry and Domino’s. The company has developed a language generation algorithm that analyses engagement from previous campaigns to craft the content in email subject lines, push notifications and social media ads.
– Deputi, which leverages AI to help businesses minimize the costs of day-to-day operational tasks through automation.
On the European continent itself, Silicon Canals considers Dutch companies to be “early adopters of AI” that “are competitive when it comes to the importance placed on new AI developments.” It points to the following three AI companies as the top ones to watch in the coming period:
– Dashmote (Amsterdam, current funding € 2.5 million) creates artificial intelligence technology that helps companies make complex decisions based on data gathered from images and text uploaded to the internet. Founded in 2014 by Dennis Tan (CEO), Matthäus Schreder (CPO), and Stefan Tan (CFO), the company has built a platform capable of analyzing images and extracting valuable insights.
– Doculayer.ai (The Hague, € 3 million) is a content management platform that leverages AI to manage unstructured information. The company’s technology can analyze content automatically, enhance findability, and enrich the quality of documents.
– Owlin (Amsterdam, € 3.1 million), which provides real-time news analysis. The company uses the latest AI and machine learning technologies to monitor, analyze, and visualize more than 2.8 million news sources worldwide in 8 languages and all in near real-time.
In Asia, for example, China’s SenseTime Group, currently regarded as the world’s most valuable AI startup, has seen its valuation breach US$7.5 billion this year, following massive investments from the likes of SoftBank. The same investor, as Nikkei Asian Review reports, “is accelerating its investment into the country’s artificial intelligence startups despite the verdict of its parent group’s founder that Japan is ‘underdeveloped’ in the cutting edge technology.”
All this indicates that 2019 could be the best year yet for AI startups.
Marcio Macedo, Co-Founder of Ava Robotics – Interview Series
Marcio Macedo is Co-Founder and VP of Product and Marketing at Ava Robotics, a recent spin-off of iRobot that focuses on autonomously navigating robots for enterprise, commercial and industrial environments.
Having previously worked at iRobot, what were some of the interesting projects that you worked on?
At iRobot we were fortunate to be designing and pioneering applications of telepresence, including an FDA-certified telemedicine robot for intensive care environments and the Ava telepresence product in partnership with Cisco.
Ava Robotics is a spinoff of iRobot, what was the inspiration behind launching a new company instead of keeping it in the iRobot family?
With iRobot’s strategic focus shifting to home products, Ava Robotics spun off to operate independently and better address the needs of our nascent markets. As an independent company we gain more flexibility in meeting our customers’ needs while enjoying the support of technology developed originally at iRobot.
The Ava Telepresence robot can be remotely controlled by users and features autonomous technology that lets the robot move itself to a designated area. Could you walk us through the machine learning that the robot uses to navigate through an environment without bumping into new objects?
When an Ava is installed at a location it learns its operating environment and creates a realistic topology map of the site. This map can be further annotated to force specific behaviors, such as speed zones, keep-out zones, etc.
Ava has built-in obstacle detection and obstacle avoidance (ODOA) capabilities, which leverage multiple sensors in the robot body so that Ava will not bump into people or objects in its path. Furthermore, if the most direct path to its destination is blocked, the Ava will search for and navigate through an alternative path if one is available.
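The replanning behavior described here can be illustrated with a toy grid search. This is a minimal sketch, not Ava’s actual navigation stack (which fuses 3-D cameras, LiDAR, and other sensors into a topology map of the site); it assumes a small occupancy grid and uses breadth-first search to find a route, then replans when a cell on the original route becomes blocked:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = blocked).
    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []            # walk the parent links back to the start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no route exists on the current map

# A tiny map with the centre cell occupied.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
direct = find_path(grid, (0, 0), (2, 2))   # route found down the left side

# An obstacle appears on a cell of that route: mark it blocked and replan.
grid[2][0] = 1
detour = find_path(grid, (0, 0), (2, 2))   # alternative route along the top
```

Real robot planners work on much richer maps and typically use A* or D*-style algorithms with continuous replanning, but the principle is the same: when the most direct route is blocked, search again on the updated map.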
What are the navigation sensors that are used, is it reliant on LiDAR or regular cameras?
Ava’s robotic navigation technologies use a variety of sensors (3-D cameras, LiDAR, and IMU) and they are combined for all actions, such as localization, planning, collision avoidance, cliff detections, etc. We operate in medium- and large-size spaces, so we think LiDAR is a very valuable part of a sensing package for real-world commercial spaces.
The telepresence robot looks like it would be extremely useful in the hospitality sector. Could you walk us through some of these potential use-cases?
Visits to Executive Briefing Centers provide access to senior-level executives and deliver value in the form of hands-on briefings, strategy reviews, product demonstrations and opportunities for relationship building. Customer Experience Centers offer organizations the opportunity to wow customers and show off their latest products and services. But with so many busy schedules, getting the right people to attend is not always easy.
For meeting planners, Ava provides the ability to “walk” the hotel and visit the meeting spaces, conference rooms and ballrooms that are available for their conference or event. In this application, the property’s sales and marketing team gain a unique tool to accelerate their sales cycles.
When invitees and guests can’t get to the conference or event, Ava allows them to attend and move around as if they were there. Whether it’s a business meeting, conference exhibit hall, or social event, Ava provides an immersive experience with freedom to move around.
What are some of the use-cases that are being targeted in the corporate sector?
Businesses benefit from Ava in many ways. The robot allows freedom of movement and access to meetings, corporate training, factory inspections, manufacturing sites, labs and customer experience settings.
Natural, face-to-face, ad-hoc conversations are critical to moving a business forward. Yet today’s globally distributed businesses have employees telecommuting from home or from across the world, who miss these vital interactions. With Ava, you unlock the ability to bring everyone back together as if they’re sitting together in the office and can walk up and interact naturally.
Use Case examples include:
- Agile Product Development: Agile product development teams come together for scheduled and unscheduled meetings, looking to achieve high levels of collaboration and communication. When remote workers are part of the team, existing collaboration tools are challenged to meet the need. With Ava, remote team members can actively participate in stand-up meetings, sprint planning and demos, and project reviews as if they were co-located with the team.
- Manufacturing: In manufacturing, remote visits by management, collaboration between experts at headquarters and staff at the plant, and remote tours by customers or suppliers are frequent – and necessary – events. Ava increases collaboration between those on the design team or in engineering and those building and delivering the final product on the plant floor. Also, imagine that the manufacturing plant is experiencing a production-line problem, but the person who knows how to fix it is thousands of miles away. In such a case, the technician needs to freely walk to different parts of the manufacturing floor to meet with someone or see something. Ava can help by delivering that critical physical presence right to the factory floor. Ava allows the remote person to immediately connect via the robot as if she were physically present, put eyes on the problem, and communicate with the local team on the floor. As a result, she can deliver immediate insight into the problem and quickly resolve the issue.
- Laboratories and Clean Rooms: Those who work in laboratories and clean rooms work hard to ensure they are kept sterile and clean. While necessary, this can be a time-consuming process for employees entering and leaving these spaces repeatedly during the day. Due to the risks of potential contamination, companies often limit tours by customers and other visitors. Ava brings people right into a laboratory or a clean room without compromising the space. With Ava, remote visitors can easily move around as if they were there in person, observing the work being done and speaking with employees.
Ava Robotics recently partnered with Qatar Airways to introduce smart airport technologies at QITCOM 2019. Could you share with us some details regarding this event and how those in attendance reacted?
We have been fortunate to work with Hamad International Airport in Qatar and Qatar Airways via our strategic partner Cisco, building applications for robots in airports for a variety of use cases. Showing our work at QITCOM 2019 was a good opportunity to expose the IT community to the applications that are now possible across different verticals and industries.
Is there anything else that you would like to share about Ava Robotics?
In these times of challenges to global travel, we have seen increased demand for solutions like telepresence robotics. Customers are just beginning to realize the potential of applications that truly empower remote workers to collaborate as if they were physically present at a distant location.
Elnaz Sarraf, CEO and founder of Roybi – Interview Series
Can you walk us through your journey, from growing up in Iran, to becoming an entrepreneur?
My childhood and Iranian heritage definitely play an important role in who I am today. My parents paid a lot of attention to my education at home and in school. My dad was a small business owner and was the face of our company outside of the home, while my mom took care of all the financial and operational aspects of our business at home, because as a woman in Iran, it would not have been acceptable for her to be involved directly in business negotiations. But the limitations imposed on women didn’t stop my parents from exposing me to every aspect of our business. My dad took me along to many of his meetings; observing the art of negotiating and conducting business deals fascinated me with both the business and social aspects of entrepreneurship.
While at home, I watched my parents manage the company together and discuss the financial elements of holding our business together and finding innovative ways to grow. My summers were always filled with extracurricular classes in the arts, engineering and science. I’m very grateful to my parents who exposed me to a diverse set of social and academic skills at an early age. When I was starting ROYBI, I knew that I would have to handle a variety of different tasks myself until the company grew. Because of my background in the arts and engineering, I was able to multitask on projects such as industrial design, website designs, coding, and presenting my ideas and vision for the company to investors and partners.
What was it that inspired you to design an AI-powered educational robot?
Our education system needs a fundamental change, and that change starts with early childhood education. It should no longer be a one-size-fits-all approach. Every child has his/her own unique set of skills and our focus needs to be on their individual capabilities. We saw a huge gap in this area and decided to use technology and specifically artificial intelligence to bring about change that can help children, parents, and teachers. We developed Roybi Robot to interact with children as young as 3-years-old because early childhood is the most critical age in a child’s growth and future success. We’re constantly engaged in thinking about the benefits of robotics and AI in early childhood education.
ROYBI teaches children languages and STEM skills by playing, what are some examples of games that children can play?
We use different methodologies to deliver our educational content. Some lessons are only based on conversations. By using our voice recognition technology, Roybi Robot can understand if the child is saying the correct word or not. If the answer is not correct, it encourages the child to repeat using playful and compassionate messages.
Lessons also alternate between fun, educational conversations and games that can be played by interacting with the buttons on Roybi Robot’s charging plate. This creates more involvement and encourages children to move their hands, body and gaze, and to stay engaged.
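As a rough illustration of the word-checking step described above — this is not ROYBI’s actual speech pipeline, and the function name and threshold are invented for the sketch — one could compare a speech recognizer’s transcript against the target word and respond with an encouraging retry prompt when it falls short:

```python
from difflib import SequenceMatcher

def check_word(expected: str, heard: str, threshold: float = 0.8):
    """Compare the speech recognizer's transcript with the target word.

    Returns (is_correct, message): a playful prompt to retry when the
    spoken word is too far from the target."""
    score = SequenceMatcher(None, expected.lower(), heard.lower()).ratio()
    if score >= threshold:
        return True, "Great job!"
    return False, f"Almost! Let's try saying '{expected}' again together."

print(check_word("elephant", "elephant"))  # exact match -> correct
print(check_word("elephant", "lion"))      # wrong word -> gentle retry
```

A production system would of course judge pronunciation from the audio itself rather than from a text transcript, but the flow — recognize, compare, encourage — is the same.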
Facial detection and emotion detection are a primary focus of ROYBI’s AI. Can you discuss some of the technologies behind this?
We use several technologies to deliver our content. One important AI component is voice recognition. Based on what the child says during the lessons, we can understand their progress and interest and create our reports for parents and educators. Facial detection is being used to initiate a conversation with a child to say “Hello.” And we use emotion detection as social-emotional support for the child while interacting with Roybi Robot, the educational robot.
ROYBI was recently featured on the cover of TIME Magazine as one of the ‘Best Inventions of 2019’. How did it feel to see your product on the cover of one of the top magazines in the world?
We were shocked, excited, honored, and overwhelmed at the same time. We knew we were onto something big that would change the world, but receiving such amazing recognition and even getting featured on the cover of the magazine gave us so much encouragement to continue our path even stronger!
There have been some pilots with ROYBI in classrooms. Can you share some of the feedback that you’ve received from teachers?
Our content is created by teachers, and we’re hoping to pilot in schools in the next academic year. The teachers who work with us to create the lessons, give us direct feedback on what is needed most to encourage children to engage with our content.
You’ve stated that you want to see every child in the world hold a ROYBI in their hands, do you believe that this could become a possibility if the classroom pilots are a success?
Absolutely! We are on our way to providing learning in both home and classroom settings, and we want to change the way our children learn. To do that, we will work to provide our Roybi Robot to as many children as possible, and as you can imagine, it is an ambitious mission. To make this happen, we also invite future partners, delegates, governments, investors, mentors, and anyone who shares the same passion as us, to give us a hand, so together we can change the world for our children!
ROYBI recently acquired kidsense.ai, what was the purpose behind this acquisition? Was it to simply offer more language options?
The recent acquisition happened as a strategic decision to make ROYBI’s technology even more accessible to all children around the world. With this acquisition, ROYBI becomes a leader in voice recognition AI that is specifically developed for children. As part of this proprietary technology, we can now accelerate language development efforts as well.
What would you tell women who feel that AI and tech are dominated by men and that it’s not an even playing field for them?
It is time to change this! Put your best effort at work. You got this!
Do you have any advice for female entrepreneurs who feel that it is more difficult for them to be taken seriously and to receive funding than their male counterparts?
The only limitation is in your own thoughts. There is no limit for what you can achieve no matter how difficult a situation may seem. You will find support from many people around you who share a similar passion as you. I encourage women to engage and involve themselves more in technology and how it is and will affect our future generations.
To make the change happen, first, we need to start by ourselves and continue it together!
Do you have anything else that you would like to share?
As part of growing ROYBI globally, we are continuously looking for partnerships with schools, government entities, and foundations to help us make Roybi Robot and education more accessible around the world and to every child regardless of their location or family income status. If you believe you can help us in our mission, reach out to us at email@example.com
Elnaz Sarraf is an inspiration to women and minorities, and shows that they too can be a success. Please visit the Roybi website to learn more or to order a Roybi Robot for a young child.
Portland Startup Using AI To Help Protect Endangered Animals
A non-profit based in Portland developed an AI computer vision system intended to recognize animals and help conservation groups protect them. Animals like zebras, giraffes, lions, and whales are often endangered, and both conservation groups and scientists need to track these animals. As reported by The Seattle Times, the nonprofit Wild Me’s AI is capable of recognizing animals by their patterns, like spots or stripes, enabling researchers to track animals more effectively.
The process of gathering identifiable information about animals has long been expensive, laborious, and invasive. However, a computer vision application that can recognize animals by sight can make recognizing individual animals much easier. Animals come in all different shapes, sizes, and patterns. If there’s one thing neural networks excel at, it’s recognizing patterns.
Thanks to advances in computer vision and photography, photo surveys are becoming the method of choice for animal population estimates and some forms of animal tracking. The process is far less invasive and much cheaper than traditional methods, not to mention much quicker. Wild Me’s AI is capable of recognizing animals based on unique patterns like a giraffe’s spots, and it can recognize these specific animals much more quickly than a human observer could.
The AI model developed by Wild Me is trained on large volumes of image data, which comes from regular citizens as often as it does from scientists. The model then uses computer vision techniques and machine learning systems to recognize specific giraffes or other animals.
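Conceptually, identifying an individual animal works like a nearest-neighbour lookup over pattern features. The sketch below is illustrative only — Wild Me’s actual models are far more sophisticated — and it assumes each animal’s coat pattern has already been reduced to a small feature vector by a vision model; the IDs, vectors, and threshold are made up for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(query, catalog, threshold=0.95):
    """Return the catalog ID whose stored pattern vector is most similar to
    the query, or None if nothing clears the threshold (likely a new animal)."""
    best_id, best_score = None, -1.0
    for animal_id, vector in catalog.items():
        score = cosine_similarity(query, vector)
        if score > best_score:
            best_id, best_score = animal_id, score
    return best_id if best_score >= threshold else None

# Toy catalog: each giraffe's coat pattern reduced to a feature vector
# (in practice these would come from a trained vision model).
catalog = {
    "giraffe-001": [0.9, 0.1, 0.3],
    "giraffe-002": [0.2, 0.8, 0.5],
}
print(identify([0.88, 0.12, 0.31], catalog))  # close match -> giraffe-001
print(identify([0.5, 0.5, 0.5], catalog))     # no confident match -> None
```

A new sighting that matches nothing in the catalog would be added as a fresh entry, which is essentially how a Wildbook-style database grows over time.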
The new AI system has been well received by conservation scientists. Christin Khan, an aerial surveyor of whales for National Oceanic and Atmospheric Administration (NOAA), was quoted as saying that an AI solution for monitoring endangered species of whales has been wanted for years. Khan works for NOAA tracking North Atlantic right whales, and it’s estimated that there are only about 400 of these whales left. Tracking them proves difficult because they tend to migrate long distances. Wild Me assembles catalogs of images of a given species into a database called a Wildbook. The Wildbook for whales encourages researchers around the globe to collaborate and share their findings and observations, which can make monitoring the movement of whales over large distances easier.
According to Michael Brown, a conservation scientist at the Smithsonian Conservation Biology Institute and the Giraffe Conservation Foundation, the Wildbooks and AI algorithms developed by Wild Me help protect animals in various ways. Said Brown to the Seattle Times:
“We can use this information to track diseases and poaching threats, look at manifestations of diseases. It lets us piece together an understanding of how these threats to giraffes are spatially situated (and) how the giraffes are utilizing different landscapes over time.”
Wild Me has also designed an AI system that pulls data from YouTube videos. The AI analyzes video of marine animals like sea turtles and whale sharks, and it uses this data to get a better estimate of the animal population. This is a useful way of harvesting more data and supporting the creation and optimization of Wildbooks.
NOAA is employing AI to protect animals in another fashion as well. NOAA scientists are collaborating with Microsoft to design AI that can monitor aquatic and arctic animals like beluga whales, polar bears, and ice seals. The AI tools will be trained on sound and will be able to distinguish the sound of a seal from the noise a dredging machine makes. Another AI will be trained on images so that airplanes flying over stretches of sea ice can count polar bears and seals.