Paolo Pirjanian is an Armenian who was born in Iran and fled to Denmark as a teen. From the time he was young, he was fascinated by computers and started coding in his bedroom. After earning his PhD in robotics, Paolo became an early leader in the field of consumer robotics, with 16+ years of experience developing and commercializing cutting-edge home robots. He worked at NASA JPL and led world-class teams and companies at iRobot®, Evolution Robotics®, and others. In 2016, Paolo founded Embodied, Inc. with the vision of building socially and emotionally intelligent digital companions that improve care and wellness and support people in living better lives every day.
What attracted you initially to AI and robotics?
My fascination with AI and robotics stems back to my childhood. I was displaced from country to country several times until our family moved to Denmark. By accident, I discovered a computer. I became so fascinated by it that I locked myself in my room and started coding all day and night for months. My parents thought I was depressed or on drugs, but it was none of that. I was just so completely fascinated by the computer!
During that same time, I saw a documentary on TV by Pixar. Pixar was presenting their first animated short, Luxo Jr., a two-minute short about two table lamps running around and playing with a ball. I was so fascinated by that and amazed that a computer that I was just learning to code could generate such endearing characters on TV that evoke so much emotion in me. So from there on, I decided to go to school to study robotics, eventually getting my PhD.
I then moved to the US to work on Mars rovers at NASA, which was a childhood dream job. Eventually, I got into entrepreneurship to develop SLAM navigation technology that now enables iRobot’s products.
But looking back, I realized that my inspiration for this whole journey was actually the Pixar short animation of bringing life to inanimate objects. So, that’s why we created Embodied – to bring life to robots that can interact with people, focusing on helping children with social-emotional development.
When did you first come across the concept for launching Evolution Robotics?
Evolution Robotics was originally started by Bill Gross of Idealab in 2001 to become the Microsoft of robotics, a bold vision that turned out to be far too early and eventually failed. I was the CTO and GM at Evolution Robotics, and after its failure I negotiated with Idealab to spin out some of the core technologies that my team and I had developed and start a new company. In 2008 the new entity, also known as Evolution Robotics, set out to develop products using our core navigation technologies, including NorthStar and vSLAM. These were groundbreaking approaches to spatial mapping and autonomous navigation, similar to what we are seeing in self-driving cars but targeted at low-cost consumer electronics products.
We developed a line of products for automatic sweeping and mopping of hard floors called Mint, which we launched in 2010. By 2011 we had rapidly grown to $25M in sales, and in 2012 we were acquired by iRobot for our product revenues and our navigation technology, vSLAM, which now powers iRobot's Roomba and Braava product lines.
At that point you became the CTO at iRobot. Could you discuss your experience at iRobot and what you learned from your experience?
As the CTO of iRobot, I was able to quickly integrate vSLAM into the Roomba product line to launch a new model that was able to systematically cover the entire floor plan without missing a spot. That helped the company stay ahead of competition like Dyson which was coming out with systematic cleaning solutions. vSLAM is now an integral part of iRobot’s flagship product lines Roomba and Braava.
I enjoyed working closely with Colin Angle, CEO of iRobot, to help set a strategic direction to make Roomba central to the connected home ecosystem, where Roomba's spatial awareness gives it a unique position in understanding the floor plan and becoming the connective tissue between all connected devices. That strategy seems to have had a strong footing since my departure in 2015.
In addition, we decided on doubling down on the Consumer Robotics business to help iRobot maintain its global leadership position. This led to the divestiture of the defense business and exiting other peripheral businesses to bring focus and intensity to the consumer business.
Furthermore, we had to re-architect the organization to be able to support a software-heavy strategy with connected products. That required a transformation of company culture to embrace more of an agile, iterative approach.
The list of things I learned at iRobot is long. One thing that sticks out is the power of team culture. Staying agile and committed to the mission is probably the most important competitive advantage any company can have, above any patent portfolio and above trade secrets. If you have a high-performing team that feels empowered and inspired toward a clear goal, it will be hard to stop.
You’re currently the Founder & CEO of Embodied. Can you discuss what the inspiration was behind launching this company?
I really enjoyed my time at iRobot as the CTO, and we were working on a lot of exciting projects and pushing the boundaries of robotics. It was exciting to launch commercially successful robots into the marketplace that performed helpful physical tasks, such as vacuuming the floor.
However, in the back of my mind, I knew I still had a lifelong dream to fulfill – to build socially and emotionally intelligent robotic companions that improve care and wellness and enhance our daily lives. I knew we were at a tipping point in the way we will interact with technology. So with that, I decided to resign from iRobot and start Embodied.
When we started Embodied, from the beginning, we were rethinking and reinventing how human-machine interaction is done beyond simple verbal commands, to enable the next generation of computing, and to power a new class of machines capable of fluid social interaction. Specifically, the first product was to focus on building an animate companion to help children build social and emotional skills through play-based learning. This companion would come to be known as Moxie. Moxie is a new type of robot that has the ability to understand and express emotions with emotive speech, believable facial expressions and body language, tapping into human psychology and neurology to create deeper bonds. To do this, we brought together a cross-functional team of passionate leaders in engineering, technology, entertainment, game design, and child development. For the past four years, Embodied has been working tirelessly to bring all of the latest technology together to bring Moxie to life, and the team is excited to finally deliver it to families in need of a co-pilot for supporting healthy child development.
What are some of the unique entrepreneurial challenges behind a robotics startup?
It’s fun to do the impossible, but it can also be a little scary. We knew that if we wanted to revolutionize how humans interact with machines, we were going to have to solve problems that hadn’t been solved before. Some problems included:
- Most devices use flat screens, but we wanted to bring a device to life. So how do we create a face that is life-like and rounded, not two-dimensional?
- Current conversation engines only allow for very limited conversation, so how do we create a solution that allows for more natural conversation?
- We don’t want the voice to sound robotic, so how do we make the voice sound natural, with contextually-appropriate tonality and inflections?
- We knew eye contact was very important, so we had to figure out how to use computer vision to ensure reliable eye tracking capabilities.
All of these questions about Moxie’s features led to many state-of-the-art technological innovations.
First, the projected, rounded face. The statistics are piling up to show that too much screen time can have devastating effects on developing minds. Even worse, most kids’ tech devices feature flat digital displays. That’s why we decided to put in the extra investment to make Moxie’s face fully projected, which allowed us to create a face that is rounded with naturally-curved edges instead of a flat display. This makes interacting with Moxie feel more life-like, realistic, and believable. In fact, only through this 3D appearance of the face is it possible for Moxie to make actual eye contact with the child. So not only does Moxie’s face protect children from excessive screen time, but it also makes the interaction experience feel all the more real.
Second, the conversation engine. Thus far, smart speakers and voice assistants have required the repetitive use of wake words to initiate commands. Moxie’s conversational engine is different. It follows a natural conversation and responds to typical flow of communication without the use of wake words (like “Hey Siri” or “Ok Google”). Advanced natural language processing allows Moxie to recognize, understand, and generate language seamlessly, making the interaction feel more personal and natural.
Third, speech synthesis. Moxie’s voice doesn’t have the same robotic speech and monotone sound found in most robots and voice assistants. Instead, Moxie uses natural and emotive vocal inflections, which help communicate a broader range of emotions. This enhances the scope of social-emotional lessons Moxie can engage in, while also bringing an added life-likeness and believability to the interaction.
Fourth, the eyes. One of the most important features is Moxie’s large, animated eyes. Innovative eye tracking technology allows Moxie to keep eye-contact with the child even as the child moves about the room. This eye tracking capability not only creates an incredibly life-like interaction, but it also helps the child practice eye contact. Additionally, the large, animated eyes help exaggerate emotional communication, so the child can more easily recognize certain emotions. Practicing eye contact and understanding emotions are two key developmental goals in social-emotional curriculum.
Lastly, all of these technological features allow interactions with Moxie to feel realistic and natural. Moxie’s multimodal sensory fusion makes Moxie aware of the environment and its users. Moxie’s computer vision and eye tracking technology helps maintain eye contact as the child moves. Machine learning helps Moxie to learn user preferences and needs, and recognize people, places, and things. Specially located mics enable Moxie to hear the direction a voice came from and easily turn to the source. Touch sensors allow Moxie to recognize hugs and handshakes. All of these pieces come together to make the experience very realistic.
Could you tell us some of the things that makes Moxie perfect for children?
With Moxie, children can engage in meaningful play, every day, with content informed by the best practices in child development and early childhood education. Every week is a different theme such as kindness, friendship, empathy or respect, and children are tasked to help Moxie with missions that explore human experiences, ideas, and life skills. These missions are activities that include creative unstructured play like drawing, mindfulness practice through breathing exercises and meditation, reading with Moxie, and exploring ways to be kind to others. Moxie encourages curiosity so children discover the world and people around them. All these activities help children learn and safely practice essential life skills such as turn taking, eye contact, active listening, emotion regulation, empathy, relationship management, and problem solving.
Embodied has also partnered with Encyclopaedia Britannica and Merriam-Webster to integrate Merriam-Webster’s Dictionary for Children, enabling Moxie to provide age-appropriate definitions and related information to help children learn and understand the meanings of new words and concepts. This is the first of many integrations with Moxie that deliver on Britannica and Merriam-Webster’s shared mission to inspire curiosity and the joy of learning.
Embodied has also developed a full ecosystem that assists parents in supporting their child’s journey with Moxie and allows children to expand their use of Moxie in a safe and parent-approved way:
- The Embodied Moxie Parent App provides a dashboard to help parents understand their child’s development progress with Moxie. The app will provide key insights to a child’s social, emotional, and cognitive development through their activities with Moxie. The app further provides valuable suggestions and tips to parents to enhance their child’s experience and progress with Moxie.
- An online child portal site (referred to as the Global Robotics Laboratory, or G.R.L.) provides additional activities, games and stories that will enhance the experience with Moxie.
- Monthly Moxie Mission Packs are mailings meant to engage children in new activities with Moxie and also provide fun items like trading cards and stickers.
Over time, Moxie learns more about the child to better personalize its content to help with each child’s individual developmental goals. Embodied has taken careful steps to ensure that information provided by children and families is handled with high standards of privacy and security. We intend that Moxie will be fully COPPA (Children’s Online Privacy Protection Act) Safe Harbor certified so parents can feel safe knowing that Moxie employs leading data integrity and security procedures and that its systems are regularly audited to ensure full compliance. Further, personally identifiable data and sensitive information is encrypted with the highest level of security and can only be decrypted by a unique key that only the parent has access to.
What are some of the natural language processing challenges that are faced by Moxie?
At Embodied, we strive to redefine how humans interact with machines, especially in conversation through natural language processing. So, we decided to create SocialX™, a platform that enables children to engage with Moxie through natural interaction (i.e., facial expressions, conversation, body language, etc.), evoking trust, empathy, and motivation, as well as deeper engagement, to promote developmental skills. With SocialX™, Embodied is introducing a whole new category of robots: animate companions. “Animate” means to bring to life, and SocialX™ allows Moxie to embody the very best of humanity in a new and advanced form of technology that can fuel new ways of learning.
Natural language processing is at the core of our natural conversation engine, and there are many unique features to the conversation engine that we worked tirelessly to create.
The key feature we worked on was Moxie’s ability to focus on conversation with a single user and separate out background conversations and sounds, so Moxie responds only to that user. This allows for a more focused and personable interaction. It is a solution to what many call the “cocktail party problem”: when you are at a cocktail party with many people talking all around you, staying in conversation with one person isn’t terribly difficult for a human, but for a computer it is incredibly difficult. How do we make sure that Moxie responds only to what the single user says and doesn’t get thrown off by background noise, conversations, the TV, etc.? We approach the solution to this problem in several ways.
- We use our vision system to identify who is looking at and facing Moxie.
- We have a number of microphones in the front of Moxie that tell us where that sound is coming from.
- We can then use machine learning to match the sound to who is speaking in front of Moxie. This allows us to filter out the other conversations and stay focused on a single user.
Generally, conversation agents on the market have avoided the “cocktail party problem” by using wake words, such as “Hey [device],” followed by a question. The agent listens for the wake word and responds only when it is said. However, since Moxie can focus on a single user, Moxie doesn’t need wake words to activate a response.
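The fusion steps described above can be sketched in a toy form. Everything below (the two-microphone geometry, the angle threshold, the data shapes) is an illustrative assumption, not Embodied's actual implementation:

```python
import math

def estimate_doa(delay_s, mic_spacing_m=0.1, speed_of_sound=343.0):
    """Estimate a sound's direction of arrival (degrees) from the time delay
    between two microphones, using the far-field approximation:
    delay * c = spacing * sin(theta)."""
    sin_theta = max(-1.0, min(1.0, delay_s * speed_of_sound / mic_spacing_m))
    return math.degrees(math.asin(sin_theta))

def select_speaker(faces, sound_angle_deg, max_gap_deg=15.0):
    """Match the sound direction to the visible face nearest that bearing.
    `faces` is a list of (label, bearing_deg, facing_robot) tuples; faces
    not turned toward the robot (e.g. a TV off to the side) are ignored."""
    candidates = [f for f in faces if f[2]]
    if not candidates:
        return None
    label, bearing, _ = min(candidates, key=lambda f: abs(f[1] - sound_angle_deg))
    return label if abs(bearing - sound_angle_deg) <= max_gap_deg else None

faces = [("child", -10.0, True), ("tv", 40.0, False), ("parent", 30.0, True)]
angle = estimate_doa(-5e-5)          # about -9.9 degrees for a 0.1 m mic pair
print(select_speaker(faces, angle))  # -> child
```

A production system would use beamforming across a full microphone array and learned audio-visual matching rather than a nearest-bearing rule, but the structure is the same: vision proposes candidates, audio localizes, and the two are fused to pick one speaker.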
We wanted to make sure that Moxie’s conversation engine is so sophisticated that it is contextually aware of conversational responses. This allows for more nuanced conversation. For example, Moxie can understand the different meanings behind “I don’t know” and “no”.
Is there anything else that you would like to share about Moxie or Embodied?
We have been working on this project for four years with a dedicated team that has worked tirelessly to make the amazing inventions that are required to bring Moxie to life. Now we are excited to finally bring Moxie to families to help their children with social emotional development. So, we are looking forward to the journey!
Thank you for the interview, I loved hearing how you were initially inspired by a short Pixar film, and how you’ve since pursued your life passion. Readers who wish to learn more or who want to order a Moxie should visit Embodied, Inc.
Huma Abidi, Senior Director of AI Software Products at Intel – Interview Series
Huma Abidi is a Senior Director of AI Software Products at Intel, responsible for strategy, roadmaps, requirements, machine learning and analytics software products. She leads a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions. Huma joined Intel as a software engineer and has since worked in a variety of engineering, validation and management roles in the area of compilers, binary translation, and AI and deep learning. She is passionate about women’s education, supporting several organizations around the world for this cause, and was a finalist for VentureBeat’s 2019 Women in AI award in the mentorship category.
What initially sparked your interest in AI?
I’ve always found it interesting to imagine what could happen if machines could speak, or see, or interact intelligently with humans. Because of some big technical breakthroughs in the last decade, including deep learning gaining popularity because of the availability of data, compute power, and algorithms, AI has now moved from science fiction to real world applications. Solutions we had imagined previously are now within reach. It is truly an exciting time!
In my previous job, I was leading a Binary Translation engineering team, focused on optimizing software for Intel hardware platforms. At Intel, we recognized that the developments in AI would lead to huge industry transformations, demanding tremendous growth in compute power from devices to Edge to cloud and we sharpened our focus to become a data-centric company.
Realizing the need for powerful software to make AI a reality, the first challenge I took on was to lead the team in creating AI software to run efficiently on Intel Xeon CPUs by optimizing deep learning frameworks like Caffe and TensorFlow. We were able to demonstrate more than 200-fold performance increases due to a combination of Intel hardware and software innovations.
We are working to make all of our customer workloads in various domains run faster and better on Intel technology.
What can we do as a society to attract women to AI?
It’s a priority for me and for Intel to get more women in STEM and computer science in general, because diverse groups will build better products for a diverse population. It’s especially important to get more women and underrepresented minorities in AI, because of potential biases lack of representation can cause when creating AI solutions.
In order to attract women, we need to do a better job explaining to girls and young women how AI is relevant in the world, and how they can be part of creating exciting and impactful solutions. We need to show them that AI spans so many different areas of life, and that they can use AI technology in their domain of interest, whether it’s art or robotics or data journalism or television. There are exciting applications of AI they can easily see making an impact, e.g., virtual assistants like Alexa, self-driving cars, social media, and how Netflix knows which movies they want to watch.
Another key part of attracting women is representation. Fortunately, there are many women leaders in AI who can serve as excellent role models, including Fei-Fei Li, who is leading human-centered AI at Stanford, and Meredith Whittaker, who is working on social implications through the AI Now Institute at NYU.
We need to work together to adopt inclusive business practices and expand access of technology skills to women and underrepresented minorities. At Intel, our 2030 goal is to increase women in technical roles to 40% and we can only achieve that by working with other companies, institutes, and communities.
How can women best break into the industry?
There are a few options if you want to break into AI specifically. There are numerous online courses in AI, including Udacity’s free Intel Edge AI Fundamentals course. Or you could go back to school (for example, at one of Maricopa County’s community colleges for an AI associate degree) and study for a career in AI, e.g., data scientist, data engineer, ML/DL developer, or software engineer.
If you already work at a tech company, there are likely already AI teams. You could check out the option to spend part of your time on an AI team that you’re interested in.
You can also work on AI if you don’t work at a tech company. AI is extremely interdisciplinary, so you can apply AI to almost any domain you’re involved in. As AI frameworks and tools evolve and become more user-friendly, it becomes easier to use AI in different settings. Joining online events like Kaggle competitions is a great way to work on real-world machine learning problems that involve data sets you find interesting.
The tech industry also needs to put in time, effort, and money to reach out to and support women, including women who are also underrepresented ethnic minorities. On a personal note, I’m involved in organizations like Girls Who Code and Girl Geek X, which connect and inspire young women.
With Deep learning and reinforcement learning recently gaining the most traction, what other forms of machine learning should women pay attention to?
AI and machine learning are still evolving, and exciting new research papers are being published regularly. Some areas to focus on right now include:
- Classical ML techniques that continue to be important and are widely used.
- Responsible/Explainable AI, which has become a critical part of the AI lifecycle, especially for the deployability of deep learning and reinforcement learning models.
- Graph Neural Networks and multi-modal learning, which derive insights by learning from the rich relational information in graph data.
AI bias is a huge societal issue when it comes to bias towards women and minorities. What are some ways of solving these issues?
When it comes to AI, biases in training samples, human labelers and teams can be compounded to discriminate against diverse individuals, with serious consequences.
It is critical that diversity is prioritized at every step of the process. If women and other minorities from the community are part of the teams developing these tools, they will be more aware of what can go wrong.
It is also important to make sure to include leaders across multiple disciplines such as social scientists, doctors, philosophers and human rights experts to help define what is ethical and what is not.
Can you explain the AI blackbox problem, and why AI explainability is important?
In AI, models are trained on massive amounts of data before they make decisions. In most AI systems, we don’t know how these decisions were made — the decision-making process is a black box, even to its creators. And it may not be possible to really understand how a trained AI program is arriving at its specific decision. A problem arises when we suspect that the system isn’t working. If we suspect the system of algorithmic biases, it’s difficult to check and correct for them if the system is unable to explain its decision making.
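One common way to peek inside such a black box is a model-agnostic probe: perturb each input slightly and watch how the output moves. Here is a minimal sketch of that idea, with an entirely hypothetical scoring function standing in for the opaque model:

```python
def black_box_score(applicant):
    """Stand-in for an opaque model: we can query it but not inspect it.
    (This hypothetical loan-scoring rule is for illustration only.)"""
    income, debt, years_employed = applicant
    return 0.5 * income - 0.8 * debt + 0.2 * years_employed

def sensitivity(model, x, eps=1.0):
    """Perturb one input at a time and measure the change in output --
    a crude probe of which features drive a particular decision."""
    base = model(x)
    effects = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        effects.append(model(tuple(xp)) - base)
    return effects

applicant = (50.0, 20.0, 3.0)
print(sensitivity(black_box_score, applicant))  # per-feature output deltas
```

Real XAI methods (e.g., permutation importance or Shapley-value approximations) are more principled versions of this perturb-and-observe pattern, designed to produce explanations that hold up under correlated features and nonlinear models.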
There is currently a major research focus on eXplainable AI (XAI) that intends to equip AI models with transparency, explainability and accountability, which will hopefully lead to Responsible AI.
In your keynote address during MITEF Arab Startup Competition final award ceremony and conference you discussed Intel’s AI for Social Good initiatives. Which of these Social Good projects has caught your attention and why is it so important?
I continue to be very excited about all of Intel’s AI for Social Good initiatives, because breakthroughs in AI can lead to transformative changes in the way we tackle problem solving.
One that I especially care about is the Wheelie, an AI-powered wheelchair built in partnership with HOOBOX Robotics. The Wheelie allows extreme paraplegics to regain mobility by using facial expressions to drive. Another amazing initiative is TrailGuard AI, which uses Intel AI technology to fight illegal poaching and protect animals from extinction and species loss.
As part of Intel’s Pandemic Response Initiative, we have many on-going projects with our partners using AI. One key initiative is contactless fever detection or COVID-19 detection via chest radiography with Darwin AI. We’re also working on bots that can answer queries to increase awareness using natural language processing in regional languages.
For women who are interested in getting involved, are there books, websites, or other resources that you would recommend?
There are many great resources online, for all experience levels and areas of interest. Coursera and Udacity offer excellent online courses on machine learning and deep learning, most of which can be audited for free. MIT’s OpenCourseWare is another great, free way to learn from some of the world’s best professors.
Companies such as Intel have AI portals that contain a lot of information about AI, including offered solutions. There are many great books on AI: foundational computer science texts like Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, and modern, philosophical books like Homo Deus by historian Yuval Noah Harari. I’d also recommend Lex Fridman’s AI podcast, which features great conversations with experts from a wide range of fields and perspectives.
Do you have any last words for women who are curious about AI but are not yet ready to leap in?
AI is the future, and will change our society — in fact, it already has. It’s essential that we have honest, ethical people working on it. Whether in a technical role, or at a broader social level, now is a perfect time to get involved!
Thank you for the interview, you are certainly an inspiration for women the world over. Readers who wish to learn more about the software solutions at Intel should visit AI Software Products at Intel.
AI Education Startup Riiid Seeks Worldwide Expansion After New Funding Round
The South Korea-based AI education startup Riiid has announced that it raised $41.8 million in a pre-Series D funding round. The new investment, which includes the state-run Korea Development Bank (KDB), NVESTOR, Intervest, and existing investor IMM Investments, brings the company’s total funding to $70.2 million.
According to the company, the funding is another indicator of its success, with over 200 percent annual sales growth and more than a million users since 2017.
Mobile Test Prep
One of Riiid’s biggest contributions to the field of education is a mobile test prep application called Santa. The application focuses on the Test of English for International Communication (TOEIC), and it has been used by more than one million students in Korea and Japan.
The company’s proprietary AI technology has helped launch it to No. 1 in sales among education apps in both Korea and Japan. The AI is able to analyze student data and content, predict user behavior and scores, and, in what may be its most impressive feature, recommend personalized study plans in real time. Personalized lessons are regarded by many as one of the most effective approaches to education.
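Riiid has not published the internals of its recommendation engine, but the general shape of adaptive item selection can be sketched as: update a per-skill mastery estimate from each response, then serve the item targeting the weakest skill. All names and data structures below are hypothetical illustrations, not Riiid's system:

```python
def update_mastery(mastery, skill, correct, lr=0.3):
    """Nudge a 0..1 mastery estimate toward the latest response -- a crude
    stand-in for the score-prediction models adaptive tutors use."""
    target = 1.0 if correct else 0.0
    updated = dict(mastery)
    updated[skill] = mastery[skill] + lr * (target - mastery[skill])
    return updated

def next_item(mastery, items):
    """Recommend the practice item covering the learner's weakest skill."""
    weakest = min(mastery, key=mastery.get)
    return next(item for item in items if item["skill"] == weakest)

mastery = {"listening": 0.7, "grammar": 0.4, "vocabulary": 0.6}
items = [{"id": 1, "skill": "grammar"}, {"id": 2, "skill": "listening"}]
mastery = update_mastery(mastery, "grammar", correct=False)
print(next_item(mastery, items))  # the grammar item, since grammar is weakest
```

Production systems replace the simple exponential update with deep knowledge-tracing models trained on millions of response logs, but the feedback loop (estimate, recommend, observe, re-estimate) is the same.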
With the company’s success in the Santa application, it will now look to provide back-end solutions all across the globe for companies, school districts, and education ministries.
Y J Jang is Riiid’s CEO.
“Riiid successfully completed domestic funding amid a slower investment environment due to the unprecedented COVID-19 pandemic and has made significant progress in negotiating with overseas financial investors to accelerate global expansion,” said Jang. “Riiid is already in the process of forming various global partnerships based on its verified AI technology in both academic and commercial markets, and will soon unveil new products and services. We are committed to creating a future for education beyond our imagination through in-depth R&D and commercialization of technology.”
The company will use the secured funding to improve the company’s deep learning technology even further. One of its goals is to provide solutions that help students achieve learning objectives throughout the entire education process, not just for specific tests or tasks. This would be done through constant evaluation and feedback.
The company will also look to continue its expansion outside of South Korea, moving into the United States, South America, the Middle East, and other areas of the world. The company has recently opened up Riiid Labs in Silicon Valley, which acts as the global headquarters of the company.
“Riiid is establishing a global standard while defining valid technologies and leading research in the field of AI EdTech,” said Intervest Director, Jay Jeon. “At a time when the need for effective remote learning solutions is expanding not only in the education market but also in various industries, the investment was made highly valuing the marketability of Riiid’s proven business model in Santa, excellent talent pool, and various global partnerships that are underway based on a scalable technology structure.”
Riiid also contributes to AI research and publishes papers at top AI conferences such as Neural Information Processing Systems (NeurIPS), the International Conference on Computer Supported Education (CSEDU), and others.
The company also launched EdNet in early 2020, the world’s largest open database for AI education.
Researchers Develop Tool Able to Turn Equations Into Illustrations
Researchers at Carnegie Mellon University have created a tool that is able to turn the abstractions of mathematics into illustrations and diagrams through software.
The process works by users typing ordinary mathematical expressions which are then turned into illustrations by the software. One of the major developments in this project is that the expressions are not required to be basic functions, as in the case of a graphing calculator. Instead, they can be complex relationships coming from various different fields within mathematics.
The researchers named the tool Penrose, after the mathematician and physicist Roger Penrose, who is known for conveying complex mathematical and scientific ideas through diagrams and drawings.
Penrose will be presented by researchers at the SIGGRAPH 2020 Conference on Computer Graphics and Interactive Techniques. The conference will take place virtually this year due to the COVID-19 pandemic.
Keenan Crane is an assistant professor of computer science and robotics.
“Some mathematicians have a talent for drawing beautiful diagrams by hand, but they vanish as soon as the chalkboard is erased,” Crane said. “We want to make this expressive power available to anyone.”
Diagrams are underused in technical communication because of the amount of highly skilled, tedious work required to produce them. To get around this, the Penrose tool allows experts to encode the diagramming steps in the system, and other users can then access them using mathematical language. All of this means the computer does most of the work.
Katherine Ye is a Ph.D student in the Computer Science Department.
“We started off by asking: ‘How do people translate mathematical ideas into pictures in their head?'” said Ye. “The secret sauce of our system is to empower people to easily ‘explain’ this translation process to the computer, so the computer can do all the hard work of actually making the picture.”
The computer first learns how the user wants the mathematical objects visualized, such as an arrow or a dot, and it then draws up multiple diagrams. The user selects and edits one of those diagrams.
According to Crane, mathematicians should have no problem learning the special programming language that the team developed.
“Mathematicians can get very picky about notation,” he said. “We let them define whatever notation they want, so they can express themselves naturally.”
Penrose is seen as a step towards something even bigger.
“Our vision is to be able to dust off an old math textbook from the library, drop it into the computer and get a beautifully illustrated book — that way more people understand,” Crane said.
The team that developed Penrose also included Nimo Ni and Jenna Wise, who are Ph.D. students in CMU’s Institute for Software Research (ISR); Jonathan Aldrich, professor in ISR; Joshua Sunshine, ISR senior research fellow; Max Krieger, cognitive science undergraduate; and Dor Ma’ayan, former master’s student at the Technion-Israel Institute of Technology.
The research was supported by the National Science Foundation, Defense Advanced Research Projects Agency, the Sloan Foundation, Microsoft Research, and the Packard Foundation.