
Surveillance

Anduril Industries Scores Defense Contract for a Surveillance System


Anduril Industries, the surveillance startup founded by Oculus Rift inventor Palmer Luckey, received a U.S. Marine Corps contract this month. The two-year-old defense technology company is also a contractor on Project Maven, the secretive Pentagon program that aimed to bring private-sector artificial intelligence to military applications.

Marine Corps Installations Command (MCICOM) announced on July 15th that Anduril Industries had been awarded a $13.5 million sole-source contract. Additional details have been made public through documents obtained under the Freedom of Information Act and published by the organization Mijente.

The contract is for an Autonomous Surveillance Counter Intrusion Capability (ASCIC) that will use artificial intelligence to help secure installations against intrusions, operating without human intervention. The system is set to be deployed at four Marine Corps bases: two in Japan, one in Hawaii, and one in Yuma, Arizona, near the U.S. border with Mexico.

The ASCIC system is built on Lattice, Anduril’s existing perimeter-monitoring platform, which combines sensor towers, drones, and machine learning to automatically identify movement and intruders.

Palmer Luckey spoke about the project as far back as November 2018, at a summit in Lisbon, Portugal.

“What we’re working on is taking data from lots of different sensors, putting it into an AI-powered sensor fusion platform so that you can build a perfect 3D model of everything that’s going on in a large area. Then we take that data and run predictive analytics on it, and tag everything with metadata, find what’s relevant, then push it to people who are out in the field.”
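As a rough illustration of the pipeline Luckey describes, the hedged sketch below shows one way sensor detections might be fused into tracks, tagged with metadata, and filtered down to alerts worth pushing to people in the field. All class names, fields, and thresholds are hypothetical and are not drawn from Anduril’s actual software.

```python
from dataclasses import dataclass

# Hypothetical sketch of a sensor-fusion-and-alerting flow: raw detections from
# several sensors are merged into tracks, tagged with metadata, filtered for
# relevance, and pushed to operators. Names and thresholds are illustrative only.

@dataclass
class Detection:
    sensor_id: str      # e.g. "tower-3" or "drone-7"
    position: tuple     # (x, y, z) in a shared site coordinate frame
    kind: str           # "person", "vehicle", "drone", "unknown"
    confidence: float   # classifier confidence in [0, 1]

def fuse(detections, distance_threshold=5.0):
    """Group detections from different sensors that appear to be the same object."""
    tracks = []
    for det in detections:
        for track in tracks:
            ref = track[0]
            dist = sum((a - b) ** 2 for a, b in zip(det.position, ref.position)) ** 0.5
            if dist < distance_threshold:
                track.append(det)
                break
        else:
            tracks.append([det])
    return tracks

def relevant_alerts(tracks, min_confidence=0.8):
    """Tag fused tracks with metadata and keep only those worth pushing to the field."""
    alerts = []
    for track in tracks:
        best = max(track, key=lambda d: d.confidence)
        if best.kind != "unknown" and best.confidence >= min_confidence:
            alerts.append({"kind": best.kind,
                           "position": best.position,
                           "sensors": sorted({d.sensor_id for d in track})})
    return alerts
```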

According to Anduril, the system can “detect, classify, and track any person, drone or other threat in a restricted area,” and it can “help identify terrorist threats faster and allow troops to instantly spot potential threats with confidence.” 

Anduril combines the virtual reality systems from Oculus, another of Palmer Luckey’s projects, with advanced sensors from the Pentagon. Together they form a simple, intelligent mobile platform that can monitor whatever an installation needs.

In March, MCICOM was looking for a system that provided “24/7/365 autonomous situational awareness and actionable, real-time intelligence of surrounding air, land, and sea, through all-weather conditions.”

“The system shall autonomously detect, identify, classify, and track humans on foot, wheeled and tracked vehicles on land, surface vessels and boats,” according to the original contract. “It must be a scalable federated network of sensors (EO/IR/RADAR) with capacity to expand into acoustic, seismic, and other sensors that operate across the electromagnetic spectrum.” 

Anduril was able to deliver all of this in a single system. MCICOM has said that Anduril is the only company on the market able to provide such a capability, which is why the contract was awarded so quickly; it is unusual for a defense contract to be awarded with so little competition from other defense firms or private companies.

Despite the controversy that surrounds the use of AI in the military, it is becoming increasingly prominent in defense technology. Google, for instance, stopped helping the U.S. military use artificial intelligence to analyze drone footage as part of the Pentagon’s Project Maven, following concerns from within the program and controversy in the media. Even so, competition among private companies looking to win defense contracts is only going to increase.

With artificial intelligence developing rapidly across all areas of society, it was only a matter of time before the U.S. government began applying it to defense. As in almost every other sector, AI can greatly increase the effectiveness of many aspects of U.S. military defense.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Interviews

Marc Sloan, Co-Founder & CEO of Scout – Interview Series


Marc Sloan is the Co-Founder & CEO of Scout, the world’s first web browser chatbot, a digital assistant for getting anything done online. Scout suggests useful things it can do for you based on what you’re doing online.

What initially attracted you to AI?

My first experience working on AI was a gap year in the middle of my Bachelor’s degree, which I spent in the natural language processing research team at GCHQ. I got to see first-hand the impact machine learning could have on real-world problems and the difference it makes.

It flipped a switch in my mind about how computers can be used to solve problems: software engineering teaches you to write programs that take data and produce results, but machine learning lets you take data and a description of the results you want, and produce a program. That means you can use the same framework to solve thousands of different problems, which to me felt far more impactful than writing a separate program for each one.

I was already studying optimisation problems in mathematics alongside computer science, so once I got back to university I focused on AI and completed my dissertation on speech processing before applying for a PhD in Information Retrieval at UCL.

 

You researched reinforcement learning in web search under the supervision of David Silver, the lead researcher behind AlphaGo. Could you discuss some of this research?

My PhD was on the topic of applying reinforcement learning to learning to rank problems in information retrieval, a field I helped create called Dynamic Information Retrieval. I was supervised by Prof Jun Wang and Prof David Silver, both experts in agent-based reinforcement learning.

Our research looked at how search engines could learn from user behaviour to improve search results autonomously over time. Using a Multi-Armed Bandit approach, our system would attempt different search rankings and collect click behaviour to determine if they were effective or not. It could also adapt to individual users over time and was particularly effective in handling ambiguous search queries. At the time, David was focusing deeply on the Go problem and he helped me determine the appropriate reinforcement learning setup of states and value function for this particular problem.
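As a rough, hedged illustration of that click-feedback loop, the sketch below uses a simple epsilon-greedy bandit to pick among candidate rankings and update from click data. It omits the states and value function of the actual research; the ranking IDs, reward signal, and class name are assumptions for illustration only.

```python
import random

# Minimal epsilon-greedy bandit over candidate result rankings, illustrating the
# idea of trying rankings and learning from clicks. Not the Dynamic IR formulation
# used in the actual research.

class RankingBandit:
    def __init__(self, ranking_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = {r: 0 for r in ranking_ids}       # observed clicks per ranking
        self.impressions = {r: 0 for r in ranking_ids}  # times each ranking was shown

    def choose(self):
        """Explore a random ranking with probability epsilon, else exploit the best one."""
        if random.random() < self.epsilon:
            return random.choice(list(self.clicks))
        return max(self.clicks,
                   key=lambda r: self.clicks[r] / max(self.impressions[r], 1))

    def update(self, ranking_id, clicked):
        """Record whether the shown ranking attracted a click."""
        self.impressions[ranking_id] += 1
        if clicked:
            self.clicks[ranking_id] += 1

# Usage: for each ambiguous query, show bandit.choose(), then call
# bandit.update(shown_ranking, user_clicked) once the click data arrives.
```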

 

What are some of the entrepreneur lessons that you learned from working with David Silver?

Research at UCL is often entrepreneurial. David had previously founded Elixir Studios with Demis Hassabis and then, of course, joined DeepMind to work on AlphaGo. Other members of our Media Futures research group also ended up spinning out a range of startups: Jun founded Mediagamma (applying RL to online ad spend), Simon Chan started prediction.io (acquired by Salesforce), and Jagadeesh Gorla started Jaggu (a recommendation service for e-commerce). Our team often discussed the commercial impact our research could have, perhaps because UCL’s base in London makes it a natural starting point for creating a business.

 

You recently launched Scout, the world’s first web browser chatbot. What was the inspiration behind launching Scout?

The idea naturally evolved from my PhD research. I went straight from finishing my PhD to joining Entrepreneur First where I started to think about how I could turn my research into a product.

Before I started this, I completed an internship at Microsoft Research where I applied my research to Bing. At the time, the main thing I learned from my research was that information finding could be predicted based on online user behaviour. But I became frustrated that the only real way to surface these predictions in a search engine was by making auto-suggest better. So I started to think about how the user’s entire online experience could be improved using these predictions, not just the search experience.

It was this thinking that led me and my new co-founder on Entrepreneur First to create a browser add-on that observes user behaviour, predicts what information the user is likely to need next online, and fetches it for them. After a few years of experiments and prototypes, this evolved into a chatbot interface where the browser ‘chats’ to you about what you’re up to online and tries to help you along the way.

 

Which web browsers will Scout be compatible with?

We’re focusing on Chrome at the moment due to it being the most popular web browser and having a mature add-on architecture, but we have prototypes working on Firefox and Safari and even a mobile app.

 

The Scout shopping assistant functionality sounds like it could save users both time and money. Assuming someone is researching a product on Amazon, what happens in the backend, and how does Scout interact with the user?

The idea is that once you have Scout installed, you just continue using the web as normal. If you’re shopping, you may visit Amazon to look at products. At this point, Scout recognises that you’re shopping on Amazon, and the product you’re looking at, and it will say “Hello”. It pops up as a chat widget on the webpage, kind of like how Intercom works, except Scout can appear on potentially any webpage. You can see what it looks like on my website.

Because you’re shopping, it’ll start to suggest ways it can help. It’ll ask you if you want to see reviews online, other prices, YouTube videos of the product and more. You interact by pressing buttons and the chatbot tailors the experience to what you want it to do. Whenever it finds information (like a YouTube video), it will embed it within the chat thread, just like how a friend might share media with you on WhatsApp. Over time, you end up having a dialogue with the browser about what you are doing online, with the browser helping you along the way.

The webpage processing happens within the browser itself. The only information our backend sees is the chat thread, meaning that the privacy implications are minimal.

We have a bespoke architecture for understanding online browsing behaviour and managing dialogues with the user. We use machine learning to identify what tasks we can help with online and how we should help. Originally, we used reinforcement learning to adapt to user preferences over time. However, one of the biggest lessons I’ve learned from running an AI startup is to keep processes simple and to try to only use machine learning to optimise an existing process. So instead, we now have a sophisticated rules engine for handling tasks over time that can be managed by reinforcement learning once we need to scale.
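As a hedged sketch of that “rules first, machine learning later” approach, the example below shows a minimal rules engine that maps observed browsing context to suggested assistant actions; a learned policy could later reorder or weight the matching rules. The rule conditions and action names are hypothetical and not Scout’s actual internals.

```python
# Illustrative, simplified rules engine: each rule maps an observed browsing
# context to suggested assistant actions. Conditions and action names are
# hypothetical, not Scout's internal API.

RULES = [
    {"when": lambda ctx: ctx.get("site_type") == "shopping",
     "suggest": ["show_reviews", "compare_prices", "find_product_videos"]},
    {"when": lambda ctx: ctx.get("site_type") == "events",
     "suggest": ["find_directions", "estimate_ride_price", "add_to_calendar"]},
    {"when": lambda ctx: "video" in ctx.get("media_types", []),
     "suggest": ["embed_video_in_chat"]},
]

def suggest_actions(context):
    """Return the suggestions of every rule whose condition matches the page context."""
    suggestions = []
    for rule in RULES:
        if rule["when"](context):
            suggestions.extend(rule["suggest"])
    return suggestions

# e.g. suggest_actions({"site_type": "shopping", "media_types": ["video"]})
# -> ["show_reviews", "compare_prices", "find_product_videos", "embed_video_in_chat"]
```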

 

What are some examples of how Scout can assist with event planning?

We realised that event planning (and travel booking) are not so different from shopping online. You’re still looking at products, reading reviews and committing to purchase/attend. So a lot of what we’ve built for shopping also applies here.

The biggest difference is that time and location are now important. So for instance, if you’re looking at concert tickets on Ticketmaster, Scout can identify the address of the venue and suggest finding you directions from your current location to it, or find the price of an Uber, or suggest what time you should leave. If you’ve connected Scout into your calendar, then Scout can check to see if you’re available at the time of the event and add it to your calendar for you.

In the future, we foresee Scout users being able to communicate to their friends through the platform to discuss the things they’re doing online such as event planning, shopping, work etc.

 

Dialogue triggers will be used for Scout to initiate communications. What are some of these triggers?

By default, Scout won’t disturb you unless it encounters a trigger that tells it you may need help. There are several types of trigger:

  • Visiting a specific website.
  • Visiting a type of website (such as news, shopping etc.).
  • Visiting a website containing a certain type of information (i.e. an address, a video etc.).
  • Clicking links or buttons on webpages.
  • Interacting with Scout by pressing buttons.
  • Scout retrieving certain types of media such as videos, music, tweets etc.

We plan to allow users to fine-tune which types of triggers they want Scout to respond to and, eventually, to learn their preferences automatically.
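A minimal sketch of how those trigger types might be represented and filtered against per-user preferences is shown below, assuming a simple enum and a preference map; the names are illustrative rather than Scout’s real API.

```python
from enum import Enum, auto

# Hypothetical representation of the trigger types listed above, plus a per-user
# preference map deciding which triggers may start a dialogue.

class Trigger(Enum):
    SPECIFIC_SITE = auto()
    SITE_CATEGORY = auto()      # news, shopping, etc.
    PAGE_CONTENT = auto()       # an address, a video, etc.
    PAGE_INTERACTION = auto()   # clicking links or buttons on webpages
    SCOUT_INTERACTION = auto()  # pressing Scout's own buttons
    MEDIA_RETRIEVED = auto()    # videos, music, tweets, etc.

DEFAULT_PREFERENCES = {t: True for t in Trigger}

def should_start_dialogue(detected_triggers, preferences=DEFAULT_PREFERENCES):
    """Start a conversation only if at least one detected trigger is enabled by the user."""
    return any(preferences.get(t, False) for t in detected_triggers)

# e.g. should_start_dialogue({Trigger.SITE_CATEGORY, Trigger.MEDIA_RETRIEVED})
```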

 

Can you discuss some of the difficulties behind ensuring that Scout is genuinely helpful when it decides to interact with a user without becoming annoying?

We take user engagement very seriously and try to measure whether interactions led to positive or negative outcomes. We try to maintain a good ratio for how often Scout tries to start a conversation and how often it’s used. However, it’s a tricky balance to get right and we’re always trying to improve.

Because of the intrusive nature of this product, getting the interface and UX right is critical. We’ve spent a lot of time trying completely different interfaces and user interaction methods. This work has led us to the current, chatbot style interface, which we find gives us the greatest flexibility in the help we can provide, coupled with user familiarity and minimal user effort for interactions.

 

Can you provide other scenarios of how Scout can assist end users?

Our focus at the moment is on market-testing specific applications for Scout. Shopping and event planning have already been mentioned, but we’re also looking at how Scout can help academics (finding research papers, author details, and reference networks) and even guitarists (finding guitar sheet music, playing music and videos alongside sheet music online, and helping to tune a guitar). We’ve also spent some time exploring professional scenarios such as online recruitment, financial analysis, and law.

Ultimately, Scout can potentially work on any website and help in any scenario, which is what makes the technology incredibly exciting, but also makes it difficult to get started.

 

Is there anything else that you would like to share about Scout?

If you’d like to see what it’s like if your browser could talk to you, you can read more on Scout’s blog.

Thank you for the fascinating take on designing a unique type of chatbot. We are excited to follow this project. You may visit the Scout website or Marc Sloan’s website to learn more.


Interviews

Marcio Macedo, Co-Founder of Ava Robotics – Interview Series


Marcio Macedo is Co-Founder and VP of Product and Marketing at Ava Robotics, a recent spin-off of iRobot that focuses on autonomously navigating robots for enterprise, commercial, and industrial environments.

Having previously worked at iRobot, what were some of the interesting projects that you worked on?

At iRobot we were fortunate to be designing and pioneering applications of telepresence, including an FDA-certified telemedicine robot for intensive care environments and the Ava telepresence product in partnership with Cisco.

 

Ava Robotics is a spinoff of iRobot. What was the inspiration behind launching a new company instead of keeping the work within the iRobot family?

With iRobot’s strategic focus shifting to home products, Ava Robotics spun off to operate independently and better address the needs of our nascent markets. As an independent company we gain more flexibility in meeting our customers’ needs while enjoying the support of technology developed originally at iRobot.

 

The Ava telepresence robot can be remotely controlled by users and also features autonomous navigation, so it can simply move itself to a designated area. Could you walk us through the machine learning that lets the robot navigate through an environment without bumping into new objects?

When an Ava is installed at a location it learns its operating environment and creates a realistic topology map of the site. This map can be further annotated to force specific behaviors, such as speed zones, keep-out zones, etc.

Ava has built-in obstacle detection and obstacle avoidance (ODOA) capabilities, which leverage multiple sensors in the robot body so that Ava will not bump into people or objects in its path. Furthermore, if the most direct path to its destination is blocked, the Ava will search for and navigate through an alternative path if one is available.
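As a toy illustration of that re-planning behaviour, the sketch below runs a breadth-first search over an occupancy grid and simply searches again when a cell becomes blocked. Ava’s real navigation stack is far more sophisticated; the grid representation and function name here are assumptions for illustration only.

```python
from collections import deque

# Toy grid-based planner: if the most direct path is blocked, a fresh search
# finds an alternative route around the obstacle, if one exists. Purely
# illustrative; not Ava's actual navigation code.

def find_path(grid, start, goal):
    """Breadth-first search over a grid where True marks a blocked or keep-out cell."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no alternative path is available

# When a new obstacle is detected, mark its cell as blocked and re-run find_path
# from the robot's current position to its destination.
```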

 

What navigation sensors are used? Does the system rely on LiDAR or on regular cameras?

Ava’s robotic navigation technologies use a variety of sensors (3D cameras, LiDAR, and IMU), which are combined for all actions, such as localization, planning, collision avoidance, cliff detection, etc. We operate in medium- and large-size spaces, so we think LiDAR is a very valuable part of a sensing package for real-world commercial spaces.

 

The telepresence robot looks like it would be extremely useful in the hospitality sector. Could you walk us through some of these potential use-cases?

Visits to Executive Briefing Centers provide access to senior-level executives and deliver value in the form of hands-on briefings, strategy reviews, product demonstrations and opportunities for relationship building. Customer Experience Centers offer organizations the opportunity to wow customers and show off their latest products and services. But with so many busy schedules, getting the right people to attend is not always easy.

For meeting planners, Ava provides the ability to “walk” the hotel and visit the meeting spaces, conference rooms and ballrooms that are available for their conference or event. In this application, the property’s sales and marketing team gain a unique tool to accelerate their sales cycles.

When invitees and guests can’t get to the conference or event, Ava allows them to attend and move around as if they were there. Whether it’s a business meeting, conference exhibit hall, or social event, Ava provides an immersive experience with freedom to move around.

 

What are some of the use-cases that are being targeted in the corporate sector?

Businesses benefit from Ava in many ways. The robot allows freedom of movement and access to meetings, corporate training, factory inspections, manufacturing sites, labs and customer experience settings.

Natural, face-to-face, ad-hoc conversations are critical to moving a business forward. Yet today’s globally distributed businesses have employees telecommuting from home or from across the world, who miss these vital interactions. With Ava, you unlock the ability to bring everyone back together as if they’re sitting together in the office and can walk up and interact naturally.

Use Case examples include:

  • Agile Product Development: Agile product development teams come together for scheduled and unscheduled meetings, looking to achieve high levels of collaboration and communication. When remote workers are part of the team, existing collaboration tools are challenged to meet the need. With Ava, remote team members can actively participate in stand-up meetings, sprint planning and demos, and project reviews as if they were co-located with the team.
  • Manufacturing: In manufacturing, remote visits by management, collaboration between experts at headquarters and staff at the plant, and remote tours by customers or suppliers are frequent – and necessary – events. Ava increases collaboration between those on the design team or in engineering and those building and delivering the final product on the plant floor. Also, imagine that the manufacturing plant is experiencing a production-line problem, but the person who knows how to fix it is thousands of miles away. In such a case, the technician needs to freely walk to different parts of the manufacturing floor to meet with someone or see something. Ava can help by delivering that critical physical presence right to the factory floor. Ava allows the remote person to immediately connect via the robot as if she were physically present, put eyes on the problem, and communicate with the local team on the floor. As a result, she can deliver immediate insight into the problem and quickly resolve the issue.
  • Laboratories and Clean Rooms: Those who work in laboratories and clean rooms work hard to ensure they are kept sterile and clean. While necessary, this can be a time-consuming process for employees entering and leaving these spaces repeatedly during the day. Due to the risks of potential contamination, companies often limit tours by customers and other visitors. Ava brings people right into a laboratory or a clean room without compromising the space. With Ava, remote visitors can easily move around as if they were there in person, observing the work being done and speaking with employees.

 

Ava Robotics recently partnered with Qatar Airways to introduce smart airport technologies at QITCOM 2019. Could you share some details about this event and how those in attendance reacted?

We have been fortunate to work with Hamad International Airport in Qatar and Qatar Airways, via our strategic partner Cisco, building applications for robots in airports across a variety of use cases. Showing our work at QITCOM 2019 was a good opportunity to expose the IT community to the applications that are now possible across different verticals and industries.

 

Is there anything else that you would like to share about Ava Robotics?

In these times of challenges to global travel, we have seen increased demand for solutions like telepresence robotics. Customers are just beginning to realize the potential of applications that truly empower remote workers to collaborate as if they were physically present at a distant location.

To learn more, visit Ava Robotics.


Education

Elnaz Sarraf, CEO and founder of Roybi – Interview Series


Can you walk us through your journey, from growing up in Iran, to becoming an entrepreneur?

My childhood and Iranian heritage definitely play an important role in who I am today. My parents paid a lot of attention to my education at home and in school. My dad was a small business owner and was the face of our company outside the home, while my mom took care of all the financial and operational aspects of our business at home, because as a woman in Iran, it would not have been acceptable for her to be involved directly in business negotiations. But the limitations imposed on women didn’t stop my parents from exposing me to every aspect of our business. My dad took me along to many of his meetings, and observing the art of negotiating and conducting business deals drew me to both the business and social aspects of entrepreneurship.

While at home, I watched my parents manage the company together, discuss the financial elements of holding our business together, and find innovative ways to grow. My summers were always filled with extracurricular classes in the arts, engineering, and science. I’m very grateful to my parents, who exposed me to a diverse set of social and academic skills at an early age. When I was starting ROYBI, I knew that I would have to handle a variety of different tasks myself until the company grew. Because of my background in the arts and engineering, I was able to multitask on projects such as industrial design, website design, coding, and presenting my ideas and vision for the company to investors and partners.

 

What was it that inspired you to design an AI-powered educational robot?

Our education system needs a fundamental change, and that change starts with early childhood education. It should no longer be a one-size-fits-all approach. Every child has his or her own unique set of skills, and our focus needs to be on their individual capabilities. We saw a huge gap in this area and decided to use technology, specifically artificial intelligence, to bring about change that can help children, parents, and teachers. We developed Roybi Robot to interact with children as young as three years old because early childhood is the most critical age in a child’s growth and future success. We’re constantly thinking about the benefits of robotics and AI in early childhood education.

 

ROYBI teaches children languages and STEM skills through play. What are some examples of games that children can play?

We use different methodologies to deliver our educational content. Some lessons are only based on conversations. By using our voice recognition technology, Roybi Robot can understand if the child is saying the correct word or not. If the answer is not correct, it encourages the child to repeat using playful and compassionate messages.

Lessons also alternate between conversational content and games played by interacting with the buttons on Roybi Robot’s charging plate. This creates more involvement and encourages children to move their hands, body, and gaze and stay engaged.
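A highly simplified sketch of that word check is shown below: compare the recognised word with the lesson’s target word and pick an encouraging response. The recognise_speech helper mentioned in the usage comment is a hypothetical stand-in, not part of Roybi’s actual software.

```python
# Simplified sketch of the pronunciation check described above. The real product
# uses child-specific speech recognition; recognise_speech() below is hypothetical.

def check_pronunciation(target_word, recognised_word):
    """Return a playful prompt depending on whether the child said the target word."""
    if recognised_word.strip().lower() == target_word.lower():
        return f"Great job! You said '{target_word}' perfectly!"
    return f"Nice try! Let's say '{target_word}' together one more time."

# e.g. check_pronunciation("apple", recognise_speech(audio_clip))
```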

 

Facial detection and emotion detection are the primary focus of ROYBI’s AI. Can you discuss some of the technologies behind this?

We use several technologies to deliver our content. One important AI component is voice recognition. Based on what the child says during the lessons, we can understand their progress and interest and create our reports for parents and educators. Facial detection is being used to initiate a conversation with a child to say “Hello.” And we use emotion detection as social-emotional support for the child while interacting with Roybi Robot, the educational robot.

 

ROYBI was recently featured on the cover of TIME magazine as one of the Best Inventions of 2019. How did it feel to see your product on the cover of one of the top magazines in the world?

We were shocked, excited, honored, and overwhelmed all at the same time. We knew we were onto something big that would change the world, but receiving such amazing recognition, and even being featured on the cover of the magazine, gave us so much encouragement to continue on our path even more strongly!

 

There have been some pilots with ROYBI in classrooms. Can you share some of the feedback that you’ve received from teachers?

Our content is created by teachers, and we’re hoping to pilot in schools in the next academic year. The teachers who work with us to create the lessons give us direct feedback on what is most needed to encourage children to engage with our content.

 

You’ve stated that you want to see every child in the world hold a ROYBI in their hands, do you believe that this could become a possibility if the classroom pilots are a success?

Absolutely! We are on our way to providing learning in both home and classroom settings, and we want to change the way our children learn. To do that, we will work to get Roybi Robot into the hands of as many children as possible, which, as you can imagine, is an ambitious mission. To make this happen, we invite future partners, delegates, governments, investors, mentors, and anyone who shares the same passion as us to give us a hand, so together we can change the world for our children!

 

ROYBI recently acquired kidsense.ai, what was the purpose behind this acquisition? Was it to simply offer more language options?

The recent acquisition happened as a strategic decision to make ROYBI’s technology even more accessible to all children around the world. With this acquisition, ROYBI becomes a leader in voice recognition AI that is specifically developed for children. As part of this proprietary technology, we can now accelerate language development efforts as well.

 

What would you tell women who feel that AI and tech are dominated by men and that it’s not an even playing field for them?

It is time to change this! Put your best effort into your work. You’ve got this!

 

Do you have any advice for female entrepreneurs who feel that it is more difficult for them to be taken seriously and to receive funding than their male counterparts?

The only limitation is in your own thoughts. There is no limit to what you can achieve, no matter how difficult a situation may seem. You will find support from many people around you who share a similar passion. I encourage women to engage and involve themselves more in technology and in how it affects, and will affect, our future generations.

To make that change happen, we first need to start with ourselves and continue it together!

 

Do you have anything else that you would like to share?

As part of growing ROYBI globally, we are continuously looking for partnerships with schools, government entities, and foundations to help us make Roybi Robot and education more accessible around the world and to every child regardless of their location or family income status. If you believe you can help us in our mission, reach out to us at partnership@roybirobot.com

Elnaz Sarraf is an inspiration to women and minorities, showing that they, too, can succeed. Please visit the Roybi website to learn more or to order a Roybi Robot for a young child.
