
Deniz Kalaslioglu, Co-Founder & CTO of Soar Robotics – Interview Series


Deniz Kalaslioglu is the Co-Founder & CTO of Soar Robotics, a cloud-connected Robotic Intelligence platform for drones.

You have over 7 years of experience in operating AI-backed autonomous drones. Could you share with us some of the highlights throughout your career?

Back in 2012, drones were mostly perceived as military tools. On the other hand, the improvements in mobile processors, sensors and battery technology had already started creating opportunities for consumer drones to become mainstream. A handful of companies were trying to make this happen, and it became obvious to me that if the correct research and development steps were taken, these toys could soon become irreplaceable tools that help many industries thrive.

I participated exclusively in R&D teams throughout my career, in automotive and RF design. I founded a drone service provider startup in 2013, where I had the chance to observe many of the shortcomings of human-operated drones, as well as their potential benefits for industries. I led two research efforts over a span of 1.5 years, in which we addressed the problems of autonomous outdoor and indoor flight.

Precision landing and autonomous charging were other issues that I tackled later on. Solving these issues meant fully-autonomous operation with minimal human intervention throughout the operation cycle. At the time, solving the problem of fully-autonomous operation was huge, and it enabled us to create intelligent systems that don't need a human operator to execute flights, which resulted in safer, more cost-effective and more efficient flights. The "AI" part came into play later, in 2015, when deep learning algorithms could be effectively used to solve problems that were previously solved through classical computer vision and/or learning methods. We leveraged robotics to enable fully-autonomous flights and deep learning to transform raw data into actionable intelligence.

 

What inspired you to launch Soar Robotics?

Drones lack sufficient autonomy and intelligence features to become the next revolutionary tools for humans. They become inefficient and primitive tools in the hands of a human operator, both in terms of flight and post-operation data handling. Besides, these robots have very little access to real-time and long-term robotic intelligence that they can consume to become smarter.

As a result of my experience in this field, I have come to the understanding that the current commercial robotics paradigm is inefficient, which is limiting the growth of many industries. I co-founded Soar Robotics to tackle some very difficult engineering challenges to make intelligent aerial operations a reality, which in turn will provide high-quality and cost-efficient solutions for many industries.

 

Soar Robotics provides a fully autonomous, cloud-connected robotic intelligence platform for drones. What are the types of applications that are best served by these drones?

Our cloud-connected robotics intelligence platform is designed as a modular system that can serve almost any application by utilizing the specific functionalities implemented within the cloud. Some industries such as security, solar energy, construction, and agriculture are currently in immediate need of this technology.

  • Surveillance of a perimeter for security,
  • Inspection and analysis of thermal and visible faults in solar energy,
  • Progress tracking and management in construction and agriculture

These are the main applications with the highest beneficial impact that we focus on.

 

For a farmer who wishes to use this technology, what are some of the use cases that will benefit them versus traditional human-operated drones?

As with all our applications, we also provide end-to-end service for precision agriculture. Currently, the drone workflow in almost any industry is as follows:

  • the operator carries the drone and its accessories to the field,
  • the operator creates a flight plan,
  • the operator turns on the drone and uploads the flight plan for the specific task at hand,
  • the drone arms, executes the planned mission, returns to its takeoff coordinates, and lands,
  • the operator turns off the drone,
  • the operator shares the data with the client (or the related department if hired in-house),
  • the data is processed accurately to become actionable insights for the specific industry.

It is crucial to point out that this workflow has proven to be very inefficient, especially in sectors such as solar energy, agriculture and construction, where collecting periodic and objective aerial data over vast areas is essential. A farmer who uses our technology is able to get measurable, actionable and accurate insights on:

  • plant health and vigor,
  • nitrogen intake of the soil,
  • optimization and effectiveness of irrigation methods,
  • early detection of disease and pests.

And they get these insights without having to go through any of the hassle described above, without even clicking a button each time. I firmly believe that enabling drones with autonomous features and cloud intelligence will provide considerable savings in terms of time, labor and money.
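Soar does not publish the internals of its agriculture analytics, but plant-health insight from aerial imagery is commonly derived from multispectral vegetation indices such as NDVI. The sketch below computes NDVI from near-infrared and red bands and flags stressed areas; the synthetic band arrays and the 0.4 stress threshold are assumptions for the example, not Soar's actual pipeline.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from near-infrared and red bands.

    Values near +1 suggest dense, healthy vegetation; values near 0 or below
    suggest bare soil, water or stressed plants.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    # Clip the denominator to avoid division by zero on dark pixels.
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Example with synthetic reflectance data standing in for real imagery:
# flag the fraction of the field whose NDVI falls below an illustrative
# stress threshold of 0.4.
nir_band = np.random.uniform(0.2, 0.8, (100, 100))
red_band = np.random.uniform(0.05, 0.4, (100, 100))
index = ndvi(nir_band, red_band)
print(f"stressed area: {np.mean(index < 0.4):.1%}")
```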

 

How will the drones be used for solar farm operators?

We handle almost everything that needs counting and measuring in all stages of a solar project. In the pre-construction and planning period, we generate topographic models, hydrologic analyses and obstacle analyses with high geographical precision and accuracy. During the construction period, we generate daily maps and videos of the site. After processing the collected media, we measure the installation progress of the piling structures, mounting racks and photovoltaic panels; take position, area and volume measurements of trenches and inverter foundations; and count the construction machinery, vehicles and personnel on the site.

When the construction is over and the solar site is fully operational, Soar's autonomous system continues its daily flights, but this time generating thermal maps and videos along with visible-spectrum maps and videos. From thermal data, Soar's algorithms detect cell, multi-cell, diode, string, combiner and inverter level defects. From visible-spectrum data, Soar's algorithms detect shattering, soiling, shadowing, vegetation and missing panels. As a result, Soar's software generates a detailed report of the detected faults and marks them on the as-built and RGB map of the site down to cell level, as well as listing all detected errors in a table indicating string, row and module numbers with geolocations. It also calculates the client's total loss due to the inefficiencies caused by these faults and prioritizes each one depending on its importance and urgency.
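Soar's fault-detection algorithms are proprietary, but the thermal side of the idea can be illustrated with a minimal sketch: tile an orthorectified thermal map into panel-sized cells and flag cells whose mean temperature sits well above the site median. The tile size, the 10 °C threshold and the synthetic data below are illustrative assumptions, not Soar's implementation.

```python
import numpy as np

def flag_hot_panels(thermal_map: np.ndarray, panel_px: int = 32,
                    delta_c: float = 10.0) -> list[tuple[int, int, float]]:
    """Flag panel-sized tiles whose mean temperature exceeds the site median
    by more than `delta_c` degrees Celsius (illustrative threshold).

    thermal_map: 2-D array of radiometric temperatures in Celsius.
    Returns (tile_row, tile_col, mean_temp) for each suspect tile.
    """
    h, w = thermal_map.shape
    rows, cols = h // panel_px, w // panel_px
    # Mean temperature of each tile on a regular panel-sized grid.
    tiles = thermal_map[:rows * panel_px, :cols * panel_px] \
        .reshape(rows, panel_px, cols, panel_px).mean(axis=(1, 3))
    site_median = np.median(tiles)
    return [(r, c, float(tiles[r, c]))
            for r in range(rows) for c in range(cols)
            if tiles[r, c] - site_median > delta_c]

# Synthetic 256x256 thermal map with one simulated defective panel.
thermal = np.full((256, 256), 35.0) + np.random.normal(0, 0.5, (256, 256))
thermal[64:96, 128:160] += 15.0   # hot spot roughly one panel in size
print(flag_hot_panels(thermal))   # expect the tile around row 2, col 4
```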

 

In July 2019, Soar Robotics joined NVIDIA's Inception Program, an exclusive program for AI startups. How has this experience influenced you personally, and how has it influenced the way Soar Robotics is managed?

Over the months, this has proven to be an extremely beneficial program for us. We had already been using NVIDIA products both for onboard computation and on the cloud side. The program has a lot of perks that have streamlined our research, development and test processes.

 

Soar Robotics will be generating recurring revenue with a Robotics-as-a-Service (RaaS) model. What is this model exactly, and how does it differ from SaaS?

It has many similarities with SaaS in terms of its application and its effect on our business model. The RaaS model is especially critical since hardware is involved; most of our clients don't want to own the hardware and are only interested in the results. Cloud software and the new generations of robotics hardware blend together more and more each day.

This results in some fundamental changes in industrial robotics, which used to be about stationary robots performing repetitive tasks that didn't need much intelligence. Operating under this mindset, we provide our clients with robot connectivity and cloud robotics services to augment what their hardware would normally be capable of achieving.

Therefore, Robotics-as-a-Service encapsulates all the hardware and software tools that we utilize to create domain-specific robots for our clients' purposes, in the form of drones, communications hardware and cloud intelligence.

 

What are your predictions for drone technology in the coming decade?

Drones have clearly proven their value for enterprises, and their usage will only continue to increase. We have witnessed many businesses trying to integrate drones into their workflows, with only a few of them achieving great ROI and most of them failing due to the inefficient nature of current commercial drone applications. Since the drone industry hype began to fade, we have seen rapid consolidation in the market, especially in the last couple of years. I believe that this was a necessary step for the industry, which opened the path to real productivity and better opportunities for products and services that are actually beneficial to enterprises. The addressable market that commercial drones will create by 2025 is expected to exceed $100B, which in my opinion is a fairly modest estimation.

 

  • We will see an exponential rise in “Beyond Visual Line of Sight” flights, which will be the enabling factor for many use cases of commercial UAVs.
  • The advancements in battery technology such as hydrogen fuel cells will extend the flight times by at least an order of magnitude, which will also be a driving factor for many novel use cases.
  • Drone-in-a-box systems are still perceived as somewhat experimental, but we will definitely see this technology become ubiquitous in the next decade.
  • There have been ongoing tests conducted by companies of various sizes in the urban air mobility market, which can be broken down into roughly three segments: last-mile delivery, aerial public transport and aerial personal transport. The commercialization of these segments will definitely happen in the coming decade.

 

Is there anything else that you would like to share about Soar Robotics?

We believe that the feasibility and commercialization of autonomous aerial operations mainly depend on solving the problem of aerial vehicle connectivity. For drones to be able to operate Beyond Visual Line of Sight (BVLOS) they need seamless coverage, real-time high throughput data transmission, command and control, identification, and regulation. Although there have been some successful attempts to leverage current mobile networks as a communications method, these networks have many shortcomings and are far from becoming the go-to solution for aerial vehicles.

We have been developing a connectivity hardware and software stack that has the capability of forming ad hoc drone networks. We expect that these networking capabilities will enable seamless, safe and intelligent operations for any type of autonomous aerial vehicle. We are rolling out the alpha and beta releases of the hardware in the coming months to test our products with larger user bases under various usage conditions and to start forming these ad hoc networks to serve many industries.

To learn more, visit Soar Robotics, or to invest in this company, visit the Crowdfunding Page on Republic.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.ai.


William Santana Li, CEO of Knightscope – Interview Series


Knightscope is a leader in developing autonomous security capabilities, with a vision to one day be able to predict and prevent crime, disrupting the $500 billion security industry. The technology is a profound combination of self-driving technology, robotics and artificial intelligence.

William Santana Li is the Chairman and CEO of Knightscope. He is a seasoned entrepreneur, intrapreneur and former corporate executive at Ford Motor Company, as well as the Founder and COO of GreenLeaf, which became the world's 2nd largest automotive recycler.

Knightscope was launched in 2013, which was very forward-thinking for the time. What was the inspiration behind launching this company?

A professional and a personal motivation. The professional answer: as a former automotive executive, I believe deeply that autonomous self-driving technology is going to turn the world upside down – but I'm just not in agreement on how to commercialize the technology. Over $80 billion has been invested in autonomous technology, with something like 200 companies working on it – for years. Yet no one has shipped anything commercially viable. I believe Knightscope is literally the only company in the world operating fully autonomously 24/7/365 across an entire country, without human intervention, generating real revenue, with real clients, in the real world. Our crawl, walk, run approach is likely more suitable for this extremely complicated and execution-intensive technology. My personal motivation: someone hit my town on 9/11 and I'm still furious – and I am dedicating the rest of my life to better securing our country. You can learn more about why we built Knightscope here.

 

Knightscope offers clients a Machine-as-a-Service (MaaS) subscription which aggregates data from the robots, analyzes it for anything out of the ordinary and serves that information to clients. What type of data is being collected?

Today we can read 1,200 license plates per minute, detect a person, run a thermal scan, and check for rogue mobile devices… it adds up to over 90 terabytes of data a year that no human could ever process. So our clients utilize our state-of-the-art browser-based user interface to interact with the machines. You can get a glimpse of it here – we call it the KSOC (Knightscope Security Operations Center). In the future, our desire is to have the machines be able to 'see, feel, hear and smell' and do 100 times more than a human could ever do – giving law enforcement and security professionals 'superpowers' – so they can do their jobs much more effectively.

 

K1 is a stationary machine which is ideal for entry and exit points. What are the capabilities that are offered with this machine?

Yes, the K1 operates primarily at ingress/egress points for humans and/or vehicles. All our machines have the same suite of technologies, but at this time the K1 does have facial recognition capabilities, which have proven to be quite useful in securing a location.


The K3 is an indoor autonomous robot, and the K5 is an outdoor autonomous robot, both capable of autonomous recharging and of having conversations with humans. What else can you tell us about these robots, and is there anything else that differentiates the two robots from each other?

The K3 is the smaller version, capable of handling much smaller and more dynamic indoor environments.


Obviously the K5 is weatherproof and can even go up ramps for vehicles – one of our clients is a 9-story parking structure – and the robot patrols autonomously on multiple levels on its own, which is a bit of a technical feat.


 

Your robots have been tested in multiple settings including shopping malls and parking lots. What are some other settings or use cases which are ideal for these robots?

Basically, anywhere outdoors or indoors you may see a security guard.  Commercial real estate, corporate campuses, retail, warehouses, manufacturing plants, healthcare, stadiums, airports, rail stations, parks, data centers – the list is massive.  Usually we do well when the client has a genuine crime problem and/or budget challenges.

 

Could you share with us some of the noteworthy clients which are currently using the robots in a commercial setting?

Ten of the Fortune 1000 major corporations are clients. Samsung, Westfield Malls, Sacramento Kings, City of Hayward, City of Huntington Park, Citizens Bank, XPO Logistics, Faurecia, Dignity Health and Houston Methodist Hospital are just a few that come to mind. We operate across 4 time zones in the U.S. only. You can check them out on our homepage at www.knightscope.com.

 

The K7 is a multi-terrain autonomous robot which is currently under development. The pictures of this robot look very impressive. What can you tell us about the future capabilities of the K7?

The K7 is technically challenging but is intended to handle much more difficult terrain and much larger environments – with gravel, dirt, sand, grass, etc.  It is the size of a small car.


 

Knightscope is currently fundraising on StartEngine. What are the investment terms for investors?

We are celebrating our 7th anniversary and have raised over $40 million since inception to build all this technology from scratch. We design, engineer, build, deploy and support it. Made in the USA – and we are backed by over 7,000 investors and 4 major corporations, and you can learn about our investor base here. We are now raising $50 million in growth capital to scale the Company up to profitability – we can accept accredited and unaccredited investors as well as domestic and international investors, from $1,000 to $10M, completely online. You can learn more about the terms and buy shares here: www.startengine.com/knightscope

 

Is there anything else that you would like to share about Knightscope?

As I write this response, we are in complete lockdown in Silicon Valley due to the global pandemic.  The crazy thing is that our clients are ‘essential services’ (law enforcement agencies, hospitals, security teams) so we must continue to operate 24/7/365.  You can read more about why I think you should consider investing in Knightscope here – but these days the important thing to remember is that robots are immune!

Thank you for sharing information about your amazing startup. Readers who wish to learn more may visit Knightscope or the StartEngine investment page.



Anthony Macciola, Chief Innovation Officer at ABBYY – Interview Series


Anthony is recognized as a thought leader and primary innovator of products, solutions, and technologies for the intelligent capture, RPA, BPM, BI and mobile markets.

ABBYY is an innovator and leader in artificial intelligence (AI) technology, including machine learning and natural language processing, that helps organizations better understand and derive context and outcomes from their data. The company sets a goal to grow and strengthen its leadership positions by satisfying the ever-increasing demand for AI-enabled products and solutions.

ABBYY has been developing semantic and AI technologies for many years. Thousands of organizations from over 200 countries and regions have chosen ABBYY solutions that transform documents into business value by capturing information in any format. These solutions help organizations of diverse industries boost revenue, improve processes, mitigate risk, and drive competitive advantage.

What got you initially interested in AI?

I first became interested in AI in the 90s. In my role, we were utilizing support vector machines, neural networks, and machine learning engines to create extraction and classification models. At the time, it wasn't called AI. However, we were leveraging AI to address problems surrounding data and document-driven processes, problems like effectively and accurately extracting, classifying and digitizing data from documents. From very early on in my career, I've known that AI can play a key role in transforming unstructured content into actionable information. Now, AI is no longer seen as a futuristic technology but an essential part of our daily lives – both within the enterprise and as consumers. It has become prolific. At ABBYY, we are leveraging AI to help solve some of today's most pressing challenges. AI and related technologies, including machine learning, natural language processing, neural networks and OCR, help power our solutions that enable businesses to obtain a better understanding of their processes and the content that fuels them.

 

You’re currently the Chief Innovation Officer at ABBYY. What are some of the responsibilities of this position? 

In my role as Chief Innovation Officer for ABBYY, I’m responsible for our overall vision, strategy, and direction relative to various AI initiatives that leverage machine learning, robotic process automation (RPA), natural language processing and text analytics to identify process and data insights that improve business outcomes.

As CIO, I’m responsible for overseeing the direction of our product innovations as well as identifying outside technologies that are a fit to integrate into our portfolio. I initiated the discussions that lead to acquisition of TimelinePI, now ABBYY Timeline, the only end-to-end Process Intelligence platform in the market. Our new offering enables ABBYY to provide an even more robust and dynamic solution for optimizing the processes a business runs on and the data within those processes. We provide enterprises across diverse industries with solutions to accelerate digital transformation initiatives and unlock new opportunities for providing value to their customers.

I also guide the strategic priorities for the Research & Development and Product Innovation teams. My vision for success with regard to our innovations is guided by the following tenets:

  • Simplification: make everything we do as easy as possible to deploy, consume and maintain.
  • Cloud: leverage the growing demand for our capabilities within a cloud-based SaaS model.
  • Artificial Intelligence: build on our legacy expertise in linguistics and machine learning to ensure we take a leadership role as it relates to content analytics, automation and the application of machine learning within the process automation market.
  • Mobility: ensure we have best-of-breed on device and zero footprint mobile capture capabilities.

 

ABBYY uses AI technologies to solve document-related problems for enterprises using intelligent capture. Could you walk us through the different machine learning technologies that are used for these applications?

ABBYY leverages several AI enabling technologies to solve document-related and process-related challenges for businesses. More specifically, we work with computer vision, neural networks, machine learning, natural language processing and cognitive skills. We utilize these technologies in the following ways:

Computer Vision: utilized to extract, analyze, and understand information from images, including scanned documents.

Neural Networks: leveraged within our capture solutions to strengthen the accuracy of our classification and extraction technology. We also utilize advanced neural network techniques within our OCR offerings to enhance the accuracy and tolerance of our OCR technology.

Machine Learning: enables software to “learn” and improve, which increases accuracy and performance. In a workflow involving capturing documents and then processing with RPA, machine learning can learn from several variations of documents.

Natural Language Processing: enables software to read, interpret, and create actionable and structured data around unstructured content, such as completely unstructured documents like contracts, emails and other free-form communications.

Cognitive Skills: the ability to carry out a given task with determined results within a specific amount of time and cost. Examples within our products include extracting data and classifying documents.
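ABBYY's classification and extraction engines are proprietary, but the general pattern of machine-learning document classification described above can be sketched with a toy pipeline: learn a text classifier from labelled examples of each document type, then use it to route new documents before field-level extraction. The tiny training set, labels and model choice below are purely illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: OCR'd text snippets labelled by document type.
texts = [
    "invoice number 4471 total due 30 days net payment terms",
    "please remit payment for the attached invoice by the due date",
    "claim form policy holder incident date description of damages",
    "insurance claim reference injury report adjuster assigned",
    "bill of lading carrier shipper consignee freight description",
    "waybill shipment origin destination gross weight pieces",
]
labels = ["invoice", "invoice", "claim", "claim", "waybill", "waybill"]

# TF-IDF features plus a linear classifier: a common baseline for routing
# documents to the right downstream workflow (e.g. an RPA bot).
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

print(classifier.predict(["outstanding invoice balance payable within 30 days"]))
# expected on this toy data: ['invoice']
```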

 

ABBYY Digital Intelligence solutions help organizations accelerate their digital transformation. How do you define Digital Intelligence, how does it leverage RPA, and how do you go about introducing this to clients?

Digital Intelligence means gaining the valuable, yet often hard to attain, insight into an organization’s operation that enables true business transformation. With access to real-time data about exactly how their processes are currently working and the content that fuels them, Digital Intelligence empowers businesses to make tremendous impact where it matters most: customer experience, competitive advantage, visibility, and compliance.

We are educating our clients as to how Digital Intelligence can accelerate their digital transformation projects by addressing the challenges they have with unstructured and semi-structured data that is locked in documents such as invoices, claims, bills of lading, medical forms, etc. Customers focused on implementing automation projects can leverage Content Intelligence solutions to extract, classify, and validate documents to generate valuable and actionable business insights from their data.

Another component of Digital Intelligence is helping customers solve their process-related challenges. Specifically in relation to using RPA, there is often a lack of visibility of the full end-to-end process and consequently there is a failure to consider the human workflow steps in the process and the documents on which they work. By understanding the full process with Process Intelligence, they can make better decisions on what to automate, how to measure it and how to monitor the entire process in production.

We introduce this concept to clients via the specific solutions that make up our Digital Intelligence platform. Content Intelligence enables RPA digital workers to turn unstructured content into meaningful information. Process Intelligence provides complete visibility into processes and how they are performing in real time.

 

What are the different types of unstructured data that you can currently work with?

We transform virtually any type of unstructured content, from simple forms to complex and free-form documents. Invoices, mortgage applications, onboarding documents, claim forms, receipts, and waybills are common use cases among our customers. Many organizations utilize our Content Intelligence solutions, such as FlexiCapture, to transform their accounts payable operations, enabling companies to reduce the amount of time and costs associated with tedious and repetitive administrative tasks while also freeing up valuable personnel resources to focus on high-value, mission critical responsibilities.

 

Which type of enterprises best benefit from the solutions offered by ABBYY?

Enterprises of all sizes, industries, and geographic markets can benefit from ABBYY’s Digital Intelligence solutions. In particular, organizations that are very process-oriented and document driven see substantial benefits from our platform. Businesses within the insurance, banking and financial services, logistics, and healthcare sectors experience notable transformation from our solutions.

For financial service institutions, extracting and processing content effectively can enhance application and onboarding operations, and also enable mobile capabilities, which is becoming increasingly important to remain competitive. With Content Intelligence, banks are able to easily capture documents submitted by the customer – including utility bills, pay stubs, W-2 forms – on virtually any device.

In the insurance industry, Digital Intelligence can significantly improve claims processes by identifying, extracting, and classifying data from claim documents then turning this data into information that feeds into other systems, such as RPA.

Digital Intelligence is a cross-industry solution. It enables enterprises of all compositions to improve their processes and generate value from their data, helping businesses increase operational efficiencies and enhance overall profit margins.

 

Can you give some examples of how clients would benefit from the Digital Intelligence solutions that are offered by ABBYY?

Several recent examples come to mind relating to transforming accounts payable and claims. A billion-dollar manufacturer and distributor of medical supplies was experiencing double-digit sales growth year-over-year. It used ABBYY solutions with RPA to automate its 2,000 invoices per day and achieved significant results in productivity and cost efficiencies. Likewise, an insurance company digitized its processing of more than 150,000 annual claims. From claim setup to invoice clarity, it achieved more than 5,000 hours of productivity benefits.

Another example is a multi-billion-dollar global logistics company that had a highly manual invoice-processing challenge. It had dozens of people processing hundreds of thousands of invoices from 124 different vendors annually. When it first considered RPA for its numerous finance activities, it shied away from invoice processing because of the complexity of semi-structured documents. It used our solutions to extract, classify and validate invoice data, which included machine learning for ongoing training on invoices. If there was data that could not be matched, invoices went to a staff member for verification, but the points that needed to be checked were clearly highlighted to minimize effort. The invoices were then processed in the ERP system using RPA software bots. As a result, its accounts payable is now completely automated, and it is able to process thousands of invoices in a fraction of the time with significantly fewer errors.

 

What are some of the other interesting machine learning powered applications that are offered by ABBYY?

Machine learning is at the heart of our Content Intelligence solutions. ML fuels how we train our classification and extraction technology. We utilize this technology in our FlexiCapture solution to acquire, process, and validate data from documents – even complex or free form ones – and then feed this data into business applications including BPM and RPA. Leveraging machine learning, we are able to transform content-centric processes in a truly advanced way.

 

Is there anything else that you would like to share about ABBYY?

It goes without saying that these are uncertain and unprecedented times. ABBYY is fully committed to helping businesses navigate these challenging circumstances. It is more important than ever that businesses have what it takes to make timely, intelligent decisions. There is a lot of data coming in and it can be overwhelming. We are committed to making sure organizations are equipped with the technologies they need to deliver outcomes and help customers.

I really enjoyed learning about your work. For anyone who wishes to learn more, please visit ABBYY.



Marc Sloan, Co-Founder & CEO of Scout – Interview Series


Marc Sloan is the Co-Founder & CEO of Scout, the world’s first web browser chatbot, a digital assistant for getting anything done online. Scout suggests useful things it can do for you based on what you’re doing online.

What initially attracted you to AI?

My first experience of working on AI was during a gap year I spent working in the natural language processing research team at GCHQ during my Bachelor’s degree. I got to see first-hand the impact machine learning could have on real world problems and the difference it makes.

It flipped a switch in my mind about how computers can be used to solve problems: software engineering teaches you to create programs that take data and produce results, but machine learning lets you take data and describe the results you want in order to produce a program. This means you can use the same framework to solve thousands of different problems. To me this felt far more impactful than having to write a program for each problem.

I was already studying optimisation problems in mathematics alongside computer science, so once I got back to university I focused on AI and completed my dissertation on speech processing before applying for a PhD in Information Retrieval at UCL.

 

You researched reinforcement learning in web search under the supervision of David Silver, who led the AlphaGo project. Could you discuss some of this research?

My PhD was on the topic of applying reinforcement learning to learning to rank problems in information retrieval, a field I helped create called Dynamic Information Retrieval. I was supervised by Prof Jun Wang and Prof David Silver, both experts in agent-based reinforcement learning.

Our research looked at how search engines could learn from user behaviour to improve search results autonomously over time. Using a Multi-Armed Bandit approach, our system would attempt different search rankings and collect click behaviour to determine if they were effective or not. It could also adapt to individual users over time and was particularly effective in handling ambiguous search queries. At the time, David was focusing deeply on the Go problem and he helped me determine the appropriate reinforcement learning setup of states and value function for this particular problem.
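The research itself is in the published papers, but the bandit loop described above can be illustrated with a minimal epsilon-greedy sketch: treat each candidate ranking as an arm, serve one per query, and update its estimated click-through rate from observed clicks. The simulated click model and the specific numbers below are illustrative assumptions, not the actual Dynamic Information Retrieval system.

```python
import random

class RankingBandit:
    """Epsilon-greedy bandit over candidate result rankings (illustrative)."""

    def __init__(self, n_rankings: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.clicks = [0.0] * n_rankings   # accumulated clicks per ranking
        self.shows = [0] * n_rankings      # times each ranking was served

    def choose(self) -> int:
        # Explore at random occasionally, and while any ranking is still untried.
        if random.random() < self.epsilon or not all(self.shows):
            return random.randrange(len(self.shows))
        rates = [c / s for c, s in zip(self.clicks, self.shows)]
        return max(range(len(rates)), key=rates.__getitem__)   # exploit

    def update(self, ranking: int, clicked: bool) -> None:
        self.shows[ranking] += 1
        self.clicks[ranking] += float(clicked)

# Simulated experiment: ranking 2 has the highest true click-through rate.
true_ctr = [0.10, 0.15, 0.30]
bandit = RankingBandit(n_rankings=3)
for _ in range(5000):
    arm = bandit.choose()
    bandit.update(arm, random.random() < true_ctr[arm])
print(bandit.shows)   # ranking 2 should dominate after enough queries
```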

 

What are some of the entrepreneur lessons that you learned from working with David Silver?

Research at UCL is often entrepreneurial. David had previously founded Elixir Studios with Demis Hassabis and then of course joined DeepMind to work on AlphaGo. But other members of our Media Futures research group also ended up spinning out a range of different startups: Jun founded Mediagamma (applying RL to online ad spend), Simon Chan started prediction.io (acquired by Salesforce) and Jagadeesh Gorla started Jaggu (a recommendation service for e-commerce). Our team often discussed the commercial impact our research could have; I think this is perhaps because UCL's base in London makes it a natural starting point for creating a business.

 

You recently launched Scout, the world’s first web browser chatbot. What was the inspiration behind launching Scout?

The idea naturally evolved from my PhD research. I went straight from finishing my PhD to joining Entrepreneur First where I started to think about how I could turn my research into a product.

Before I started this, I completed an internship at Microsoft Research where I applied my research to Bing. At the time, the main thing I learned from my research was that information finding could be predicted based on online user behaviour. But I became frustrated that the only real way to surface these predictions in a search engine was by making auto-suggest better. So I started to think about how the user’s entire online experience could be improved using these predictions, not just the search experience.

It was this thinking that led me and my new co-founder on Entrepreneur First to create a browser add-on that observes user behaviour, predicts what information the user is likely to need next online, and fetches it for them. After a few years of experiments and prototypes, this evolved into a chatbot interface where the browser ‘chats’ to you about what you’re up to online and tries to help you along the way.

 

Which web browsers will Scout be compatible with?

We’re focusing on Chrome at the moment due to it being the most popular web browser and having a mature add-on architecture, but we have prototypes working on Firefox and Safari and even a mobile app.

 

The Scout shopping assistant functionality sounds like it could save users both time and money. Assuming someone is researching a product on Amazon, what happens in the backend, and how does Scout interact with the user?

The idea is that once you have Scout installed, you just continue using the web as normal. If you’re shopping, you may visit Amazon to look at products. At this point, Scout recognises that you’re shopping on Amazon, and the product you’re looking at, and it will say “Hello”. It pops up as a chat widget on the webpage, kind of like how Intercom works, except Scout can appear on potentially any webpage. You can see what it looks like on my website.

Because you’re shopping, it’ll start to suggest ways it can help. It’ll ask you if you want to see reviews online, other prices, YouTube videos of the product and more. You interact by pressing buttons and the chatbot tailors the experience to what you want it to do. Whenever it finds information (like a YouTube video), it will embed it within the chat thread, just like how a friend might share media with you on WhatsApp. Over time, you end up having a dialogue with the browser about what you are doing online, with the browser helping you along the way.

The webpage processing happens within the browser itself. The only information our backend sees is the chat thread, meaning that the privacy implications are minimal.

We have a bespoke architecture for understanding online browsing behaviour and managing dialogues with the user. We use machine learning to identify what tasks we can help with online and how we should help. Originally, we used reinforcement learning to adapt to user preferences over time. However, one of the biggest lessons I’ve learned from running an AI startup is to keep processes simple and to try to only use machine learning to optimise an existing process. So instead, we now have a sophisticated rules engine for handling tasks over time that can be managed by reinforcement learning once we need to scale.

 

What are some examples of how Scout can assist with event planning?

We realised that event planning (and travel booking) are not so different from shopping online. You’re still looking at products, reading reviews and committing to purchase/attend. So a lot of what we’ve built for shopping also applies here.

The biggest difference is that time and location are now important. So for instance, if you’re looking at concert tickets on Ticketmaster, Scout can identify the address of the venue and suggest finding you directions from your current location to it, or find the price of an Uber, or suggest what time you should leave. If you’ve connected Scout into your calendar, then Scout can check to see if you’re available at the time of the event and add it to your calendar for you.

In the future, we foresee Scout users being able to communicate to their friends through the platform to discuss the things they’re doing online such as event planning, shopping, work etc.

 

Dialogue triggers will be used for Scout to initiate communications. What are some of these triggers?

By default, Scout won’t disturb you unless it encounters a trigger that tells it you may need help. There are several types of trigger:

  • Visiting a specific website.
  • Visiting a type of website (such as news, shopping etc.).
  • Visiting a website containing a certain type of information (i.e. an address, a video etc.).
  • Clicking links or buttons on webpages.
  • Interacting with Scout by pressing buttons.
  • Scout retrieving certain types of media such as videos, music, tweets etc.

We plan to allow users to fine-tune what type of triggers they want Scout to respond to, and eventually, learn their preference automatically.
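Scout's real trigger engine is not public, but the kinds of triggers listed above can be illustrated with a minimal rule-matching sketch: each trigger pairs a predicate over a small page-context record with a suggestion for the chat widget. The context fields, rules and suggestion texts are illustrative assumptions, not Scout's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PageContext:
    url: str
    site_type: str                                     # e.g. "shopping", "news"
    entities: set[str] = field(default_factory=set)    # e.g. {"address", "video"}

# Each trigger pairs a predicate over the page context with a suggestion.
TRIGGERS: list[tuple[Callable[[PageContext], bool], str]] = [
    (lambda c: "ticketmaster.com" in c.url, "Add this event to your calendar?"),
    (lambda c: c.site_type == "shopping", "Want to see reviews and other prices?"),
    (lambda c: "address" in c.entities, "Need directions to this address?"),
    (lambda c: "video" in c.entities, "Play related videos alongside this page?"),
]

def suggestions_for(context: PageContext) -> list[str]:
    """Return every suggestion whose trigger matches the current page."""
    return [text for predicate, text in TRIGGERS if predicate(context)]

# Example: a product page that also contains an embedded video.
page = PageContext(url="https://www.amazon.com/dp/B000XYZ",
                   site_type="shopping", entities={"video"})
print(suggestions_for(page))
```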

 

Can you discuss some of the difficulties behind ensuring that Scout is genuinely helpful when it decides to interact with a user without becoming annoying?

We take user engagement very seriously and try to measure whether interactions led to positive or negative outcomes. We try to maintain a good ratio for how often Scout tries to start a conversation and how often it’s used. However, it’s a tricky balance to get right and we’re always trying to improve.

Because of the intrusive nature of this product, getting the interface and UX right is critical. We’ve spent a lot of time trying completely different interfaces and user interaction methods. This work has led us to the current, chatbot style interface, which we find gives us the greatest flexibility in the help we can provide, coupled with user familiarity and minimal user effort for interactions.

 

Can you provide other scenarios of how Scout can assist end users?

Our focus at the moment is in market-testing specific applications for Scout. Shopping and event planning have already been mentioned, but we’re also looking at how Scout can help academics (with finding research papers, author details and reference networks) and even guitarists (finding guitar sheet music, playing music and videos alongside sheet music online and helping to tune a guitar). We’ve also spent some time exploring professional scenarios such as online recruitment, financial analysis and law.

Ultimately, Scout can potentially work on any website and help in any scenario, which is what makes the technology incredibly exciting, but also makes it difficult to get started.

 

Is there anything else that you would like to share about Scout?

If you’d like to see what it’s like if your browser could talk to you, you can read more on Scout’s blog.

Thank you for the fascinating take on designing a unique type of chatbot. We are excited to follow this project. You may visit the Scout website or Marc Sloan's website to learn more.
