
Lisa Falzone, CEO of Athena Security – Interview Series


Athena Security and its CEO Lisa Falzone are improving personal security by reducing the time it takes police and medical teams to arrive at crime scenes. According to Athena Security, its driving motto is not to “profile or resell identities or user data,” but to “simply protect the public.”

Detailing her work and the company’s, Ms. Falzone responded to a number of questions from Unite.AI.

 

How did you come out of retirement to build a life-saving tech business?

After exiting Revel Systems, the iPad point-of-sale company I co-founded, I felt the need to create something again, but this time I wanted to use technology to address a human need and protect life. After the Stoneman Douglas High School shooting happened in 2018 and Congress was still unable to take action to resolve gun violence, I thought I could use technology and computer vision in a proactive way to help prevent these crimes from happening and possibly save lives. That’s what Athena does. Athena’s cameras use computer vision to identify weapons or violent behavior and then alert police or business owners immediately. It’s basically like a fire alarm system for guns.

 

How have AI and facial recognition become interchangeable, and how have the media and Hollywood perpetuated the myth of their misuse?

There’s been a lot of controversy in the media with computer vision and facial recognition systems racially profiling people or invading people’s privacy, but Athena’s cameras are different because they only focus on identifying and flagging weapons or violent behavior. Athena eliminates bias and racial profiling because our computer vision is programmed to recognize guns, not the face or skin color of the person holding them. We do not do any profiling of people; we simply want to protect people from gun violence, and that’s what our cameras do. Our cameras are also proactive, meaning that rather than just record what’s happening, they can alert police and medical teams to reduce the time it takes for first responders to arrive, which can mean the difference between life and death.

 

How does your company Athena advise clients to use on-premise computing to avoid the cloud and Big Brother’s grasp?

Another concern that comes up with computer vision and surveillance is the idea that the government is watching you 24/7, but with Athena’s cameras, we offer on-premise computing, which eliminates the risk of hosting private data on the cloud where it can become vulnerable. Our goal isn’t to spy on people, it’s to save lives by reducing the response time of emergency responders and eliminating human error.

 

Can schools like Archbishop Wood High School and places of worship like Al-Noor Mosque in New Zealand take comfort in having an extra layer of security always on?

We’re constantly hearing about shootings in the news at schools, concerts, or even places of worship, and Congress hasn’t done anything to stop them, so we wanted to be part of the solution. Athena has implemented our cameras in schools such as Archbishop Wood High School to allow police to be alerted and respond faster when there’s an active shooter, or even prevent the shooting altogether. We’ve also been able to implement our cameras internationally in the Al-Noor Mosque in New Zealand, which was one of the mosques targeted in the terrorist attack in March 2019. We have the technology and we’re using it to prevent these tragedies from happening.

 

How was this unique form of computer vision and object detection achieved by hiring professional actors to train the A.I. brain to 99%+ accuracy?

It was important for us to figure out how to train Athena’s computer vision brain to detect violent behavior so that it wouldn’t set off false alarms. We brought in trained actors to enact violent situations or wield weapons until the cameras were able to detect this behavior with 99% accuracy. When you have one security guard watching several screens, the chance that they’ll miss something is extremely high, but computer vision removes human error by watching 100% of the screen at all times. Our cameras also supply police or business owners with real-time information on what is happening at that moment, rather than 24-48 hours later.
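Conceptually, the “fire alarm for guns” maps onto a simple frame-by-frame alerting loop: score every video frame with a detector and notify first responders the moment a high-confidence detection appears. Below is a minimal, hypothetical Python sketch of that loop (the detect_weapons stub stands in for a trained model; none of this is Athena’s actual code):

import cv2  # OpenCV, used here to read the camera stream

ALERT_THRESHOLD = 0.99  # only alert on high-confidence detections

def detect_weapons(frame):
    # Hypothetical stand-in for a trained weapon-detection model;
    # a real system would return (label, confidence) pairs per frame.
    return []

def monitor(stream_url, notify):
    capture = cv2.VideoCapture(stream_url)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        for label, confidence in detect_weapons(frame):
            if confidence >= ALERT_THRESHOLD:
                notify(label, confidence)  # e.g. page police or the site owner
    capture.release()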


Former diplomat and translator for the UN, currently freelance journalist/writer/researcher, focusing on modern technology, artificial intelligence, and modern culture.


William Santana Li, CEO of Knightscope – Interview Series


Knightscope is a leader in developing autonomous security capabilities, with a vision of one day being able to predict and prevent crime, disrupting the $500 billion security industry. Its platform is a profound combination of self-driving technology, robotics and artificial intelligence.

William Santana Li is the Chairman and CEO of Knightscope. He is a seasoned entrepreneur, intrapreneur and former corporate executive at Ford Motor Company, as well as the Founder and COO of GreenLeaf, which became the world’s second-largest automotive recycler.

Knightscope was launched in 2013, which was very forward-thinking for the time. What was the inspiration behind launching this company?

A professional and a personal motivation. The professional answer: as a former automotive executive, I believe deeply that autonomous self-driving technology is going to turn the world upside down – I just don’t agree with the prevailing view on how to commercialize the technology. Over $80 billion has been invested in autonomous technology, with something like 200 companies working on it – for years. Yet no one has shipped anything commercially viable. I believe Knightscope is literally the only company in the world operating fully autonomously 24/7/365 across an entire country, without human intervention, generating real revenue, with real clients, in the real world. Our crawl, walk, run approach is likely more suitable for this extremely complicated and execution-intensive technology. My personal motivation: someone hit my town on 9/11 and I’m still furious – and I am dedicating the rest of my life to better securing our country. You can learn more about why we built Knightscope here.

 

Knightscope offers clients a Machine-as-a-Service (MaaS) subscription that aggregates data from the robots, analyzes it for anything out of the ordinary and serves up that information. What type of data is being collected?

Today we can read 1,200 license plates per minute, detect a person, run a thermal scan and check for rogue mobile devices… it adds up to over 90 terabytes of data a year that no human could ever process. So our clients utilize our state-of-the-art browser-based user interface to interact with the machines. You can get a glimpse of it here – we call it the KSOC (Knightscope Security Operations Center). In the future, our desire is to have the machines be able to ‘see, feel, hear and smell’ and do 100 times more than a human could ever do – giving law enforcement and security professionals ‘superpowers’ – so they can do their jobs much more effectively.

 

The K1 is a stationary machine that is ideal for entry and exit points. What capabilities does this machine offer?

Yes, the K1 operates primarily at ingress/egress points for humans and/or vehicles. All our machines have the same suite of technologies – but at this time the K1 does have facial recognition capabilities, which have proven to be quite useful in securing a location.


The K3 is an indoor autonomous robot, and the K5 is an outdoor autonomous robot, both capable of autonomous recharging and of having conversations with humans. What else can you tell us about these robots, and is there anything else that differentiates the two robots from each other?

The K3 is the smaller version capable of handling much smaller and dynamic indoor environments.


Obviously the K5 is weatherproof and can even go up vehicle ramps – one of our clients is a 9-story parking structure – and the robot patrols multiple levels on its own, which is a bit of a technical feat.


 

Your robots have been tested in multiple settings including shopping malls and parking lots. What are some other settings or use cases which are ideal for these robots?

Basically, anywhere outdoors or indoors you may see a security guard.  Commercial real estate, corporate campuses, retail, warehouses, manufacturing plants, healthcare, stadiums, airports, rail stations, parks, data centers – the list is massive.  Usually we do well when the client has a genuine crime problem and/or budget challenges.

 

Could you share with us some of the noteworthy clients which are currently using the robots in a commercial setting?

Ten of the Fortune 1000 corporations are clients; Samsung, Westfield Malls, the Sacramento Kings, the City of Hayward, the City of Huntington Park, Citizens Bank, XPO Logistics, Faurecia, Dignity Health and Houston Methodist Hospital are just a few that come to mind. We operate across 4 time zones, in the U.S. only. You can check them out on our homepage at www.knightscope.com

 

The K7 is a multi-terrain autonomous robot that is currently under development. The pictures of this robot look very impressive. What can you tell us about the future capabilities of the K7?

The K7 is technically challenging but is intended to handle much more difficult terrain and much larger environments – with gravel, dirt, sand, grass, etc.  It is the size of a small car.


 

Knightscope is currently fundraising on StartEngine. What are the investment terms for investors?

We are celebrating our 7th anniversary and have raised over $40 million since inception to build all this technology from scratch. We design, engineer, build, deploy and support it. Made in the USA – and we are backed by over 7,000 investors and 4 major corporations; you can learn about our investor base here. We are now raising $50 million in growth capital to scale the Company up to profitability – we can accept accredited and unaccredited investors, as well as domestic and international investors, from $1,000 to $10M, completely online. You can learn more about the terms and buy shares here: www.startengine.com/knightscope

 

Is there anything else that you would like to share about Knightscope?

As I write this response, we are in complete lockdown in Silicon Valley due to the global pandemic.  The crazy thing is that our clients are ‘essential services’ (law enforcement agencies, hospitals, security teams) so we must continue to operate 24/7/365.  You can read more about why I think you should consider investing in Knightscope here – but these days the important thing to remember is that robots are immune!

Thank you for sharing information about your amazing startup. Readers who wish to learn more may visit Knightscope or the StartEngine investment page.



Anthony Macciola, Chief Innovation Officer at ABBYY – Interview Series


Anthony is recognized as a thought leader and primary innovator of products, solutions, and technologies for the intelligent capture, robotic process automation (RPA), business process management (BPM), business intelligence (BI) and mobile markets.

ABBYY is an innovator and leader in artificial intelligence (AI) technology, including machine learning and natural language processing, that helps organizations better understand and drive context and outcomes from their data. The company aims to grow and strengthen its leadership position by satisfying the ever-increasing demand for AI-enabled products and solutions.

ABBYY has been developing semantic and AI technologies for many years. Thousands of organizations from over 200 countries and regions have chosen ABBYY solutions that transform documents into business value by capturing information in any format. These solutions help organizations of diverse industries boost revenue, improve processes, mitigate risk, and drive competitive advantage.

What got you initially interested in AI?

I first became interested in AI in the 90s. In my role, we were utilizing support vector machines, neural networks, and machine learning engines to create extraction and classification models. At the time, it wasn’t called AI. However, we were leveraging AI to address problems surrounding data and document-driven processes – problems like effectively and accurately extracting, classifying and digitizing data from documents. From very early on in my career, I’ve known that AI can play a key role in transforming unstructured content into actionable information. Now, AI is no longer seen as a futuristic technology but an essential part of our daily lives – both within the enterprise and as consumers. It has become ubiquitous. At ABBYY, we are leveraging AI to help solve some of today’s most pressing challenges. AI and related technologies, including machine learning, natural language processing, neural networks and OCR, help power our solutions that enable businesses to obtain a better understanding of their processes and the content that fuels them.

 

You’re currently the Chief Innovation Officer at ABBYY. What are some of the responsibilities of this position? 

In my role as Chief Innovation Officer for ABBYY, I’m responsible for our overall vision, strategy, and direction relative to various AI initiatives that leverage machine learning, robotic process automation (RPA), natural language processing and text analytics to identify process and data insights that improve business outcomes.

As CIO, I’m responsible for overseeing the direction of our product innovations as well as identifying outside technologies that are a fit to integrate into our portfolio. I initiated the discussions that led to the acquisition of TimelinePI, now ABBYY Timeline, the only end-to-end Process Intelligence platform in the market. Our new offering enables ABBYY to provide an even more robust and dynamic solution for optimizing the processes a business runs on and the data within those processes. We provide enterprises across diverse industries with solutions to accelerate digital transformation initiatives and unlock new opportunities for providing value to their customers.

I also guide the strategic priorities for the Research & Development and Product Innovation teams. My vision for success with regards to our innovations is guided by the following tenets:

  • Simplification: make everything we do as easy as possible to deploy, consume and maintain.
  • Cloud: leverage the growing demand for our capabilities within a cloud-based SaaS model.
  • Artificial Intelligence: build on our legacy expertise in linguistics and machine learning to ensure we take a leadership role as it relates to content analytics, automation and the application of machine learning within the process automation market.
  • Mobility: ensure we have best-of-breed on device and zero footprint mobile capture capabilities.

 

ABBYY uses AI technologies to solve document-related problems for enterprises using intelligent capture. Could you walk us through the different machine learning technologies that are used for these applications?

ABBYY leverages several AI enabling technologies to solve document-related and process-related challenges for businesses. More specifically, we work with computer vision, neural networks, machine learning, natural language processing and cognitive skills. We utilize these technologies in the following ways:

Computer Vision: utilized to extract, analyze, and understand information from images, including scanned documents.

Neural Networks: leveraged within our capture solutions to strengthen the accuracy of our classification and extraction technology. We also utilize advanced neural network techniques within our OCR offerings to enhance the accuracy and tolerance of our OCR technology.

Machine Learning: enables software to “learn” and improve, which increases accuracy and performance. In a workflow involving capturing documents and then processing with RPA, machine learning can learn from several variations of documents.

Natural Language Processing: enables software to read, interpret, and create actionable and structured data around unstructured content, such as completely unstructured documents like contracts, emails and other free-form communications.

Cognitive Skill: the ability to carry out a given task with determined results within a specific amount of time and cost. Examples within our products include extracting data and classifying a document.
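To make the machine-learning piece concrete, here is a deliberately generic sketch of learned document classification using scikit-learn (TF-IDF features feeding a linear classifier), illustrating the train-on-labeled-examples pattern described above. It is an illustration only, not ABBYY’s implementation:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: document text paired with its document class.
documents = [
    "Invoice number 4711, total due within 30 days",
    "Claim form for water damage under policy 99-1234",
    "Employment contract between the parties named below",
]
labels = ["invoice", "claim", "contract"]

# TF-IDF turns raw text into features; the classifier learns the mapping
# from those features to a document class.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(documents, labels)

print(classifier.predict(["Total amount due: see the attached invoice"]))  # ['invoice']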

 

ABBYY Digital Intelligence solutions help organizations accelerate their digital transformation. How do you define Digital Intelligence, how does it leverage RPA, and how do you go about introducing this to clients?

Digital Intelligence means gaining the valuable, yet often hard to attain, insight into an organization’s operation that enables true business transformation. With access to real-time data about exactly how their processes are currently working and the content that fuels them, Digital Intelligence empowers businesses to make tremendous impact where it matters most: customer experience, competitive advantage, visibility, and compliance.

We are educating our clients as to how Digital Intelligence can accelerate their digital transformation projects by addressing the challenges they have with unstructured and semi-structured data that is locked in documents such as invoices, claims, bills of lading, medical forms, etc. Customers focused on implementing automation projects can leverage Content Intelligence solutions to extract, classify, and validate documents to generate valuable and actionable business insights from their data.

Another component of Digital Intelligence is helping customers solve their process-related challenges. Specifically, in relation to using RPA, there is often a lack of visibility into the full end-to-end process, and consequently a failure to consider the human workflow steps in the process and the documents on which they work. By understanding the full process with Process Intelligence, they can make better decisions on what to automate, how to measure it and how to monitor the entire process in production.

We introduce this concept to clients via the specific solutions that make up our Digital Intelligence platform. Content Intelligence enables RPA digital workers to turn unstructured content into meaningful information. Process Intelligence provides complete visibility into processes and how they are performing in real time.

 

What are the different types of unstructured data that you can currently work with?

We transform virtually any type of unstructured content, from simple forms to complex and free-form documents. Invoices, mortgage applications, onboarding documents, claim forms, receipts, and waybills are common use cases among our customers. Many organizations utilize our Content Intelligence solutions, such as FlexiCapture, to transform their accounts payable operations, enabling companies to reduce the amount of time and costs associated with tedious and repetitive administrative tasks while also freeing up valuable personnel resources to focus on high-value, mission critical responsibilities.

 

Which type of enterprises best benefit from the solutions offered by ABBYY?

Enterprises of all sizes, industries, and geographic markets can benefit from ABBYY’s Digital Intelligence solutions. In particular, organizations that are very process-oriented and document driven see substantial benefits from our platform. Businesses within the insurance, banking and financial services, logistics, and healthcare sectors experience notable transformation from our solutions.

For financial service institutions, extracting and processing content effectively can enhance application and onboarding operations, and also enable mobile capabilities, which is becoming increasingly important to remain competitive. With Content Intelligence, banks are able to easily capture documents submitted by the customer – including utility bills, pay stubs, W-2 forms – on virtually any device.

In the insurance industry, Digital Intelligence can significantly improve claims processes by identifying, extracting, and classifying data from claim documents, then turning this data into information that feeds into other systems, such as RPA.

Digital Intelligence is a cross-industry solution. It enables enterprises of all compositions to improve their processes and generate value from their data, helping businesses increase operational efficiencies and enhance overall profit margins.

 

Can you give some examples of how clients would benefit from the Digital Intelligence solutions that are offered by ABBYY?

Several recent examples come to mind relating to transforming accounts payable and claims. A billion-dollar manufacturer and distributor of medical supplies was experiencing double-digit sales growth year-over-year. It used ABBYY solutions with RPA to automate its 2,000 invoices per day and achieved significant results in productivity and cost efficiencies. Likewise, an insurance company digitized its 150,000+ annual claims processing. From claim setup to invoice clarity, it achieved more than 5,000 hours of productivity benefits.

Another example is with a multi-billion-dollar global logistics company that had a highly manual invoice-processing challenge. It had dozens of people processing hundreds of thousands of invoices from 124 different vendors annually. When it first considered RPA for its numerous finance activities, it shied away from invoice processing because of the complexity of semi-structured documents. It used our solutions to extract, classify and validate invoice data, which included machine learning for ongoing training on invoices. If there was data that could not be matched, invoices went to a staff member for verification, but the points that needed to be checked were clearly highlighted to minimize effort. The invoices were then processed in the ERP system using RPA software bots. As a result, its accounts payable operation is now completely automated and it is able to process thousands of invoices in a fraction of the time with significantly fewer errors.
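The routing logic described here (auto-post high-confidence extractions, send everything else to a person with only the doubtful fields highlighted) fits in a few lines. A hedged sketch follows; the field names and threshold are illustrative assumptions, not the client’s actual configuration:

REVIEW_THRESHOLD = 0.95  # assumed cut-off for straight-through processing

def route_invoice(fields, post_to_erp, send_to_reviewer):
    """fields maps field name -> (extracted value, model confidence)."""
    doubtful = {name: value
                for name, (value, confidence) in fields.items()
                if confidence < REVIEW_THRESHOLD}
    if doubtful:
        # Highlight only the fields that need checking, to minimize effort.
        send_to_reviewer(fields, highlight=doubtful)
    else:
        post_to_erp(fields)  # fully automated path, e.g. handed to an RPA bot

route_invoice(
    {"vendor": ("Acme GmbH", 0.99), "total": ("1,240.00", 0.87)},
    post_to_erp=print,
    send_to_reviewer=lambda fields, highlight: print("needs review:", highlight),
)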

 

What are some of the other interesting machine learning powered applications that are offered by ABBYY?

Machine learning is at the heart of our Content Intelligence solutions. ML fuels how we train our classification and extraction technology. We utilize this technology in our FlexiCapture solution to acquire, process, and validate data from documents – even complex or free-form ones – and then feed this data into business applications including BPM and RPA. Leveraging machine learning, we are able to transform content-centric processes in a truly advanced way.

 

Is there anything else that you would like to share about ABBYY?

It goes without saying that these are uncertain and unprecedented times. ABBYY is fully committed to helping businesses navigate these challenging circumstances. It is more important than ever that businesses have what it takes to make timely, intelligent decisions. There is a lot of data coming in and it can be overwhelming. We are committed to making sure organizations are equipped with the technologies they need to deliver outcomes and help customers.

I really enjoyed learning about your work. Anyone who wishes to learn more may visit ABBYY.



Marc Sloan, Co-Founder & CEO of Scout – Interview Series


Marc Sloan is the Co-Founder & CEO of Scout, the world’s first web browser chatbot, a digital assistant for getting anything done online. Scout suggests useful things it can do for you based on what you’re doing online.

What initially attracted you to AI?

My first experience of working on AI was during a gap year I spent working in the natural language processing research team at GCHQ during my Bachelor’s degree. I got to see first-hand the impact machine learning could have on real world problems and the difference it makes.

It flipped a switch in my mind about how computers can be used to solve problems: software engineering teaches you to create programs that take data and produce results, but machine learning lets you take data and describe the results you want in order to produce a program. This means you can use the same framework to solve thousands of different problems. To me this felt far more impactful than having to write a program for each problem.

I was already studying optimisation problems in mathematics alongside computer science, so once I got back to university I focused on AI and completed my dissertation on speech processing before applying for a PhD in Information Retrieval at UCL.

 

You researched reinforcement learning in web search under the supervision of David Silver, the lead researcher behind AlphaGo. Could you discuss some of this research?

My PhD was on the topic of applying reinforcement learning to learning to rank problems in information retrieval, a field I helped create called Dynamic Information Retrieval. I was supervised by Prof Jun Wang and Prof David Silver, both experts in agent-based reinforcement learning.

Our research looked at how search engines could learn from user behaviour to improve search results autonomously over time. Using a Multi-Armed Bandit approach, our system would attempt different search rankings and collect click behaviour to determine if they were effective or not. It could also adapt to individual users over time and was particularly effective in handling ambiguous search queries. At the time, David was focusing deeply on the Go problem and he helped me determine the appropriate reinforcement learning setup of states and value function for this particular problem.
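As a rough illustration of the Multi-Armed Bandit idea (my simplification, not the actual system from the research), an epsilon-greedy agent can treat each candidate ranking as an arm and each click as a reward:

import random

def epsilon_greedy_ranker(rankings, get_click_reward, rounds=1000, epsilon=0.1):
    """Treat each candidate ranking as a bandit arm; clicks are the reward."""
    counts = [0] * len(rankings)
    values = [0.0] * len(rankings)  # running mean click reward per ranking
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.randrange(len(rankings))  # explore: try a random ranking
        else:
            arm = max(range(len(rankings)), key=lambda i: values[i])  # exploit
        reward = get_click_reward(rankings[arm])  # e.g. 1 if the user clicked
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return rankings[max(range(len(rankings)), key=lambda i: values[i])]

Over many rounds the agent converges on the ranking users actually click, while the epsilon fraction of random trials keeps it adapting when preferences shift.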

 

What are some of the entrepreneur lessons that you learned from working with David Silver?

Research at UCL is often entrepreneurial. David had previously founded Elixir Studios with Demis Hassabis and then, of course, joined DeepMind to work on AlphaGo. But other members of our Media Futures research group also ended up spinning out a range of different startups: Jun founded Mediagamma (applying RL to online ad spend), Simon Chan started prediction.io (acquired by Salesforce) and Jagadeesh Gorla started Jaggu (a recommendation service for e-commerce). Our team often discussed the commercial impact our research could have, I think perhaps because UCL’s base in London makes it a natural starting point for creating a business.

 

You recently launched Scout, the world’s first web browser chatbot. What was the inspiration behind launching Scout?

The idea naturally evolved from my PhD research. I went straight from finishing my PhD to joining Entrepreneur First where I started to think about how I could turn my research into a product.

Before I started this, I completed an internship at Microsoft Research where I applied my research to Bing. At the time, the main thing I learned from my research was that information finding could be predicted based on online user behaviour. But I became frustrated that the only real way to surface these predictions in a search engine was by making auto-suggest better. So I started to think about how the user’s entire online experience could be improved using these predictions, not just the search experience.

It was this thinking that led me and my new co-founder on Entrepreneur First to create a browser add-on that observes user behaviour, predicts what information the user is likely to need next online, and fetches it for them. After a few years of experiments and prototypes, this evolved into a chatbot interface where the browser ‘chats’ to you about what you’re up to online and tries to help you along the way.

 

Which web browsers will Scout be compatible with?

We’re focusing on Chrome at the moment due to it being the most popular web browser and having a mature add-on architecture, but we have prototypes working on Firefox and Safari and even a mobile app.

 

The Scout shopping assistant functionality sounds like it could save users both time and money. Assuming someone is researching a product on Amazon, what happens in the backend, and how does Scout interact with the user?

The idea is that once you have Scout installed, you just continue using the web as normal. If you’re shopping, you may visit Amazon to look at products. At this point, Scout recognises that you’re shopping on Amazon, and the product you’re looking at, and it will say “Hello”. It pops up as a chat widget on the webpage, kind of like how Intercom works, except Scout can appear on potentially any webpage. You can see what it looks like on my website.

Because you’re shopping, it’ll start to suggest ways it can help. It’ll ask you if you want to see reviews online, other prices, YouTube videos of the product and more. You interact by pressing buttons and the chatbot tailors the experience to what you want it to do. Whenever it finds information (like a YouTube video), it will embed it within the chat thread, just like how a friend might share media with you on WhatsApp. Over time, you end up having a dialogue with the browser about what you are doing online, with the browser helping you along the way.

The webpage processing happens within the browser itself. The only information our backend sees is the chat thread, meaning that the privacy implications are minimal.

We have a bespoke architecture for understanding online browsing behaviour and managing dialogues with the user. We use machine learning to identify what tasks we can help with online and how we should help. Originally, we used reinforcement learning to adapt to user preferences over time. However, one of the biggest lessons I’ve learned from running an AI startup is to keep processes simple and to try to only use machine learning to optimise an existing process. So instead, we now have a sophisticated rules engine for handling tasks over time that can be managed by reinforcement learning once we need to scale.

 

What are some examples of how Scout can assist with event planning?

We realised that event planning (and travel booking) are not so different from shopping online. You’re still looking at products, reading reviews and committing to purchase/attend. So a lot of what we’ve built for shopping also applies here.

The biggest difference is that time and location are now important. So for instance, if you’re looking at concert tickets on Ticketmaster, Scout can identify the address of the venue and suggest finding you directions from your current location to it, or find the price of an Uber, or suggest what time you should leave. If you’ve connected Scout into your calendar, then Scout can check to see if you’re available at the time of the event and add it to your calendar for you.

In the future, we foresee Scout users being able to communicate with their friends through the platform to discuss the things they’re doing online, such as event planning, shopping, work, etc.

 

Dialogue triggers will be used for Scout to initiate communications. What are some of these triggers?

By default, Scout won’t disturb you unless it encounters a trigger that tells it you may need help. There are several types of trigger (a minimal dispatcher sketch follows the list):

  • Visiting a specific website.
  • Visiting a type of website (such as news, shopping etc.).
  • Visiting a website containing a certain type of information (e.g. an address, a video, etc.).
  • Clicking links or buttons on webpages.
  • Interacting with Scout by pressing buttons.
  • Scout retrieving certain types of media such as videos, music, tweets etc.
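A dispatcher for such triggers can be sketched as a small rules engine, in line with the simple-rules-first approach described earlier. This is an illustrative sketch, not Scout’s actual architecture:

# Illustrative trigger rules: (predicate over a browsing event, opener).
TRIGGERS = [
    (lambda e: e.get("site") == "ticketmaster.com",
     "Want directions to the venue or the price of a ride there?"),
    (lambda e: e.get("site_type") == "shopping",
     "Want to see reviews, other prices, or videos of this product?"),
    (lambda e: "address" in e.get("page_entities", []),
     "I spotted an address. Need directions?"),
]

def dispatch(event, start_conversation):
    """Fire the first matching trigger; stay silent otherwise."""
    for matches, opener in TRIGGERS:
        if matches(event):
            start_conversation(opener)
            return

dispatch({"site_type": "shopping", "page_entities": ["price"]}, print)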

We plan to allow users to fine-tune which types of triggers they want Scout to respond to and, eventually, to learn their preferences automatically.

 

Can you discuss some of the difficulties behind ensuring that Scout is genuinely helpful when it decides to interact with a user without becoming annoying?

We take user engagement very seriously and try to measure whether interactions led to positive or negative outcomes. We try to maintain a good ratio for how often Scout tries to start a conversation and how often it’s used. However, it’s a tricky balance to get right and we’re always trying to improve.

Because of the intrusive nature of this product, getting the interface and UX right is critical. We’ve spent a lot of time trying completely different interfaces and user interaction methods. This work has led us to the current chatbot-style interface, which we find gives us the greatest flexibility in the help we can provide, coupled with user familiarity and minimal user effort for interactions.

 

Can you provide other scenarios of how Scout can assist end users?

Our focus at the moment is in market-testing specific applications for Scout. Shopping and event planning have already been mentioned, but we’re also looking at how Scout can help academics (with finding research papers, author details and reference networks) and even guitarists (finding guitar sheet music, playing music and videos alongside sheet music online and helping to tune a guitar). We’ve also spent some time exploring professional scenarios such as online recruitment, financial analysis and law.

Ultimately, Scout can potentially work on any website and help in any scenario, which is what makes the technology incredibly exciting, but also makes it difficult to get started.

 

Is there anything else that you would like to share about Scout?

If you’d like to see what it’s like if your browser could talk to you, you can read more on Scout’s blog.

Thank you for the fascinating take on designing a unique type of chatbot. We are excited to follow this project. You may visit the Scout website or Marc Sloan’s website to learn more.
