
Interviews

Netanel Eliav, CEO of Sightbit – Interview Series


Netanel Eliav is the CEO of Sightbit, a global development project that harnesses advances in AI and image recognition technology to prevent drowning and save lives.

How did the concept for Sightbit originate?

Friends Netanel Eliav and Adam Bismut were interested in using tech to improve the world. On a visit to the beach, their mission became clear. Adam noticed the lack of tech support for lifeguards, who monitored hard-to-see swimmers with binoculars.

The system uses standard cameras that cover a defined area and transmit that information in real time to lifeguards. What type of range are the cameras capable of? Also, how much does accuracy degrade at greater range?

Sightbit’s innovation is in the software. We work with various off-the-shelf cameras of different ranges, customizing camera setup to meet the needs of each customer and to ensure that the desired area is protected.

At Israel’s Palmahim Beach, where we are conducting a pilot, we built a dedicated cement platform that holds three cameras. Each camera covers 300 meters out to sea in normal conditions, the range required at Palmahim Beach.

A monitor displays a panoramic view of the water and beach, like a security camera display. A dashboard is superimposed over the video feed. Sightbit alerts appear as flashing boxes around individuals and hazards. Multiple views from different camera vantage points are available on a single screen. When a lifeguard clicks on an alert, the program zooms in, allowing the lifeguard to see the swimmer much more clearly than is possible with the naked eye. Four additional cameras will be installed shortly.
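
To make the screen behavior concrete, below is a minimal sketch of how alert boxes and click-to-zoom might be drawn on a camera frame with OpenCV. It is purely illustrative; the function names, box format, and file path are assumptions, not Sightbit's implementation.

```python
import cv2

def draw_alert_boxes(frame, alerts):
    """Overlay alert boxes on a camera frame (illustrative only)."""
    for (x, y, w, h), label in alerts:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return frame

def zoom_on_alert(frame, box, scale=4):
    """Crop the alert region and enlarge it, mimicking click-to-zoom."""
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    return cv2.resize(crop, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

# Hypothetical usage: one detection flagged as a swimmer in distress.
frame = cv2.imread("beach_frame.jpg")            # placeholder image path
alerts = [((420, 180, 60, 60), "distress")]      # (x, y, w, h), label
overlay = draw_alert_boxes(frame.copy(), alerts)
zoomed = zoom_on_alert(frame, alerts[0][0])
```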


Can you discuss some of the computer vision challenges behind being able to differentiate between a human swimming and a human struggling to stay afloat?

We can detect some of the signs of distress based on location and movement. Location: a person might be caught in a rip current, located far from shore, or in a dangerous area. Movement, or lack of movement: our system can recognize swimmers bobbing up and down in the water, floating face down, or waving for help as signs of distress.

Sightbit has developed software that incorporates AI based on convolutional neural networks, image detection, and other proprietary algorithms to detect swimmers in distress and avoid false positives.
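
As a rough illustration of how location and movement cues like these might be combined into a single distress score, here is a minimal sketch. The feature names, thresholds, and weights are assumptions chosen for the example, not Sightbit's proprietary algorithms.

```python
from dataclasses import dataclass

@dataclass
class SwimmerTrack:
    distance_from_shore_m: float  # estimated from camera geometry
    in_hazard_zone: bool          # rip current or other flagged area
    vertical_bobbing: bool        # repeated up-and-down motion in the track
    face_down_still: bool         # floating face down with little movement
    waving_arms: bool             # arm-waving motion detected

def distress_score(track: SwimmerTrack) -> float:
    """Combine simple location and movement cues into a 0..1 score (illustrative heuristic)."""
    score = 0.0
    if track.distance_from_shore_m > 150:
        score += 0.3
    if track.in_hazard_zone:
        score += 0.3
    if track.vertical_bobbing or track.waving_arms:
        score += 0.3
    if track.face_down_still:
        score += 0.4
    return min(score, 1.0)

# A swimmer far from shore who is bobbing in place would cross a 0.5 alert threshold.
assert distress_score(SwimmerTrack(180.0, False, True, False, False)) >= 0.5
```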

What are the risk factors for false positives such as misidentifying someone as drowning, or false negatives such as misidentifying a potential drowning?

The drowning detection feature sometimes generates a low-level warning when a swimmer has remained underwater for long stretches of time.

Like lifeguards, Sightbit primarily detects swimmers in distress. A drowning alert is an alert that has come too late. We focus on dangerous situations that can lead to drowning, allowing for de-escalation before they get out of control. For example, we warn when swimmers are caught in rip currents so that lifeguards or other rescue personnel can reach the individual in time.

Our real-time alerts include:

  • Swimmers in distress
  • Rip currents
  • Children alone in or by the water
  • Water vessels entering the swim area
  • Swimmers entering dangerous areas, such as choppy water, deep water, or hazardous areas alongside breakwater structures or rocks
  • Drowning incidents – soon to be deployed at Palmahim
  • And other situations

What type of training is needed to use the Sightbit system?

No special training is needed. Sightbit’s user interface takes five minutes to learn. We designed the system with lifeguards to ensure that it is easy for them to use and master.

Can you discuss what happens in the backend once an alert is triggered for a potential drowning?

The beach cameras feed into a GPU for video analysis and a CPU for analytics. When the CPU detects a threat, it generates an alert. This alert is customized to customer needs. At Palmahim, we sound alarms and generate visual alerts on the screen. Sightbit can also be configured to call emergency rescue.
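
A minimal sketch of how such a backend might be wired together, with a GPU stage producing detections, a CPU stage turning them into threats, and a per-site configuration deciding which alert channels fire. The class names, alert types, and configuration keys are assumptions for illustration; the actual Sightbit backend is proprietary.

```python
from enum import Enum, auto

class AlertType(Enum):
    SWIMMER_IN_DISTRESS = auto()
    RIP_CURRENT = auto()
    CHILD_ALONE = auto()
    VESSEL_IN_SWIM_AREA = auto()
    DANGEROUS_AREA = auto()

def gpu_analyze(frame):
    """Placeholder for the GPU video-analysis stage (detection and tracking)."""
    return []  # would return detections with positions and track history

def cpu_analytics(detections):
    """Placeholder for the CPU analytics stage that turns detections into threats."""
    return []  # would return a list of AlertType values

def dispatch(alert: AlertType, config: dict) -> None:
    """Fan an alert out to the channels configured for this site."""
    if config.get("sound_alarm"):
        print(f"ALARM: {alert.name}")
    if config.get("on_screen"):
        print(f"Flash {alert.name} box on the lifeguard monitor")
    if config.get("call_rescue"):
        print("Notify emergency rescue services")

# Hypothetical per-site configuration, e.g. alarms and on-screen alerts only.
site_config = {"sound_alarm": True, "on_screen": True, "call_rescue": False}
for alert in cpu_analytics(gpu_analyze(frame=None)):
    dispatch(alert, site_config)
```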

Could you discuss some of your current pilot programs and the types of results that have been achieved?

Sightbit is conducting a pilot at Palmahim Beach in partnership with the Israel Nature and Parks Authority. The system is installed at the Palmahim lifeguard tower and is in use by lifeguards (see above for details about camera placement, warnings, and the Sightbit monitor). The pilot went live at the end of May.

At Palmahim, three lifeguards, all stationed at one central tower, guard the one-kilometer beach. Sightbit provides instantaneous alerts when swimmers are in danger and camera views of swimmers far from the tower.

Prior to the pilot partnership at Palmahim Beach, we conducted proof-of-concept testing at beaches throughout Israel at the invitation of local authorities.

How have government officials reacted so far when introduced to the technology?

Extreme enthusiasm! Cities and major government-run beaches as well as private beaches in Israel, the United States, the Balkans, and Scandinavia have invited Sightbit to conduct pilots. We have been granted permissions by all relevant government bodies.

Is there anything else that you would like to share about Sightbit?

Yes!

  1. We are currently raising funds as part of a seed round. Investors around the world have reached out to us, and we have already received funding offers. We previously received pre-seed funding from the Cactus Capital VC fund in Israel.

  2. Long-Term Potential: People are not optimized for tracking dozens, and certainly not hundreds, of swimmers from a watchtower. Looking long term, Sightbit can enable agencies to guard more shoreline at lower costs by using Sightbit systems for front-line monitoring. Lifeguards can be assigned to headquarters or patrol duty, allowing teams to respond faster to incidents anywhere along the beach. This is lifesaving. Currently, even during peak summer months, lifeguards monitor less than half of the shoreline at designated public swimming beaches.

  3. Sightbit can safeguard sites 24/7, all year round. Where there is no lifeguard service, Sightbit alerts emergency dispatch or local rescue services when a swimmer is in danger (for example, a swimmer swept out to sea in a rip current). Sightbit software can also pinpoint and track a swimmer’s location and deliver rescue tubes via small drones.

  4. Sightbit can bring monitoring to many different aquatic sites that do not currently employ lifeguards. With Sightbit, aquatic work sites, marinas, reservoirs, and other sites can benefit from water safety alerts.

Sightbit also provides risk analytics and management insights, which allow customers to anticipate hazards in advance and improve operations. Customers can track water and weather conditions, crowding, and more.

Thank you for the interview regarding this important project. Readers who wish to learn more should visit Sightbit.


Antoine Tardif is a Futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is the Co-Founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.AI. He is also a member of the Forbes Technology Council.

Interviews

Kevin Tubbs, PhD, SVP Strategic Solutions Group at Penguin Computing – Interview Series


Kevin Tubbs, PhD, is the Senior Vice President of the Strategic Solutions Group at Penguin Computing. Penguin Computing custom designs agnostic, end-to-end (hardware/software/cloud/services) solutions to solve the complex scientific, analytical, and engineering problems facing today’s Fortune 500 companies, startups, academic institutions, and federal organizations.

What initially attracted you to the field of computer science?

My mom and dad bought me a computer when I was very young, and I’ve always had an interest in and knack for computers and tinkering. Through my education I consistently gravitated toward STEM fields, and that led me to want to be involved in a more applied field. My background is in physics and High Performance Computing (HPC). Having a love for computers early on allowed me to keep computer science at the forefront of any other science, math or engineering interest that I’ve had, which has led me to where I am today.

Penguin Computing works closely with the Open Compute Project (OCP) – what is that precisely?

Since the start of the Open Compute Project (OCP) movement, Penguin Computing has been an early adopter, supporter and major contributor to the effort to bring the benefits of OCP to High Performance Computing (HPC) and artificial intelligence (AI).

The focus of OCP is bringing together a global community of developers to create a full ecosystem of infrastructure technology reimagined to be more efficient, flexible and scalable. Penguin Computing joined OCP because of the Open technologies and the idea of a community. What we’ve done over time is ensure that the heritage and technologies from traditional HPC and emerging trends in AI and Analytics can scale efficiently – Penguin Computing drives those things into OCP.

One of the benefits of OCP is that it lowers total cost of ownership (TCO) – lower capital expenses, thanks to removal of all vanity elements, and lower operating expenses due to service from the front, shared power and other design changes – which makes OCP-based technology perfect for scale out.

Penguin Computing has several OCP products including the Penguin Computing Tundra Extreme Scale Platform and Penguin Computing Tundra AP. The Tundra platforms are also compatible with HPC and AI workloads.

Tundra AP, the latest generation of our highly dense Tundra supercomputing platform, combines the processing power of Intel® Xeon® Scalable 9200 series processors with Penguin Computing’s Relion XO1122eAP Server in an OCP form factor that delivers a high density of CPU cores per rack.

When it comes to big data, to optimize performance levels users need to remove bottlenecks that slow down their access to data. How does Penguin Computing approach this problem?

Penguin Computing has leveraged our ability to use Open technologies and move fast with current trends – one of which is big data or the growth of data and data driven workloads. In response to that, we’ve built out our Strategic Solutions Group to address this problem head on.

In addressing the problem, we’ve found that the majority of workloads, even from traditional technical compute, are all motivated to be more data driven. As a result, Penguin Computing designs complete end-to-end solutions by trying to understand the user’s workload. In order to create a workload-optimized end-to-end solution, we focus on the workload-optimized software layer, which includes orchestration and workload delivery. Essentially, we need to understand how the user will make use of the infrastructure.

Next, we try to focus on workload-optimized compute infrastructure. There are varying levels of data and IO challenges, which put a lot of pressure on the compute part. For example, different workloads require different combinations of accelerated compute infrastructure, from CPUs and GPUs to memory bandwidth and networking, that allow data to flow through and be computed on.

Finally, we need to figure out what types of solutions will allow us to deliver that data. We look at workload-optimized data infrastructures to understand how the workload interacts with the data, what the capacity requirements are, and what the IO patterns look like. Once we have that information, it helps us design a workload-optimized system.

Once we have all the information, we leverage our internal expertise at Penguin Computing to architect a design and a complete solution. Beyond designing for performance, we need to understand where it’s deployed (on premises, cloud, edge, a combination of these, etc.). That is Penguin Computing’s approach to delivering an optimized solution for data-driven workloads.

Could you discuss the importance of using a GPU instead of a CPU for deep learning?

One of the biggest trends I’ve seen with regard to the importance of GPUs for Deep Learning (DL) was the move to using general-purpose GPUs (GPGPU) as data-parallel hardware that massively increases the number of compute cores you can deliver to solve a parallel computing problem. This has been going on for the last ten years.

I participated in the early stages of GPGPU programming when I was in graduate school and early on in my career. I believe that jump in compute density was a real eye-opening trend in the HPC and, eventually, AI communities: a GPU provides a lot of dense compute and analytics cores on a single device, lets you fit more into a server, and repurposes something originally meant for graphics into a compute engine.

However, a lot of that relied on converting and optimizing code to run on GPUs instead of CPUs. As we did all of that work, we were waiting for the concept of the killer app – the application or use case that really takes off or is enabled by a GPU. For the GPGPU community, DL was that killer application which galvanized efforts and development in accelerating HPC and AI workloads.

Over time, there was a resurgence of AI and machine learning (ML), and DL came into play. We realized that training a neural network using DL actually mapped very well to the underlying design of a GPU. I believe once those two things converged, you had the ability to do the kinds of DL that were not previously possible on CPUs, whose limits had ultimately constrained our ability to do AI both at scale and in practice.

Once GPUs came into place, they re-energized the research and development community around AI and DL, because before that you just didn’t have the level of compute to do it efficiently and it wasn’t democratized. The GPU really allows you to deliver denser compute that at its core is well designed for DL, and brought it to a level of hardware architecture solutions that made it easier to reach more researchers and scientists. I believe that is one of the big reasons GPUs are better for DL.
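
The compute-density argument can be made concrete with a small benchmark that times the same large matrix multiply, the core operation in DL training, on CPU and then on GPU. This is a generic PyTorch sketch, not Penguin Computing code, and it assumes a CUDA-capable GPU is available.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one large matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # make sure setup has finished
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the asynchronous kernel
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```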

What are some of the GPU-accelerated computing solutions that are offered by Penguin Computing?

Penguin Computing is currently focused on end to end solutions being worked on by our Strategic Solutions Group, particularly with Penguin Computing’s AI and Analytics Practice. Within this practice we’re focused on three high level approaches to GPU-accelerated solutions.

First, we offer a reference architecture for edge analytics, where we’re looking to design solutions that fit in non-traditional data centers (out at the edge or near edge). This can include telco edge data centers, retail facilities, gas stations and more. These are all inference-based AI solutions. Some solutions are geared towards video analytics for contact tracing and gesture recognition to determine if someone is washing their hands or wearing a mask. These are applications of complete solutions that include GPU-accelerated hardware that is fine-tuned for non-traditional or edge deployments as well as the software stacks to enable researchers and end-users to use them effectively.

The next class of Penguin Computing solutions is built for data center and core AI training and inferencing reference architectures. You could think of these sitting inside of a large-scale data center or in the cloud (Penguin Computing Cloud), where some of our customers are doing large-scale training using thousands of GPUs to accelerate DL. We look at how we deliver complete solutions and reference architectures that support all of these software workloads and containerization, from the GPU design and layout all the way through the data infrastructure requirements that support them.

The third class of reference architectures in this practice is a combination of the previous two. What we’re looking at in our third reference architecture family is how to create the data fabrics, pathways, and workflows that enable continuous learning, so that you can run inferencing using our edge GPU-accelerated solutions, push that data to private or public cloud, continue to train on it, and, as the training models are updated, push them back out to inferencing. This way we have an iterative cycle of continuous learning and AI models.
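
In outline, the continuous-learning cycle described above looks like the loop below. This is a schematic sketch with placeholder function names, not a Penguin Computing reference architecture.

```python
def run_edge_inference(model, sensor_stream):
    """Run the current model on edge hardware and collect new observations."""
    return [(sample, model(sample)) for sample in sensor_stream]

def upload_to_cloud(samples):
    """Push newly collected edge data to private or public cloud storage."""
    ...

def retrain_in_cloud(model, new_samples):
    """Continue training on the accumulated data in the data center or cloud."""
    return model  # would return an updated model checkpoint

def push_model_to_edge(model):
    """Deploy the updated model back out to the edge inference fleet."""
    ...

def continuous_learning_cycle(model, sensor_stream):
    samples = run_edge_inference(model, sensor_stream)
    upload_to_cloud(samples)
    updated = retrain_in_cloud(model, samples)
    push_model_to_edge(updated)
    return updated  # the cycle then repeats with the updated model
```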

Penguin Computing recently deployed a new supercomputer for LLNL in partnership with Intel and CoolIT. Could you tell us about this supercomputer and what it was designed for?

The Magma Supercomputer, deployed at LLNL, was procured through the Commodity Technology Systems (CTS-1) contract with the National Nuclear Security Administration (NNSA) and is one of the first deployments of Intel Xeon Platinum 9200 series processors, with complete direct liquid cooling from CoolIT Systems and an Omni-Path interconnect.

Funded through NNSA’s Advanced Simulation & Computing (ASC) program, Magma will support NNSA’s Life Extension Program and efforts critical to ensuring the safety, security and reliability of the nation’s nuclear weapons in the absence of underground testing.

The Magma Supercomputer is an HPC system that is enhanced by artificial intelligence and is a converged platform that allows AI to accelerate HPC modeling. Magma was ranked in the June 2020 Top500 list, breaking into the top 100, coming in at #80.

Under the CTS-1 contract, Penguin Computing has delivered more than 22 petaflops of computing capability to support the ASC program at the NNSA Tri-Labs of Lawrence Livermore, Los Alamos and Sandia National Laboratories.

What are some of the different ways Penguin Computing is supporting the fight against COVID-19?

In June 2020, Penguin Computing officially partnered with AMD to deliver HPC capabilities to researchers at three top universities in the U.S. – New York University (NYU), Massachusetts Institute of Technology (MIT) and Rice University – to help in the fight against COVID-19.

Penguin Computing partnered directly with AMD’s COVID-19 HPC Fund to provide research institutions with significant computing resources to accelerate medical research on COVID-19 and other diseases. Penguin Computing and AMD are collaborating to deliver a constellation of on-premises and cloud-based HPC solutions to NYU, MIT and Rice University to help elevate the research capabilities of hundreds of scientists who will ultimately contribute to a greater understanding of the novel coronavirus.

Powered by the latest 2nd Generation AMD EPYC processors and Radeon Instinct MI50 GPU accelerators, the systems donated to the universities are each expected to provide over one petaflop of compute performance. An additional four petaflops of compute capacity will be made available to researchers through our HPC cloud service, Penguin Computing® On-Demand™ (POD). Combined, the donated systems will provide researchers with more than seven petaflops of GPU accelerated compute power that can be applied to fight COVID-19.

The recipient universities are expected to utilize the new compute capacity across a range of pandemic-related workloads including genomics, vaccine development, transmission science and modeling.

Anything else you’d like to share about Penguin Computing?

For more than two decades, Penguin Computing has been delivering custom, innovative, and open solutions to the high performance and technical computing world. Penguin Computing solutions give organizations the agility and freedom they need to leverage the latest technologies in their compute environments. Organizations can focus their resources on delivering products and ideas to market in record time instead of on the underlying technologies. Penguin Computing’s broad range of solutions for AI/ML/Analytics, HPC, DataOps, and Cloud native technologies can be customized and combined not only to fit current needs but also to rapidly adapt to future needs and technology changes. Penguin Computing Professional and Managed Services help with integrating, implementing, and managing solutions. Penguin Computing Hosting Services can help with the “where” of the compute environment by giving organizations ownership options and the flexibility to run on-premises, on public or dedicated cloud, hosted or as-a-service.

Thank you for the great interview. Readers who wish to learn more should visit Penguin Computing.


Autonomous Vehicles

Andrew Stein, Software Engineer Waymo – Interview Series


Andrew Stein is a Software Engineer who leads the perception team for Waymo Via, Waymo’s autonomous delivery efforts. Waymo is an autonomous driving technology development company that is a subsidiary of Alphabet Inc, the parent company of Google.

What initially attracted you to AI and robotics?

I always liked making things that “did something” ever since I was very young. Arts and crafts could be fun, but my biggest passion was working on creations that were also functional in some way. My favorite parts of Mister Rogers’ Neighborhood were the footage of conveyor belts and actuators in automated factories, seeing bottles and other products filled or assembled, labeled, and transported. I was a huge fan of Legos and other building toys. Then, thanks to some success in Computer Aided Design (CAD) competitions through the Technology Student Association in middle and high school, I ended up landing an after-school job doing CAD for a tiny startup company, Clipper Manufacturing. There, I was designing factory layouts for an enormous robotic sorter and associated conveyor equipment for laundering and organizing hangered uniforms for the retail garment industry. From there, it was off to Georgia Tech to study electrical engineering, where I participated in the IEEE Robotics Club and took some classes in Computer Vision. Those eventually led me to the Robotics Institute at Carnegie Mellon University for my PhD. Many of my fellow graduate students from CMU have been close colleagues ever since, both at Anki and now at Waymo.

You previously worked as a lead engineer at Anki, a robotics startup. What are some of the projects that you had the opportunity to work on at Anki?

I was the first full-time hire on the Cozmo project at Anki, where I had the privilege of starting the code repository from scratch and saw the product through to over one million cute, lifelike robots shipped into people’s homes. That work transitioned into our next product, Vector, which was another, more advanced and self-contained version of Cozmo. I got to work on many parts of those products, but was primarily responsible for computer vision for face detection, face recognition, 3D pose estimation, localization, and other aspects of perception. I also ported TensorFlow Lite to run on Vector’s embedded OS and helped deploy deep learning models to run onboard the robot for hand and person detection.
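
For readers unfamiliar with TensorFlow Lite, on-device inference of the kind described follows roughly the pattern below. The model file and input are placeholders; this is the standard TFLite Python API, not Anki's embedded integration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical on-device person/hand detection model.
interpreter = tf.lite.Interpreter(model_path="person_detector.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy camera frame matching the model's expected input shape and dtype.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
detections = interpreter.get_tensor(output_details[0]["index"])
print(detections.shape)
```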

I also built Cozmo’s and Vector’s eye rendering systems, which gave me the chance to work particularly closely with much of Anki’s very talented and creative animation team, which was also a lot of fun.

In 2019, Waymo hired you and twelve other robotics experts from Anki to adapt its self-driving technology to other platforms, including commercial trucks. What was your initial reaction to the prospect of working at Waymo?

I knew many current and past engineers at Waymo and certainly was aware of the company’s reputation as a leader in the field of autonomous vehicles. I very much enjoyed the creativity of working on toys and educational products for kids at Anki, but I was also excited to join a larger company working in such an impactful space for society, to see how software development and safety are done at this organizational scale and level of technical complexity.

Can you discuss what a day working at Waymo is like for you?

Most of my role is currently focused on guiding and growing my team as we identify and solve trucking-specific challenges in close collaboration with other engineering teams at Waymo. That means my days are spent meeting with my team, other technical leads, and product and program managers as we plan for technical and organizational approaches to develop and deploy our self-driving system, called the Waymo Driver, and extend its capabilities to our growing fleet of trucks. Besides that, given that we are actively hiring, I also spend significant time interviewing candidates.

What are some of the unique computer vision and AI challenges that are faced with autonomous trucks compared to autonomous vehicles?

While we utilize the same core technology stack across all of our vehicles, there are some new considerations specific to trucking that we have to take into account. First and foremost, the domain is different: compared to passenger cars, trucks spend a lot more time on freeways, which are higher-speed environments. Due to a lot more mass, trucks are slower to accelerate and brake than cars, which means the Waymo Driver needs to perceive things from very far away. Furthermore, freeway construction uses different markers and signage and can even involve median crossovers to the “wrong” side of the road; there are freeway-specific laws like moving over for vehicles stopped on shoulders; and there can be many lanes of jammed traffic to navigate. Having a potentially larger blind spot caused by a trailer is another challenge we need to overcome.

Waymo recently began testing a driverless fleet of heavy-duty trucks in Texas with trained drivers on board. At this point in the game, what are some of the things that Waymo hopes to learn from these tests?

Our trucks test in the areas in which we operate (AZ / CA / TX / NM) to gain meaningful experience and data in all different types of situations we might encounter driving on the freeway. This process exercises our software and hardware, allowing us to learn how we can continue to improve and adapt our Waymo Driver for the trucking domain.

Looking at Texas specifically: Dallas and Houston are known to be part of the biggest freight hubs in the US. Operating in that environment, we can test our Waymo Driver on highly dense highways and shipper lanes, further understand how other truck and passenger car drivers behave on these routes, and continue to refine the way our Waymo Driver reacts and responds in these busy driving regions. Additionally, it also enables us to test in a place with unique weather conditions that can help us drive our capabilities in that area forward.

Can you discuss the Waymo Open Dataset which includes both sensor data and labeled data, and the benefits to Waymo for sharing this valuable dataset?

At Waymo, we’re tackling some of the hardest problems that exist in machine learning. To aid the research community in making advancements in machine perception and self-driving technology, we’ve released the Waymo Open Dataset, which is one of the largest and most diverse publicly available fully self-driving datasets. Available at no cost to researchers at waymo.com/open, the dataset consists of 1,950 segments of high-resolution sensor data and covers a wide variety of environments, from dense urban centers to suburban landscapes, as well as data collected during day and night, at dawn and dusk, in sunshine and rain. In March 2020, we also launched the Waymo Open Dataset Challenges to provide the research community a way to test their expertise and see what others are doing.
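
As a pointer for readers who want to try the data, here is a minimal sketch of reading one frame from a downloaded segment with the published waymo-open-dataset tools and TensorFlow. The file name is a placeholder, and field details may vary between dataset releases.

```python
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

# Placeholder path to one downloaded segment file from waymo.com/open.
FILENAME = "segment-XXXX_with_camera_labels.tfrecord"

dataset = tf.data.TFRecordDataset(FILENAME, compression_type="")
for data in dataset.take(1):
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(data.numpy()))
    # Each frame carries synchronized camera images, lidar returns, and labels.
    print(frame.context.name, len(frame.images), len(frame.laser_labels))
```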

In your personal opinion, how long will it be until the industry achieves true level 5 autonomy?

We have been working on this for over ten years now and so we have the benefit of that experience to know that this technology will come to the world step by step. Self-driving technology is so complex and we’ve gotten to where we are today because of advances in so many fields from sensing in hardware to machine learning. That’s why we’ve been taking a gradual approach to introduce this technology to the world. We believe it’s the safest and most responsible way to go, and we’ve also heard from our riders and partners that they appreciate this thoughtful and measured approach we’re taking to safely deploy this technology in their communities.

Thank you for the great interview. Readers who wish to learn more should visit Waymo Via.


Interviews

Michael Schrage, Author of Recommendation Engines (The MIT Press) – Interview Series


Michael Schrage is a Research Fellow at the MIT Sloan School of Management’s Initiative on the Digital Economy. A sought-after expert on innovation, metrics, and network effects, he is the author of Who Do You Want Your Customers to Become?, The Innovator’s Hypothesis: How Cheap Experiments Are Worth More than Good Ideas (MIT Press), and other books.

In this interview we discuss his book “Recommendation Engines” which explores the history, technology, business, and social impact of online recommendation engines.

What inspired you to write a book on such a narrow topic as “Recommendation Engines”?

The framing of your question gives the game away… When I looked seriously at the digital technologies and touchpoints that truly influence people’s lives all over the world, I almost always found a ‘recommendation engine’ driving decisions. Spotify’s recommenders determine the music and songs people hear; TikTok’s recommendation engines define the ‘viral videos’ people put together and share; Netflix’s recommenders have been architected to facilitate ‘binge watching’ and ‘binge watchers’; Google Maps and Waze recommend the best and/or fastest and/or simplest ways to get there; Tinder and Match.com recommend who you might like to be with or, you know, ‘be’ with; Stitch Fix recommends what you might want to wear that makes you ‘you’; Amazon will recommend what you really should be buying; Academia and ResearchGate will recommend the most relevant research you should be up to date on… I could go on – and do, in the book – but both technically and conceptually, ‘Recommendation Engines’ are the antithesis of ‘narrow.’ Their point and purpose covers the entire sweep of human desire and decision.

A quote in your book is as follows: “Recommenders aren’t just about what we might buy, they’re about who we might want to become”.  How could this be abused by enterprises or bad actors?

There’s no question or doubt that recommendation can be abused. The ‘classic’ classic question – Cui bono? – ‘Who benefits?’ – applies. Are the recommendations truly intended to benefit the recipient or the entity/enterprise making the recommendation? Just as it’s easy for a colleague, acquaintance or ‘friend’ who knows you to offer up advice that really isn’t in your best interest, it’s a digital snap for ‘data driven’ recommenders to suggest you buy something that increases ‘their’ profit at the expense of ‘your’ utility or satisfaction. On one level, I am very concerned about the potential – and reality – of abuse. On the other, I think most people catch on pretty quickly when they’re being exploited or manipulated by people or technology. Fool me once, shame on you; fool me twice or thrice, shame on me. Recommendation is one of those special domains where it’s smart to be ethical and ethical to be smart.

Are echo chambers where users are just fed what they want to see regardless of accuracy a societal issue?

Eli Pariser coined the excellent phrase ‘the filter bubble’ to describe this phenomenon and pathology. I largely agree with his perspective. In truth, I think it now fair to say that ‘confirmation bias’ – not sex – is what really drives most adult human behavior. Most people are looking for agreement most of the time. Recommenders have to navigate a careful course between novelty, diversity, relevance, and serendipity because – while too much confirmation is boring and redundant – too much novelty and challenge can annoy and offend. So, yes, the quest for confirmation is both a personal and social issue. That said, recommenders offer a relatively unobnoxious way to bring alternative perspectives and options to people’s attention. However, I do, indeed, wonder whether regulation and legal review will increasingly define the recommendation future.

Filter bubbles currently limit exposure to conflicting, contradictory, or challenging viewpoints. Should there be some type of regulation that discourages this type of over-filtering?

I prefer light-touch to heavy-handed regulatory oversight. Most platforms I see do a pretty poor job of labelling ‘fake news’ or establishing quality control. I’d like to see more innovative mechanisms explored: swipe left for a contrarian take; embed links that elaborate on stories or videos in ways that deepen understanding or decontextualize the ‘bias’ that’s being confirmed. But let’s be clear: choice architectures that ‘discourage’ or create ‘frictions’ require different data and design sensibilities than those that ‘forbid’ or ‘censor’ or ‘prevent.’ I think this is a very hard problem for people and machines alike. What makes it particularly hard is that human beings – in fact – are less predictable than a lot of psychologists and social scientists believe. There are a lot of competing ‘theories of the mind’ and ‘agency’ these days. The more personalized recommendations and recommenders become, the more challenging and anachronistic ‘one size fits all’ approaches become. It’s one of the many reasons this domain interests me so.

Should end users and society demand explainability as to why specific recommendations are made?

Yes, yes and yes. Not just ‘explainability’ but ‘visibility,’ ‘transparency’ and ‘interpretability,’ too. People should have the right to see and understand the technologies being used to influence them. They should be able to appreciate the algorithms used to nudge and persuade them. Think of this as the algorithmic counterpart to ‘informed consent’ in medicine. Patients have the right to get – and doctors have the duty to provide – the reasons and rationales for choosing ‘this’ course of action over ‘that’ one. Indeed, I argue that ‘informed consent’ – and its future – in medicine and health care offers a good template for the future of ‘informed consent’ for recommendation engines.

Do you believe it is possible to “hack” the human brain using Recommender Engines?

The brain or the mind? Not kidding. Are we materially – electrically and chemically – hacking neurons and lobes? Or are we using less invasive sensory stimuli to evoke predictable behaviors? Bluntly, I believe some brains – and some minds – are hackable some of the time. But do I believe people are destined to become ‘meat puppets’ who dance to recommendation’s tunes? I do not. Look, some people do become addicts. Some people do lose autonomy and self control. And, yes, some people do want to exploit others. But the preponderance of evidence doesn’t make me worry about the ‘weaponization of recommendation.’ I’m more worried about the abuse of trust.

A quote in a research paper co-authored by Jason L. Harman states the following: “The trust that humans place on recommendations is key to the success of recommender systems”. Do you believe that social media has betrayed that trust?

I believe in that quote. I believe that trust is, indeed, key. I believe that smart and ethical people truly understand and appreciate the importance of trust. With apologies to Churchill’s comment on courage, trust is the virtue that enables healthy human connection and growth. That said, I’m comfortable arguing that most social media platforms – yes, Twitter and Facebook, I’m looking at you! – aren’t built around or based on trust. They’re based on facilitating and scaling self-expression. The ability to express one’s self at scale has literally nothing to do with creating or building trust. There was nothing to betray. With recommendation, there is.

You state your belief that the future of Recommender Engines will feature the best recommendations to enhance one’s mind. In your opinion are any Recommendation Engines currently working on such a system?

Not yet. I see that as the next trillion dollar market. I think Amazon and Google and Alibaba and Tencent want to get there. But, who knows, there may be an entrepreneurial innovator who surprises us all: maybe a Spotify incorporating mindfulness and just-in-time whispered ‘advice’ may be the mind-enhancing breakthrough.

How would you summarize how Recommendation Engines enable users to better understand themselves?

Recommendations are about good choices…. sometimes, even great choices. What are the choices you embrace? What are the choices you ignore? What are the choices you reject?  Having the courage to ask – and answer – those questions gives you remarkable insight into who you are and who you might want to become. We are the choices we make; whatever influences those choices has remarkable impact and influence on us.

Is there anything else that you would like to share about your book?

Yes – in the first and final analysis, my book is about the future of advice and the future of who you ‘really’ want to become. It’s about the future of the self – your ’self.’ I think that’s both an exciting and important subject, don’t you?

Thank you for taking the time to share your views.

To our readers, I highly recommend this book; it is currently available on Amazon in Kindle or paperback. You can also view more ordering options on the MIT Press page.
