Adam Rodnitzky is the COO & Co-Founder of Tangram Robotics, a company that helps robotics companies integrate sensors quickly and maximize uptime.
What initially attracted you to Robotics?
I’ve always loved mechanical things, and I’ve always loved cutting-edge technology. Robots sit right at the intersection of those two interests. Beyond that foundation of what they are, however, is what they can do. For the longest time, robots were largely relegated to factory settings, where they worked under relatively constrained circumstances. That meant that for most people, robots were something they knew about but never experienced. It’s only been recently that robots have started to play a larger role in society, and that is largely because the technology required to let them operate safely and consistently in the human world is just now becoming viable. The future of robotics is being built as we speak, and the level of interaction between robots and humans is going to grow exponentially in the next decade. I’m very excited to witness that.
You were a mentor at StartX, a seed-stage accelerator out of Stanford University, for over a decade. What did you learn from this experience?
Being a company founder comes with a lot of uncertainty, as you face challenges you’ve never encountered before and try to pattern-match on prior experience to make sense of the day-to-day realities of running a new company. Looking to mentors for guidance is a natural response to that uncertainty. But there is a challenge in taking advice from mentors: they will offer advice based on their own past experiences, yet those experiences occurred in different contexts, at different company stages, and for different reasons. As a mentor, you’ve got to remember this when giving advice. You may have the best intentions, but you might lead a company astray by not properly contextualizing advice based on past experience. I’ve tried to keep this in mind as I mentor companies at StartX.
You previously worked as a General Manager for Occipital, which develops state-of-the-art mobile computer vision applications and hardware. Could you tell us what this role involved on a day-to-day basis?
When I was at Occipital, our core product was the Structure Sensor and SDK, which made it simple to add 3D sensing to mobile devices, and develop applications to take advantage of that 3D data stream. On a day-to-day basis, I saw my role as combining a short-term tactical and long-term strategic pursuit of revenue and revenue growth. For instance, the SDK was free, and therefore it generated no revenue on a daily basis. However, as developers used the SDK to create apps to use Structure Sensor, there was a direct relationship between the number of apps published on our platform and the rate of sensor sales. So on a daily basis, I’d pursue these indirect revenue opportunities around developer community support, while also setting up programs to sell our sensors in as many channels as possible – including directly through those developers.
When did you first get the idea to launch a robotics startup?
Much of the credit here goes to my co-founder, Brandon Minor. Brandon is a co-founder of Colorado Robotics, and has had his finger on the pulse of the robotics community as long as I have known him. We had both left Occipital independently with the idea of starting companies. Earlier this year, we met up and he proposed that we join forces to build on our past experience with robots, computer vision and sensors. And that is how Tangram Robotics was created.
Could you tell us what Tangram Robotics does?
Tangram Robotics offers sensors-as-a-service to robotics platforms. All robots need perception sensors, but not all of those sensors meet the performance needs of robotics. We infuse trusted hardware with Tangram software that makes integration, calibration, and maintenance a breeze during development and deployment. This means that roboticists don’t need to make any trade-offs; they can start using the best sensors for their platform from day one, and keep that momentum as they deploy.
What are some of the existing challenges companies face when it comes to the integration of Robotic Perception Sensors?
Our interviews with robotics companies of all types have led us to the conclusion that hardware companies make great hardware, but marginal software. The process of developing the right streaming and integration software for a sensor therefore falls to the robotics companies themselves and can take months to get right. Not only that, but every robotics company is going through this same process, for the same sensors, over and over as they build up their perception stack. This results in a major loss of engineering time and customer revenue. We’ve set up our solution so that it can help robotics companies at any stage, from design through development and ultimately to deployment.
Could you discuss Tangram Robotics web-based diagnostics and monitoring systems?
Tangram understands that the key to improvement is in metrics, both during development and in the field. With that in mind, we are creating remote diagnostics systems that work on top of our integration software that allow robotics developers to better understand what’s happening during operation. This includes data transmission rates, processing time, and metrics directly related to other aspects of our platform. Setting this up over a web portal means that decisions can be made competently without needing the physical presence of an engineer.
One of the solutions Tangram Robotics is working on is developing full-stack tools for robotic companies to add to their project. Could you discuss the vision behind these tools?
Sensor integration is much more than streaming. We look at sensors from a holistic perspective, focusing on the tools needed to develop faster and work longer. This includes competent calibration tools that work in the field, as well as diagnostics and monitoring of data and performance. By solving the base requirements of many robot platforms out-of-the-box, Tangram’s tools dramatically improve time-to-market. We anticipate that various other tools will be requested as our platform matures.
Is there anything else that you would like to share about Tangram Robotics?
As we’ve gone through the process of talking with roboticists, we’ve been blown away at the diversity of applications that robotics companies are pursuing. We’ve spoken to companies building all sorts of wild solutions, from strawberry pickers to sous chefs to boat captains to groundskeepers!
Thank you for the interview. I believe sensors are often overlooked by companies, and I look forward to following your progress. Readers who wish to learn more should visit Tangram Robotics.
Matt Carlson, VP Business Development at WiBotic – Interview Series
Matt Carlson is the Vice President of Business Development at WiBotic Inc, a company that provides reliable wireless power solutions to charge aerial, mobile and aquatic robot systems.
Why are wireless charging solutions so important to the future of robotics?
Robots need the ability to autonomously charge for most applications. It simply isn’t cost effective to hire a staff of workers to manage battery charging or battery swapping. However, most autonomous charging today is done using docking stations that require physical mating of electrical contacts.
This requires very precise navigation into the charging dock, which is difficult to program and is not always reliable. Failing to properly align the contacts can mean a missed charging cycle and robot downtime. Contact-based stations will also wear out over time, or the contacts may become dirty or corroded – again resulting in inconsistent charging. Finally, robot OEMs use a wide range of electrical contact types, making it nearly impossible to have a single charging station that can charge any robot.
Wireless systems have none of these issues. WiBotic systems offer several centimeters of alignment tolerance, so it’s not necessary to have an extremely precise navigation stack. Because the antennas can be fully sealed to the elements and don’t make physical contact with one another, wireless systems are also highly reliable and can handle an unlimited number of charge cycles. Finally, as robot use grows, most companies will employ more than one type of robot. Rather than having a wall or room dedicated to many different charging docks, a single wireless charging station can recharge any robot that is retrofitted with a simple receiver antenna, saving money and space.
WiBotic’s initial focus was on powering medical devices. What was the reason for pivoting toward robots, drones, and Autonomous Underwater Vehicles (AUVs)?
WiBotic’s two founders, Ben Waters and Josh Smith, did indeed focus on wireless power for medical devices during much of their research at the University of Washington. Their technology increased the range and reliability of wireless power, which were both critical for the medical market. However, when Ben received his PhD and founded WiBotic, the company immediately focused on robotics as its primary market. This was based on demand from the robotics industry.
Robot and drone OEMs and end-users recognized the benefits of WiBotic technology in terms of power level and range when compared with other wireless systems. They were also beginning to struggle with the deployment of contact-based chargers for large fleets of robots and were looking for more reliable solutions.
For the drone market, contact-based charging is a non-starter in most cases since drones operate outdoors (mostly), where water becomes an issue with any physical electrical contacts. And, of course, underwater applications also benefit from the fully sealed nature of wireless power.
What are the power transfer technologies being used?
WiBotic uses elements of both electrical induction and magnetic resonance for power transfer. These two methods are relatively well proven at a wide range of power levels. What sets WiBotic apart is our ability to manage the connection (technically the impedance) between antennas in real time. We call this Adaptive Impedance Matching.
One of the biggest challenges with wireless power, especially for robotics, is that the electrical environment is constantly changing. If the robot docks in a slightly different position, if its internal electronics turn on and off during charging, and as the battery itself charges up, the impedance between the transmit and receive sides of the system changes. This can dramatically affect efficiency and range. Our AIM technology constantly monitors changes in impedance so we can maintain efficiency and power levels, even as all of those other elements in the system are changing.
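WiBotic’s actual AIM implementation is proprietary, but the core idea of continuously retuning toward a target impedance can be sketched as a simple proportional feedback loop. Everything below – the target value, the gain, and the measurements – is an illustrative assumption, not WiBotic’s design:

```python
# Toy illustration only -- WiBotic's real AIM algorithm is proprietary.
# The idea: repeatedly measure the impedance the receiver presents, then
# adjust a matching correction so the transmitter keeps seeing its target.

TARGET_OHMS = 50.0  # assumed design impedance for this sketch

def retune_step(measured_ohms, matching_ohms, gain=0.5):
    """Apply one proportional correction toward the target impedance."""
    error = TARGET_OHMS - measured_ohms
    return matching_ohms + gain * error

# Simulate the battery charging up: the measured impedance drifts over time,
# and the matching correction tracks it.
matching = 0.0
for measured in [62.0, 58.0, 55.0, 52.0]:
    matching = retune_step(measured, matching)
    print(f"measured {measured:.0f} ohm -> correction {matching:+.1f} ohm")
```

In a real system the correction would drive a tunable matching network in hardware, and the controller would be considerably more sophisticated than a single proportional term.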
Could you discuss the efficiency of the units, such as how much power is lost during power transmission?
For WiBotic’s 250-300 Watt systems we have an end-to-end efficiency level of between 70% and 80%. This represents the full system efficiency from the input to our transmitter all the way to the output to the battery. The actual antenna-to-antenna portion of that equation is about 95% efficient, but there are losses in the transmitter circuitry and also in the battery charging circuitry. That last part is important to note since even a very well designed “plug in” battery charger is typically only around 90-95% efficient.
Using a wireless system like ours therefore results in about 10% less efficiency than the status quo of contact-based charging.
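As a sanity check on the numbers above, end-to-end efficiency is simply the product of the per-stage efficiencies. The transmitter and charging-circuitry values below are assumptions chosen to land inside the quoted ranges, not WiBotic’s actual figures:

```python
# End-to-end efficiency is the product of each stage's efficiency.
def end_to_end_efficiency(stages):
    """Multiply per-stage efficiencies (0-1) into one end-to-end figure."""
    result = 1.0
    for eff in stages:
        result *= eff
    return result

# Assumed stages: transmitter circuitry ~90%, antenna-to-antenna ~95%
# (quoted above), battery charging circuitry ~93%.
wireless = end_to_end_efficiency([0.90, 0.95, 0.93])
plug_in = 0.90  # a typical wired charger, per the quoted 90-95% range

print(f"wireless end-to-end: {wireless:.0%}")        # 80%, inside 70-80%
print(f"difference vs wired: {plug_in - wireless:.0%}")  # ~10%
```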
What are the distance constraints with how close the robotic unit needs to be near the power source?
This depends on the size of the antennas used. Our standard transmitter antenna is 20cm in diameter and the receiver antenna is 10cm in diameter. With those antenna sizes, we allow for 5cm of face-to-face air gap between antennas and up to 5cm of side-to-side offset from a concentric position (so 10cm of total side-to-side range).
Unlike other wireless power systems, and due to our AIM technology, we deliver full power to the battery at any point within that range. Ranges can be increased by increasing the diameter of the antennas. Because our antennas are relatively simple PCBAs (which are also very thin and lightweight) we’re able to modify and produce custom versions of them relatively inexpensively for customers who prefer a different size.
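As a toy illustration of that charging envelope, a hypothetical docking check built from the quoted tolerances (5cm air gap, 5cm lateral offset from concentric) might look like the following; the function and constants are illustrative, not part of any WiBotic API:

```python
import math

# Quoted tolerances for the standard 20cm/10cm antenna pair.
GAP_LIMIT_CM = 5.0
OFFSET_LIMIT_CM = 5.0

def within_charging_range(air_gap_cm, offset_x_cm, offset_y_cm):
    """Return True if the receiver antenna sits inside the charging envelope."""
    lateral = math.hypot(offset_x_cm, offset_y_cm)  # radial offset from center
    return 0 <= air_gap_cm <= GAP_LIMIT_CM and lateral <= OFFSET_LIMIT_CM

print(within_charging_range(3.0, 2.0, 2.0))  # True: well inside tolerance
print(within_charging_range(3.0, 4.0, 4.0))  # False: ~5.7cm lateral offset
```

A navigation stack only needs to dock the robot somewhere inside this envelope, which is what makes the extreme-precision docking of contact-based systems unnecessary.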
Are multiple robots able to use the same charging station?
Absolutely! Only one robot can charge at a wireless charging station at a time, but entire fleets of diverse robots can all share the same charging station (or set of charging stations). This is possible because, unlike most contact-based chargers, the transmitter station is not sending out a specific voltage and current level. Instead it is sending wireless power at a designated frequency. Our Onboard Charger, installed on the robot, then converts that wireless energy into the specific voltage and current needed by that vehicle.
We support batteries from 0-60V and current levels from 0-30A with our current product line.
Could you discuss some of the power optimization software that is currently offered?
Our wireless power hardware ships with a web-based GUI that allows customers to configure the system for a wide range of parameters. For instance, users can choose to charge to the typical “100% charge” level for a particular battery. But if they do this every time, they may not get as many charge cycles out of the battery. So if 100% charge isn’t needed, the maximum voltage level can be adjusted downward to extend battery lifespan.
Similarly, if the battery is always charged with the maximum current (amps), its lifespan will be reduced. Using our GUI and APIs, users can proactively schedule charging so they charge as fast as possible when the robot needs to get back into service, or more slowly when they know they have more time (overnight, for example). These configurability and battery optimization features are available with our standard GUI and by using our APIs.
We also offer a new software product that allows users to map and then aggregate charging information from across an entire fleet of WiBotic transmitters and receivers. This allows robots to know when and where charging stations are available to help them maximize uptime. It also allows detailed reporting on the charging performance of batteries over time, helping identify battery issues and optimizing power delivery across the entire fleet. These features become particularly useful if the end-user is able to implement opportunity charging schemes, where robots charge many times per day for shorter periods of time, rather than leaving service for several hours at a time to charge.
Offering wireless power to Autonomous Underwater Vehicles (AUVs) seems like it would be extremely challenging, could you discuss this?
Yes, there are definitely many challenges with underwater applications. From a power transfer perspective a couple centimeters of saltwater will attenuate power transfer by about 50%, so it will take longer to charge the same sized batteries underwater than it would in air.
The antenna range is also more restricted for that same reason, which means the AUV must have very good navigation to successfully find and dock at the charging station. This is usually aided by some sort of physical alignment device that directs the AUV into the charging station and helps to align the antennas.
The benefit of wireless power underwater however, is that the antennas can be fully potted or sealed. WiBotic systems are currently operating at the MBARI MARS research station off the coast of Monterey, CA at a depth of nearly 3000ft. In that case, the transmitter and receiver electronics are housed in 1atm pressure bottles, but electronics can also be designed for oil filled enclosures to withstand even greater depth.
WiBotic continues to work with the DoD, various universities, non-profits and commercial partners to expand the use of our systems underwater, but it is definitely a challenging environment!
WiBotic has recently announced equipment authorization from the Federal Communications Commission (FCC) for its high power transmitters and receivers. These products are the first systems – operating at up to 300 Watts – to receive FCC approval for use in mobile robots, drones, and other industrial devices. Why is this important and what does this mean for the future of robotics and drones?
As the robotics industry continues to grow, OEMs and robot end-users are facing an increasing level of regulation and stricter safety requirements. It’s important for our customers to know that WiBotic products, as a component within their larger robotic solutions, will meet those regulatory requirements. In short, it allows robot and drone manufacturers to focus on additional features and functionality for end-users rather than dealing with certification questions. This will let them deploy larger fleets faster than would otherwise be possible.
Is there anything else that you would like to share about WiBotic?
Because most people think of the physical antennas and circuit boards when they think of wireless power, the immense amount of work we have put into our software and firmware is often overlooked. In many ways, it is the advanced firmware we’ve developed that allows our hardware to perform at such useful ranges and power levels.
We’re also continuing to add to our fleet power optimization software capabilities to allow for even greater analysis and benchmarking of the use of power and durability of batteries across a wide range of robotic applications.
Thank you for the great interview. Readers who wish to learn more should visit WiBotic Inc, or read about how WiBotic Received an Industry-First FCC Approval for High Power Wireless Charging of Robots & Drones.
Dave Ryan, General Manager, Health & Life Sciences Business at Intel – Interview Series
Dave Ryan leads the Global Health & Life Sciences business unit at Intel that focuses on digital transformation from edge-to-cloud in order to make precision, value-based care a reality. His customers are the manufacturers who build life sciences instruments, medical equipment, clinical systems, compute appliances and devices used by research centers, hospitals, clinics, residential care settings and the home. Dave has served on the boards of Consumer Technology Association Health & Fitness Division, HIMSS’ Personal Connected Health Alliance, the Global Coalition on Aging and the Alliance for Connected Care.
What is Intel’s Health & Life Sciences Business?
Intel’s Health & Life Sciences business helps customers create solutions in the areas of medical imaging, clinical systems, and lab and life sciences, enabling distributed, intelligent, and personalized care.
Intel’s Health business focuses on population health, medical imaging, clinical systems, and digital infrastructure.
- Population Health examines diverse patient data to give providers insights into risks for medical issues and improved treatments across cohorts. Optimized and tuned ML and AI help “tier” groups, so payers and providers prioritize patients at most risk.
- Medical Imaging (e.g., MRI, CT) generates enormous data sets requiring accurate evaluation with no room for error. HPC and AI help more quickly scan image data and identify critical factors to assist radiologists in diagnosis.
- Clinical Systems use computer vision, AI, HPC and edge computing for patient monitoring, robotic surgery, telehealth, and many other applications. These intelligent systems reconcile diverse source data for a complete patient view and better diagnosis, with flexibility and scalability to support changing organizational needs.
- Digital Infrastructure integrates many technologies to enable novel approaches to patient interaction including anywhere anytime care where clinicians collaborate across space and time for condition management, surgery, and analytics.
Intel’s Lab and Life Sciences business is focused in 3 primary areas: Data Analytics, ‘Omics, and Pharma.
- Data Analytics uses AI to drive a cascade of discoveries and insights that help enable, among other things, precision medicine by ensuring that patients get the drugs that are most effective for them, thereby reducing the risk of side effects.
- ‘Omics describes and quantifies biological molecule groups, using bioinformatics and computational biology. The massive data sets involved here require high-throughput processing to produce results within reasonable timeframes. With this throughput and new databases, toolkits, libraries, and code optimizations, ‘omics institutions can reduce time to results and development costs.
- Pharma is the study of drugs and how they interact with human biological systems, including at a molecular level where data science needs AI and ML to assist with lead generation and optimizations, target ID and preclinical research. This results in better clinical trials, smarter reaction insights and faster new drug discovery.
When did you personally initially become interested in using AI for the benefit of healthcare?
The proliferation of AI across many industries has largely been about automating those tasks routinely performed by humans. In healthcare, AI has become a tool through which we augment or assist, not replace, existing human expertise to deliver truly transformative approaches to diagnosis and treatment. And nowhere is this clearer than in medical imaging, in which data volume and complexity is both barrier and opportunity. Today, AI, and inferencing in particular, is able to perform more rapid and detailed scans of vast arrays of information than any human can, and in so doing not only reveals insights previously hidden but also maximizes the valuable time of the radiologist, who can reach a better diagnostic conclusion for more patients. For example, AI solutions from customers help radiologists by analyzing data in X-rays which could indicate the presence of a collapsed lung (pneumothorax) or COVID. That is a truly remarkable achievement that is revolutionizing the efficacy of both medical imaging itself and how the human expertise is applied. Witnessing that kind of transformation in this one field naturally motivates one to seek out the next great leap in other health and life sciences pursuits where man and machine combine to produce a new whole so much greater than the sum of its parts. Taking that a step further is the idea that AI can democratize knowledge across care disciplines and make scarce human expertise and experience-based nuance go even further, raising the level of quality.
How important is AI to analyzing big data in a clinical setting?
The Health and Life Sciences industries generate more data with greater complexity than any other single industry in the world today. And unlike other industries, effectively managing and analyzing that data is a matter of life and death. Given these magnitudes, AI is now an indispensable enabler of a range of needs, both mundane and breakthrough, in both the clinical and lab settings to address the industry’s Triple Aim: Improve care quality and access while lowering costs.
For example, electronic health records (EHR) have enabled a digital revolution in the quality and efficiency of care delivery. Unfortunately, within these records is a messy mix of both unstructured and structured data which AI can help digitize into more unified and useful data sets. Optical character recognition (OCR) and natural language processing (NLP) are just two AI-enabled models that can convert the analogs of handwriting and voice into EHR data. And once digitized, AI can be applied across these data sets in many exciting use cases.
In other instances, data captured from medical devices and cameras is growing and, when combined with patient history data, analytics can help drive new insights to further personalize treatment. At a census level, many hospitals have already deployed algorithms that can predict sepsis onset for quicker intervention, and in ICUs, software can combine data across multiple isolated devices to create an impressively complete picture of that patient in near-real-time. Over time, all that captured and stored data can also be analyzed for better predictions in the future.
What are some of the more notable use cases that you are seeing for machine learning analyzing this data?
As mentioned above, NLP tools can help replace manual scribing or data entry to generate new documents, like patient visit summaries and detailed clinical notes. This enables clinicians to see more patients, and providers to improve documentation, workflow, and billing accuracy by entering orders and documentation sooner in the day.
More broadly, AI-enabled analytics help providers understand and manage a wide range of clinical applications that improve efficiency and lower costs. This allows hospitals to better manage resources and fine tune best practices, and care teams to collaborate on diagnoses and coordinate treatments and overall care they deliver to improve patient outcomes.
Clinicians can analyze for targeted abnormalities using appropriate ML approaches and filter out structured information from other raw data. This can lead to quicker and more accurate diagnosis and optimal treatments. For example, ML algorithms can convert the diagnostic system of medical images into automated decision-making by converting images to machine readable text. ML and pattern recognition techniques can also draw insights from massive volumes of clinical image data, unmanageable by human alone, to transform the diagnosis, treatment and monitoring of patients.
To assess and manage population health, ML algorithms can help predict future risk trajectories, identify risk drivers, and provide solutions for best outcomes. Deep learning modules integrated with AI technologies allow the researchers to interpret complex genomic data sets, to predict specific types of cancer (based on the gene expression profiles obtained from various large data sets) and identify multiple druggable targets.
Could you elaborate on how Intel is collaborating with the genomics community to transform large datasets into biomedical insights that accelerate personalized care?
Precision medicine supplies individual-level health data sources that enable better selection of disease targets and identification of patient populations that demonstrate improved clinical outcomes to novel preventative and therapeutic approaches.
Genomics is the cornerstone of this precision medicine. It provides the blueprint of who we are, and why and how we are unique which is critical for providers to understand as they combine this information with other data (images, clinical chemistry, medical history, cohort data, etc.). Clinicians use this information to develop and deliver patient-specific treatments that are lower risk and more effective.
Intel is collaborating with the genomics community by optimizing the most commonly used genetic analysis tools in the industry to run best across Intel architecture-based platforms and the processors that power them. For example, optimization of the Broad Institute’s industry-leading genetic variant software, the Genomic Analysis Toolkit (GATK), on Intel hardware using OpenVINO to ease AI model development, debugging, and scalable deployment, highlights our impact and commitment to this industry. The GATK toolkit provides benefits to biomedical research such as GenomicsDB, which efficiently stores files ~200GB in size (typical for genomic datasets), and the Genome Kernel Library running AVX512, which takes advantage of specific Intel architecture hardware instructions to accelerate genomic workloads and AI utilization.
Accelerating the speed and reducing the cost of genomic analysis while maintaining the accuracy of that analysis, continues to be compelling to biomedical and other life sciences researchers as they use Intel compute solutions to discover and harness new medical insights.
Could you discuss why you believe that remote healthcare is so important?
The Health industry has been working on various forms and aspects of remote care for many years. The motivation has been, up until recently, an intuitive and hoped-for belief that remote care can be, for many care delivery situations, as good as or better than traditional in-clinic models. Now, spurred by the pandemic crisis and its impact, health care delivery systems around the world are forced to adopt telehealth or collapse. This sudden rush to implement is now proving those long-held beliefs to be true, and care at a distance to be both vital and highly viable.
Remote care has many benefits. Patient comfort and satisfaction with telehealth care delivery is rising rapidly. Patients are able to remain calmer and at ease in their home with less disruption and time/schedule impact. Providers like it because it allows them to see more patients, better manage their own time, and better allocate scarce clinical resources. And of course, what has become the clearest and most compelling reason these past few months for everyone is the inherent ability of remote care to limit contagion and the need for in-person contact, when a video chat with augmented device and compute telemetry can get most care delivery tasks done just as well.
Can you discuss some of the technologies that are currently being used for remote patient monitoring?
There are several critical technology elements. The most important is ease of use for the patient, quickly followed by security and privacy of the data, and the robustness of the application and the data it captures. For example, we need to prevent a user from deleting a monitoring app from her iPad by accident.
Another critical aspect for a care provider deploying across multiple patients is fleet management and the ability to send updates or tech support down the wire, tailored to each user or cohort of users. This requires:
- standardization of the data exchange and privacy with industry standards such as FHIR and Continua;
- a secure and power-efficient compute platform to orchestrate the data and communicate it back to the clinician, including appropriate software and encryption;
- connectivity through a cellular network to make the user devices stand-alone and not dependent on Wi-Fi at home that may be unreliable or even non-existent;
- cloud storage and analytics on the backend.
In addition, the ability to gather and aggregate the data flowing in from users is fundamental to enabling clinicians to do patient monitoring and support, and for the software and analytics to inform care teams of a nominal state or initiate an alarm notification for results that are out of tolerance.
We believe that AI will play a much larger role in patient monitoring moving forward, improving the patient experience through natural voice surveys (“How are you feeling today?”, “Your blood pressure seems a bit high”) and allowing care teams to better understand a patient’s health and identify appropriate treatments. Through the use of AI models, population health management will also progress with all patient data folding into ever larger data sets which improve accuracy of an iterative learning model. This is essential for remote monitoring at scale.
What are some of the problems that need to be overcome to increase the success rate of remote healthcare?
Many of the same issues that plague our current system of traditional care delivery are also factors in enhancing or inhibiting the success of remote care. These include societal sub-segment beliefs and stigmas surrounding healthcare, or socio-economic barriers stemming from lack of insurance, technology fluency, required devices, and connectivity. Data silos prevent maximizing value that larger shared data sets could produce especially now that our ability to harness learning programs is truly emerging.
But there are challenges that are unique to remote care:
- policy and payment issues, though much improved of late, must continue their positive momentum, with relaxed restrictions on what is allowable and reimbursable via remote care modalities;
- financial challenges and a lack of capital to invest in technology in health care require a conversion from a CapEx model to an OpEx model. Rather than investing in facilities and capital equipment, providers can shift to a “pay as you go” model, foregoing the need for a lot of fixed infrastructure and, like phone service, paying for the minutes (or data) used;
- user experience, for both patient and provider, must continue to improve, ultimately to the point where the technology disappears into the background, the capabilities are intuitive and seamless, and the process is compelling, with equivalent or better outcomes and cost structures.
Ultimately, we want the technology to support the provision of care, not get in the way of it. If we are successful (and we believe we are and will continue to be), then the technology truly will allow a bridge to tomorrow’s better model of remote care delivery, making the best possible case for the normalization of remote care as standard of care delivery.
Thank you for the fantastic interview, I enjoyed learning more about Intel’s health efforts. Readers who wish to learn more should visit Intel’s Global Health & Life Sciences business.
Andrea Sommer, Founder & Business Lead at UvvaLabs – Interview Series
Andrea Sommer is the Founder & Business Lead at UvvaLabs, a female-founded technology company that uses AI to help companies make better decisions that create more diverse and accessible workforces.
Could you discuss how UvvaLabs uses AI to assist companies in creating more diverse and accessible workforces?
Our approach looks at offering structural solutions to the very structural problem of inequity in the workplace. Through our research and experience, we’ve built a model of what the ‘ideal’ organization looks like from a diversity and accessibility perspective. Our AI analyzes and evaluates data across an organization to create a version of that organization’s ‘current state’ from a diversity perspective. By comparing the two sides – the ideal to the current – we can offer recommendations on what structures to build and which to remove to bring the organization closer to that ideal state.
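The ideal-to-current comparison described above can be illustrated with a toy sketch. This is not UvvaLabs' actual model; the metric names, values, and prioritization rule are all invented for illustration.

```python
# Illustrative sketch (not UvvaLabs' actual model): diff an
# organization's "current state" metrics against an "ideal state"
# benchmark, then prioritize the widest gaps. All numbers are invented.
ideal = {"leadership_diversity": 0.45, "accessible_tooling": 0.90,
         "pay_equity": 1.00}
current = {"leadership_diversity": 0.20, "accessible_tooling": 0.85,
           "pay_equity": 0.92}

# Gap between where the organization is and where the model says
# it could be, per metric.
gaps = {metric: round(ideal[metric] - current[metric], 2)
        for metric in ideal}

# Recommend structural changes for the largest gaps first.
priorities = sorted(gaps, key=gaps.get, reverse=True)
print(priorities)  # widest gap first
```

The real system would of course derive both states from organizational data rather than hand-entered scores, but the comparison-then-recommend loop is the shape of the approach described.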
What was the inspiration for launching UvvaLabs?
My co-founder and I are childhood friends who have had a lifelong passion for dismantling the barriers to equity, but we’ve done so in very different ways. My co-founder Laura took the academic path, getting a PhD in Sociology from UC Berkeley. Her research and experience has been focused on building rigorous methodologies that work in low-quality data environments, especially studying racial bias. I went down the business path, first working as a strategist across global technology brands, getting an MBA from London Business School and then building my first business in the analytics space. Despite our divergent paths we have stayed in touch throughout the years. When I returned to the US after living in London for the last 11 years, the opportunity to collaborate on a project together presented itself and UvvaLabs was born.
One current issue with using AI to hire staff is that it can unintentionally reinforce societal biases such as racism and sexism. How big of an issue do you believe this to be?
This is a huge issue. Frequently decision makers believe that AI can solve all problems instead of understanding that it is a tool that requires a human counterpart to make smart decisions. Recruitment is no different – there are many products out there that claim to reduce or remove bias from the process. But AI is only as strong as the algorithm running it, and this is always built by people. Even the strongest AI system cannot be completely free of bias since all humans have biases.
For example, many AI recruitment tools are designed to offer or match candidates to a role in the most cost-effective way possible. This unintended focus on cost actually creates a huge inflection point for bias. In typical organizations, hiring diverse talent takes more time and effort because power structures tend to reproduce themselves and tend to be homogenous. However, the benefits of building a more diverse workforce far outweigh any initial costs.
How does UvvaLabs avoid building these biases into its AI system?
The best way to build any technology including AI that is free from bias is by having a team that is composed of both people who have been historically marginalized and who are experts in research methods designed to minimize bias. That’s the approach we take at UvvaLabs.
UvvaLabs uses a broad variety of data sources to understand an organization’s diversity environment. Could you touch on what some of these data sources are?
Organizations are low-quality data environments. Frequently there is little consistency between companies or even departments in terms of what is created and how. Our technology is designed to provide rigorous analysis in these types of environments by combining a mixture of quantitative and qualitative data sources. The key for us is that we only analyze what is readily available and easily shareable – so that the approach is as low-touch as possible.
UvvaLabs offers a dashboard showing various indicators for organizational health. Could you discuss what these indicators are and the type of actionable insight that is provided?
Every organization is different, so each organization will likely use Uvva in a slightly different way. This is because every organization is at a different stage in their diversity journey. There is no one size fits all formula – our approach flexes to each organization’s priorities, what is currently being measured and available, as well as where the organization wants to go. This exercise is what defines the recommendations our tool provides.
As a woman serial entrepreneur do you have any advice for women who are contemplating launching a new business?
Startups are a boys’ club, and it is objectively harder for women, and even harder for women of color. We shouldn’t shy away from the reality that women and people of color have been systematically shut out of opportunities, capital, communities and networks of access. That said, this is slowly changing. For instance, more and more funds are opening up that are specifically geared towards women or BIPOC founders. Incubators and accelerators are thinking and acting more inclusively as they shape their programs and practices. Diverse entrepreneurial communities are emerging and growing.
My advice for anyone who aspires to be an entrepreneur is to take a stab. It won’t always be easy. And it might not work. But entrepreneurship is filled with people who break with convention and prove naysayers wrong. We need more women and minorities in this community. We need their dreams, their products and their stories.
You are also the founder of Hive Founders, a non-profit network that brings female founders together. Could you give us some details on this non-profit and how it can help women?
Hive Founders is a global support network for female founders, no matter what stage they are at. Every business is unique, but there are many lessons we can learn from each other. In addition to the community, Hive Founders hosts events, podcasts, and a newsletter, all designed to bring resources and knowledge to our community of founders.
Is there anything else that you would like to share about UvvaLabs?
Every organization has the potential to transform itself into a more productive, diverse and accessible workplace, regardless of what structures are in place today. There are competitive reasons for investing in diversity. For one, the customer landscape is changing – the United States for instance will be majority minority by 2044. In practice this means customer profiles are changing too. Every company wants to be as attractive as possible to their customers and as competitive as possible against similar offerings. Diversity is that competitive asset. Smart companies and their leaders understand this and will get ahead of the curve to ensure their workplaces and products serve and support as many different types of people as possible.
Thank you for the great interview, I really enjoyed learning about your views on diversity and AI bias. Readers who wish to learn more should visit UvvaLabs.