By Alfred Crews, Jr., Vice President & Chief Counsel for the Intelligence & Security sector of BAE Systems Inc.
Earlier this year, before the global pandemic, I attended The Citadel’s Intelligence Ethics Conference in Charleston, where we discussed the topic of ethics in intelligence collection as it relates to protecting national security. In the defense industry, we are seeing the proliferation of knowledge, computing, and advanced technologies, especially in the area of artificial intelligence (AI) and machine learning (ML). However, there could be significant issues when deploying AI within the context of intelligence gathering or real-time combat.
AI coupled with quantum computing presents risks
What we must question, analyze, and chart a path forward on is the use of AI coupled with quantum computing capabilities in wartime decision-making. For example, remember the Terminator? As our technology makes leaps and bounds, the reality of what Skynet presented is before us. We could be asking ourselves, “Is Skynet coming to get us?” Take a stroll down memory lane with me: the AI machines took over because they had the capability to think and make decisions on their own, without a human to direct them. When the machines deduced that humans were a bug, they set out to destroy humankind. Don’t get me wrong, AI has great potential, but I believe it must have control parameters because of the risk factor involved.
AI’s ethical ambiguities & philosophical dilemma
I believe this is precisely why the U.S. Department of Defense (DoD) issued its own Ethical Principles for AI: the use of AI raises new ethical ambiguities and risks. When AI is combined with quantum computing capabilities, the way decisions are made changes, and the risk of losing control increases – more than we might realize today. Quantum computing puts the human brain’s operating system to shame, because such computers can make exponentially more calculations, faster and more accurately, than our brains ever will.
Additionally, the use of AI coupled with quantum computing presents a philosophical dilemma. At what point will the world allow machines to have a will of their own? And if machines are permitted to think on their own, does that mean the machine itself has become self-aware? Does being self-aware constitute life? As a society, we have not yet determined how to answer these questions. Thus, as it stands today, machines taking action on their own, without a human to control them, could have serious ramifications. Could a machine override a human’s order to cease fire? If the machine is operating on its own, will we be able to pull the plug?
As I see it, using AI from a defensive standpoint is easy to justify. But how easily could it be transferred to the offensive? On offense, machines would be making combat firing decisions on the spot. Would a machine firing on an enemy constitute a violation of the Geneva Conventions and the laws of armed conflict? As we move into this space at a rapid rate, the world must agree that the use of AI and quantum computing in combat must comply with the laws we currently have in place.
The DoD’s position on using AI with autonomous systems is that there will always be a person engaged in the decision-making process; a person would make the final call on pulling the trigger to fire a weapon. That’s our rule, but what happens if an adversary decides to take another route and have an AI-capable machine make all the final decisions? Then the machine, which, as we discussed, is already faster, smarter and more accurate, would have the advantage.
Let’s look at a drone equipped with AI and facial recognition: the drone fires of its own volition on a pre-determined target labeled a terrorist. Who is actually responsible for the firing? Is there accountability if a biased mistake is made?
Bias baked into AI/ML
Research points to the fact that a machine is less likely to make mistakes than a human. However, research also shows there is bias in machine learning, introduced by the human “teacher” who trains the machine. The DoD’s five Ethical Principles of AI reference existing biases, stating, “The Department will take deliberate steps to minimize unintended bias in AI capabilities.” We already know from proven studies that facial recognition applications are biased against people of color, producing higher rates of false positives. When a person writes the code that teaches a machine how to make decisions, biases will creep in. This can be unintentional, because the person creating the AI was not aware of the biases that existed within themselves.
So, how does one eliminate bias? AI output is only as good as its input. Therefore, there must be controls. You must control the data flowing in, because flawed data is what makes AI results less valid. Developers will constantly have to rewrite the code to eliminate bias.
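One way to make that input control concrete is to audit a model’s error rates per demographic group before deployment. The sketch below is a minimal, hypothetical illustration (the group names, records, and predictions are invented, not drawn from any real system): it computes each group’s false-positive rate and the gap between groups, the kind of disparity the facial recognition studies above describe.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false-positive rate per demographic group.

    Each record is (group, true_label, predicted_label); a false
    positive is a prediction of 1 when the true label is 0.
    """
    negatives = defaultdict(int)   # count of true negatives per group
    false_pos = defaultdict(int)   # count of false positives per group
    for group, true_label, predicted in records:
        if true_label == 0:
            negatives[group] += 1
            if predicted == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Toy audit data: (group, true label, model prediction)
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]
rates = false_positive_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

If the gap exceeds some agreed threshold, that is a signal to fix the training data before the model ever reaches a deployment decision.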
The world must define the best use of technology
Technology in and of itself is not good or bad; it is how a nation puts it to use that could take the best of intentions and have them go wrong. As technology advances in ways that impact human lives, the world must work together to define appropriate action. If we take the human out of the equation in AI applications, we also take away that pause before pulling the trigger – that moral compass that guides us, that moment when we stop and ask, “Is this right?” A machine taught to engage will not have that pause. So, the question is: in the future, will the world stand for this? How far will the world go in allowing machines to make combat decisions?
Appen Partners with World Economic Forum to Create Responsible AI Standards
Appen, a global leader in high-quality training data for machine learning systems, has partnered with the World Economic Forum to design and release standards and best practices for responsible training data when building machine learning and artificial intelligence applications. As a World Economic Forum Associate Partner, Appen will collaborate with industry leaders to release the new standards within the “Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning” platform, which enables a global footprint and guidepost for responsible training data collection and creation across countries and industries.
The standards and best practices for responsible training data aim to improve quality, efficiency, transparency, and responsibility for AI projects while promoting inclusivity and collaboration. The adoption of these standards by the larger technology community will increase the value of – and trust in – the use of AI by businesses and the general public.
Modern AI applications largely depend on human-annotated data to train machine learning models that rely on deep learning and neural net technology. Responsible training data practices include paying fair wages and adhering to labor wellness guidelines and standards, as codified in Appen’s Crowd Code of Ethics, released in 2019.
“Ethical, diverse training data is essential to building a responsible AI system,” said CEO of Appen, Mark Brayan. “A solid training data platform and management strategy is often the most critical component of launching a successful, responsible machine learning powered product into production. We are delighted to share our 20+ years of expertise in this area, along with our Crowd Code of Ethics, with the World Economic Forum to accelerate standards and responsible practices across the technology industry.”
A key focus of the partnership will be bringing together leaders in the AI industry to:
- Contribute to the Human-Centered AI for Human Resources project
- Empower AI leadership with a C-Suite Toolkit and Model AI Governance Framework
“Getting access to large volumes of responsibly-sourced training data has been a longstanding challenge in the machine learning industry,” said Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum. “The industry needs to respond with guidelines and standards for what it means to acquire and use responsible training data, addressing topics ranging from user permission, privacy, and security to how individuals are compensated for their work as part of the AI supply chain. We look forward to working with Appen and our multi-stakeholder community to provide practical guidance for responsible machine learning development around the world.”
Join industry leaders on October 14th for Appen’s annual Train AI conference, which aims to give leaders the confidence to launch AI beyond pilot and into production. A curated collection of topics will teach attendees how to successfully scale AI programs with actionable insights and get to ROI faster. Kay Firth-Butterfield will be the keynote speaker, presenting on the importance of responsible AI practices and the tools available to leaders to ensure that ethical standards are being met.
Andrea Sommer, Founder & Business Lead at UvvaLabs – Interview Series
Andrea Sommer is the Founder & Business Lead at UvvaLabs, a female-founded technology company that uses AI to help companies make better decisions that create more diverse and accessible workforces.
Could you discuss how UvvaLabs uses AI to assist companies in creating more diverse and accessible workforces?
Our approach looks at offering structural solutions to the very structural problem of inequity in the workplace. Through our research and experience, we’ve built a model of what the ‘ideal’ organization looks like from a diversity and accessibility perspective. Our AI analyzes and evaluates data across an organization to create a version of that organization’s ‘current state’ from a diversity perspective. By comparing the two sides – the ideal to the current – we can offer recommendations on what structures to build and which to remove to bring the organization closer to that ideal state.
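That ideal-versus-current comparison can be pictured as a simple gap analysis. The sketch below is purely illustrative (the metric names and target values are hypothetical, not UvvaLabs’ actual model): it ranks the distance between an organization’s current diversity metrics and an ideal profile so the largest shortfalls surface first.

```python
def diversity_gaps(ideal, current):
    """Rank the gaps between an organization's current diversity
    metrics and an ideal profile, largest shortfall first."""
    gaps = {name: ideal[name] - current.get(name, 0.0) for name in ideal}
    return sorted(gaps.items(), key=lambda item: item[1], reverse=True)

# Hypothetical metrics expressed as fractions of an ideal target.
ideal = {"leadership_women": 0.50, "accessible_roles": 0.80, "pay_equity": 1.00}
current = {"leadership_women": 0.30, "accessible_roles": 0.75, "pay_equity": 0.90}

ranked = diversity_gaps(ideal, current)
print(ranked)  # biggest gap first: leadership_women
```

The ranked gaps then map naturally onto recommendations: the structures to build first are the ones closing the widest gaps.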
What was the inspiration for launching UvvaLabs?
My co-founder and I are childhood friends who have had a lifelong passion for dismantling the barriers to equity, but we’ve done so in very different ways. My co-founder Laura took the academic path, getting a PhD in Sociology from UC Berkeley. Her research and experience has been focused on building rigorous methodologies that work in low-quality data environments, especially studying racial bias. I went down the business path, first working as a strategist across global technology brands, getting an MBA from London Business School and then building my first business in the analytics space. Despite our divergent paths we have stayed in touch throughout the years. When I returned to the US after living in London for the last 11 years, the opportunity to collaborate on a project together presented itself and UvvaLabs was born.
One current issue with using AI to hire staff is that it can unintentionally reinforce societal biases such as racism and sexism. How big of an issue do you believe this to be?
This is a huge issue. Frequently decision makers believe that AI can solve all problems instead of understanding that it is a tool that requires a human counterpart to make smart decisions. Recruitment is no different – there are many products out there that claim to reduce or remove bias from the process. But AI is only as strong as the algorithm running it, and this is always built by people. Even the strongest AI system cannot be completely free of bias since all humans have biases.
For example, many AI recruitment tools are designed to offer or match candidates to a role in the most cost-effective way possible. This unintended focus on cost actually creates a huge inflection point for bias. In typical organizations, hiring diverse talent takes more time and effort because power structures tend to reproduce themselves and tend to be homogenous. However, the benefits of building a more diverse workforce far outweigh any initial costs.
How does UvvaLabs avoid building these biases into its AI system?
The best way to build any technology, including AI, that is free from bias is to have a team composed of people who have been historically marginalized together with experts in research methods designed to minimize bias. That’s the approach we take at UvvaLabs.
UvvaLabs uses a broad variety of data sources to understand an organization’s diversity environment. Could you touch on what some of these data sources are?
Organizations are low-quality data environments. Frequently there is little consistency between companies or even departments in terms of what is created and how. Our technology is designed to provide rigorous analysis in these types of environments by combining a mixture of quantitative and qualitative data sources. The key for us is that we only analyze what is readily available and easily shareable – so that the approach is as low-touch as possible.
UvvaLabs offers a dashboard showing various indicators of organizational health. Could you discuss what these indicators are and the type of actionable insight that is provided?
Every organization is different, so each organization will likely use Uvva in a slightly different way. This is because every organization is at a different stage in their diversity journey. There is no one size fits all formula – our approach flexes to each organization’s priorities, what is currently being measured and available, as well as where the organization wants to go. This exercise is what defines the recommendations our tool provides.
As a woman and serial entrepreneur, do you have any advice for women who are contemplating launching a new business?
Startups are a boys’ club, and it is objectively harder for women, and even harder for women of color. We shouldn’t shy away from the reality that women and people of color have been systematically shut out of opportunities, capital, communities and networks of access. That said, this is slowly changing. For instance, more and more funds are opening up that are specifically geared toward women or BIPOC founders. Incubators and accelerators are thinking and acting more inclusively as they shape their programs and practices. Diverse entrepreneurial communities are emerging and growing.
My advice for anyone who aspires to be an entrepreneur is to take a stab at it. It won’t always be easy, and it might not work. But entrepreneurship is filled with people who break with convention and prove naysayers wrong. We need more women and minorities in this community. We need their dreams, their products and their stories.
You are also the founder of Hive Founders, a non-profit network that brings female founders together. Could you give us some details on this non-profit and how it can help women?
Hive Founders is a global support network for women founders, no matter what stage their business is in. Every business is unique, but there are many lessons we can learn from each other. In addition to the community, Hive Founders hosts events, podcasts, and a newsletter – all designed to bring resources and knowledge to our community of founders.
Is there anything else that you would like to share about UvvaLabs?
Every organization has the potential to transform itself into a more productive, diverse and accessible workplace, regardless of what structures are in place today. There are competitive reasons for investing in diversity. For one, the customer landscape is changing – the United States for instance will be majority minority by 2044. In practice this means customer profiles are changing too. Every company wants to be as attractive as possible to their customers and as competitive as possible against similar offerings. Diversity is that competitive asset. Smart companies and their leaders understand this and will get ahead of the curve to ensure their workplaces and products serve and support as many different types of people as possible.
Thank you for the great interview, I really enjoyed learning about your views on diversity and AI bias. Readers who wish to learn more should visit UvvaLabs.
Huma Abidi, Senior Director of AI Software Products at Intel – Interview Series
Huma Abidi is a Senior Director of AI Software Products at Intel, responsible for strategy, roadmaps, requirements, machine learning and analytics software products. She leads a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions. Huma joined Intel as a software engineer and has since worked in a variety of engineering, validation and management roles in the area of compilers, binary translation, and AI and deep learning. She is passionate about women’s education, supporting several organizations around the world for this cause, and was a finalist for VentureBeat’s 2019 Women in AI award in the mentorship category.
What initially sparked your interest in AI?
I’ve always found it interesting to imagine what could happen if machines could speak, or see, or interact intelligently with humans. Because of some big technical breakthroughs in the last decade, including deep learning gaining popularity because of the availability of data, compute power, and algorithms, AI has now moved from science fiction to real world applications. Solutions we had imagined previously are now within reach. It is truly an exciting time!
In my previous job, I was leading a Binary Translation engineering team, focused on optimizing software for Intel hardware platforms. At Intel, we recognized that the developments in AI would lead to huge industry transformations, demanding tremendous growth in compute power from devices to Edge to cloud and we sharpened our focus to become a data-centric company.
Realizing the need for powerful software to make AI a reality, the first challenge I took on was to lead the team in creating AI software to run efficiently on Intel Xeon CPUs by optimizing deep learning frameworks like Caffe and TensorFlow. We were able to demonstrate more than 200-fold performance increases due to a combination of Intel hardware and software innovations.
We are working to make all of our customer workloads in various domains run faster and better on Intel technology.
What can we do as a society to attract women to AI?
It’s a priority for me and for Intel to get more women into STEM and computer science in general, because diverse groups will build better products for a diverse population. It’s especially important to get more women and underrepresented minorities into AI, because of the potential biases that a lack of representation can cause when creating AI solutions.
In order to attract women, we need to do a better job explaining to girls and young women how AI is relevant in the world, and how they can be part of creating exciting and impactful solutions. We need to show them that AI spans so many different areas of life, and that they can use AI technology in their domain of interest, whether it’s art or robotics or data journalism or television. There are exciting applications of AI they can easily see making an impact, e.g., virtual assistants like Alexa, self-driving cars, social media, and the way Netflix knows which movies they want to watch.
Another key part of attracting women is representation. Fortunately, there are many women leaders in AI who can serve as excellent role models, including Fei-Fei Li, who is leading human-centered AI at Stanford, and Meredith Whittaker, who is working on social implications through the AI Now Institute at NYU.
We need to work together to adopt inclusive business practices and expand access of technology skills to women and underrepresented minorities. At Intel, our 2030 goal is to increase women in technical roles to 40% and we can only achieve that by working with other companies, institutes, and communities.
How can women best break into the industry?
There are a few options if you want to break into AI specifically. There are numerous online courses in AI, including Udacity’s free Intel Edge AI Fundamentals course. Or you could go back to school, for example at one of Maricopa County’s community colleges for an AI associate degree, and study for a career in AI, e.g., as a data scientist, data engineer, ML/DL developer, or software engineer.
If you already work at a tech company, there are likely already AI teams. You could check out the option to spend part of your time on an AI team that you’re interested in.
You can also work on AI if you don’t work at a tech company. AI is extremely interdisciplinary, so you can apply AI to almost any domain you’re involved in. As AI frameworks and tools evolve and become more user-friendly, it becomes easier to use AI in different settings. Joining online events like Kaggle competitions is a great way to work on real-world machine learning problems that involve data sets you find interesting.
The tech industry also needs to put in time, effort, and money to reach out to and support women, including women who are also underrepresented ethnic minorities. On a personal note, I’m involved in organizations like Girls Who Code and Girl Geek X, which connect and inspire young women.
With deep learning and reinforcement learning recently gaining the most traction, what other forms of machine learning should women pay attention to?
AI and machine learning are still evolving, and exciting new research papers are being published regularly. Some areas to focus on right now include:
- Classical ML techniques, which continue to be important and are widely used.
- Responsible/explainable AI, which has become a critical part of the AI lifecycle, particularly for deploying deep learning and reinforcement learning models responsibly.
- Graph neural networks and multi-modal learning, which derive insights by learning from the rich relational information in graph data.
AI bias is a huge societal issue when it comes to bias towards women and minorities. What are some ways of solving these issues?
When it comes to AI, biases in training samples, human labelers and teams can be compounded to discriminate against diverse individuals, with serious consequences.
It is critical that diversity is prioritized at every step of the process. If women and other minorities from the community are part of the teams developing these tools, they will be more aware of what can go wrong.
It is also important to make sure to include leaders across multiple disciplines such as social scientists, doctors, philosophers and human rights experts to help define what is ethical and what is not.
Can you explain the AI blackbox problem, and why AI explainability is important?
In AI, models are trained on massive amounts of data before they make decisions. In most AI systems, we don’t know how these decisions were made — the decision-making process is a black box, even to its creators. And it may not be possible to really understand how a trained AI program is arriving at its specific decision. A problem arises when we suspect that the system isn’t working. If we suspect the system of algorithmic biases, it’s difficult to check and correct for them if the system is unable to explain its decision making.
There is currently a major research focus on eXplainable AI (XAI) that intends to equip AI models with transparency, explainability and accountability, which will hopefully lead to Responsible AI.
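One widely used model-agnostic idea from the XAI literature is permutation importance: shuffle a single input feature and measure how much the black box’s accuracy drops, without ever opening the model up. The sketch below is a toy illustration (the model, data, and feature layout are invented for this example, not any production system):

```python
import random

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Estimate a feature's importance by measuring how much the
    model's accuracy drops when that feature's column is shuffled."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature] for row in X]
        rng.shuffle(column)  # break the feature/label relationship
        shuffled = [row[:feature] + (v,) + row[feature + 1:]
                    for row, v in zip(X, column)]
        drops.append(baseline - accuracy(shuffled))
    return sum(drops) / n_repeats

# Toy "black box": predicts 1 when feature 0 exceeds a threshold;
# feature 1 is ignored, so shuffling it should cost no accuracy.
model = lambda row: int(row[0] > 0.5)
X = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.4), (0.1, 0.6)]
y = [1, 0, 1, 0]
imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
print(imp0, imp1)  # imp1 is exactly 0.0: the model never reads feature 1
```

Even when we cannot see inside the model, a probe like this reveals which inputs are actually driving its decisions, which is one starting point for checking a suspect system for algorithmic bias.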
In your keynote address during MITEF Arab Startup Competition final award ceremony and conference you discussed Intel’s AI for Social Good initiatives. Which of these Social Good projects has caught your attention and why is it so important?
I continue to be very excited about all of Intel’s AI for Social Good initiatives, because breakthroughs in AI can lead to transformative changes in the way we tackle problem solving.
One that I especially care about is the Wheelie, an AI-powered wheelchair built in partnership with HOOBOX Robotics. The Wheelie allows extreme paraplegics to regain mobility by using facial expressions to drive. Another amazing initiative is TrailGuard AI, which uses Intel AI technology to fight illegal poaching and protect animals from extinction and species loss.
As part of Intel’s Pandemic Response Initiative, we have many on-going projects with our partners using AI. One key initiative is contactless fever detection or COVID-19 detection via chest radiography with Darwin AI. We’re also working on bots that can answer queries to increase awareness using natural language processing in regional languages.
For women who are interested in getting involved, are there books, websites, or other resources that you would recommend?
There are many great resources online, for all experience levels and areas of interest. Coursera and Udacity offer excellent online courses on machine learning and deep learning, most of which can be audited for free. MIT’s OpenCourseWare is another great, free way to learn from some of the world’s best professors.
Companies such as Intel have AI portals that contain a lot of information about AI, including offered solutions. There are many great books on AI: foundational computer science texts like Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, and modern, philosophical books like Homo Deus by historian Yuval Noah Harari. I’d also recommend Lex Fridman’s AI podcast, which features great conversations with experts from a wide range of fields.
Do you have any last words for women who are curious about AI but are not yet ready to leap in?
AI is the future, and will change our society — in fact, it already has. It’s essential that we have honest, ethical people working on it. Whether in a technical role, or at a broader social level, now is a perfect time to get involved!
Thank you for the interview, you are certainly an inspiration for women the world over. Readers who wish to learn more about the software solutions at Intel should visit AI Software Products at Intel.