

Paper From Future of Humanity Institute Argues That Companies Should Compensate Society For Jobs Lost To AI




Automation, and the resulting loss of jobs, has been a major point of discussion in the AI field over the past couple of years, and seems poised to become an even greater one in the coming decade. Current Democratic presidential candidate Andrew Yang has made job loss to automation a key issue of his platform. The Future of Humanity Institute, an AI think tank led by the philosopher Nick Bostrom, recently made a paper available for preview on arXiv. As ZDNet reports, the paper suggests that AI companies with excess profits should pay some amount of money beyond their normal taxes, money which would go toward ameliorating the societal damage from jobs lost to automation.

The paper's authors note a consensus among most AI researchers that the vast majority of human work can potentially be automated, and they predict that by 2060 AI will be able to outperform humans at most tasks that contribute to economic activity. Because of this, they suggest that a plan should be in place to mitigate the potentially harmful effects of automation, including job displacement, lowered wages, and the loss of whole job types.

The researchers suggest a scale of obligation and remuneration that depends on a company's profit relative to the gross world product. The obligation could range anywhere from zero to 50% of the profit above the excess-profit threshold. The paper's authors offer an example of an internet company that makes around $5 trillion in excess profit in 2060 (in 2010 dollars) having to pay around $488.12 billion, assuming a gross world product of $268 trillion.
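To make the marginal structure concrete, here is a minimal sketch of how such an obligation could be computed. The bracket thresholds and rates below are hypothetical placeholders chosen for illustration; they are not the schedule actually proposed in the paper.

```python
# Illustrative sketch of a marginal "Windfall Clause" obligation, where
# the rate applied to each slice of excess profit depends on that
# slice's share of gross world product. Bracket values are hypothetical.

def windfall_obligation(excess_profit, gross_world_product):
    """Marginal obligation on excess profit, ranging from 0% up to 50%."""
    # (share-of-GWP threshold, marginal rate) -- hypothetical values
    brackets = [(0.001, 0.00), (0.01, 0.20), (0.10, 0.50)]
    obligation = 0.0
    prev_cap = 0.0
    for share, rate in brackets:
        cap = share * gross_world_product
        slice_ = min(excess_profit, cap) - prev_cap
        if slice_ <= 0:
            break
        obligation += slice_ * rate
        prev_cap = cap
    # Profit beyond the last threshold pays the top 50% rate
    if excess_profit > prev_cap:
        obligation += (excess_profit - prev_cap) * 0.50
    return obligation
```

With these made-up brackets, $5 trillion of excess profit against a $268 trillion gross world product yields an obligation in the hundreds of billions, the same order of magnitude as the paper's example.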

The researchers argue that a quantifiable metric of remuneration is something companies can plan for, and therefore something that reduces their risk. Companies could potentially bring the amount they pay into the “Windfall Clause” into line with their philanthropic giving through discounting. For example, that hypothetical $488 billion could be discounted by at least 10%, the average cost of capital for an internet company, and then further discounted because of the low probability of ever earning enough to owe a payment that large. After discounting, the annual cost to a company that makes enough money to potentially pay in $488 billion would be around $649 million a year, approximately in line with what large companies spend on philanthropic giving. The researchers suggest thinking of the Windfall Clause as an extension of stock option compensation.
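As a rough illustration of the discounting logic described above, the sketch below probability-weights the present value of a single large future payment. The 10% rate matches the article; the 40-year horizon and 6% probability are assumptions made for the sake of the example, not figures from the paper.

```python
# Rough sketch of discounting a large, contingent future payment to an
# expected annual cost. Horizon and probability are illustrative
# assumptions, not figures taken from the paper.

def expected_annual_cost(payment, discount_rate, years_ahead, probability):
    """Probability-weighted present value of a single future payment."""
    present_value = payment / (1 + discount_rate) ** years_ahead
    return present_value * probability

# A $488B payment ~40 years out, discounted at a 10% cost of capital and
# weighted by a small chance of ever being owed, shrinks to hundreds of
# millions per year in expected terms.
cost = expected_annual_cost(488e9, 0.10, 40, 0.06)
```

Under these assumed inputs the expected annual cost lands in the mid-hundreds of millions, the same ballpark as the $649 million figure the article cites.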

The authors note that this plan may be easier to implement than an excess profits tax: instituting such a tax would require convincing political majorities as well as companies, whereas the Windfall Clause only requires convincing individual companies to buy in. The Future of Humanity Institute researchers offer the paper in preview on arXiv in the spirit of generating discussion, acknowledging that many topics and aspects of the plan will have to be considered before it is feasible.


Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.


Andrea Sommer, Founder & Business Lead at UvvaLabs – Interview Series




Andrea Sommer is the Founder & Business Lead at UvvaLabs, a female-founded technology company that uses AI to help companies make better decisions that create more diverse and accessible workforces.

Could you discuss how UvvaLabs uses AI to assist companies in creating more diverse and accessible workforces?

Our approach looks at offering structural solutions to the very structural problem of inequity in the workplace. Through our research and experience, we’ve built a model of what the ‘ideal’ organization looks like from a diversity and accessibility perspective. Our AI analyzes and evaluates data across an organization to create a version of that organization’s ‘current state’ from a diversity perspective. By comparing the two sides – the ideal to the current – we can offer recommendations on what structures to build and which to remove to bring the organization closer to that ideal state.

What was the inspiration for launching UvvaLabs?

My co-founder and I are childhood friends who have had a lifelong passion for dismantling the barriers to equity, but we’ve done so in very different ways. My co-founder Laura took the academic path, getting a PhD in Sociology from UC Berkeley. Her research and experience have been focused on building rigorous methodologies that work in low-quality data environments, especially studying racial bias. I went down the business path, first working as a strategist across global technology brands, getting an MBA from London Business School and then building my first business in the analytics space. Despite our divergent paths we have stayed in touch throughout the years. When I returned to the US after living in London for 11 years, the opportunity to collaborate on a project together presented itself and UvvaLabs was born.

One current issue with using AI to hire staff is that it can unintentionally reinforce societal biases such as racism and sexism. How big of an issue do you believe this to be?

This is a huge issue. Frequently decision makers believe that AI can solve all problems instead of understanding that it is a tool that requires a human counterpart to make smart decisions. Recruitment is no different – there are many products out there that claim to reduce or remove bias from the process. But AI is only as strong as the algorithm running it, and this is always built by people. Even the strongest AI system cannot be completely free of bias since all humans have biases.

For example, many AI recruitment tools are designed to offer or match candidates to a role in the most cost-effective way possible. This unintended focus on cost actually creates a huge inflection point for bias. In typical organizations, hiring diverse talent takes more time and effort because power structures tend to reproduce themselves and tend to be homogenous. However, the benefits of building a more diverse workforce far outweigh any initial costs.

How does UvvaLabs avoid introducing these biases into the AI system?

The best way to build any technology, including AI, that is free from bias is with a team composed of people who have been historically marginalized and of experts in research methods designed to minimize bias. That’s the approach we take at UvvaLabs.

UvvaLabs uses a broad variety of data sources to understand an organization’s diversity environment. Could you touch on what some of these data sources are?

Organizations are low-quality data environments. Frequently there is little consistency between companies or even departments in terms of what is created and how. Our technology is designed to provide rigorous analysis in these types of environments by combining a mixture of quantitative and qualitative data sources. The key for us is that we only analyze what is readily available and easily shareable – so that the approach is as low-touch as possible.

UvvaLabs offers a dashboard showing various indicators of organizational health. Could you discuss what these indicators are and the type of actionable insight that is provided?

Every organization is different, so each organization will likely use Uvva in a slightly different way. This is because every organization is at a different stage in their diversity journey. There is no one size fits all formula – our approach flexes to each organization’s priorities, what is currently being measured and available, as well as where the organization wants to go. This exercise is what defines the recommendations our tool provides.

As a woman serial entrepreneur do you have any advice for women who are contemplating launching a new business?

Startups are a boys’ club, and it is objectively harder for women, and even harder for women of color. We shouldn’t shy away from the reality that women and people of color have been systematically shut out of opportunities, capital, communities and networks of access. That said, this is slowly changing. For instance, more and more funds are opening up that are specifically geared toward women or BIPOC. Incubators and accelerators are thinking and acting more inclusively as they shape their programs and practices. Diverse entrepreneurial communities are emerging and growing.

My advice for anyone who aspires to be an entrepreneur is to take a stab. It won’t always be easy. And it might not work. But entrepreneurship is filled with people who break with convention and prove naysayers wrong. We need more women and minorities in this community. We need their dreams, their products and their stories.

You are also the founder of Hive Founders, a non-profit network that brings female founders together. Could you give us some details on this non-profit and how it can help women?

Hive Founders is a global network of support for women founders, no matter what stage they are at. Every business is unique, but there are many lessons we can learn from each other. In addition to the community, Hive Founders hosts events, podcasts, and a newsletter – all designed to bring resources and knowledge to our community of founders.

Is there anything else that you would like to share about UvvaLabs?

Every organization has the potential to transform itself into a more productive, diverse and accessible workplace, regardless of what structures are in place today. There are competitive reasons for investing in diversity. For one, the customer landscape is changing – the United States for instance will be majority minority by 2044. In practice this means customer profiles are changing too. Every company wants to be as attractive as possible to their customers and as competitive as possible against similar offerings. Diversity is that competitive asset. Smart companies and their leaders understand this and will get ahead of the curve to ensure their workplaces and products serve and support as many different types of people as possible.

Thank you for the great interview, I really enjoyed learning about your views on diversity and AI bias. Readers who wish to learn more should visit UvvaLabs.



Huma Abidi, Senior Director of AI Software Products at Intel – Interview Series




Photo By O’Reilly Media

Huma Abidi is a Senior Director of AI Software Products at Intel, responsible for strategy, roadmaps, requirements, machine learning and analytics software products. She leads a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions. Huma joined Intel as a software engineer and has since worked in a variety of engineering, validation and management roles in the area of compilers, binary translation, and AI and deep learning. She is passionate about women’s education, supporting several organizations around the world for this cause, and was a finalist for VentureBeat’s 2019 Women in AI award in the mentorship category.

What initially sparked your interest in AI?

I’ve always found it interesting to imagine what could happen if machines could speak, or see, or interact intelligently with humans. Because of some big technical breakthroughs in the last decade, including deep learning gaining popularity because of the availability of data, compute power, and algorithms, AI has now moved from science fiction to real world applications. Solutions we had imagined previously are now within reach. It is truly an exciting time!

In my previous job, I was leading a Binary Translation engineering team focused on optimizing software for Intel hardware platforms. At Intel, we recognized that the developments in AI would lead to huge industry transformations, demanding tremendous growth in compute power from devices to the edge to the cloud, and we sharpened our focus to become a data-centric company.

Realizing the need for powerful software to make AI a reality, the first challenge I took on was to lead the team in creating AI software to run efficiently on Intel Xeon CPUs by optimizing deep learning frameworks like Caffe and TensorFlow. We were able to demonstrate more than 200-fold performance increases due to a combination of Intel hardware and software innovations.

We are working to make all of our customer workloads in various domains run faster and better on Intel technology.


What can we do as a society to attract women to AI?

It’s a priority for me and for Intel to get more women in STEM and computer science in general, because diverse groups will build better products for a diverse population. It’s especially important to get more women and underrepresented minorities in AI, because of potential biases lack of representation can cause when creating AI solutions.

In order to attract women, we need to do a better job explaining to girls and young women how AI is relevant in the world, and how they can be part of creating exciting and impactful solutions. We need to show them that AI spans many different areas of life, and that they can use AI technology in their domain of interest, whether it’s art or robotics or data journalism or television. There are exciting applications of AI they can easily see making an impact, e.g. virtual assistants like Alexa, self-driving cars, social media, and how Netflix knows which movies they want to watch.

Another key part of attracting women is representation. Fortunately, there are many women leaders in AI who can serve as excellent role models, including Fei-Fei Li, who is leading human-centered AI at Stanford, and Meredith Whittaker, who is working on social implications through the AI Now Institute at NYU.

We need to work together to adopt inclusive business practices and expand access of technology skills to women and underrepresented minorities. At Intel, our 2030 goal is to increase women in technical roles to 40% and we can only achieve that by working with other companies, institutes, and communities.


How can women best break into the industry?  

There are a few options if you want to break into AI specifically. There are numerous online courses in AI, including Udacity’s free Intel Edge AI Fundamentals course. Or you could go back to school – for example, at one of Maricopa County’s community colleges for an AI associate degree – and study for a career in AI, e.g. as a Data Scientist, Data Engineer, ML/DL Developer, or Software Engineer.

If you already work at a tech company, there are likely already AI teams. You could check out the option to spend part of your time on an AI team that you’re interested in.

You can also work on AI if you don’t work at a tech company. AI is extremely interdisciplinary, so you can apply AI to almost any domain you’re involved in. As AI frameworks and tools evolve and become more user-friendly, it becomes easier to use AI in different settings. Joining online events like Kaggle competitions is a great way to work on real-world machine learning problems that involve data sets you find interesting.

The tech industry also needs to put in time, effort, and money to reach out to and support women, including women who are also underrepresented ethnic minorities. On a personal note, I’m involved in organizations like Girls Who Code and Girl Geek X, which connect and inspire young women.


With Deep learning and reinforcement learning recently gaining the most traction, what other forms of machine learning should women pay attention to?

AI and machine learning are still evolving, and exciting new research papers are being published regularly. Some areas to focus on right now include:

  1. Classical ML techniques that continue to be important and are widely used.
  2. Responsible/Explainable AI, which has become a critical part of the AI lifecycle, especially for making deep learning and reinforcement learning models deployable.
  3. Graph Neural Networks and multi-modal learning, which derive insights from rich relational information in graph data.


AI bias is a huge societal issue when it comes to bias towards women and minorities. What are some ways of solving these issues?

When it comes to AI, biases in training samples, human labelers and teams can be compounded to discriminate against diverse individuals, with serious consequences.

It is critical that diversity is prioritized at every step of the process. If women and other minorities from the community are part of the teams developing these tools, they will be more aware of what can go wrong.

It is also important to make sure to include leaders across multiple disciplines such as social scientists, doctors, philosophers and human rights experts to help define what is ethical and what is not.


Can you explain the AI blackbox problem, and why AI explainability is important?

In AI, models are trained on massive amounts of data before they make decisions. In most AI systems, we don’t know how these decisions were made — the decision-making process is a black box, even to its creators. And it may not be possible to really understand how a trained AI program is arriving at its specific decision. A problem arises when we suspect that the system isn’t working. If we suspect the system of algorithmic biases, it’s difficult to check and correct for them if the system is unable to explain its decision making.

There is currently a major research focus on eXplainable AI (XAI) that intends to equip AI models with transparency, explainability and accountability, which will hopefully lead to Responsible AI.


In your keynote address during MITEF Arab Startup Competition final award ceremony and conference you discussed Intel’s AI for Social Good initiatives. Which of these Social Good projects has caught your attention and why is it so important?

I continue to be very excited about all of Intel’s AI for Social Good initiatives, because breakthroughs in AI can lead to transformative changes in the way we tackle problem solving.

One that I especially care about is the Wheelie, an AI-powered wheelchair built in partnership with HOOBOX Robotics. The Wheelie allows extreme paraplegics to regain mobility by using facial expressions to drive. Another amazing initiative is TrailGuard AI, which uses Intel AI technology to fight illegal poaching and protect animals from extinction and species loss.

As part of Intel’s Pandemic Response Initiative, we have many on-going projects with our partners using AI. One key initiative is contactless fever detection or COVID-19 detection via chest radiography with Darwin AI. We’re also working on bots that can answer queries to increase awareness using natural language processing in regional languages.


For women who are interested in getting involved, are there books, websites, or other resources that you would recommend?  

There are many great resources online, for all experience levels and areas of interest. Coursera and Udacity offer excellent online courses on machine learning and deep learning, most of which can be audited for free. MIT’s OpenCourseWare is another great, free way to learn from some of the world’s best professors.

Companies such as Intel have AI portals that contain a lot of information about AI, including offered solutions. There are many great books on AI: foundational computer science texts like Artificial Intelligence: A Modern Approach by Peter Norvig and Stuart Russell, and modern, philosophical books like Homo Deus by historian Yuval Noah Harari. I’d also recommend Lex Fridman’s AI podcast for its great conversations with experts from a wide range of fields.


Do you have any last words for women who are curious about AI but are not yet ready to leap in?

AI is the future, and will change our society — in fact, it already has. It’s essential that we have honest, ethical people working on it. Whether in a technical role, or at a broader social level, now is a perfect time to get involved!

Thank you for the interview, you are certainly an inspiration for women the world over. Readers who wish to learn more about the software solutions at Intel should visit AI Software Products at Intel.



Wilson Pang, Chief Technology Officer at Appen – Interview Series




Wilson Pang is the Chief Technology Officer at Appen, where he leads a group of world-class data scientists, engineers, and product managers who combine the power of technology and humans to solve AI data problems.

In this interview we discuss AI ethics,  Appen’s 2020 State of AI and Machine Learning Report and current industry challenges.

What was it that attracted you personally to software engineering and data science?

I received the opportunity to work on data and AI 10 years ago. In the AI world, developers are no longer controlling logic in code, instead, data is deciding the logic of the AI model which is fascinating. I started my career as a developer with IBM, building large systems for banks, telecom operators and securities exchange companies. I was excited by the power of software and AI.

I am also lucky to have seen firsthand how data and machine learning help grow a business. My team at eBay leveraged AI to increase buyer purchases, adding tens of millions in revenue, and we used AI to increase internal efficiency and significantly reduce operational costs. Now at Appen, we are helping companies from all kinds of industries use AI to drive the success of their business.


Could you describe some of Appen’s AI and data labeling offerings?

We are a training data provider working with over 1 million contractors to collect and label images, text, speech, audio, video, and other data. Those contractors reside in over 130 countries and speak 180 languages and dialects which gives us the ability to provide high quality data for AI projects. We also have professional teams who have worked in AI data for more than 20 years and have a lot of knowledge on how to get the training data right. Last but not least, we have an industry leading AI-assisted annotation platform which has built-in features to assure quality and productivity. Our customers can choose between our managed service solution where we partner with their team, or the self-service platform. Our AI-assisted data annotation platform gives customers the ability to manage projects along with machine learning assisted tools to enhance quality, accuracy, and annotation speed.

With more than 20 years of experience, our services offer world-class training data, the most advanced AI-assisted data annotation platform, and a diverse, global crowd to ensure high-quality data.


What are some of the open source data sets that are available?

Appen has several open source data sets available online, from image annotation to handwriting recognition. These data sets are free to download and can be used to help build and train your AI model. One of the most interesting sets available covers nucleus segmentation in medical images, containing more than 21,000 nuclei annotated and validated by medical experts.


What was your biggest personal take away from the 2020 State of AI and Machine Learning Report?

The most interesting finding from the report was the increase in C-Suite involvement. I had been hearing about it, but seeing a 31% increase from last year was a big surprise. AI is becoming a part of core business, and not just at tech leaders.


One cause for concern in the report is that only 25% of companies stated that unbiased AI is mission critical. Do you believe that this is due to lack of education on the importance of removing AI bias? What needs to be done to improve these statistics?

Yes, education is the first step to improving that statistic. An AI model built on biased data will deliver biased results and never be fully successful. Business leaders need to learn the importance of having unbiased data and how that leads to a successful deployment.


What are some AI ethics that companies should consider when working with AI and large data sets?

It’s important to look at where the data came from. Is that data being ethically sourced and unbiased? When looking at where it was sourced, you want to know if the people were paid a fair wage and if the data came from a diverse group. We recently released our Crowd Code of Ethics in support of inclusion, diversity, fair pay, and communication for our contributors.


What do you view as the current biggest industry challenge?

Lack of data and data management is the biggest challenge for the industry. Teams have a lot of decisions that they need to make about the data and many have challenges recognizing what they need. They need to understand what data they have, where it came from and what data they still need. All that data management is important to building and training an AI model. Lack of data can lead to a biased model which in turn will not be successful.


Appen has been releasing the State of AI and Machine Learning Reports for many years now. When was the first report launched and what are some of the biggest changes that have been seen since this initial report?

The first report was launched in 2015 and the biggest change we’ve seen is in ownership of AI projects. The first report was primarily answered by data scientists who managed AI for their companies. Today, 71% state C-Suite ownership which indicates a huge shift in perspective of AI becoming more critical to businesses. Data scientists also faced many challenges from lack of resources to build valuable insights from the data to unclear goals and unrealistic expectations. However, one of the key challenges remains the same around data and data management.


Is there anything else that you would like to share about Appen?  

On July 16, we’re excited to host the first virtual round table on launching AI in the real world. The four-part series features industry-leading practitioners sharing personal experiences and insights into their own AI journeys, shedding light so others can accelerate progress toward their AI initiatives. To succeed, companies will have to be prepared to overcome several common challenges around data, ethics, people, and lifecycles. The first edition brings together leading experts to share what responsible AI means to them and their organizations.

Thank you for the interview. Readers may wish to read Appen’s 2020 State of AI and Machine Learning Report or to visit the Appen website.
