
Ethics

AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues


The AI Now Institute has released a report urging lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions such as hiring employees or admitting students. The report also contains a number of other recommendations on a range of topics in the AI field.

The AI Now Institute is a research institute based at NYU whose mission is to study AI's impact on society. AI Now releases a yearly report presenting its findings on the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year's report addressed topics like algorithmic discrimination, the lack of diversity in AI research, and labor issues.

Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that the systems can draw reliable information about people's emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now Institute notes that the technology is being employed across a wide range of applications, like determining whom to hire, setting insurance prices, and monitoring whether students are paying attention in class.

Prof. Kate Crawford, co-founder of AI Now, explained that it's often believed that human emotions can be accurately predicted with relatively simple models. Crawford said that some firms are basing the development of their software on the work of Paul Ekman, a psychologist who hypothesized that there are only six basic types of emotions that register on the face. However, Crawford notes that since Ekman's theory was introduced, studies have found that there is far greater variability in facial expressions and that expressions can change easily across situations and cultures.

“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.

For this reason, the AI Now Institute argues that much of affect recognition is based on unreliable theories and questionable science. Hence, the report holds that emotion detection systems shouldn't be deployed until more research has been done and that "governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes". AI Now argued that use of the technology should especially be stopped in "sensitive social and political contexts", contexts that include employment, education, and policing.

At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.

As The Next Web reports, AI Now also recommended a number of other policies and norms that should guide the AI industry moving forward.

AI Now highlighted the need for the AI industry to make workplaces more diverse and stated that workers should be guaranteed a right to voice their concerns about invasive and exploitative AI. Tech workers should also have the right to know whether their efforts are being used to build harmful or unethical systems.

AI Now also suggested that lawmakers take steps to require informed consent for the use of any data derived from health-related AI. Beyond this, it was advised that data privacy be taken more seriously and that states should work to design privacy laws for biometric data that cover both private and public entities.

Finally, the institute advised that the AI industry begin thinking and acting more globally, trying to address the larger political, societal, and ecological consequences of AI. It was recommended that there be a substantial effort to account for AI's impact on geographical displacement and the climate, and that governments make the climate impact of the AI industry publicly available.


Ethics

AI Ethics Principles Undergo Meta-Analysis, Human Rights Emphasized


In 2019, there was more focus on AI ethics than ever before. However, much of this discussion seemed hazy, with no codified approach; rather, different companies created their own frameworks and policies regarding AI ethics. A consensus on AI ethics issues is important because it helps policymakers create and adjust policies, and it also informs the work done by researchers and scholars. Beyond that, AI companies must know where ethical limits lie if they hope to avoid unethical AI implementations. In order to create a better picture of the trends in AI ethics, as VentureBeat reports, the Berkman Klein Center at Harvard University performed a meta-analysis of the various existing AI ethics principles and frameworks.

According to the authors of the analysis, the researchers wanted to compare the principles side by side to look for overlap and divergence. Jessica Fjeld, the assistant director of the Harvard Law School Cyberlaw Clinic, explained that the research team aimed to "uncover the hidden momentum in a fractured, global conversation around the future of AI," which "resulted in this white paper and the associated data visualization."

During the analysis, the team examined 36 different AI principle documents originating from around the world and from many different types of organizations. The research found that eight themes kept appearing across the documents.

Privacy and accountability were two of the most commonly appearing ethical themes, as was AI safety/security. Transparency/explainability was also a commonly cited goal, with many attempts made over the course of 2019 to render algorithms more explainable. Fairness/non-discrimination was another ethical focal point, reflecting growing concerns about data bias. Ensuring human control of technology, and not surrendering decision-making power to AI, was heavily mentioned as well. Professional responsibility was the seventh common theme found by the researchers. Finally, the researchers found continual mention of promoting human values in the AI ethics documentation they examined.

The research team gave qualitative and quantitative breakdowns of how these themes manifested themselves within AI ethics documentation in their paper and in an accompanying map, which displays where each of the themes was mentioned.

The research team noted that much of the AI ethics discussion revolved around concern for human values and rights. As the research paper notes:

“64% of our documents contained a reference to human rights, and five documents [14%] took international human rights as a framework for their overall effort.”

References to human rights and values were more common in documents produced by private sector groups and civil society groups. This suggests that private-sector AI companies aren't concerned only with profits but also with producing AI in an ethical way. Meanwhile, government agencies seem less concerned with, or aware of, AI ethics overall: less than half of the AI-related documents originating from government agencies concerned themselves with AI ethics.

The researchers also noted that the more recent documents they examined were more likely to address all eight of the most prominent themes, rather than just a few. This implies that the ideas behind what constitutes ethical AI usage are beginning to coalesce among those leading the discussion about AI ethics. Finally, the researchers state that the success of these principles in guiding the development of AI will depend on how well integrated they are into the AI development community at large. As the paper puts it:

“Moreover, principles are a starting place for governance, not an end. On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines.”


Ethics

What a Business AI Ethics Code Looks Like


By now, it’s safe to say that artificial intelligence (AI) has established itself in the mainstream, especially in the world of business. From customer service and marketing, to fraud detection and automation, this particular technology has helped streamline operations in recent years.

Unfortunately, our dependence on AI also means that it holds so much of our personal information – whether it’s our family history, the things we buy, places we go to, or even our favourite songs. Essentially, we’re giving technology free access to our lives. As AI continues to develop (and ask for even more data), it’s raising a lot of serious concerns.

For instance, when the South Wales Police rolled out its facial recognition systems, the force was immediately criticised for being too "intrusive." Then there's the issue of safety and where all that data really goes.

On top of this, AI is also facing other hurdles, such as public distrust born of the fear of robots driving people into mass unemployment. Case in point, across the Atlantic, HP reports that 72% of Americans are worried about a future where robots and computers can do human jobs. While the latter may be a bit far-fetched, especially since AI is still far from working or thinking like a human, you can't deny that the rapidly growing AI industry must be controlled better than it is now. According to Stanford professor Emma Brunskill, if we truly want "AI [to value] its human users and [justify] the trust we place in autonomous systems," then regulations have to be put in place. For that, businesses need to have an AI code of ethics.

AI Code of Ethics

The AI code of ethics isn’t meant for the AI itself, but for the people who develop and use said technology. Last year, the UK government published a report that aims to inform the public about the ethical use of AI. All in all, the report can be summarised in five principles:

1. AI must be created and used for the benefit of all. AI must be designed to help everyone and not just one faction. All involved parties – the government, businesses, and stakeholders, for example – must be present during its creation to make sure that everyone’s interests are properly represented.

2. AI should not be used to diminish the data rights or privacy of individuals, families, and communities. AI can collect large amounts of consumer data that could prove dangerous if it gets into the wrong hands. Measures should be taken to protect citizens’ and consumers’ privacy.

3. AI must operate within parameters understood by the human mind. To implement the necessary restrictions on an AI’s programming, the machine has to be designed in a way that humans can still understand. This is also necessary for educating other people on the ins and outs of the machine.

4. Everybody has the right to be educated on the nuances of AI. Knowledge of AI should be available to everyone, even those outside of the business world. Fortunately, there are plenty of online resources available to aid anyone who wants to learn, from online videos to extensive courses. These topics can range from machine learning and Python to R programming and Pandas – all of which are used in the development and implementation of AI. The abundance of such content shows just how accessible AI knowledge has become – and rightly so, given how ingrained AI is in today’s society.

5. Humans must be able to flourish mentally, emotionally, and economically alongside AI. There is no doubt that AI has hugely influenced employment and our workforce. Whether it’s for the best or not is debatable.

According to an employment survey published on Quartz, almost half of existing jobs are at high risk of being automated in the coming decade. If AI use is to remain ethical, businesses need to start creating new jobs to replace the ones threatened by AI.

New technologies such as AI are often a topic of concern, no matter what the benefits are. After all, we can't simply enjoy the convenience of technology without being critical of the possible repercussions. If all businesses implement these ethical principles, the public might be more accepting of them. This additional support may be what tech companies need to push the development of AI even further.


Ethics

Paper From Future Of Humanity Institute Argues That Companies Should Compensate Society For Jobs Lost To AI


Automation, and the loss of jobs that accompanies it, has been a major point of discussion in the AI field over the past couple of years, and it seems poised to become an even greater point of discussion in the coming decade. Current Democratic presidential candidate Andrew Yang has made job loss to automation a key issue of his platform. The Future of Humanity Institute, an AI think tank led by the philosopher Nick Bostrom, recently made a paper available for preview on arXiv. As ZDNet reports, the paper suggests that AI companies with excess profits should pay some amount of money beyond their normal taxes, money which would go towards ameliorating the societal damage from jobs lost to automation.

The authors write in the paper that there is a consensus among most AI researchers that the vast majority of human work can potentially be automated, and they also predict that by 2060 AI will be able to outperform humans at most tasks that contribute to economic activity. Because of this, the researchers suggest that a plan should be in place to mitigate the potentially harmful effects of automation, including job displacement, lowered wages, and the loss of whole job types.

The researchers suggest a sliding scale of obligation and remuneration, which depends on the profit of the company in relation to the gross world product. The payment could range anywhere from zero to 50% of the profit earned above the excess-profit threshold. The paper's authors offer the example of an internet company that makes around $5 trillion in excess profit in 2060 (in 2010 dollars) having to pay around $488.12 billion, assuming a gross world product of around $268 trillion.
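To make the idea of a sliding scale concrete, here is a minimal sketch in Python. The bracket thresholds and marginal rates below are hypothetical assumptions for illustration only; the paper only specifies that rates scale from zero to 50% on profit above thresholds defined relative to gross world product, so this sketch will not reproduce the $488.12 billion example exactly.

```python
# Illustrative sketch of a tiered Windfall Clause obligation.
# The brackets and marginal rates below are hypothetical, not the
# schedule from the paper; only the general idea (rates rising from
# 0% to 50% on profit above thresholds tied to gross world product)
# comes from the proposal.

def windfall_obligation(profit: float, gwp: float) -> float:
    """Return the hypothetical payment owed on `profit`, given gross world product `gwp`."""
    # (bracket start as a fraction of GWP, marginal rate applied within that bracket)
    brackets = [(0.01, 0.10), (0.02, 0.20), (0.05, 0.35), (0.10, 0.50)]
    owed = 0.0
    for i, (start_frac, rate) in enumerate(brackets):
        start = start_frac * gwp
        end = brackets[i + 1][0] * gwp if i + 1 < len(brackets) else float("inf")
        if profit > start:
            owed += (min(profit, end) - start) * rate
    return owed

# Example in trillions of dollars (illustrative numbers only):
print(windfall_obligation(profit=5.0, gwp=268.0))  # hypothetical payment owed, in trillions
```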

The researchers argue that a quantifiable metric of remuneration is something companies will be able to plan for, thereby reducing their risk. Companies could potentially bring the amount they pay into the "Windfall Clause" into alignment with their philanthropic giving through the process of discounting. For example, that hypothetical $488 billion could be discounted by at least 10%, the average cost of capital for an internet company, and then further discounted because of the low probability of actually earning the amount needed to trigger a payment that large. After discounting, the annual cost to a company that makes enough money to potentially pay in $488 billion would be around $649 million a year, approximately in line with the amount large companies spend on philanthropic giving. The researchers suggest thinking of the Windfall Clause as an extension of stock option compensation.
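The discounting logic can be sketched in the same spirit. Only the roughly 10% cost of capital comes from the article; the time horizon and trigger probability below are purely illustrative assumptions, so the output will not match the $649 million figure, but the structure of the calculation is the same: a large nominal obligation shrinks dramatically once it is discounted and weighted by how unlikely it is to ever come due.

```python
# Sketch of how a nominal windfall payment translates into a small
# expected annual cost once discounted and probability-weighted.
# The 40-year horizon and 5% trigger probability are hypothetical;
# only the ~10% discount rate is taken from the article.

def expected_annual_cost(nominal_payment: float, discount_rate: float,
                         years_until_payment: float, trigger_probability: float) -> float:
    """Probability-weighted present value of a far-future payment, spread evenly over the waiting period."""
    present_value = nominal_payment / ((1 + discount_rate) ** years_until_payment)
    return trigger_probability * present_value / years_until_payment

print(expected_annual_cost(488.12e9, 0.10, 40, 0.05))  # illustrative expected annual cost, in dollars
```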

The authors of the paper note that the plan may be easier to implement than an excess profit tax, as instituting such a tax would require convincing political majorities as well as companies, whereas the Windfall Clause only requires convincing individual companies to buy in. The Future of Humanity Institute researchers offer the paper in preview on arXiv in the spirit of generating discussion, acknowledging that for the plan to be feasible many aspects of it will still have to be worked out.
