Paper From Future of Humanity Institute Argues That Companies Should Compensate Society For Jobs Lost To AI

Automation, and the job losses that accompany it, has been a major point of discussion in the AI field over the past couple of years, and it seems poised to become an even greater point of discussion in the coming decade. Current Democratic presidential candidate Andrew Yang has made job loss to automation a key issue of his platform. The Future of Humanity Institute, an AI think tank led by the philosopher Nick Bostrom, recently made a paper available for preview on arXiv. As ZDNet reports, the paper suggests that AI companies earning excess profits should pay some amount of money beyond their normal taxes, money which would go towards ameliorating the societal damage from jobs lost to automation.

The paper's authors write that there is broad consensus among AI researchers that the vast majority of human work can potentially be automated, and they predict that by 2060 AI will be able to outperform humans at most tasks that contribute to economic activity. Because of this, the researchers suggest that a plan should be in place to mitigate the potentially harmful effects of automation, including job displacement, lowered wages, and the loss of entire job categories.

The researchers suggest a sliding scale of obligation, dependent on the company's profit relative to gross world product. The marginal rate could range anywhere from zero to 50% of profit above the excess-profit threshold. The paper's authors offer an example of an internet company that makes around $5 trillion in excess profit in 2060 (in 2010 dollars) having to pay around $488.12 billion, assuming a gross world product of $268 trillion.
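
To make the arithmetic concrete, here is a minimal sketch of how a marginal schedule of this kind could be computed. The bracket thresholds and rates below are assumptions chosen to be consistent with the paper's example figure, not necessarily the exact function the authors propose.

```python
# Illustrative marginal "windfall" schedule; thresholds are fractions of gross world product (GWP).
ASSUMED_BRACKETS = [
    (0.001, 0.00),  # profit below 0.1% of GWP: no obligation
    (0.01,  0.01),  # 0.1% to 1% of GWP: 1% marginal rate
    (0.10,  0.20),  # 1% to 10% of GWP: 20% marginal rate
    (1.00,  0.50),  # above 10% of GWP: 50% marginal rate
]

def windfall_obligation(profit: float, gwp: float) -> float:
    """Payment owed under the assumed marginal schedule (same currency units as the inputs)."""
    owed, lower = 0.0, 0.0
    for upper_fraction, rate in ASSUMED_BRACKETS:
        upper = upper_fraction * gwp
        if profit > lower:
            owed += (min(profit, upper) - lower) * rate
        lower = upper
    return owed

# The paper's example: $5 trillion in profit against a $268 trillion gross world product.
print(windfall_obligation(5e12, 268e12) / 1e9)  # roughly 488.12 (in billions of dollars)
```

Under these assumed brackets, the first $268 billion of profit owes nothing, the tranche from there up to 1% of gross world product ($2.68 trillion) is charged at a 1% marginal rate, and the remainder up to $5 trillion at 20%, which together reproduce the roughly $488 billion figure cited above.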

The researchers argue that a quantifiable metric of remuneration is something companies can plan for, which reduces their risk. Companies could potentially bring the amount they commit under the "Windfall Clause" into alignment with their existing philanthropic giving through discounting. For example, that hypothetical $488 billion obligation could be discounted by the average cost of capital for an internet company (at least 10%), and then discounted further to account for the low probability of ever earning enough to trigger a payment that large. After discounting, the expected annual cost to a company large enough to potentially owe $488 billion would be around $649 million a year, roughly in line with what large companies already spend on philanthropic giving. The researchers suggest thinking of the Windfall Clause as an extension of stock option compensation.

The authors note that the plan may be easier to implement than an excess profits tax: instituting such a tax would require convincing political majorities as well as companies, whereas the Windfall Clause only requires convincing individual companies to buy in. The Future of Humanity Institute researchers offer the paper in preview on arXiv in the spirit of generating discussion, acknowledging that many aspects of the plan will have to be worked through before it is feasible.

Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.

AI Ethics Principles Undergo Meta-Analysis, Human Rights Emphasized

In 2019, there was more focus on AI ethics than ever before. However, much of this discussion seemed hazy, with no codified approach; instead, different companies created their own frameworks and policies for AI ethics. Consensus on AI ethics issues matters because it helps policymakers create and adjust policies, and it informs the work done by researchers and scholars. Beyond that, AI companies must know where the ethical limits are if they hope to avoid unethical AI implementations. In order to create a better picture of the trends in AI ethics, as VentureBeat reports, the Berkman Klein Center at Harvard University performed a meta-analysis of the various existing AI ethics principles and frameworks.

According to the authors of the analysis, the goal was to compare the principles side by side and look for overlap and divergence. Jessica Fjeld, the assistant director of the Harvard Law School Cyberlaw Clinic, explained that the team wanted to "uncover the hidden momentum in a fractured, global conversation around the future of AI," an effort that resulted in the white paper and an associated data visualization.

During the analysis, the team examined 36 different AI principle documents from around the world, produced by many different types of organization. The research found eight themes that kept appearing across the documents.

Privacy and accountability were two of the most commonly appearing ethical themes, as was AI safety and security. Transparency and explainability was another commonly cited goal, reflecting the many attempts made over the course of 2019 to render algorithms more explainable. Fairness and non-discrimination was a further focal point, reflecting growing concern about data bias. Ensuring human control of technology, rather than surrendering decision-making power to AI, was heavily mentioned as well, and professional responsibility was the seventh common theme found by the researchers. Finally, the researchers found continual mention of promoting human values in the AI ethics documentation they examined.

The research team gave qualitative and quantitative breakdowns of how these themes manifest themselves within AI ethics documentation, both in the paper and in an accompanying map that displays where each theme is mentioned.

The research team noted that much of the AI ethics discussion revolved around concern for human values and rights. As the research paper notes:

“64% of our documents contained a reference to human rights, and five documents [14%] took international human rights as a framework for their overall effort.”

References to human rights and values were more common in documents produced by private sector and civil society groups, which suggests that private sector AI companies are concerned not just with profits but with producing AI in an ethical way. Government agencies, by contrast, appear less concerned with or aware of AI ethics overall: fewer than half of the AI-related documents originating from government agencies concerned themselves with AI ethics.

The researchers also noted that the more recent a document was, the more likely it was to address all eight of the most prominent themes rather than just a few. This implies that ideas about what constitutes ethical AI use are beginning to coalesce among those leading the discussion. Finally, the researchers state that the success of these principles in guiding the development of AI will depend on how well integrated they are into the AI development community at large. As the paper puts it:

“Moreover, principles are a starting place for governance, not an end. On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines.”

What a Business AI Ethics Code Looks Like

By now, it’s safe to say that artificial intelligence (AI) has established itself in the mainstream, especially in the world of business. From customer service and marketing to fraud detection and automation, the technology has helped streamline operations in recent years.

Unfortunately, our dependence on AI also means that it holds so much of our personal information – whether it’s our family history, the things we buy, places we go to, or even our favourite songs. Essentially, we’re giving technology free access to our lives. As AI continues to develop (and ask for even more data), it’s raising a lot of serious concerns.

For instance, when South Wales Police rolled out its facial recognition system, it was immediately questioned for being too “intrusive,” raising the issue of safety and of where all that data really goes.

On top of this, AI faces other hurdles, such as public distrust born from the fear of robots driving people into mass unemployment. Case in point: across the Atlantic, HP reports that 72% of Americans are worried about a future where robots and computers can do human jobs. While that fear may be a bit far-fetched, especially since AI is still far from working or thinking like a human, you can’t deny that the rapidly growing AI industry must be controlled better than it is now. According to Stanford professor Emma Brunskill, if we truly want “AI [to value] its human users and [justify] the trust we place in autonomous systems,” then regulations have to be put in place. For that, businesses need an AI code of ethics.

AI Code of Ethics

The AI code of ethics isn’t meant for the AI itself, but for the people who develop and use the technology. Last year, the UK government published a report that aims to inform the public about the ethical use of AI. All in all, the report can be summarised into five principles:

1. AI must be created and used for the benefit of all. AI must be designed to help everyone and not just one faction. All involved parties – the government, businesses, and other stakeholders, for example – must be present during its creation to make sure that everyone’s interests are properly represented.

2. AI should not be used to diminish the data rights or privacy of individuals, families, and communities. AI can collect large amounts of consumer data that could prove dangerous if it falls into the wrong hands. Measures should be taken to protect citizen and consumer privacy.

3. AI must operate within parameters understood by the human mind. To implement the necessary restrictions on AI’s programming, the machine has to be designed in a way that humans can still understand. This is also necessary for educating other people on the ins and outs of the machine.

4. Everybody has the right to be educated on the nuances of AI. Knowledge of AI should be available to everyone, even those outside of the business world. Fortunately, there are plenty of online resources available to anyone who wants to learn, from short videos to extensive courses. These cover topics ranging from machine learning and Python to R programming and Pandas – all of which are used in the development and implementation of AI. The abundance of such content shows just how accessible AI knowledge has become – and rightly so, given how ingrained the technology is in today’s society.

5. Humans must be able to flourish mentally, emotionally, and economically alongside AI. There is no doubt that AI has hugely influenced employment and our workforce. Whether it’s for the best or not is debatable.

According to an employment survey published on Quartz, almost half of existing jobs are at high risk of being automated in the coming decade. If AI use is to remain ethical, businesses need to start creating new jobs to replace the ones threatened by automation.

New technologies such as AI are often a topic of concern, no matter what the benefits are. After all, it’s not enough to enjoy the convenience of technology without being critical of the possible repercussions. If all businesses implement these ethical principles, then the public might be more accepting of them. This additional support may be what tech companies need to push the development of AI even further.

Startups Creating AI Tools To Detect Email Harassment

Since the Me Too movement came to prominence in late 2017, more and more attention has been paid to incidents of sexual harassment, including workplace harassment and harassment carried out through email or instant messaging.

As reported by The Guardian, AI researchers and engineers have been creating tools, dubbed MeTooBots, to detect harassment in text communications. MeTooBots are being implemented by companies around the world in order to flag potentially harmful and harassing communications. One example is a bot created by the company Nex AI, which is currently being used by around 50 different companies. The bot uses an algorithm that examines company documents, chat, and email and compares them to its training data of bullying or harassing messages. Messages deemed potentially harassing or harmful can then be sent to an HR manager for review, although Nex AI has not revealed the specific terms the bot looks for across the communications it analyzes.
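
Nex AI has not disclosed its method, but the workflow described above (score each message against a model trained on examples of harassing language, then route anything above a threshold to HR) can be sketched roughly as follows. The training examples, model choice, threshold, and function names here are illustrative assumptions, not Nex AI's implementation.

```python
# Rough sketch of a flag-and-review pipeline; the tiny training set, model choice,
# and threshold are illustrative assumptions rather than any vendor's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labelled examples; a real system would train on a large annotated corpus.
train_texts = [
    "great work on the quarterly report",
    "see you at the team meeting tomorrow",
    "you looked good today, come by my hotel room",
    "wear something nicer for me next time",
]
train_labels = [0, 0, 1, 1]  # 1 = example of bullying or harassing language

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def flag_for_review(messages, threshold=0.5):
    """Return (message, score) pairs whose estimated probability of harassment
    meets the threshold; anything returned would go to an HR manager for review."""
    scores = model.predict_proba(messages)[:, 1]
    return [(msg, round(score, 2)) for msg, score in zip(messages, scores) if score >= threshold]

print(flag_for_review(["wear something nicer for me", "thanks for sending the report"]))
```

The important design point is the last step: the model only surfaces candidates, and a human reviewer makes the actual judgment.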

Other startups have also created AI-powered harassment detection tools. The AI startup Spot has built a chatbot that lets employees anonymously report allegations of sexual harassment. The bot asks questions and gives advice in order to collect more details and further an investigation into the incident. Spot wants to help HR teams deal with harassment issues in a sensitive manner while ensuring anonymity is preserved.

According to The Guardian, Prof. Brian Subirana, an AI professor at MIT and Harvard, explained that attempts to use AI to detect harassment have their limitations. Harassment can be very subtle and hard to pick up, frequently manifesting only as a pattern that reveals itself across weeks of data. Bots also can’t yet go beyond detecting certain trigger words to analyze the broader interpersonal or cultural dynamics that could be at play. Despite these complexities, Subirana does believe bots could play a role in combating online harassment. He could see them being used to train people to recognize harassment when they see it, building a database of potentially problematic messages. He also suggested there could be a placebo effect that makes people less likely to harass their colleagues if they suspect their messages are being scrutinized, even when they aren’t.

While Subirana believes bots have potential uses in combating harassment, he also argued that the confidentiality of data and privacy are major concerns, and that such technology could create an atmosphere of distrust and suspicion if misused. Sam Smethers, the chief executive of the women’s rights NGO the Fawcett Society, also expressed concern about how the bots could be misused. Smethers stated:

“We would want to look carefully at how the technology is being developed, who is behind it, and whether the approach taken is informed by a workplace culture that is seeking to prevent harassment and promote equality, or whether it is in fact just another way to control their employees.”

Methods of using bots to detect harassment while still protecting anonymity and privacy will have to be worked out between bot developers, companies, and regulators. One possible way to use the predictive power of bots and AI while safeguarding privacy is to keep communications anonymous. For instance, the bot could generate reports that include only the presence of potentially harmful language and counts of how often it appears. HR could then see whether uses of toxic language are dropping following awareness seminars or, conversely, determine whether they should be on the lookout for increased harassment.
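
To picture what such an anonymised report might look like, here is a minimal sketch that keeps only per-month counts of flagged messages and discards the message text and sender identities entirely. The placeholder detector and sample data are assumptions for illustration, not any vendor's actual product.

```python
# Illustrative sketch of aggregate, anonymised reporting of flagged messages.
from collections import Counter
from datetime import date

def looks_harassing(message: str) -> bool:
    """Placeholder detector; a real system would use a trained classifier."""
    assumed_trigger_terms = ("come to my hotel room", "wear something nicer")
    return any(term in message.lower() for term in assumed_trigger_terms)

def monthly_flag_counts(messages):
    """messages: iterable of (date, text) pairs. Returns a count of flagged
    messages per (year, month), keeping no message text and no sender identities."""
    counts = Counter()
    for when, text in messages:
        if looks_harassing(text):
            counts[(when.year, when.month)] += 1
    return counts

# HR would see only the trend, e.g. whether counts fall after an awareness seminar.
sample = [
    (date(2020, 1, 10), "Wear something nicer for me next time."),
    (date(2020, 2, 3), "Minutes from today's meeting attached."),
]
print(monthly_flag_counts(sample))  # Counter({(2020, 1): 1})
```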

Despite disagreement over the appropriate uses of machine learning algorithms and bots in detecting harassment, both sides seem to agree that the ultimate decision to intervene in a case of harassment should be made by a human, and that bots should only ever alert people to matched patterns rather than stating definitively that something was an instance of harassment.
