China Leading the Global Expansion and Exportation of AI Technology

China is leading the world in the global expansion of AI surveillance technology, having exported it to more than 60 countries, many of which have dismal human rights records. The nations to which Chinese companies have exported the technology include Iran, Myanmar, Venezuela, and Zimbabwe.

According to a report released by the U.S. think tank the Carnegie Endowment for International Peace, many states are deploying advanced AI surveillance tools to monitor and track citizens. The report’s new index details the ways in which countries are doing this.

The report had several key findings, including that AI surveillance technology is spreading to other countries at a much faster rate than experts previously expected. At least 75 of 176 countries around the globe currently use AI technologies for surveillance: 56 use them in smart city/safe city platforms, 64 in facial recognition systems, and 52 in smart policing.

Another key finding was that China is a major provider of AI surveillance around the world. The technology is strongly linked to some of China’s biggest companies, such as Huawei, Hikvision, Dahua, and ZTE, which together supply 63 countries with AI surveillance capabilities. Thirty-six of those countries are part of China’s Belt and Road Initiative (BRI). Huawei alone provides AI surveillance technology to at least 50 countries worldwide. The next-biggest non-Chinese supplier, Japan’s NEC Corporation, provides it to just 14 countries.

China often extends soft loans to governments when pitching a product; the governments then use that money to purchase the product and equipment. This technique has been employed in countries such as Kenya, Laos, Mongolia, Uganda, and Uzbekistan, which would most likely not otherwise have access to the technology. The practice concerns many observers, and questions are being raised about how heavily the Chinese government is subsidizing the purchase of “advanced repressive technology.”

China is not alone in supplying AI surveillance technology; technology supplied by U.S. firms is currently present in 32 countries. Big-name U.S. suppliers include IBM (in 11 countries), Palantir (in 9), and Cisco (in 6). Outside of the U.S. and China, nations that call themselves liberal democracies, such as France, Germany, Israel, and Japan, are also home to companies responsible for exporting and proliferating the technology. According to the report, not enough is being done to monitor and control the potential hazards of the technology’s spread.

According to the index, 51 percent of advanced democracies deploy AI surveillance systems, compared with 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies. These numbers do not mean that all of these governments are abusing the technology, but the potential is there, and many are in fact doing just that.

Countries such as China, Russia, and Saudi Arabia are known to be exploiting AI technology for mass surveillance purposes, while other governments with bad human rights records are using it to reinforce repression. Specifically, the Communist Party in China is currently using facial recognition systems to target Uighurs and other Muslim minorities in the far western region of Xinjiang. 

The report also found a strong connection between a country’s military expenditures and its government’s use of AI surveillance systems: 40 of the world’s top 50 military spenders use AI surveillance technology.

The Carnegie Endowment report highlights dangers that experts once merely foreshadowed. Those dangers are now a reality, and AI technology is seen by many nations as an extremely efficient way to track and surveil people. While the spread of the technology will be hard to reverse, many still believe that international organizations and agreements need to start addressing the issues surrounding AI.

 

AI Ethics Principles Undergo Meta-Analysis, Human Rights Emphasized

In 2019, there was more focus on AI ethics than ever before. However, much of this discussion seemed hazy, with no codified approach; instead, different companies created their own frameworks and policies regarding AI ethics. Having a consensus on AI ethics issues is important because it helps policymakers create and adjust policies, and it informs the work done by researchers and scholars. Beyond that, AI companies must know where ethical limits are if they hope to avoid unethical AI implementations. To create a better picture of the trends in AI ethics, as VentureBeat reports, the Berkman Klein Center at Harvard University performed a meta-analysis of the various existing AI ethics principles and frameworks.

According to the authors of the analysis, the researchers wanted to compare the principles side-by-side to look for overlap and divergence. Jessica Fjeld, assistant director of the Harvard Law School Cyberlaw Clinic, explained that the research team wanted to “uncover the hidden momentum in a fractured, global conversation around the future of AI,” an effort that resulted in the white paper and its associated data visualization.

During the analysis, the team examined 36 different AI principles documents originating from around the world and from many different types of organizations. The research found eight themes that kept appearing across the documents.

Privacy and accountability were two of the most commonly appearing ethical themes, as was AI safety/security. Transparency/explainability was another commonly cited goal, reflecting the many attempts made over the course of 2019 to render algorithms more explainable. Fairness/non-discrimination was another ethical focal point, reflecting growing concerns about data bias. Ensuring human control of technology, rather than surrendering decision-making power to AI, was heavily mentioned as well. Professional responsibility was the seventh common theme found by the researchers. Finally, the researchers found continual mention of promoting human values in the AI ethics documentation they examined.

In their paper and an accompanying map, the research team gave qualitative and quantitative breakdowns of how these themes manifested within AI ethics documentation. The map displays where each theme was mentioned.

The research team observed that much of the AI ethics discussion revolved around concern for human values and rights. As the paper notes:

“64% of our documents contained a reference to human rights, and five documents [14%] took international human rights as a framework for their overall effort.”

References to human rights and values were more common in documents produced by private sector and civil society groups, suggesting that private sector AI companies are concerned not just with profits but also with producing AI in an ethical way. Government agencies, meanwhile, seem less concerned with or aware of AI ethics overall: fewer than half of the AI-related documents originating from government agencies concerned themselves with AI ethics.

The researchers also noted that the more recent a document was, the more likely it was to address all eight of the most prominent themes rather than just a few. This suggests that ideas about what constitutes ethical AI usage are beginning to coalesce among those leading the discussion. Finally, the researchers state that the success of these principles in guiding the development of AI will depend on how well integrated they are in the AI development community at large. As they write in the paper:

“Moreover, principles are a starting place for governance, not an end. On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines.”

What a Business AI Ethics Code Looks Like

By now, it’s safe to say that artificial intelligence (AI) has established itself in the mainstream, especially in the world of business. From customer service and marketing to fraud detection and automation, the technology has helped streamline operations in recent years.

Unfortunately, our dependence on AI also means that it holds so much of our personal information – whether it’s our family history, the things we buy, places we go to, or even our favourite songs. Essentially, we’re giving technology free access to our lives. As AI continues to develop (and ask for even more data), it’s raising a lot of serious concerns.

For instance, when the South Wales Police rolled out its facial recognition systems, the force was immediately questioned for being too “intrusive.” And of course, there’s the issue of safety and of where all that data really goes.

On top of this, AI faces other hurdles, such as public distrust born of the fear that robots will drive people into mass unemployment. Case in point: across the Atlantic, HP reports that 72% of Americans are worried about a future where robots and computers can do human jobs. While that fear may be a bit far-fetched, especially since AI is still far from working or thinking like a human, there is no denying that the rapidly growing AI industry must be controlled better than it is now. According to Stanford professor Emma Brunskill, if we truly want “AI [to value] its human users and [justify] the trust we place in autonomous systems,” then regulations have to be put in place. For that, businesses need an AI code of ethics.

AI Code of Ethics

The AI code of ethics isn’t meant for the AI itself, but for the people who develop and use the technology. Last year, the UK government published a report that aims to inform the public about the ethical use of AI. All in all, the report can be summarised into five principles:

1. AI must be created and used for the benefit of all. AI must be designed to help everyone, not just one faction. All involved parties – the government, businesses, and stakeholders, for example – must be present during its creation to make sure that everyone’s interests are properly represented.

2. AI should not be used to diminish the data rights or privacy of individuals, families, and communities. AI can collect large amounts of consumer data that could prove dangerous in the wrong hands. Measures should be taken to protect citizens’ and consumers’ privacy.

3. AI must operate within parameters understood by the human mind. To implement the necessary restrictions on AI’s programming, the machine has to be designed in a way that humans can still understand. This is also necessary for educating other people on the ins and outs of the machine.

4. Everybody has the right to be educated on the nuances of AI. Knowledge of AI should be available to everyone, even those outside the business world. Fortunately, there are plenty of online resources to aid anyone who wants to learn, from online videos to extensive courses, on topics ranging from machine learning and Python to R programming and Pandas – all of which are used in the development and implementation of AI. The prevalence of such content shows just how accessible AI knowledge has become – and rightly so, given how ingrained the technology is in today’s society.

5. Humans must be able to flourish mentally, emotionally, and economically alongside AI. There is no doubt that AI has hugely influenced employment and our workforce. Whether it’s for the best or not is debatable.

According to an employment survey published on Quartz, almost half of existing jobs are at high risk of being automated in the coming decade. If AI use is to remain ethical, businesses need to start creating new jobs to replace the ones it threatens.

New technologies such as AI are often a topic of concern, no matter what the benefits are. After all, it’s not enough to enjoy the convenience of technology without being critical of the possible repercussions. If all businesses implement these ethical principles, then the public might be more accepting of them. This additional support may be what tech companies need to push the development of AI even further.

Paper From Future Of Humanity Institute Argues That Companies Should Compensate Society For Jobs Lost To AI

Automation, and the job losses that come with it, has been a major point of discussion in the AI field over the past couple of years, and seems poised to become an even greater point of discussion in the coming decade. Current Democratic presidential candidate Andrew Yang has made job loss to automation a key issue of his platform. The Future of Humanity Institute, an AI think tank led by the philosopher Nick Bostrom, recently made a paper available for preview on arXiv. As ZDNet reports, the paper suggests that AI companies earning excess profits should pay some amount of money beyond their normal taxes, money which would go towards ameliorating the societal damage from jobs lost to automation.

The paper’s authors write that there is a consensus among most AI researchers that the vast majority of human work can potentially be automated, and they predict that by 2060 AI will be able to outperform humans at most tasks that contribute to economic activity. Because of this, they suggest that a plan should be in place to mitigate the potentially harmful effects of automation, including job displacement, lowered wages, and the loss of whole job types.

The researchers suggest a sliding scale of obligation, dependent upon a company’s profit relative to gross world product. The payment could range anywhere from zero to 50% of the profit above the threshold of excess profit. The paper’s authors offer an example of an internet company that makes around $5 trillion in excess profit in 2060 (in 2010 dollars) having to pay around $488.12 billion, assuming a gross world product of $268 trillion.
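To make the sliding scale concrete, here is a minimal Python sketch of a marginal windfall function of the kind described above. The bracket thresholds and rates are invented for illustration, since the article does not reproduce the paper’s actual schedule, so the output will not match the $488.12 billion example; only the overall shape, with marginal rates rising from zero toward a 50% cap as profit grows relative to gross world product, reflects the proposal.

```python
# Hypothetical sketch of a marginal "windfall function" in the spirit of
# the proposal. The thresholds and rates below are made up for
# illustration; the paper's actual schedule differs.

def windfall_obligation(profit, gross_world_product):
    """Return the payment owed on `profit`, using marginal rates that
    rise with profit as a share of gross world product (GWP)."""
    # (upper threshold as a fraction of GWP, marginal rate) -- hypothetical
    brackets = [(0.001, 0.00),          # below 0.1% of GWP: nothing owed
                (0.01, 0.10),           # 0.1%-1% of GWP: 10% marginal rate
                (0.10, 0.30),           # 1%-10% of GWP: 30% marginal rate
                (float("inf"), 0.50)]   # beyond that: capped at 50%
    owed = 0.0
    lower = 0.0
    for threshold, rate in brackets:
        upper = threshold * gross_world_product
        if profit > lower:
            owed += (min(profit, upper) - lower) * rate
        lower = upper
    return owed

# Illustrative run: $5 trillion in excess profit against a $268 trillion
# gross world product (the figure printed depends on the made-up brackets).
print(f"${windfall_obligation(5e12, 268e12) / 1e9:.2f} billion owed")
```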

The researchers argue that a quantifiable metric of remuneration is something companies can plan for, allowing them to reduce risk. Companies could potentially bring the amount they pay under the “Windfall Clause” into line with their philanthropic giving through discounting. For example, that hypothetical $488 billion could be discounted by an internet company’s average cost of capital of at least 10%, and then discounted further because of the low probability of ever earning enough to trigger a payment that large. After discounting, the annual cost to a company that makes enough money to potentially owe $488 billion would be around $649 million a year, approximately in line with what large companies spend on philanthropic giving. The researchers suggest thinking of the Windfall Clause as an extension of stock option compensation.
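As a rough back-of-the-envelope check on those numbers, the sketch below discounts the $488.12 billion payment at the 10% cost of capital mentioned above. The 50-year horizon (2010 dollars to a 2060 payment) and the 15.6% probability of ever owing the payment are assumptions chosen here to land near the article’s $649 million figure; the paper’s own discounting procedure may differ.

```python
# Back-of-the-envelope discounting of the hypothetical windfall payment.
# Only the $488.12B payment and the 10% rate come from the article; the
# horizon and probability below are illustrative assumptions.

nominal_payment = 488.12e9  # hypothetical 2060 payment, in 2010 dollars
cost_of_capital = 0.10      # assumed average cost of capital (from article)
years = 50                  # assumed horizon: 2010 dollars to a 2060 payment
p_windfall = 0.156          # assumed probability of ever owing the payment

# Discount for the time value of money, then for the low odds of a windfall.
present_value = nominal_payment / (1 + cost_of_capital) ** years
expected_cost = present_value * p_windfall

print(f"Present value of payment: ${present_value / 1e9:.2f} billion")
print(f"Probability-weighted cost: ${expected_cost / 1e6:.0f} million")
# -> roughly $4.16 billion and $649 million, respectively
```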

The authors note that the plan may be easier to implement than an excess profits tax, as instituting such a tax would require convincing political majorities as well as companies, whereas the Windfall Clause only requires convincing individual companies to buy in. The Future of Humanity Institute researchers offer the paper in preview on arXiv in the spirit of generating discussion, acknowledging that many aspects of the plan will have to be considered further for it to be feasible.
