The Pentagon Is Developing New AI Technology

The United States Defense Department is stepping up its efforts to develop artificial intelligence (AI) technology. This comes at a time when other nations around the world, especially China, are racing ahead in the AI arms race.

The Pentagon is developing an AI-powered drone swarm able to operate independently and capable of identifying and tracking targets. Other efforts include intelligence fusion, “all-domain” command and control, and autonomous systems.

The Pentagon’s Joint Artificial Intelligence Center, or JAIC, has asked AI developers and drone swarm builders to collaborate on search and rescue missions. Search and rescue falls under humanitarian assistance and disaster relief, one of the JAIC’s four core research areas. The same program also develops AI solutions for predictive maintenance, cyberspace operations, and robotic process automation.

The request for information (RFI) seeks a full-stack, self-piloting search and rescue drone swarm. The Pentagon wants the swarm to detect humans and other targets and to transmit data and video back to a central location. It is also seeking algorithms developed by companies or teams, along with machine training processes and data to supplement what the government already has.

If everything works out as planned, the government would have a contract with multiple vendors “that together could provide the capability to fly to a predetermined location/area, find people and manmade objects–through onboard edge processing–and cue analysts to look at detections sent via a data link to a control station,” as written in the RFI. “Sensors shall be able to stream full motion video to an analyst station during the day or night; though, the system will not normally be streaming as the AI will be monitoring the imagery instead of a person.”

The system being sought would need enough onboard processing power for the AI to operate without human intervention. It should be capable of detecting and monitoring targets, as well as streaming live video to a human operator, who could then take control of the drones.

These developments come at a time when the Pentagon is diving further into the world of artificial intelligence (AI). 

Back in October, the Defense Innovation Board, a Pentagon advisory organization, published a list of ethical principles to guide the development of AI-enabled weapons. These guidelines are meant to help control how such weapons are used on the battlefield. The board’s recommendations are not legally binding, and the Pentagon is free to decide whether or not to adopt them.

Lt. Gen. Jack Shanahan, director of the Defense Department’s Joint Artificial Intelligence Center, said he hopes the recommendations will lead to the responsible and ethical use of AI.

“The DIB’s recommendations will help enhance the DOD’s commitment to upholding the highest ethical standards as outlined in the DOD AI strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations,” Shanahan said in a statement emailed to reporters.

Back in 2017, 116 technology executives asked the United Nations to pursue an all-out ban on autonomous weapons. Google later pledged not to allow its AI to be used in weapons systems, after employees complained about the company’s role in a program to analyze drone footage. Companies such as Microsoft and Amazon are currently working with the military while pushing for a more measured approach to the technology.

 

AI Ethics Principles Undergo Meta-Analysis, Human Rights Emphasized

In 2019, there was more focus on AI ethics than ever before. However, much of the discussion seemed hazy, with no codified approach; instead, different companies created their own frameworks and policies regarding AI ethics. Having a consensus on AI ethics issues matters because it helps policymakers create and adjust policies, and it informs the work done by researchers and scholars. Beyond that, AI companies must know where the ethical limits are if they hope to avoid unethical AI implementations. To create a better picture of the trends in AI ethics, as VentureBeat reports, the Berkman Klein Center at Harvard University performed a meta-analysis of the various existing AI ethics principles and frameworks.

According to the authors of the analysis, the researchers wanted to compare the principles side by side to look for overlap and divergence. Jessica Fjeld, the assistant director of the Harvard Law School Cyberlaw Clinic, explained that the team’s effort to “uncover the hidden momentum in a fractured, global conversation around the future of AI” resulted in the white paper and the associated data visualization.

During the analysis, the team examined 36 different AI principle documents originating from around the world and from many different types of organizations. The analysis found eight themes that appeared repeatedly across the documents.

Privacy and accountability were two of the most commonly appearing ethical themes, as was AI safety/security. Transparency/explainability was another commonly cited goal, reflecting the many attempts to make algorithms more explainable over the course of 2019. Fairness/non-discrimination was another ethical focal point, reflecting growing concerns about data bias. Ensuring human control of technology, and not surrendering decision-making power to AI, was heavily mentioned as well. Professional responsibility was the seventh common theme found by the researchers. Finally, the researchers found continual mention of promoting human values in the AI ethics documentation they examined.

The research team gave qualitative and quantitative breakdowns of how these themes manifested within AI ethics documentation in their paper and in an accompanying map. The map displays where each of the themes was mentioned.

The research team noted that much of the AI ethics discussion revolved around concern for human values and rights. As the research paper notes:

“64% of our documents contained a reference to human rights, and five documents [14%] took international human rights as a framework for their overall effort.”

References to human rights and values were more common in documents produced by private sector and civil society groups. This suggests that private sector AI companies aren’t concerned only with profits but also with producing AI in an ethical way. Meanwhile, government agencies seem less concerned with, or aware of, AI ethics overall, with fewer than half of the documents originating from government agencies addressing these concerns.

The researchers also noted that the more recent documents were more likely to address all eight of the most prominent themes rather than just a few. This implies that the ideas behind what constitutes ethical AI usage are beginning to coalesce among those leading the discussion. Finally, the researchers state that the success of these principles in guiding the development of AI will depend on how well they are integrated into the AI development community at large. The researchers state in the paper:

“Moreover, principles are a starting place for governance, not an end. On its own, a set of principles is unlikely to be more than gently persuasive. Its impact is likely to depend on how it is embedded in a larger governance ecosystem, including for instance relevant policies (e.g. AI national plans), laws, regulations, but also professional practices and everyday routines.”

What a Business AI Ethics Code Looks Like

By now, it’s safe to say that artificial intelligence (AI) has established itself in the mainstream, especially in the world of business. From customer service and marketing to fraud detection and automation, the technology has helped streamline operations in recent years.

Unfortunately, our dependence on AI also means that it holds so much of our personal information – whether it’s our family history, the things we buy, places we go to, or even our favourite songs. Essentially, we’re giving technology free access to our lives. As AI continues to develop (and ask for even more data), it’s raising a lot of serious concerns.

For instance, when the South Wales Police rolled out its facial recognition systems, the force was immediately questioned for being too “intrusive.” There is also the question of safety and of where all that data really goes.

On top of this, AI faces other hurdles, such as public distrust born of the fear that robots will drive people into mass unemployment. Case in point: across the Atlantic, HP reports that 72% of Americans are worried about a future where robots and computers can do human jobs. While that fear may be a bit far-fetched, especially since AI is still far from working or thinking like a human, you can’t deny that the rapidly growing AI industry must be controlled better than it is now. According to Stanford professor Emma Brunskill, if we truly want “AI [to value] its human users and [justify] the trust we place in autonomous systems,” then regulations have to be put in place. For that, businesses need an AI code of ethics.

AI Code of Ethics

An AI code of ethics isn’t meant for the AI itself, but for the people who develop and use the technology. Last year, the UK government published a report that aims to inform the public about the ethical use of AI. All in all, the report can be summarised in five principles:

1. AI must be created and used for the benefit of all. AI must be designed to help everyone and not just one faction. All relevant parties – the government, businesses, and stakeholders, for example – must be involved in its creation to make sure that everyone’s interests are properly represented.

2. AI should not be used to diminish the data rights or privacy of individuals, families, or communities. AI can collect large amounts of consumer data that could prove dangerous if it falls into the wrong hands. Measures should be taken to protect citizens’ and consumers’ privacy.

3. AI must operate within parameters understood by the human mind. To implement the necessary restrictions on AI’s programming, the machine has to be designed in a way that humans can still understand. This is also necessary for educating others on the ins and outs of the system.

4. Everybody has the right to be educated on the nuances of AI. Knowledge of AI should be available to everyone, even those outside the business world. Fortunately, there are plenty of online resources available to aid anyone who wants to learn, from online videos to extensive courses. These topics can range from machine learning and Python to R programming and Pandas – all of which are used in the development and implementation of AI. The abundance of such content shows just how accessible AI knowledge has become – and rightly so, given how ingrained the technology is in today’s society.

5. Humans must be able to flourish mentally, emotionally, and economically alongside AI. There is no doubt that AI has hugely influenced employment and our workforce. Whether it’s for the best or not is debatable.

According to an employment survey published on Quartz, almost half of existing jobs are at high risk of being automated in the coming decade. If the use of AI is to remain ethical, businesses need to start creating new jobs to replace the ones threatened by automation.

New technologies such as AI are often a source of concern, no matter what the benefits are. After all, it’s not enough to enjoy the convenience of technology without being critical of the possible repercussions. If businesses implement these ethical principles, the public might be more accepting of AI. That additional support may be what tech companies need to push the development of AI even further.

Paper From Future Of Humanity Institute Argues That Companies Should Compensate Society For Jobs Lost To AI

Automation, and the job losses that come with it, has been a major point of discussion in the AI field over the past couple of years, and it seems poised to become an even greater point of discussion in the coming decade. Current Democratic presidential candidate Andrew Yang has made job loss to automation a key issue of his platform. The Future of Humanity Institute, an AI think tank led by the philosopher Nick Bostrom, recently made a paper available in preview on arXiv. As ZDNet reports, the paper suggests that AI companies with excess profits should pay some amount of money beyond their normal taxes, money which would go towards ameliorating the societal damage from jobs lost to automation.

The paper’s authors write that there is consensus among most AI researchers that the vast majority of human work can potentially be automated, and they predict that by 2060 AI will be able to outperform humans at most tasks that contribute to economic activity. Because of this, they suggest that there should be a plan in place to mitigate the potentially harmful effects of automation, including job displacement, lowered wages, and the loss of whole job types.

The researchers suggest a scale of obligation and remuneration that depends on a company’s profit relative to gross world product. The obligation could range anywhere from zero to 50% of profit above the excess-profit threshold. The paper’s authors offer the example of an internet company that makes around $5 trillion in profit in 2060 (in 2010 dollars) having to pay around $488.12 billion, assuming a gross world product of $268 trillion.
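To see how a figure like that could arise, here is a minimal Python sketch of a bracketed windfall function. The article does not spell out the marginal-rate schedule, so the brackets below are an assumption drawn from the paper’s illustrative example; under that assumption, the calculation reproduces the roughly $488.12 billion obligation for $5 trillion in profit against a $268 trillion gross world product.

def windfall_obligation(profit, gwp):
    # Assumed marginal rates (not stated in the article): 0% on profit below
    # 0.1% of gross world product (GWP), 1% on profit between 0.1% and 1% of GWP,
    # 20% between 1% and 10% of GWP, and 50% above 10% of GWP.
    brackets = [
        (0.001 * gwp, 0.00),
        (0.010 * gwp, 0.01),
        (0.100 * gwp, 0.20),
        (float("inf"), 0.50),
    ]
    obligation = 0.0
    lower = 0.0
    for upper, rate in brackets:
        if profit > lower:
            # Tax only the slice of profit that falls inside this bracket.
            obligation += (min(profit, upper) - lower) * rate
        lower = upper
    return obligation

# Example from the article, in trillions of 2010 dollars:
# $5 trillion in profit, $268 trillion gross world product.
print(windfall_obligation(5.0, 268.0))  # 0.48812, i.e. about $488.12 billion

Because the payment is defined by marginal rates on profit above each threshold, a company can estimate its potential obligation in advance, which is the planning benefit the researchers describe below.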

The researchers argue that a quantifiable metric of remuneration is something companies can plan for, which reduces their risk. Companies could potentially bring the amount they pay under the “Windfall Clause” into alignment with their philanthropic giving through discounting. For example, that hypothetical $488 billion could be discounted by at least 10% – the average cost of capital for an internet company – and then discounted further because of the low probability of ever earning enough to owe a payment that large. After discounting, the annual cost to a company that makes enough money to potentially pay in $488 billion would be around $649 million a year, approximately in line with what large companies spend on philanthropic giving. The researchers suggest thinking of the Windfall Clause as an extension of stock option compensation.

The authors note that the plan may be easier to implement than an excess profits tax, as instituting such a tax would require convincing political majorities and companies, whereas the Windfall Clause only requires convincing individual companies to buy in. The Future of Humanity Institute researchers offer the paper in preview on arXiv in the spirit of generating discussion, acknowledging that many topics and aspects of the plan will have to be worked out for it to be feasible.
