Is the Risk of AI Worth the Reward?




When I reflect on the fictional content I have encountered involving AI, I would estimate that over 90% of it is dystopian. Ironically, because large language models are trained on content from the internet, they inherit not only society's biases but even our negative portrayals of AI itself. The concept of self-loathing AI is humorous and brings to mind Marvin from Hitchhiker's Guide to the Galaxy. However, it is one of many realities we must consider as AI is integrated into society.

In his book, Life 3.0: Being Human in the Age of AI, MIT professor Max Tegmark explains his perspective on how to keep AI beneficial to society. He writes, “If machine learning can help reveal relationships between genes, diseases and treatment responses, it could revolutionize personalized medicine, make farm animals healthier and enable more resilient crops. Moreover, robots have the potential to become more accurate and reliable surgeons than humans, even without using advanced AI.”

There is no doubt that AI will impact individuals, society, and global systems, but there is uncertainty associated with this impact. AI will be entrusted with delicate work such as healthcare diagnosis, autonomous driving, and financial decision-making. By taking on the risk of trust, we anticipate returns in the form of automation, improved productivity, speedier workflows, and user interfaces that we cannot even predict today.

One example of this can be seen in Thomson Reuters Institute's recently published 2024 Generative AI in Professional Services report, based on a global survey of 1,128 respondents qualified as being familiar with Generative AI technology. The research demonstrates a common theme of cautious optimism when it comes to adopting Generative AI in professional settings; in fact, 41% said they were excited because they expect increased efficiency and productivity.

This shows a healthy demand for automation that can create new efficiencies for professionals, a benefit they are eager to see realized.

No workplace or industry wants to be left behind. As the race toward leveraging AI in business continues to pick up momentum, you can expect employees and professionals to encounter these new technologies in a variety of ways that strengthen their future of work.

On the other hand, we are also hyper-aware of the potential risk we take on by entrusting AI with this work. Tegmark also writes in Life 3.0: "In other words, the real risk with AGI (artificial general intelligence) isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

Like any new technology, AI presents a new way of doing things, and change is often a challenge when you don't know what outcome to expect. Some of this risk is highly dramatized in fiction, which commonly depicts AI as misanthropic: in Silicon Valley, you'll at times hear joking references to "Skynet" from the Terminator film franchise in casual conversation about fears of AI. However, the reality of potential AI risk is much less exciting than what Hollywood presents: early AI systems may simply be inaccurate and buggy. After all, AI is software, and it shares all of the same pitfalls as traditional software.

As a researcher, I am constantly faced with the need to mitigate bias in AI algorithms, whether through careful data curation, algorithmic transparency, or robust testing protocols. The fact that we as humans are hyper-aware of the dangers of AI (as evidenced by the content we create) brings me comfort that significant attention is being paid towards ethical and responsible AI. This attention comes from stakeholders of all kinds: users, policymakers, and businesses are increasingly demanding transparency and accountability from AI systems.
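To make "robust testing protocols" concrete, here is a minimal sketch of one common bias-audit metric, the demographic parity difference, which compares positive-prediction rates across groups. The function name, toy data, and group labels below are illustrative assumptions, not taken from any real system or from my own work.

```python
# Hypothetical sketch of a simple bias audit: the demographic parity
# difference measures the gap in positive-prediction rates between groups.
# All names and data here are illustrative assumptions.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g., "A" or "B")
    """
    rates = {}
    for label in set(groups):
        # Collect predictions for members of this group and take the mean.
        members = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: group A receives positive predictions 75% of the time,
# group B only 25%, so the gap is 0.50.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_difference(preds, groups):.2f}")
```

A metric like this is only one piece of a testing protocol; in practice it would be tracked alongside other fairness measures and re-run whenever the model or data changes.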

It is a commonly held view that technology in the private sector moves fast while government moves slow. It is also a reality that, once it becomes possible, capitalism will result in AI displacing millions of workers, forcing them to learn new skills in order to stay in the workforce.

According to a 2023 research report from McKinsey Global Institute about Generative AI and the future of work in America, “By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated—a trend accelerated by generative AI. However, we see generative AI enhancing the way STEM, creative, and business and legal professionals work rather than eliminating a significant number of jobs outright. Automation’s biggest effects are likely to hit other job categories. Office support, customer service, and food service employment could continue to decline.”

It is difficult for me to imagine a world where the government does not play a role in helping these workers who will be displaced. Therefore, it is important that the public sector begin preparing solutions now. Examples of solutions include upskilling at-risk workers and providing a universal basic income. I also am hopeful that the private sector will play a role here, by creating new jobs that we may not be able to predict today.

Universal basic income has always been an exciting concept to me and brings to mind the phrase "don't live to work, work to live." Many people work to live. Call me Pollyannaish, but if this work is automatable, I believe it is more than a pipe dream that humanity could enter an era where work is optional. This is a totally foreign concept to us today, but that does not mean it is impossible. In fact, we should expect nothing short of extraordinary from a technology as extraordinary as AI.

A former quant and data scientist, Sarah Nagy founded an analytics automation startup, Seek AI, in September 2021. Sarah most recently led the consumer data team at Citadel's Ashler Capital, and prior to Citadel, led the quant arms at two successfully exited startups and developed algorithmic trading strategies at ITG. Sarah has a Master in Finance degree from Princeton and dual bachelor's degrees in Astrophysics and Business Economics from UCLA.