
Why Data Privacy Will Be 2024’s Defining Tech Issue


In the spotlight of the tech world, AI-driven chatbots like ChatGPT are attracting attention and reshaping industries as we know them. With each advancement, traditional roles edge closer to obsolescence: writers, marketing specialists, even IT experts find themselves on the chopping block. In June 2023 alone, a staggering 3,900 Americans lost their jobs to AI. Yet this disruption is merely a prelude to what lies ahead.

As AI continues its conquest across industries, apprehension is swelling over copyright infringement and privacy breaches. The question looms large: how can we strike a balance between progress and privacy?

Sparks of concern 

To start with, let me explain how AI models such as ChatGPT function. They generate output based on the data they learn from during training. If a model can produce text that reads as though Shakespeare wrote it, in the same literary style, that means it has already 'seen' such content during training, before its release.

In fact, the machine learning (ML) algorithms behind every AI model are trained on vast amounts of data in order to perform well. For instance, there are systems that help doctors with diagnostics: they analyze CT scans and flag abnormalities that can indicate specific diseases such as lung cancer. These systems are usually trained on millions of medical images; without that data, they could not recognize anomalies on the scans.

As demand for AI tools grows exponentially, tech giants are collecting ever larger amounts of data to train their models. Sometimes that data includes sensitive information about people and organizations, and it is often obtained by scraping millions of web pages without any agreement from their owners.

This sparks public concern about privacy, transparency, and control over personal information on the internet. A 2023 Deloitte survey found that the majority of respondents want more protection and control over how their data is used. Nearly nine in ten expressed a desire to view and delete collected data, and 80% felt they deserve compensation when companies profit from their data. In the US, citizens have grown more worried about how their data is used: about seven in ten adults (71%) share these concerns, up from 64% in 2019.

Legal battles 

Additionally, some organizations are taking these issues to court. According to Fortune, as of November 2023, more than 100 AI-related lawsuits were making their way through the legal system. These cases cover a range of concerns, including intellectual property disputes, the propagation of harmful content, and instances of discrimination.

Among these cases were lawsuits filed by artists who accused the developers of text-to-image deep learning models such as Stable Diffusion and Midjourney of using their digital art in AI training without consent. They argued that the companies behind these products had collected billions of images from the internet, including theirs, to train models that generate images of their own.

In December 2023, the major American newspaper The New York Times joined these legal battles by suing OpenAI, the developer behind ChatGPT, for copyright infringement. The lawsuit emphasized that millions of published articles were used to train automated chatbots, which now rival the news outlet as a source of dependable information.

Future issues 

In response to these pressing concerns, governments worldwide are rallying together to confront public anxieties. For example, representatives from twelve regulatory bodies globally issued a joint statement in August 2023, focusing on data scraping and privacy protection. The statement came from authorities in Argentina, Australia, Canada, Colombia, Hong Kong, Jersey, Mexico, Morocco, New Zealand, Norway, Switzerland, and the UK. Meanwhile, in California, the Delete Act was signed into law, targeting data brokers and establishing additional regulations for personal data collection and management. 

Despite these concerted efforts, I foresee privacy and data ownership concerns remaining prominent in public discourse throughout 2024 and beyond. Moreover, the wave of intellectual property lawsuits is merely the tip of the iceberg. We are likely to witness a surge in cases focused on data accuracy and safety, particularly amid the rampant proliferation of deepfakes and misinformation.

While both governmental and business sectors must redouble their efforts, a cautious approach is imperative. Despite escalating apprehensions, it's worth noting that open data continues to play a pivotal role in driving research and development forward. Take, for instance, the invaluable role public access to health records played during the COVID-19 crisis, expediting the development of life-saving medical breakthroughs such as the vaccines pioneered by Moderna and Pfizer.

The significance of open data is underscored by the historical example of the U.S. Human Genome Project, where public sharing of gene data transformed genetics research. In a similar manner, AI systems that analyze and learn from such data can benefit society, from categorizing genetic mutations to addressing pressing challenges like climate change.

In business, data collected by web scrapers is invaluable for market intelligence, competitor tracking, and recognizing prevailing trends. If web scraping becomes more restricted, companies may lose access to data that is vital for informed decision-making, potentially leading to reduced competition and pricing transparency, stifled innovation, and a subpar user experience due to slower and less accurate data updates.
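To make that use case concrete, here is a minimal, hypothetical sketch of price monitoring via web scraping in Python. The URL, CSS selectors, and page structure are assumptions for illustration rather than a reference to any real site, and any actual scraping should respect a site's terms of service and robots.txt.

```python
# Minimal sketch: collect product names and prices from a competitor's
# catalog page for trend tracking. URL and selectors are hypothetical.
import requests
from bs4 import BeautifulSoup

CATALOG_URL = "https://shop.example.com/laptops"  # placeholder page

response = requests.get(CATALOG_URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Assumed markup: each product sits in an element with class "product"
# containing a ".title" and a ".price" child.
for item in soup.select(".product"):
    name = item.select_one(".title").get_text(strip=True)
    price = item.select_one(".price").get_text(strip=True)
    print(f"{name}: {price}")
```

Sketches like this are what "market intelligence" pipelines reduce to in practice: periodic requests, lightweight parsing, and downstream analysis of the extracted fields.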

Yet amid these burgeoning challenges, there are ways to confront them head-on. I place my faith in proxies to help navigate these turbulent waters. Proxies mask users' original IP addresses by routing their traffic through intermediary servers. Not only do they bolster data security, they also serve as indispensable tools for harmonizing technological progress with the safeguarding of individual liberties.
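As a rough illustration of that routing, the sketch below sends an HTTP request through a proxy using Python's requests library. The proxy address and credentials are placeholders rather than a real endpoint, and the target URL simply echoes back the IP address the request appears to come from.

```python
# Minimal sketch: route an HTTP request through a proxy so the target site
# sees the proxy's IP address instead of the client's own.
import requests

# Hypothetical proxy endpoint; replace with credentials from your provider.
PROXY_URL = "http://user:password@proxy.example.com:8080"

proxies = {
    "http": PROXY_URL,
    "https": PROXY_URL,
}

# httpbin.org/ip reports the IP address the request arrived from,
# which should be the proxy's address, not the client's.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())
```

The same pattern underlies larger proxy networks: the client's identity stays behind the intermediary, while the request itself proceeds unchanged.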

William Belov is the CEO of Infatica, a leading global proxy network. His experience spans investments, mergers and acquisitions, and various technologies, all underpinned by a dedicated focus on business development. William holds two MD degrees and an EMBA.