The speed with which artificial intelligence (AI), generative AI (GenAI) and large language models (LLMs) are being adopted is producing an increase in risks and unintended consequences relating to data privacy and data ethics.
New LLMs are processing vast swathes of data – often taken from many sources without permission. This is causing understandable concerns over citizens’ privacy rights, as well as the potential for AI to make biased decisions about loans, job applications, dating sites and even criminal cases.
Things are moving quickly, and many regulatory authorities are only just starting to develop frameworks to maximize AI’s benefits to society while mitigating its risks. These frameworks need to be resilient, transparent and equitable. While the EU has taken the most comprehensive approach with new and anticipated legislation on AI, efforts to understand and agree on how AI should be regulated have been largely uncoordinated. So it’s little surprise that leading industry figures are calling on governments to step up and play a greater role in regulating the use of AI.1 To provide a snapshot of the evolving regulatory landscape, the EY organization has analyzed the regulatory approaches of eight jurisdictions: Canada, China, the European Union (EU), Japan, Korea, Singapore, the United Kingdom (UK) and the United States (US).
Ideally, businesses will develop adaptive strategies tailored to the rapidly changing AI environment; however, this may be difficult as many businesses are at the early stages of AI maturity. This creates a challenging situation for businesses wanting to progress while also needing to maintain regulatory compliance and customer confidence in how they handle data. “There’s tension between being first versus part of the pack. Organizations should implement an agile controls framework that allows innovation but protects the organization and its customers as regulations evolve,” notes Gita Shivarattan, UK Head of Data Protection Law Services, Ernst & Young LLP. In this article, we look at six key steps data privacy officers can take to help organizations stay true to their priorities and obligations around data privacy and ethics as they deploy new technologies like AI.