EU Investigates Google’s AI Model Over Data Privacy Concerns

EU officials are looking into whether Google’s AI model PaLM2 breaks the GDPR’s strict rules on data privacy.

Regulators in the European Union have opened a probe into one of Google’s AI models, raising questions about its compliance with the region’s rigorous data protection rules. The move is part of the EU’s sustained campaign to ensure that artificial intelligence technologies comply with the General Data Protection Regulation (GDPR), one of the world’s most stringent privacy laws.

The investigation focuses on Google’s Pathways Language Model 2 (PaLM2), the large language model that underpins many of the company’s AI services, including content generation and email summarisation. The review is being conducted by Ireland’s Data Protection Commission (DPC), which oversees Google’s operations in the EU.

What’s the investigation about?

The DPC inquiry aims to determine whether Google properly assessed the risks associated with PaLM2’s data processing. In other words, EU regulators want to know if the AI model poses a ‘‘high risk to the rights and freedoms of individuals.’’ As AI systems such as PaLM2 grow by processing vast amounts of data, concerns are mounting over how personal data is handled and whether it remains protected under the GDPR.

Large language models such as PaLM2 are trained on massive datasets that can include personal information, which creates potential risks to privacy and data protection. Regulators are therefore asking whether giants such as Google are taking adequate measures to mitigate those risks.

Google has not yet responded to questions about the ongoing investigation.

The inquiry reflects a broader trend in the EU: European officials are stepping up scrutiny of AI systems for potential data privacy violations across the bloc’s 27 nations.

For example, in January of this year, Elon Musk’s social media platform X, formerly known as Twitter, was taken to court by Ireland’s Data Protection Commission. The watchdog sought a High Court order restraining X from using the personal data in users’ public posts to train Grok, its AI chatbot, and succeeded in halting the practice. The case illustrates how seriously the EU treats any potential privacy breach.

Similarly, Irish regulators ordered Meta Platforms, the parent company of Facebook and Instagram, to halt plans to use data from European users to train its large language models. After extensive discussions with the DPC, the company agreed to pause the plan.

European regulators have taken aim at AI before. Last year, Italy’s data privacy regulator took the drastic step of temporarily banning ChatGPT, the chatbot developed by OpenAI, over fears of data protection violations. The ban was lifted only after OpenAI addressed the regulator’s demands and made several improvements.

These actions demonstrate that European regulators are willing to step up their efforts to ensure AI models adhere to the region’s strict privacy standards. Questions about how data is collected, processed, and stored are increasingly central to debates over the regulation of AI technology.

Because the EU is so stringent about data protection, firms developing AI models must ensure they comply when handling user data. Noncompliance with the GDPR can bring heavy fines and legal action, as X and Meta have experienced.

The PaLM2 investigation could force Google into major decisions. If the Irish Data Protection Commission finds that PaLM2’s data processing violates the GDPR, the firm risks being penalised and compelled to change how it operates in the EU.

The Future of AI Regulation

As AI systems are applied to ever more aspects of daily life, concerns about data privacy will only grow. Europe’s regulators are gearing up to hold technology companies to account for how they process people’s data, and the review of Google’s PaLM2 is the latest example of this trend.

In the long run, such steps may shape the development of AI in Europe by making privacy and safety top priorities for businesses building AI models. Companies that fail to comply risk legal action and could be shut out of one of the world’s largest and most heavily regulated markets.

For users, these investigations show that regulators are committed to keeping their data safe even as the technology advances. As AI progresses, Europeans want to make sure that individual privacy remains protected.
