How Generative AI Can Affect Data Privacy
For half a century, Geoffrey Hinton, a renowned cognitive psychologist and computer scientist, nurtured the technology driving today's chatbots, such as ChatGPT, earning him the nickname "Godfather of AI." Now he is concerned that companies are launching headfirst into aggressive campaigns to build generative AI products without regard for the consequences. One critical consequence is the use of consumer data without proper consent.
What is Generative AI?
Generative AI refers to a type of artificial intelligence designed to generate new content, rather than simply recognize patterns in existing data. It uses machine learning algorithms to learn the patterns and characteristics of a given dataset, then applies that knowledge to create new data that is similar in style, content, or structure.
Generative AI can be used to create a wide range of content, including text, images, video, music, and even code. It has many practical applications, such as generating realistic images for computer games, producing personalized recommendations for users, or even creating entirely new works of art. Impressive as all of this is, however, AI and chatbots do not guarantee consumers' data privacy.
Businesses are required to form contracts designed to ensure adherence to privacy regulations and/or the confidentiality of personally identifiable information (PII). But if a business enters any client, customer, or partner information into a chatbot, the AI algorithm can learn from and use that data in ways neither party has permitted.
What data is collected by chatbots?
While many chatbot companies' terms and conditions state that they don't use the information their users provide, these companies do collect IP addresses, browser types, and browser settings, and use cookies to track a user's browsing activity over time. All of this data could be shared with vendors or other third parties without notice.
Another privacy concern is that, in order to perform correctly, chatbots automatically opt everyone in. Although ChatGPT's terms and conditions state that it provides an opt-out feature, they also note that opting out may limit the kinds of answers users receive.
Do any privacy laws currently regulate generative AI?
Except for EU’s GDPR, data privacy regulators and lawmakers are still evaluating the potential issues and repercussions of this increase adoption of generative AI. Under the GDPR, users are allowed to rescind consent and require entities to “forget” their data. Businesses also must notify third parties – chatbots – to delete that personal information. Also, Chatbots do not have age controls to request or stop users under the age of 13 from using them.
Are businesses liable for generative AI’s use of data?
Many of the issues surrounding data privacy and generative AI are still unsettled. In the absence of legal guidelines, chatbot developers and the companies behind them are declaring their own limited liability.
OpenAI’s Terms of Use state: “You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services.” Regulators have begun to respond: Italy banned ChatGPT, worried that the text generator had no age-verification controls and no “legal basis” for gathering online user data to train the AI tool’s algorithms. North Korea, Iran, China, Cuba, and Syria are among the other countries where ChatGPT is not accessible.
If your business is moving forward with ChatGPT or another generative AI product, vet the service as you would any other vendor by asking questions such as:
- What security measures are in place?
- What will happen with the data that is collected?
- Will data be separate or combined?
- Is the data that has been collected minimized and anonymized? (See the sketch below.)
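Where you control the prompt pipeline, one practical safeguard is to strip obvious PII before anything is sent to a third-party chatbot. Below is a minimal, illustrative Python sketch of that idea; the patterns and the `scrub` helper are assumptions for demonstration, not a Clarip or OpenAI feature, and real PII detection requires far broader coverage.

```python
import re

# Illustrative patterns only; production PII detection needs far more coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace common PII with labeled placeholders before the text leaves your systems."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to john.doe@example.com, phone 555-123-4567."
print(scrub(prompt))
# -> "Draft a reply to [EMAIL], phone [PHONE]."
```

Even a simple pre-processing step like this reduces how much personal data a vendor's model can learn from, which is the heart of data minimization.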
ChatGPT Data Breach Confirmed
On March 28, OpenAI confirmed a ChatGPT data breach, on the same day that a security firm reported seeing the service use a component affected by an actively exploited vulnerability.
The issue was related to ChatGPT's use of redis-py, an open source Redis client library, and was introduced by a change OpenAI made on March 20. The chatbot's developers use Redis to cache user information on their servers so they don't have to query the database for every request. The breach showed how a single bug in an open-source dependency can expose users' data.
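For context, caching with Redis typically follows a cache-aside pattern: check the cache first and fall back to the database on a miss. The sketch below is a generic illustration of that pattern using redis-py, not OpenAI's actual code; the connection details and the `fetch_user_from_db` helper are hypothetical.

```python
import json
import redis  # redis-py, the open source Redis client library mentioned above

# Hypothetical connection details for illustration only.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: str) -> dict:
    # Placeholder for a real database lookup.
    return {"id": user_id, "name": "Example User"}

def get_user_profile(user_id: str) -> dict:
    """Return a user profile, checking the Redis cache before the database."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        # Cache hit: skip the database entirely.
        return json.loads(cached)
    profile = fetch_user_from_db(user_id)
    # Cache miss: store the result with a short expiry for next time.
    cache.set(key, json.dumps(profile), ex=300)
    return profile
```

The privacy risk lies in what gets cached: if a client-library bug hands back a response belonging to the wrong connection, one user's cached information can be served to another user, which is the class of failure reported in this breach.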
Click here to learn more about our Preference and Consent Management Platform! Clarip takes enterprise privacy governance to the next level and helps organizations reduce risks, engage better, and gain customers’ trust!
Contact us at www.clarip.com or call Clarip at 1-888-252-5653 for a demo.
Email Now:
Mike Mango, VP of Sales
mmango@clarip.com
Related Content:
Making the Case for Data Minimization
Automated Data Mapping
Data Discovery