In its incognito mode, ChatGPT stores new conversations for 30 days, reviews them only as needed to monitor abuse, and then permanently deletes them.
Heaptalk, Jakarta — OpenAI has released a new ChatGPT feature, known as incognito mode, that turns off chat history. The feature lets users choose which conversations can be used to train the company's language models. To access the new mode, users can switch off “Chat History & Training” in their settings.
“Conversations that are started when chat history is disabled will not be used to train and improve our models, and will not appear in the history sidebar. These controls, which are rolling out to all users starting today, can be found in ChatGPT’s settings and can be changed at any time,” stated OpenAI on its official blog.
When chat history is turned off, ChatGPT will store new conversations for 30 days and review them only as needed to monitor abuse, after which they are permanently deleted. With this new control, the company aims to offer users an easier way to manage their data than its existing opt-out process.
The feature arrives amid scrutiny of how chatbot apps, ChatGPT among them, draw on data from hundreds of millions of users to train artificial intelligence (AI) chatbots.
In March 2023, Italy banned ChatGPT over possible privacy breaches, as reported by Reuters. Italian authorities said OpenAI could resume services in the country if it met their demands, one of which was providing consumers with tools to object to the processing of their data. Beyond Italy, France and Spain are also investigating the chatbot app for possible privacy breaches.
Prioritizing user privacy
However, OpenAI’s Chief Technology Officer, Mira Murati, told Reuters that the new feature was not prompted by the Italian ban on ChatGPT, but is part of the company’s effort to put users in control of data collection. She added that OpenAI has complied with European privacy laws and is working to convince local authorities.
“We will be moving more and more in this direction of prioritizing user privacy. It is completely eyes off and the models are super aligned: they do the things that you want to do,” said Murati.
Murati also noted that user data has so far helped OpenAI make its applications more reliable and reduce issues such as political bias, though the company still faces challenges.
In addition, OpenAI announced that a ChatGPT Business subscription, available in the coming months, will not use conversations for AI model training by default. The company stated, “We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users. ChatGPT Business will follow our API’s data usage policies, which means that end users’ data will not be used to train our models by default.”
The “Export” option in settings lets users easily export their chat data and see what information ChatGPT stores. Users will receive an email containing a file with their conversations and all other relevant data.