The Italian government’s privacy watchdog has temporarily blocked the ChatGPT artificial intelligence (AI) software developed by OpenAI, citing a recent data breach. The watchdog has restricted the company’s ability to hold Italian users’ data until it demonstrates adequate respect for privacy.
OpenAI, one of the world’s best-known AI research companies, has not yet commented on the situation. Its ChatGPT chatbot has become popular worldwide, particularly in education, where students use it to get instant answers to questions.
Limited Effect on Other Companies
While some schools and universities have already blocked the ChatGPT website, it is unclear how Italy would block it at a nationwide level. However, companies that have licenses with OpenAI to use the same technology are unlikely to be affected.
OpenAI must report to the Italian watchdog within 20 days on the measures it has taken to protect user privacy. Otherwise, it could face a fine of up to €20 million or 4% of its annual global turnover, whichever is higher.
The Watchdog’s Concerns
The Italian watchdog criticized the lack of notice to users and the absence of a legal basis for collecting and retaining personal data. It also raised concerns about the absence of an age-verification filter, which could expose minors to inappropriate content.
The watchdog’s decision follows a data breach in which an unauthorized individual accessed a database and obtained information on some ChatGPT users. The incident highlights the risk of data breaches and how AI-based systems could pose a threat to user privacy.
Call for Pause in Development of Powerful AI Models
A group of tech industry leaders and scientists has urged a pause, lasting until autumn, in the development of more advanced AI models so that their potential risks can be evaluated. They warned that, if left unchecked, such models could become uncontrollable and pose a risk to global security.
OpenAI’s CEO, Sam Altman, plans to embark on a six-continent trip in May to discuss the technology with users and developers. The itinerary includes a stop in Brussels, where EU policymakers are negotiating new rules to restrict high-risk uses of AI.
The Italian watchdog’s decision to temporarily block ChatGPT highlights the importance of user privacy and the risks that AI systems can pose. While the decision directly affects only Italian users, it underscores the need for developers to prioritize privacy and security from the outset.
As AI grows more capable, policymakers and stakeholders will need to ensure it is developed and used responsibly. Altman’s upcoming trip to discuss the technology with users and lawmakers is a step in the right direction toward addressing those concerns.