Japan’s privacy regulator, the Personal Information Protection Commission, has warned OpenAI, the Microsoft-backed startup behind the ChatGPT chatbot, against collecting sensitive data without individuals’ consent.
In its statement, the Commission said OpenAI should minimise the sensitive data it gathers for machine learning and warned that further action could be taken if the agency identifies additional concerns.
As generative artificial intelligence (AI), which can produce text and images, grows in use, regulators around the world are racing to establish rules to govern it. The technology’s impact is often likened to the advent of the internet.
Although Japan has lagged behind recent technology trends, it is seen as having a stronger incentive to keep pace with advances in AI and robotics in order to maintain productivity as its population declines.
The privacy watchdog also acknowledged the need to balance privacy concerns against the potential benefits of generative AI, including fostering innovation and tackling challenges such as climate change.
According to analytics firm Similarweb, Japan is the third-largest source of traffic to OpenAI’s website.
OpenAI CEO Sam Altman met Prime Minister Fumio Kishida ahead of the Group of Seven (G7) leaders’ summit, where Kishida led a discussion on AI regulation, and signalled the company’s interest in expanding in Japan.
Meanwhile, with chatbots spreading rapidly, regulators are relying on existing rules to cover the gaps until comprehensive legislation can be enacted. Italy’s data protection authority, the Garante, for instance, had ChatGPT taken offline temporarily until the company agreed to add age-verification features and to let European users block their information from being used to train the system.
Although Altman had earlier hinted that the startup might withdraw from Europe if EU rules proved too difficult to comply with, he later said OpenAI had no plans to leave the region.