The UK government is investing heavily in AI and is now trying to create an environment in which AI can flourish. The much-awaited White Paper does, however, identify some risks to human rights and privacy: for example, the use of AI to generate deepfake pornographic video content, bias in assessing the credit-worthiness of loan applicants, and the intrusive collection of data through connected devices in the home. On a larger scale, the privacy law community needs to worry about disinformation generated and propagated by AI, and its impact on trust in democratic institutions and processes.

The consultation on the AI White Paper is now open until 21 June. This is the time to shape the future direction of AI in the UK – although regulation elsewhere will have an impact too. Cross-jurisdictional requirements mean that companies need to prepare AI compliance programmes now; organisations can no longer claim they were unaware of the issues involved. The ICO has said for some time that AI is no longer a new concept, and that it will therefore enforce data protection law in the AI field as vigorously as in any other.
The White Paper will lack any statutory footing, and the government is not seeking to appoint a new regulator. While AI is a strategic priority for the ICO, as is empowering responsible innovation, the regulator says it would welcome clarification of the respective roles of government and regulators in issuing guidance and advice under the White Paper's proposals. The ICO encourages the government to engage with the Digital Regulation Cooperation Forum (DRCF), which brings together several regulators active in this field and provides a venue for formulating joint regulatory responses.
In April, the Data Protection and Digital Information (No. 2) Bill received its second reading in the House of Commons. The Online Safety Bill is at the Committee stage in the House of Lords, and the Digital Markets, Competition and Consumers Bill was recently introduced into Parliament.

Italy’s Garante imposes and then withdraws a ban on ChatGPT

On 28 April, the Garante, Italy's Data Protection Authority, announced that ChatGPT could resume operating in Italy because OpenAI had cooperated in responding to the concerns expressed in the Garante's Order of 11 April.
On 31 March, the Garante had imposed a temporary ban on OpenAI's ChatGPT. The regulator took action after receiving a report on 20 March that a data breach had affected ChatGPT users' conversations and subscribers' payment information.
When announcing an immediate temporary limitation on the processing of Italian users' data by OpenAI, the US-based company that develops and manages the platform, the Garante based its decision on:
  • the lack of information provided to users and data subjects whose data is collected by OpenAI; and
  • an insufficient legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.
At the same time, the Garante launched an enquiry into the ChatGPT service, and its discussions with OpenAI are continuing.
The regulator lifted its initial ban after OpenAI responded to many of its objections and applied the resulting changes to its service across Europe.

Hong Kong: PCPD issues statement on entry into effect of Chinese standard contract measures for data transfers

On May 31, 2023, the Privacy Commissioner for Personal Data (PCPD) issued a statement on the entry into effect, on June 1, 2023, of the Chinese Standard Contract Measures for Exporting Personal Information. The PCPD reminded local enterprises and organizations doing business in Mainland China, especially those transferring personal information out of Mainland China on a smaller scale, such as small and medium-sized enterprises, that if the conditions prescribed in the Measures are met, they may need to enter into a standard contract and file it with the local Cyberspace Administration before transferring the personal information.