
Privacy safeguards vital in use of AI

By Ada Chung | China Daily Global | Updated: 2026-04-01 08:55


Governments and businesses worldwide are seeking to harness artificial intelligence for innovation and economic growth. Yet as AI technologies become more accessible and sophisticated, a parallel and troubling trend is emerging: the misuse of AI-driven "deepfakes".

A deepfake — a seemingly realistic but falsified image, audio or video generated by AI — can inflict profound and lasting harm on individuals, especially children and young people, when exploited for malicious purposes.

A recent global incident brought these issues to the forefront: An AI chatbot allowed users to generate nonconsensual sexualized images of real people, including women and children. Within 11 days, an estimated 3 million such images had reportedly been generated. The episode illustrates how easily personal data can be misused and how quickly the resulting harm can spread, especially among minors, who are least equipped to protect themselves.

The incident triggered swift regulatory action by privacy and data protection authorities worldwide, including temporary bans in some jurisdictions. Given the borderless nature of AI-related privacy risks, data protection authorities have stepped up coordinated efforts to advocate for privacy-protective AI.

In a landmark move, during the 47th Global Privacy Assembly Conference in September, 20 authorities from different jurisdictions signed the Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-Protecting AI. Among other things, the statement advocated incorporating data-protection principles into AI system development and establishing robust data governance.

In February, the Hong Kong Special Administrative Region's Office of the Privacy Commissioner for Personal Data, or PCPD, together with 60 privacy/data protection authorities from around the world (including Canada, France, Germany, Italy, South Korea, New Zealand, Singapore and the United Kingdom), issued the Joint Statement on AI-Generated Imagery and the Protection of Privacy.

Initiated and coordinated through the Global Privacy Assembly's International Enforcement Cooperation Working Group, which the PCPD co-chairs, the statement sets out fundamental international principles to guide organizations in developing and using AI content generation systems lawfully and safely.

The joint statement reminds all organizations that develop and use AI content generation systems to comply with applicable data protection and privacy laws. It also recommends a series of measures to safeguard the fundamental rights of individuals, especially children and vulnerable groups.

Authorities both on the Chinese mainland and in the Hong Kong SAR recognize that the development and use of AI must be accompanied by appropriate guardrails.

Since the promulgation of the 2023 Global AI Governance Initiative, the equal importance of the development and safety of AI has been repeatedly stressed, and this was also reaffirmed in the Hong Kong chief executive's 2025 Policy Address.

This balanced vision is further reinforced in China's recently adopted 15th Five-Year Plan (2026-30), which calls for advancing the "AI Plus" initiative across the board while strengthening the governance of AI. As the plan specifies, it is essential to consolidate security during development and pursue development in a secure environment, including strengthening data governance frameworks and rules, enhancing AI governance, and fostering a development environment that is beneficial, secure and fair.

It is against this backdrop that the recent emergence of agentic AI — autonomous systems that use large language models without continuous human oversight — warrants close attention, as it has already intensified concerns over data breaches and privacy and cybersecurity risks.

Unlike conventional AI chatbots that primarily generate content in response to prompts, these agentic systems can connect with external tools and services, enabling them to take multistep actions on behalf of users.

The privacy risks posed by agentic AI thus extend far beyond the outputs of conventional AI chatbots. These systems can access, manipulate and expose personal data with unprecedented speed and reach. If such capabilities are misused to create and distribute abusive deepfakes with minimal human involvement, the resulting harm could spread more quickly and at greater scale.

It is crucial, therefore, for all stakeholders, including AI developers, service providers and users, to be aware of the threats to fundamental human rights posed by the new technologies.

When using AI content-generation systems, for instance, Hong Kong's Office of the Privacy Commissioner for Personal Data recommends that users label or watermark the output as AI-generated to avoid confusion or misunderstanding.

In particular, to guard against data leakage and cyberattacks, users should download only the latest official version of any agentic AI tool, grant it the minimum access rights it needs, adopt adequate system and data security measures, and continuously assess the risks involved. Users should be alert, for example, to any high-risk prompts or automatic processing that might wipe out all user data (including emails).

In the race to tap AI's huge potential, we should remember that the development and deployment of AI systems must be guided from the outset by the principles of personal data protection, privacy-by-design and privacy-by-default, among others. Doing so prevents infringement on people's data privacy and minimizes the privacy risks involved.

Recent events have demonstrated the vulnerability of users, especially minors, in the rapidly evolving age of AI, as well as the tangible and far-reaching harms of its abusive or malicious use. Organizations developing and deploying AI must therefore not sacrifice privacy and security for speed-to-market or novel functionalities.

All stakeholders in the ecosystem, including AI developers, service providers and users, have unavoidable responsibilities to co-create a safe and trustworthy digital environment for our future generations.

The author is privacy commissioner for personal data of the Hong Kong Special Administrative Region. The views do not necessarily reflect those of China Daily.
