Artificial Intelligence (AI) Employment Law Risks in Hong Kong
Artificial intelligence (AI) is the buzzword of the moment. Last week, Apple launched its highly anticipated iPhone 16 range, branding it as “Built for Apple Intelligence”.
No matter how much we may wish to avoid it, the future is no longer coming; it has arrived. Typically, we rely on our governments to protect us, but for this fast-moving, technical and constantly evolving topic, what is being done?
The Global Regulatory Landscape for A.I.
There has been considerable debate about whether and how to regulate AI. Governments worldwide have taken varied approaches: some are drafting new regulations tailored to AI, while others rely on existing legislation.
In Europe, the world’s first comprehensive act designed to regulate the use of AI across all sectors and industries, the Artificial Intelligence Act, entered into force on 1 August 2024. The regulatory framework is underpinned by a risk-based approach whereby AI systems are classified according to the intensity and scope of the risks they pose. Stricter rules are imposed on AI systems that pose higher levels of risk, and certain practices considered to present unacceptable risks are banned.
Mainland China has also rolled out a series of regulations, including the Interim Administrative Measures for Generative Artificial Intelligence Services (effective from August 2023), aimed at protecting individual privacy and balancing the development and security of AI. Additionally, the Global AI Governance Initiative, launched in October 2023, highlights the equal importance of AI development and security.
In contrast, the United Kingdom has long championed a ‘pro-innovation’ approach to AI regulation and intends to adapt existing regulations to address risks. (It should be noted, however, that the new government has recently indicated it has not ruled out introducing legislation.)
So where does Hong Kong stand in this evolving technological landscape?
Hong Kong has no overarching regulation tailor-made for AI. In the longer term it is expected to follow guidance from Mainland China; in the interim, it is relying on various regulators across industry sectors to produce guidance.
From an employer’s perspective, what areas should we be considering?
Hong Kong: PCPD’s guidance on AI
The Privacy Commissioner for Personal Data (PCPD) has been one of the key regulators exploring and addressing the regulatory issues that arise from AI.
The Personal Data (Privacy) Ordinance (the ‘PDPO’) is Hong Kong’s broad equivalent to the GDPR. It is principle-based, technology-neutral legislation that applies equally to AI, although the issues involved in the governance of AI are broader and more complex than data protection alone. All data users are bound by the provisions of the PDPO, including its Data Protection Principles (‘DPPs’), irrespective of the type and sophistication of the technological means adopted to collect and use personal data.
The collective objective of the DPPs is to ensure that personal data is:
- collected on a fully informed basis and in a fair manner, with due consideration toward minimising the amount of personal data collected;
- processed in a secure manner and kept only for as long as necessary;
- used only for the original collection purpose, or for a purpose directly related to it; and
- accessible to data subjects, who have the right to access and correct their data.
The PCPD has produced two papers on AI governance: the first, published in August 2021, explained the ethical principles governing AI development and use; the second, published in June 2024, provides a model framework for data protection in the context of AI development and use.
The 2024 model framework covers recommended measures in the following four areas:
- Establish AI Strategy and Governance: Formulate an organisation’s AI strategy and governance considerations for procuring AI solutions, establish an AI governance committee and provide employees with training relevant to AI;
- Conduct Risk Assessment and Human Oversight: Conduct comprehensive risk assessments, formulate a risk management system, adopt a “risk-based” management approach, and, depending on the levels of the risks posed by AI, adopt proportionate risk mitigating measures, including deciding on the level of human oversight;
- Customisation of AI Models and Implementation and Management of AI Systems: Prepare and manage data, including personal data, for customisation and/or use of AI systems, test and validate AI models during the process of customising and implementing AI systems, ensure system security and data security, and manage and continuously monitor AI systems; and
- Communication and Engagement with Stakeholders: Communicate and engage regularly and effectively with stakeholders, in particular internal staff, AI suppliers, individual customers and regulators, in order to enhance transparency and build trust.
Conclusion
As AI technology rapidly develops, its application will become increasingly prevalent; there is no doubt AI will be an essential driving force in all sectors of the economy. For this to happen, however, AI will need to be deployed widely across the workplace. Throughout this process, employers and HR should be mindful of the ethical and privacy risks that can arise from the use of AI with personal data. In recruitment, the employment process and overall management, consideration should also be given to the risk of bias if too small a data set is used.
Moving forward, establishing a robust AI system and strategy while ensuring data security will be as crucial as the technological advancement itself. Employers should familiarise themselves with the current requirements and stay alert for updates in this rapidly changing area.
For more information you can contact us directly, through our website or via email: enquiries@blackmountainhr.com