Human Rights Commission recommendations on digital technology may be restricted by trade agreements

June 4, 2021: The Australian Human Rights Commission's final report on human rights and technology, released on April 27, 2021, deals with the use of artificial intelligence (AI) in decision-making and with how people with disabilities experience digital communication technologies.

The report argues that human rights should be embedded in all aspects of law, policy development and decision-making. It notes that AI can breach human rights through ‘algorithmic bias’, which can result in racial or gender discrimination in the criminal justice system, advertising, recruitment, healthcare and policing.

The report recommends a moratorium on the use of facial recognition technology pending the development of effective human rights safeguards, and an independent AI Safety Commissioner to ensure that AI-informed decision-making is lawful, transparent, explainable, used responsibly, and subject to appropriate human oversight, review and intervention. It also recommends the creation of a new Disability Standard for Digital Communication Technology, increased audio description and captioning for broadcasting services, and amendments to government procurement rules to require accessible goods, services and facilities.

These recommendations would require increased regulation of digital technology. As the AFTINET submission argued, this runs contrary to the deregulation trend in international trade agreements like the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and the Singapore Australia Digital Economy Agreement. These agreements maximise cross-border data flows and restrict government access to, and regulation of, source code and algorithms.

Unfortunately, although AFTINET’s submission is cited, the report does not address the ways in which these international trade obligations may restrict the regulatory changes it recommends.