The Toronto Declaration: Protecting the Rights to Equality and Non-Discrimination in Machine Learning Systems is a declaration advocating responsible practices for machine learning practitioners and governing bodies. It is a joint statement issued by groups including Amnesty International and Access Now, with other notable signatories including Human Rights Watch and The Wikimedia Foundation.[1] It was published at RightsCon on May 16, 2018.[2][3]
The Declaration focuses on concerns of algorithmic bias and the potential for discrimination that arises from the use of machine learning and artificial intelligence in applications that may affect people's lives, "from policing, to welfare systems, to healthcare provision, to platforms for online discourse."[4] A secondary concern of the document is the potential for violations of information privacy.
The goal of the Declaration is to outline "tangible and actionable standards for states and the private sector."[5] The Declaration calls for tangible solutions, such as reparations for the victims of algorithmic discrimination.[6]
Contents
The Toronto Declaration consists of 59 articles, broken into six sections, concerning international human rights law, duties of states, responsibilities of private sector actors, and the right to an effective remedy.
Preamble
The document begins by asking the question, "In a world of machine learning systems, who will bear accountability for harming human rights?"[4] It argues that all practitioners, whether in the public or private sector, should be aware of the risks to human rights and approach their work with human rights in mind – conscious of the existing international laws, standards, and principles. The document defines human rights to include "the right to privacy and data protection, the right to freedom of expression and association, to participation in cultural life, equality before the law, and access to effective remedy";[4] but it states that the Declaration is most concerned with equality and non-discrimination.
Using the framework of international human rights law
The framework of international human rights law enumerates various rights, provides mechanisms to hold violators to account, and ensures remedy for the violated. The document cites the United Nations Human Rights Committee's definition of discrimination as "any distinction, exclusion, restriction or preference which is based on any ground [including but not limited to] race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, and which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise by all persons, on an equal footing, of all rights and freedoms."[7]
Governments should proactively create binding measures, and private entities should create internal policies, to protect against discrimination. Measures may include protections for sensitive data, especially for vulnerable populations. Systems should be designed in collaboration with a diverse community in order to prevent discrimination in design.
Duties of states: human rights obligations
Governments today are deploying machine learning systems, often in collaboration with private entities. Even when development is contracted to such third parties, governments retain their obligation to protect human rights. Before implementation, and on an ongoing basis thereafter, they should identify risks and conduct regular audits, then take all necessary measures to mitigate these risks. They should be transparent about how machine learning is implemented and used, avoiding black box systems whose logic cannot be easily explained. Systems should be subject to strict oversight from diverse internal committees and independent judicial authorities.
Governments must also protect citizens from discrimination by private entities. In addition to oversight, they should pass binding laws against discrimination, as well as for data protection and privacy, and they should provide effective means of remedy for affected individuals. It is important for national and regional governments to expand on and contextualize international law.
Responsibilities of private sector actors: human rights due diligence
Private entities are responsible for conducting "human rights due diligence." Just like governments, private entities should identify risks before development by considering common risks and consulting stakeholders, "including affected groups, organizations that work on human rights, equality and discrimination, as well as independent human rights and machine learning experts."[4] They should design systems that mitigate risks, subject systems to regular audits, and forgo projects whose risks are too high. They should be transparent about assumed risks, including details of the technical implementation where necessary, and should provide a mechanism for affected individuals to dispute any decisions that affect them.
The right to an effective remedy
edit"The right to justice is a vital element of international human rights law."[4] Private entities should create processes for affected individuals to seek remedy, and they should designate roles for who will oversee these processes. Governments must be especially cautious when deploying machine learning systems in the justice sector. Transparency, accountability, and remedy can help.
References
- ^ Brandom, Russell (2018-05-16). "New Toronto Declaration calls on algorithms to respect human rights". The Verge. Retrieved 2021-09-03.
- ^ "The Toronto Declaration • Toronto Declaration". Toronto Declaration. Retrieved 2021-09-08.
- ^ "BBC World Service - Digital Planet, The Toronto Declaration". BBC. Retrieved 2021-09-08.
- ^ a b c d e "The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems". The Toronto Declaration. Amnesty International and Access Now. May 16, 2018. Archived from the original on August 12, 2021. Retrieved September 3, 2021.
- ^ Burt, Chris (2018-05-17). "Toronto Declaration calls for application of human rights frameworks to machine learning". Biometric Update. Retrieved 2021-09-03.
- ^ "The Toronto Declaration on Machine Learning calls for AI that protects human rights". Futurism. 16 May 2018. Retrieved 2021-09-03.
- ^ General Comment No. 18: Non-discrimination. Geneva: United Nations Human Rights Committee. 1989.