Public consultation on the Privacy Act – Submission – Ignacio Cofone, Ana Qarri and Jeremy Wiener

This submission is available in English only.

Ignacio CofoneFootnote *, Ana QarriFootnote ** and Jeremy WienerFootnote **

February 14, 2021

1. Introduction

Artificial intelligence (AI) is more embedded in Canadians' daily lives than ever before. Yet Canada's federal privacy laws are unequipped to deal with the risks it poses. The Canadian government recognizes this. Last year, it tabled an overhaul of Canada's federal private-sector privacy statute (PIPEDA).Footnote 1 If passed, Bill C-11 would enact the Consumer Privacy Protection Act. Last month, the government launched a public consultation to help reform Canada's federal public-sector Privacy Act, to which this note responds.Footnote 2

Since the Privacy Act was passed, technology has changed dramatically. So has our understanding of privacy’s connection with other human rights, such as equality and non-discrimination. AI has led to crucial developments while also producing unique risks, particularly when used in decision-making.Footnote 3

This note seeks to contribute to this public consultation with the goal of modernizing Canada's Privacy Act. Central to this goal is strengthening public and private enforcement mechanisms and enacting new rights and obligations that protect Canadians' privacy and human rights. These proposals relate to the Policy Proposals for PIPEDA Reform to Address Artificial Intelligence Report, published by the Office of the Privacy Commissioner of Canada (OPC), which details their applicability to PIPEDA.Footnote 4

2. The Privacy Act needs a rights-based approach

Privacy reform must start by explicitly adopting a rights-based approach that grounds the Privacy Act in Canadians' human rights to privacy and equality, and in their other Charter rights. This is a natural step given that, as the Supreme Court has recognized, Canada's Privacy Act has quasi-constitutional status inextricably tied to all other Charter rights.Footnote 5 In the AI context, where the risks to fundamental rights (such as the right to be free from discrimination) are heightened,Footnote 6 adopting a rights-based approach is important and follows the precedent set by the European Union as well as countries in South America and Asia.Footnote 7 In practice, this means that the principles of necessity, proportionality, and minimal intrusiveness, which are core to Canadian rights-based balancing tests,Footnote 8 must run through any amended version of the Privacy Act.

A rights-based approach is only as effective as the mechanisms enforcing it. The OPC should thus be granted the power to issue binding orders and impose financial penalties. Currently, the OPC only has the authority to investigate alleged violations of the Privacy Act, to recommend steps toward compliance, to publicize its findings of violations, and to initiate court proceedings if the violating federal body does not adjust its privacy practices.Footnote 9 As a result, enforcement is slow and largely depends on agencies' voluntary compliance. Providing the OPC with the power to issue binding orders and impose financial penalties on public bodies, as Bill C-11 does for private entities, would redress this shortcoming.

However, under the current statute, even when the OPC successfully achieves compliance, those whose rights have been violated are left without compensation. When someone's rights are breached, all they can do is file a complaint with the OPC and wait for the results of its investigation.Footnote 10 This is insufficient: rights-based privacy legislation must give Canadians a mechanism to protect their rights without being fully dependent on public enforcement.

Granting Canadians the right to sue agencies they believe have violated the Privacy Act would ensure that Canadians are not left without redress. It also goes hand in hand with giving the OPC enforcement flexibility: if individuals can bring claims for violations of their rights, the OPC can be given greater freedom to decide which investigations to pursue without leaving statutory violations unaddressed. Such rights would also make enforcement more efficient by multiplying enforcement resources, thereby lightening the burden placed on government budgets.Footnote 11

Bill C-11 takes a step in the right direction but does not go far enough in granting private rights of action. Europeans and Californians, for example, have a right to sue for violations of their respective privacy statutes, subject to limitations.Footnote 12 Under Bill C-11, however, Canadians would only have a right to sue following an investigation by the OPC. This limitation undermines the purpose of granting private rights of action, which is to ensure that Canadians can obtain redress and to create disincentives even for minor violations of the Privacy Act, on which the OPC might not focus. The Privacy Act reform should include private rights of action to ensure demonstrable accountability of public bodies and effective enforcement of the Act.

3. Two rights are key for Canadians: purpose limitation and data minimization

The Privacy Act should – as Bill C-11 does – mandate purpose specification and data minimization as essential elements of privacy-protective design. Purpose specification limits the use of personal information to the purposes identified when the individual consents to the collection of their information. Data minimization means that organizations collect and use only the personal information they need (and no more) to fulfill that identified purpose.

These two principles promote transparency and accountability, and reduce the risk of overarching surveillance and privacy harm.Footnote 13 Consider, for example, the extent of information disclosure that takes place in the course of online shopping or subscribing to an online service. Users are often asked to provide personal contact information that is unnecessary for carrying out the transaction.
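
To make these two principles concrete, the following minimal sketch (in Python, with purpose and field names that are purely illustrative assumptions rather than anything drawn from the Privacy Act or Bill C-11) shows how a collection interface might enforce purpose specification and data minimization:

# Illustrative only: the purpose and field names below are hypothetical assumptions.
IDENTIFIED_PURPOSE = "ship_online_order"          # purpose identified at consent
REQUIRED_FIELDS = {"name", "shipping_address"}    # the minimum needed for that purpose


def collect(submitted_fields: dict, purpose: str) -> dict:
    """Accept data only for the identified purpose, and only the fields it requires."""
    if purpose != IDENTIFIED_PURPOSE:
        raise ValueError("Use is not covered by the purpose identified at collection")
    # Data minimization: anything beyond the required fields is simply not retained.
    return {k: v for k, v in submitted_fields.items() if k in REQUIRED_FIELDS}


# A user over-shares (date of birth, phone number); only the necessary fields are kept.
kept = collect(
    {"name": "Example User", "shipping_address": "123 Example St",
     "date_of_birth": "1990-01-01", "phone": "555-0100"},
    purpose="ship_online_order",
)
print(kept)  # {'name': 'Example User', 'shipping_address': '123 Example St'}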

Data minimization and purpose specification, however, should not be required for de-identified information. Data is often de-identified – stripped of identifying personal information such as names or addresses – for research or statistical purposes, or to develop AI. Federal public bodies should be encouraged to pursue such important purposes. They can lead to, for example, more efficient traffic management or public transport development. The difficulty is that organizations sometimes find useful purposes for de-identified data only after it has been collected and processed. To avoid unduly limiting AI's benefits, Privacy Act reform should follow Bill C-11's lead by exempting de-identified data from data minimization and purpose specification requirements.
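
As a rough illustration of what de-identification can involve in practice – the list of identifiers below is a simplified assumption, not a legal standard – a public body might strip direct identifiers from records before using them for research or to develop AI:

# Simplified sketch: real de-identification also addresses quasi-identifiers and context.
DIRECT_IDENTIFIERS = {"name", "home_address", "email", "sin"}


def de_identify(record: dict) -> dict:
    """Remove direct identifiers so the record no longer names a specific person."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}


transit_record = {
    "name": "Example User",
    "home_address": "123 Example St",
    "postal_area": "K1A",          # a quasi-identifier: useful, but some re-identification risk remains
    "weekly_transit_trips": 9,
}
print(de_identify(transit_record))  # {'postal_area': 'K1A', 'weekly_transit_trips': 9}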

The problem that remains is that aggregated de-identified data still reveals information and trends about groups.Footnote 14 This means that it enables decisions that can affect de-identified individuals based on group attributes (such as gender, sexual orientation, and political preference).Footnote 15 Such breaches of group privacy can amount not only to discrimination, but also to infringements of other Charter rights such as freedom of opinion and expression, and freedom of association.Footnote 16 Think, for example, of the risks to democracy revealed by the Cambridge Analytica scandal.Footnote 17

De-identified data is still personal information because it relates to identifiable (even if not identified) individuals.Footnote 18 While de-identification does not eliminate all risks, de-identified data presents lower risks to human rights than identified data because it is one step removed from identifiable individuals. An exemption from data minimization and purpose specification requirements may thus encourage agencies to de-identify data, lowering (albeit not eliminating) risks to individuals' human rights. To ensure that data stays de-identified, an offence prohibiting re-identification, similar to the one proposed in Bill C-11, should be introduced.

These two rights relate more broadly to the importance of design for privacy. In the words of Ann Cavoukian, the former Information and Privacy Commissioner of Ontario who coined the term privacy by design, "embedding privacy into the design and architecture" of technologies is crucial.Footnote 19 Privacy by design entails proactively mitigating risks, such as the re-identification of de-identified data, before they materialize as privacy violations.Footnote 20 The European Union, for example, already requires organizations to design for privacy and human rights in all phases of their data collection and processing by implementing "appropriate technical and organizational measures." So should the Privacy Act. By establishing a way to proactively mitigate risks, privacy by design contributes to the goal of accountability in privacy.

4. Automated decision-making requires special transparency provisions

Automated decision-making heightens the risks to human rights posed by data-driven technology. Protected categories, such as gender or race, are often statistically associated with seemingly inoffensive characteristics, such as height or postal code.Footnote 21 By relying on inoffensive characteristics as proxies for protected traits, algorithmic decision-making can lead to discriminatory results that adversely affect members of protected groups.Footnote 22 There are numerous examples of this outcome from around the world. Last year, for instance, the U.K. stopped using algorithms that made decisions in welfare, immigration, and asylum cases after allegations that the systems were perpetuating racist patterns of decision-making.Footnote 23
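
As a purely hypothetical illustration of this proxy problem, the sketch below shows how outcomes can diverge along group lines even when a model never sees the protected trait itself; the records, the postal areas, and the disparity threshold are all invented for illustration:

# Hypothetical audit sketch: the records and the 0.8 threshold are illustrative assumptions.
decisions = [
    # (postal_area, approved) -- the system relied on postal_area, never on the protected trait itself
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", True), ("B", False), ("B", False),
]


def approval_rate(area: str) -> float:
    """Share of approvals among applicants from a given postal area."""
    outcomes = [approved for a, approved in decisions if a == area]
    return sum(outcomes) / len(outcomes)


rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(rate_a, rate_b)                              # 0.75 vs 0.25
print("flag for review:", rate_b / rate_a < 0.8)   # a simple disparity check flags this gap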

To reduce the risk of such algorithmic discrimination, the Privacy Act should obligate federal public bodies to log and trace their data collection and processing systems. Traceability is an essential part of a transparent data processing architecture.Footnote 24 It would promote accountability and increase the public's trust in the government's privacy protection systems. In the case of the scrapped U.K. system for welfare decisions, for example, data traceability would ensure that individuals can gain access to a log of the data used to make decisions about them. Data traceability would also show where the data was collected from: directly from individuals, from another government body, or from an external party.
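
One way to picture such traceability is sketched below; the structure and field names are assumptions made for illustration and are not requirements of the Privacy Act or any directive:

# Minimal provenance-log sketch: field names are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceEntry:
    data_element: str        # which piece of personal information was used
    source: str              # e.g. "individual", "other federal body", "external party"
    decision_id: str         # the automated decision that relied on this data
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


trace_log: list[ProvenanceEntry] = []


def record_use(data_element: str, source: str, decision_id: str) -> None:
    """Log each use of personal information so individuals can later see what fed a decision."""
    trace_log.append(ProvenanceEntry(data_element, source, decision_id))


record_use("reported_income", "individual", "benefit-decision-001")
record_use("residency_status", "other federal body", "benefit-decision-001")

# An individual's access request could then return every entry tied to their decision.
print([entry.data_element for entry in trace_log if entry.decision_id == "benefit-decision-001"])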

Granting Canadians a right to an explanation for automated decisions would provide key support to traceability. Quebec's Bill 64, for example, proposes to give people the right to know the personal information and the reasons or factors that contributed to an automated decision, as well as the right to request the information's source.Footnote 25 The federal Treasury Board Secretariat's Directive on Automated Decision-Making (DADM), as well as the tabled Bill C-11, also contains a right to explanation.Footnote 26 The Directive provides that the level of detail required in an explanation should increase with the decision's impact on the individual. Welfare decisions, for example, affect individuals' livelihoods and are therefore likely to be considered high impact, triggering a correspondingly more detailed explanation requirement. The Privacy Act should build on – and legislate – this existing framework.
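
The sketch below illustrates the general idea of scaling explanation detail to a decision's impact; the tiers and the required elements are illustrative assumptions and do not reproduce the DADM's actual impact levels:

# Illustrative tiers only; the DADM's actual impact levels and requirements differ in detail.
EXPLANATION_BY_IMPACT = {
    "low": ["outcome of the decision"],
    "moderate": ["outcome of the decision", "main factors considered"],
    "high": [
        "outcome of the decision",
        "main factors considered",
        "personal information relied on and its source",
        "how to contest or seek review of the decision",
    ],
}


def required_explanation(impact_level: str) -> list[str]:
    """Return the explanation elements owed for a given impact level."""
    return EXPLANATION_BY_IMPACT[impact_level]


# A welfare decision affecting someone's livelihood would sit at the high-impact end.
print(required_explanation("high"))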

Individuals cannot exercise their right to explanation unless they know when it is applicable – in other words, when data about them has been used to make a decision affecting them.Footnote 27 Public bodies should thus inform Canadians when they can request an explanation, lightening the load associated with requesting one.

Together, these provisions would mitigate the risks associated with introducing automated decision-making in government processes. They would promote the transparency that public trust in the government’s data collecting and processing requires as the government moves to use AI.

5. The right to contest is an essential procedural safeguard

Individuals should have a right to contest decisions made through automated decision-making using their personal information. When the right to contest is asserted, a human reviewer is required to take a second look at the decision to check for individual or systemic issues in the algorithmic decision-making process.

The right to contest supplements the right to explanation in ensuring transparency in automated decision-making processes.Footnote 28 The ability of individuals to contest decisions mitigates the risk of algorithmic discrimination, a key risk to human rights in automated decisions, particularly when State functions are involved.Footnote 29 A right to contest may also decrease litigation by allowing individuals to contest automated decisions internally, within the decision-making body and at an earlier stage, providing an opportunity to address potential problems without a court process.

While a right to contest is not directly contemplated in the Directive on Automated Decision-Making, human involvement in the decision-making process is required based on the level of risk.Footnote 30 The modernized Privacy Act should legislate this risk-based approach to human involvement in the process of automated decision-making. By requiring a person to evaluate the decision after the outcome has been communicated to an individual, the right to contest would supplement the existing framework for human involvement.

6. Conclusion

A rights-based approach that provides for data minimization and purpose specification, data traceability with the related rights to explanation and to contest, and effective public and private enforcement mechanisms would be a significant improvement on the Privacy Act we currently have.

According to a recent OPC survey, 90% of Canadians are concerned about their privacy.Footnote 31 And they should be. Reforming the Privacy Act in this way is an essential step to alleviating warranted concerns and turning Canada into the protector of human rights that it strives to be.