Helpful or Harmful: The Good and Not-So-Good Aspects of Canada’s Proposed Online Harms Act

How the Online Harms Act could affect LGBTQ+ communities

This content was paid for by Gluckstein Lawyers, separate from Xtra’s editorial staff.

Cheers, jeers, and fears. Reaction to the Canadian government’s long-awaited Online Harms Act (Bill C-63) has been decidedly mixed. But until the Act and the regulatory bodies it creates are implemented, it will be difficult to know whether this legislation will help or hurt LGBTQ+ people on balance.

What a law proposes to do in theory and what it actually does in practice can sometimes differ greatly. And, when it comes to our diverse community, regulations that benefit some of us may prove unreasonably punitive for others.

The Online Harms Act is a mammoth piece of legislation. It not only outlines new rules and responsibilities for social media companies, adult content services, and live streaming services, but it also makes consequential amendments to the Criminal Code.

The Act targets certain types of “harmful” content, including: intimate content communicated without consent; content that sexually victimizes a child or revictimizes a survivor; content that induces a child to harm themselves; content used to bully a child; content that foments hatred; content that incites violence; and content that incites violent extremism or terrorism.

It creates a Digital Safety Commission and Ombudsperson, imposes a variety of duties on regulated online service providers, and enables regulators to assess penalties and fines if those duties are breached.


The Act does not require regulated service providers to conduct proactive searches for harmful content (though it leaves open the possibility of regulations requiring technological means to prevent users from uploading content that sexually victimizes a child, revictimizes a survivor, or displays non-consensual intimate content, including “deepfakes”). Notably, these provisions apply only to visual material such as images or recordings, not to speech. If the provider identifies such content, or if a user flags it, the provider must review and assess it within 24 hours and either make the content inaccessible or explain why the flag was dismissed.

While the legislation also requires regulated service providers to offer tools that enable users to block other users, it does not require providers to address this content in private messaging (defined as communication without a potentially unlimited audience).

The bill also seeks to define hate (in human rights legislation and the Criminal Code) as content expressing “detestation or vilification” – language that emerged from the Supreme Court of Canada’s Whatcott decision. This definition is narrower in scope than the one previously found in the Canadian Human Rights Act. The language in that legislation, which defined hate as anything “likely to expose a person or persons to hatred or contempt,” was removed by Prime Minister Stephen Harper’s government.

In the new Act, content that belittles an identifiable group, attacks its dignity, or argues in favour of taking its rights away would not qualify as hate. Content that portrays an identifiable group as inherently violent, inhuman, and/or worthy of execution or banishment would be deemed hate.

Penalties for the new hate offence, when committed in connection with other crimes, could be as severe as life imprisonment. “Communication of hate speech” complaints brought to the Canadian Human Rights Commission could result in penalties of up to $50,000.


Federal Justice Minister Arif Virani has stated the legislation will not ban “awful but lawful” content. What could this mean for members of the LGBTQ+ community?

While “hate” is defined narrowly, legal observers have noted that other terms could be far less clear.

For instance, the Act defines content used to bully a child as “content, or an aggregate of content, that, given the context in which it is communicated, could cause serious harm to a child’s physical or mental health.” Will positive and affirming messaging to LGBTQ+ youth be flagged by people who attempt to argue that it harms a child’s physical or mental health? Will such flags be dismissed or upheld (either initially or on appeal)? Will the provision against counselling “self-harm” be invoked against information about affirming healthcare for trans people? Will lawful self-expression by LGBTQ+ people engaged in sex work be targeted by user flagging campaigns?

The success of this legislation will largely depend on how its terms are interpreted (and who is doing the interpreting). As a community, we no doubt hope its provisions will reduce hate speech and make regulated online service providers more accountable and transparent to users who encounter harmful content. But we should also remain vigilant about this Act’s potential to be used to suppress legitimate messaging aimed at helping LGBTQ+ people, or content that promotes our self-expression.