The Artificial Intelligence and Data Act… coming soon to AI near you
In June, 2022, the Government introduced Bill C-27, an Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act. A major component of this proposed legislation is a brand new law on artificial intelligence. This will be, if passed, the first Canadian law to regulate AI systems.
The stated aim of the Artificial Intelligence and Data Act (AIDA) is to regulate international and interprovincial trade and commerce in artificial intelligence systems. The Act requires the adoption of measures to mitigate “risks of harm” and “biased output” related to something called “high-impact systems”.
OK, so how will this work? First, since it’s federal legislation, the Act applies to “regulated activity”, which refers to specific activities carried out in the course of international or interprovincial trade and commerce. That makes sense, since that is what falls within federal jurisdiction. Think banks and airlines, for sure, but the scope will be wider than that, since any use of a system by private sector organizations to gather and process data across provincial boundaries will be caught. The regulated activities are defined as:
- (a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;
- (b) designing, developing or making available for use an artificial intelligence system or managing its operations.
That is a purposely broad definition, designed to catch the companies that use these systems, the providers of such systems, and the data processors who deploy AI systems in the course of data processing, where such systems are used in the course of international or interprovincial trade and commerce.
The term “artificial intelligence system” is also broadly defined and captures any “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”
For anyone carrying out a “regulated activity” in general, there are record keeping obligations, and regulations regarding the handling of anonymized data that is used in the course of such activities.
For those who are responsible for so-called “high-impact systems”, there are special requirements. First, a provider or user of such a system is responsible for determining whether their system qualifies as a “high-impact system” under AIDA (a term to be defined in the regulations).
Those responsible for such “high-impact systems” must, in accordance with the regulations, establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system, and they must also monitor compliance with these mitigation measures.
There’s more: anyone who makes a “high-impact system” available, or who manages the operation of such a system, must also publish a plain-language description of the system that includes an explanation of:
- (a) how the system is intended to be used;
- (b) the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make; and
- (c) the mitigation measures.
- (d) Oh, and any other information that may be prescribed by regulation in the future.
The AIDA sets up an analysis of “harm” which is defined as:
- physical or psychological harm to an individual;
- damage to an individual’s property; or
- economic loss to an individual.
If there is a risk of material harm, then those using these “high-impact systems” must notify the Minister. From here, the Minister has order-making powers to:
- Order the production of records
- Conduct audits
- Compel any organization responsible for a high-impact system to cease using it, if there are reasonable grounds to believe the use of the system gives rise to a “serious risk of imminent harm”.
The Act has other enforcement tools available, including penalties of up to 3% of the offender’s global revenue or $10 million, and higher penalties of up to $25 million for more serious offences.
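To see how the revenue-based figure interacts with the fixed dollar cap, here is a minimal arithmetic sketch in Python. It assumes the applicable ceiling is the greater of the two amounts (the Act and its regulations, not this sketch, govern the actual calculation), and the revenue figure is purely illustrative.

```python
def aida_penalty_ceiling(gross_global_revenue: float) -> float:
    """Illustrative only: the general penalty ceiling described above,
    assuming it is the greater of $10 million and 3% of global revenue.
    More serious offences carry higher ceilings (up to $25 million)."""
    return max(10_000_000, 0.03 * gross_global_revenue)

# e.g. a hypothetical offender with $2 billion in global revenue:
print(aida_penalty_ceiling(2_000_000_000))  # 60000000.0 -- the 3% figure exceeds $10 million
```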
If you’re keeping track, the Act requires an assessment of:
- plain old “harm” (Section 5),
- “serious harm to individuals or harm to their interests” (Section 4),
- “material harm” (Section 12),
- “risks of harm” (Section 8),
- “serious risk of imminent harm” (Sections 17 and 28), and
- “serious physical or psychological harm” (Section 39).
All of which is to be contrasted with the well-trodden legal analysis around the term “real risk of significant harm” which comes from privacy law.
I can assure you that lawyers will be arguing for years over the nuances of these various terms: what is the difference between “harm” and “material harm”, or between “risk” and “serious risk”? What is “serious harm” versus “material harm” versus “imminent harm”? And what if one of these species of “harm” overlaps with a privacy issue which also triggers a “real risk of significant harm” under federal privacy laws? All of this could be clarified in future drafts of Bill C-27, which would make it easier for lawyers to advise their clients when navigating the complex legal obligations in AIDA.
Stay tuned. This law has some maturing to do, and much detail is left to the regulations (which are not yet drafted).
Calgary – 16:30 MT
Privacy Update: Will Consent be Required for Outsourcing Canadian Data?
By Richard Stobbe
Here’s a familiar picture: You are a Canadian business and you use a service provider outside of the country to process data. Let’s say this data includes personal information. This could be as simple as using Gmail for corporate email, or using Amazon Web Services (AWS) for data hosting, or hiring a UK company for CRM data processing services.
Until now, the Federal Office of the Privacy Commissioner (OPC) has taken the position that data processing of this type is a “use” of personal information by the entity that collected the data for the purposes of the Personal Information Protection and Electronic Documents Act (PIPEDA). Such use would require the consent of the individual for the initial collection, but would not require additional consent for the data processing by an out-of-country service provider, provided there was consent for that use at the time the information was first collected.
The privacy laws of some provinces contain notification requirements in certain cases, though not express consent requirements, for the use of service providers outside of Canada. For example, Alberta’s Personal Information Protection Act, Section 13.1, indicates that an organization that transfers personal information to a service provider outside Canada must notify the individual in question.
The OPC’s guidance, dating from 2009, took a similar approach, allowing Canadian companies to address the cross-border data processing through notification to the individual. In many cases, a company’s privacy policy might simply indicate in a general way that personal information may be processed in countries outside of Canada by foreign service providers. In the words of the commissioner in 2009: “[a]ssuming the information is being used for the purpose it was originally collected, additional consent for the transfer is not required.” As long as consumers were informed of transborder transfers of personal information, and the risk that local authorities will have access to information, the organization was meeting its obligations under PIPEDA.
A recent consultation paper published by the OPC has signalled a potential change to that approach. If the changes are adopted by the OPC, this will represent a significant shift in data-handling practices for many Canadian companies.
Draft guidance from the OPC, issued April 9, 2019, indicates that recent high profile cross-border data breaches, such as the incident involving Equifax, have inspired a stricter consent-based approach. Today, the OPC issued a supplementary discussion document to explain the reasons for the proposed changes. (See: Consultation on Transborder Dataflows)
Reversing 10 years of guidance on this issue, the OPC now explains that a transfer of personal information between one organization and another should be understood as a “disclosure” according to the common understanding of that term in privacy laws.
If the draft guidelines are adopted by the OPC, any cross-border transfer of personal data to an outsourced service provider would be considered a “disclosure”, mandating a new consent, as opposed to a “use” which could be covered by the initial consent at the time of collection. Depending on the circumstances, the type of disclosure and the type of information, this could require express consent. It is not yet clear how this would apply to existing transborder data-processing agreements, whether additional detail would be required for consent purposes, or whether the specific names of the service providers would be required as part of the consent. This could significantly impact the data-processing, e-commerce operations, and consent practices of many Canadian businesses.
Consultations are open until June 4, 2019. Please stay tuned for further updates on this issue, and if you want advice on your company’s privacy obligations, please contact us.
Calgary – 16:00 MST
Tech Companies Take Note: Google Hit with $76 Million GDPR Fine
By Richard Stobbe
The National Data Protection Commission (CNIL), France’s data protection authority, came down on Google with a €50 million penalty for breach of the EU’s General Data Protection Regulation (“GDPR”).
CNIL was responding to complaints from two privacy advocacy groups who called out Google for lacking a valid legal basis to process the personal data of EU users of Google services, particularly for ads personalization purposes. Although Google’s European headquarters are situated in Ireland, that country did not take the role of “lead authority” for DPA purposes, since processing of EU users’ data occurred through Google’s U.S. operations, rather than through its Irish division. This left the field open for France to take over the file and render a decision on the complaint.
The GDPR establishes a “one-stop shop” which is designed for greater certainty for those organizations doing business in the EU. A business should only have to deal with the Data Protection Authority (“DPA”) of the country where its “main establishment” is located.
A DPA is, under the GDPR regime, an independent public authority tasked with supervising enforcement of data protection laws, with investigative powers and corrective authority. There is one DPA in each EU member state.
Google, according to CNIL, failed on two main counts based on the GDPR principles of transparency, information and consent:
- First, Google’s explanation of data processing purposes is neither clear nor comprehensive. “Users are not able to fully understand the extent of the processing operations carried out by GOOGLE,” says CNIL, and Google’s processing operations are “massive and intrusive” due to the sheer scope of the company’s services and the high volume of data which is collected and processed by Google.
- Second, consent was not validly obtained, because the specific uses are not made clear to the user. This is the case even though the user of Google’s services can modify some options and configure some features of personalized ads. Just because some user configuration is allowed, that does not mean Google is in compliance with GDPR requirements.
- CNIL was not impressed with the configuration options for ads personalization. To the extent configuration is made available to Google’s users, the choices are “pre-ticked”. The GDPR requires “unambiguous” consent, requiring a specific affirmative action from the user (for example, by clicking a non-pre-ticked box). At the point of account creation, when a user clicks “I agree to the processing of my information as described above and further explained in the Privacy Policy”, the user gives consent in full, for all processing operations. However, the CNIL notes that “the GDPR provides that the consent is ‘specific’ only if it is given distinctly for each purpose.”
This is the first penalty issued by France’s DPA under the GDPR.
Should Canadian companies be concerned? Any company that is engaged in processing of EU resident data will be subject to the GDPR, not just those who have a permanent establishment in the EU.
Calgary – 07:00 MST
Facebook at the Supreme Court of Canada: forum selection clause is unenforceable
By Richard Stobbe
We just wrote about a dispute resolution clause that was enforceable in the Uber case. Last year, a privacy class action claim landed in the Supreme Court of Canada (SCC) and I see that we haven’t had a chance to write this one up yet.
In our earlier post (Facebook Follows Google to the SCC) we provided some of the background: The plaintiff Ms. Douez alleged that Facebook used the names and likenesses of Facebook customers for advertising through so-called “Sponsored Stories”. The claim alleged that Facebook ran the “Sponsored Stories” program without the permission of customers, contrary to s. 3(2) of the B.C. Privacy Act. The basic question was whether Facebook’s terms (which apply California law) should be enforced in Canada or whether they should give way to local B.C. law. The lower court accepted that, on its face, the Terms of Service were valid, clear and enforceable, but went on to decide that Facebook’s Forum Selection Clause should be set aside in this case, and that the claim should proceed in a B.C. court. Facebook appealed that decision: Douez v. Facebook, Inc., 2015 BCCA 279 (CanLII) (see this link to the Court of Appeal decision). The appeal court reversed and decided that the Forum Selection Clause should be enforced. Then the case went up to the SCC.
In Douez v. Facebook, Inc., [2017] 1 SCR 751, 2017 SCC 33 (CanLII), the SCC found that Facebook’s forum selection clause is unenforceable.
How did the court get to this decision?  Um…. hard to say, even for the SCC itself, since there was a 3-1-3 split in the 7 member court, with some judges agreeing on the result, but using different reasoning, making it tricky to find a clear line of legal reasoning to follow. The court endorsed its own “strong cause” test from Z.I. Pompey Industrie v. ECU-Line N.V., 2003 SCC 27 (CanLII): When parties agree to a jurisdiction for the resolution of disputes, courts will give effect to that agreement, unless the claimant establishes strong cause for not doing so. Here, the Court tells us that public policy considerations must be weighed when applying the “strong cause” test to forum selection clauses in the consumer context. The public policy factors appear to be as follows:
- Are we dealing with a consumer contract of adhesion between an individual consumer and a large corporation?
- Is there a “gross inequality of bargaining power” between the parties?
- Is there a statutory cause of action, implicating the quasi-constitutional privacy rights? These constitutional and quasi-constitutional rights play an essential role in a free and democratic society and embody key Canadian values, so this will influence the court’s analysis.
- The court will also assess “the interests of justice” and decide which court is better positioned to assess the purpose and intent of the legislation at issue (as in this case, where there was a statutory cause of action under the BC Privacy Act).
- Lastly, the court will assess the expense and inconvenience of requiring an individual consumer to litigate in a foreign jurisdiction (California, in this case), compared to the comparative expense and inconvenience to the big bad corporation (Facebook, in this case).
The court noted that, in order to become a user of Facebook, a consumer “must accept all the terms stipulated in the terms of use, including the forum selection clause. No bargaining, no choice, no adjustments.” But wait… Facebook is not a mandatory service, is it? The fact that a consumer’s decision to use Facebook is entirely voluntary seems to be missing from the majority’s analysis. A consumer must accept the terms, yes, but there is a clear choice: don’t become a user of Facebook. That option does not appear to be a factor in the court’s analysis.
The take-home lesson is that forum selection clauses will have to be carefully handled in consumer contracts of adhesion. It is possible that this decision will be limited to these unique circumstances.
Calgary – 07:00 MST
Privacy Breach … While Jogging Down a Public Path?
By Richard Stobbe
An online video shows someone jogging on a public pathway in a 2-second clip. Let me get this straight… does this constitute a breach of privacy rights? According to an Ontario court, the answer is yes.
This is a scenario that is likely repeated every year across the country in a variety of industries. In this case, a real estate developer engaged a video developer to produce a sales video for a residential condominium project. To capture the “lifestyle” of the neighbourhood, the video developer shot footage of local shops, bicycling paths, jogging trails, and other local amenities. In the course of this project, the plaintiff – a jogger – was caught on video, and after editing, a 2-second clip of the plaintiff was included in the final 2-minute promotional video.
Like all video, this one was posted to YouTube, where it lived for 1 week, before being taken down in response to the plaintiff’s complaints.
In Vanderveen v Waterbridge Media Inc., 2017 CanLII 77435 (ON SCSM), the court considered the claim that this clip of the jogger constituted a violation of privacy rights and appropriation of personality rights. In the analysis, the court considered the tort of “intrusion upon seclusion”, which was designed to provide a remedy for conduct that intrudes upon private affairs where the invasion of privacy is considered “highly offensive”.
The court in this case decided that a 2-second video clip of someone jogging on a public pathway does constitute a “highly offensive” invasion of a person’s private affairs. The plaintiff was awarded $4,000 for the breach of privacy rights and $100 for the appropriation of personality.
Some points to consider:
- It is unclear why the court did not spend more time considering the issue of “reasonable expectation of privacy”. A number of court decisions have looked at this issue as it relates to video or photography on public beaches, public schools and other public places. This is not a new issue in privacy law, but it appears to have been given short shrift in this decision.
- The impact of this decision must be put into context: it is an Ontario small claims court decision, so it won’t be binding on other courts. However, it may be referred to in other cases of this type. The decision is unlikely to be appealed considering the amounts at issue, so we probably won’t see a review of the analysis at a higher level of court.
- Consider the implications of this approach to privacy and personality rights in light of the use of drone footage in making promotional videos – something that is becoming more common as costs lower and access to this technology increases.
- When entering into contracts for any promotional or marketing collateral – website content, images, video footage, film, advertisements, print materials – both sides should review the terms to confirm who bears the risk of addressing complaints such as this one, and who bears the responsibility for obtaining consents or releases from recognizable individuals who appear in the media content.
For advice regarding privacy rights, personality rights, drone law, and video development contracts, contact Field Law.
Calgary – 10:00 MST
Another Canadian Decision Reaches Outside Canada
By Richard Stobbe
This fascinating Federal Court case deals with an Alberta-based individual who complained of certain material that was re-published on the website Globe24h.com, based in Romania. The server that hosted the website was located in Romania. The material in question was essentially a re-publication of certain publicly available Canadian court and tribunal decisions.
The Alberta individual complained that this conduct – the re-publication of a Canadian tribunal decision on a foreign server – was a breach of his privacy rights since he was named personally in this tribunal decision.
So, let’s get this straight, this is a privacy-based complaint relating to republication in public of a publicly available decision?
Yes, you heard that right. This Romanian site scraped decisions from Canadian court and tribunal websites (information that was already online) and made this content searchable on the internet (making it …available online).
This is an interesting decision, and we’ll just review two elements:
The first issue was whether Canadian privacy laws (such as PIPEDA) have extraterritorial application to Globe24h.com as a foreign-based organization. On this point, the Federal Court, citing a range of past decisions (including the Google v. Equustek decision which is currently being appealed to the Supreme Court of Canada), said:
“In this case, the location of the website operator and host server is Romania. However, when an organization’s activities take place exclusively through a website, the physical location of the website operator or host server is not determinative because telecommunications occur ‘both here and there’: Libman v The Queen, 1985 CanLII 51 (SCC), [1985] 2 SCR 178 at p 208.” [Emphasis added]
Secondly, the Federal Court reviewed whether the Romanian business was engaged in “commercial activities” (since that is an element of PIPEDA). The court noted the Romanian site “was seeking payment for the removal of the personal information from the website. The fees solicited for doing so varied widely. Moreover, if payment was made with respect to removal of one version of the decision, additional payments could be demanded for removal of other versions of the same information. This included, for example, the translation of the same decision in a Federal Court proceeding or earlier rulings in the same case.”
The Romanian site made a business out of removing data from this content. But note the court’s conclusion that “The evidence leads to the conclusion that the respondent was running a profit-making scheme to exploit the online publication of Canadian court and tribunal decisions containing personal information.” [Emphasis added] In a general sense, that statement could just as easily apply to Google or any of the commercial legal databases which are marketed to lawyers.
The court concluded that it could take jurisdiction over the Romanian website, and ordered the foreign party to take down the offending content.
This decision represents another reach by a Canadian court to take down content that has implications outside the borders of Canada. From the context, it is likely that this decision is going to stand, since the respondent did not contest this lawsuit. The issue of the extraterritorial reach of Canadian courts in the internet context is going to be overtaken by the pending Supreme Court decision in Equustek. Stay tuned.
Calgary – 07:00 MT
Dear CASL: When can I rely on “implied consent”?
By Richard Stobbe
Canada’s Anti-Spam Legislation (CASL) is overly complex and notoriously difficult to interpret – heck, even lawyers start to see double when they read the official title of the law (An Act to promote the efficiency and adaptability of the Canadian economy by regulating certain activities that discourage reliance on electronic means of carrying out commercial activities, and to amend the Canadian Radio-television and Telecommunications Commission Act, the Competition Act, the Personal Information Protection and Electronic Documents Act and the Telecommunications Act).
The concept of implied consent (as opposed to express consent) is built into the law, and the trick is to interpret when, exactly, a company can rely on implied consent to send commercial electronic messages (CEMs). There are a number of different types of implied consent, including a pre-existing business or non-business relationship, and the so-called “conspicuous publication exemption”.
One recent administrative decision (Compliance and Enforcement Decision CRTC 2016-428 re: Blackstone Learning Corp.) focused on “conspicuous publication” under Section 10(9) of the Act. A company may rely on implied consent to send CEMs where (a rough checklist appears in the sketch after this list):
- the email address in question was conspicuously published;
- the published address was not accompanied by a statement that the person does not wish to receive unsolicited commercial electronic messages at that address; and
- the message is relevant to the person’s business, role, functions or duties in a business or official capacity.
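To make the structure of the exemption concrete, here is a minimal checklist sketch in Python. The field names are invented for illustration; a real compliance decision turns on the statutory text, the evidence you can produce, and legal advice, not on a boolean function.

```python
from dataclasses import dataclass

@dataclass
class AddressRecord:
    """Hypothetical record of how a recipient's email address was obtained."""
    conspicuously_published: bool   # published by, or with the authority of, the account holder
    has_no_spam_statement: bool     # address accompanied by a "no unsolicited CEMs" statement
    message_relevant_to_role: bool  # CEM relevant to the recipient's business, role or duties

def implied_consent_by_conspicuous_publication(rec: AddressRecord) -> bool:
    """Rough reading of the conspicuous publication exemption described above."""
    return (rec.conspicuously_published
            and not rec.has_no_spam_statement
            and rec.message_relevant_to_role)
```

Remember that the onus of proving each of these elements rests with the sender, a point the CRTC decision discussed below makes clear.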
In this case, Blackstone Learning sent about 380,000 emails to government employees during 9 separate ad campaigns over a 3 month period in 2014. The case against Blackstone by the CRTC did not dwell on the evidence – in fact, Blackstone admitted the essential facts. Rather, this case focused on the defense raised by Blackstone. The company pointed to the “conspicuous publication exemption” and argued that it could rely on implied consent for the CEMs since the email addresses of the government employees were all conspicuously published online.
However, the company provided very little support for this assertion, and it did not provide back-up related to the other two elements of the defense; namely, that the email addresses were not accompanied by a “no spam” statement, and that the CEMs were relevant to the role or business of the recipients. The CRTC’s decision provides some guidance on implied consent and “conspicuous publication”:
- The CRTC observed that “The conspicuous publication exemption and the requirements thereof set out in paragraph 10(9)(b) of the Act set a higher standard than the simple public availability of electronic addresses.” In other words, finding an email address online is not enough.
- First, the exemption only applies if the email recipient publishes the email address or authorizes someone else to publish it. Let’s take the example of a sales rep who might publish his or her email, and also authorize a reseller or distributor to publish the email address. However, the CRTC notes if a third party were to collect and sell a list of such addresses on its own then “this would not create implied consent on its own, because in that instance neither the account holder nor the message recipient would be publishing the address, or be causing it to be published.”
- The decision does not provide a lot of context around the relevance factor, or how that should be interpreted. CRTC guidance provides some obvious examples – an email advertising how to be an administrative assistant is not relevant to a CEO. In this case, Blackstone was advertising courses related to technical writing, grammar and stress management. Arguably, these topics might be relevant to a broad range of people within the government.
- Note that the onus of proving consent, including all the elements of the “conspicuous publication exception”, rests with the person relying on it. The CRTC is not going to do you any favours here. Make sure you have accurate and complete records to show why this exemption is available.
- Essentially, the email address must “be published in such a manner that it is reasonable to infer consent to receive the type of message sent, in the circumstances.” Those fact-specific circumstances, of course, will ultimately be decided by the CRTC.
- Lastly, the company’s efforts at compliance may factor into the ultimate penalty. Initially, the CRTC assessed an administrative monetary penalty (AMP) of $640,000 against Blackstone. The decision noted that Blackstone’s correspondence with the Department of Industry showed the “potential for self-correction” even if Blackstone’s compliance efforts were “not particularly robust”. These compliance efforts, among other factors, convinced the commission to reduce the AMP to $50,000.
As always, when it comes to CEMs, an ounce of CASL prevention is worth a pound of AMPs. Get advice from professionals about CASL compliance.
See our CASL archive for more background.
Calgary 07:00 MST
Online Defamation & Libel: The Trump Effect
By Richard Stobbe
It’s not often that our little blog intersects with such titanic struggles as the U.S. presidential race – and by using the term “titanic” I certainly don’t mean to suggest that anything disastrous is in the future.
After the New York Times published personal allegations of sexual assault against presidential candidate Donald Trump, the candidate’s lawyers promptly fired a shot across the bow, threatening legal action for libel and demanding that the article be removed from the Times’s website. Last week, the lawyer for the New York Times responded to lawyers for Mr. Trump with a succinct defense of their publication of the article, arguing “We did what the law allows: We published newsworthy information about a subject of deep public concern.”
If this had happened in Canada, the law would almost certainly favour the position taken by the Times. In Quan v. Cusson, [2009] 3 SCR 712, the Supreme Court of Canada confirmed the defense of “responsible communication on matters of public interest”, permitting journalists to report on matters of public interest. That case, interestingly, dealt with an Ontario police officer who attended in New York City shortly after the events of September 11, 2001 in order to assist with the search and rescue effort at Ground Zero. The officer sued for defamation after a newspaper published articles alleging that he had misrepresented himself to the New York authorities and possibly interfered with the rescue operation. As noted by the CBA, this defense of responsible journalism applies if:
- the news was urgent, serious, and of public importance,
- the journalist used reliable sources, and
- the journalist tried to get and report the other side of the story.
Calgary – 07:00 MST
Liability of Cloud-Based Service Provider For Data Breach
By Richard Stobbe
Silverpop Systems provides digital marketing services through a cloud-based tool called ‘Engage’. Leading Market Technologies, Inc. (“LMT”) engaged Silverpop through a service agreement and during the course of that agreement LMT uploaded digital advertising content and recipient e-mail addresses to the Engage system. A trove of nearly half a million e-mail addresses, provided by LMT, was stored on the Engage online system. In November 2010, Silverpop’s system was hacked, putting LMT’s email list at risk. Silverpop notified LMT of the data breach. After LMT refused to pay for further service, Silverpop suspended the agreement.
Litigation commenced in 2012, with LMT claiming damages for breach of contract and negligence based on Silverpop’s failure to keep the email list secure. Should the service provider be liable? Silverpop argued that it was engaged to provide access to its online system, not specifically to keep data secure. Thus there was no breach of its obligations under the agreement. And anyway, if LMT suffered any damages, they were indirect or consequential, and such damages were excluded under the terms of the agreement. LMT countered that, in fact, the agreement quite clearly contained a confidentiality clause, and that the damages suffered by LMT were direct damages, not indirect consequential damages.
The US Federal Circuit Court of Appeals in Silverpop Systems Inc. v. Leading Market Technologies Inc. sided with Silverpop:
- “Here, the parties’ agreement was not one for the safeguarding of the LMT List. Rather, the parties contracted for the providing of e-mail marketing services. While it was necessary for LMT to provide a list of intended recipients (represented as e-mail addresses on the LMT List) to ensure that the service Silverpop provided (targeted e-mail marketing) was carried out, the safe storage of the list was not the purpose of the agreement between the parties.” (Emphasis added)
The court was careful to review both the limit of liability clause (which capped overall liability at 12 months of fees), and the exclusion clause (which barred recovery for indirect or consequential damages). The overall limit of liability had an exception: the cap did not apply to a breach of the confidentiality obligation. However, this exception did not impact the scope of the limit on indirect or consequential damages. Since the court decided that the claimed breach did not result from a failure of performance, the exclusion of consequential damages applied to LMT’s alleged loss. As a result, LMT’s claims were dismissed.
Lessons for business?
- Those limitation of liability and exclusion clauses are often considered “boilerplate”. But they really do make a difference in the event of a claim. Ensure you have experienced counsel providing advice when negotiating these clauses, from either the customer or service provider perspective.
Calgary – 07:00 MST
CASL Enforcement (Part 1)
By Richard Stobbe
Since July 1, 2014, Canada’s Anti-Spam Law (or CASL) has been in effect, and the software-related rules have been in force since January 15, 2015.
With the benefit of hindsight, we can see a few patterns emerge from the efforts by the enforcement trifecta: the Privacy Commissioner of Canada, the CRTC and the Competition Bureau. (For background, see our earlier posts.) What follows is a round-up of some of the most interesting and instructive enforcement actions:
This certainly points to the more technical risks of poorly implemented unsubscribe features, rather than underlying gaps in consent. Perhaps this is because heavy enforcement action related directly to consent is still to come. Implied consent can be relied upon during a three-year transitional period. After that window closes, expect enforcement to focus on failures of consent.
To put this all in perspective, consider enforcement of other laws within the CRTC mandate: in 2014 the CRTC did not issue any notices of violation of CASL, but issued 10 notices of violation related to the Unsolicited Telecommunications Rules and the National Do Not Call List (DNCL); in 2015 the CRTC issued 1 notice of violation of CASL and about 20 related to the Do Not Call List.
Calgary – 07:00 MT
Apple’s Liability for the Xcode Hack
By Richard Stobbe
I don’t think I’m going out on a limb by speculating that someone, somewhere is preparing a class-action suit based on the recently disclosed hack of Apple’s app ecosystem.
How did it happen? In a nutshell, hackers were able to infect a version of Apple’s Xcode software package for iOS app developers. A number of iOS developers – primarily in China, according to recent reports – downloaded this corrupted version of Xcode, then used it to compile their apps. This corrupted version was not the “official” Apple version; it was accessed from a third-party file-sharing site. Apps compiled with this version of Xcode were infected with malware known as XcodeGhost. These corrupted apps were uploaded and distributed through Apple’s Chinese App Store. In this way the XcodeGhost malware snuck past Apple’s own code review protocols and, through the wonder of app store downloads, it infected millions of iOS devices around the world.
The malware does a number of nasty things – including fishing for a user’s iCloud password.
This case provides a good case study for how risk is allocated in license agreements and terms of service. What do Apple’s terms say about this kind of thing? In Canada, the App Store Terms and Conditions govern a user’s contractual relationship with Apple for the use of the App Store. On the face of it, these terms disclaim liability for any “…LOSS, CORRUPTION, ATTACK, VIRUSES, INTERFERENCE, HACKING, OR OTHER SECURITY INTRUSION, AND APPLE CANADA DISCLAIMS ANY LIABILITY RELATING THERETO.”
Apple could be expected to argue that this clause deflects liability. And if Apple is found liable, then it would seek the cover of its limitation of liability clause. In the current version of the terms, Apple claims an overall limit of liability of $50. Let’s not forget that “hundreds of millions of users” are potentially affected.
As a preliminary step however, Apple would be expected to argue that the law of the State of California governs the contract, and Apple would be arguing that any remedy must be sought in a California court (see our post the other day: Forum Selection in Online Terms).
Will these limitation of liability and forum-selection clauses hold up to the scrutiny of Canadian courts if there is a claim against Apple?
Calgary – 07:00 MT
Outsourcing by Canadian Companies after the USA PATRIOT Act
By Richard Stobbe
Wondering about outsourcing your data to the U.S.? What follows is an update to one of our most popular posts: Outsourcing by Canadian Companies: Another Look at the USA PATRIOT Act, originally written in January 2013.
In that post, we discussed the concern that U.S. government authorities may use the provisions of the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (“PATRIOT Act”) to access the personal information of Canadians where that information is stored in the United States in the context of outsourcing or cloud-computing.
We also noted that for private sector businesses there are no specific legal prohibitions on outsourcing to the United States in light of the PATRIOT Act, provided (1) reasonable safeguards are built into the outsource contract (including confidentiality, use-restrictions, security, and provisions to meet monitoring and audit requirements), and (2) customers are notified in a clear way when their personal information will be stored or handled outside Canada. The only exceptions to this are within the public sector, as reviewed in our earlier post.
What Has Changed and What Remains the Same
This is a complicated area of law. Starting in June 2013, Edward Snowden’s revelations about N.S.A.’s pervasive warrantless surveillance programs triggered a broader debate about privacy, as well as the specific risks of outsourcing to U.S. companies.
Certain provisions of the PATRIOT Act expired under a sunset clause on June 1, 2015. The U.S. Congress passed the USA FREEDOM Act on June 2, 2015 (in keeping with the American penchant for legislative acronyms, the full name is “Uniting and Strengthening America by Fulfilling Rights and Ending Eavesdropping, Dragnet-collection and Online Monitoring Act”).
The USA FREEDOM Act restores many of the expired provisions of the PATRIOT Act through 2019. Some provisions of the Foreign Intelligence Surveillance Amendments Act will expire in 2017 (including Section 702, a provision which underpins some of the N.S.A.’s bulk surveillance of online communications). Under the FREEDOM Act, certain sections of the Foreign Intelligence Surveillance Act of 1978 were amended in an effort to delimit the NSA’s mass data collection programs. However, the restrictions on bulk data collection don’t take effect for 6 months after the USA FREEDOM Act is enacted. There is also a carve-out to permit the government to obtain FISA orders during this 180-day period. The effect of this is unclear, but commentators have speculated that during this 6-month grace period the N.S.A. can continue bulk collection, and obtain FISA orders which are not constrained by the requirement for a “selection term”.
Furthermore, bulk collection of phone data is not necessarily coming to an end – arguably, it is merely being delegated to the telecoms: “The Freedom Act does take the bulk collection of Americans’ telephone records out of the hands of the National Security Agency and leaves those records with the phone companies; it sets up procedures for the NSA to get access to those records when it wants to.”
The new law does introduce reforms for oversight of government surveillance. In a nod to transparency, some FISA Court opinions may become available, and technology companies will have the ability to publicly report the number of government surveillance requests or investigation inquiries they receive. Previously, companies were prohibited from reporting that such requests had been received.
Generally, under the FREEDOM Act, indiscriminate bulk data collection is to be reformed by requiring the use of “specific selection terms”. In other words, government agencies such as the NSA must use a search term – the name of a specific person, account, address, or personal device, or any other specific identifier – to limit the scope of data collection “to the greatest extent reasonably practicable”.
In 2004, after the initial flurry of anxiety about US government surveillance under the PATRIOT Act, the Privacy Commissioner of Canada noted: “The [PATRIOT] Act is simply one example of a law that can give the United States government or its agencies access to personal information about Canadians that has been transferred to the United States. Research done by the Office of the Privacy Commissioner and discussions with the Department of Justice suggest that the USA PATRIOT Act is not likely in the normal course of events to be used to obtain personal information held in the United States about Canadians.” (Emphasis added)
In light of the 2013 Snowden revelations (and the 2007 Mark Klein disclosures), we now know that, in fact, the bulk collection of phone and internet data by the N.S.A. would have resulted in a lot of personal information about Canadians being collected by the N.S.A. in the United States through the N.S.A.’s PRISM, ECHELON and related surveillance programs.
Data access by Canadian or American government authorities in the course of investigations is not new. Don’t forget that the PATRIOT Act itself was merely an amendment and expansion to a series of existing government investigation tools which were already part of U.S. law, such as the Electronic Communications Privacy Act, Computer Fraud and Abuse Act, Foreign Intelligence Surveillance Act, Money Laundering Control Act and the Bank Secrecy Act. Going back even further, NSA’s cooperation and information-sharing with Canadian security agencies actually dates to the 1940s (see: the UK-USA Agreement). However, the sheer scope, breadth and depth of surveillance was new.
The Americans are not the only ones who carry on surveillance. There are a number of Canadian laws that enable police, security agencies and government investigators to obtain access to information held in Canada in the course of an investigation. And as in the U.S., Canadian security agencies have also been caught exceeding the legal limits on their online surveillance (see X (Re), 2013 FC 1275; aff’d 2014 FCA 249, where the Federal Court and Federal Court of Appeal decided that CSIS breached the duty of candour owed to the Court in seeking and obtaining search warrants for surveillance on Canadians outside Canada).
Canadian police and security agencies can also obtain information held in the U.S., just as American security agencies can obtain records held in Canada through information-sharing agreements, protocols and a bilateral treaty between the United States and Canada known as the Mutual Legal Assistance Treaty (the “MLAT”). Other countries have similar investigative powers.
While the Americans are making some modest reforms to their surveillance laws, Canadian authorities are actually expanding their reach; the Anti-terrorism Act, 2015 (Bill C-51) was passed on June 9, 2015, and is awaiting royal assent. This new law expands the information-gathering powers between CSIS, police investigators and other Canadian government agencies.
Further, the effect of so-called “boomerang routing” means that online information flowing between a Canadian sender and Canadian recipient is still often routed through the US. (See: IXMaps.ca) Thus, even where data is not physically stored in the US, it may be caught by ongoing N.S.A. surveillance at the point the data traverses through an internet exchange point located within the United States.
Conclusion
As a matter of risk-assessment for Canadian companies outsourcing data to cloud-computing service providers, should you be concerned that your (or your customers’) Canadian online data will be subject to access by the U.S. government?
1. We know that for Canadian private sector businesses there are still no legal prohibitions against outsourcing data to the United States (note that the public sector is treated differently);
2. Best practices still dictate that (a) reasonable safeguards should be built into the outsource contract (including confidentiality, use-restrictions, security, and provisions to meet monitoring and audit requirements), and (b) customers should be notified in a clear way when their personal information will be stored or handled outside Canada.
3. There can be no doubt that surveillance practices under the (old) PATRIOT Act resulted in the mass indiscriminate collection of internet and phone data for many years (and very likely continues within the 6-month period after enactment of the FREEDOM Act). It appears very likely that Canadian data outsourced to the U.S. was subject to bulk collection by the N.S.A. Due to “boomerang routing”, it appears likely that even data stored on servers located within Canada often flows through internet exchange locations within the U.S., and therefore would be susceptible to bulk collection by the N.S.A. The USA FREEDOM Act (which is really the PATRIOT Act 2.0) does impose some mild but important reforms on the scope of N.S.A. surveillance. If bulk data and phone-record collection is actually curtailed, the ongoing risk is associated with “targeted” or “selection term” access, in situations where government security and law enforcement agencies exercise rights of accessing and monitoring online data in the course of investigations of a “specific person, account, address, or personal device” in the U.S. It is worth noting that this ongoing risk of access is similar on both sides of the Canada/U.S. border, since Canadian security and law enforcement agencies have similar powers of investigation, and the two governments can rely on MLAT requests and other information sharing protocols to share data.
When you weigh the issues and risks associated with outsourcing Canadian data to the U.S., consider these points and seek advice from experienced IT and privacy counsel.
Further reading: Law, Privacy and Surveillance in Canada in the Post-Snowden Era.
Calgary – 07:00 MDT
Copyright Implications of a “Right to be Forgotten”? Or How to Take-Down the Internet Archive.
By Richard Stobbe
They say the internet never forgets. From time to time, someone wants to challenge that dictum.
In our earlier posts, we discussed the so-called “right to be forgotten” in connection with a Canadian trade-secret misappropriation and passing-off case and an EU privacy case. In a brief ruling in October, the Federal Court reviewed a copyright claim that fits into this same category. In Davydiuk v. Internet Archive Canada, 2014 FC 944 (CanLII), the plaintiff sought to remove certain pornographic films that were filmed and posted online years earlier. By 2009, the plaintiff had successfully pulled down the content from the original sites on which the content had been hosted. However, the plaintiff discovered that the Internet Archive’s “Wayback Machine” had crawled and retained copies of the content as part of its archive.
If you’re not familiar with the Wayback Machine, here is the court’s description: “The ‘Wayback Machine’ is a collection of websites accessible through the websites ‘archive.org’ and ‘web.archive.org’. The collection is created by software programs known as crawlers, which surf the internet and store copies of websites, preserving them as they existed at the time they were visited. According to Internet Archive, users of the Wayback Machine can view more than 240 billion pages stored in its archive that are hosted on servers located in the United States. The Wayback Machine has six staff to keep it running and is operated from San Francisco, California at Internet Archive’s office. None of the computers used by Internet Archive are located in Canada.”
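As a technical aside (not part of the court’s reasons), the archive the court describes is queryable: the Internet Archive publishes an “availability” endpoint that reports the closest stored snapshot of a given URL. The sketch below, in Python using the requests library, shows the general idea; the endpoint reflects the Wayback Machine’s publicly documented API at the time of writing, and the URL queried is just a placeholder.

```python
import requests

def closest_snapshot(url: str) -> str | None:
    """Ask the Wayback Machine's availability API for the closest archived
    copy of `url`; returns the snapshot URL, or None if nothing is archived."""
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=10)
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Placeholder example: does the archive hold a copy of this page?
print(closest_snapshot("example.com"))
```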
The plaintiff used copyright claims to seek the removal of this content from the Internet Archive servers, and these efforts included DMCA notices in the US. Ultimately unsatisfied with the results, the plaintiff commenced an action in Federal Court in Canada based on copyrights. The Internet Archive disputed that Canada was the proper forum: it argued that California was more appropriate since all of the servers in question were located in the US and Internet Archive was a California entity.
Since Internet Archive raised a doctrine known as “forum non conveniens”, it had to convince the court that the alternative forum (California) was “clearly more appropriate” than the Canadian court. It is not good enough to simply show that there is an appropriate forum elsewhere; rather, the party making this argument has to show that clearly the other forum is more appropriate, fairer and more efficient. The Federal Court was not convinced, and it concluded that there was a real and substantial connection to Canada. The case will remain in Canadian Federal Court. A few interesting points come out of this decision:
- This is not a privacy case. It turns upon copyright claims, since the plaintiff in this case had acquired the copyrights to the original content. Nevertheless, the principles in this case (to determine which court is the proper place to hear the case) could be applied to any number of situations, including privacy, copyright or personality rights.
- Interestingly, the fact that the plaintiff had used American DMCA notices did not, by itself, convince the court that the US was the best forum for this case.
- The court looked to a recent trademark decision (Homeaway.com Inc. v. Hrdlicka) to show that a trademark simply appearing on the computer screen in Canada constituted use and advertising in Canada for trademark law purposes. Here, accessing the content in Canada from servers located in the US constituted access in Canada for copyright purposes.
- While some factors favoured California, and some favoured Canada, the court concluded that California was not clearly more appropriate. This shows there is a first-mover advantage in commencing the action in the preferred jurisdiction.
Get advice on internet copyright claims by contacting our Intellectual Property & Technology Group.
Calgary – 07:00 MST
Update: PIPA Revived
By Richard Stobbe
As a follow-up to our earlier post (PIPA on Death’s Door), Alberta’s Personal Information Protection Act (PIPA) has been resuscitated. The Supreme Court of Canada (SCC) has granted a six-month reprieve, to allow the Government of Alberta to pass amendments to PIPA. An amended bill was tabled in the legislature last week. The amendments attempt to strike a balance to address the constitutional issues that were the cause of the Act’s downfall in an SCC decision more than a year ago.
Stay tuned.
Calgary – 07:00 MST
Two Privacy Class Actions: Facebook and Apple (Part 2)
By Richard Stobbe
In Part 1, we looked at the B.C. decision in Douez v. Facebook, Inc.
Another proposed privacy class action was heard in the B.C. court a few months later: Ladas v. Apple Inc., 2014 BCSC 1821 (CanLII).
This was a claim by a representative plaintiff, Ms. Ladas, alleging that Apple breached the customer’s right to privacy under the Privacy Act (B.C.), since iOS 4 records the location of the “iDevice” (that’s the term used by the court for any Apple-branded iOS products) by surreptitiously recording and storing locational data in unencrypted form which is “accessible to Apple”. The claim did not assert that this info was transmitted to Apple, merely that it was “accessible to Apple”. This case involved a different section of the Privacy Act (B.C.) than the one claimed in Douez.
The Ladas claim, curiously, referred to a number of public-sector privacy laws as a basis for the class action, and the court dismissed these claims as providing no legal basis. The court did accept that there was a basis for a claim under the Privacy Act (B.C.) and similar legislation in 3 other provinces. However, the claim fell down on technical merit. It did not meet all of the requirements under the Class Proceedings Act: specifically, the court was not convinced that there was an “identifiable class” of 2 or more persons, and did not accept there were “common issues” among the proposed class members (assuming there was an identifiable class).
Thus, the class action was not certified. It was dismissed without leave to amend the pleadings.
Apple’s iOS software license agreement did not come into play, since the claim was dismissed on other grounds. If the claim had proceeded far enough to consider the iOS license, then it would surely have faced the same defences raised by Facebook in Douez. As the judgment noted, Apple argued that “every time a user updates the version of iOS running on the user’s iDevice, the user is prompted to decide whether the user wants to use Location Services by accepting the terms of Apple’s software licensing agreement. Apple relies on users taking such steps in its defence of the plaintiff’s claims. The legal effect of a user clicking on ‘consent’ or ‘allow’ or ‘ok’ or ‘I agree’ would be an issue on the merits in this action.”
Any test of Apple’s license agreement will have to wait for another day.
Calgary – 07:00 MST
Two Privacy Class Actions: Facebook and Apple
By Richard Stobbe
Two privacy class actions earlier this year have pitted technology giants Facebook Inc. and Apple Inc. against Canadian consumers who allege privacy violations. The two cases resulted in very different outcomes.
First, the Facebook decision: In Douez v. Facebook, Inc., 2014 BCSC 953 (CanLII), the court looked at two basic questions:
- Do British Columbian users of social media websites run by a foreign corporation have the protection of BC’s Privacy Act, R.S.B.C. 1996, c. 373?
- Do the online terms of use for social media override these protections?
The plaintiff Ms. Douez alleged that Facebook used the names and likenesses of Facebook customers for advertising through so-called “Sponsored Stories”. The claim alleged that Facebook ran the “Sponsored Stories” program without the permission of customers, contrary to s. 3(2) of the B.C. Privacy Act, which says:
“It is a tort, actionable without proof of damage, for a person to use the name or portrait of another for the purpose of advertising or promoting the sale of, or other trading in, property or services, unless that other, or a person entitled to consent on his or her behalf, consents to the use for that purpose.”
Interestingly, this Act was first introduced in B.C. in 1968, even before the advent of the primitive internet in 1969.
Facebook argued that its Terms of Use precluded any claim in a B.C. court, due to the “Forum Selection Clause” which compels action in the State of California. The court accepted that, on its face, the Terms of Service were valid, clear and enforceable. However, the court went on to decide that the B.C. Privacy Act establishes unique claims and specific jurisdiction. The Act mandates that claims under it “must be heard and determined by the Supreme Court” in British Columbia. This convinced the court that Facebook’s Forum Selection Clause should be set aside in this case, and the claim should proceed in a B.C. court.
The class action was certified. Facebook has appealed. Stay tuned.
Next up, the Apple experience.
Calgary – 07:00 MST
Alberta Privacy Law Update: PIPA on Death’s Door
By Richard Stobbe
About a year ago on November 15, 2013, Alberta’s Personal Information Protection Act (PIPA) was declared invalid on constitutional grounds. The Supreme Court of Canada (SCC), in its wisdom, deferred the effect of this order for a 1 year period, to permit the Alberta legislature to revisit and amend the legislation to bring it in line with the Constitution. The legislature has drafted legislation in the intervening period, but is not due to return to work until November 17, 2014, two days after the court’s declaration of invalidity takes effect.
The Alberta government has filed a motion asking the SCC to extend the suspension period, to provide more time to address the issue, but an overhaul of PIPA is not an easy or quick task. Stay tuned.
Calgary – 07:00 MDT
An American Attorney in Canada (Part 2: Anti-Spam)
By Richard Stobbe
Canada and the USA. We enjoy the world’s longest undefended border… a border that unfortunately does not screen spam.
If you are an American attorney with US clients doing business in Canada, then you should be aware of a few things, starting with the fact that Canada lacks the imaginative legislative acronyms the US enjoys, such as the CAN-SPAM Act (from Controlling the Assault of Non-Solicited Pornography And Marketing)… or, while we’re at it, who can forget the Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property (PROTECT-IP) Act, or the Enforcing and Protecting American Rights Against Sites Intent on Theft and Exploitation (E-PARASITE) Act.
Secondly, you should be aware that Canada’s incoming anti-spam law, known unimaginatively as CASL (Canada’s Anti-Spam Law) is coming into force next week, on July 1, 2014. Here are some pointers for US counsel:
- Remember, an organization’s compliance with CAN-SPAM does not necessarily mean compliance with CASL. This is because of a number of important points of departure between the two laws. Canada’s law has been described as among the strictest internationally.
- CASL broadly covers all “commercial electronic messages” and is not restricted to email, as is the case with CAN-SPAM. Thus, CASL is broad enough to capture text messages, social media messaging and other forms of electronic messages.
- CAN-SPAM permits a “negative option” approach to consent, in which toggle consent boxes can be pre-clicked and the user has the ability to opt out by “un-clicking”. CASL prohibits such an approach and requires express consent with an opt-in mechanism (see the sketch after this list).
- Statutory penalties under CASL are more severe (up to $10 million for organizations, and up to $1 million for individuals), and the law also establishes a broader private right of civil action (which will come into effect in the future).
- Lastly, CASL does provide for personal liability for directors and officers.
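By way of illustration only, here is a minimal sketch, in TypeScript, of how a sign-up form handler might enforce the opt-in approach described above by rejecting any pre-checked or defaulted consent box. All names (ConsentSubmission, validateExpressConsent) are hypothetical and this is not a compliance recipe or legal advice; the actual requirements for valid express consent under CASL are more detailed.

```typescript
// Hypothetical sketch of an opt-in consent check for a sign-up form.
interface ConsentSubmission {
  email: string;
  consentBoxChecked: boolean; // state of the consent checkbox at submission
  boxWasPreChecked: boolean;  // whether the form rendered the box already ticked
  consentTimestamp: Date;     // when the user acted (useful for record-keeping)
}

function validateExpressConsent(s: ConsentSubmission): { ok: boolean; reason?: string } {
  // Express consent should be a positive, opt-in act by the user.
  if (s.boxWasPreChecked) {
    // A pre-ticked box ("negative option") is the CAN-SPAM-style approach
    // that CASL does not accept.
    return { ok: false, reason: "Consent box must not be pre-checked" };
  }
  if (!s.consentBoxChecked) {
    return { ok: false, reason: "User did not opt in" };
  }
  return { ok: true };
}

// Example usage:
const result = validateExpressConsent({
  email: "user@example.com",
  consentBoxChecked: true,
  boxWasPreChecked: false,
  consentTimestamp: new Date(),
});
console.log(result.ok); // true
```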
For more information on the application of this law to American businesses, contact Field Law.
Calgary – 07:00 MST
Supreme Court of Canada on Internet Privacy
By Richard Stobbe
The Supreme Court puts it mildly in its opening line: “The Internet raises a host of new and challenging questions about privacy.”
One of those questions is whether an IP address can be considered personal information. An internet protocol (IP) address is the unique numeric identifier assigned to a particular computer or, more broadly, to any node or point on the internet. In the recent case of R. v. Spencer, 2014 SCC 43, the Supreme Court of Canada (SCC) considered whether there is a reasonable expectation of privacy in ISP subscriber information, including IP address information.
In this case, police identified the IP address of a computer that someone had been using to access child pornography. Police approached the ISP and obtained the subscriber information associated with that IP address. At this point, no warrant was issued. This led them to the accused and a warrant was issued for a search of his residence. The accused was charged and convicted. The SCC indicated that in this case, there was a reasonable expectation of privacy in the subscriber information, including the IP address.
Since the search of the subscriber info was obtained without a warrant, the search violated the Charter. While a warrant was eventually issued for a search of the accused’s residence, that warrant could not have been obtained without the original (warrantless, unconstitutional) search of the ISP subscriber information. Since the original search was unconstitutional, it follows that the search of the residence was also unconstitutional. This all leads to the exclusion of the evidence found at the residence.
Nevertheless, the SCC said that, even in light of all of the above points, the “police conduct in this case would not tend to bring the administration of justice into disrepute.” The court concluded, in essence, that excluding the evidence would be worse than allowing that unconstitutional search. The admission of the evidence was therefore upheld.
A few key points to note:
- Terms of Use and Privacy Policies are carefully reviewed and taken into account by the court in these cases.
- In this case Shaw was the ISP. Shaw’s Privacy Policy said that “Shaw may disclose Customer’s Personal Information to: . . . a third party or parties, where the Customer has given Shaw Consent to such disclosure or if disclosure is required by law…” The initial warrantless search by the police was not “required by law” (in the sense that it was merely a request and police had no way to legally compel compliance). This contributed to the court’s conclusion that there was a reasonable expectation of privacy on the part of the accused.
- This contrasts with the decision of the Ontario Court of Appeal in R. v. Ward, 2012 ONCA 660, where the court held that the provisions of PIPEDA were a factor which weighed against finding a reasonable expectation of privacy in subscriber information. That was another child pornography case. In that case, the ISP was Bell, whose terms said “[Bell Sympatico will] offer full co-operation with law enforcement agencies in connection with any investigation arising from a breach of this [Acceptable Use Policy].” There was no reference to disclosures “required” by law. In that case, the accused had a subjective expectation of privacy, but that expectation was not objectively reasonable in light of his criminal activities.
- Consider reviewing your privacy policies and your organization’s ability to disclose subscriber information in light of these decisions.
Calgary – 07:00 MST
The Transitional Period & Implied Consent Under CASL
By Richard Stobbe
If the term “CASL compliance” is giving you a nervous twitch, you’re not alone. Many small and medium-sized businesses in Canada are scrambling to prepare for Canada’s Anti-Spam Law (CASL), whose official title says it all – especially the part about “efficiency and adaptability” (take a deep breath before you read “An Act to promote the efficiency and adaptability of the Canadian economy by regulating certain activities that discourage reliance on electronic means of carrying out commercial activities, and to amend the Canadian Radio-television and Telecommunications Commission Act, the Competition Act, the Personal Information Protection and Electronic Documents Act and the Telecommunications Act“).
Two weeks from today, on July 1st, the first stage of the new law will come into force. The implied consent provisions in Section 66 create a transitional period.
For a period of three years after July 1, 2014, consent is implied for all “commercial electronic messages” (CEMs) if the sender and the recipient have a pre-existing business or non-business relationship and that relationship previously included the exchange of CEMs. Consent can be withdrawn by the recipient at any time during that three-year period. Note that “commercial electronic messages”, “existing business relationship” and “existing non-business relationship” all have special definitions in the legislation.
While this transitional period only lasts for 36 months, it allows a sender of CEMs to rely on prior relationships that reach back in time. The regular implied consent provisions only permit a sender to rely on a two-year window – in other words, implied consent depends on an existing relationship during the two-year period before the CEM is sent (or only six months in some cases).
In recent information sessions, the CRTC has indicated that the Section 66 transition provisions do not impose those two-year or 6-month rules: “So what Section 66 does is during that transition period of three years, the definitions of existing business relationship and existing non-business relationship are not subject to the limitation period, which are six months and two years that would otherwise be applicable. So in theory, if you meet the definition of existing business relationship or existing non-business relationship and there’s the communication of CEMs between the individuals, you could go back 25 years in theory.” [Link to transcript.]
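To make the timing rules above concrete, here is a simplified sketch in TypeScript of how the two windows interact. The names and structure are hypothetical, and the logic is a rough approximation of the post’s description only – the statutory definitions of “existing business relationship” and “existing non-business relationship” carry conditions this sketch does not capture.

```typescript
// Hypothetical sketch of the implied-consent windows described above.
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const TRANSITION_START = new Date("2014-07-01");
const TRANSITION_END = new Date("2017-07-01"); // three years after coming into force

interface Relationship {
  lastTransactionDate: Date;          // e.g. most recent purchase
  exchangedCEMsBeforeJuly2014: boolean;
  consentWithdrawn: boolean;
}

function impliedConsentAvailable(rel: Relationship, sendDate: Date): boolean {
  if (rel.consentWithdrawn) return false;

  // Transitional rule (s. 66): during the three-year period, a pre-existing
  // relationship that included the exchange of CEMs supports implied consent,
  // with no two-year look-back limit.
  const inTransition = sendDate >= TRANSITION_START && sendDate < TRANSITION_END;
  if (inTransition && rel.exchangedCEMsBeforeJuly2014) return true;

  // Regular rule (simplified): implied consent based on an existing business
  // relationship within the two years before the CEM is sent.
  const daysSinceTransaction =
    (sendDate.getTime() - rel.lastTransactionDate.getTime()) / MS_PER_DAY;
  return daysSinceTransaction <= 730;
}

// Example: a customer who last bought something in 2010, but who exchanged
// CEMs with the sender before July 2014, could still be messaged during the
// transitional period even though the regular two-year window has passed.
console.log(
  impliedConsentAvailable(
    {
      lastTransactionDate: new Date("2010-03-15"),
      exchangedCEMsBeforeJuly2014: true,
      consentWithdrawn: false,
    },
    new Date("2015-01-10")
  )
); // true (under the transitional rule)
```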
Don’t think the transitional period lets you off the hook. Every organization should be preparing for CASL’s July 1st start date. If you need assistance in reviewing the scope of your obligations, how the law applies, and how the transitional provisions might apply to your organization, contact us.
Calgary – 07:00 MST