The Artificial Intelligence and Data Act… coming soon to an AI near you

In June 2022, the federal government introduced Bill C-27, an Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act. A major component of this proposed legislation is a brand-new law on artificial intelligence. If passed, it will be the first Canadian law to regulate AI systems.

The stated aim of the Artificial Intelligence and Data Act (AIDA) is to regulate international and interprovincial trade and commerce in artificial intelligence systems. The Act requires the adoption of measures to mitigate “risks of harm” and “biased output” related to something called “high-impact systems”.

OK, so how will this work? First, the Act (since it’s federal legislation) applies to “regulated activity”, which refers to specific activities carried out in the course of international or interprovincial trade and commerce. That makes sense, since that is what falls within federal jurisdiction. Think banks and airlines, for sure, but the scope will be wider than that, since any use of a system by a private sector organization to gather and process data across provincial boundaries will be caught. The regulated activities are defined as:

  • (a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;
  • (b) designing, developing or making available for use an artificial intelligence system or managing its operations.

This is a purposely broad definition, designed to catch the companies that use these systems, the providers of such systems, and the data processors who deploy AI systems in the course of data processing, where such systems are used in the course of international or interprovincial trade and commerce.

The term “artificial intelligence system” is also broadly defined and captures any “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”

For anyone carrying out a “regulated activity”, there are general record-keeping obligations, as well as regulations regarding the handling of anonymized data used in the course of such activities.

For those who are responsible for so-called “high-impact systems”, there are special requirements. First, a provider or user of such a system is responsible for determining whether their system qualifies as a “high-impact system” under AIDA (something to be defined in the regulations).

Those responsible for such “high-impact systems” must, in accordance with the regulations, establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system, and they must also monitor compliance with these mitigation measures.

There’s more: anyone who makes a “high-impact system” available, or who manages the operation of such a system, must also publish a plain-language description of the system that includes an explanation of:

  • (a) how the system is intended to be used;
  • (b) the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make;
  • (c) the mitigation measures; and
  • (d) oh, and any other information that may be prescribed by regulation in the future.

The AIDA sets up an analysis of “harm” which is defined as:

  • physical or psychological harm to an individual;
  • damage to an individual’s property; or
  • economic loss to an individual. 

If there is a risk of material harm, then those using these “high-impact systems” must notify the Minister. From here, the Minister has order-making powers to:

  • Order the production of records;
  • Order that an audit be conducted; and
  • Order any organization responsible for a high-impact system to cease using it, if there are reasonable grounds to believe the use of the system gives rise to a “serious risk of imminent harm”.

The Act has other enforcement tools available, including penalties of up to the greater of 3% of global revenue or $10 million, with higher penalties of up to $25 million for more serious offences.

If you’re keeping track, the Act requires an assessment of:

  • plain old “harm” (Section 5),
  • “serious harm to individuals or harm to their interests” (Section 4),
  • “material harm” (Section 12),
  • “risks of harm” (Section 8),
  • “serious risk of imminent harm” (Sections 17 and 28), and
  • “serious physical or psychological harm” (Section 39).

All of which is to be contrasted with the well-trodden legal analysis around the term “real risk of significant harm” which comes from privacy law.

I can assure you that lawyers will be arguing for years over the nuances of these various terms: what is the difference between “harm” and “material harm”? “Risk” versus “serious risk”? What is “serious harm” versus “material harm” versus “imminent harm”? And what if one of these species of “harm” overlaps with a privacy issue that also triggers a “real risk of significant harm” under federal privacy laws? All of this could be clarified in future drafts of Bill C-27, which would make it easier for lawyers to advise their clients when navigating the complex legal obligations in AIDA.

Stay tuned. This law has some maturing to do, and much detail is left to the regulations (which are not yet drafted).

Calgary – 16:30 MT


Canadian Smart Contract Law: Is it broke and do we need to fix it?


By Richard Stobbe

The idea of a ‘smart contract’ has been a lot of things: it has been upheld as the next big thing, a beacon of change for society, and a nail in the coffin of an inefficient legal services profession; it has also been criticized as a misnomer for ‘dumb code’. Our review of smart contracts continues with this question: are ‘smart contracts’ in need of specific laws and regulations in Canada?

In other words, is ‘smart contract’ law broken and in need of fixing?

(Need a quick primer on smart contracts? Can Smart Contracts Really be Smart?)

For those who may recall, the advent of other technologies has caused similar hand-wringing. For example, the courts have, over the years, dealt with contract formation involving the telephone, radio, telex and fax … and email … yes, and the formation of contracts by tapping “I accept” on a screen.

There is a very good argument that the existing electronic transactions laws in Canada adequately cover the most common situations where so-called ‘smart contracts’ would be used in commercial relationships. For example, the Alberta Electronic Transactions Act (a piece of legislation introduced almost 20 years ago, when people talked about the “information superhighway”) was intentionally designed to be technology neutral.

The term “electronic signature” is defined in that law as “electronic information that a person creates or adopts in order to sign a record and that is in, attached to or associated with the record”. It’s so broad that the term can arguably apply to any number of possible applications, including situations where someone approves a transactional step within a smart contract work flow. Of course, this still has to be tested in court, where a judge would apply the law in an assessment of the specific facts of a particular dispute.

Does that create uncertainty? Yes, to a degree.

But the risks associated with that approach are preferable to the alternative, which is to go the way of Arkansas and other jurisdictions that have decided to wade in by prescriptively defining “smart contracts”. For example, a 2019 Arkansas law – “An Act Concerning Blockchain Technology”, HB 1944 – amends that state’s electronic transactions law by defining “blockchain distributed ledger technology”, “blockchain technology” and “smart contract”. By imposing specific definitions, these laws may have the unintended effect of excluding certain technologies that should be included, or catching use cases that were not intended to be caught. This would be the equivalent of trying, in 2001, to define an electronic transaction by looking at Amazon’s 1-click checkout. Sure, it was innovative at the time, but to peg a legal definition to that technology would have been short-sighted and unnecessarily constraining.

A second problem is a lack of standardization or uniformity in how different jurisdictions are choosing to define these technologies. This creates more uncertainty than a reliance on existing electronic transactions laws.

As blockchain and smart contract technology develops, the rush to have legal definitions cast in stone is premature and unwarranted.

Related Reading:

Blockchain Legislation – Too Soon?


Calgary – 07:00 MST


Smart Contracts (Part 4): Ricardian Contracts and the Internet of Agreements


By Richard Stobbe

As we’ve reviewed before, the term “smart contract” is a misnomer. (For background, see Smart Contracts (Part 3): Opportunities & Limits of Smart Contracts.) The so-called smart contract isn’t really a “contract” at all: it’s the portion of the transaction that can be automated and executed through software code. Hence, we prefer the term “programmatically executed transactions” — not as catchy, but maybe more accurate.

The written legal prose, or what we might think of as a ‘traditional contract’, sets out a bunch of contract terms, usually in arcane legalese, that describe certain elements of the relationship. Parts of that ‘traditional contract’ can be automated and delegated to software. However, once concluded, the traditional legal contract usually sits in one silo, and the software code is developed and sits in another silo, completely divorced one from the other.

The evolution of research and software tools has permitted the so-called Ricardian contract to function as a bridge between these silos. Based on the work of Ian Grigg, a Ricardian contract is conceived as a single document with a number of elements that permit it (1) to function as a “contract” in the way the law would recognize a contract, so the thing has legal integrity, (2) to be readable by humans, in legal prose, (3) to be readable by software, the way software reads a database or input fields, (4) to be signed digitally, and (5) to be integrated with cryptographic identifiers that imbue the transaction process with technical integrity and verifiability. This is where blockchain or distributed ledger technology comes in handy.

The document should be readable by both humans and machines. It integrates the ‘traditional contract’ with the ‘smart contract’, since the elements or parameters that can be automated and implemented by software are read into the code straight from the contract terms.
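To make the idea concrete, here is a minimal Python sketch of that bridge: a single document holding both the legal prose and the machine-readable parameters, with a cryptographic hash tying the two together. The field names and terms are invented for illustration only; this is not any standard Ricardian format.

```python
import hashlib
import json

# One document carries (a) human-readable legal prose, (b) machine-readable
# parameters that software can act on, and (c) a cryptographic fingerprint
# binding them together. All names here are illustrative assumptions.
document = {
    "prose": (
        "The Seller agrees to deliver 100 units to the Buyer "
        "within 30 days of payment."
    ),
    "parameters": {  # the portion read directly into the automation code
        "quantity": 100,
        "delivery_days": 30,
        "currency": "CAD",
    },
}

# Hash the canonical form of the whole document; this identifier could be
# recorded on a ledger so any party can verify the terms were not altered.
canonical = json.dumps(document, sort_keys=True).encode("utf-8")
fingerprint = hashlib.sha256(canonical).hexdigest()

print(fingerprint)
```

Because the hash covers both the prose and the parameters, neither can be changed after signing without the fingerprint changing, which is what gives the single document its technical integrity.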

Can this form the basis for software developers and lawyers to play in the same sandbox?

There are a number of developments in this arena where “legal” and “software” overlap, and Ricardian contracts are merely one iteration of this concept: for more background, Meng Wong’s presentation on Computable Contracts is a must-see. His Legalese contracts are intended to allow legal terms and conditions to be represented in a machine-understandable way, with or without a blockchain deployment. OpenLaw is another version of this approach: blockchain-enabled contracts that delegate certain functions to software. There are a whole range of options and variations of this.

In theory, this sets up an “Internet of Agreements” system that is designed to execute deals and transactions automatically with distributed ledger ecommerce technology through interwoven contracts and software across disparate platforms.

How far away is this legal-techno-dream?

For some applications, particularly in financial services, it’s much closer. Versions of these technologies are being beta-tested and implemented by global banks. Since many of these implementations will be between entities in the back rooms of the financial services industry, they will be invisible to the average consumer. For many sectors – let’s say, for example, the development of a full-stack land transfer technology – where smart contracts have to interface with existing immovable legal or institutional structures, this is a long way off.


Calgary – 07:00



Smart Contracts (Part 3): Opportunities & Limits of Smart Contracts


By Richard Stobbe

In Part 1 (Can Smart Contracts Really be Smart?), we looked at “smart contracts”, what might be called “programmatically executed transactions” or PETs. This concept refers to computers programmed to automatically execute certain transaction steps, provided certain conditions are met, as illustrated by the vending machine analogy.
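The vending machine analogy can be sketched in a few lines of purely illustrative Python (the products and prices are invented): if the pre-set conditions are satisfied, the transaction step executes automatically, with no human discretion in between.

```python
# Toy "programmatically executed transaction": the machine dispenses only
# when the programmed conditions are met; otherwise it refunds the coins.
PRICE = {"cola": 2.00, "water": 1.50}

def vend(selection: str, coins_inserted: float):
    """Dispense the selection only when the pre-set conditions are satisfied."""
    if selection not in PRICE:
        return ("refund", coins_inserted)      # unknown product
    if coins_inserted < PRICE[selection]:
        return ("refund", coins_inserted)      # insufficient payment
    change = round(coins_inserted - PRICE[selection], 2)
    return ("dispense " + selection, change)   # conditions met: execute

print(vend("cola", 2.50))   # ('dispense cola', 0.5)
print(vend("cola", 1.00))   # ('refund', 1.0)
```

The point of the analogy is that the outcome is fully determined in advance by the coded conditions, which is both the appeal of a PET and, as discussed below, its limitation.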

In Part 2 (Smart Contracts (Part 2): Intermediaries? We Don’t Need No Stinkin’ Intermediaries!), we pointed out that users of private shared (DLT) ledger systems must be aware of the attendant costs of switching to new intermediaries, and the legacy costs of continued dependence on old intermediaries.  To borrow a phrase from The Who, “Meet the new boss… same as the old boss.”  In other words, don’t be fooled into thinking that intermediaries will disappear; they merely change. Managing the intermediaries remains a challenge.

In this final instalment of our series, we look at the opportunities and limits of smart contracts. I want to emphasize a few points:

  1. Placing Smart Contracts in Context: First, it’s worth emphasizing that smart contracts or PETs are merely one element of the whole DLT permissioned ledger ecosystem. The smart contract enables and implements certain important transactional steps, but those steps fit within the broader context of a matrix of contractual relations between the participants. Many of those relationships will be governed by “traditional” contracts. This traditional contract architecture enables the smart contract workflow.  The take-home point here is that traditional contracts will remain a part of these business relationships, just as intermediaries will remain part of business relations. Let me provide an example: the Apple iTunes ecosystem contains a number of programmatically executed transactions. When a consumer chooses a movie rental, a song download or a music subscription, the order fulfilment and payment processing is entirely automated by software. However, users cannot participate in that ecosystem, nor can Apple obtain content from content producers, without an overarching set of traditional contracts: end user license agreements, royalty agreements, content licenses, agreements with payment providers. Those traditional contracts enable the PET, just as the PET enables the final transaction fulfillment.
  2. Changing Smart Contracts: Once a PET is set loose, we think of it as a self-actuating contract: it cannot be altered or stopped by humans. The inability of humans to intervene is seen as a positive attribute – it removes the capriciousness of individuals and guarantees a specific pre-determined, machine-driven outcome. But what if the parties decide (humans being humans) that they want the contract to be suspended or altered? Where humans control the progression of steps, they can decide to change, stop or reverse course at any point in the workflow. Of course, we’re assuming this is a change or reversal to which both parties agree. But what is the mechanism to hit “pause”, or to change a smart contract once it’s in mid-flight? That remains a challenge of smart contracts, particularly as PET workflows gain complexity using blockchain-based technologies.
    • One solution may be found within those traditional contracts, which can be drafted in such a way that they allow for a remedy in the event of a change in circumstances to which both sides agree, even after the PET has started executing the steps it was told to execute. In other words, the machine may complete the tasks it was told to do, but the humans may decide (contractually) to control the ultimate outcome, based on a consensus mechanism that can override the machine after the fact.  This does have risks – it injects uncertainty into the final outcome. It also carries benefits – it adds flexibility to the process.
    • Another solution may be found in the notion of “hybrid contracts” which are composed in both machine-readable form (code) and human-readable form (legal prose).  This allows the parties to implement the consensus using a smart contract mechanism, and at the same time allows the parties to open up and change the contract terms using more traditional contract methods.
  3. Terminating Smart Contracts:  Finally, consider how one party might terminate the smart contract relationship. If the process is delegated to self-executing blockchain code, how can the relationship be terminated?  Again, where one party retains the ability to unilaterally terminate a PET, the final outcome is uncertain, and one of the chief benefits of smart contracts is lost. Too much flexibility will undermine the integrity of the process.  On the other hand, too much rigidity might slow adoption of certain smart-contract workflows, especially as transaction value increases. A multilateral permissioned mechanism to terminate the smart contract must be considered within the system. Participants in a smart contract permissioned ledger will also have to consider what happens with the data that sits on the (permanent, immutable) ledger after termination. When building the contract matrix, consider what is “ledgerized”, what remains in non-ledgerized participant databases, and what happens to the ledgerized data after contract termination.
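One way to picture the pause-and-terminate design question discussed above is a workflow where no single party can halt execution alone, but unanimous consent can. The following Python sketch is a hypothetical toy model under those assumptions; the class and method names are invented and do not correspond to any real smart-contract platform.

```python
# Toy PET workflow with a multilateral pause/terminate mechanism:
# execution proceeds automatically, but halts only when ALL named
# parties vote for the same action. Purely illustrative.
class PET:
    def __init__(self, parties, steps):
        self.parties = set(parties)
        self.steps = list(steps)   # callables executed in order
        self.position = 0
        self.state = "running"
        self._votes = {}           # action -> set of approving parties

    def _consensus(self, party, action):
        self._votes.setdefault(action, set()).add(party)
        return self._votes[action] == self.parties

    def request_pause(self, party):
        if self._consensus(party, "pause"):
            self.state = "paused"

    def request_terminate(self, party):
        if self._consensus(party, "terminate"):
            self.state = "terminated"

    def tick(self):
        """Execute the next step, but only while the workflow is running."""
        if self.state == "running" and self.position < len(self.steps):
            self.steps[self.position]()
            self.position += 1

log = []
pet = PET(["buyer", "seller"], [lambda: log.append("escrow"),
                                lambda: log.append("ship"),
                                lambda: log.append("release funds")])
pet.tick()                     # "escrow" executes automatically
pet.request_pause("buyer")     # one party alone cannot pause
pet.tick()                     # "ship" still executes
pet.request_pause("seller")    # now both parties agree: workflow pauses
pet.tick()                     # no effect while paused
print(log)                     # ['escrow', 'ship']
```

The trade-off in the text is visible here: requiring unanimity preserves the certainty of the automated outcome, while any weaker voting rule would add flexibility at the cost of that certainty.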

 

If you need advice in this area, please get in touch with our Emerging Technology Group.


Calgary – 07:00 MST


AI and Copyright and Art


By Richard Stobbe

At the intersection of this Venn diagram  [ Artificial Intelligence + Copyright + Art ] lies the work of a Paris-based collective.

Obvious

By means of an AI algorithm, these artists have generated a series of portraits that have caught the attention of the art world, mainly because Christie’s, the auction house, has agreed to place these works of art on the auction block. Christie’s markets itself as “the first auction house to offer a work of art created by an algorithm”. The series of portraits is notionally “signed” (𝒎𝒊𝒏 𝑮 𝒎𝒂𝒙 𝑫 𝔼𝒙[𝒍𝒐𝒈 𝑫(𝒙)] + 𝔼𝒛[𝒍𝒐𝒈(𝟏 − 𝑫(𝑮(𝒛)))]), denoting the algorithm as the author.

We have an AI engine that was programmed to analyze a range of prior works of art and create a new work of art. So where does this leave copyright? Clearly, the computer software that generated the artwork was authored by a human; whereas the final portrait (if it can be called that) was generated by the software.  Can a work created by software enjoy copyright protection?

While Canadian courts have not yet tackled this question, the US Copyright Office in its Compendium of US Copyright Office Practices has made it clear that copyright requires human authorship: “…when a work is created with a computer program, any elements of the work that are generated solely by the program are not registerable…”

This is reminiscent of the famous “Monkey Selfie” case that made headlines a few years ago, where the Copyright Office came to the same conclusion: without a human author, there’s no copyright.

Calgary – 07:00 MST

Photo Credit: Obvious Art

