Behavioural use licensing won’t fix the negative impacts of AI

I recently read a paper called “Behavioral Use Licensing for Responsible AI” in which the authors make the case that licences can be used to create a legally enforceable way to limit how AI can be used, in particular in line with responsible AI guidelines.

Here’s the abstract:

With the growing reliance on artificial intelligence (AI) for many different applications, the sharing of code, data, and models is important to ensure the replicability and democratization of scientific knowledge. Many high-profile academic publishing venues expect code and models to be submitted and released with papers. Furthermore, developers often want to release these assets to encourage development of technology that leverages their frameworks and services. A number of organizations have expressed concerns about the inappropriate or irresponsible use of AI and have proposed ethical guidelines around the application of such systems. While such guidelines can help set norms and shape policy, they are not easily enforceable. In this paper, we advocate the use of licensing to enable legally enforceable behavioral use conditions on software and code and provide several case studies that demonstrate the feasibility of behavioral use licensing. We envision how licensing may be implemented in accordance with existing responsible AI guidelines.

Behavioral Use Licensing for Responsible AI, https://doi.org/10.1145/3531146.3533143

The paper includes examples of licensing terms, as well as an example licence, that illustrate how behavioural restrictions might be imposed. For example, by adding clauses like:

Licensee will not use the Licensed Technology to enable the distribution of untrustworthy information, lies or propaganda, and if Licensee discovers that such distribution is unintentionally occurring, Licensee will put in place countermeasures, including human agents, to prevent or limit such distribution.

Example clause

The authors go on to highlight a number of issues, including the difficulty of driving adoption of standardised licences, the incompatibility of these licences with existing open source (and open data) licences, the challenge of enforcement, the problem of licence proliferation, and the fact that legal frameworks tend to reinforce existing power structures.

Their conclusion, despite those issues, is that licensing can be an effective measure for encouraging responsible use of AI, models and data.

I agree with them that licensing has an important role in shaping the data ecosystems we want to build. I also agree that standardising the terms of more restrictive licences would be beneficial. But we need so much more than licences.

I don’t think the paper really addresses all of the issues around licensing. And those issues that it does address aren’t, in my opinion, fully explored.

For example:

  • Bad actors can and will ignore licences
  • New, harmful uses may be identified after a model has been published under a licence. While a licence might be updated to restrict those uses, existing licensees will be able to continue to use the model or data under their prior agreement
  • Poorly phrased restrictions might hinder or discourage beneficial applications of the technology
  • Restrictions are open to interpretation, by users and by the courts. Look at the example clause above: who gets to decide what counts as “untrustworthy information”? And what exactly qualifies as “countermeasures”?
  • There is no effective way for someone to monitor how their model or data is used, making enforcement extremely difficult
  • Reuse can be international, which adds complexity to licence design, monitoring and enforcement
  • By the time an IP holder discovers that a model has been used in a way that breaches the licence, the bulk of the harm may already have been done
  • Enforcing a licence means revoking permission to use the technology; it does nothing to remedy harms that have already occurred
  • In a research setting, funders and publishers push researchers to use standard open licences. Researchers may therefore not have a choice over how their work is licensed
  • In a research setting, the IP for some research resides with the institution, so again researchers may not have a choice over licensing
  • …and so on

Licensing is only one part of a solution: it has to sit within institutional and regulatory frameworks that will shape, monitor and enforce the behaviours we want to see.

We need to start by building those frameworks, not by designing new licences. Licences alone are, at best, a sticking plaster.

I still think we should approach the regulation and deployment of these powerful, general-purpose AI tools and models in the same way that we do medicines, pesticides and chemicals: products which can have unexpected and far-reaching harmful impacts.

Sandboxes and incubators provide a way for us to learn about the beneficial and harmful impacts of AI, with researchers and developers working closely with users, before releasing it into wider use.

At that point, licensing agreements for specific applications and users might be more tailored, or even open.