8 Points about AI Development Agreements that can be learned from the “Contract Guidance on Utilization of AI and Data”

Ownership of Intellectual Property Rights
There are only three patterns for agreements concerning who owns the intellectual property rights.

  1. Vendor owns all the rights
  2. User owns all the rights
  3. The rights are shared by the vendor and the user

In the AI Guideline’s model development agreement, among the deliverables, the deliverable objects covered by copyright are stipulated in Article 16 and the deliverable objects covered by intellectual property rights other than copyright are stipulated in Article 17.

I separated them in this way because I believe it is necessary to clarify, at the time the contract is executed, whether the user or the vendor will own the copyright in the deliverables that are objects of copyright (the training dataset, the code portion of the inference program of the trained model, and the like).
On the other hand, with respect to the deliverables that are covered by intellectual property rights other than copyright, Article 17 [of the model AI development agreement] stipulates that “the deliverables shall belong to the party who is the creator [of the deliverables]” (the principle of inventorship).
Since, at the time the contract is executed, it is often unclear what kinds of objects of intellectual property rights other than copyright may arise, the ownership of such rights is not stipulated in advance. Naturally, however, as is the case with objects of copyright, there should not be any problem with any of these 3 approaches: “the vendor having all the rights”, “the user having all the rights”, or “the vendor and the user sharing the rights”.
Further, even the Ministry of Economy, Trade and Industry’s model transactional agreement (version 1) announced in 2007 (model agreement 2007) treats “copyright” and “other than copyright” in the same way by having a separate provision for each category; the [current AI] model development agreement is also based on this same concept.
The related provisions in the model development agreement are summarized in the chart below.

Terms of Use
The user and the vendor must each consider carefully in the terms of use how it wants to use the materials, interim deliverables, and deliverables in its own business.
For example, with respect to the terms of use for a trained model, there are many possible ways that each of the user and the vendor could use [the trained model] in its respective business. It is necessary to consider, among other things, the following:

  1. Whether to use the trained model that has been developed only to the extent necessary to conduct one’s own business
  2. Whether to train the trained model further with new data and generate a derivative model (called a “reusable model” in the AI Guidelines)
  3. Whether to disclose, license, or provide the trained model or a derivative model to a third party in the future
  4. Whether an allocation of profits (license fees, profit shares) to the counterparty is necessary in the case of item 2 or item 3

I personally feel that whether you can set these “terms of use” in a manner that is compatible with your business model is much more important in actual negotiations than the “ownership of intellectual property rights”.
Please refer to the AI Guideline’s model development agreement which mentions 3 concrete examples and illustrates how the “terms of use” should be stipulated in the contract in each of these 3 cases.

4. Know the Limitations of the Contract

As you have seen by now, the user and the vendor can each secure its right to use the deliverables to the extent necessary for its own business by stipulating the ownership of intellectual property rights and terms of use [for such deliverables] in the AI development agreement.
However, in reality only stipulating the ownership of intellectual property rights and terms of use for the deliverables may not sufficiently protect the rights of the user and the vendor.
There is a significant risk particularly for trained models due to the possible generation of derivative models and distilled models.

Derivative Models
A derivative model is a model that results from further training (relearning) an existing trained model using new data.

Although [a derivative model] functions with a higher degree of accuracy than the original model, since the parameters are regenerated by relearning, at the very least the parameter portion will have a completely different form from the original model, and, depending on the type of framework, the [derivative model’s] network structure will differ from that of the original model.
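The point that relearning regenerates the parameters can be illustrated with a minimal sketch (hypothetical, not from the guidance): a one-parameter linear model y = w * x is trained by gradient descent, then "relearned" on new data, after which the parameter bears no trace of the original.

```python
# Minimal sketch (hypothetical example): relearning a tiny trained model
# on new data regenerates its parameter, producing a "derivative model"
# whose parameter has a completely different value from the original.

def train(w, data, lr=0.05, epochs=200):
    """Gradient descent on squared error for the model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
    return w

original_data = [(1.0, 2.0), (2.0, 4.0)]   # consistent with w = 2
new_data      = [(1.0, 5.0), (2.0, 10.0)]  # consistent with w = 5

w_original = train(0.0, original_data)      # the original trained model
w_derived  = train(w_original, new_data)    # derivative model via relearning

print(round(w_original, 2))  # ≈ 2.0
print(round(w_derived, 2))   # ≈ 5.0 — no trace of the original parameter
```

The derivative model here starts from the original parameter, yet after relearning nothing in it identifies the model it came from, which is precisely why contract provisions tied to the original model may fail to reach it.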

Distilled Model
A distilled model presents an even greater problem.

Quite simply, distillation can generate a completely different model, without directly copying the trained model, through a separate learning process that uses the original model’s input data and output data.
It is said that, in this way, a lightweight model whose performance is largely unchanged can be obtained.
Furthermore, this act of distillation is possible even when the trained model cannot be seen from the outside (i.e., even when it is a black box).
The problem with the act of “distillation” is that, like a derivative model, [the distilled model] has a completely different form from the original model; in short, there is no association with the original model.
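The black-box character of distillation can also be sketched in a few lines (a hypothetical illustration, not from the guidance): the distiller only queries the trained model for outputs and trains a separate student model on the resulting input/output pairs, never reading the original’s internals.

```python
# Minimal sketch (hypothetical example): "distilling" a black-box model.
# The teacher's internals are never inspected — only its outputs on chosen
# inputs — yet the student reproduces its behavior in a different form.

def teacher(x):
    """Opaque trained model: callable, internals hidden (a 'black box')."""
    return 3.0 * x + 1.0  # hidden parameters; the distiller never reads them

# 1. Collect input/output pairs by querying the black box.
inputs = [0.0, 1.0, 2.0, 3.0]
pairs = [(x, teacher(x)) for x in inputs]

# 2. Train a separate student model y = a * x + b on those pairs only.
a, b = 0.0, 0.0
for _ in range(2000):
    for x, y in pairs:
        err = a * x + b - y
        a -= 0.02 * 2 * err * x
        b -= 0.02 * 2 * err

print(round(a, 2), round(b, 2))  # student mimics the teacher's behavior
```

Nothing in the student’s parameters links it to the teacher, which is why a contract clause governing the trained model itself may not, by its terms, capture a model distilled from it.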

So, what should we do?
Since derivative models and distilled models have no association with their respective original models, even if the AI development agreement includes provisions stipulating the transfer of ownership of intellectual property rights in, and terms of use for, the trained model that has been developed, those provisions may not extend to such derivative or distilled models.
As such, consider including the following types of provisions in the AI development agreement:

  1. An explicit prohibition of reverse engineering, generation of derivative models, and acts of distillation (Model AI development agreement, Article 19); and
  2. A limitation, in period or scope, on conducting any business made possible by using trained models with identical or similar functions.

Further, it is necessary to be careful of any conflicts between item 2 and the Antimonopoly Act.

In the AI development agreement, how should we provide for damages that might arise in connection with AI development and use? (liability)

Three types of liability related to AI development and use

I think that the liability that the vendor may have to bear with respect to the user related to AI development and use can be classified into the following 3 categories:

  1. Liability for damage sustained by the user when engaged in AI development
  2. Liability for damage sustained by the user due to use of AI software that is a deliverable
  3. Liability attributable to the user’s infringement of the intellectual property rights of a third party due to the user’s use of AI software which is a deliverable.

This is a diagram illustrating the breakdown of the software generation phase (the learning phase) and the application phase (the inference phase) of AI software.