Ownership of Intellectual Property Rights
There are only three patterns for agreements concerning who owns the intellectual property rights.
- Vendor owns all the rights
- User owns all the rights
- The rights are shared by the vendor and the user
In the AI Guidelines’ model development agreement, the deliverables covered by copyright are stipulated in Article 16, and the deliverables covered by intellectual property rights other than copyright are stipulated in Article 17.
I separated them in this way because I believe it is necessary to clarify, at the time the contract is executed, whether the user or the vendor will own the intellectual property rights pertaining to the deliverables covered by copyright (the training dataset, the trained model, the code portion of the inference program, and the like).
On the other hand, with respect to the deliverables that are covered by intellectual property rights other than copyright, Article 17 [of the model AI development agreement] stipulates that “the deliverables shall belong to the party who is the creator [of the deliverables]” (the principle of inventorship).
Since it is often unclear when the contract is executed what kinds of objects of intellectual property rights other than copyright may arise, the ownership of those rights is not stipulated in advance. Naturally, however, as with objects of copyright, there should not be any problem with any of these three approaches: “the vendor having all the rights”, “the user having all the rights”, or “the vendor and the user sharing the rights”.
Further, even the Ministry of Economy, Trade and Industry’s model transactional agreement (version 1) announced in 2007 (model agreement 2007) treats “copyright” and “other than copyright” in the same way by having a separate provision for each category; the [current AI] model development agreement is also based on this same concept.
The related provisions in the model development agreement are summarized in the chart below.
- Whether the trained model that has been developed will be used only to the extent necessary to conduct the party’s own business
- Whether the trained model will be further trained with new data to generate a derivative model (called a “reusable model” in the AI Guidelines)
- Whether the trained model or a derivative model will be disclosed, licensed, or provided to a third party in the future
- Whether an allocation of profits (license fees, profit shares) to the counterparty is necessary in the case of item 2 or item 3
In actual negotiations, your business model is much more important than the “ownership of intellectual property rights”.
4. Know the Limitations of the Contract
As you have seen by now, the user and the vendor can each fix, in the AI development agreement, the extent of its right to use the deliverables.
There is a significant risk particularly for trained models due to the possible generation of derivative models and distilled models.
A derivative model is a model that results from retraining an existing trained model with new data.
Although [a derivative model] may function with a higher degree of accuracy than the original model, since the parameters are regenerated by relearning, at the very least the parameter portion will have a completely different form from the original model, and, depending on the type of framework, the [derivative model’s] network structure will also differ from that of the original model.
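The mechanics can be illustrated with a toy example (a hypothetical one-parameter model; none of this comes from the model agreement): retraining on new data regenerates the parameters, so the derivative model no longer shares the original model’s parameter values.

```python
# Toy illustration (hypothetical): a one-parameter "model" y = w * x,
# fit by ordinary least squares. Retraining on new data regenerates w,
# so the derivative model's parameter differs from the original's.

def fit(xs, ys):
    # Closed-form least squares for y = w * x (no intercept).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Original training data and the resulting trained model.
original_w = fit([1, 2, 3], [2.0, 4.1, 5.9])    # w close to 2

# "Relearning" with new data from a different distribution
# produces a derivative model with a different parameter.
derivative_w = fit([1, 2, 3], [3.1, 5.9, 9.0])  # w close to 3

print(original_w, derivative_w)
```

Nothing of the original parameter survives the retraining step, which is why a derivative model cannot simply be matched against the original by comparing their contents.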
A distilled model takes this even further.
Quite simply, this involves an action capable of generating a completely different model, even without
directly copying the trained model, by a separate learning action using input data and output data.
Based on this, it is said that a lightweight model, whose performance is largely unchanged, is possible.
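The act described above can be sketched as follows (a minimal, hypothetical example; the teacher and student here are simple linear functions, not an actual trained model): the student model is fit purely to the teacher’s input/output pairs, without ever copying the teacher’s internals.

```python
# Toy illustration (hypothetical): "distillation" trains a separate student
# model purely from the teacher's input/output pairs, never copying the
# teacher's internals -- the teacher can remain a black box.

def teacher(x):
    # Stand-in for a trained model that we can only query from the outside.
    return 2.0 * x + 1.0

def distill(query_points):
    # Fit a student y = w * x + b by least squares on the teacher's answers.
    n = len(query_points)
    ys = [teacher(x) for x in query_points]
    mean_x = sum(query_points) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(query_points, ys)) \
        / sum((x - mean_x) ** 2 for x in query_points)
    b = mean_y - w * mean_x
    return w, b

w, b = distill([0.0, 1.0, 2.0, 3.0])
# The student reproduces the teacher's behaviour without sharing any of
# the teacher's internal form.
print(w, b)
```

The student ends up behaving like the teacher on the queried inputs even though no part of the teacher was copied, which is exactly why distillation is hard to police through provisions that only restrict reproduction of the model itself.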
Furthermore, the act of distillation makes it possible to replicate even a trained model whose inner workings are not apparent from the outside (i.e., a black box).
The problem with the act of “distillation” is that, like a derivative model, the distilled model has a completely different form from the original model; in short, there is no traceable association with the original model.
So, what should we do?
As such, consider including the following types of provisions in the AI development agreement:
- An explicit prohibition of reverse engineering, generation of derivative models, and acts of distillation (Model AI development agreement, Article 19); and
- A limitation, to a fixed period or scope, on the conduct of any business made possible by using trained models with identical or similar functions.
Further, it is necessary to be careful of any conflicts between item 2 and the Antimonopoly Act.
In the AI development agreement, how should we provide for damages that might arise in connection with AI development and use? (liability)
I think that the liability the vendor may bear toward the user in connection with AI development and use can be classified into the following three categories:
- Liability for damage sustained by the user when engaged in AI development
- Liability for damage sustained by the user due to use of AI software that is a deliverable
- Liability arising from infringement of a third party’s intellectual property rights due to the user’s use of AI software that is a deliverable
This is a diagram illustrating the breakdown of the software generation phase (the learning phase) and the application phase (the inference phase) of AI software.