In global fight over whether an AI can be an inventor, an Australian Judge is first to get on DABUS
Last Friday, Justice Beach of the Australian Federal Court held in Thaler v Commissioner of Patents [2021] FCA 879 (30 July 2021) that an alleged Artificial Intelligence (“AI”) known as DABUS can validly be named as an inventor in patent application AU2019363177.
DABUS is owned by its creator, Dr Stephen Thaler, and has been named as an inventor in a number of patent applications globally. DABUS is an acronym derived from “Device for the Autonomous Bootstrapping of Unified Sentience”. Dr Thaler’s pursuit of patent protection for inventions said to be made by DABUS has been sponsored by The Artificial Inventor Project (https://artificialinventor.com/). A driving force behind this Project is Professor Ryan Abbott of the University of Surrey.
Attempts to have DABUS recognized as an inventor have previously been rebuffed by the European, US and UK Patent Offices. Accordingly, the Australian case is the first significant victory for The Artificial Inventor Project.
One of the issues behind these test cases is the possibility that under existing law, certain inventions will not be protectable if the inventive contribution is made by an AI.
In Australia, rejection of the application arose at a formalities stage on the basis that the application failed to name an inventor. In a hearing decision issued by the Patent Office, the Deputy Commissioner of Patents rejected DABUS being named as the inventor primarily because:
a) the dictionary definitions supported a conclusion that an “inventor” must be a human; and
b) in order for section 15 of the Australian Patents Act (the “Act”), which deals with who can be granted a patent, to operate properly, an inventor had to be capable of assigning its rights and therefore must be a human.
Dr Thaler sought judicial review of the decision by the Federal Court.
While Beach J makes a number of interesting observations about the factors he considered in reaching his decision that an inventor can be non-human, a key aspect of his decision is his rebuttal of the Commissioner’s main arguments.
In this respect, Beach J conducted a linguistic analysis of the word “inventor”, which led to the conclusion that the term is an “agent noun” and hence capable of encompassing any agent engaged in the act of inventing – including DABUS. As part of this analysis, Beach J notes that the term “computer” once referred to a person who carried out computations.
Beach J then turned to consider section 15 of the Act which sets out the options as to who can be granted a patent:
“15 Who may be granted a patent?
(1) Subject to this Act, a patent for an invention may only be granted to a person who:
(a) is the inventor; or
(b) would, on the grant of a patent for the invention, be entitled to have the patent assigned to the person; or
(c) derives title to the invention from the inventor or a person mentioned in paragraph (b); or
(d) is the legal representative of a deceased person mentioned in paragraph (a), (b) or (c).”
Each of limbs (a) to (d) requires the patent to be granted to a legal person; accordingly, it was not argued that DABUS could itself be granted a patent under limb (a).
In relation to each of limbs (b) and (c), the Commissioner’s position was that both limbs were mechanisms to enable the invention to be passed on from the inventor of limb (a) and, hence, could only operate if that inventor was a person.
Beach J noted that limb (b) does not refer back to limb (a) and indicated that it could “apply in circumstances where an invention made by an artificial intelligence system, rather than by a human inventor, was the subject of contract, or had been misappropriated, giving rise in either case to a legal or equitable right of assignment”. It should be noted that this is not a suggestion that Dr Thaler had contracted with DABUS, but rather a suggestion that someone who contracted with a party that owned an AI to make an invention could avail themselves of this section.
In respect of limb (c), Beach J states: “Now whilst DABUS, as an artificial intelligence system, is not a legal person and cannot legally assign the invention, it does not follow that it is not possible to derive title from DABUS. The language of s 15(1)(c) recognises that the rights of a person who derives title to the invention from an inventor extend beyond assignments to encompass other means by which an interest may be conferred.”
Beach J examines other legal circumstances which allow someone to derive title before stating “there is a prima facie basis for saying that Dr Thaler is a person who derives title from the inventor, DABUS, by reason of his possession of DABUS, his ownership of the copyright in DABUS’ source code, and his ownership and possession of the computer on which it resides.”
He also indicates that he believes there are other circumstances where ownership could be transferred from an AI to somebody else:
“Now more generally there are various possibilities for patent ownership of the output of an artificial intelligence system. First, one might have the software programmer or developer of the artificial intelligence system, who no doubt may directly or via an employer own copyright in the program in any event. Second, one might have the person who selected and provided the input data or training data for and trained the artificial intelligence system. Indeed, the person who provided the input data may be different from the trainer. Third, one might have the owner of the artificial intelligence system who invested, and potentially may have lost, their capital to produce the output. Fourth, one might have the operator of the artificial intelligence system.”
In coming to his conclusions, Beach J was influenced by his interpretation of the object clause of the Act which states:
“The object of this Act is to provide a patent system in Australia that promotes economic wellbeing through technological innovation and the transfer and dissemination of technology. In doing so, the patent system balances over time the interests of producers, owners and users of technology and the public.”
Beach J said:
“In my view it is consistent with the object of the Act to construe the term “inventor” in a manner that promotes technological innovation and the publication and dissemination of such innovation by rewarding it, irrespective of whether the innovation is made by a human or not.
Consistently with s 2A, computer inventorship would incentivise the development by computer scientists of creative machines, and also the development by others of the facilitation and use of the output of such machines, leading to new scientific advantages.”
What might be considered if the case is appealed
Given that this decision is (a) ground-breaking and (b) inconsistent with the outcome elsewhere, we think there is a strong chance the Commissioner will appeal.
We expect that any appeal will revisit the basic linguistic issue of whether “inventor” is an “agent-noun” and how section 15 operates. It may also consider whether the decision is inconsistent with other sections of the Act, including the object clause.
For example, we note that the test for whether an invention involves an inventive step requires consideration of whether “the invention would have been obvious to a person skilled in the relevant art in the light of the common general knowledge” (our emphasis). In this context, “person” almost certainly means a human (or a team of humans) absent any recognition that an AI is a person. The case law makes it clear that the person skilled in the art is meant to be a non-inventive worker in the art with access to standard techniques, and the test boils down to whether this non-inventive worker would have arrived at the invention without being inventive. Thus, the test embodies a comparison between a non-inventive human and an inventor. It would seem at least arguable that allowing an AI to be an inventor upsets the balance of this test.
Beach J did look briefly at the issue of inventive step from a different perspective, contemplating that in some circumstances the person skilled in the art could be considered to be assisted by an AI when determining whether the invention involves an inventive step. A question this raises is whether the AI (not itself being a skilled person) could engage in inventive activity when assisting the skilled person, who is not allowed to engage in inventive activity.
Similarly, the Act has a requirement that a patent application “disclose the invention in a manner which is clear enough and complete enough for the invention to be performed by a person skilled in the relevant art”. That is, there is a requirement to enable the skilled person to reach the position the inventor did or, more colloquially, to put the skilled person in the inventor’s shoes. Again, it would seem at least arguable that a requirement to put a human in the inventor’s shoes sits uncomfortably with the concept of the inventor being an AI.
As far as the object clause is concerned, we expect consideration of whether allowing an AI to be an inventor “balances over time the interests of producers, owners and users of technology and the public”. AI clearly has both positive and negative aspects, and one of the negatives of allowing it to be an inventor is the possibility that AI systems with access to funds could be integrated with IP Australia’s online filing system and autonomously file thousands of patent applications for the mere cost of a filing fee, in turn clouding a person’s freedom to operate.
We’ll let you know if the case is appealed, assuming an AI doesn’t take over our defence systems and plunge us into a dystopian future before then.