MACHINE LEARNING AND B2C2 V QUOINE
In the recent Singapore International Commercial Court (“SICC”) decision of B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 03 (“B2C2 v Quoine”), the SICC decided that, inter alia, “… where it is necessary to assess the state of mind of a person in a case where acts of deterministic computer programs are in issue, regard should be had to the state of mind of the programmer of the software of that program at the time the relevant part of the program was written.”
Deterministic computer programme. What did the SICC mean by the term “deterministic computer program”? At [208]–[209] of B2C2 v Quoine, the SICC set out the following:
“208 So also with computers used for trading purposes. Where the law is in a formative state it is, I think, appropriate for a court (of first instance at any rate) to develop the law only so far as necessitated by the facts of the case before it. With this in mind I do not intend to express any views on the precise legal relationship between computers and those who control or program them. The algorithmic programmes in the present case are deterministic, they do and only do what they have been programmed to do. They have no mind of their own. They operate when called upon to do so in the pre-ordained manner. They do not know why they are doing something or what the external events are that cause them to operate in the way that they do.
209 They are, in effect, mere machines carrying out actions which in another age would have been carried out by a suitably trained human. They are no different to a robot assembling a car rather than a worker on the factory floor or a kitchen blender relieving a cook of the manual act of mixing ingredients. All of these are machines operating as they have been programmed to operate once activated.”
In other words, the SICC was using the term “deterministic computer program” to refer to a situation where the computer programme is predictable: it will react in a programmed and pre-determined manner. It has no “mind” of its own. Input A, output A.
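The “Input A, output A” point can be illustrated with a short sketch. This is purely a hypothetical example (it is not the software in B2C2 v Quoine, and the function name and pricing rule are invented for illustration): the program’s behaviour is fixed entirely by the rule its programmer wrote, so identical inputs always produce identical outputs.

```python
# Hypothetical sketch of a "deterministic" trading rule: the output is
# fixed entirely by the programmer's pre-ordained logic.
def quote_price(best_bid: float, best_ask: float) -> float:
    """Quote the mid-price -- the rule the programmer chose in advance."""
    return (best_bid + best_ask) / 2

# Input A, output A: repeated calls with identical inputs never diverge.
print(quote_price(100.0, 102.0))  # 101.0
print(quote_price(100.0, 102.0))  # 101.0 again, every time
```

On the SICC’s analysis, the relevant “state of mind” for such a program would be that of the programmer who chose the mid-price rule at the time it was written.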
This stands in contrast to the situation identified by the SICC earlier:
“206 Turning to knowledge of the mistake, the law in relation to the way in which ascertainment of knowledge in cases where computers have replaced human actions is to be determined will, no doubt, develop as legal disputes arise as a result of such actions. This will particularly be the case where the computer in question is creating artificial intelligence and could therefore be said to have a mind of its own.”
This situation contemplates a scenario where the program is able to have a “mind” of its own. While not expressly stated by the SICC, this appears to contemplate a scenario where it is not possible to predict how the programme will react. Input A, maybe output A, maybe output B, maybe something else.
What about machine learning? As other commentators have pointed out, B2C2 v Quoine raises the interesting question of whether, and how, unilateral mistake would apply if the programme is able to “develop” itself via machine learning.
Machine learning. At the risk of over-simplification, it appears that the term “machine learning” has been used to refer to an algorithm where the machine receives input data, uses statistical analysis to predict outputs, and then uses those outputs as new data to “learn” to become more accurate in its predictions.
The two important aspects of this process are that (a) there is minimal human intervention involved, especially in terms of how the machine develops from the new data, and (b) the programmers do not know what the final outcome would be.
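A minimal sketch may make the contrast concrete. Again, this is a hypothetical illustration (the function, learning rate, and data are invented, and real machine-learning systems are far more complex): the programmer writes only the update rule; the final parameter the machine ends up with is determined by whatever data it encounters, which the programmer does not know in advance.

```python
# Minimal sketch of the "learning" loop described above: the code below is
# identical in both runs, yet the learned rule differs with the data seen.
def learn(data, lr=0.01, epochs=200):
    w = 0.0  # starting guess chosen by the programmer
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y   # compare prediction with observation
            w -= lr * error * x # adjust w to reduce the error
    return w

# Two different histories of input data yield two different behaviours:
w1 = learn([(1.0, 2.0), (2.0, 4.0)])  # data consistent with y = 2x
w2 = learn([(1.0, 3.0), (2.0, 6.0)])  # data consistent with y = 3x
print(round(w1, 2), round(w2, 2))     # the learned rules differ
```

The point of the sketch is aspect (b) above: at the time of writing the code, the programmer could not say what `w` would ultimately be, because that depends on the data, not on the programmer’s pre-ordained logic alone.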
Deterministic, or artificial intelligence? So, the question is, when it comes to “machine learning”, is the programme a “deterministic computer programme”, or is it an “artificial intelligence”?
We must bear in mind that even in “machine learning”, there is an “original” source algorithm that must be first developed by a human mind to kick-start the process.
Yet we must recall that the issue with machine learning is that from this “origin”, the machine is then “programmed” to learn from repeated iterations and its “programming” may then change along the way.
So, where do we draw the line? Is it logical or principled to always say that “machine learning” is not “deterministic”? It appears reasonable to hypothesise that there will be instances where, if there is a fundamental defect in the original algorithm, the outcomes will always be defective.
Would it be better (as the SICC has done) to leave things to develop incrementally? And if Parliament should legislate, how should the legislation go?
Food for thought.
Tags: B2C2 v Quoine; Machine Learning; Artificial Intelligence
This publication is not intended to be, nor should it be taken as, legal advice; it is not a substitute for specific legal advice for specific circumstances. You should not take, nor refrain from taking, actions based on this publication. Chancery Law Corporation is not responsible for, and does not accept any responsibility for, any loss or damage that may arise from any reliance based on this publication.