A team of British researchers has developed a method that enables computers to make decisions in a way more similar to how humans do. Specifically, the method mimics the complex process of human decision-making by enabling computers to produce several acceptable decisions for a single problem.
The research was published in the May issue of IEEE/CAA Journal of Automatica Sinica (JAS).
Human decision-making is not perfect, and different decisions may be reached even when the same input is given. This is called variability, and it exists on two levels: among a group of individuals who are experts in a field, and among the repeated decisions of a single expert. These are referred to as inter- and intra-expert variability, respectively. Having established that this variation in decision-making behavior is an important part of building expert systems, the researchers propose that, rather than expecting computers to make the same decision 100% of the time, they should instead be expected to perform at the same level as humans.
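To make the two levels concrete, the sketch below measures them on a small set of hypothetical expert ratings (the data, names, and scoring scale are invented for illustration and are not from the study): intra-expert variability is the spread of one expert's repeated decisions on the same case, while inter-expert variability is the spread across different experts' decisions on that case.

```python
from statistics import mean, pvariance

# Hypothetical confidence scores (0 = benign, 1 = malignant) from three
# experts, each rating the same two cases twice on separate occasions.
ratings = {
    "expert_a": {"case1": [0.8, 0.7], "case2": [0.2, 0.3]},
    "expert_b": {"case1": [0.6, 0.6], "case2": [0.4, 0.2]},
    "expert_c": {"case1": [0.9, 0.8], "case2": [0.1, 0.1]},
}

def intra_expert_variability(ratings):
    """Mean variance of each expert's repeated decisions on the same case."""
    variances = [pvariance(reps)
                 for cases in ratings.values()
                 for reps in cases.values()]
    return mean(variances)

def inter_expert_variability(ratings):
    """Mean variance, per case, of the experts' average decisions."""
    case_ids = sorted({case for cases in ratings.values() for case in cases})
    variances = []
    for case in case_ids:
        per_expert = [mean(cases[case]) for cases in ratings.values()]
        variances.append(pvariance(per_expert))
    return mean(variances)

print(round(intra_expert_variability(ratings), 4))
print(round(inter_expert_variability(ratings), 4))
```

In this toy data the experts disagree with each other more than with themselves, the pattern typically assumed when both kinds of variability are discussed.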
“If the problem domain is such that human experts cannot achieve 100% performance, then we should not expect a computer expert system in this domain to do so, or to put it another way: if we allow human experts to make mistakes, then we must allow a computer expert system to do so,” says Jonathan M. Garibaldi, Ph.D., Head of School of Computer Science at the University of Nottingham, UK, who leads the Intelligent Modelling and Analysis (IMA) Research Group.
The investigators have found a way to introduce variation into computer systems, and have shown that there is benefit to be gained in doing so. By using fuzzy inference, a technique in which ‘if-then’ rules operate on data represented as degrees ranging anywhere between 0 and 1, rather than strictly 0 or 1, they were able to create a computer system that makes decisions with variability similar to that of human experts.
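A minimal sketch of the idea, assuming hypothetical membership functions and rules (this is a generic weighted-average fuzzy inference example, not the authors' actual system): an input belongs to fuzzy sets such as “low” and “high” to some degree in [0, 1], and ‘if-then’ rules combine those degrees into a graded output.

```python
def tri(x, a, b, c):
    """Triangular membership: degree in [0, 1] to which x belongs
    to the fuzzy set with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def infer(severity):
    """Two hypothetical rules mapping symptom severity (0-10) to urgency (0-1).

    Rule 1: IF severity is low  THEN urgency is low  (0.1)
    Rule 2: IF severity is high THEN urgency is high (0.9)
    """
    low = tri(severity, -1, 0, 6)    # degree to which severity is "low"
    high = tri(severity, 4, 10, 11)  # degree to which severity is "high"
    if low + high == 0:
        return 0.5  # no rule fires; fall back to a neutral output
    # Weighted average of the rule outputs, weighted by firing strength.
    return (low * 0.1 + high * 0.9) / (low + high)

print(round(infer(2.0), 3))  # mostly "low"  -> urgency near 0.1
print(round(infer(8.0), 3))  # mostly "high" -> urgency near 0.9
```

Because the sets overlap (severities between 4 and 6 are partly “low” and partly “high”), the output shades smoothly between the two rule conclusions instead of jumping between them.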
“Exploring variation in decision making is useful. Introducing variation in a carefully controlled manner can lead to better performance,” adds Garibaldi. “Unless we allow computer systems to make the same mistakes as the best humans, we will delay the benefits that may be available through their use.”
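One simple way such controlled variation can be realised, sketched below under the same hypothetical model as before (the perturbation scheme is an illustrative assumption, not the published method): randomly shifting the fuzzy set boundaries within a bounded range yields an ensemble of slightly different, equally plausible decisions rather than a single fixed answer.

```python
import random

def tri(x, a, b, c):
    """Triangular membership degree in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def infer(severity, shift=0.0):
    """Hypothetical one-input fuzzy model; `shift` perturbs the set boundaries."""
    low = tri(severity, -1 + shift, 0 + shift, 6 + shift)
    high = tri(severity, 4 + shift, 10 + shift, 11 + shift)
    if low + high == 0:
        return 0.5
    return (low * 0.1 + high * 0.9) / (low + high)

def ensemble_decisions(severity, n=5, spread=0.5, seed=0):
    """Return n plausible decisions by randomly perturbing the fuzzy sets,
    imitating inter-/intra-expert variability in a controlled way."""
    rng = random.Random(seed)  # seeded so the variation is reproducible
    return [infer(severity, rng.uniform(-spread, spread)) for _ in range(n)]

print(ensemble_decisions(5.0))  # several nearby but non-identical urgencies
```

The `spread` parameter bounds how far any decision can drift, which is what keeps the introduced variation “carefully controlled” rather than arbitrary.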
The researchers view artificial intelligence as a set of tools that help address problems and support decision-making. For example, instead of expecting AI to replace a doctor in choosing the best treatment option for a cancer patient, it should be used to help physicians avoid the “most wrong” choices among the range of options that a trained doctor, or a group of trained doctors, might consider.
“Computers are not taking over but simply providing more decisions,” says Garibaldi. “This is time-saving and ultimately life-saving, because disasters happen as a result of sub-optimal care. By providing a set of alternative decisions, all of which could be correct, computers can act as ‘adjunct experts’ in the room, helping to rule out wrong decisions and avoid the glaring mistakes that humans make.”
In the future, the researchers hope to bring these systems into real medical use, where they can address concrete clinical problems and support real-life decision making.