How Do You Make Decisions? (Part 2 of 4)
Emotions and intent are a large part of decision making, even in business -- and that's not necessarily a bad thing.
- By Barry Devlin
- June 1, 2016
In Part 1 of this series, we reviewed the process of decision making in business and how business intelligence might support it. We encountered an existential crisis: rationality is only one (perhaps minor) part of the way decisions are made and, consequently, the role of information is less extensive -- or at least different -- than we thought. We need a new model to help retrain our thinking.
Personal experience and a little thought will convince you that the reality of decision making in business includes not just rational consideration: gut feel, emotions, ethics, and intent each play a role.
Gerd Gigerenzer, a director at the Max Planck Institute for Human Development, proposes in Gut Feelings: The Intelligence of the Unconscious that the mind should be seen as an adaptive toolbox that has developed a wide range of heuristics or rules of thumb. These instincts allow the mind to best handle a highly uncertain environment characterized by ill-defined problems and opportunities, an environment with loose and changeable rules and variable definitions of success.
To me, this describes the real world in which we live. The world is computationally intractable: any reasonably complex decision of business interest cannot be solved conclusively with any conceivable amount of information and processing power. Gigerenzer describes decision making in terms of this adaptive toolbox of heuristics, which often reaches conclusions more quickly and directly than logical reasoning.
Intent and emotions are also widely seen in real-world decision making, although they are more often deemed problems rather than assets. This dismissal is the real problem: it misses the vital role of intention in directing thought and passion in driving action. Consider the intent of a decision maker who values her own advancement up the corporate ladder above the overall success of the business. Her decisions are likely to be very different from those of a team player, even though both parties use the same processes and information.
From a human resources perspective, passion is highly valued in executive selection, being seen as the energy to get things done and as a wellspring of innovation. However, when it comes to modeling the decision-making process, both positive and negative emotional aspects are excluded from consideration.
Behavioral sciences are exploring some of these issues from an academic viewpoint. Altruism and other prosocial behaviors "appear to contradict economic and evolutionary axioms about how humans should behave: selfishly, nasty, and brutish," according to Jamil Zaki, Assistant Professor of Psychology, Stanford University, in "The Altruism Hierarchy."
Altruism, it turns out, illustrates perfectly the inherent contradictions that emerge when we try to apply wholly rational explanations to human behavior and decision making. The tricky topics of ethics and morality soon emerge in any such discussion, but Zaki concludes: "De facto, when people engage in actions, it is because they want to."
Rationality remains, of course, valid, valuable, and often necessary, but it has too long been promoted to the exclusion of all other approaches. Information certainly contributes to the rational-choice aspect of decision making, as one might expect.
However, in real-world decision making, we see that a more common use of BI is to confirm -- and often justify -- decisions that have already been reached based on gut feel, political necessity, or some other basis deemed less acceptable by the organization.
Declaring "rationality gaps" and other aspects of the mind as human weakness, as we saw in Part 1, now provides a worrying opening to propose removing the human from the decision-making loop altogether. This is particularly evident in the current thinking about algorithmic, AI-based decision making. From autonomous vehicles to algorithmic trading and medical diagnosis, one emerging narrative is that using algorithms will avoid the biases, pattern misidentification, and other problems inherent in human decision making.
A point often missed is that predictive analytics, correlations, and so on are based on the training and run-time data, statistical models, and experimental assumptions made by the humans who designed the algorithms.
In effect, the approach only amplifies the risks inherent in human decision making by concentrating them in the hands of a few designers. The consequences have already been well demonstrated in the financial crash of 2008, where CDOs (collateralized debt obligations) were implicitly trusted largely because of their algorithmic basis.
At issue here is that the data warehousing/business intelligence industry has, from the very beginning, been missing a major component of its architecture: people. We have slowly been raising our gaze from machine code and hexadecimal data to process and information. Indeed, we have made particular strides in the area of information and the importance of context, despite the recent return to lower-level thinking seen in the Hadoop world.
Regarding people, however, we have seen very little fundamental thought in the BI world about how people, both individually and collectively, reach decisions.
What are the internal and collaborative processes within and between people as they move from question or problem to analysis and solution? What are the inputs to these processes? What do people bring as background to each decision? Why is there such a seeming chasm between decision and action? We need an "architecture of decision makers." That is the topic of Part 3.
Dr. Barry Devlin defined the first data warehouse architecture in 1985 and is among the world’s foremost authorities on BI, big data, and beyond. His 2013 book, Business unIntelligence, offers a new architecture for modern information use and management.