Chapter 2:
The data-driven organization: Intelligence at SCALE
Evolving at an unprecedented pace, digital technologies promise to automate not only labor-intensive and repetitive work, but also the traditional and exclusive domain of educated humans—knowledge work. This is evident in new ways of reaching customers and coordinating activities, and in the fact that companies whose business is built on these new technologies now rank among the world’s largest enterprises. The presence and evolution of these companies challenge established divisions of labor between man and machine, and almost casually redraw the boundaries between industries. Machine learning and analytics challenge both the managers leading organizations and the managerial scientists studying them. Everybody says they want to be data-driven—but what does a company really need to do to achieve that?
This chapter explores the managerial, organizational, and strategic implications of allowing an ever-increasing number of organizational decisions to be taken not by managers employing intuition and common sense, but by algorithms and learning systems trained on massive amounts of data derived from electronically mediated customer interactions. We argue that these companies can be thought of as “intelligent enterprises” with enhanced abilities to sense, comprehend, act, learn, and explain (SCALE) their environment and their interactions with it. To acquire these capabilities, managers need to cede authority over some decisions while acquiring new capabilities and roles for themselves.
Keywords: Analytics, Digital strategy, Intelligent systems, Artificial intelligence, Organization design, Decision making

2.1 Data, data everywhere
Data, which used to be expensive and scarce, is now everywhere. Digital customer interfaces make customers’ purchases, pre-purchase searches, and post-purchase reactions available for analysis. Sensors record what happens, transmitting the information through air and fiber optics to those who want to use it. With the exponential growth in raw data comes even faster growth in our ability to do something with it: store it, process it, and present it. Moore’s law (1965) applies to semiconductors, but exponential growth is everywhere in information technology (Brynjolfsson and McAfee, 2014; Denning and Lewis, 2016)—and it is hard to wrap our heads around. At the current rate of growth, we will have systems with 32 times today’s capacity in ten years; in twenty years, more than 1000 times.
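A quick back-of-the-envelope check of these figures, assuming capacity doubles roughly every two years:

```python
# Capacity growth under a doubling-every-two-years assumption (illustrative only).
for years in (10, 20):
    doublings = years / 2
    print(f"In {years} years: about {2 ** doublings:.0f}x today's capacity")
# In 10 years: about 32x today's capacity
# In 20 years: about 1024x today's capacity
```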
With growth in data and computing comes a truly breathtaking increase in analytical capability. In early 2017, two of the authors created a course in analytics—and less than one year later, we had changed core analytics technologies twice and reframed the course, because the available tools had automated many operations.
An example of this is Google’s AlphaGo. In March 2016, AlphaGo played the board game Go and won 4–1 against Lee Sedol, considered to be the greatest player of the past decade (Google, 2017). Eighteen months later, the next version, AlphaGo Zero, trained itself for three days and then won 100–0 over AlphaGo (Silver et al., 2017), before going on to teach itself chess in four hours and resoundingly beating all known chess programs (Sterling, 2017). The evolution does not stop—developments in quantum computing may lead to computers that are a million or more times faster than current ones (Waters, 2018). Current organizational designs and decision processes do not have the ability to use the future capacity of computers to collect, process, and distribute information.
The rapid evolution will require organizations to be data-driven, using new tools and techniques to analyze data in order to make intelligent decisions. We think of these organizations as information processing systems (Galbraith, 1973), structured “to create the most appropriate configuration [...] to facilitate effective collection, processing and distribution of information” (Tushman and Nadler, 1978: 615), and as displaying two types of organizational innovation (Galbraith, 2010; Fjeldstad et al., 2012). First, they have developed coordinative and collaborative capabilities that are information-intensive and automated to manage increasing complexity. Second, they can self-organize and self-reconfigure on a continuous basis.
2.2 Organizational intelligence at SCALE
To deconstruct and describe a data-driven organization, we think of it as an intelligent system that can Sense, Comprehend, Act, Learn, and Explain—SCALE—inspired by the literature on artificial intelligence (Winston, 1984; Simon and Newell, 1958). These are capabilities that characterize intelligent actors such as humans and smart machines, as well as organized collectives of both. We think this SCALE framework will allow scholars and practitioners to better understand how organizations can turn data into sustainable competitive advantage. Emphasizing the role of data and technology, the following sections explain the SCALE framework of organizational intelligence. We use the vehicle manufacturer Tesla to illustrate each SCALE element.
2.2.1 Sense
Sensing, a dynamic organizational capability (Teece et al., 1997), involves observing and registering the external and internal environment. Sensing technologies—sensors and the technologies that connect them—enable data collection at scale from many sources. Use of digital and digitally enabled products and services leaves digital traces, which become useful data. Increased analytical capabilities turn output from conventional ERP and CRM systems from irritating noise into vital information. Third-party vendors emerge, specializing in data collection, structuring, and aggregation. Furthermore, companies such as Capital One are increasingly willing to invest substantially in acquiring data by deliberately extending their services into new markets and products, less to sell there than to acquire the data required to build new products (Davenport and Kim, 2013).
Since 2016, Tesla has embedded eight cameras providing a 360-degree view in their new cars—some of them not in use at launch, but available in anticipation of new, as yet undefined functionality. Combined with mobile connectivity, these sensors give Tesla a sophisticated sensing capability well beyond what conventional car manufacturers currently have—a solid basis for the other SCALE capabilities.
2.2.2 Comprehend
Comprehension involves using data and observations from sensing activities to discern context, detect patterns, and make inferences. Organizations can build data-driven models of their internal and external environment, identify causal relationships, and prescribe what to do. Descriptive and predictive analytics can enhance an organization’s comprehension capabilities, as can speech, image, and video recognition technologies.
The combination of sensing and comprehension enables novel data applications such as the generation of virtual representations—digital twins—of physical objects and systems, which allow organizations to monitor, diagnose, and maintain such installations remotely. Companies like General Electric, Siemens, and Rolls Royce Maritime provide digital twin solutions in construction, shipping, energy, and manufacturing. Digital twins are continuously updated and often developed before their “physical twin” in order to test a product or system before it is built. Using virtual reality goggles, architects can offer customers a guided tour of a 3D virtual representation of a planned building—as was done at a newly opened hospital in Østfold, Norway—and train staff to use the new building before it is finished. Using cheap cell phones to photograph the building in 3D (generating a “point cloud”) as it was assembled let the builders track progress and discover errors while they could still be fixed inexpensively.
Tesla accumulates and analyzes the vast amounts of data from their connected vehicles to identify maintenance, safety, functionality, and performance improvements, in addition to enabling services such as remote unlocking of a car by a customer. The collected data allows the company to understand how their vehicles perform when used by real customers—as well as to respond to criticism, as demonstrated when a car journalist reported the car to have limited range, and Tesla could show that the journalist had deliberately run the battery down and neglected to charge the car (Muller, 2013).
2.2.3 Act
Action refers to the decision-making and productive activities of an organization. Technology is currently used to automate and augment processes previously reserved exclusively for humans, as well as to enable decisions and activities that used to be impossible or unfeasible to execute. The development of physical and software robots allows automatic execution of productive activities, particularly routinized processes, but with the growth in machine intelligence it is increasingly applied to more adaptive forms of work as well. Banks and financial institutions, in Norway and worldwide, are rapidly implementing Robotic Process Automation (RPA) technology to automate information-based routine processes and decisions, such as handling credit card applications. Companies use chatbots with embedded natural language recognition (comprehend) and generation (act) technology to answer standard questions and offer 24/7 service to customers. Sparebank1 SR-bank, a regional bank headquartered in Stavanger, Norway, uses a local chatbot provided by Boost.ai that reportedly understands Norwegian dialects (Lyche, 2017). Advertisers and media agencies use programmatic advertising systems to automate ad placement in different media outlets.
The Danish cafe chain Joe & the Juice has developed a system for assessing the market potential in different geographies that managers consult on a weekly basis in deciding where to launch new outlets. For instance, the chain opened 14 cafes in New York in 2017 and estimates that it will saturate that specific market with 86 outlets. Joe & the Juice uses the data from each new outlet to update the company’s model, which guides where to open the next cafe in the city. The chain currently has 230 outlets in 15 countries (Andersen, 2018). Organizations personalize digital services based on user behavior and characteristics, and implement product improvement via software upgrades on existing hardware.
Acting technology can be physical, as with 3D printing used to create physical replicas of digital originals (Sasson and Johnson, 2016), e.g. spare parts for time-critical industrial equipment, hearing aids tailored to each user’s ears, medical implants, prescription lenses, and even vehicles and buildings—disrupting traditional production and supply chains in the process.
All cars produced by Tesla are permanently connected to the Internet, allowing the company to update the software remotely. This service is provided free of charge to the customer and can involve some attractive freebies—free music streaming from Spotify, for instance. The effect is that customers eagerly look forward to software upgrades—and that Tesla can fix errors quickly and gain the benefit of all their cars running on the latest software, drastically reducing model and version complexity.
2.2.4 Learn
Learning refers to the ability to acquire knowledge or skills in order to adapt and improve behavior and cognitive understanding (Fiol and Lyles, 1985; Simon, 1981). Learning involves experimentation, model refinement, and integration into products and services, productive processes, and organizational design (Gavetti and Levinthal, 2000; Lawrence and Lorsch, 1967; March, 1991; Senge, 1990). Organizational learning has been a central topic in management research and practice for decades (Argote, 2013; Levitt and March, 1988; March, 1991), and machine learning is revolutionizing predictive analytics; it is also core to the development of a vast array of artificially intelligent applications (e.g. Wilson, Sachdev and Alter, 2016). Machine intelligence holds distinctive advantages over human learning in drawing lessons from large amounts of data, since human information-processing ability is very limited (Simon, 1947, 1973) and prone to serious biases (Kahneman, Slovic and Tversky, 1982). By codifying tacit knowledge, machine learning pushes the boundaries of codified learning, enabling more accurate and scalable learning processes and outcomes. To improve organizations’ learning and problem-solving capabilities, humans must identify meaningful problems and shape strategies for acquiring data.
Tesla has used its SCALE capabilities to provide new services to its customers—sometimes as solutions to problems, other times in response to suggestions. When the cars were sold in Norway—a new environment—customers complained that cars stopped charging overnight. The cars’ computer logs showed this was due to the power supply: Norwegian power companies deliver electricity with larger variance than the Teslas were calibrated for. A software update slightly widening the safety envelope was sent out, and the problem was solved. Similarly, customers complained that it was hard to change the rubber on the windshield wipers—so the company added a “wiper service mode” button to the touch screen, with which the wipers sweep up into a vertical position and stay there, providing access.
2.2.5 Explain
Explaining refers to the ability to show how something works, to explicate causal relationships, articulate purpose, and set direction. It is a leadership imperative but also important in peer-to-peer relationships as well as in interactions with outside parties such as customers, partners, suppliers, and regulators. Explanation is a vital capability in generating organizational purpose, meaning, and identity. The search for explanations drives the identification and formulation of questions and problems for humans and machines to solve.
While machine capabilities on the other four SCALE factors are quite formidable, machines’ ability to explain themselves is still very limited, leaving humans primarily responsible for interpretation. This raises a dilemma—the more sophisticated a machine-learning model is, the harder it may be to explain. Explicability is critical for managers’ willingness to trust the advice from intelligent systems (Kolbjørnsrud, Amico and Thomas, 2017), though this may change as the tools become more familiar. Furthermore, regulatory initiatives such as GDPR will require organizations to state, in language understandable to customers, how their automated decisions are reached, and to ensure that automated machine learning does not inadvertently derive unlawfully discriminatory features such as gender or race from the apparent noise of customer backgrounds and behavior.
The technology itself will help, too. Data science technology is increasingly automated for non-data scientists1 and offers graphical interfaces for a more intuitive analytical process (Schwab, 2018). The latest tools provide better explanations for each individual decision and can be configured to identify the most potent variables. But fully harnessing the power of machine learning may require relying on results that are impossible to explain to humans (Weinberger, 2017, 2018), setting us up for a tradeoff between understandable and optimized solutions. If we cannot understand how a model works, we may have to settle for understanding how it behaves.
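As a minimal sketch of the “most potent variables” idea (not any particular vendor’s tool), permutation importance can be computed with scikit-learn on the Wisconsin breast cancer data referenced later in the chapter; the model and settings here are illustrative assumptions:

```python
# Surfacing the variables a fitted model leans on most, via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# large drops flag the most influential variables.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```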
2.3 Human decision making with machine learning
“The Answer to the Great Question... Of Life, the Universe and Everything... Is... Forty-two,” said Deep Thought, with infinite majesty and calm.
Organizational mastery of the SCALE framework manifests itself in an integrated organizational decision-making process that leverages distinct human and machine capabilities: humans must specify questions comprehensible to machines; machines can then search for solutions to these questions and assess their validity; humans assess whether the machine-generated solutions are viable and valuable; and humans determine deployment procedures.
After expending enormous computational resources, Douglas Adams’s Deep Thought gave the humans a laughable answer because their question was too underspecified. Data-driven “organizational intelligence” obviously requires technological mastery. Perhaps less obviously, data-driven organizations must also engage in cultural change to communicate with machines. Human decision makers must communicate with machines more precisely than they may be accustomed to communicating with other humans. Machines cannot understand the ambiguous or opaque questions that human colleagues sometimes tolerate and muddle through. Human decision makers must learn to ask questions on machines’ terms, at a potentially unfamiliar level of precision, and must be able to assess and manage machines’ granular output. Organizations have to master new communication routines to reconcile the “big picture” questions that executives want to answer with the low-level questions machines can answer.
Leaders of successful digital organizations will need to build strategy from questions machines can answer, and to reconcile machine-generated models with the nuance of human preferences. For example, a machine is about as bad at finding “good” customers as it is at finding the meaning of life; but a machine could easily find customers that shop three times a week. Similarly, a machine-learning model might accurately predict that 99.99% of all airline passengers are non-terrorists, but no one will care unless the model can correctly identify the 0.01% of passengers that are.
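The airline example is the familiar accuracy trap with rare classes; a numeric illustration (figures hypothetical):

```python
# A "model" that labels every passenger as a non-terrorist is 99.99% accurate
# yet useless for the question anyone actually cares about.
passengers = 1_000_000
threats_in_population = 100          # the 0.01% we actually care about

threats_caught = 0                   # the trivial model flags no one
accuracy = (passengers - threats_in_population) / passengers
recall = threats_caught / threats_in_population

print(f"Accuracy: {accuracy:.2%}")                 # 99.99%
print(f"Recall on the rare class: {recall:.0%}")   # 0%
```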
Imagine that a policy-maker asks the question “How can we reduce cancer-related deaths?” A machine cannot independently answer that question. However, a machine could predict whether a particular tumor is cancerous, a doctor could use that prediction to guide treatment, and that human/machine interaction aggregated over many tumors could reduce cancer-related deaths.
To illustrate the interface between human decision making and machine learning, consider the following process. Suppose a model uses scan measurements to predict whether a tumor is malignant (cancerous) or benign. The question or target, “malignant or benign,” is a binary problem, easily understood by a machine. With many observations of previous tumors, the machine can use observed cancerous tumors and each tumor’s associated scan measurements to train a machine-learning model, resulting in a confusion matrix.2
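Normalized to 100 patients, cell counts consistent with the accuracy and cost figures discussed below would be roughly 62 benign tumors correctly labeled benign, 1 benign tumor flagged as malignant, 6 malignant tumors missed (labeled benign), and 31 malignant tumors correctly flagged. A minimal sketch of the procedure described in footnote 2, using scikit-learn’s bundled copy of the Wisconsin data (the exact counts will vary with the train/test split and tree settings):

```python
# Decision tree on the Wisconsin breast cancer data, confusion matrix
# scaled to 100 cases (per footnote 2). Split and settings are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = load_breast_cancer(return_X_y=True)      # y: 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
cm = confusion_matrix(y_test, tree.predict(X_test))
print(cm / cm.sum() * 100)                       # rows: actual, columns: predicted
```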
This model is 93% accurate. Is this good? Answering that question depends on what kind of errors we humans care about—and we probably care more about identifying “malignant” than identifying “benign.” In economic terms, humans assign different costs to different outcomes. The process of cost assignment remains a uniquely human task: fed precise questions, machines can search for good predictive models, but humans must assess model value.
In this hypothetical example, suppose that correctly identifying a benign tumor is costless. Correctly identifying a malignant tumor triggers a biopsy that costs 1 plus treatment that costs 15. Mistakenly flagging a benign tumor as malignant unnecessarily triggers a biopsy that costs 1. Mistakenly flagging a malignant tumor as benign delays treatment, ultimately triggering a cost of 100. These four costs form a cost matrix that mirrors the confusion matrix.
Multiplying the cost matrix by the relative frequencies of each cell in the confusion matrix and summing up gives an expected cost for each new patient: about 10.97. Should we use the model to decide whether or not to order a biopsy? Probably not—the safe alternative would be to biopsy everyone, at an expected cost of 6.55. However, with a working model and a specified cost function, a human could tweak the model to increase value. Underlying the confusion matrix’s binary outcomes are probabilities for each observation: for some of the tumors predicted to be benign, the model was quite sure it was right (say 98% certain); for other tumors predicted to be benign, the model was less confident (say 56% certain). The confusion matrix corresponds to an accuracy-maximizing threshold value separating predicted “benign” cases from predicted “malignant” cases. A human decision maker could reassess this threshold—for instance, only allow the model to predict “benign” if it is more than 90% certain for that instance. This will reduce the model’s accuracy, but it might increase its value: unlike the biopsy-everyone policy, it still spares the clearly benign cases a biopsy while sending the doubtful ones on to biopsy.
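A minimal sketch of this cost arithmetic, using the reconstructed cell counts above (the threshold tweak would operate on the model’s per-case probabilities, e.g. scikit-learn’s predict_proba):

```python
import numpy as np

# Confusion matrix per 100 patients: rows = actual (benign, malignant),
# columns = predicted (benign, malignant). Counts as reconstructed above.
confusion = np.array([[62, 1],
                      [6, 31]])

# Cost per patient in the same layout: correct benign = 0, false alarm = 1
# (biopsy), missed malignancy = 100 (delayed treatment),
# caught malignancy = 16 (biopsy + treatment).
costs = np.array([[0, 1],
                  [100, 16]])

expected_cost_model = (confusion * costs).sum() / confusion.sum()
print(expected_cost_model)          # 10.97

# The "safe" alternative: biopsy everyone (cost 1 each) and treat the
# 37 malignant cases (cost 15 each).
n_malignant = confusion[1].sum()
expected_cost_biopsy_all = (confusion.sum() + n_malignant * 15) / confusion.sum()
print(expected_cost_biopsy_all)     # 6.55
```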
As automated machine learning becomes more common, human intervention in the modeling process itself will become less necessary, facilitating wider deployment of machine learning. Human intervention in data-driven decision making will then concentrate in the target-specification and model-evaluation stages. Rather than asking machines to build strategy, humans will need to ask machines questions that facilitate building strategy from the ground up. Incorporating machines into our decision-making processes by asking them to predict some outcomes will require changes to organizational culture and communication practices.
2.3.1 Modes of analytics use
Organizations vary in how data-driven and intelligent their SCALE capabilities are. We can think of this as variation in the complexity of the questions they ask and the sophistication of their modeling techniques. Organizations also vary in their pre-analytics capabilities, i.e. collecting and preparing data for analytics, and these capabilities tend to improve with experience and skill. There are two main stages of analytics, descriptive and predictive, that can be used to answer different types of questions.
Descriptive analytics is a data-driven approach for questions such as: What happened? The analytical focus is to report past and present facts about a situation to human decision makers. It requires little beyond the ability to combine, compute, and aggregate data, plus descriptive statistics. Typical examples are reports and scorecards built on benchmarks and KPIs. In this mode, analytics plays a passive role in the decision.
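A minimal sketch of this mode, aggregating hypothetical transaction data into a simple scorecard:

```python
# "What happened?" answered by simple aggregation (data are hypothetical).
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [120, 135, 90, 110],
})

report = sales.groupby("region")["revenue"].agg(["sum", "mean"])
print(report)   # revenue per region: the kind of KPI a scorecard would show
```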
Predictive analytics is an analysis-driven approach for questions such as: Why did it happen? What will happen? What should I do? The focus of these questions shifts from the past to the future, and the application of analytics becomes gradually more advanced.
First, we have what we call a reactive mode of analytics in decision making that focuses on understanding why things have happened in the past. In this mode, analytics is used to search for explanations through relationships, patterns and trends. Typical techniques include classical statistics such as cross tabulation, correlation and regression models, but some organizations also apply more advanced techniques such as data mining.
At the next level, analytics is used more directly in decisions, in an active mode of decision making where the analysis aims to foresee what might happen. The analytical focus is to develop predictive models used to estimate probabilities for individual cases (e.g. scoring) and forecasts at aggregated levels. A typical approach is to build predictive models that combine classical statistics and machine learning. Examples include credit scores, fraud identification, and customer intent.
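A stylized sketch of scoring in the active mode, with hypothetical applicant data and feature names:

```python
# A scoring model turns past cases into a per-case probability (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past applicants: [income, existing_debt], and whether they later defaulted.
X = np.array([[45, 10], [80, 5], [30, 25], [60, 40], [95, 2], [25, 30]])
y = np.array([0, 0, 1, 1, 0, 1])

scorer = LogisticRegression(max_iter=1000).fit(X, y)

new_applicant = np.array([[50, 20]])
print(scorer.predict_proba(new_applicant)[0, 1])   # estimated default probability
```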
The most advanced level is to deploy the predictive models to guide decisions and actions. We call this a proactive mode of analytics in decision making. This mode has been referred to as prescriptive analytics, but in our view the main difference between the active and proactive modes concerns organizational capabilities to deploy the results from the active mode rather than the techniques applied. In the proactive mode, the models built in the reactive mode, and the probabilities calculated in the active mode, are used to take (or suggest) actions based on rules, simulation or optimization. Examples of applications include smart buildings and server room management (Evans and Gao, 2016). Current applications rely on dynamic and complex networks of automation, often combining a variety of data flows from sources such as transactions, process control systems and sensors, but as organizations become more sophisticated in exploiting the data, this is a changing landscape.
Organizations with capabilities in traditional business intelligence and analytics have focused on descriptive analytics. They need to acquire competencies and skills for predictive analytics. Some have to start with more advanced classical statistics in order to shift from the passive to the reactive mode. The more advanced levels require the organization to move into data science and learn how to combine techniques such as data management and machine learning with the ability to formulate business problems that the machine can answer. Many organizations will find the shift from descriptive analytics to advanced modes of predictive analytics to be discontinuous—even dramatic.
2.3.2 The discipline of strategic experimentation
Increasingly, organizations have to deal with fast-paced and unpredictable change due to new technologies, business models, disruptive innovation, global competition, and more. In rapidly and unpredictably changing environments, strategic planning is a risky business. If conditions change, even the best plan, once implemented, is likely to be wrong. Such environments require more adaptive approaches to strategy. Organizations armed with strong SCALE intelligence capabilities have the aptitude for strategic, data-driven experimentation (Thomke and Manzi, 2014), as the Norwegian companies RiksTV and Finn.no exemplify.
RiksTV, the Norwegian provider of Digital Terrestrial TV (DTT), faced challenges from bigger competing TV distributors and from disruptive over-the-top providers such as Netflix, as well as an inferior distribution platform with limited bandwidth and no on-demand capabilities, and a frequency license expiring in 2021. Management realized that it faced a major digital transformation under great uncertainty and that static, long-term plans would be insufficient. Recognizing the impossibility of specifying a winning strategy ex ante, they frame strategy as a portfolio of hypotheses that need to be generated, tested, and organized for maximum agility. Inspired by Lean Startup and agile methods, RiksTV continuously develops, executes, and evaluates fast, low-cost experiments in new products and services, as well as technological and operational projects. The experiments are guided by a clear strategic direction specified in four broad goals revised at regular intervals. The experimental turn in the company’s strategy has enabled more rapid product development adapted to user needs and preferences, while keeping the organization nimble and capital investment levels moderate.
Finn.no, the online marketplace owned by Schibsted ASA, has explicated a managerial decision process that exemplifies how a data-driven organization changes managerial responsibilities (Lome, 2016). In this process, top management identifies the customer set, the value the organization is creating, and how this value should be measured. After management sets performance goals, it gives a development team responsibility for finding a solution. The development team then generates possible solutions and tests them on the digital user interface (a process known as A/B testing). Solutions that work are implemented, and the results of the implementation are checked.
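A sketch of how such an A/B test might be evaluated; the figures are hypothetical, not Finn.no data:

```python
# Did variant B convert better than variant A? (two-proportion z-test)
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 540]        # users who completed the target action (A, B)
visitors    = [10_000, 10_000]  # users exposed to each variant

stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.3f}")
# A small p-value suggests the difference is unlikely to be chance,
# so the better-performing variant is rolled out.
```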
The important change is that solution selection is determined by performance in experiments at the customer interface—not by senior management’s judgment (i.e., HIPPO, or “highest paid person’s opinion”). Top management may change the goals of the organization or major direction of its activities—which it has done, notably in switching development from a focus on web interfaces to a “mobile first” strategy—but the detailed implementation within the broader frame of goal-setting is up to the development teams.
The data-driven, intelligent SCALE capabilities enable Finn.no and RiksTV to take experimental yet disciplined approaches to strategizing under uncertainty. Experimentation at SCALE breaks down the traditional strategy formulation and execution divide as formulation and execution are performed in multiple, parallel, and iterative micro-cycles rather than the conventional linear, sequential approach.
2.4 Implications
The emergence of data-driven organizations that intelligently orchestrate collectives of intelligent people and intelligent machines has profound implications for managerial practice, research, and education.
2.4.1 Managers
“Rather than giving orders as from one person to another, both should take their orders from the situation; justification of order this way is most effective in all situations except when crisis is imminent, in such a case direct order-giving is not only accepted, but expected.”
Machine learning will, for many organizations, trigger a rethink of management’s role. Some decisions will be automated, while in others managerial judgment will be augmented by intelligent technology (Kolbjørnsrud, Amico, and Thomas, 2016; Daugherty and Wilson, 2018), allowing decisions on how to solve customers’ problems to be executed through decentralized resource mobilization, and value networks to be rapidly reconfigured based on customers’ interactions with each other.
Reorienting managerial decision making from opinions to data requires discipline from the top. Former Harvard Business School professor Gary Loveman, who built the world’s largest casino corporation by shifting management decisions from intuition to analysis, carefully studying what customers actually did, famously said that there were three ways to get fired from his company: theft, sexual harassment, and running an experiment without a control group (Schrage, 2011).
Paradoxically, the increased reliance on data and algorithms in decision making and the large-scale automation of routine, information-intensive tasks will increase, rather than decrease, the need for interpersonal leadership skills among managers. The remaining human tasks will be oriented towards creative, complex problem solving, requiring managers to harness the collective creativity, intelligence, and judgment of their human coworkers (Chamorro-Premuzic, Wade and Jordan, 2018; Kolbjørnsrud, Amico and Thomas, 2016). The data-driven organization requires bilingual managers who speak both ‘machine’ and ‘human’—that is, who know how to communicate and work effectively with both intelligent machines and intelligent humans.
2.4.2 Organizations
According to David (1990), the second industrial revolution’s dramatic productivity jump did not emerge from technological change—i.e., transmitting power using electrical cables rather than belts and pulleys—but from recognizing that machines no longer had to line up according to where the belts and pulleys had been. A similar recognition will have to take place in order to fully benefit from the digital revolution—we have to stop designing processes based on the working speed and communications capabilities of individual decision makers, and start thinking of organizations as information processing systems with humans and machines both doing what they do best.
Modern IT systems are organized as individual units that communicate using a common protocol—a computer science concept called “object orientation,” first developed in Norway by Dahl and Nygaard (1966) and articulated at Xerox PARC in the 1980s (e.g., Goldberg and Robson, 1983). An important organizational principle is that, as far as possible, one part of the program (one “object”) should be used by everyone. Not only does this reduce complexity (if you have an error, you know where to look), but it also ensures that if you come up with a better way of doing something (for instance, a faster way of sorting a list of items), the increased speed will be felt everywhere in the system, since everyone is using the same mechanism.
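A toy illustration of the shared-object principle (the names are invented for the example):

```python
# Every part of the system uses the same sorting service; improve it once
# and every caller benefits.
class SortingService:
    def sort(self, items):
        return sorted(items)   # swap in a faster algorithm here, all callers gain

shared_sorter = SortingService()

def billing_report(invoices):
    return shared_sorter.sort(invoices)

def inventory_report(parts):
    return shared_sorter.sort(parts)

print(billing_report([3, 1, 2]), inventory_report(["b", "a"]))
```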
That is where the difficulty will arise: models force organizational changes by resolving interdependencies through SCALE. For example, Uber and Lyft invaded the taxi industry by using powerful information technologies to centralize (and take over) some activities (ordering, communication, pricing, payment, location, navigation, and driver/passenger evaluation) while decentralizing other aspects (service design, performance) to the individual driver. The traditional companies (taxi services) were largely left with financing the vehicles, not so much outcompeted as made obsolete, because the new service made most of their managerial decisions based on models and monitoring, and self-organized the rest.
Applying object-oriented principles to organization design allows the data-driven organization to become actor-oriented and self-organized. Its organization design is embedded in the rules and protocols for interaction rather than in a fixed structure, as in hierarchical designs (Fjeldstad et al., 2012; Kolbjørnsrud, 2017). Rules-in-use are the formal and informal rules regulating behavior, rights, and obligations in a community—what participants can, cannot, and may do (Crawford and Ostrom, 1995). Protocols are used to guide the interactions of self-organizing actors (Fjeldstad et al., 2012). Combined with extensive information transparency and shared resource commons, the protocols enable a shared situational awareness that allows self-organizing actors—human or machine—to make informed decisions and take actions towards fulfilling the goals of the organization—exhibiting distributed intelligence at SCALE.
2.4.3 Science and education
But faced with massive data, this approach to science—
hypothesize, model, test—is becoming obsolete.
Machine learning and intelligent enterprises challenge not just managers, but also scientists. One challenge is that academia is no longer in the lead in the development and application of new methods. Research budgets of large companies dwarf those of universities—companies like Google, Facebook and Baidu develop the new methods for analyzing and acting on data, but also share their findings via open source agreements (see Snow et al., 2017, for an academic treatment, and Dowling, 2017, for a practical example).
In machine learning, there is less use for theory (Anderson, 2008), challenging—at least on the surface—the idea of the scientific method as hypothesis falsification (Popper, 1959). Every Ph.D. student has the proper order of things drilled into them: First you formulate hypotheses from theory, then you collect the data, then you test the hypotheses against the data, and if your coefficients merit three asterisks, you can publish. Doing it in any other order is frowned upon and considered “fishing.”
Once overall goal setting is done, however, machine learning is nothing but fishing. Much like managers, researchers doing machine learning will have to cede model design to the data. This is partially due to increases in computational power: computers can now test any possible hypothesis (provided you have the data) with any known technique (algorithm) and under several competing assumptions to find the “optimal” fit. This approach—raw power over ingenuity and theoretical insight—is something no human could do in a lifetime. Furthermore, at the point of deployment, the computer can continuously test the extent to which the model remains optimal, and continuously adjust it. However, the sheer quantity of data also challenges the very notion of what constitutes a good model. In social science, most evaluations of models are based on a starved data set: with 300 survey answers, a 95% confidence interval seems a reasonable criterion. If you have millions of observations, everything is significant.
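A small synthetic illustration of the last point, where the same tiny effect that a 300-person survey cannot distinguish from noise becomes highly significant with a million observations:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def tiny_effect_pvalue(n, effect=0.01):
    # Two variables with a true correlation of roughly 1%.
    x = rng.standard_normal(n)
    y = effect * x + rng.standard_normal(n)
    r, p = pearsonr(x, y)
    return p

print(f"n = 300:       p = {tiny_effect_pvalue(300):.3f}")
print(f"n = 1,000,000: p = {tiny_effect_pvalue(1_000_000):.2e}")
```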
We can view machine learning as large-scale, machine-based induction, developing insights from patterns identified in the data. Machine learning is thus consistent with qualitative researchers’ methodological norms. But because qualitative researchers typically lack the skills required to apply machine learning in their work, the scientific community faces a norms/skills paradox: quantitative researchers may have the skills to use machine learning, but machine learning violates their norms about how science should be “done”; qualitative researchers may be open to the approach, but they do not have the required skills. Perhaps this paradox explains why academia has been slower than industry to apply machine learning in empirical research. Going forward, it is imperative that educators teach students the new skills and ways of thinking—to train both next-generation scientists and machine-learning practitioners.
2.5 Conclusion
Ever since Turing (1950) considered whether machines could have intelligence and introduced what came to be known as the Turing test, decision makers have been fascinated with the idea that machines will somehow make man more intelligent. As computers get faster and faster, we have gradually understood that the term artificial intelligence (AI) is somewhat meaningless—as Marvin Minsky is alleged to have said: “Artificial intelligence is anything we haven’t done yet.”
AI will not make us smarter, and certainly will not replace managers or management decision making anytime soon. It may, however, make managers a bit less prone to biases or, at least, willing to question them; it may make organizations less wasteful and perhaps more agile; and it may guide the social sciences toward increased relevance. Releasing its potential will require new ways of organizing management, organizations, and science, allowing faster and more precise interactions between all three and their environment. That is to be welcomed—though we still do not know what this will look like, we suggest the technology will allow organizational intelligence to SCALE.
References
1. One such tool is DataRobot, which automates much of data scientists’ manual work. In most situations, DataRobot’s own specialists cannot come up with a model that is better than the one created in autopilot mode.
2. The model is generated using SciKit-Learn’s decision tree classifier run on the Wisconsin cancer dataset, with the confusion matrix normalized to 100 observations.