A lot has been said and written supporting the concept of data-driven decision making. But far less has been published about how decision makers actually get from raw data to a final, real-life decision, and even less about the ways in which decision makers must first interrogate that data. In reality, even the most "data-driven" decision makers rarely use actual, raw data in their deliberations. They are far more likely to make decisions based on information that has been derived from that data. Moreover, the quality of their decisions depends largely on the quality of that derivation.
If you think about it for even a second, the idea that decisions should not be made directly from raw data should surprise no one. Decision makers may lack the ability to interpret large data sets, and even when they have the skills, they rarely have the time to invest in doing it. As a rule, other people in the company are responsible for the accuracy of the data generated for reports and analysis. The information-based decision maker then relies on that pre-computed information to improve the quality of their decisions.
The general marketplace approach to providing and maintaining data for decision making is to assign this responsibility to technology staff or technology-based companies. These folks are usually left to develop policies and procedures for data collection on their own. This isn't a problem in and of itself, but without clear communication from decision makers about the end goal of the process, the data is often channeled in a way that does not actually support the best decision making.
In my opinion, the worst way to provide data meant for decision support is via traditional reporting tools such as user-managed report or query writers. Why? Because, for the most part, these tools require a decision maker to spend unnecessary time reading and analyzing data that is too raw and lacking in context to yield useful information. Raw data is constantly changing and updating, while these reports are static, meaning that decision makers have to remember to continually refresh and rerun them.
Think about any decision maker you know — a manager, administrator or chairman. What do they all have in common? They're time pressured. They feel like there is not enough time in a day to get their jobs done, all while attending meetings and managing their staff. Do we honestly think that people in these positions are going to give up extra time every day to download live reports and read huge swaths of raw data until they fully comprehend the daily changes? Me neither. In reality, these obstacles lead to a kind of decision making that is little more than educated guessing.
Adding to this, studies have shown that decision makers respond negatively to a high density of data in the reports provided to them. Just watch a manager's face as you drop a traditional report on their desk. In other words, instead of endless lines of data, they respond positively to information in the form of highly processed conclusions, presented visually as graphs and charts.
This is the very reason why so many data warehouse implementations fail or seriously miss the goals established at their acquisition. The technical staff who developed those systems never understood the actual goal and context of the data they were collecting. Any data delivery system that demands the time and patience of its user is destined to fail or to be largely underused.