I’m struggling with this one. Our web analytics class is looking at a couple months’ worth of data from BYU’s most popular independent study course, Math 110 [not sure what the definition of “popular” is in this case, by the way] and making some recommendations, both about their tracking suite and about the course itself. Clint explained, and I understand, that analytics is not meant for examining a handful of people—it’s for looking at trends, types, aggregates. But, for me, that aggregation leads to serious questions.
Even as we look at beautifully crafted Excel graphs that expertly bring home the magnitude of the first-lesson drop-off effect, I hear echoes of Inigo Montoya in The Princess Bride: "Are you sure that means what you think it means?"
When we aggregate, in our particular case, "shoppers" with "students," don't we run the risk that a site adjusted to the needs of that amalgam might actually be worse for both groups of individual users? Then again, in our case, we don't actually know who it is we're aggregating, and though an aggregation of unknowns is surely better than a single unknown case, I still don't know if it's something I want to be basing decisions on.
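To make that worry concrete, here's a toy illustration with entirely invented numbers (none of this comes from the Math 110 data). It's the classic Simpson's-paradox trap: if "shoppers" and "students" convert at different base rates and the two variants see different mixes of them, the pooled numbers can crown a "winner" that is actually worse for each group taken separately.

```python
# Hypothetical counts, invented purely to illustrate the aggregation risk:
# variant A converts better for shoppers AND for students, yet variant B
# looks better once the two groups are pooled, because each variant saw
# a very different mix of visitors.

data = {
    # group: {variant: (conversions, visitors)}
    "shoppers": {"A": (80, 100),   "B": (700, 1000)},
    "students": {"A": (200, 1000), "B": (10, 100)},
}

def rate(conversions, visitors):
    return conversions / visitors

# Per-group rates: A wins in both segments.
for group, variants in data.items():
    a = rate(*variants["A"])
    b = rate(*variants["B"])
    print(f"{group}: A={a:.0%}, B={b:.0%}, winner={'A' if a > b else 'B'}")

# Pooled rates: B "wins" once the segments are lumped together.
agg = {v: [sum(pair) for pair in zip(data["shoppers"][v], data["students"][v])]
       for v in ("A", "B")}
for v, (conversions, visitors) in agg.items():
    print(f"overall {v}: {rate(conversions, visitors):.0%}")
```

Run it and A wins inside both segments (80% vs. 70%, 20% vs. 10%), while pooled, B appears to win (about 65% vs. 25%). A decision made on the aggregate alone would pick the variant that serves neither group better.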
I’ve seen data-driven decisions based on landing page A/B testing result in huge traffic and conversion gains, and I have been particularly pleased when “the data” have backed me up and finally tipped the scales with a difficult boss.
Those successes notwithstanding, making “data-driven” decisions before we can be certain what the data actually represent seems to me a foolish, and potentially counter-productive, proposition.
PS: I'm also forming a hypothesis that the answers instructional designers beg from analytics data are going to be, on the whole, a good deal more complex than those sought by sellers of bivy sacks or purveyors of celebrity gossip. More on this later…at the rate I've been going lately, much later.