In November of last year a blog post by Blackboard was making the rounds. A Blackboard study documented how instructors use their system, which was somehow conflated with knowing how teachers teach. We could talk about how terrifying this blog post is, or how devoid it is of solid analysis. Instead I’d like to turn over a new leaf and not harp on about privacy, or about a company using the aggregate data it has collected with your consent (although you probably weren’t aware, when you agreed to “allow Blackboard to take your data and improve the system”, that they’d make this sort of “analysis”), but just tear into the findings.
Their arbitrary classifications (supplemental? complementary?) entirely ignore the main driver of LMS use: grades, not instructor pedagogical philosophy – students go to the tools they’re going to be marked on. Social? I guarantee that if you did an analysis of courses where use of the discussion tool (note, that’s not about quality) counts for more than 30% of the final grade, you’d see a hell of a lot more use of discussions. It’s not that the instructor laid out a social course, with a student making an average of 50 posts throughout the course (break that down over 14 weeks of “activity” and it works out to roughly 3.5 posts per week – strangely similar to the standard “post once and respond twice” to get your marks); it’s that discussions are the only tool that gives students some power to author.
Time spent is also a terrible metric for measuring anything. Tabs get left open. People sit on a page because there’s an embedded video that’s 20 minutes long. Other pages have text that can be scanned in seconds. And how are connections measured? Entry and exit times? What if there’s no exit – how does the system handle that? (Typically it waits for a set period of inactivity and then assumes an exit.) Were those values excluded from this study? If so, how many were removed? Is that significant?
Now Blackboard will say that, because of the scale of the data collected, those outliers will be mitigated. I don’t buy that for a second: LMSs are not great at capturing this data in the first place (in fact server logs are not much better), because there’s very little pinging for the active window or any of the other tricks that really require eye-tracking software and computer activity monitoring to assess. Essentially garbage in, garbage out. We have half-baked data, viewed in the abstract, with an attempt made at some generalizations.
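To make the time-spent problem concrete, here’s a minimal sketch of the kind of idle-timeout heuristic these systems tend to lean on. Everything in it (the log format, the 30-minute cutoff, the time_on_page function) is my own illustration, not Blackboard’s actual implementation:

```python
from datetime import datetime, timedelta

# Hypothetical page-view log for one student: (timestamp, page) pairs.
# Purely illustrative; not Blackboard's actual schema or logic.
hits = [
    (datetime(2017, 11, 6, 9, 0), "syllabus"),
    (datetime(2017, 11, 6, 9, 2), "week1-readings"),
    (datetime(2017, 11, 6, 9, 3), "discussion"),
    # ...the student walks away with the tab open; no further hits, no exit event
]

IDLE_TIMEOUT = timedelta(minutes=30)  # assumed inactivity cutoff for "the session ended"

def time_on_page(hits, idle_timeout=IDLE_TIMEOUT):
    """Credit each page with the gap until the next hit, capped at the timeout.

    The last page has no next hit, so the system has to guess: here it gets
    charged the full timeout, which is exactly the kind of fudge that inflates
    (or deflates) time-spent numbers.
    """
    durations = {}
    for (ts, page), nxt in zip(hits, hits[1:] + [None]):
        gap = (nxt[0] - ts) if nxt else idle_timeout
        durations[page] = durations.get(page, timedelta()) + min(gap, idle_timeout)
    return durations

for page, spent in time_on_page(hits).items():
    print(f"{page}: {spent}")
# The abandoned "discussion" tab gets credited with a full 30 minutes of "engagement".
```

Scale that heuristic up to millions of rows and you haven’t averaged the noise away; you’ve just industrialized it.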
Here’s a better way to get at this data: ask some teachers how they use the LMS. And why. It’s that simple.
If you look at Blackboard’s findings, you should frankly be scared if you’re Blackboard. Over half of your usage (53% “supplemental” or content-heavy) could be done in a password-protected CMS (Content Management System). That’s a system that could cost nothing (using Joomla or any of the many free CMS packages available) or be replicated with something institutions already have. The only benefit institutions get is that they can point at the vendor when stuff goes wrong and save money by outsourcing expertise to an external company.
If you take the findings further, only 12% of courses (“evaluative” and “holistic”) get at the best parts of the LMS, the assessment tools. So 88% of courses reviewed do things in the system that can be replicated better elsewhere. Where’s the value added?