Blackboard Thinks They Know How You Teach

In November of last year, a blog post by Blackboard was making the rounds. A Blackboard study documented how instructors use their system, which was somehow conflated with knowing how teachers teach. We could talk about how terrifying this blog post is, or how devoid it is of solid analysis. Instead I’d like to turn over a new leaf and not harp on about privacy, or about a company using the aggregate data they’ve collected with your consent (although you probably weren’t aware, when you agreed to “allow Blackboard to take your data and improve the system,” that they’d produce this sort of “analysis”), but just tear into the findings.

Their arbitrary classifications (supplemental? complementary?) ignore the main driver of LMS use: grades, not instructor pedagogical philosophy. Students go to the tools they’re going to be marked on. Social? I guarantee that if you analyzed the courses that weight the discussion tool (note, that’s not about quality) at greater than 30% of the final grade, you’d see a hell of a lot more use of discussions. It’s not that the instructor laid out a social course, with a student making an average of 50 posts throughout the course (break that down over 14 weeks of “activity” and it works out to roughly 3.5 posts per week, which is strangely similar to the standard “post once and respond twice” to get your marks). It’s that discussions are the only tool that gives students some power to author.

Time spent is also a terrible metric for measuring anything. Tabs get left open. People sit on a page because there’s an embedded video that’s 20 minutes long. Other pages have text that is scannable. And how are connections measured? Entry and exit times? What if there’s no exit? How does the system measure that? (Typically it watches for a set window of activity, and if none occurs, it assumes an exit.) Were those values excluded from this study? If so, how many were removed? Is that significant?
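
To make that inactivity-timeout point concrete, here’s a minimal sketch of the common sessionization heuristic – my own illustration, not Blackboard’s actual algorithm – showing how an arbitrary timeout silently decides the “time spent” number:

```python
from datetime import datetime, timedelta

# Hypothetical illustration of timeout-based sessionization -- not
# Blackboard's actual method, just the common heuristic described above.
TIMEOUT = timedelta(minutes=30)  # the inactivity threshold is arbitrary

def time_spent(click_times):
    """Estimate 'time spent' from a sorted list of page-view timestamps.

    Gaps longer than TIMEOUT are treated as the user having left, so they
    contribute nothing; everything shorter counts as engagement, whether
    it was reading, a 20-minute embedded video, or an abandoned open tab.
    """
    total = timedelta()
    for prev, curr in zip(click_times, click_times[1:]):
        gap = curr - prev
        if gap <= TIMEOUT:
            total += gap
        # else: assume an "exit" happened somewhere in the gap; drop it
    return total

clicks = [
    datetime(2016, 11, 1, 9, 0),
    datetime(2016, 11, 1, 9, 5),   # 5-minute gap: counted
    datetime(2016, 11, 1, 11, 0),  # 115-minute gap: assumed exit, dropped
]
print(time_spent(clicks))  # 0:05:00
```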

Now, Blackboard will say that because of the scale of the data collected, those outliers will be mitigated. I don’t buy that for a second, because LMSs are not great at capturing this data in the first place (in fact, server logs are not much better): there’s very little pinging for the active window or the other little tricks, and real attention can only be assessed with eye-tracking software and computer activity monitoring. Essentially, garbage in, garbage out. We have half-baked data, viewed in the abstract, with an attempt made at some generalizations.

Here’s a better way to get at this data: ask some teachers how they use the LMS. And why. It’s that simple.

If you look at Blackboard’s findings, you should frankly be scared if you’re Blackboard. Over half of the usage (53% “supplemental,” or content-heavy) could be handled by a password-protected CMS (Content Management System). That’s a system that could cost nothing (using Joomla or any of the millions of free CMS packages out there) or be replicated using something institutions already have. The only benefit institutions get is that they can point at the vendor when stuff goes wrong, and save money by outsourcing expertise to an external company.

If you take the findings further, only 12% of courses (“evaluative” and “holistic”) get at the best part of the LMS: the assessment tools. So 88% of the courses reviewed do things in the system that can be replicated better elsewhere. Where’s the value added?

Why Can’t Students Opt-Out of Data Collection in Higher Ed?

You know, for all the talk from EdTech vendors about being student-centred (and let’s face it, LMSs and most of the other products are not student-centred), and all the focus on collecting data from student activity – why don’t these products have an easy opt-out (being student-centred and all that) so students can refuse to have their data collected?

What makes matters worse, in many ways, is that the data collection is hidden from students’ view. For instance, many LMSs track time spent on content items or files uploaded. This tracking is never made explicit to students unless they go digging into their own data, and doing that is incredibly difficult; even then, you don’t get a complete picture of what is being collected. If I were a more conspiratorially minded person, I’d suggest this was done on purpose, to make it hard to grasp the amount of total surveillance accessible to a “teacher” or “administrator.” I’m not. I honestly believe that the total surveillance of students in the LMS is really about feature creep, where one request for information turned into more requests for more information. LMS vendors, for their part, want to keep their customers happy, so why not share what information they have with their clients? After all, it’s their data, isn’t it?

Actually, it’s not. It’s not the client’s data. It’s the individual’s data. To argue otherwise is to claim that what someone does is not their own – it reduces agency to a hilariously outdated and illogical idea.

The individual human user should be allowed to choose whether to share this data – with teachers, with institutions, or with the external companies that host it under an agreement the institution probably signed without the student even knowing. There’s an element of data as a human right here that we should be thinking about. As an administrator, I have a policy I have to adhere to, and a personal set of ethics that frankly matter more to me (and are more stringent) than the obscure, written-in-legalese policy. An unscrupulous, or outright negligent, LMS administrator would mean all bets are off. They could do things in the LMS that no one, except another administrator, could track. Even then, the other administrator would have to know enough to look through hundreds of different changelogs, scattered across different tools and different courses – essentially a forensic search that could take a good long time before any damage was undone. Checks and balances (a turn of phrase I use purposefully, as I think we’ll see what a lack of checks and balances looks like in the US over the next few years) could be implemented as part of using the system, but they aren’t, and that leaves education in a precarious situation.

Letting a student say, “I only want my data shared with my College,” or “I only want my data shared with company X for the purposes of improving the system,” shouldn’t be hard to implement. So why hasn’t it been done? Or even talked about?
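
If I were sketching it myself – and this is purely a hypothetical model of mine, not any vendor’s actual API – the core of an opt-out is a small consent record consulted before any analytics event gets stored:

```python
from dataclasses import dataclass

# Purely hypothetical sketch of per-student consent scopes. No LMS
# exposes a model like this today, which is rather the point.
@dataclass
class ConsentRecord:
    share_with_instructor: bool = True    # some data is needed to grade
    share_with_institution: bool = False  # "only my College"
    share_with_vendor: bool = False       # "company X, to improve the system"

def may_record(consent, sink):
    """Return True if this student's consent allows storing events for sink."""
    allowed = {
        "instructor": consent.share_with_instructor,
        "institution": consent.share_with_institution,
        "vendor": consent.share_with_vendor,
    }
    return allowed.get(sink, False)  # unknown sinks are denied by default

student = ConsentRecord()  # accepted the defaults: instructor only
print(may_record(student, "vendor"))      # False -- no harvesting
print(may_record(student, "instructor"))  # True
```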

In my opinion, the data that gets harvested (anonymously, of course) is worth more to the company – as information about how people use the system – than the good optics of an opt-out button. It’s what allows Blackboard to say how instructors use their system. We could talk about how terrifying that blog post is (instructor use is a proxy for how students use the system, because LMSs give instructors the power to construct the student’s experience), or how devoid of solid analysis it is. I’ll deal with the analysis later, so let’s just consider how this is entirely without context. Blackboard-hosted sites were harvested (probably with the consent of the people who signed the contracts, but not the instructors who actually create things in the system, nor the students who engage with it) by Blackboard to tell you how you teach. In one of the cited pieces, Blackboard says they used this data to improve their notifications. If I put a study through ethics review saying I was going to look at improving notifications, and then released a post about how people used the whole system, it might very well be within my rights (and I suspect it’s within Blackboard’s), but it is ethically murky. The fact that they’ve released it to the public allows the public to ask these questions – but no one really has? Or if they have, I missed it.

The fact that Blackboard can do this (and I’m talking about Blackboard because they did it, but there’s a similar post from Canvas making the rounds, about discussion posts with video being more engaging or some such idea) without really clearing it with any client is chilling. It also constrains how we perceive we’re able to use the LMS – it sets the standards for how it is used.

Fusion 2014 – Day Two Recap

So, as anyone who has gone to a conference before will recognize, it’s a bit more like a marathon than a sprint – you really have to pace yourself to get everything in and pay attention to the things you want to. I will say that, for the second year in a row, the food at the Fusion conference was really good. I ended up talking with Paul Janzen of D2L about our impending PeopleSoft integration and the summer of integrations (Blackboard Collaborate, Pearson, maybe McGraw Hill, iClicker and Top Hat) we’re doing at McMaster. Ken Chapman also joined us at breakfast and asked a little about what we’d like to see out of e-mail. Frankly, I hadn’t thought about e-mail in years, because we’ve been mired in hell with IT trying to get the Google Mail integration working (not that it doesn’t functionally work, but IT has stalled us on admin-level access since we asked, a year and a half ago). I said that I’d personally prefer the system not do e-mail at all, but that would be a difficult ask considering we have people who keep their academic teaching e-mail on the LMS rather than their institutional e-mail. The problem for us is that we’re currently not configured to allow external e-mail. It will be interesting to see if IMAP/POP3 support comes to Brightspace sometime in the future – that would make a lot more sense.

Insights Focus Group

I wasn’t sure I was going to be of any help in this, but I thought that, seeing as we’ve run some reports with the Insights tool, maybe I could glean better ways to deal with it. Basically, people had concerns: the large data sets mean you can’t run org-level reports (a concern of ours as well), the interface needs some improvement, and the time it takes to create ad hoc reports is too long. So those issues were at least noted. Let’s see how they get addressed going forward.

Blended Learning, Learning Portfolios and Portfolio Evaluation

Wendy Lawson and I co-presented this – however, it was mostly a Wendy show. She lived it, so she should have the floor. Basically, this presentation outlined what Wendy and I collaborated on in her Med Rad Sci 1F03 course – a professional development course for first-semester, first-year science students going into the Medical Radiation Sciences (X-Ray, Sonography, etc.).

We used the ePortfolio tool as the presentation mechanism, which I think worked well. I’m not sure we had a good flow for what we were going to show on each page, but other than that, it was a risk we felt was small enough not to impede our presentation.

We talked about how this redesigned course uses the Learning Portfolio to deliver the course in a blended manner (using ePortfolio/Learning Portfolio activities as the one-hour blended component) and how the students did with it. Having worked with Wendy on this presentation, I can say the work her students did was miles above what we see on average, and I think the weaknesses she acknowledged in the course will be addressed next year.

Vendor Sessions

I honestly skipped these because, well, I’m not interested in getting more spam in my work e-mail. Plus, my wife was having surgery at the time (everything’s good!), so I wanted to call and make sure we connected before it began.

Connecting Learning Tools to the Desire2Learn Platform: Models and Approaches

Attending this session was particularly self-serving – I wanted to say hi to the presenter, George Kroner, whom I’ve followed on Twitter for what seems like a million years, and the Valence material is something I feel I should know. I have a decent enough programming background – I can hack things together. So why am I not actually building this stuff?

George walked through UMUC’s process for integrating a new learning tool, which can be broken down into three steps:

  1. Evaluate tool fit
  2. Determine level of effort to integrate
  3. Do it/Don’t do it

It seems so simple when I rewrite my notes, but it’s a really interesting set of steps. For instance, in determining the level of effort to integrate, you also have to think about post-integration support: who supports what once you integrate? Is it the vendor, or your support desk? What’s the protocol for resolving issues – do people put in a ticket with you, and then you chase the vendor? Is the tool a one-time configuration that imports/exports nicely, or does it need to be configured every time it’s deployed in a course?
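
Rewriting the notes yet again, the steps almost beg to become a checklist. Here’s a throwaway sketch – the three steps are George’s, but every field, value and tool name below is my own made-up example:

```python
# Throwaway sketch of an integration-evaluation record. The three steps
# are from George's talk; the fields and the tool name are hypothetical.
evaluation = {
    "tool": "ExamplePublisherTool",
    "fit": {                                 # step 1: evaluate tool fit
        "meets_pedagogical_need": True,
        "duplicates_existing_tool": False,
    },
    "effort": {                              # step 2: level of effort
        "one_time_configuration": False,     # needs per-course setup
        "imports_exports_cleanly": True,
        "first_line_support": "support desk",  # or "vendor"
        "issue_protocol": "ticket with us, then we chase the vendor",
    },
}

# Step 3: do it / don't do it -- a crude gate over the first two steps.
do_it = (evaluation["fit"]["meets_pedagogical_need"]
         and not evaluation["fit"]["duplicates_existing_tool"]
         and evaluation["effort"]["imports_exports_cleanly"])
print("Integrate?", do_it)  # True, in this made-up case
```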

We did a Valence exercise next, and I merely want to link two tools I’ll need when I circle back to doing some Valence work (soon, I swear!):

https://apitesttool.desire2learnvalence.com/ and http://www.jsoneditoronline.org/. The API test tool is a no-brainer, really – I knew about it before but never knew where it lived – and it’s incredibly helpful for debugging your calls and seeing what you might get back. JSON Editor Online is new to me, and something I really, really needed. I’m a JSON idiot – for some reason, JavaScript never resonated with me. I’ve always preferred Python or PHP for web scripting, despite the power of JavaScript. Guess I’ll have to put on my big-boy pants and learn it all over again. Maybe Dr. Chuck will do a JSON course like he’s done with Python?
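
For my own future reference, here’s roughly how the two fit together – a minimal sketch assuming you’ve already generated a signed URL with the API test tool (the host and the auth query string below are placeholders, not real values):

```python
import json
import requests  # third-party; pip install requests

# Placeholder signed URL: the Valence API test tool generates the
# x_a..x_d auth parameters from your app and user keys. The host and
# every value below are fake -- substitute your own.
signed_url = (
    "https://lms.example.edu/d2l/api/lp/1.4/users/whoami"
    "?x_a=APP_ID&x_b=USER_ID&x_c=APP_SIG&x_d=USER_SIG&x_t=1400000000"
)

response = requests.get(signed_url)
response.raise_for_status()

# Pretty-print the body -- the same favour JSON Editor Online does you,
# instead of squinting at a one-line JSON response.
print(json.dumps(response.json(), indent=2, sort_keys=True))
```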

Social Event at the Country Music Hall of Fame

The evening social event was great – the Hatch Show Print is actually something I might just hang in my office. There was some shop talk, some fun stuff, a drone…

Happy New Year

In the past, I’ve looked back at previous posts about what I thought would happen, and reflected on those ideas. It’s not that I don’t think reflection is valuable; it’s just that I’m not that interested in navel-gazing (hell, I can see my navel getting bigger by the day).

This year, I’ll outline some of the projects I’m currently involved with and will try to write about this year.

Work Projects

So for work, I’m working on two large-ish projects. One is a Productivity and Innovation Grant funded project, led by the University of Guelph, around learning outcomes in D2L. The project is about ensuring there’s alignment between course, program and, ultimately, University-level outcomes – and about the reporting D2L will provide to suggest where there are holes in that alignment. It seems like it will greatly improve the Analytics/Insights tool with global reporting options – something I’ve struggled with a lot.

The other is around Learning Portfolios. The department I’m embedded with has gotten some funding from the University to advance Learning Portfolios (the ePortfolio tool in D2L) on campus, and it’s looking like we will be responsible for this area from here on out. I think some improvements by D2L to the way the tool works will only help adoption – however, there are still some major hurdles to overcome before adoption is widespread. That’s not to say adoption and use haven’t grown greatly; they have – it’s just that the impact of that use hasn’t yet produced enough of a ripple to spread campus-wide. That’s our job in year two. I’m putting in a Fusion 2014 proposal to co-present one of the really interesting stories from first semester, one that ties together blended learning, learning portfolios and helping students reflect (in this case, on career choices).

Personal Projects

Other than the banal things like redo the bathroom and visit more places, I’m putting out a record with my one band and releasing another record with my other band. Not very exciting unless you like hardcore punk.

While this one is work-related, I want to put together a rubrics repository (like Rubistar, but much more focused on local courses and local sharing) that stores a collection of rubrics covering the higher-education courses the University teaches. That way it gathers together some of the best work faculty have done, recognizes them, allows them to set sharing permissions, and ultimately lets a rubric be exported as a PDF or into D2L. This is a big project, and really not on anyone’s timeline, but I want it to happen. It’ll have to be open source, and to that end, maybe it doesn’t just export into D2L but into Blackboard or other systems too. The first iteration will of course work with our system (D2L), and then maybe we can branch out.
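
A rough first pass at the data model I have in mind – nothing here is built, and every name and field is hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

# Entirely hypothetical first pass at the repository's data model --
# nothing is built yet; this is just how I picture a shareable rubric.
class Sharing(Enum):
    PRIVATE = "private"          # author only
    DEPARTMENT = "department"    # local sharing
    INSTITUTION = "institution"  # campus-wide

@dataclass
class Criterion:
    name: str    # e.g. "Clarity of reflection"
    levels: dict # level label -> points, e.g. {"Excellent": 4}

@dataclass
class Rubric:
    title: str
    author: str       # credit/recognition for the faculty member
    course_code: str  # ties the rubric to a local course
    sharing: Sharing = Sharing.PRIVATE
    criteria: list = field(default_factory=list)

    def to_export(self):
        """A neutral dict that exporters (PDF, D2L, maybe others) consume."""
        return {
            "title": self.title,
            "author": self.author,
            "criteria": [{"name": c.name, "levels": c.levels}
                         for c in self.criteria],
        }

rubric = Rubric("Reflection Rubric", "W. Lawson", "MEDRADSC 1F03",
                Sharing.DEPARTMENT,
                [Criterion("Clarity of reflection", {"Excellent": 4, "Good": 3})])
print(rubric.to_export())
```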

I’d love to help update the Feed2JS codebase to get it WCAG 2.0 compliant. I’d also love to blog more.