Dead Drop no.2

A second hodgepodge of things I’ve found on the Internet.

How to Build a Twitter Bot

The third article is the most interesting because it’s from 2014 (remember Klout scores?) and really shows how little (and how much) has changed in the Twitter bot universe. The heart of the strategy is essentially the same, but mix it with a way to learn and adapt (through AI, or the natural language processing that’s clearly being used in some more sophisticated campaigns) and I suspect we’re on the cusp of a future where anyone you meet online might not actually be a person. My Philip K. Dick collection became a lot less sci-fi and a lot more prescient.

More Bots (Cisco Spark/Microsoft Teams):

Both Cisco Spark and Microsoft Teams are available where I work. Both have vague collaboration mandates, and both can be used for communication. At this point, Spark might be more advanced with third-party enhancements and bots; Teams might end up being the platform of choice if students are involved. I’m hoping to build a chatbot to handle the common sorts of requests that our LMS support team gets (i.e. the stuff we have a predefined reply for in our ticketing system, which isn’t attended to after 4:30 PM).
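A first pass at that kind of bot doesn’t even need real NLP – keyword matching against the predefined replies gets surprisingly far. A minimal sketch in Python (the topics and reply text here are invented for illustration, not our actual ticketing content):

```python
import re

# Minimal sketch of a canned-reply chatbot: match an incoming message
# against keyword sets and return a predefined reply, or escalate.
# The topics and reply text below are invented for illustration.
CANNED_REPLIES = [
    ({"password", "reset"},
     "You can reset your password at the IT self-service portal."),
    ({"quiz", "submit"},
     "If a quiz won't submit, try another browser and send us the course code."),
    ({"enrol"},
     "Enrolments sync overnight; still missing a course after 24 hours? Open a ticket."),
]

def reply(message: str) -> str:
    words = set(re.findall(r"[a-z']+", message.lower()))
    best, best_size = None, 0
    for keywords, canned in CANNED_REPLIES:
        # Use a reply only when every keyword appears; prefer the most
        # specific (largest) keyword set that matches.
        if keywords <= words and len(keywords) > best_size:
            best, best_size = canned, len(keywords)
    return best or "No canned answer for that - opening a ticket for the support team."
```

The actual Spark or Teams wiring is just a webhook around a function like this; the hard part (and where the NLP comes in later) is matching the hundred ways people actually phrase “I can’t log in.”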

Natural Language Processing:

Natural language is key for the chatbot above, and of course the first attempt will, I’m sure, be ruthlessly primitive.

Chatbots in General:

D2L Fusion 2017 Recap

Every year it’s a bit different; some new faces appear, some old ones disappear. Jobs and roles change, focus changes. Las Vegas exemplifies capitalism at its most consumer-driven and absurd. With the event happening there, I thought D2L might be subtly telling us they were taking a gamble. Turns out the gamble is not a big, all-in shove to take the pot, but the less sexy gamble of shying away from glitzy announcements in favour of a more mature, iterative approach.

People talk about Las Vegas light shows being spectacular, but the one over Oklahoma City was pretty dang cool.

Flying Near Lightning from Jon Kruithof on Vimeo.

Usually in my recaps I’ll talk about the sessions I attended and what I learned. This year was like every other: the sessions were well done, interesting, and useful (although maybe not immediately useful in my case). I won’t break down each thing learned, or really talk about what the learning was; instead I’ll reflect on the conference as a whole and some of the underlying things that were interesting.

My focus this year was to really better understand two things: API extensibility and the data hub (Brightspace Data Platform, or BDP, which is awfully close to ODB). I think I did that. I attended one session that described how one institution uses the API to scrape everyone’s course outline, which was cool because that’s something that has come up periodically at McMaster, and I’ll have to get in touch with them to see how they handled ownership and other matters. It’s that fine line that admins run up against: they can do things, but should they? We often fall on the “no, we shouldn’t” side.

I didn’t really pay attention to the keynotes – John Baker’s become quite a good speaker, but I’ve heard it before. I’ve been working at D2L clients for almost a decade now, and I’m probably as intimately familiar with the product as many of the people working at D2L. Ray Kurzweil is interesting as a person, and while I appreciate some of his theories, I don’t think the singularity will happen (we may approach it, but never achieve it) and I don’t really dig his speaking style. The talk (for me) really circled around the changing nature of work, and that is a reality of my everyday.

I did a session about the work we’re doing badging faculty for skill in LMS use; more on that in another post.

I saw echoes of what I’ve done around faculty training in other presentations (training which I stole from others I’ve worked with and talked to). More importantly, I walked around chatting with folks I know, wondering if this was still right for me. Is EdTech still the thing I want to be working in? I’ve been in this field since 2001 (or 2003, or 2005, depending on when you want to mark my start – as a computer programming co-op student looking after a language lab, when I graduated, or when I first started working in higher education relatively uninterrupted). That’s a long time. I’m still energized by seeing the exciting things other people are doing. And almost to a person, whenever I say to those people, “hey, that’s awesome” or express my interest, there’s the same humble response of surprise that anyone would want to talk to them about what they’re doing. I don’t know if that’s a Fusion-specific thing, or an EdTech-person thing, or just my humble radar forcing me to talk to those people.

It also reminded me that all the interesting work was extending the LMS – using Node.js to create an Awards Leaderboard, doing LMS-wide analytics using the Brightspace Application API. Hell, even understanding what the Discrimination Index means in the Quiz tool.
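On that last point, the Discrimination Index is less mysterious than it sounds. One common textbook formulation (treat this as the general idea, not necessarily D2L’s exact computation, which Brightspace documents on its own) ranks students by total score and compares the top and bottom slices on each question:

```python
# One common textbook form of the item discrimination index: rank
# students by total score, take the top and bottom slice (27% is the
# traditional cut), and subtract the lower group's proportion correct
# on the item from the upper group's. A value near 1 means the item
# separates strong and weak students well; near 0 (or negative) means
# it doesn't. This is the general idea, not necessarily D2L's formula.
def discrimination_index(results, item, fraction=0.27):
    """results: list of (total_score, {item_id: answered_correctly})."""
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    n = max(1, round(len(ranked) * fraction))
    proportion = lambda group: sum(r[1][item] for r in group) / len(group)
    return proportion(ranked[:n]) - proportion(ranked[-n:])
```

A question that the strongest students ace and the weakest students miss scores near 1; one that everyone gets (or everyone misses) scores near 0, which tells you it isn’t distinguishing anyone.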

My personal underlying theme was that there’s more that can be done with the LMS if you extend its capabilities. I think we’re on the cusp of doing just that.

Once again, the after-conference experience was fun: hanging out with some of my favourite people (and recognizing that the posse from just a few short years ago is just about gone!).


The CRAP Test Doesn’t Work?

Mike Caulfield (@holden on Twitter) delivered a keynote a long while back, which I only caught on Twitter by way of others’ posts. Interestingly, one of his comments was that the CRAP Test doesn’t work in a world filled with misinformation. Now, the CRAP Test isn’t a widely taught technique, and the real work it requires (including such archaic things as WHOIS searches, name searches, geolocating posts by IP and cross-referencing with known authorities) is something the general public doesn’t do. It’s too much work. So sure, that aspect of it “doesn’t work”, but the underlying skills founded in skepticism do work – if you use them. I think that’s an important distinction. To say the tool doesn’t work because the person doesn’t use it is a bit disingenuous.

Most people don’t, because they take a shortcut: it was a parent, in-law or someone they know and trust who forwarded it. Or it already fits their point of view. Or it passed a superficial sniff test. And that’s where the conspiracy theory peddlers make money. And much like advertising, it’s not effective the first time you see the ad and think you need a new sofa; it’s the thirtieth time, when you actually need a sofa, that your brain brings up the ad you saw repeatedly.

The public has been primed for this for years. It’s essentially advertising that has replaced truth. It’s bullshit that’s replaced truth. It’s repetition that’s replaced truth.

And with that, the Internet has ended. Well, the Internet that held hope for a lot of educators, for a lot of activists, for a lot of free thinkers and artists. For marginalized folks. If we take this as a parallel to punk rock – which exploded in 1976 with a diversity of sound and voice, and by 1979 had marginalized people by getting harder, faster and more straightforward – we can see what the future of a “free” Internet will look like: lots of the same, with pockets of interesting but marginal work. Except this time, we got close to ten years of the web before it became an advertising platform for bullshit, junk and waste, with some good information out there.

This is the end of truth:

What conspiracy theorists, bullshit peddlers, technology hacks, blackmailers, and those who value lies over truth want has won. Technology has helped them get there faster than we can manage. And no one is working on a way to fight this bullshit, because there’s no money in a truth serum.

Dead Drop no.1

Dead drop is a term that describes a way for two espionage agents to signal an exchange, or some event. This collection of sites with a bit of text is similar to that. It’s funny how I’ve done these things in waves – this time the infrequency will continue but I’ll try to use this titling to link it all together.


Both these articles are deep-ish dives into the related fields of gamification, digital badges and extrinsic motivation. One is from a design perspective – which is part of the story I’ll be telling this summer, about how well-designed badging experiences can help with adoption, and really understanding your audience and who you’ll be badging. The other is part of the HASTAC research project on K-12 students and looks at the effectiveness of badges with that group.

Data Visualization/Storytelling with Data

I’ve really rediscovered how much I love A List Apart. I haven’t built a full-on website in a couple of years, and wouldn’t consider it part of my daily job, but I really do love designing things. That said, there’s a narrative story to most sites, whether you recognize it or not. Big data visualizations are telling you a story the same way a website tells (or sells) you one.

Web Design

A really great framework for sites, courtesy of Cogdogblog. I’ve been thinking about creating a website for myself – one that would link here and to the photo gallery I’m planning to do up – and trying to build something that touches the intersecting parts of my life. I’m not sure why, after 20 years on the Internet, I think this is the time to build a vanity site, but maybe it is? I think back to all the things I’ve done that should be documented somewhere (but aren’t) and all the work I’ve done on ePortfolios, and it seems like I should be doing this more seriously. And age is a factor, I’m sure. It’s funny that I’ve tried to keep my social life, my family life and my “professional” life separate, and done quite well at segmenting the three areas. I have no problem sharing parts of those lives within each context, but I’m 99% sure that people who know me from educational technology circles don’t necessarily care about my band.

Deep Learning/Surveillance Society

Yeah, creepy. I’ve had three conversations about leaving Facebook with three different people this week. Reminder, in 2014 Facebook did the exact same thing:

Reclaiming My Digital Identity

After deleting my Yahoo accounts and thinking about the stuff that’s gathered all around the web, I think it’s time to reclaim my stuff. So I made a list of all the places I consume/create things on the web:

  • Facebook – how I connect to family and friends, manage a band page
  • Flickr – still have some photos there
  • Tumblr – Hamilton Punk and Hardcore visual archive
  • Picasa/Google Photos – have quite a few photos from my phone
  • Instagram – yeah, photos here too
  • Twitter – my edtech tweets
  • Google+ – not really but some stuff there
  • various message boards – music, music and more music
  • LinkedIn – work related
  • PebblePad – ePortfolio
  • Trello – abandoned workflow/project management
  • Vimeo – portfolio related videos
  • YouTube – portfolio, music (two separate accounts), general watching (yes, a third account for stuff I’ve watched)
  • Discogs – record collection
  • Google Drive/Docs – three different accounts for three reasons (work, personal/travel, music)
  • Dropbox – filesharing
  • Diigo – bookmarks 
  • The Old Reader – RSS
  • Netvibes – RSS

Some of those make sense to reclaim (photos for sure); some don’t, because the purpose is to leverage their platforms to communicate. The ones in italics make some sense to reclaim to me. The one benefit is that the storage for stuff out there is paid for by someone else. However, I’m thinking about how a website reflects one’s identity, and maybe it’s time for a more holistic version of what I am, who I am.

Shenzhen: China’s Cooperation

I watched this documentary the other week. It really made me think about how cultural imperialism (i.e. capitalism) might just not be a great thing for economics going forward. Work through this with me: if China can innovate quicker because of the sharing of ideas and materials, and capitalism restricts the sharing of ideas thanks to intellectual property, patents and copyright, how will western countries be able to keep up? Maybe they’ve lost already? In fact I think we’re seeing the BRIC countries (Brazil, Russia, India, China) ascend to superpower status not on military might, but on economic might. This, to me, is a very clear sign of the decline of Western civilization that we’re living through.

The other thing about this is the absolutely stunning (I assume) drone shots, and the generally beautiful cinematography of the entire thing. I’ve been falling back on some of my media development background lately, and after years of hating films (mostly because we analyzed The Manchurian Candidate’s opening sequence over 14 weeks in frame-by-frame detail, and that arduous process singlehandedly destroyed my ability to turn off the critical framework), I’ve finally come around to enjoying the beauty of stuff like this without thinking about how to make it better. Educational media needs to become far more literate with this sort of visual storytelling, because it’s cheap to do now and really, really accessible. Every place of higher education has a media group – and they can do this if you can’t. It used to require thousands of dollars of equipment, studios and editing suites, but really, if you have a modern DSLR (or mirrorless compact) and can get good photos, you can probably get good video. Sound might take a cellphone and a post-production sync with the camera – but that’s easy enough in iMovie, Adobe Premiere, Final Cut Pro or Camtasia.

Goodbye to All the Yahoos

No, this isn’t some grandiose sign-off from public life (although I’m fairly certain I should probably do that…). Really it’s the end of an era: I’ve finally deleted all my Yahoo-related accounts. I really only had two Yahoo accounts, one for life and one for “professional life”. But with stuff like Facebook linked to my life account at Yahoo (who’ve had some serious data breaches), I really didn’t feel safe anymore. I’d had the Yahoo account since the late ’90s, because I was a Rocketmail user and Yahoo bought them out. It was a long process to finally pull the plug. I started migrating mail from Yahoo late last year. I thought about migrating my band photos from Flickr but realized that I didn’t really care – I still have them and can post them in a gallery on my own website. Or maybe I don’t really care about them either? I don’t know. Flickr’s social aspects are a mere blink of what they used to be. I don’t know that the photos need to be online at all. The hardest part is all the places with logins tied to the Yahoo account. Holy smokes, ten years of logins, some at places that don’t exist anymore.

The tipping point was my Instagram account being hacked, the e-mail changed, and then the account deleted, in rather quick succession. I got on to the associated Yahoo account and deleted it. And that was it. Identity snuffed out. Transferring all the angry e-mails, pranks and early evidence to Gmail – for searchability, and so it exists somewhere (I do have some prized correspondence with now-dead people that I treasure, as well as digital ephemera that is fun to dig up and look at from time to time) – was comforting.

With all that said, I don’t think moving to Gmail is that great an idea but it’s what I’ve done. I keep the name dietsociety because, well, it’s in a lot of places, but 15 years since doing anything resembling a zine, maybe I need to let that go too.

Blackboard Thinks They Know How You Teach

In November of last year a blog post by Blackboard was making the rounds: a Blackboard study documented how instructors use their system, which was somehow conflated with knowing how teachers teach. We could talk about how terrifying this blog post is, or how devoid of solid analysis. Instead I’d like to turn over a new leaf and not harp on about privacy, or a company using the aggregate data they’ve collected with your consent (although you probably weren’t aware that when you agreed to “allow Blackboard to take your data and improve the system” they’d produce this sort of “analysis”), and just tear into the findings.

Their arbitrary classifications (supplemental? complementary?) are entirely devoid of the main driver of LMS use: grades, not instructor pedagogical philosophy – students go to the tools they’re going to get marked on. Social? I guarantee if you did an analysis of courses that weight use of the discussion tool (note, that’s not about quality) at greater than 30% of the final grade, you’d see a hell of a lot more use of discussions. It’s not that the instructor laid out a social course, with students making an average of 50 posts throughout (which, broken down over 14 weeks of “activity”, works out to about 3.5 posts per week – strangely similar to the standard “post once and respond twice” to get your marks); it’s that discussions are the only tool that gives some power to students to author.

Time spent is also a terrible metric for measuring anything. Tabs get left open. People sit on a page because there’s an embedded video that’s 20 minutes long. Other pages have text which is scannable. And how are connections measured? Entry and exit times? What if there’s no exit – how does the system measure that? (Typically it waits for a set amount of inactivity and then assumes an exit.) Were those values excluded from this study? If so, how many were removed? Is that significant?
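That timeout behaviour is easy to sketch. Assuming a click-stream of timestamped events and an inactivity cutoff (the 30-minute value here is an illustrative assumption, not any vendor’s actual setting), “time spent” ends up meaning something like:

```python
# Sketch of how "time spent" is typically reconstructed from a click
# stream: the gap between consecutive events counts as engaged time,
# but a gap longer than an inactivity timeout means the exit was never
# observed, so that gap is discarded rather than counted. The
# 30-minute timeout is an illustrative assumption, not a vendor value.
INACTIVITY_TIMEOUT = 30 * 60  # seconds

def estimated_time_spent(event_times, timeout=INACTIVITY_TIMEOUT):
    """event_times: sorted Unix timestamps of one user's page events."""
    total = 0
    for prev, curr in zip(event_times, event_times[1:]):
        gap = curr - prev
        if gap <= timeout:  # otherwise: abandoned tab, unknowable time
            total += gap
    return total
```

Notice what this can’t distinguish: a tab left open just under the timeout counts fully, and 20 minutes of attentive video watching looks identical to 20 minutes of staring at the wall. That’s the whole problem with the metric.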

Now Blackboard will say that the scale of the data collected mitigates those outliers. I don’t buy that for a second, because LMS’s are not great at capturing this data in the first place (in fact server logs are not much better) – there’s very little pinging for the active window or other little tricks, things that can really only be assessed using eye-tracking software and computer activity monitoring. Essentially garbage in, garbage out. We have half-baked data, viewed abstractly, with an attempt made at some generalizations.

Here’s a better way to get at this data: ask some teachers how they use the LMS. And why. It’s that simple.

If you look at Blackboard’s findings, you should frankly be scared if you’re Blackboard. Over half of the usage (53% “supplemental”, or content-heavy) could be done in a password-protected CMS (Content Management System). That’s a system that could cost nothing (using Joomla or any of the millions of free CMS packages available) or be replicated using something institutions already have. The only benefit institutions get is that they can point at the vendor when stuff goes wrong, and save money by outsourcing the expertise to an external company.

If you take the findings further, only 12% of courses (“evaluative” and “holistic”) get at the best parts of the LMS, the assessment tools. So 88% of courses reviewed do things in the system that can be replicated better elsewhere. Where’s the value added?

Why Can’t Students Opt-Out of Data Collection in Higher Ed?

You know, for all the talk from EdTech vendors about being student-centred (and let’s face it, LMS’s and most of the other products are not student-centred) and all the focus on collecting data from student activity – why don’t products have an easy opt-out (being student-centred and all that) to disallow data collection?

What makes matters worse in many ways is that the data collection is hidden from students’ view. For instance, many LMS’s track time spent on content items or files uploaded. This tracking is never made explicit to the student unless they go and dig into their own data – and doing that is incredibly difficult, and you still don’t get a complete picture of what is being collected. If I were a more conspiratorially minded person, I’d suggest it was done on purpose, to make it hard to understand the amount of total surveillance accessible to a “teacher” or “administrator”. I’m not. I honestly believe that the total surveillance of students in the LMS is really about feature creep, where one request turned into more requests for more information. LMS vendors, on the other hand, want to keep their customers happy, so why not share what information they have with their clients – after all, it’s the client’s data, isn’t it?

Actually, it’s not. It’s not the client’s data. It’s the individual’s data. To argue otherwise is to claim that what someone does is not their own – it reduces agency to a hilariously outdated and illogical idea.

The individual, human user should be allowed to share or not share this data – with teachers, with institutions, or with the external companies that host it under an agreement that was probably signed without the individual even knowing. There’s an element of data as a human right that we should be thinking about. As an administrator I have a policy I have to adhere to, and a personal set of ethics that frankly are more important to me (and more stringent) than the obscurely written-in-legalese policy. With an unscrupulous, or outright negligent, LMS administrator, all bets would be off. They could do things in the LMS that no one, except another administrator, could track. Even then, the other administrator would have to know enough to look through hundreds of different changelogs, scattered across different tools and different courses, and do essentially a forensic search that could take a good long time before any damage could be undone. That lack of checks and balances (a turn of phrase that appears purposefully, as I think we’ll see what a lack of checks and balances looks like in the US over the next few years) – which could be implemented as part of using the system, but isn’t – leaves education in a precarious situation.

Letting students say “I only want my data shared with my College” or “I only want my data shared with company X for the purposes of improving the system” shouldn’t be hard to implement. So why hasn’t it been done? Or even talked about?

In my opinion, the data that gets harvested (anonymously, of course) provides more important information to the company about how people use the system than an opt-out button would be worth in optics. It allows Blackboard to say how instructors use their system. We could talk about how terrifying that is (instructor use is a proxy for how students use the system, because LMS’s give instructors the power to construct the student’s experience), or how devoid of solid analysis. I’ll deal with the analysis later, so let’s just consider how this is entirely without context. Blackboard-hosted sites have been harvested by Blackboard (probably with the consent of the people who signed the contracts, not the instructors who actually create things in the system, or the students who engage with it) to tell you how you teach. In one of the cited pieces, Blackboard says they used this data to improve their notifications. If I put a study through ethics review saying I was going to look at improving notifications, and then released a post about how people used the whole system, it may very well be within my rights (and I suspect it is within Blackboard’s), but it is ethically murky. The fact that they’ve released it to the public allows the public to ask these questions, but no one really has. Or if they have, I missed it.

The fact that Blackboard can do this without really clearing it with any client (and I’m talking about Blackboard because they did it, but there’s a similar post from Canvas making the rounds, about discussion posts with video being more engaging or some such idea) is chilling. It also constrains how we perceive we are able to use the LMS – it sets the standards for how it is used.

Learning Technologies Symposium 2016 Recap

McMaster University holds a Learning Technologies Symposium every year, and this year’s event was spread over two days just after (Canadian) Thanksgiving. I have a bit of a biased view, as I’ve been one of the organizers for the last four years, so unlike my other recap-type posts, where I share the things I learned attending sessions, this one will document both the things that I did and the things I learned.

A few days before, we had some session cancellations, so I, being the diligent jack-of-all-trades, offered to run a few sessions to fill in the gaps. The first was on using PebblePad as a peer-review platform, the second was a session I co-presented with a colleague on WebEx, our new web conferencing tool, and lastly I ran a quick introduction to digital badges.

I also had to dip back into my history as a media developer to help sort out (with the help of our current digital media specialists) how to get a video camera (actually two video cameras) with HDMI outs into a WebEx broadcast of our keynote (the answer: it’s best to run them through a video mixer, in this case a Roland VR-50HD). It came together fairly well, with the caveat that WebEx is persnickety when you don’t connect devices in the correct order (i.e. it accepted the video feed fine but didn’t switch audio to match; a quick audio reset fixed the issue).

So I missed most of Barbara Oakley’s keynote, which dealt with a lot of the scientific research into how we learn and pay attention, as well as a lot of the discussion that followed, as I was busy with the setup and breakdown of the streaming rig.

The first session I attended was one I was presenting in, for 15 minutes, about peer review – the other half of the session was about Individual Assessment and Personal Learning in the context of an Italian language course (which, as an aside, uses an open textbook). Unfortunately, no one showed up. It was still great to catch up with Wendy, who was presenting in the same slot, and we chatted about all sorts of educational technology things.

Onto day two… I missed the first block of sessions preparing for my own session on WebEx (and catching up on e-mail from the last week, when I was off). The WebEx introduction session went really well; people were happy with the visual fidelity and audio quality. They also seemed intrigued by the testing option, but who knows if anyone will actually take it up. I’m really hoping we can do something with it – maybe review classes at the end of a semester could take advantage of it?

Day two had a series of presentations on some of the VR/AR work being done on campus right now, which is new and exciting even though I’m not involved in it. Every project I’ve heard about is using the technology in a way that makes sense and should enhance learning (unlike in the past, when technology was used for flash and wow without any consideration of whether it made sense). I wish I hadn’t presented twice on day two, because those sessions would’ve been great to see for more than a few seconds.

At lunch, David Porter (who I didn’t know I was following on Twitter), the new CEO of eCampusOntario (our version of BC Campus), gave an overview of how he sees eCampusOntario developing and essentially threw in a job recruitment pitch as well. We’ll see how the latest round of funding for research and projects goes, because the call was very specific and rigorous – so that vision is really well defined. I did start a proposal, but of course it was due on October 31st, and I was in Texas watching my favourite band play their last show (NSFW), so I was a little busy.

After lunch I did a presentation on digital badges (do we need them?) that was a brief introduction to digital badges and tried to answer the question (the précis: no, we don’t need them, but you may still want to use them where you want to explicitly assess skills or document experiences – transcripts and portfolios can handle the rest). This was paired with an instructor who uses our LMS for exams in a fairly unusual way. The exam is a paper-based math exam, mostly involving generating matrices. The students complete the exam using pencil, paper and calculators, then are given half an hour to transcribe their work into D2L. The instructor then uses the auto-marking feature to grade the exams as fill-in-the-blank questions. It’s a pretty clever way to make marking easier.
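To sketch why this works, auto-marking transcribed numeric blanks is really just parsing each answer and comparing it against the key. This is a generic illustration of the approach, not D2L’s actual fill-in-the-blanks matching (which works on answer text):

```python
# Generic sketch of auto-marking transcribed numeric blanks: parse each
# student answer and compare against the key within a small tolerance,
# so " 2.0 " earns the mark for 2 but garbage earns nothing. This is an
# illustration of the approach, not D2L's actual fill-in-the-blanks
# matching (which compares answer text).
def mark_blank(student_answer: str, correct_value: float, tol: float = 1e-6) -> bool:
    try:
        value = float(student_answer.strip())
    except ValueError:
        return False
    return abs(value - correct_value) <= tol

def mark_matrix(answers, key):
    """Return marks earned across the blanks of one transcribed matrix."""
    return sum(mark_blank(a, k) for a, k in zip(answers, key))
```

The clever bit in the instructor’s workflow is pushing the transcription onto the students: once each matrix entry is its own blank, the per-entry comparison is trivial and the marking is instant.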

After closing remarks – and a giveaway – it was all wrapped up for another year.