D2L Fusion 2017 Recap

Every year it’s a bit different; some new faces appear, some old ones disappear. Jobs and roles change, focus changes. Las Vegas is a place that exemplifies capitalism at its most consumer-driven and absurd. With the event happening in Las Vegas, I thought that D2L might be subtly telling us that they were taking a gamble. Turns out the gamble is not a big, all-in shove to take the pot, but perhaps a less sexy one: shying away from glitzy announcements toward a more mature, iterative approach.

People talk about Las Vegas light shows being spectacular, but the one over Oklahoma City was pretty dang cool.

Flying Near Lightning from Jon Kruithof on Vimeo.

Usually in my recaps I’ll talk about the sessions I attended and what I learned. This year was like every other: the sessions were well done, interesting, and useful (although maybe not immediately useful in my case). I won’t break down each thing learned, or really talk about what the learning was; instead I’ll reflect on the conference as a whole and some of the underlying things that were interesting.

My focus this year was to better understand two things: API extensibility and the data hub (Brightspace Data Platform, or BDP, which is awfully close to ODB). I think I did that. I attended one session that described how they use the API to scrape everyone’s course outline, which was cool because that’s something that has come up periodically at McMaster, and I’ll have to get in touch with them to see how they handled ownership and other matters. It’s that fine line that admins run up against: they can do things, but should they? We often fall on the “no, we shouldn’t” side.

I didn’t really pay attention to the keynotes – John Baker’s become quite a good speaker, but I’ve heard it before. I’ve been working at D2L clients now for almost a decade, and I’m probably as intimately familiar with the product as many of the people working at D2L. Ray Kurzweil is interesting as a person, and I appreciate some of his theories, but I don’t think the singularity will happen (we may approach it, but never achieve it) and I don’t really dig his speaking style. The talk (for me) really circled around the changing nature of work, and that is a reality of my everyday.

I did a session about the work we’re doing badging faculty for skill in LMS use; more on that in another post.

I saw echoes of what I’ve done around faculty training in other presentations (which I stole from others whom I’ve worked with and talked to). More importantly, I walked around chatting with folks I know, wondering if this was still right for me. Is EdTech still the thing that I want to be working in? I’ve been in this field since 2001 (or 2003, or 2005, depending on when you want to mark my start – as a computer programming co-op student looking after a language lab, when I graduated, or when I first started working in higher education relatively uninterrupted). That’s a long time. I’m still energized by seeing the exciting things that other people are doing. And almost to a person, whenever I say to those people, “hey, that’s awesome” or express my interest, there are the same humble responses of surprise that anyone would want to talk to them about what they’re doing. I don’t know if that’s a Fusion-specific thing, or an EdTech person thing, or just my humble radar forcing me to talk to those people.

It also reminded me that all the interesting work was extending the LMS – using Node.js to create an Awards Leaderboard, doing data analysis for LMS-wide analytics using the Brightspace Application API. Hell, even understanding what the Discrimination Index means in the Quiz tool.

My personal underlying theme was that there’s more that can be done with the LMS if you extend its capabilities. I think we’re on the cusp of doing just that.

Once again, it was a fun after-conference experience, hanging out with some of my favourite people (and recognizing the posse from just a few short years ago is just about gone!).

 

Fusion 2016 Recap

So, as always, D2L’s Fusion conference is a good time. I always learn a lot, but I always have a good time as well. Usually I like to fly out of Buffalo airport, but the way things worked out I was flying out of Toronto Island Airport (not Pearson, which is the big one that most people use). It was a great experience, but different. I am always really early for flights and prefer to pay in cash; at Toronto Island you don’t have to be two hours ahead, and you can’t pay for anything in cash. Oh well, I’ll know for next time. I did run into a ton of people who were going to the conference, including people from Ryerson University, Conestoga College, and the University of Guelph. Jason, a co-conspirator in provocative thinking (or a troublemaker and a friend), was on the plane, and gave me the best compliment on my readiness to get through customs: “you’re such an organized anarchist”.

I knew this year’s conference was going to be a get-up-and-go-quickly kind of affair; the whole deal started on the Sunday night, so I had all of six or seven hours before it began to get in some record shopping, a meal, and maybe see a sight or two (I don’t really sightsee; I’ve been to New York City half a dozen times and have never gone to the Statue of Liberty, seen a Broadway play, or gone up the Empire State Building). Having never been in DC, I scoped out their record store selection (in hindsight Joint Custody was the best of the bunch, but none of them were what I’d call “bad”), realized it all stemmed from U Street, and headed off that way. I didn’t really get a chance to hunt down some go-go, but that was due to a lack of time. After a stop at Ben’s Chili Bowl (doubling for my sightseeing and a meal, because the importance of the restaurant cannot be overstated) for a half-smoke (no onions) and a bit of history, records scored, it was time to get back to the hotel and get situated for the evening conference welcome.

When the conference welcome started I ran into Barry, with whom I have a kinship, so we typically hang out as much as possible (which isn’t as much as I’d like, when you consider the scope of this sort of thing). Also, getting odd photos with the former Premier of Ontario seems to be a thing for me:

After a brief night out, I returned to the hotel and noticed that I had missed shaving a chunk of hair on the side of my head, right above my ear. How did I go through the whole day, talking to people I know, in two different states and a province, with no one saying, “hey, your barber missed a spot”? At a quarter to midnight it was off to the 24-hour CVS to get some disposable razors. My friends are a bunch of jerks. OK, not really. It also allowed me to catch a ton of Pokémon on my stroll, so it was all for good.

Day One

For whatever reason, perhaps it’s my inner competitor, I always like playing D2L’s event games. This year’s app was well built and a neat motivator for me to fill in feedback. I know, judging from how many people stuck around for codes at the end of the presentations, that I’m not the only one.

I was early for the breakfast (getting up at 6 AM for some ungodly reason), so I wandered around the vendor displays and talked to Kaltura and ReadSpeaker about their products (Kaltura we have as a streaming solution for our department’s needs; ReadSpeaker we have looked at for a while, but no group on campus has suggested they want to foot the bill).

The opening remarks were good enough – sales pitch for how good D2L is really – but that’s to be expected. I was really interested to hear about some of the success stories that D2L has had over the last year in different areas of the world and education sector. It seems that they’re really pushing into the corporate training space, and an LMS (especially when you consider the work being done in outcomes that they’re investing in) makes sense in that context. I wonder if that diversification of clients is something that will make getting higher education’s needs met harder? Will D2L spread themselves too thin? I guess we’ll find out over the next few years.

Getting Started with Brightspace Data Access

I always try to attend Valence API (now called the Application API) sessions even though I’m doing zero API work – one of these days I’ll get back to the programming stuff I used to do (just in time for my outdated skills to be even more outdated…) because there’s some need to do it. In addition to the Application API, D2L is opening up its Data API, so you can access the data storehouse with custom queries rather than using the Application API to periodically collect that data yourself. It only works with the Brightspace Data Platform, which is hosted on Amazon Web Services. There are still some questions floating out there about what exactly is stored in Amazon, and from this session I gather a lot of it is just transaction stuff – the code to call the data, but not the data itself. If there are data points stored, they would have to be obfuscated. They also mentioned a Data Hub, which lets you get data extracts (if you’re an Insights/Analytics customer first) in CSV format, so hopefully that means we can run some network analysis on how people use the system based on extracted data.
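Once an extract is in hand, even the standard library is enough to start poking at it. Here’s a minimal sketch; note that the column names (`UserId`, `OrgUnitId`, `Action`) are my invention for illustration, not D2L’s actual extract schema:

```python
import csv
import io
from collections import Counter

# Hypothetical sample of a Data Hub CSV extract. The columns are
# assumptions for illustration only; real data sets have their own schemas.
SAMPLE_EXTRACT = """UserId,OrgUnitId,Action
101,2001,Visited
102,2001,Visited
101,2002,Visited
101,2001,PostedDiscussion
"""

def actions_per_user(csv_text):
    """Tally how many logged actions each user has in an extract."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["UserId"]] += 1
    return dict(counts)

print(actions_per_user(SAMPLE_EXTRACT))  # {'101': 3, '102': 1}
```

The same per-row loop could just as easily pivot on `OrgUnitId` for per-course tallies, which is where the network-analysis ambitions would start.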

Magic Brightspace Widgets

Again, another session I went to that wasn’t about what I’m doing currently. This one detailed how you can use jQuery and the DOM to work through D2L’s design of the system and create widgets that change how an entire course looks and feels. Essentially you create code in the middle – which wouldn’t work at scale, but at the course level might be useful. Some really impressive stuff – too bad I couldn’t find the Prezi online anywhere; I’d like to take another look at the examples that Ms. Milanovic of Deakin University showed.

Who’s Got Game? How Badging and Certificates Drives Learner Engagement

Matt Murphy of D2L ran this session, and I wanted to attend as I’m co-running a workshop on Tuesday about badging programs – and I wanted to see what he had to say about badges. Thankfully, he covered a lot of the groundwork about what badges are (using one of the same images I used in my slides!), when to use them, and a lot of the foundational work that we didn’t have time to cover in our workshop. Big high fives to Matt for laying the groundwork for us to be successful. And for introducing me to Untappd, an app that documents the beers you consume.

Valence Possibilities: Demonstrating Automated Course Setup and Enrollment Applications Supporting Distributed and Centralized Curricular Models

This was an interesting session about how one university uses the API to manage course creation in creative ways (essentially copying from master courses or sandboxes on creation). Most of their code was in C#, so it’s not a lot of use to me, but the principles are always good to have a look at.

Keynote by Sir Ken Robinson

I will say, Sir Ken Robinson’s talk was fun and thoughtful, but lean on content. Essentially he riffed on the nature of individuality, and how creativity is a context-specific idea, for an hour. An entertaining hour, yes, absolutely, but I didn’t learn anything from it that I hadn’t heard in his TED talks or other sessions. I imagine he’s a great guy to get a pint with and mull over ideas with, but this talk didn’t exactly set my heart ablaze. It was good; maybe I was expecting the feeling I had the first time I saw the original TED talk about the ways systems (not just educational systems, but all systems) handle individuality (or creativity, in some contexts).

A Night at the Newseum

It was cool to see a piece of the Berlin Wall. Other than that, it seemed very American-centric – and I guess that’s not really a valid criticism as it’s in Washington DC, but I was hoping for insight into how news exists elsewhere. Meh. Back to the hotel to grab a decent pint (as much as I like free beer, Amstel Light, the best of the bunch, requires a follow-up with something a little more full-bodied) and then head up to finish some last-minute touch-ups to the presentation, documents, and other stuff for tomorrow’s workshop.

Day Two

Got up, did some last-minute touch-ups to the presentation part of the workshop for today, reviewed my bits, and was early for the Technical Account Manager breakfast. We have a good relationship with JP, our TAM, so it was neat to have some of the other TAMs around as well and see who they are. The Solution Spotlight was next; this year they announced that they’re giving YouSeeU to all their clients – although we’ll have to see where the data is hosted, what the terms of the integration are, and what the specifications of what we’re getting are. I did note that this might impact our WebEx offering – or maybe not; it depends on which product our faculty and students gravitate towards. Off to sessions.

Reimagining Portfolio Practice: An Audience Led Exploration of High Impact Portfolio Practice

Originally, this was intended to be our Learning Portfolio Program Manager co-presenting with Shane about PebblePad’s flexibility to provide portfolios of different types for different uses, but she retired, and Shane from PebblePad asked me to be in the audience should there be a question about integration between D2L and PebblePad. It was a good whirlwind tour of different ways to use portfolios.

Start Your Badging Program Today!

This was the session I co-ran with Lavinia Oltean, who I often introduce as the younger, smarter, better-looking, less mustachioed, more organized version of me. This may be the first time in my life I’ve left a session feeling really, really good about it. We hit all our timings (it got bumpy towards the middle), got through the bit of an introduction, let people get to work, and had great interaction. I think people came away with a good sense of what could be done with badges, and had a good start in working through what a badge could mean in their context.

Writing Your Own LTI + Combining It With Valence Calls = Solving Unique Problems

I missed the start of this one, as the end of our presentation leaked into the break and the conversations were too good to abandon. The part of the session I caught showed that they created a widget that used the API to create a test student account and LTI to connect their API use to Brightspace. I’m not sure why they didn’t simplify it a bit and just use the API to create and enroll the test student account (for faculty to use to view a course), but as a proof of concept it’s pretty cool.
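For what it’s worth, the simpler route I had in mind would look something like the sketch below. The route strings and field names follow the general shape of Valence REST calls, but they are illustrative assumptions, not verified endpoints; building the requests as plain data keeps the idea inspectable without a live instance:

```python
# A sketch of creating and enrolling a demo-student account via the
# Application (Valence) API. Routes, version number, and body fields are
# assumptions modelled on the general shape of Valence calls.

API_VERSION = "1.0"  # hypothetical Learning Platform version string

def create_user_request(first, last, org_defined_id, role_id):
    """Build the (method, route, body) triple for creating a user."""
    return (
        "POST",
        f"/d2l/api/lp/{API_VERSION}/users/",
        {
            "FirstName": first,
            "LastName": last,
            "OrgDefinedId": org_defined_id,
            "RoleId": role_id,
            "IsActive": True,
        },
    )

def enroll_user_request(user_id, org_unit_id, role_id):
    """Build the (method, route, body) triple for enrolling that user."""
    return (
        "POST",
        f"/d2l/api/lp/{API_VERSION}/enrollments/",
        {"OrgUnitId": org_unit_id, "UserId": user_id, "RoleId": role_id},
    )

method, route, body = create_user_request("Demo", "Student", "demo001", 110)
print(method, route)  # POST /d2l/api/lp/1.0/users/
```

In a real integration these triples would be signed and sent with an authenticated user context; the point is just that two API calls could replace the widget-plus-LTI machinery for this particular use case.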

Maximizing the Power of Brightspace: How a College is able to Generate Official course Syllabi from the Learning Environment

Again, I was late for this one as I was chatting with folks I knew I wouldn’t see again until the next Fusion or the regional conference. La Cite College has an external site that contains their course outlines, and has used the Application API (aka Valence) to read that external database and create a PDF course outline for faculty, automatically included in D2L. This was by far the slickest idea I saw using the API at this Fusion. Really nice work.

What’s All this Buzzing About Next Generation Digital Learning Architecture?

Rob Abel from IMS Global gave a talk about how the people who help faculty learn about technology will move from an IT support sort of role, to a strategy consultant role as a result of the move from the monolithic LMS to a more distributed, next generation learning environment. His talk was wide ranging, and I’ve found it’s available here: http://demo.desire2learncapture.com/139/Watch/3688.aspx

Keynote by Angela Meiers

I could only stay for 15 minutes because the concierge suggested we take public transit rather than a cab to the airport. Jason and I decided to go together, seeing as we were on the same flight and needed to be there at the same time. Basically, “you matter” is what I got from the brief time in Angela’s talk.

The Way Home..

So I admit, there were fewer interesting run-ins, weird moments, and funny things occurring than I’m used to at a conference. And then there was the way home. It got good once we got on the express bus to Dulles. Jason got called a Tom Cruise lookalike; I had to get him back on the bus when he got off at a parking lot just outside Dulles (in his defense, everyone was getting off the bus, so it seemed logical); the driver laughed at us; and we had a great chat the rest of the way.

That’s not the end of it. After clearing TSA in what felt like record time, we were both starving and, having an hour before the flight boarded, thought a meal was in order. Of course, the restaurant we chose doesn’t move fast. You’d think that perhaps airport restaurants understand some people might be in a hurry? No? Me either.

Needless to say, Jason settled up (I had cash, so no need to wait!) as I ran to the gate to get on the plane. A minute before the doors closed, he got on too. It was a bit of a last-minute dash.

Digital Marginalia 3 – Connective Tissue is Key

I ran across a scrap of paper that I had scrawled two ideas on:

“Aesthetics help inform people of the usual cues for identity. They identify a person as a participant of a culture”.

The whole idea that aesthetics are a cultural artifact is one I hadn’t thought about for probably ten years. So it led me down a rabbit hole of thinking about aesthetics as a white thing – as in, a racist artifact of a dominating culture. Then I ran across a Washington Post article about film and the inherent racist qualities of the technological process (film and lights calibrated for white skin rather than a multitude of tones).

I started thinking about how we (and by that I mean white people) end up designing software, websites, and apps from that privileged perspective. I haven’t really dug deep enough to think about it more than a passing thought, but I wonder about these things when I have some moments alone. It’s not a comfortable space, as I’ve always prided myself on being an anti-racist sort of fellow.

“There is only one literacy – the one item that you need to be literate is just in different forms.”

I think this is from Stephen Downes, or it could be from someone else. Whoever said it, it resonates with me right about now.

I’ve been working on using D2L’s Valence API to extract an entire course’s discussions for network analysis, and found Philip Larsen’s presentation from Fusion 2014 (which I attended, but not that session, dammit!), which will pull the data out; then I’ll use PHP to create a CSV for input into the network analysis tool. There’s not much more to write about, as I’m basically using the existing project wholesale and doing some heavy lifting after the fact.
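The eventual code will be PHP, but the transformation itself is simple enough to sketch in Python. The post records below are a made-up shape (field names like `ParentPostId` and `AuthorUserId` are assumptions, roughly modelled on how threaded discussions are usually represented): each reply becomes a (replier, original author) edge, and the edges get written out as the Source/Target CSV that network analysis tools generally accept.

```python
import csv
import io

# Hypothetical post records; the field names are assumptions for
# illustration, not the actual shape of the Valence discussion routes.
posts = [
    {"PostId": 1, "ParentPostId": None, "AuthorUserId": 11},
    {"PostId": 2, "ParentPostId": 1, "AuthorUserId": 22},
    {"PostId": 3, "ParentPostId": 1, "AuthorUserId": 33},
    {"PostId": 4, "ParentPostId": 2, "AuthorUserId": 11},
]

def reply_edges(posts):
    """Turn reply threading into (replier, original author) edges."""
    author_of = {p["PostId"]: p["AuthorUserId"] for p in posts}
    return [
        (p["AuthorUserId"], author_of[p["ParentPostId"]])
        for p in posts
        if p["ParentPostId"] is not None
    ]

def edges_to_csv(edges):
    """Write a Source,Target CSV that tools like Gephi will accept."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Source", "Target"])
    writer.writerows(edges)
    return out.getvalue()

print(edges_to_csv(reply_edges(posts)))
```

From there the “heavy lifting after the fact” is whatever the network tool does with the edge list: degree counts, centrality, clustering, and so on.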

D2L Badging

So, I’ve been poking around with D2L’s badging/certificates tool since it was unveiled in September (and actually writing this blog post since then!) on our test instance, and it’s been fun actually thinking about and configuring a new tool. It’s been so long since I’ve spun something new up that I had to really work the part of my brain that frankly hasn’t been worked in a long time – the “what if?” part. What really stinks is that with badging we actually don’t want every instructor to be able to create badges. That means controlling access to the tool via navigation bars (which we don’t allow our instructors to change) or creating a new role. Either way is a bunch of manual work for me.

The one big problem, and this isn’t by any means a knock against us, is that I haven’t had time to properly configure this on test in a way that makes testing easy – I’ve just been too busy working with ePortfolios and PebblePad to take the time. Thankfully there are documents like the Assessments Administrator Guide on the Brightspace Community that help a lot when working through the user permissions (which frankly are poorly documented) and what used to be called DOME variables (now the Config Variable Browser). So we’ve slowly got the technical side working, and we’ve run into a huge issue that could conceivably cripple the whole damn thing. Issuing a certificate or award (or digital badge) means that we’re giving instructors power that Registrars previously held very closely. So we’ve devised a way to do this without ruffling institutional feathers – and with a way to control how the badges are used.

We really want to avoid badging becoming another way to give grades or learning outcomes. There is already a wealth of tools in the LMS that do this (uhhh, Grades and Competencies/Outcomes), so do not recreate what you already do and add a pretty picture to it. That is useless, and students will inevitably find badges useless in that context. More importantly, external parties will find badges useless – and if you really want badges to hold some value, you need external people to value them. Giving a badge that says you got an A in a course is frankly useless (as useless as the A is in determining what a person is capable of or knows).

So as an institution we are looking at ways that we can ensure that people using badges are using them in ways that actually contribute to the student experience, by either awarding badges that have no representation elsewhere (like experiences that could make up a part of a co-curricular record) or awarding badges for skills that are not explicitly found within the core curriculum. Students already have a transcript that uses grades as a way of communicating ideas about broad topics. Students should have learning outcomes that syllabi tell them are the important aspects of those courses. Don’t bother recreating the wheel – higher education already has a few that work well enough. Focus on what doesn’t get communicated already.

Our initial plan was to have training help people through this process; when they complete the training, they can then issue badges within their course’s context. We’re still doing that – but with an added wrinkle. By the end of the workshop, instructors have designed at least one badge, thinking through the visual design (and sketching it out), the implications of what a digital badge means, how this badge might connect to external groups, what criteria or release condition will issue the badge and, finally, how they might value badges coming into the course from other sources.

All of this on paper. Then people can think seriously about how it’s technically going to happen. Essentially it’s a two-hour enforced planning session.

What will inevitably come up is that some forward-thinking instructor will ask, “what if we want to have students give each other badges?” and the answer will be “they can’t” (provided that the student role is configured in a typical way that controls access to courses). It’s a huge gap that’s not a problem with the tool, but with the design of the LMS.

 

First Week of Using PebblePad

A little preamble.

We’ve been looking at bringing on a second ePortfolio platform since January. We’ve used the D2L ePortfolio platform for two years, with some significant gains, but with quite a few growing pains. The tool doesn’t seem to be getting much focus from D2L – instead they seem to have been looking at mobile ePortfolios the last few years. While that’s great, and the app is very, very slick, the way we authenticate to our LMS prevents us from using any of the D2L-developed apps; Assignment Grader, ePortfolio, Binder, Pulse – none of them will work for us. Changing our authentication process is on our list of things to do, but isn’t going to happen soon (think: one to two years).

While ePortfolio is a good tool for individual use in the context of classroom activity, it doesn’t really do co-curricular stuff well without a bunch of workarounds, like creating a course to house the co-curricular activity. It works well in a mentorship situation (where the mentor is an instructor in a class). It also doesn’t allow people to collaborate effectively. You can share a presentation, giving the other person full rights to edit, but for some weird design reason they cannot edit anything that already exists in the presentation – even with permission to edit the underlying artifact. This was a deal breaker for a few faculty in large classes. Critical reflection in a social space is something our faculty want, our students want, and the literature suggests might be more effective at challenging underlying fundamental thinking.

So we started looking at options.

I tried to advocate for a Domain of One’s Own-style project using WordPress as a base for how students could construct a “portfolio” – but too many people felt that it couldn’t adapt to a curriculum-based approach where an instructor-type person could securely grade and provide individual feedback. People who have used WordPress understand that you can, in fact, do just that, but it takes a little more work. If all else fails, there’s e-mail, right? Honestly, I didn’t think this option would gain much ground; it’s too radical for our campus, and not necessarily a perfect, easy fit for our needs.

We looked at other providers – Known, Digication, Mahara (and holy crap, if you think D2L ePortfolio produces elderly-looking portfolios, Mahara looks like warm garbage strewn across an empty strip mall from the ’80s) – and a number of free solutions (including Pathbrite). Nothing really did all of the things that we wanted; we needed a pretty broad tool that could do many different things (including co-curricular uses, personal uses, group/shared uses, curricular uses and, finally, mentor/mentee communication).

After putting the others through the review process, we ended up selecting PebblePad and began planning. Negotiation took a bit longer than we liked, but all in all, we turned on the system on September 2nd. We didn’t have time to do single sign-on, or to connect the Student Information System or our authentication system to it. Basically we have a link in the LMS that takes the student/user to the login page, and they authenticate to an account I’ve created (to date, 7260 accounts). It also means that the assessment areas in PebblePad – called ATLAS, which are distinct from assessment in the LMS – were created manually by me (about 40 so far).

The fact that we turned on the system, did some minor configuration, and started running within hours (albeit in a hosted environment) was pretty pleasant. There have been no major issues (so far), and no real questions have come in asking “how do I?” – mostly “can I get an account?” I’ll post an update in a month or so on how things have gone.

D2L Badges (If I Designed the Badges In the Image of the Work I’m Doing)

I’ve been working with badges for over a year now, and we’re finally starting to put together a program that will help identify skills, have students award other students badges, and make up essentially a co-curricular record for students.

Internally we struggled (and still struggle) with the software and the design – early iterations had easy-to-achieve badges that didn’t mean much, later iterating into something a bit more robust. I wanted to share what we’re doing, but people are also looking at researching our efforts, so I’ve had to abstract things a little to share the design, but not the specifics. So I looked around and thought about the things I know that might be transferable. At the same time, D2L announced that the Learning Environment for continuous delivery clients will get a badging application in the September update. Aha! The badges of D2L. It’s not a perfect analogy, but bear with me; I think there are some important parallels that should be easy to decipher. First, some history.

Many people who have been through an LMS review will tell you that selecting the software is not just about features; it’s more about the people behind the software. Much like badges: it’s not about the image that they project, but the metadata that they contain. So for the purposes of “badging D2L” I’ve decided to take the well-known public faces of D2L and apply badges to them. It works as a parallel for the work we’re doing around building a co-curricular record.

The top of the hierarchy is easy: it’s the Top Dog badge. This badge is awarded to people who are involved in a wide variety of things, evangelize the D2L experience, are endlessly positive, and feel like a trusted companion or a good friend. Who would be the best person to represent the Top Dog badge? Founder and CEO John Baker is clearly the most well-known of all D2L employees. Most customers and industry analysts (amazing that EdTech has analysts now) will think of John Baker when thinking about D2L. Always interested in his clients, and endlessly positive about the D2L products, John fits the top of the hierarchy (and the characteristics of our badges).

Just underneath the Top Dog badge, we have three others that one has to earn before getting the Top Dog badge: the Community badge, the Productivity badge, and the Development badge.

The Community badge could be earned for contributing to and developing a sense of community around D2L products.

After John Baker, Barry Dahl is in the #2 spot of people well known within D2L – that can be within the EdTech community (via the Desire2Blog he authored prior to joining D2L), or the Teaching and Learning community (via the D2L Brightspace Community, or the many regional forums that D2L hosts around North America). He has been a D2L award winner multiple times (prior to going to work for them, of course), been on the main stage at Fusion multiple times, done probably upwards of a hundred webinars, and is just generally well known throughout the D2L customer base. In fact, I’d say that if you became a client of D2L in the last 5 or 6 years you’d be very likely to know Barry, much like how I got to know Barry: through the community around D2L’s products. Never mind the fact that Barry was a “face” of D2L before being hired by D2L. He was critical of the product at times, but always out there helping the community around the D2L Learning Environment.

For the Productivity badge, the earner has to be able to be a part of different aspects of a project, responding to other members’ (and the community’s) ideas in an inclusive, respectful, and productive manner.

The Productivity badge is represented by Ken Chapman. Ken, as VP of Market Strategy, is certainly one of the more prominent D2L people due to his longevity with the company and his engagement in many different forums, including Fusion, webinars, client visits, and the like. Funny: I went to D2L’s site to grab a picture of Ken for the badge, but he’s not listed under the “Leadership” page on their website.

The last badge in the first tier would be the Development badge. This badge would be earned by demonstrating the development of oneself, the community, and/or the project.

The Development badge is represented by Jeremy Auger, Chief Strategy Officer at D2L. Another long-term employee (one of the originals, I think) who has had pretty high visibility at Fusion and in all the other venues where you have a chance to get to know these people. One caveat here is that lots of D2L customers know who Jeremy is, but I’m betting that very few can actually tell you what he does (I’m sure it’s something…). The badge is sufficiently vague, like a title that is bestowed on a chief strategy officer. I kid, I kid; clearly D2L’s strategic efforts in their three markets (K-12, Higher Ed, Corporate) must pass through Jeremy.

As you might well know, badges can be cumulative or solitary. I’d imagine cumulative would be the case if I were explaining D2L’s badging hierarchy, and not McMaster’s approach to badging co-curricular activities. So the next level down might be made up of two or more badges that could earn you the Community, Productivity, and Development badges. If we were to drill further down D2L’s hierarchy, lesser badges might be represented by some other well-known D2L people.

For the Development side of things: probably Nick Oddson, Sr. VP of Product Development. Nick hasn’t been at D2L all that long, but he has had high-profile announcements at the past couple of Fusion conferences. If he sticks around, he will more and more become one of the faces of D2L. Another person might be Paul Janzen, who’s very active in the Valence community and on the development side of things.

For the Productivity side of things: Mike Moore, another employee who came out of education and who gets lots of face time related to the Insights analytics product line and learning outcomes (which are currently undergoing a revamp).

For the Community side of things: Maybe controversial (depending on when you became a D2L client, you might look for Jason Santos here, but in my opinion he’s too new), but I’d choose Terri-Lynn Brown for this spot, since it is based on who the best-known employees are, not the most “important” (whatever that means). Terri-Lynn is a Calgary resident and a former K-12 educator who is still very well known within that market, and across Canada.

After that, we could go on with this exercise, but I think you get the idea. If I had made this post a few years ago, it would be a radically different post – filled with names that resonated maybe a little more. I wonder if the changes we’ve felt as clients of D2L will spill forward into the product.

Like badges spilled forward into the latest update.

LMS Review

So our current contract with our LMS vendor is complete in October 2018. I can’t say what we’ll do, but the landscape has changed somewhat since the last LMS review was done at work, so we’re starting to form a committee to look at our options. Ultimately, we’re doing this so far in advance because you should take at least a year to transition, which would put us at fall 2017 for starting to roll out the new system for users, which means that we’d need to know in the spring of 2017. Maybe the cost of change will scare us into another contract. As I see it, there are four options available to us.

1. Stay with D2L. This is the easiest answer, and least complicated. We’ve been relatively happy with them as a company so there’s really no need to change, and despite the minor blips (the great outage of 2012) it’s been pretty smooth so far. Maybe continuous delivery will be awesome, and really a review of the LMS will be a mere formality.

2. Move to another hosted LMS. This is the second easiest answer – if the campus decides that D2L isn’t the choice for us, then we choose another vendor, enter into protracted negotiations, and go through the formal process of getting vendors to submit to review, tender offers, and go through the motions with selecting a new partner for the next five years. I’m not sure if the campus will think this is an option at all. Blackboard was our previous LMS, and didn’t work – essentially leaving the campus without a functional LMS for a whole semester. By the time it did start working, the relationship was in trouble, and well, that’s why we have D2L. That’s according to faculty who were here during the time it happened, which was prior to my time. Another seismic shift may not be in the cards. Another factor is that we’ve become a PeopleSoft school for our various systems across campus. That implementation has been rough for the campus. I’m not sure they have the appetite for another system to learn so quickly.

3. Go back to self-hosting LMS software. This allows us to look at open source solutions, and rely on our own IT group to take server maintenance, infrastructure and all the other associated risks back under our roof. It’s unlikely that we would do this due to the human cost of running a mission-critical server – we’d have to look at hiring back expertise that was relocated to other groups on campus or into industry. Those costs are not insignificant. The complexity of running Moodle or Sakai at scale for 25,000 to 30,000 users isn’t lost on me. It’d be a great challenge. I don’t know that this will be palatable to the campus either, as we’ve had people who were running their own Moodle install come over to use the institution’s provided install of D2L. Maybe that’s the path of least resistance? Maybe it’s the students pushing for one platform? Who knows.

4. Do away with the LMS. This is an entirely radical idea, but what if we just left it up to instructors to do it themselves? I’d be OK in this scenario, despite this being a huge part of my job description, because there’s always going to be technology to use to teach. I’d have to adjust. Would this even fly? Probably not. Imagine the headlines: “first university to do away with the LMS”… would be useful to put on my tombstone after everyone lynches me because they need a secure webpage to link their stuff to.

As a teaching and learning centre, we’ll be interested in finding something flexible enough to support not only the modes that people currently teach in, but also the pedagogy that people want to teach in. All LMS’s say they can do constructivist-style setups, but really they require global changes to do so. No one gives the instructor the power to turn on or off student control of a slice of content, or a discussion, or even a collaborative space for document sharing. I’ll go out on a limb and suggest that all LMS’s are designed as content delivery tools, not knowledge construction tools. And to that end, the choice of tools that can be used is often controlled by LMS administrators, not the instructors. Now, there are great reasons for structuring things in such a way; theoretically, administrators have subject matter expertise in selecting partners to connect to the LMS and have experience with vetting vendors. Right? I hope so. I know I’ve tried my best to make sure I’ve done right by students’ privacy, intellectual property, and a generally safe digital space. I don’t know what I don’t know, though. I guess, through the next three years, I’ll start to find out.

Work = Life = ePortfolios

Even though it’s the weekend, it’s August 8. This is my work anniversary. So I’ll be taking a moment to write about work while I enjoy the evening.

After a long discussion with my direct boss, we decided that I needed to stop doing everything that I do and focus on doing a few things. I can say that it’s a good idea; I’m a bit of a control freak. If you’ve worked with anyone like me, you’ve probably been witness to someone who has opinions, shares them at the drop of a hat, and will doggedly defend those beliefs and continue to circle back to fight for them again and again. Where this becomes a problem is when I feel the insane need to do everything. Redesign training? Check. Read the 100+ page document about the LMS upgrade? Check. Dissect it and rewrite it for our campus? Check. Update websites? Check.

I honestly do want other people to feel they have room to do work without me jumping into what they do. I like to think I’m a good person, and I fully recognize my flaws (and this is a big one). I’d also like to think that I’ve tried to make room for others to do stuff. Maybe I haven’t been as effective in doing that; I don’t know, as I can’t speak to how others feel. So I’ve vowed to step back from the LMS administration side of things to focus on the ePortfolio and badging projects that we have going.

Now, if you pay attention to LMS’s you’ll know that D2L announced that their badging service will be available for clients on continuous delivery (Brightspace version 10.4 and higher) starting September 2015. We’re also expanding the number of University wide ePortfolio software solutions from just the D2L ePortfolio tool, adding another to complement where D2L eP is weak. There’s not been an official announcement, however we’ve started internal training and I should be writing about the process of getting this other software up and running as I think it’ll be quite an accomplishment in the time frames we have set.

Now you may wonder: where is the D2L ePortfolio tool weak?

Well, first, let me give you two caveats. One, I’ve given this information to our account manager and I know for a fact it’s gone up the chain to D2L CEO John Baker. I think we’ll see improvements in the tool over the next while. I’m cautiously optimistic that by this time next year D2L’s ePortfolio tool will be improved; with that in mind, I keep an interested eye on the Product Idea Exchange inside the Brightspace Community for developments. The other caveat is that we’re using ePortfolios in such a way that we want to leverage social opportunities for reflective practice. Frankly, this wasn’t something that we knew when we started with ePortfolios, and hence wasn’t part of our initial needs.

OK, the weaknesses (from my perspective).

1. The visual appeal of the tool is challenged. The web portfolios that are created are OK looking, but the tool needs a visual design overhaul; many aspects still bear the pre-version 10 look of D2L’s products. It also needs to be intuitive for students to use, and part of that falls on the visual arrangement of tools. There are three sections of the ePortfolio dashboard where you can do “stuff” (the menus, the quick reflection box, and the right-hand side for filling out forms and other ephemera). I totally understand why you need complexity for a complex tool – especially one designed to be multi-purpose.

2. Learning goals, which is a huge part of reflective practice, are not built into the portfolio process. You can, yes, create an artifact that could represent your learning goal, and associate other artifacts as evidence of achieving that goal – but I’d ask you to engage in doing that process as a user to see why it’s problematic. Many, many clicks.

3. There is a distinct silo effect between the academic side and the personal side of things. If we extrapolate our learning goals to be equivalent to learning outcomes (and I feel they should be) – those learning goals are still artifacts and outcomes/competencies pulled from the courses are labelled something else. Again, I don’t think the design of the ePortfolio tool is aimed at this idea, however, if we’re serious about student centred learning, shouldn’t we be serious about what the student wants to get out of this experience, and treat what they want out of the experience, whatever that is, at the same level as what teachers, or accreditation bodies, or departments, or schools feel they should know?

4. Too many clicks to do things. Six clicks to upload a file as an artifact is too many.

5. Group portfolios are possible, but so challenging to do that we’ve instituted a best practice: organize it socially and make one person responsible for collecting artifacts and submitting the portfolio presentation. Even if you want to take on the challenge, when you share a presentation with another person and give them edit rights, the tool still doesn’t let you edit in the sense that you would expect the word to mean. You can add your stuff to the presentation, but can’t do squat with anything else in it. In some ways it makes sense, but functionally it’s a nightmare. What if your group member is a total tool and puts their about-me stuff on the wrong page? What if they made an error that you catch – why do you have to make them fix it, instead of the sensible thing of being able to fix it yourself?

With all that said, people tend to like the tool once they figure it out. The problem is that many don’t get past that hurdle without help, and there’s only so much help to go around.

What If… We Made the LMS Truly Modular?

I think we all understand that the LMS as a tool is a pretty cruddy one. It does a lot of things, some well, many not in ways that you, as an instructor would prefer. At last year’s Fusion conference, we heard D2L speak about their LTI integrations, and how Brightspace was the integrated learning platform. I’ve heard that many people envision what D2L are selling as a hub and spoke system – where Brightspace (or the Learning Environment, or even more crudely, the LMS) acts as a hub – and the tools connected are the spokes. I wonder what things would look like if we extrapolate that idea out to the nth degree?

For instance, you could replace the gradebook with a tool that worked for you – Google Spreadsheets, Excel Online, or a box plot device. Chalk and Wire has replaced Canvas’ gradebook in one instance, and I’m sure that the inefficient tools of any LMS are things that one would want to replace. Does that mean all of them? For some snarky folks, yeah, that would mean all of them. Those folks should just teach in the open web.

The LMS also provides a fairly elegant way to get student data into your course area. That’s probably something instructors or faculty wouldn’t want to do. Hell, I’m glad we have someone on my team that wants to do it because it’s an ugly job.

And for many, the content management of these systems is pretty good. In D2L’s case, the way the content tool works is pretty decent (now, if hiding an item meant really hiding any evidence of the item, we’d be talking). Imagine if you could take elements of one system you like (say, Angel) and add in features to allow for customization for individual course needs?

Or how about replacing the groups tool with some other mechanism? Quizzing is something that people are already pushing to publishers like Pearson or McGraw Hill, but what about up-and-comers like Top Hat or Poll Everywhere (which, let’s face it, is essentially a quiz engine wrapped in a polling tool)? Discussions become a Disqus link at the bottom of an item in the content area… there’s lots of clever fun to be had with this idea.

Now to some extent, you can do this already (depending on how locked down your system is by your systems administrators). Change the navigation bars to point to tools you use connected through LTI – however, you’ll have some issues with doing this yourself. The biggest one is that at some point you’ll run into some technological problem that you can’t solve easily. I suspect that’s what the Internet is for.

At some point one (or many) of you will point out, quite rightly, that this sounds an awful lot like what the web is (or more accurately, was). And my answer is: yeah, that’s about right. We used to have websites that we plugged bits of HTML into to make what we needed. Until it was commodified.

You can probably draw a comparison between the modern LMS and Facebook – mostly everyone uses it, but would probably use a better system if everyone else went to it first. Universities would consider another model as long as everyone will come along. However, in the current higher education system, where seasonal and precarious work dominates instructional positions, there’s no time or desire to develop a better way. The LMS works as a co-conspirator in the commodification of education. It’s not directly responsible, but it plays a part in making education at university an easily packaged and consumed affair. The LMS isn’t alone; Coursera, Udacity, those types of MOOCs are equally complicit, or maybe even more so.

And that’s a more likely reason we’ll see a modular LMS. More vendors get opportunities to get into different institutions that aren’t their current clients.

i>Clicker Integration with D2L/Brightspace

I’m sure i>Clicker won’t be particularly happy with this post, that’s ok, because I’m not happy with them. The one thing as an LMS administrator that you should feel very, very sanctimonious about is sharing data. We have an internal policy that we’ll never share student numbers. I also think that we shouldn’t share usernames, however some folks feel that’s ok. I guess it depends on your username convention. First dot last as a convention is great, except in these cases when you’re trying to maintain privacy. Now, if you’re making a single sign on connection through LTI, you can have the originating LMS username obfuscated, which essentially boils down to a paired database table that has two usernames in it – the one handed off to the third party via LTI, and the one in the LMS. Typically, I’m a little bristly about this as well, because you become reliant on the system not changing how this is handled, and if there’s an HTTP call mixed in there, it still could be sniffed out while in transit… but that’s not what this post is about.
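That paired-table idea can be sketched in a few lines. Here’s a minimal illustration – the function name, the secret, and the truncation length are mine, not anything from D2L or the LTI spec – of deriving a stable, opaque identifier to hand to a tool provider in place of the real username:

```python
import hmac
import hashlib

# Hypothetical per-installation secret; in a real LMS this would live in
# protected configuration, not in source code.
SECRET = b"example-install-secret"

def opaque_lti_user_id(internal_username: str) -> str:
    """Derive a stable, opaque identifier for LTI launches.

    The same internal username always maps to the same opaque ID, so the
    tool provider can track its own records, but it never sees the
    "first.last" username itself.
    """
    digest = hmac.new(SECRET, internal_username.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:32]

# The LMS stores the (internal, opaque) pairing so launches stay consistent.
pseudonym = opaque_lti_user_id("jane.doe")
```

The keyed-hash approach is just one way to populate such a table; the point is that the mapping lives on the LMS side, which is also why you’re stuck trusting that the system never changes how it’s handled.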

Since last year, I’ve been bugged by i>Clicker to do an integration to make sign-up with their service seamless for students. There have also been some requests for integration between i>Clicker and the gradebook in D2L from some of the heavier users. The first request I’ve always put off because, frankly, I don’t do anything a private enterprise wants me to do; they aren’t my boss, nor are they a member of my community. They serve one purpose, and that’s collecting money. The second, however, does benefit a select user group on campus. We’d been thinking about this since last year’s summer of integration, and April had a couple of minutes free (more on that in forthcoming blog posts), so we scheduled some time to finally make it happen.

We scheduled a call to walk us through the integration and see if there was anything we had questions about. I don’t need anyone to walk me through an integration, but I often like to raise privacy, data collection policies, and other awkward questions to the poor sucker on the other end of the line. Typically they are ill-informed. I didn’t even get to the awkward stage, as a request was made that was frankly shocking. I was asked to turn on passing the Org Defined ID – our student number – to facilitate the connection. Not just at the tool level, but in the configuration for the Tool Information. See below:

(Screenshot: the Tool Consumer configuration panel.) Now, if I understand this panel correctly, it not only changes the configuration settings for the tool in question, but for all the tools. ALL the tools. So not only i>Clicker, but anything else you have connected through the External Learning Tools administration panel. I asked our technical account manager about changing it, and he basically said, “yeah, that’s not good.”
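The difference is easy to see at the launch-parameter level. A hedged sketch, using field names from the LTI 1.1 spec – the student number value, the link ID, and the exact field D2L maps the Org Defined ID onto are assumptions on my part:

```python
# Minimal LTI 1.1 basic launch payload, as a dict. With the consumer-level
# setting off, the tool only ever sees an opaque user_id.
launch_without_org_id = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-123-link-1",  # hypothetical link id
    "user_id": "opaque-7f3a9c",               # opaque, per-install identifier
    "roles": "Learner",
}

# With the Org Defined ID turned on at the Tool Consumer level, the student
# number rides along on EVERY tool's launches, not just i>Clicker's:
launch_with_org_id = dict(
    launch_without_org_id,
    lis_person_sourcedid="400123456",         # hypothetical student number
)
```

That consumer-level scope is the whole problem: one vendor’s convenience flag becomes a data disclosure to every connected tool.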

So in the middle of this exchange, where I explained how we don’t pass the student number under any circumstances, the i>Clicker representative seemed a little miffed about my protests. He wasn’t particularly nasty about it, but certainly didn’t seem to understand why this was an issue at all. Looking at i>Clicker’s website, students are asked for their ID. It’s one thing to ask a student to give them their ID (consent); it’s another to set up access to everyone’s ID (no consent).

What makes me wonder is: how many other institutions even give this a thought? Surely we can’t be the first to balk at the idea of handing over student data like this? Or maybe we are being too paranoid? I mean, I guess there are people who have faith in third-party vendors, but I’d prefer having a license stating exactly what they are doing with data, how they’re using it, how long it’s retained, and an agreement signed between the two parties. That way, if the external party violates the agreement, the institution can hold them liable for the data breach – something a little stiffer than “oops”.