LMS Review

So our current contract with our LMS vendor expires in October 2018. I can’t say what we’ll do, but the landscape has changed somewhat since the last LMS review was done at work, so we’re starting to form a committee to look at our options. Ultimately, we’re doing this so far in advance because you should take at least a year to transition, which would put us at fall 2017 for starting to roll out the new system to users, which means we’d need a decision by spring 2017. Maybe the cost of change will scare us into another contract. As I see it, there are four options available to us.

1. Stay with D2L. This is the easiest and least complicated answer. We’ve been relatively happy with them as a company, so there’s really no need to change, and despite the minor blips (the great outage of 2012) it’s been pretty smooth so far. Maybe continuous delivery will be awesome, and a review of the LMS will really be a mere formality.

2. Move to another hosted LMS. This is the second easiest answer – if the campus decides that D2L isn’t the choice for us, then we choose another vendor, enter into protracted negotiations, and go through the formal process of getting vendors to submit to review, tender offers, and go through the motions of selecting a new partner for the next five years. I’m not sure the campus will think this is an option at all. Blackboard was our previous LMS, and it didn’t work – essentially leaving the campus without a functional LMS for a whole semester. By the time it did start working, the relationship was in trouble, and, well, that’s why we have D2L. That’s according to faculty who were here when it happened, which was prior to my time. Another seismic shift may not be in the cards. Another factor is that we’ve become a PeopleSoft school for our various systems across campus. That implementation has been rough for the campus. I’m not sure they have the appetite for another new system to learn so soon.

3. Go back to self-hosting LMS software. This allows us to look at open source solutions and rely on our own IT group to take server maintenance, infrastructure, and all the other associated risks back under our roof. It’s unlikely that we would do this, due to the human cost of running a mission-critical server – we’d have to look at hiring back expertise that was relocated to other groups on campus or into industry. Those costs are not insignificant. The complexity of running Moodle or Sakai at scale for 25,000 to 30,000 users isn’t lost on me. It’d be a great challenge. I don’t know that this will be palatable to the campus either, as we’ve had people who were running their own Moodle install come over to use the institution’s provided install of D2L. Maybe that’s the path of least resistance? Maybe it’s the students pushing for one platform? Who knows.

4. Do away with the LMS. This is an entirely radical idea, but what if we just left it up to instructors to do it themselves? I’d be OK in this scenario, despite this being a huge part of my job description, because there’s always going to be technology to use to teach. I’d have to adjust. Would this even fly? Probably not. Imagine the headlines: “first University to do away with the LMS”… would be useful to put on my tombstone after everyone lynches me because they need a secure webpage to link their stuff to.

As a teaching and learning centre, we’ll be interested in finding something flexible enough to support not only the modes that people currently teach in, but also the pedagogy that people want to teach in. All LMSs say they can do constructivist-style setups, but really they require global changes to do so. No one gives the instructor the power to turn on or off student control of a slice of content, or a discussion, or even a collaborative space for document sharing. I’ll go out on a limb and suggest that all LMSs are designed as content delivery tools, not knowledge construction tools. And to that end, the choice of tools that can be used is often controlled by LMS administrators, not the instructors. Now, there are good reasons for structuring things in such a way; theoretically, administrators have subject matter expertise in selecting partners to connect to the LMS and experience with vetting vendors. Right? I hope so. I know I’ve tried my best to do right by students’ privacy, intellectual property and general safe digital space. I don’t know what I don’t know, though. I guess, over the next three years, I’ll start to find out.

Digital Marginalia

A collection of links, notes, and things I’ve seen in the last little while that are too long for a tweet but too short for a full blog post unto themselves…

First, and most importantly to me, the soundtrack to this update is the brilliant 13th Floor Elevators (and particularly Roky Erickson’s great solo version of “Two Headed Dog”):

I updated my laptop to Windows 10 – I primarily use the laptop for checking e-mail, writing more than a tweet, constructing a drum beat or using Word 2007. The process was smooth for a laptop that’s close to six years old with 4 GB of RAM and a 320 GB hard drive. However, here’s a series of Windows 10 related links that will benefit those who wish to better understand what this upgrade means. The first outlines the new features of the OS. The second has to do with blocking auto-updates. The third has to do with privacy settings, which we should all be interested in.

http://fieldguide.gizmodo.com/14-things-you-can-do-in-windows-10-that-you-couldnt-do-1721271379

http://www.slashgear.com/microsoft-has-a-tool-for-blocking-windows-10-auto-updates-27394432/

http://lifehacker.com/what-windows-10s-privacy-nightmare-settings-actually-1722267229

I’ve been working off and on over the summer with our student centre, trying to think of ways badging could work as a co-curricular record for students. I don’t know that we’re much further along, but we are going to try some things over the next year and see how they work. I’m interested in ways that we can empower students to grant badges to other students, especially when those badges might contain institutional imagery. How can we ensure that people don’t misunderstand what the badge means and recognize that it’s a peer-issued badge? Lots and lots of stuff to unpack there.

http://chromatrope.co.uk/open-badges-for-training-and-development-2/

http://www.dontwasteyourtime.co.uk/technology/mapping-digital-skills-in-he/

http://huxleypiguk.blogspot.co.uk/2015/07/free-open-badge-e-book.html

http://higheredstrategy.com/wp-content/uploads/2014/08/Intelligence-Brief-9-Career-Services-Offices-3.pdf

http://literaci.es/privacy-badges

While in training this week, Carpe Diem learning design was mentioned. I didn’t inquire further at the time, but I did look into it afterwards. It strikes me as neat, but prone to my faulty brain labelling it Caveat Emptor learning design, which has a whole separate implication. I would recommend not using Caveat Emptor learning design, if it exists.

http://www.ld-grid.org/resources/methods-and-methodologies/carpe-diem

I didn’t go to the Brightspace Fusion/User Conference this year because a) I hate Orlando, and b) the hotel was not within public transportation/walking distance of anything nearby. I did, however, have my Twitter feed blow up for a couple of hours when I got mentioned by my good friend Barry. I’m actually still speechless about this (almost two months later!) – it’s humbling, and an honour, to have others say such nice things about me. Thank you to you all.

Work = Life = ePortfolios

Even though it’s the weekend, it’s August 8. This is my work anniversary. So I’ll be taking a moment to write about work while I enjoy the evening.

After a long discussion with my direct boss, we decided that I needed to stop doing everything that I do and focus on doing a few things. I can say that it’s a good idea; I’m a bit of a control freak. If you’ve worked with anyone like me, you’ve probably witnessed someone who has opinions, shares them at the drop of a hat, and will doggedly defend those beliefs and continue to circle back to fight for them again and again. Where this becomes a problem is when I feel the insane need to do everything. Redesign training? Check. Read the 100+ page document about the LMS upgrade? Check. Dissect it and rewrite it for our campus? Check. Update websites? Check.

I honestly do want other people to feel they have room to do work without me jumping into what they do. I like to think I’m a good person, and I fully recognize my flaws (and this is a big one). I’d also like to think that I’ve tried to make room for others to do stuff. Maybe I haven’t been as effective in doing that; I don’t know, as I can’t speak to how others feel. So I’ve vowed to step back from the LMS administration side of things to focus on the ePortfolio and badging projects that we have going.

Now, if you pay attention to LMSs, you’ll know that D2L announced that their badging service will be available for clients on continuous delivery (Brightspace version 10.4 and higher) starting September 2015. We’re also expanding the number of university-wide ePortfolio software solutions from just the D2L ePortfolio tool, adding another to complement where D2L eP is weak. There’s been no official announcement; however, we’ve started internal training, and I should be writing about the process of getting this other software up and running, as I think it’ll be quite an accomplishment given the time frames we have set.

Now you may wonder: where is the D2L ePortfolio tool weak?

Well, first, let me give you two caveats. One, I’ve given this information to our account manager and I know for a fact it’s gone up the chain to D2L CEO John Baker. I think we’ll see improvements in the tool over the next while. I’m cautiously optimistic that by this time next year D2L’s ePortfolio tool will be improved, so I keep an interested eye on the Product Idea Exchange inside the Brightspace Community for developments. The other caveat is that we’re using ePortfolios in such a way that we want to leverage social opportunities for reflective practice. Frankly, this wasn’t something we knew when we started with ePortfolios, and hence it wasn’t part of our initial needs.

OK, the weaknesses (from my perspective).

1. The visual appeal of the tool is challenged. The web portfolios that are created are OK looking, but the tool needs a visual design overhaul; many aspects of it still bear the pre-version 10 look of D2L’s products. It also needs to be intuitive for students to use, and part of that falls on the visual arrangement of tools. There are three sections of the ePortfolio dashboard where you can do “stuff” (the menus, the quick reflection box, and the right-hand side for filling out forms and other ephemera). I totally understand why you need complexity for a complex tool – especially one designed to be multi-purpose.

2. Learning goals, which are a huge part of reflective practice, are not built into the portfolio process. You can, yes, create an artifact that represents your learning goal, and associate other artifacts as evidence of achieving that goal – but I’d ask you to go through that process as a user to see why it’s problematic. Many, many clicks.

3. There is a distinct silo effect between the academic side and the personal side of things. If we extrapolate our learning goals to be equivalent to learning outcomes (and I feel they should be) – those learning goals are still artifacts, while outcomes/competencies pulled from the courses are labelled something else. Again, I don’t think the design of the ePortfolio tool is aimed at this idea; however, if we’re serious about student-centred learning, shouldn’t we be serious about what the student wants to get out of this experience, and treat that, whatever it is, at the same level as what teachers, or accreditation bodies, or departments, or schools feel they should know?

4. Too many clicks to do things. Six clicks to upload a file as an artifact is too many.

5. Group portfolios are possible, but so challenging to do that we’ve instituted a best practice: organize it socially and make one person responsible for collecting artifacts and submitting the portfolio presentation. Even if you want to take on the challenge, when you share a presentation with another person and give them edit rights, the tool still doesn’t let you edit in the sense that you would expect the word to mean. You can add your stuff to the presentation, but can’t do squat with anything else in it. In some ways it makes sense, but functionally it’s a nightmare. What if your group member is a total tool and puts their about-me stuff on the wrong page? What if they made an error that you catch – why do you have to make them fix it instead of doing the sensible thing and fixing it yourself?

With all that said, people tend to like the tool once they figure it out. The problem is that many don’t get past that hurdle without help, and there’s only so much help to go around.

What If… We Made the LMS Truly Modular?

I think we all understand that the LMS as a tool is a pretty cruddy one. It does a lot of things, some well, many not in ways that you, as an instructor, would prefer. At last year’s Fusion conference, we heard D2L speak about their LTI integrations, and how Brightspace was the integrated learning platform. I’ve heard that many people envision what D2L is selling as a hub-and-spoke system – where Brightspace (or the Learning Environment, or even more crudely, the LMS) acts as the hub, and the connected tools are the spokes. I wonder what things would look like if we extrapolated that idea out to the nth degree?

For instance, you could replace the gradebook with a tool that works for you – Google Spreadsheets, Excel Online, or a box plot device. Chalk and Wire has replaced Canvas’ gradebook in one instance, and I’m sure the inefficient tools of any LMS are things one would want to replace. Does that mean all of them? For some snarky folks, yeah, that would mean all of them. Those folks should just teach on the open web.

The LMS also provides a fairly elegant way to get student data into your course area. That’s probably something instructors or faculty wouldn’t want to do. Hell, I’m glad we have someone on my team that wants to do it because it’s an ugly job.

And for many, the content management of these systems is pretty good. In D2L’s case, the way the content tool works is pretty decent (now, if “hide an item” meant really hiding any evidence of the item, we’d be talking). Imagine if you could take elements of one system you like (say, Angel) and add in features to allow for customization to individual course needs.

Or how about replacing the groups tool with some other mechanism? Quizzing is something that people are already pushing to publishers like Pearson or McGraw Hill, but what about up-and-comers like Top Hat or Poll Everywhere (which, let’s face it, is essentially a quiz engine wrapped in a polling tool)? Discussions become a Disqus link at the bottom of an item in the content area… there’s lots of clever fun to be had with this idea.

Now, to some extent, you can do this already (depending on how locked down your system is by your systems administrators): change the navigation bars to point to tools connected through LTI. However, you’ll have some issues doing this yourself. The biggest one is that at some point you’ll run into a technological problem that you can’t solve easily. I suspect that’s what the Internet is for.

At some point one (or many) of you will point out, quite rightly, that this sounds an awful lot like what the web is (or more accurately, was). And my answer is: yeah, that’s about right. We used to have websites that we plugged bits of HTML into to make what we needed – until it was commodified.

You can probably compare the modern LMS to Facebook – mostly everyone uses it, but would probably use a better system if everyone else went to it first. Universities would consider another model as long as everyone came along. However, in the current higher education system, where seasonal and precarious work dominates instructional positions, there’s no time or desire to develop a better way. The LMS works as a co-conspirator in the commodification of education. It’s not directly responsible, but it plays a part in making education at university an easily packaged and consumed affair. The LMS isn’t alone; Coursera, Udacity, and those types of MOOCs are equally complicit, or maybe even more so.

And that’s the more likely reason we’ll see a modular LMS: more vendors get opportunities to get into institutions that aren’t their current clients.

i>Clicker Integration with D2L/Brightspace

I’m sure i>Clicker won’t be particularly happy with this post; that’s OK, because I’m not happy with them. The one thing as an LMS administrator that you should feel very, very sanctimonious about is sharing data. We have an internal policy that we’ll never share student numbers. I also think that we shouldn’t share usernames; however, some folks feel that’s OK. I guess it depends on your username convention. First-dot-last as a convention is great, except in these cases when you’re trying to maintain privacy. Now, if you’re making a single sign-on connection through LTI, you can have the originating LMS username obfuscated, which essentially boils down to a paired database table with two usernames in it – the one handed off to the third party via LTI, and the one in the LMS. Typically, I’m a little bristly about this as well, because you become reliant on the system not changing how this is handled, and if there’s an HTTP call mixed in there, it could still be sniffed out in transit… but that’s not what this post is about.
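To make the obfuscation idea concrete, here’s a minimal sketch of one way a paired table like that could be built. The HMAC derivation and the secret are my own assumptions for illustration – this is not how D2L actually implements it:

```python
import hashlib
import hmac

# Hypothetical shared secret held only on the LMS side (an assumption
# for this sketch; a real system would keep this in configuration).
SECRET = b"example-lms-side-secret"

def opaque_lti_id(username: str) -> str:
    """Derive a stable, opaque id to hand to the third-party tool in
    place of the real LMS username. Deterministic, so the pairing can
    always be rebuilt, but not reversible by the vendor."""
    return hmac.new(SECRET, username.encode("utf-8"), hashlib.sha256).hexdigest()

# The "paired database table": opaque id -> real username, LMS-side only.
pairing = {opaque_lti_id(u): u for u in ["jane.doe", "john.smith"]}
```

The vendor only ever sees the hex digest; mapping it back to a real person happens on the LMS side. Of course, if any of this travels over plain HTTP, it can still be sniffed in transit.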

Since last year, I’ve been bugged by i>Clicker to do an integration to make sign-up with their service seamless for students. There have also been requests from some of the heavier users for integration between i>Clicker and the gradebook in D2L. The first request I’ve always put off because, frankly, I don’t do anything a private enterprise wants me to do; they aren’t my boss, nor are they a member of my community. They serve one purpose, and that’s collecting money. The second, however, does benefit a select user group on campus. We’d been thinking about this since last year’s summer of integration, and April had a couple of minutes free (more on that in forthcoming blog posts), so we scheduled some time to finally make it happen.

We scheduled a call to walk us through the integration and see if there was anything we had questions about. I don’t need anyone to walk me through an integration, but I often like to raise privacy, data collection policies, and other awkward questions to the poor sucker on the other end of the line. Typically they are ill-informed. I didn’t even get to the awkward stage, as a request was made that was frankly shocking. I was asked to turn on passing the Org Defined ID – our student number – to facilitate the connection. Not just at the tool level, but in the configuration for the Tool Information. See below:

Now, if I understand this panel correctly, it changes the configuration settings not only for the tool in question, but for all the tools. ALL the tools. So not only i>Clicker, but anything else you have connected through the External Learning Tools administration panel. I asked our technical account manager about changing it, and he basically said, “yeah, that’s not good.”

So in the middle of this exchange, where I explained how we don’t pass the student number under any circumstances, the i>Clicker representative seemed a little miffed about my protests. He wasn’t particularly nasty about it, but he certainly didn’t seem to understand why this was an issue at all. Looking at i>Clicker’s website, students are asked for their ID. It’s one thing to ask a student to give you their ID (consent); it’s another to set up access to everyone’s ID (no consent).

What makes me wonder is: how many other institutions even give this a thought? Surely we can’t be the first to balk at the idea of handing over student data like this? Or maybe we’re being too paranoid? I mean, I guess there are people who have faith in third-party vendors, but I’d prefer a license stating exactly what they’re doing with the data, how they’re using it, and how long it’s retained, with an agreement signed between the two parties. That way, if the external party violates the agreement, the institution can hold them liable for the data breach – something a little stiffer than “oops”.

Extending D2L Reports Using Python

So before I get going on this long post, a little bit about me. I fancy myself a programmer. In fact, the first “real” job I had out of college (the second time I went through college; the first job the first time through was video store clerk) was programming a PHP/MySQL-driven website in 2003-ish (on a scavenged Windows 2000 server running on an HP box). So I have some chops. Of course, educational technology has moved away from being a multiple-skillset job where a little programming, a little graphic design, and a little instructional design were all part of what one did; now it’s been parsed out into teams. Anyway, I haven’t had much of an opportunity to write any code for a long time. The last thing I did seriously was some CSS work on my website about four months ago, which is not really programming.

With all that said, the LMS administration I do requires little programming skill; being an admin, while it has its perks, isn’t exactly a technical job. I do recognize that some admins deal with enrollments and SIS integrations that require a whole skillset – one that I could handle, but don’t.

As part of my admin job, I run monthly reports on how many items are created in D2L’s ePortfolio – these reports include date and time, username, and item type. We use this as a gauge of how active students are, and cross-referenced with other information it’s incredibly useful in letting us know things like: 22,383 items were created by 2,650 unique users in ePortfolio in November 2014 (actual numbers).

Using Excel, I can see whether users in a given month have ever used ePortfolio before, so I can also track how many users are new to the tool per month. However, the one thing I can’t easily look at is how frequently a user might interact with ePortfolio. With a total record count of over 125,000, it’s not something I want to do by hand. So I wrote a script. In Python. A language I don’t know well.
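For what it’s worth, that Excel step – checking whether a user in a given month has ever appeared before – is also only a few lines of Python. A minimal sketch with toy rows (the column names are my guesses, not the actual headers of the D2L export):

```python
import csv
import io
from collections import defaultdict

# Toy stand-in for the exported report: date/time, username, item type.
# The column names here are assumptions, not D2L's actual export headers.
report = io.StringIO(
    "created,username,item_type\n"
    "2014-10-03 09:15,alice,Reflection\n"
    "2014-11-02 10:00,alice,Artifact\n"
    "2014-11-05 11:30,bob,Artifact\n"
    "2014-12-01 08:45,carol,Presentation\n"
)

def new_users_by_month(fh):
    """Return {YYYY-MM: number of users whose first-ever item falls in that month}."""
    first_seen = {}
    for row in csv.DictReader(fh):
        month = row["created"][:7]  # "YYYY-MM" prefix of the timestamp
        user = row["username"]
        # Keep the earliest month per user (works even if rows are unsorted,
        # since ISO date strings compare chronologically).
        if user not in first_seen or month < first_seen[user]:
            first_seen[user] = month
    counts = defaultdict(int)
    for month in first_seen.values():
        counts[month] += 1
    return dict(counts)

counts = new_users_by_month(report)
```

Here alice is new in October, bob in November, and carol in December, so each month gets a count of one returning/new split accordingly.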

If you want to avoid my lengthy diatribe about what I’m doing and skip right to the code, it’s on GitHub here: https://github.com/jonkruithof/python-csv-manipulation-d2l-reports. If you don’t want to click the link, here’s the embedded code:

Basically the code does this:

Look at the username and see if it already exists in memory. If it does, find the difference since the last entry, write that information to a CSV, and then overwrite the existing entry with this one. If it does not exist, write it to memory. Loop through, and at the end you’ll have a large-ass CSV that you’d have to manipulate further to get some sense of whether users are frequently using the ePortfolio tool (which might indicate they see value in the reflection process). I do that in CSVed, a handy little Windows program that handles extremely large CSVs quite well. A quick sort by username (an improvement to the Python code coming sometime soonish) and we have something parseable by machine. I’m thinking of taking the differences in creation times and creating some sort of chart that shows the distribution of differences… I’m not sure what it’ll look like.
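For anyone who’d rather see the shape of the logic than read my prose, here’s a minimal sketch of that loop. It’s not the actual GitHub script – this version sorts by username first, so the CSVed sort step isn’t needed and the gaps come out grouped per user:

```python
from datetime import datetime

# Toy rows standing in for the report: (username, creation time).
rows = [
    ("alice", "2014-11-01 09:00"),
    ("bob",   "2014-11-01 10:00"),
    ("alice", "2014-11-03 09:00"),
    ("alice", "2014-11-10 09:00"),
]

def gaps_per_user(rows):
    """For each user, the differences (in days) between consecutive item
    creations -- the 'difference since the last entry' step, with the
    sort done up front so the per-user grouping is free."""
    fmt = "%Y-%m-%d %H:%M"
    parsed = sorted((u, datetime.strptime(t, fmt)) for u, t in rows)
    gaps = {}
    last = {}
    for user, when in parsed:
        if user in last:
            gaps.setdefault(user, []).append((when - last[user]).days)
        last[user] = when
    return gaps

gaps = gaps_per_user(rows)
```

With the toy data, alice gets gaps of 2 and 7 days, while bob, having only one item, produces no gaps at all. A histogram of those differences would be the obvious next chart.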

Publisher Integrations with D2L

So I’ve tried to write this post several times – tried to work up some semblance of why anyone would care about Pearson or McGraw Hill and integrating either publisher’s attempt at crafting their own LMS into the one that our institution has purchased (at no small cost).

Then it hit me while going through Audrey Watters’ great writing (in fact, go over there and read the whole damn blog, then come back here for a quick hit of my snark). The stuff I’ve been challenging Pearson and McGraw Hill on is stuff that we should be talking about. I’ll encapsulate my experiences with both, because they tie together nicely around this idea of giving private interests (e.g. business, and big business in this case) a disproportionate say in what happens in the classroom.

The concept is really twofold. One, why are we providing a fractured user experience for students? This disjointed, awkward meld is just a terrible experience: log in to the LMS, then log in again (and, the first time, accept terms and conditions that are, well, unreadable for average users), then go to a different landing page, find the thing you need to do, and do it. Two, students are being held hostage, purchasing access (again!) to course activities that, in any sense of justice, would be available to them free. Compounding this is the fact that most of these assessments are marked and count towards their credit, and there’s really no opt-out mechanism for students. Never mind that multiple-choice tests are flawed in many cases – letting the publisher decide how to assess, using the language of their choice, is downright ugly.

Pearson

So Pearson approached us about integrating with their MyLab solution about two years ago. We flatly said no; that request would have to come from a faculty member. Part of that is fed by my paranoia about integrations, especially when they require data transfers to work; part of it is that, frankly, any company could request access to our system if we allowed that sort of behaviour. Finally a faculty member came forward, and we proceeded with an integration between Pearson Direct and D2L. I will say that the people at D2L and Pearson were incredibly helpful; the process is simple, and we had it set up in hours (barring an issue with a Canadian data proxy, which needed some extra tweaking to get working correctly). The issue really is: should we be doing this? The LMS does multiple-choice testing very, very well. MyLab does multiple-choice testing very, very well. Why are we forcing students to go somewhere else when the system we’ve bought for considerable sums of money does the very same thing well? Well, the faculty member wanted it. What we’ve found is that most faculty who use these sorts of systems inevitably find that students don’t like jumping around for their marks.

Additionally, when the company has a long laundry list of problems with high-stakes multiple-choice testing, how does this engender faith in their system?

McGraw Hill

Again, all the people at McGraw Hill are lovely. But none of them, it seems, give a straight answer to any question about accessibility, security, or much else. That may be because they’re overworked; that may be because they don’t know. None of the stuff I’ve seen is officially WCAG compliant, though it may have been created before that requirement was in place, so they may get a pass on that one. The LTI connection is very greedy, requiring all the options to be turned on to function. Even though D2L obfuscates the user IDs, and the data bears no resemblance to an authenticated user (thanks, IMS inspector!), it still needs to be there or the connection fails. What kind of bunk programming is this? Why is it there if you don’t use it? Why require it? Those questions were asked, in that form, on two separate conference calls, and most went unanswered. I did receive a white paper about their security, which did little to answer my direct questions about what happens to the data (does it travel in a secure format? who has access to this feed?) after it leaves D2L and is in McGraw Hill’s domain.

Now, I’m paranoid about data and private companies. I immediately think that if you’re asking me to hand over data that I think is privileged (here’s a hint: it’s all privileged), you should be able to at least answer why, in the name of all that is unholy, you deserve to have access to it – and, if I agree to give it to you, what happens to that data when it leaves my system and sits on yours. That should be easy to answer and not require any sort of thought. It makes me wonder if anyone is asking these questions at all. They must? I mean, everyone is thinking about this sort of stuff, right?

D2L Regional Conference (Ontario) 2014 Recap

Thought I published this in late October, but I guess not. Whoops.

So this conference was all about ePortfolio and the Competencies tool – every presentation I went to had something to do with one of those two topics. If neither of those is interesting to you, then, well, this’ll be a boring recap.

I took part in a focus group around course-level analytics – I don’t know if D2L got much out of our conversations, but they were good ones. I hope they heard that the analytics tools are good at what they do, but aren’t robust enough for the complexity of the task.

On to the presentations.

Tracking Overall Expectations in Brightspace

This presentation was by the Hamilton Wentworth District School Board administrators, who are using the Competencies tools to track outcomes. It’s interesting that they use the language of learning goals (as a proxy for outcomes) and success criteria (as a proxy for activities). That language is much clearer for K-12, but would probably be a bit bristly for higher education. John Baker was in the meeting and suggested that there will be a D2L tool to copy outcomes from the standards source (whether that be a district school board, a provincial standard, or something else). If that comes to fruition, I could see the CEAB (Canadian Engineering Accreditation Board) lining up to get their standards in, which would be a big boon for D2L clients with Engineering departments. They also shared their videos on how to use the Competency tool.

At lunch I was sitting with a friend and John Baker came by, and I asked him about future improvements to ePortfolio – as we’ve been struggling with the presentation creation aspect of the tool. He mentioned some things that they were working on, specifically learning goals (unfortunately I couldn’t ask him more about this) and automatic page creation based on tags.

D2L ePortfolio and All About Me

Durham CDSB use ePortfolio as a way for students to think about where they’re going. Specifically, these are K-6 students – elementary students, for those who don’t reside in Canada. The program is all about identity forming and goal setting, and the approach is pretty much what you’d expect: students are prompted to answer questions about their future, and they periodically revisit those questions over the six years. One key thing is that they’ve developed a workflow page, which I think goes a long way to helping students with their tool use. Another interesting aspect is that the teacher selects artifacts for the student, puts them into a collection, and then shares that collection with the student.

Learning Portfolio: Year One

This was my co-presentation with Catherine Swanson, the Learning Portfolio Program Manager. We talked about some of the challenges of introducing a portfolio process across all disciplines and in all facets of campus, and how we did. As an institution, we did fine. In many cases where the ePortfolio tool was paired with a good use in a course (particularly where one has to make a choice or work through a messy process to “learn”), it was ultimately successful. Where the purpose was less directed, more fuzzy – well, things didn’t work as well.


Fusion 2014 – Day Three Recap

After a long night and an early morning, it was surprising that I was as functional as I was. At this point in the conference, I realized I’m only human, can’t do the million things I want to do in a city and at a conference, and should just relax. Instead I had a great conversation/rant about ePortfolio with some of the guys who actually develop the product. I told them about the long-term vision I’d have for the product (including learning goals that students can assess themselves on, with rubrics they design, with outcomes that they determine and assess) and some seemingly short-term improvements (like taking the improvements they made in 10.0 to the creation of homepages in courses and grafting them onto the ePortfolio presentation tool – which seems like such a no-brainer). We’ll see if my enthusiasm for getting the presentation-building piece of ePortfolio fixed is any help. I also intended to attend an 8:00 AM session, but felt that a run-through of our presentation might be time better spent.

I’d like to hear about the Executive Platform Panel if anyone got up at 8:00 to share their story of what they heard.

Implementing Learning Outcomes and Collecting Insightful Data in Higher Education

My co-worker Lavinia and I co-presented this one, and again it went well. This time it was a more traditional presentation, which I’ve embedded below:

We got some really interesting questions, and got into the nitty-gritty stuff about Competencies in D2L. I still have a huge issue with the language of “competency” (does not achieving a competency mean that you are incompetent?), and I guess that could be addressed with language packs if I were clever enough to think of what the structure should be called.

Expanding Widget Functionality Using Desire2Learn Interoperability Tools

Lone Star College uses jQuery UI (which works with LE 10.1, but not 10.3) and JavaScript with inline CSS to alter widget functionality. You can use D2L replace strings ({OrgUnitId}, {RoleId}) to change content in a widget depending on who is viewing it. They found that the most taxing thing was combing through the Inspect Element panel of Chrome or Firefox to determine what an item was named. The Valence documentation will be releasing this information as it’s needed, so that will be really helpful and speed up development time.

Lone Star College used the custom variables in External Learning Tools to pass the content of the replace strings to the project. They use it to embed request forms that are typically hosted external to the system – our ticketing system would be a local example. D2L CSS classes will also be made available to developers, so that you can make custom developments look more like D2L solutions.
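To make the replace-string idea concrete, here’s a rough sketch of how role-aware widget content might work. This is my own illustration, not Lone Star’s actual code: the role IDs and URL paths below are hypothetical, and you’d need to check your own org’s role list and routes.

```javascript
// Sketch: role-aware widget content driven by D2L replace strings.
// In a real widget, D2L substitutes {OrgUnitId} and {RoleId} server-side,
// so by the time this runs you have literal values like "12345" and "110".
// These role IDs are hypothetical – look up your org's actual role IDs.
const HYPOTHETICAL_ROLES = {
  instructor: "110",
  student: "111",
};

// Decide what the widget shows for a given viewer.
function widgetContent(orgUnitId, roleId) {
  if (roleId === HYPOTHETICAL_ROLES.instructor) {
    // Illustrative link target only – not a guaranteed D2L route.
    return '<a href="/instructor/grades?ou=' + orgUnitId + '">Enter grades</a>';
  }
  return '<a href="/my/grades?ou=' + orgUnitId + '">View my grades</a>';
}

// In the widget HTML you would call something like
//   widgetContent("{OrgUnitId}", "{RoleId}")
// and inject the result into the page (e.g. with jQuery's .html()).
```

The point is just that once the replace strings are substituted, they’re plain strings your JavaScript can branch on – which is exactly what makes the custom-variable passing in External Learning Tools useful.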

Continuous Delivery Model

So like everyone else who’s a D2L admin, it strikes me that Continuous Delivery might be the best thing, or the worst thing ever. I guess we’ll find out next August. Monthly releases will occur the first Tuesday of the month – then there are five application dates: the first two waves are application to test, the last three are application to production. So if you are scheduled for wave 2 and wave 3, you’ll have a week with the update on your test instance. As an admin you are locked into the waves except in odd circumstances. Honestly, I don’t particularly like that idea, but it’s what’s happening. I guess I’ll have to see how other schools feel about the upgrade paths.

The language around what D2L is doing with the Continuous Delivery Model is changing as well: an Upgrade is your last upgrade, the one to LSOne/10.4. Updates are what happen under the Continuous Delivery Model. Similarly, Release Updates are more of a roadmap – what’s coming and when it’s expected to impact you. Release Notes are what’s changing that particular month, and will be released three weeks prior to the application date.

I wasn’t any more scared of this than I was before. As we go through an upgrade, I spend two months poring over documentation, distilling it down to four or five pages, rewriting it for our user group, then pushing it out, applying it and training people on it. While major changes will be visible up to 12 months in advance, and are up to the admins to enable, it strikes me that this changes training and local documentation significantly for us – because we’ve customized our instance, particularly around language use, pretty heavily. I’m not asking D2L to do our documentation work – even though they’ve offered all clients their own videos – but I suspect I’m going to get a whole lot busier with a Continuous Delivery Model. We’ll see.

Closing Keynote – LeVar Burton

I am not a Trekkie and don’t really care about celebrities, but I enjoyed LeVar’s talk, although I struggle to make a connection between Reading Rainbow’s move from public resource to private enterprise and D2L. Talk about a missed opportunity – to have John Baker come out at the end and say, “all D2L clients have access to Reading Rainbow’s library of video assets starting now.” I wasn’t let down, per se, but it was a pillow-soft ending to a good conference. When the gold standard of this sort of thing is Apple announcing all OS upgrades are free – that’s an oomph.

Actually that wasn’t my end to the conference, it was with a beer with my favourite D2L’er Barry Dahl. Of course, losing my passport was another story…

Fusion 2014 – Day Two Recap

So anyone who has gone to a conference before will recognize that it’s a bit more like a marathon than a sprint – you really have to pace yourself to get everything in and pay attention to the things you want to. I will say that, for the second year in a row, the food at the Fusion conference was really good. I ended up talking with Paul Janzen of D2L about our impending PeopleSoft integration and the summer of integrations (Blackboard Collaborate, Pearson, maybe McGraw Hill, iClicker and Top Hat) we’re doing at McMaster. Ken Chapman also joined us at breakfast and asked a little bit about what we’d like to see out of e-mail. Frankly, I hadn’t thought about e-mail in years, because we’ve been mired in hell with IT trying to get the Google Mail integration working (not that it doesn’t functionally work, but IT has stalled us on admin-level access since we asked a year and a half ago). I said that I’d personally prefer the system not do e-mail at all, but that would be a difficult task considering we have people who segment their academic teaching e-mail on the LMS rather than their institutional e-mail. The problem for us is that we’re currently not configured to allow external e-mail. It will be interesting to see if IMAP/POP3 support comes to Brightspace sometime in the future – which makes a lot more sense.

Insights Focus Group

I wasn’t sure if I was going to be of any help in this, but I thought that, seeing as we’ve run some reports with the Insights tool, maybe I could glean better ways to deal with it. Basically, people had concerns with large data sets and not being able to run org-level reports (which is one of ours as well), the interface needs some improvement, and the time it takes to create ad hoc reports is too long. So those issues were at least noted. Let’s see how they get addressed going forward.

Blended Learning, Learning Portfolios and Portfolio Evaluation

Wendy Lawson and I co-presented this – however, it was mostly a Wendy show. She lived it, so she should have the floor. Basically this presentation outlines what we collaborated on for Wendy in her Med Rad Sci 1F03 course – a professional development course for first-semester, first-year science students going into the Medical Radiation Sciences (X-ray, Sonography, etc.).

We used the ePortfolio tool as a presentation mechanism – which I think worked well. I’m not sure we had a good flow of what we were going to show on each page, but other than that, it was a risk we felt was not big enough to impede our presentation.

We talked about how this redesigned course could use the Learning Portfolio to deliver the course in a blended manner (using ePortfolio/Learning Portfolio activities as the one-hour blended component) and how the students did with it. After working with Wendy on this presentation, I can say the stuff her students did was miles above what we saw on average, and I think next year the weaknesses she acknowledged in the course will be addressed.

Vendor Sessions

I honestly skipped these because, well, I’m not interested in getting more spam in my work e-mail. Plus, my wife was having surgery (everything’s good!) at this time so I wanted to call and make sure we connected before surgery began.

Connecting Learning Tools to the Desire2Learn Platform: Models and Approaches

Attending this session was particularly self-serving – I wanted to say hi to the presenter, George Kroner, who I’ve followed on Twitter for what seems like a million years, and the Valence stuff is stuff I feel I should know. I have a decent enough programming background – I can hack things together. So why am I not actually building this stuff?

George walked through UMUC’s process for integrating a new learning tool which can be broken down into three steps:

  1. Evaluate tool fit
  2. Determine level of effort to integrate
  3. Do it/Don’t do it

It seems so simple when I rewrite my notes, but it’s a really interesting set of steps – for instance, in determining the level of effort to integrate, you also have to think of post-integration support: who supports what when you integrate? Is it the vendor? Or your support desk? What’s the protocol for resolving issues – do people put in a ticket with you, and then you chase the vendor? Is the tool a one-time configuration that imports/exports nicely, or does it need to be configured every time it’s deployed in a course?

We did a Valence exercise next, and I merely want to link two tools that I’ll need when I circle back to doing some Valence work (soon, I swear!):

https://apitesttool.desire2learnvalence.com/ and http://www.jsoneditoronline.org/. The API test tool is a no-brainer really, and I knew about it before but never knew where it was – incredibly helpful for debugging your calls and seeing what you might get back. JSON editor online is new to me, and something that I really, really needed. I’m a JSON idiot – for some reason, JavaScript never resonated with me. I’ve always preferred Python or PHP for web scripting, despite the power of JavaScript. Guess I’ll have to put on my big boy pants and learn it all over again. Maybe Dr. Chuck will do a JSON course like he’s done with Python?
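For what it’s worth, the JSON side of Valence work is less scary than I make it sound. Here’s a minimal round-trip sketch in JavaScript – the object shape below is purely illustrative, not an actual Valence payload, so treat the field names as made up:

```javascript
// Sketch: round-tripping a JSON payload, the core skill for poking
// at any JSON API. The field names here are hypothetical examples,
// not the real Valence course-offering schema.
const rawResponse = JSON.stringify({
  Identifier: "12345",
  Name: "Intro to Widgets",
  Code: "WID-101",
});

// JSON.parse turns the wire format back into a plain object.
const course = JSON.parse(rawResponse);
console.log(course.Name); // "Intro to Widgets"

// JSON.stringify goes the other way; the third argument pretty-prints,
// which is roughly what jsoneditoronline.org does for you visually.
const pretty = JSON.stringify(course, null, 2);
console.log(pretty);
```

Pasting a real API response into the online editor, or pretty-printing it like this, makes it much easier to see which fields you actually need before writing any calls.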

Social Event at the Country Music Hall of Fame

The evening social event was great – the Hatch Show print is actually something I might just hang in my office. There was some shop talk, some fun stuff, a drone… Oh yeah, this happened: