Work = Life = ePortfolios

Even though it’s the weekend, it’s August 8. This is my work anniversary. So I’ll be taking a moment to write about work while I enjoy the evening.

After a long discussion with my direct boss, we decided that I needed to stop doing everything that I do and focus on doing a few things. I can say that it's a good idea; I'm a bit of a control freak. If you've worked with anyone like me, you've probably been witness to someone who has opinions, shares them at the drop of a hat, and will doggedly defend those beliefs and continue to circle back to fight for them again and again. Where this becomes a problem is when I feel the insane need to do everything. Redesign training? Check. Read the 100+ page document about the LMS upgrade? Check. Dissect it and rewrite it for our campus? Check. Update websites? Check.

I honestly do want other people to feel they have room to do work without me jumping into what they do. I like to think I'm a good person, and I fully recognize my flaws (and this is a big one). I'd also like to think that I've tried to make room for others to do stuff. Maybe I haven't been as effective at that as I'd like; I don't know, as I can't speak to how others feel. So I've vowed to step back from the LMS administration side of things to focus on the ePortfolio and badging projects that we have going.

Now, if you pay attention to LMSs, you'll know that D2L announced that their badging service will be available for clients on continuous delivery (Brightspace version 10.4 and higher) starting September 2015. We're also expanding the number of University-wide ePortfolio software solutions from just the D2L ePortfolio tool, adding another to complement where D2L eP is weak. There's been no official announcement yet; however, we've started internal training, and I should be writing about the process of getting this other software up and running, as I think it'll be quite an accomplishment in the time frames we've set.

Now you may wonder: where is the D2L ePortfolio tool weak?

Well, first, let me give you two caveats. One, I've given this information to our account manager, and I know for a fact it's gone up the chain to D2L CEO John Baker. I think we'll see improvements in the tool over the next while. I'm cautiously optimistic that by this time next year D2L's ePortfolio tool will be improved, and I keep an interested eye on the Product Idea Exchange inside the Brightspace Community for developments. The other caveat is that we're using ePortfolios in such a way that we want to leverage social opportunities for reflective practice. Frankly, this wasn't something that we knew when we started with ePortfolios, and hence wasn't part of our initial needs.

OK, the weaknesses (from my perspective):

1. The visual appeal of the tool is challenged. The web portfolios that are created are OK looking. It needs a visual design overhaul. Many aspects of the tool still bear the pre-version 10 look of D2L's products. It also needs to be intuitive for students to use, and part of that falls on the visual arrangement of tools. There are three sections of the ePortfolio dashboard where you can do "stuff" (the menus, the quick reflection box, and the right-hand side for filling out forms and other ephemera). I totally understand why you need complexity for a complex tool – especially one designed to be multi-purpose.

2. Learning goals, which are a huge part of reflective practice, are not built into the portfolio process. You can, yes, create an artifact that could represent your learning goal, and associate other artifacts as evidence of achieving that goal – but I'd ask you to go through that process as a user to see why it's problematic. Many, many clicks.

3. There is a distinct silo effect between the academic side and the personal side of things. If we extrapolate our learning goals to be equivalent to learning outcomes (and I feel they should be), those learning goals are still artifacts, while outcomes/competencies pulled from courses are labelled something else. Again, I don't think the design of the ePortfolio tool is aimed at this idea. However, if we're serious about student-centred learning, shouldn't we be serious about what the student wants to get out of this experience, and treat what they want, whatever that is, at the same level as what teachers, accreditation bodies, departments, or schools feel they should know?

4. Too many clicks to do things. Six clicks to upload a file as an artifact is too many.

5. Group portfolios are possible, but so challenging that we've instituted a best practice: organize the group socially and make one person responsible for collecting artifacts and submitting the portfolio presentation. Even if you want to take on the challenge, when you share a presentation with another person and give them edit rights, the tool still doesn't let them edit in the sense that you would expect the word to mean. You can add your stuff to the presentation, but you can't do squat with anything else in it. In some ways that makes sense, but functionally it's a nightmare. What if your group member is a total tool and puts their about-me stuff on the wrong page? What if they made an error that you catch? Why do you have to make them fix it, instead of the sensible thing: being able to fix it yourself?

With all that said, people tend to like the tool once they figure it out. The problem is that many don't get past that hurdle without help, and there's only so much help to go around.

What If… We Made the LMS Truly Modular?

I think we all understand that the LMS as a tool is a pretty cruddy one. It does a lot of things – some well, many not in the ways that you, as an instructor, would prefer. At last year's Fusion conference, we heard D2L speak about their LTI integrations, and how Brightspace was the integrated learning platform. I've heard that many people envision what D2L are selling as a hub-and-spoke system – where Brightspace (or the Learning Environment, or, even more crudely, the LMS) acts as the hub, and the connected tools are the spokes. I wonder what things would look like if we extrapolated that idea out to the nth degree?

For instance, you could replace the gradebook with a tool that worked for you – Google Spreadsheets, Excel Online, or a box plot device. Chalk and Wire has replaced Canvas' gradebook in one instance, and I'm sure the inefficient tools of any LMS are things one would want to replace. Does that mean all of them? For some snarky folks, yeah, that would mean all of them. Those folks should just teach on the open web.

The LMS also provides a fairly elegant way to get student data into your course area. That's probably not something instructors or faculty would want to do themselves. Hell, I'm glad we have someone on my team who wants to do it, because it's an ugly job.

And for many, the content management of these systems is pretty good. In D2L's case, the way the content tool works is pretty decent (now, if hiding an item meant really hiding any evidence of the item, we'd be talking). Imagine if you could take elements of one system you like (say, Angel) and add in features to allow customization for individual course needs?

Or how about replacing the groups tool with some other mechanism? Quizzing is something that people are already pushing to publishers like Pearson or McGraw Hill, but what about up-and-comers like Top Hat or Poll Everywhere (which, let's face it, is essentially a quiz engine wrapped in a polling tool)? Discussions become a Disqus link at the bottom of an item in the content area… there's lots of clever fun to be had with this idea.

Now, to some extent, you can do this already (depending on how locked down your system is by your systems administrators). Change the navigation bars to point to tools you use connected through LTI – however, you'll have some issues doing this yourself. The biggest one is that at some point you'll run into a technological problem that you can't solve easily. I suspect that's what the Internet is for.
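
To make that concrete, here's a rough sketch of what a basic LTI 1.1 launch looks like under the hood – just an illustration using the Python standard library, with placeholder keys and URLs rather than anything from a real configuration. An LTI 1.1 launch is an ordinary form POST signed with OAuth 1.0a:

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote

def sign_lti_launch(launch_url, consumer_key, consumer_secret, params):
    """Add an OAuth 1.0a HMAC-SHA1 signature to LTI 1.1 launch params.

    An LTI 1.1 launch is just a form POST signed like any other
    OAuth 1.0a request body."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: method & encoded URL & sorted, encoded params.
    param_string = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(all_params.items())
    )
    base_string = "&".join(
        ["POST", quote(launch_url, safe=""), quote(param_string, safe="")]
    )
    # Signing key is the encoded secret plus an empty token secret.
    digest = hmac.new(
        f"{quote(consumer_secret, safe='')}&".encode(),
        base_string.encode(),
        hashlib.sha1,
    ).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params

# Placeholder values -- swap in whatever your tool provider hands you.
launch = sign_lti_launch(
    "https://tool.example.com/launch",
    "my-consumer-key",
    "my-consumer-secret",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "gradebook-replacement-1",
    },
)
```

The tool on the other end recomputes the same signature with the shared secret, which is why none of this requires handing over credentials – just the launch data you choose to send.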

At some point, one (or many) of you will point out, quite rightly, that this sounds an awful lot like what the web is (or, more accurately, was). And my answer is: yeah, that's about right. We used to have websites that we plugged bits of HTML into to make what we needed. Until it was commodified.

You can probably draw a comparison between the modern LMS and Facebook – mostly everyone uses it, but they'd probably use a better system if everyone else went to it first. Universities would consider another model as long as everyone else came along. However, in the current higher education system, where seasonal and precarious work dominates instructional positions, there's no time or desire to develop a better way. The LMS works as a co-conspirator in the commodification of education. It's not directly responsible, but it plays a part in making education at university an easily packaged and consumed affair. The LMS isn't alone; Coursera, Udacity, and those types of MOOCs are equally complicit, or maybe even more so.

And that’s a more likely reason we’ll see a modular LMS. More vendors get opportunities to get into different institutions that aren’t their current clients.

i>Clicker Integration with D2L/Brightspace

I’m sure i>Clicker won’t be particularly happy with this post; that’s OK, because I’m not happy with them. The one thing you, as an LMS administrator, should feel very, very sanctimonious about is sharing data. We have an internal policy that we’ll never share student numbers. I also think that we shouldn’t share usernames, though some folks feel that’s OK. I guess it depends on your username convention. First-dot-last as a convention is great, except in cases like these, when you’re trying to maintain privacy. Now, if you’re making a single sign-on connection through LTI, you can have the originating LMS username obfuscated, which essentially boils down to a paired database table with two usernames in it – the one handed off to the third party via LTI, and the one in the LMS. Typically, I’m a little bristly about this as well, because you become reliant on the system not changing how this is handled, and if there’s an HTTP call mixed in there, it could still be sniffed out in transit… but that’s not what this post is about.
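
To illustrate the pairing idea (and this is just a sketch of the concept, not how D2L implements it): derive an opaque ID from the real username with a keyed hash, hand that to the third party, and keep the pair in a table so you can map it back when results come home.

```python
import hashlib
import hmac

PAIRING_SECRET = b"rotate-me-occasionally"  # hypothetical site-wide secret

# The "paired table": opaque ID -> real LMS username.
pairing_table = {}

def obfuscated_id(username):
    """Hand the third party a stable, opaque ID instead of first.last.

    A keyed hash means the vendor can't reverse it to the real
    username, but the same user always maps to the same opaque ID."""
    opaque = hmac.new(
        PAIRING_SECRET, username.encode(), hashlib.sha256
    ).hexdigest()[:16]
    pairing_table[opaque] = username
    return opaque

print(obfuscated_id("first.last"))  # looks nothing like the real name
```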

Since last year, I've been bugged by i>Clicker to do an integration to make sign-up with their service seamless for students. There have also been some requests for integration between i>Clicker and the gradebook in D2L from some of the heavier users. The first request I've always put off because, frankly, I don't do anything a private enterprise wants me to do; they aren't my boss, nor are they a member of my community. They serve one purpose, and that's collecting money. The second, however, does benefit a select user group on campus. We'd been thinking about this since last year's summer of integration, and April had a couple of minutes free (more on that in forthcoming blog posts), so we scheduled some time to finally make it happen.

We scheduled a call to walk us through the integration and see if there was anything we had questions about. I don't need anyone to walk me through an integration, but I often like to raise privacy, data collection policies, and other awkward questions with the poor sucker on the other end of the line. Typically they are ill informed. I didn't even get to the awkward stage, as a request was made that was frankly shocking. I was asked to turn on passing the Org Defined ID – our student number – to facilitate the connection. Not just at the tool level, but in the configuration for the Tool Information. See below:

[Screenshot: the tool consumer configuration panel]

Now, if I understand this panel correctly, it not only changes the configuration settings for the tool in question, but for all the tools. ALL the tools. So not only i>Clicker, but anything else you have connected through the External Learning Tools administration panel. I asked our technical account manager about changing it, and he basically said, "yeah, that's not good."

So in the middle of this exchange, where I explained that we don't pass the student number under any circumstances, the i>Clicker representative seemed a little miffed about my protests. He wasn't particularly nasty about it, but he certainly didn't seem to understand why this was an issue at all. Looking at i>Clicker's website, students are asked for their ID. It's one thing to ask a student to give up their ID (consent); it's another to set up access to everyone's IDs (no consent).

What makes me wonder is: how many other institutions even give this a thought? Surely we can't be the first to balk at the idea of handing over student data like this? Or maybe we are being too paranoid? I mean, I guess there are people who have faith in third-party vendors, but I'd prefer having a license stating exactly what they are doing with the data, how they're using it, and how long it's retained, with an agreement signed between the two parties. That way, if the external party violates the agreement, the institution can hold them liable for the data breach – something a little stiffer than "oops".

Extending D2L Reports Using Python

So before I get going on this long post, a little bit about me. I fancy myself a programmer. In fact, the first "real" job I had out of college (the second time I went through college; the first job the first time through was video store clerk) was programming a PHP/MySQL-driven website in 2003-ish (on a scavenged Windows 2000 server running on an HP box). So I have some chops. Of course, educational technology has moved away from being a multiple-skillset job, where a little programming, a little graphic design, and a little instructional design were all part of what one did; now it's been parsed out into teams. Anyway, I haven't had much of an opportunity to write any code in a long time. The last thing I did seriously was some CSS work on my website about four months ago, which is not really programming.

With that all said, the LMS administration I do requires little programming skill; being an admin, while it has its perks, isn't exactly a technical job. I do recognize that some admins deal with enrollments and SIS integrations that require a whole skillset – one that I could handle, but don't.

As part of my admin jobs, I run monthly reports on how many items are created in D2L's ePortfolio – these reports include date and time, username, and item type. We use this as a gauge of how active students are, and cross-referenced with other information it's incredibly useful, letting us know things like: 22,383 items were created by 2,650 unique users in ePortfolio in November 2014 (actual numbers).

Using Excel, I can see if a month's users have ever used ePortfolio before, so I can also track how many users are new to the tool each month. However, the one thing I can't easily look at is how frequently a user interacts with ePortfolio. With over 125,000 records in total, it's not something I want to do by hand. So I wrote a script. In Python. A language I don't know well.

If you want to avoid my lengthy diatribe about what I'm doing and skip right to the code, it's on GitHub here: https://github.com/jonkruithof/python-csv-manipulation-d2l-reports.

Basically the code does this:

Look at the username and see if it exists in memory already. If it does, find the difference since that user's last entry, write that information to a CSV, and then overwrite the existing entry with this one. If it does not exist, write it to memory. Loop through, and at the end you'll have a large-ass CSV that you'd have to manipulate further to get some sense of whether users are frequently using the ePortfolio tool (which might indicate they see value in the reflection process). I do that in CSVed, a handy little Windows program that handles extremely large CSVs quite well. A quick sort by username (an improvement to the Python code coming sometime soonish) and we have something parseable by machine. I'm thinking of taking the difference in creation times and creating some sort of chart that shows the distribution of differences… I'm not sure what it'll look like.
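
For those who'd rather see the shape of it than read the repo, here's a minimal sketch of the same idea – not the exact GitHub version, and I'm assuming the report export has "username" and "created" columns, arrives in chronological order, and uses a known date format:

```python
import csv
from datetime import datetime

DATE_FORMAT = "%Y-%m-%d %H:%M"  # assumption: match your report's export format

last_seen = {}  # username -> creation time of that user's previous item

with open("eportfolio_report.csv", newline="") as src, \
        open("differences.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["username", "hours_since_last_item"])
    for row in reader:
        user = row["username"]
        created = datetime.strptime(row["created"], DATE_FORMAT)
        if user in last_seen:
            # Difference since this user's previous item, in hours.
            delta = created - last_seen[user]
            writer.writerow([user, round(delta.total_seconds() / 3600, 2)])
        # Overwrite (or create) the in-memory entry with this one.
        last_seen[user] = created
```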

Publisher Integrations with D2L

So I've tried to write this post several times. Tried to work up some semblance of why anyone would care about Pearson or McGraw Hill and integrating either publisher's attempt at crafting their own LMS into the one that our institution has purchased (at no small cost).

Then it hit me while going through Audrey Watters' great writing (in fact, go over there and read the whole damn blog, then come back here for a quick hit of my snark). The stuff I've been challenging Pearson and McGraw Hill on is stuff that we should be talking about. I'll encapsulate my experiences with both, because they tie together nicely this idea of giving private interests (e.g. business, and big business in this case) a disproportionate say in what happens in the classroom.

The concept is really twofold. One, why are we providing a fractured user experience for students? This disjointed, awkward meld between the two parties is just a terrible experience: log in to the LMS, then log in again (once, then accept terms and conditions that are, well, unreadable for average users), then go to a different landing page, find the thing I need to do, and do it. Two, students are being held hostage, purchasing access (again!) to course activities that, in any sense of justice, would be available to them free. Compounding this is the fact that most of these assessments are marked and count towards their credit. And there's really no opt-out mechanism for students. Never mind the fact that multiple-choice tests are flawed in many cases; to let the publisher decide how to assess, using the language of their choice, is downright ugly.

Pearson

So Pearson approached us about integrating with their MyLab solution about two years ago. We flatly said no; that request would have to come from a faculty member. Part of that is fed by my paranoia about integrations, especially when they require data transfers to work; part of it is that, frankly, any company could request access to our system if we allowed that sort of behaviour. Finally a faculty member came forward, and we proceeded with an integration between Pearson Direct and D2L. I will say that the parties at D2L and Pearson were incredibly helpful; the process is simple and we had it set up in hours (barring an issue with using a Canadian data proxy, which needed some extra tweaking to get working correctly). The issue really is: should we be doing this? The LMS does multiple-choice testing very, very well. MyLab does multiple-choice testing very, very well. Why are we forcing students to go somewhere else when the system we've bought for considerable sums of money does the very same thing well? Well, the faculty member wanted it. What we've found is that most faculty who use these sorts of systems inevitably find that students don't like jumping around for their marks.

Additionally, when the company has a long laundry list of problems with high stakes multiple choice testing, how does this engender faith in their system?

McGraw Hill

Again, all the people at McGraw Hill are lovely. But none of them, it seems, gives a straight answer to any question about accessibility, security, or anything else. That may be because they're overworked; it may be because they don't know. None of the stuff I've seen is officially WCAG compliant, though it may have been created before that requirement was in place, so they may get a pass on that one. The LTI connection is very greedy, requiring all the options to be turned on to function: even though D2L obfuscates any user IDs, and the result bears no resemblance to an authenticated user (thanks, IMS inspector!), the data still needs to be there or the connection fails. What kind of bunk programming is this? Why is it there if you don't use it? Why require it? Those questions were asked, in that form, in two separate conference calls, and most went unanswered. I did receive a white paper about their security, which did little to answer my direct questions about what happens to the data (does it travel in a secure format? who has access to this feed?) after it leaves D2L and lands in McGraw Hill's domain.
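
If you want to see what a launch hands over without firing up the IMS inspector, it's a five-minute script. The field names below are the standard LTI 1.1 ones; the sample launch is made up:

```python
# Standard LTI 1.1 parameter names that identify the person launching.
PII_FIELDS = {
    "user_id",
    "lis_person_name_given",
    "lis_person_name_family",
    "lis_person_name_full",
    "lis_person_contact_email_primary",
    "lis_person_sourcedid",
}

def flag_pii(launch_params):
    """Return the subset of an LTI launch that identifies the user."""
    return {k: v for k, v in launch_params.items() if k in PII_FIELDS}

# A made-up launch for illustration.
sample = {
    "lti_message_type": "basic-lti-launch-request",
    "user_id": "a1b2c3d4",  # the opaque value when D2L anonymizes it
    "lis_person_name_full": "First Last",
    "roles": "Learner",
}
print(flag_pii(sample))
# {'user_id': 'a1b2c3d4', 'lis_person_name_full': 'First Last'}
```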

Now, I'm paranoid about data and private companies. I immediately think that if you're asking me to hand over data that I think is privileged (here's a hint: it's all privileged), you should be able to at least answer why, in the name of all that is unholy, you deserve to have access to it, and, if I agree to give it to you, what happens to that data when it leaves my system and sits on yours. That should be easy to answer and not require any sort of thought. It makes me wonder if anyone is asking these questions at all. They must? I mean, everyone is thinking about this sort of stuff, right?

D2L Regional Conference (Ontario) 2014 Recap

Thought I published this in late October, but I guess not. Whoops.

So this conference was all about ePortfolio and the Competencies tool – every presentation I went to had something to do with one of those two topics. If neither of those is interesting to you, then, well, this'll be a boring recap.

I took part in a focus group around course-level analytics – I don't know if D2L got much out of our conversations, but they were good ones. I hope they heard that the analytics tools are good at what they do, but aren't robust enough for the complexity of the task.

On to the presentations.

Tracking Overall Expectations in Brightspace

This presentation was by the Hamilton-Wentworth District School Board administrators, who are using the Competencies tools to track outcomes. It's interesting that they use the language of learning goals (as a proxy for outcomes) and success criteria (as a proxy for activities). That language is much clearer for K-12, but would probably be a bit bristly for higher education. John Baker was in the meeting and suggested that there will be a D2L tool to copy outcomes from the standards source (whether that be a district school board, a provincial standard, or other). If that comes to fruition, I could see the CEAB (Canadian Engineering Accreditation Board) lining up to get their standards in, which would be a big boon for D2L clients who have Engineering departments. They also shared their videos on how to use the Competency tool.

At lunch I was sitting with a friend and John Baker came by, and I asked him about future improvements to ePortfolio – as we’ve been struggling with the presentation creation aspect of the tool. He mentioned some things that they were working on, specifically learning goals (unfortunately I couldn’t ask him more about this) and automatic page creation based on tags.

D2L ePortfolio and All About Me

Durham CDSB use ePortfolio as a way for students to think about where they are going. Specifically, these are K-6 students – elementary students, for those who don't reside in Canada. The program, it seems, is all about identity forming and goal setting. Interesting aspects, and the approach is pretty much what you'd expect: students are prompted to answer questions about their future, and they periodically revisit those questions over the six years. One key thing is that they've developed a workflow page, which I think goes a long way to helping students with their tool use. Another interesting aspect is that the teacher selects artifacts for the student, puts them into a collection, and then shares that collection with the student.

Learning Portfolio: Year One

This was my co-presentation with Catherine Swanson, the Learning Portfolio Program Manager. We talked about some of the challenges of introducing a portfolio process across all disciplines, in all facets of campus, and how we did. As an institution, we did fine. In many of the cases where the ePortfolio tool was used in conjunction with a good use in a course (particularly where one has to make a choice or work through a messy process to "learn"), it was ultimately successful. Where the purpose was less directed, more fuzzy, well, things didn't work as well.

 

Fusion 2014 – Day Three Recap

After a long night and an early morning, it was surprising that I was as functional as I was. At this point in the conference, I realized I'm just human, can't do the million things I want to do in a city and at a conference, and should just relax. Instead I had a great conversation/rant about ePortfolio with some of the guys who actually develop the product. I told them about the long-term vision I'd have for the product (including learning goals that students can assess themselves on, with rubrics they design, with outcomes that they determine and assess) and some seemingly short-term improvements (like taking the improvements they made in 10.0 to the creation of homepages in courses and grafting them onto the ePortfolio presentation tool – which seems like such a no-brainer). We'll see if my enthusiasm for getting the presentation-building piece of ePortfolio fixed is any help. I also intended to attend an 8:00 AM session, but felt that a run-through of our presentation might be time better spent.

I'd like to hear about the Executive Platform Panel – if anyone got up at 8:00, please share what you heard.

Implementing Learning Outcomes and Collecting Insightful Data in Higher Education

My co-worker Lavinia and I co-presented this one, and again it went well. This time it was a more traditional presentation, which I’ve embedded below:

We got some really interesting questions and got into the nitty-gritty of Competencies in D2L. I still have a huge issue with the language of "competency" (does not achieving a competency mean that you are incompetent?), and I guess that could be addressed with language packs if I were clever enough to think of what the structure should be called.

Expanding Widget Functionality Using Desire2Learn Interoperability Tools

Lone Star College uses jQuery UI (which works with LE 10.1, but not 10.3) and JavaScript with inline CSS to alter widget functionality. You can use D2L replace strings ({OrgUnitId}, {RoleId}) to change the content of a widget depending on who is viewing it. They found that the most taxing thing was combing through the Inspect Element panel of Chrome or Firefox to determine the name of the item; the Valence documentation will be releasing this information as needed, which will be really helpful and will speed up development time.

Lone Star College used the custom variables in the External Learning Tools to pass custom variables to the project – basically the content of the replace strings (sketched below). They use it to embed request forms that are typically hosted external to the system – our ticketing system would be a local example. D2L CSS classes will also be made available to developers, so that you can make custom developments look more like D2L solutions.
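
The mechanics are worth spelling out: you map a custom variable like orgunitid to the replace string {OrgUnitId}, D2L expands it at launch time, and LTI 1.1 delivers it to the tool prefixed with custom_. Here's a sketch of the receiving end, with made-up values:

```python
def read_custom_vars(launch_params):
    """Pull the tool-defined custom_ fields out of an LTI 1.1 launch.

    LTI 1.1 prefixes every custom variable with 'custom_', so a
    replace string mapped to 'orgunitid' arrives as 'custom_orgunitid'
    with the value already expanded by D2L."""
    prefix = "custom_"
    return {
        k[len(prefix):]: v
        for k, v in launch_params.items()
        if k.startswith(prefix)
    }

# What a launch might carry after D2L expands {OrgUnitId} and {RoleId}.
launch = {
    "lti_message_type": "basic-lti-launch-request",
    "custom_orgunitid": "123456",  # was {OrgUnitId}
    "custom_roleid": "110",        # was {RoleId}
}
print(read_custom_vars(launch))   # {'orgunitid': '123456', 'roleid': '110'}
```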

Continuous Delivery Model

So, like everyone else who's a D2L admin, it strikes me that Continuous Delivery might be the best thing, or the worst thing ever. I guess we'll find out next August. Monthly releases will occur the first Tuesday of the month; then there are five application dates – the first two waves are application to test, the last three are application to production. So if you are scheduled for wave 2 and wave 3, you'll have a week with the update on your test instance. As an admin, you are locked into the waves except in odd circumstances. Honestly, I don't particularly like that idea, but it's what's happening. I guess I'll have to see how other schools are feeling about the upgrade paths.

The language around what D2L are doing with the Continuous Delivery Model is changing as well: an Upgrade is now your last Upgrade, the one to LSOne/10.4; Updates are what happen under the Continuous Delivery Model. Similarly, Release Updates are more of a roadmap – what's coming and when it's expected to impact you. Release Notes are what's changing in that particular month, and will be released three weeks prior to the application date.

I wasn't any more scared of this than I was before. As it stands, when we go through an upgrade I spend two months poring over documentation, distilling it down to four or five pages, rewriting it for our user group, then pushing it out, applying the upgrade, and training people on it. While major changes will be visible up to 12 months in advance, and are up to the admins to enable, it strikes me that this changes training and local documentation significantly for us, because we've customized our instance pretty heavily, particularly around language use. I'm not asking D2L to do our documentation work – even though they've offered all clients their own videos – but I suspect I'm going to get a whole lot busier with the Continuous Delivery Model. We'll see.

Closing Keynote – LeVar Burton

I am not a Trekkie and don't really care about celebrities, but I enjoyed LeVar's talk, although I struggle to make a connection between Reading Rainbow's move from public resource to private enterprise and D2L. Talk about a missed opportunity to have John Baker come out at the end and say, "all D2L clients have access to Reading Rainbow's library of video assets starting now." I wasn't let down, per se, but it was a pillow-soft ending to a good conference. When the gold standard of this sort of thing is Apple announcing that all OS upgrades are free – that's an oomph.

Actually, that wasn't my end to the conference; it was a beer with my favourite D2L'er, Barry Dahl. Of course, losing my passport was another story…

Fusion 2014 – Day Two Recap

So anyone who has gone to a conference before will recognize that it's a bit more like a marathon than a sprint – you really have to pace yourself to get everything in and pay attention to the things you want to. I will say that, for the second year in a row, the food at the Fusion conference was really good. I ended up talking with Paul Janzen of D2L about our impending PeopleSoft integration and the summer of integrations (Blackboard Collaborate, Pearson, maybe McGraw Hill, iClicker and Top Hat) we're doing at McMaster. Ken Chapman also joined us at breakfast and asked a bit about what we'd like to see out of e-mail. Frankly, I hadn't thought about e-mail in years, because we've been mired in hell with IT trying to get the Google Mail integration working (not that it doesn't functionally work, but IT has stalled us on admin-level access since we asked a year and a half ago). I said that I'd personally prefer the system not do e-mail at all, but that would be a difficult change, considering we have people who keep their academic teaching e-mail on the LMS rather than their institutional e-mail. The problem for us is that we're currently not configured to allow external e-mail. It will be interesting to see if IMAP/POP3 support comes to Brightside sometime in the future – which makes a lot more sense.

Insights Focus Group

I wasn't sure if I was going to be of any help in this, but I thought that, seeing as we've run some reports with the Insights tool, maybe I could glean better ways to deal with it. Basically, people had concerns with large data sets, with not being able to run org-level reports (one of our concerns as well), with an interface that needs some improvement, and with how long it takes to create ad hoc reports. So those issues were at least noted. Let's see how they get addressed going forward.

Blended Learning, Learning Portfolios and Portfolio Evaluation

Wendy Lawson and I co-presented this – however, it was mostly a Wendy show. She lived it, so she should have the floor. Basically, this presentation outlines what we collaborated on for Wendy's Med Rad Sci 1F03 course – a professional development course for first-semester, first-year science students going into the Medical Radiation Sciences (X-ray, Sonography, etc.).

We used the ePortfolio tool as the presentation mechanism – which I think worked well. I'm not sure we had a good flow of what we were going to show on each page, but other than that, it was a risk that we felt was not big enough to impede our presentation.

We talked about how this redesigned course could use the Learning Portfolio to deliver the course in a blended manner (using ePortfolio/Learning Portfolio activities as the one-hour blended component) and how the students did with it. After working with Wendy on this presentation, I can say the stuff her students did was miles above what we saw on average, and I think next year the weaknesses she acknowledged in the course will be addressed.

Vendor Sessions

I honestly skipped these because, well, I’m not interested in getting more spam in my work e-mail. Plus, my wife was having surgery (everything’s good!) at this time so I wanted to call and make sure we connected before surgery began.

Connecting Learning Tools to the Desire2Learn Platform: Models and Approaches

Attending this session was particularly self-serving – I wanted to say hi to the presenter, George Kroner, who I've followed on Twitter for what seems like a million years, and the Valence material is stuff I feel I should know. I have a decent enough programming background – I can hack things together. So why am I not actually building this stuff?

George walked through UMUC’s process for integrating a new learning tool which can be broken down into three steps:

  1. Evaluate tool fit
  2. Determine level of effort to integrate
  3. Do it/Don’t do it

It seems so simple when I rewrite my notes, but it's a really interesting set of steps. For instance, in determining the level of effort to integrate, you also have to think of post-integration support: who supports what when you integrate? Is it the vendor? Or your support desk? What's the protocol for resolving issues – do people put in a ticket with you, and then you chase the vendor? Is the tool a one-time configuration, and does it import/export nicely, or does it need to be configured every time it's deployed in a course?

We did a Valence exercise next, and I want to merely link two tools that I'll need when I circle back to doing some Valence work (soon, I swear!):

https://apitesttool.desire2learnvalence.com/ and http://www.jsoneditoronline.org/. The API test tool is a no-brainer really; I knew about it before but never knew where it was – incredibly helpful for debugging your calls and seeing what you might get back. JSON Editor Online is new to me, and something that I really, really needed. I'm a JSON idiot – for some reason, JavaScript never resonated with me. I've always preferred Python or PHP for web scripting, despite the power of JavaScript. Guess I'll have to put on my big-boy pants and learn it all over again. Maybe Dr. Chuck will do a JSON course like he's done with Python?
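
For what it's worth, the two play together nicely: generate an authenticated URL with the API test tool (it appends the signed query parameters for you), then fetch and pretty-print the JSON. The URL below is a placeholder – substitute one the test tool gives you:

```python
import json

import requests  # pip install requests

# Paste in a URL generated by the API test tool -- it already carries the
# signed authentication query parameters, so no extra auth code is needed.
signed_url = (
    "https://your-instance.desire2learn.com"
    "/d2l/api/lp/1.0/users/whoami?x_a=..."  # placeholder, not a real token
)

response = requests.get(signed_url)
response.raise_for_status()

# Pretty-printing is the poor man's JSON editor.
print(json.dumps(response.json(), indent=2))
```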

Social Event at the Country Music Hall of Fame

The evening social event was great – the Hatch Show Print is actually something I might just hang in my office. There was some shop talk, some fun stuff, a drone… Oh yeah, this happened:

Fusion 2014 – Unconference and Day One Recap

Instead of a big post, I'm going to break my experiences up into three distinct posts because a) it'll get me to post more frequently, b) that's something I want to do, and c) no one wants to read a monolithic block of text.

So I flew out of Buffalo, and it was an interesting time crossing the border, where I got the third degree about where I was going and what I was doing. I think they thought I was being paid to speak at a conference; next time I'll have to change the language I use to say something like "attending a conference." After the border and the pornoscanners at the airport, I arrived in Nashville. Now, I'm not that worldly, but I've been to a few places. Nashville is not one of my favourites – not because the city is particularly terrible, but it's not particularly walkable, and it has, well, public transportation issues. Outside of those quibbles (which are big problems for me), it's a fine city with some fine people.

The Unconference

One of the best things that has happened at Fusion for the last 5 or 6 years is the Unconference. I missed the first few because I was never able to actually get to Fusion, but the last couple of years I've been able to go. This kick-off event is fun, social, and often leads to previously undiscovered ideas and new ways to break D2L. I didn't stick around for the full discussion because I was a bit tired, but the one thing I did learn was that VHS (Virtual High School) uses JavaScript to develop interactive elements of courses. Now, that's not a particularly shocking example, but combine it with the Valence API and maybe you could do some in situ testing and push results to the gradebook. Later, a few of us went out for a nightcap and a good time was had by all.

Fusion Day One

Typically the first day has a ton of beginner and introduction sessions in the morning, so I ended up meeting with my co-presenter to go over our session for the next day. The sessions I did attend were incredibly useful, and I learned a ton about how other places develop in-house solutions. In fact, most of my attendance was at sessions around External Learning Tools or the Valence API.

Keynote

So John Baker's keynote threw the audience a little; the big takeaway was that Desire2Learn is now D2L, and the Learning Environment, or the LMS, is now called Brightside – er, Brightspace. I guess there's a thinly veiled jab at the competitors being the dark side, but I can't say that I understand the need to change names. Lots of people at the conference have suggested that Desire2Learn seems a very 1990s thing, reminiscent of the dot-com boom/bust. I can't say that they're wrong. However, it would've been nice to have been told that officially. I'm a bit of a smartass when it comes to names, so my immediate nature is to shorten this to its logical short form: BS. Not necessarily flattering. I don't think D2L is big enough to have gotten out in front of it and shortened it to B, which in and of itself is not a good acronym either (B product? B movie?).

I'm not the only person looking for a short form for it, either. Considering I don't know the difference between Brightspace and the LE (so is the new version called Brightspace version 1? or are the existing products still called LE 10.3?)… so many questions. None of them answered.

I'm sure many will talk about Chris Hadfield's inspirational speech. It was great and all, and I certainly appreciate what he's done; I just don't see the connection he brings to the conference.

Integrating Neat Tools and Activities into your Course through LTI

This session was all about External Learning Tools – which we've had a summer of dealing with so far. This particular session talked about integrations between SoftChalk, SWoRD and TitanPad. I'm familiar with SoftChalk through a series of courses I'm taking at Brock University, and I can say that I've never been particularly impressed with the product – perhaps that's the way Brock is using it, or the way the course was developed, or a limitation of Sakai, Brock's LMS. Either way, this session demonstrated the connection between SoftChalk activities hosted in Content and adding grades to the Gradebook – certainly a more interesting way to deal with whatever you design in SoftChalk.

SWoRD was a particularly interesting case – although I don't know how robust or deep the integration was (I suspect D2L was merely passing enrollment data to SWoRD). SWoRD is a peer assessment tool that might be an alternative to something like peerScholar.

I'm always happy to see Etherpad clones, and TitanPad was used as an example; if you've hosted an Etherpad clone at your institution, you can pass usernames to the Etherpad for auto-tracking in the document. I'm not sure how robust Etherpad is for, say, classes of 600+, but that would be an interesting experiment.

One thing the D2L presenter said was that in the configuration of the external tools, checking the option to send User ID means sending the anonymized version of the username. That's interesting, because the language used in the External Learning Tools dialog would benefit from that tidbit – we've turned it off in most cases (and seem to have no issue with students/instructors logging into the external tool) because we thought it would violate our University privacy rules.

The Secret to APIs

The second session of the day for me was around the use of Valence (D2L's API) to create personal discussions in a course: discussions with an enrollment of one student plus the instructor. The big takeaway was that in courses with set enrollments, you can save a ton of time by writing a script to do the repetitive, boring stuff: create a group of one, enroll a student in it, then create a discussion topic and restrict it to that group. It was interesting to see C# used as the middleware programming language – I thought C# was out of favour, but maybe not? PHP would've been easier, and Perl/Python might've been faster for the task. Either way, this is the work that earned Ryan Mistura the Desire2Excel award in the student category. Cool stuff.
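
The shape of that script is roughly this – in Python rather than C#, and with hypothetical helpers standing in for the actual Valence calls (I'm sketching the loop, not the exact routes):

```python
# Hypothetical wrappers around the relevant Valence routes -- the loop is
# the point here, not the exact API signatures.
def create_group(org_unit_id, category_id, name): ...
def enroll_in_group(org_unit_id, group_id, user_id): ...
def create_restricted_topic(org_unit_id, forum_id, name, group_id): ...

def create_personal_discussions(org_unit_id, category_id, forum_id, students):
    """One private discussion per student: a group of one, plus a topic
    restricted so only that group (and the instructor) can see it."""
    for student in students:
        group_id = create_group(
            org_unit_id, category_id, f"Journal - {student['name']}"
        )
        enroll_in_group(org_unit_id, group_id, student["user_id"])
        create_restricted_topic(
            org_unit_id, forum_id, f"{student['name']}'s journal", group_id
        )

# 'students' would come from a classlist export or an enrollments call.
```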

Solution Spotlight/D2L Year Recap

Nick Oddson and Ken Chapman handled the recap of the D2L year, focusing on the extensibility of the platform. They did point out that there is a 40% faster time to resolution because they've increased their support and SaaS service teams. Which is good, because their service was slow before. I have noticed that their support turnaround is probably the best it's been in years.

The looking forward part of their talk was interesting – it seems like they talked a lot about either 10.3 improvements (that were already announced last year, and available now), or stuff that we can’t see yet. Perhaps a chart:

10.3 features (already announced, available now):
  • Wiggio
  • Discussions (Grid View restore)
  • Binder – Windows 8 support
  • Quizzing UI/UX improvements
  • Content Notifications

Unreleased to the public:
  • Student Success System
  • LEaP – adaptive learning paths
  • Course Catalog (currently being used on Open Courses)
  • Visual Course Widgets (customization, I presume at a cost)
  • Built-in practice questions in Content (contextualized learning)
  • Gamification built into the Learning Environment (I assume 10.4/LSOne/Brightspace)

I suspect that the amount of talk about predictive modelling is something they want to build primarily for remedial use and for online courses. As a market strategy, that makes some sense: some of D2L's bigger clients are primarily online universities.

Blackboard Collaborate Integration with Desire2Learn, Uhh D2L, LE uhh Brightspace 10.3

I think I did that right?

Back in June we took a few weeks and integrated Blackboard Collaborate (our web conferencing tool) with our instance of the Learning Environment (Brightspace just doesn’t feel right). We are currently running 10.2 SP9 of the LE.

Reflections? Well, for such a simple integration (and really, the D2L interface is waaaaay better than the Blackboard Collaborate interface), it took a hell of a long time. We had to purchase and get D2L to install the IPSCT pack – so if you're entering into an agreement with D2L and may want to do this later, definitely spend the cash up front. From start to unveil, it was over six weeks – though that's not solid work on just this. After D2L installed the IPSCT pack, we had to contact Blackboard support to get our credentials. Seeing as we've had total turnover in who supports Blackboard Collaborate, our new Collaborate support person was not on the list of approved contacts – which is funny, because she's the one who does all the tickets. So we contacted our account manager. No response. It turns out that, well, they are no longer our account manager; that's why we hadn't heard from them in over nine months. Great. So support can't do anything, and neither can our phantom account manager. Finally we got to the bottom of who our new Blackboard account manager is, they straightened out the mess, and our person is now an approved contact. After that, it still took a week to get our credentials for test and prod.

Configuration on test went smoothly enough – if you've ever worked with External Learning Tools in the LE, it's the same as any other configuration in that tool: enter the address to make the connection, the secret key and password, check a few more boxes, and off you go. Now, everyone who gets enrolled in the LE gets a Default Role at the org level, and then gets assigned a more applicable role at the course offering level, which means for us you have to configure not only the Instructor, Student, and TA roles, but the Default Role as well. While this is a pain to do, it's also easy to forget to do – and that's what we promptly did. A day or two was spent tearing out what's left of my hair, until lightning struck and sparked the engine enough to get it firing again.

Fast forward a couple of weeks: we got some time to implement it on prod, and yet again forgot what we did to make it work. A week later we said something to the effect of "Fudge, the Default Role…", ran off to the LE, and fixed our error. Sometimes it's not the technology that fails you…