Extending D2L Reports Using Python

So before I get going on this long post, a little bit about me. I fancy myself a programmer. In fact, the first "real" job I had out of college (the second time I went through college; the first time through, my job was video store clerk) was programming a PHP/MySQL driven website in 2003-ish (on a scavenged Windows 2000 server running on an HP box). So I have some chops. Of course, as educational technology has moved away from being a multiple-skillset job, where a little programming, a little graphic design and a little instructional design were all part of what one did, it's been parsed out into teams. Anyways, I haven't had much of an opportunity to write any code for a long time. The last thing I did seriously was some CSS work on my website about four months ago, which doesn't really count.

With all that said, the LMS administration I do requires little programming skill; being an admin, while it has its perks, isn't exactly a technical job. I do recognize that some admins deal with enrollments and SIS integrations that require a whole skillset. I could do that work, but I don't.

As part of my admin job, I run monthly reports on how many items are created in D2L's ePortfolio – these reports include date and time, username and item type. We use this as a gauge of how active students are, and cross-referenced with other information it's incredibly useful: for example, it tells us that 22,383 items were created by 2,650 unique users in ePortfolio in November 2014 (actual numbers).

Using Excel, I can see whether the users in a given month have ever used ePortfolio before, so I can also track how many users are new to the tool per month. However, the one thing I can't easily look at is how frequently a user interacts with ePortfolio. With over 125,000 records in total, it's not something I want to do by hand. So I wrote a script. In Python. A language I don't know well.

If you want to avoid my lengthy diatribe about what I'm doing and skip right to the code, it's on GitHub here: https://github.com/jonkruithof/python-csv-manipulation-d2l-reports.

Basically the code does this:

Look at the username and see if it already exists in memory. If it does, find the time difference since that user's previous item, write that information to a CSV, and then overwrite the existing entry with this one. If it doesn't exist, write it to memory. Loop through, and at the end you'll have a large-ass CSV that you'd have to manipulate further to get some sense of whether users are frequently using the ePortfolio tool (which might indicate they see value in the reflection process). I do that in CSVed, a handy little Windows program that handles extremely large CSVs quite well. A quick sort by username (an improvement to the Python code coming sometime soonish) and we have something parseable by machine. I'm thinking of taking the differences in creation times and creating some sort of chart that shows the distribution of differences… I'm not sure what it'll look like.
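The loop above can be sketched roughly like this. This is a minimal sketch, not the GitHub script itself – the column names ("Username", "Created") and the timestamp format are assumptions about what the D2L report export looks like:

```python
import csv
from datetime import datetime

def time_between_items(in_path, out_path):
    """For each ePortfolio item, write the hours since that user's previous item."""
    last_seen = {}  # username -> datetime of that user's most recent item
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["username", "hours_since_last_item"])
        for row in reader:
            user = row["Username"]  # assumed column name
            created = datetime.strptime(row["Created"], "%Y-%m-%d %H:%M")  # assumed format
            if user in last_seen:
                # the user exists in memory: write the difference, then fall
                # through and overwrite the stored entry with this one
                delta = created - last_seen[user]
                writer.writerow([user, round(delta.total_seconds() / 3600, 2)])
            last_seen[user] = created
```

Because the dictionary keys track users individually, this works on an export that's only in chronological order – the sort by username is for making the output file human-scannable afterwards.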

Publisher Integrations with D2L

So I've tried to write this post several times. Tried to work up some semblance of why anyone would care about Pearson or McGraw Hill and integrating either publisher's attempt at crafting their own LMS into the one that our institution has purchased (at no small cost).

Then it hit me while going through Audrey Watters's great writing (in fact, go over there and read the whole damn blog, then come back here for a quick hit of my snark). The stuff that I've been challenging Pearson and McGraw Hill on is stuff that we should be talking about. I'll encapsulate my experiences with both because they tie together nicely this idea of giving private interests (e.g. business, and big business in this case) a disproportionate say in what happens in the classroom.

The concept is really twofold. One, why are we providing a fractured user experience for students? This disjointed, awkward meld is just a terrible experience between the two parties. Log in to the LMS, then log in again (once, after accepting terms and conditions that are, well, unreadable for average users), then go to a different landing page, find the thing I need to do and do it. Two, students are being held hostage for purchasing access (again!) to course activities that, in any sense of justice, would be available to them free. Compounding this is the fact that most of these assessments are marked and count towards their credit. And there's really no opt-out mechanism for students. Never mind the fact that multiple choice tests are flawed in many cases; letting the publisher decide how to assess, using the language of their choice, is downright ugly.

Pearson

So Pearson approached us to integrate with their MyLab solution about two years ago. We flatly said no; that request would have to come from a faculty member. Part of that is fed by my paranoia about integrations, especially when they require data transfers to work; part of it is that, frankly, any company can request access to our system if we allow that sort of behaviour. Finally a faculty member came forward and we went ahead with an integration between Pearson Direct and D2L. I will say that parties at D2L and Pearson were incredibly helpful; the process is simple and we had it set up in hours (barring an issue with using a Canadian data proxy, which needed some extra tweaking to get working correctly). The issue really is: should we be doing this? The LMS does multiple choice testing very, very well. MyLabs does multiple choice testing very, very well. Why are we forcing students to go somewhere else when the system we've bought for considerable sums of money does the very same thing well? Well, the faculty member wanted it. What we've found is that most faculty who use these sorts of systems inevitably find that students don't like jumping around for their marks.

Additionally, when the company has a long laundry list of problems with high stakes multiple choice testing, how does this engender faith in their system?

McGraw Hill

Again, all the people at McGraw Hill are lovely. None of them, it seems, answer any question about accessibility, security or anything else straight. That may be because they're overworked; that may be because they don't know. None of the stuff I've seen is officially WCAG compliant, however it may have been created prior to that requirement being in place, so they may get a pass on that one. The LTI connection is very greedy, requiring all the options to be turned on to function; even though D2L obfuscates any user IDs, and the payload bears no resemblance to an authenticated user (thanks, IMS inspector!), it all still needs to be there or else the connection fails. What kind of bunk programming is this? Why is it there if you don't use it? Why require it? Those questions were asked in that form in two separate conference calls, and most went unanswered. I did receive a white paper about their security, which did little to answer my direct questions about what happens to the data (does it travel in a secure format? who has access to this feed?) after it leaves D2L and is in McGraw Hill's domain.
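For context, here's roughly the shape of a basic LTI 1.1 launch. The parameter names come from the IMS LTI spec, but every value below is invented, and the tool-side check at the end is a hypothetical illustration of the "greedy" failure mode, where a tool rejects launches that are missing nominally optional fields:

```python
# Illustrative only: an LTI 1.1 launch payload with invented values.
# Note the user_id is an opaque token, not a real username -- which is
# what the IMS inspector showed for our D2L launches.
lti_launch = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "rl-120988",        # required by the spec
    "user_id": "a6d5c443-1f51-47a3-ae8b",   # opaque, obfuscated by D2L
    "roles": "Learner",
    "context_id": "course-offering-98765",  # optional per the spec
    "lis_person_name_full": "",             # optional PII field, left blank
    "oauth_consumer_key": "example-consumer-key",
}

# The greedy failure mode: a hypothetical tool that treats optional
# fields as mandatory and rejects any launch where one is blank.
required_by_tool = {"user_id", "roles", "context_id", "lis_person_name_full"}
missing = required_by_tool - {k for k, v in lti_launch.items() if v}
launch_accepted = not missing  # False here: the blank name field sinks it
```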

Now I'm paranoid about data and private companies. I immediately think that if you're asking me to hand over data that I think is privileged (here's a hint: it's all privileged), you should be able to at least answer why, in the name of all that is unholy, you deserve to have access to it, and if I agree to give it to you, what happens to that data when it leaves my system and sits on yours. That should be easy to answer and not require any sort of thought. It makes me wonder if anyone is asking these questions at all. They must? I mean, everyone is thinking about this sort of stuff, right?

D2L Regional Conference (Ontario) 2014 Recap

Thought I published this in late October, but I guess not. Whoops.

So this conference was all about ePortfolio and the Competencies tool – every presentation I went to had something to do with one of those two topics. If neither is interesting to you, well, this'll be a boring recap.

I took part in a focus group around course-level analytics – I don't know if D2L got much out of our conversations, but they were good ones. I hope they heard that the analytics tools are good at what they do, but aren't robust enough for the complexity of the task.

On to the presentations.

Tracking Overall Expectations in Brightspace

This presentation was by the Hamilton Wentworth District School Board administrators who are using the Competencies tools to track outcomes. It's interesting that they use the language of learning goals (as a proxy for outcomes) and success criteria (as a proxy for activities). That language is much clearer for K-12, but would probably be a bit bristly for higher education. John Baker was in the meeting and suggested that there will be a D2L tool to copy outcomes from the standards source (whether that be a District School Board, a provincial standard or other). If that comes to fruition, I could see the CEAB (Canadian Engineering Accreditation Board) lining up to get their standards in, which would be a big boon for D2L clients who have Engineering departments. They also shared their videos for how to use the Competency tool.

At lunch I was sitting with a friend and John Baker came by, and I asked him about future improvements to ePortfolio – as we’ve been struggling with the presentation creation aspect of the tool. He mentioned some things that they were working on, specifically learning goals (unfortunately I couldn’t ask him more about this) and automatic page creation based on tags.

D2L ePortfolio and All About Me

Durham CDSB use ePortfolio as a way for students to think about where they are going. Specifically these are K-6 students, so elementary students for those who don’t reside in Canada. It seems the program is all about identity forming and goal setting. Interesting aspects, and the approach is pretty much how you’d expect. Students are prompted to answer questions about their future, and they periodically answer the questions over the six years. One key thing is that they’ve developed a workflow page, which I think goes a long way to helping students with their tool use. Another interesting aspect is the teacher is selecting artifacts for the student, putting them into a collection and then sharing that collection with the student.

Learning Portfolio: Year One

This was my co-presentation with Catherine Swanson, the Learning Portfolio Program Manager. We talked about some of the challenges of introducing a portfolio process across all disciplines and in all facets of campus, and how we did. As an institution, we did fine. In many of the cases where the ePortfolio tool was used in conjunction with a good use in a course (particularly where one has to make a choice or work through a messy process to "learn"), it was ultimately successful. Where the purpose was less directed, more fuzzy, well, things didn't work as well.

 

AAEEBL Conference Notes – Day Three

And on the third day, we rested… well, not really. We heard about and connected with some people at Deakin University who are using the D2L ePortfolio tool in a collaborative way. Collaboration is a real limitation that we've run into: even if the permissions on a shared item are set to "edit", a presentation is still not editable. It's a weird problem that seems to stump a lot of us, so it'll be nice to talk to the people at Deakin who are using ePortfolio in collaboration.

ePortfolio Meets Peer Mentoring

At McMaster, we're entering year two of the Learning Portfolio, and one of the hopes is that students who used it in year one continue to use it in year two, as well as mentor new students and offload some of the technical support that we do. Mercy College have a stipend for their peer mentors, who have to complete bi-weekly portfolio tasks which are submitted for assessment against the learning outcomes of the program (which include leadership, and personal and professional development). It's an interesting scenario because our student success office has something similar and is not running into the time-investment problems Mercy College exhibited around getting mentors to put stuff into their ePortfolio platform (Mercy College uses Digication).

Evaluating Artifact Selection to Improve EPIC Initiatives at Wentworth Institute of Technology

The first question on my mind was: what's EPIC? Turns out EPIC is External, Problem-based, Interdisciplinary Curriculum. What they found was that students added anything that was graded above a B and was related to their major. Students found that there were three top motivators to keep things up to date – jobs, professors and completion of the course. The really interesting thing is that they formed a student-driven committee – basically an ePortfolio Advisory Board. I think this is something that we've missed here at McMaster.

ePortfolio and the Whole Learner: Integrating Curricular, Co-Curricular and Experiential Learning

Guttman Community College's Laura Gambino presented this session, which was essentially a very specific use case – Guttman's curriculum was designed in a unique way, tailored to the specifics of their student body. This includes a lot of experiential learning, curriculum tied to community-based learning, and a first year curriculum very tailored to the Guttman learning outcomes. They use the ePortfolio for integrating learning between the courses – to help transfer and connect knowledge. The first year curriculum is based around a theme like Urban Ecology, and the ePortfolio helps connect the specific course curriculum to that theme by collecting reflections on the theme and documenting community-based learning days. To "standardize" ePortfolio marking, Guttman has Assessment Days to look through portfolios, review processes and use ePortfolio as an LMS itself.

Future of Technology in Higher Education

Bryan Alexander's keynote was a walk through potential futures of education – depending on how you read what the trends suggest about where we might be heading. I can't really recap the key points because it was something you should really experience as a whole event. I will say that many of the scenarios were dire for education as we know it. That's not necessarily bad; it suggests that we might be in for some turbulent times. The best quote of the day was during the keynote:

Social media will become media; media that isn't social will be asocial… like Blackboard.

I'm not sure that this is entirely true; I have a bit of a different view of new media, in that it doesn't replace or supplant existing media – it diminishes the prominence of the previous media structure. Social media is diminishing phone calls, letters, e-mail and that sort of long-form personal communication. If we look at previous media shifts, TV didn't replace radio, DVD/VHS didn't replace film and so on. Where new media does replace existing media (digital photography replacing commercial film photography, the telephone replacing the telegraph), it's really the same media mode – while social media is a multimedia thing: pictures, video, text and anything else people create and share. Maybe a slight criticism, but I get what Bryan was getting at: social media will become so ubiquitous that it's just media. There will be very little differentiation between professional media created for consumption and amateur-created media, as the quality and accessibility of professional-grade creation tools improve.

AAEEBL Conference Notes – Day Two

Day two started as a sleepy morning in Boston – coffee and a bagel helped a fair bit. I was still jazzed over some of the ideas shared the previous day, so that excitement began to sink into where my brain goes next: how to do this badging server at my place. Trent Batson said at one of the breaks, "The beauty of badges is the metadata behind the badge." Now a recap of the sessions I attended.

Lessons Learned – Mobilizing an Institution to Embrace ePortfolios to Measure Essential Learning Outcomes

Before this session even started, I was skeptical. We all know that marking using ePortfolios takes longer, so how does one actually get the entire institution to take on that extra work? I mean, there's a reason faculty use multiple choice, scantron-style assessment methods, right? It's faster to mark. Well, this session didn't totally answer the question, but it did mention two key concepts that the entire conference revolved around – learning outcomes (competencies or objectives, depending on your local syntax) and curriculum change from teacher-focused to student-focused. Faculty and students were looking for a better experience, and the only way to get it was through curriculum change. Stockton College took four years to identify the correct outcomes (at the institutional level), gain consensus, map out the connections at the institution, course and program levels, create an assessment plan and select an ePortfolio platform (Blackboard and Digication).

Digital Connections: From College to Career

This presentation was about ePortfolio use in the Business Administration program at Tunxis Community College – many of the same lessons we've learned in our first year: you need to make it worth something, and you need to integrate other elements of academic life into the ePortfolio process. It's also nice to see that students there resist the ePortfolio stuff but recognize the value at the end, when they see their own growth.

Iterating on the Academic Transcript: Linking Outcomes to ePortfolio Evidence

This session was about working on the lack of information in a traditional transcript. Stanford University has flipped the transcript – it presents information along the outcomes that were in the program. So it would list the outcome (say, something like ethical reasoning) and list the courses that assess that outcome and the result of that course. ePortfolios sit beside this process as an unofficial record of what was achieved. Drexel University is currently in the process of doing this – except they have things called student learning priorities, which are measured from three areas: co-curricular, curricular and co-op. I see a lot of McMaster’s Learning Portfolio initiative in Drexel’s approach.

Cultivating Learning Cultures: Reflective Habits of Mind and the Value of Uncertainty

"I wanted to hijack the eportfolio to be about the learning process." – Kathy Takayama

This keynote was one of those talks that sits with you for a while. I really had to think about this one, so my notes are not that great… but the gist of it was that ePortfolios are learning spaces for metacognition. By using open language (like "so far" and "at this time") and integrating the language of uncertainty into the course (and, obviously, the requirements), Kathy opened a pathway for students to develop metacognition about their own learning.

Design Thinking: Digital Badges and Portfolios

This was a workshop, which engaged us in thinking about developing a series of badges that could be offered at our institutions, and give that framework some thought. We primarily worked with the Jisc Open Badge Design Kit. We did have to consider what goes into a badge to make it powerful – what we came up with was that the criteria is part of the badge, and that students provide evidence of “earning” or achieving that badge. I’ll write something in the near future about this, but we’ll be exploring this area in the next few months as our MacServe program wants to issue badges, and I have to make it happen!

AAEEBL Conference Notes – Day One

As part of my personal effort to broaden my scope beyond just LMS work (which is what I've done for the last three years or so), I'm trying to attend all sorts of different educational technology related conferences. A lot of the work I'm doing sits at this weird intersection of outcomes-based assessment and evidence, which the ePortfolio community really understands. Coincidentally, McMaster has this Learning Portfolio initiative going on. Strange how these things line up? So going back a few months, Tracy Penny Light was working with McMaster as a visiting professor on how we can better integrate the Learning Portfolio around campus, as well as how McMaster can start participating with ePortfolio campuses around the world – one of the suggested ways to start was to attend AAEEBL's annual conference.

So after the brilliant people at Passport Canada got me an emergency passport (I had left mine in a car service the week before – and the car service was in Buffalo, and unable to return my passport), off to Boston I went. The sessions I was in were typically 20 minutes, so the notes won’t be as extensive as what I did for Fusion. Here’s my notes:

ePortfolio in Study Abroad: A Model for Engaged Intercultural Learning

A couple of interesting ideas – Indiana University Purdue offers 70+ study abroad programs, and ePortfolio use in those courses is widespread across disciplines (Humanities, Biology and Liberal Arts were mentioned specifically). These study abroad programs are aligned with graduation outcomes – I didn't catch whether or not they were assessed for graduation, but certainly they could be. I wonder what a university or college would be like if it built in some experiential component that required students to document what they've learned, and show evidence of that learning as part of graduating?

ePortfolios in Graduate Education – Developing Critical Reflection and Lifelong Learning

Athabasca University uses ePortfolios at the Master’s level in their Distance Education program to assess for PLA (prior learning assessment) and as a program long piece. One of the big takeaways was that they have to work really hard to steer students away from a showcase style ePortfolio, to a more reflective critical practice portfolio. I wonder if this is the end goal for us, to have users engage in this critical practice, do we have to get away from the showcase style stuff we’re doing already? Or can we accept that cognitive dissonance, and really push students to use the Learning Portfolio for more than one reason? That is going to be a tough task.

Keynote: Catalyst For Learning

So I've heard about the Catalyst for Learning website; it comes up fairly frequently in ePortfolio circles, and it really is a valuable resource. Some interesting ideas were brought up during the keynote – the one that really resonated was the preliminary research suggesting that ePortfolio use on campus can aid retention by 20%, which is a huge number. Another was the sub-site on the use of ePortfolio as a curriculum change agent. The keys for success in implementing ePortfolios are to find opportunities that use inquiry and reflection, and to make the ePortfolio component of those teaching acts meaningful (beyond grades).

A small portion of the keynote was spent on scaling up – and that's something that I've struggled with getting my head around. There's the typical connect-ePortfolio-use-to-institutional-goals, engage students (well, duh!), but two of the scalability points resonated and bode well for what we've done at McMaster. The first was "develop an ePortfolio team" – which I think we've done very well. We're forming a Learning Portfolio advisory committee, which will include students and student government as well as faculty and staff. The second was really nice to hear, and that was "make use of evidence of the impact of ePortfolio use". That's the stuff that we're digging into this year.

Building a Sound Foundation: Supporting University Wide Learning Portfolios

My name in blurry lights.


This was my presentation about the technical supports that we put in place to support campus-wide ePortfolio use. We did some informal data collection around the effectiveness of the stuff we built – what students used – and basically the resources I expected to be used heavily were used less than the stuff I felt would not be used. Basically, I'm a bad judge of what people will use.

Two tough questions I got that I couldn't answer: Is there evidence that faculty attendance in the workshops helps delivery? Does faculty taking the workshop filter down to the students?

Make It Do What It Do: The Intersection of Culturally Relevant Teaching, Digital Rhetoric in Freshman Writing Classrooms

I will say, this pairing of presentations was definitely the odd couple of the conference. What is not astounding is that this session blew my mind. I was drawing comparisons to all sorts of communication theory, Walter Ong's oral tradition, cultural studies, bell hooks, Public Enemy songs… just a cornucopia of stuff firing off. Also, the quote of the conference right here:

“Uploading Word Documents to a predefined template emulates a violent history of technology that reinforces existing power paradigms”

So what was my takeaway from all this? Being a white dude, I have to remember that this technology and initiative comes from a white dude perspective. How do we diversify this initiative in a way that is respectful and not tokenizing? I guess there’s some element of diversification of ePortfolios – remembering that they are not some panacea, but come from a specific perspective, and while they may be used by any person, the pedagogy that surrounds them is almost certainly from a particular perspective.

How to Design an Assessment Program Using an ePortfolio: Linking Mission to Learning

While this session was stacked at the end of what was an exhausting day, it reinforced a lot of the things that we’re doing at McMaster: ePortfolios allow a channel of communication between institution and student, data from the assessment of ePortfolios (program oriented ePortfolios) aren’t useful for deep analysis, but can reveal opportunities for curriculum improvement, and rubrics used to assess ePortfolios can be linked to program level outcomes.

 

Badging

I've been involved, somewhat peripherally, with the Open Badging Initiative for the last six months or so. Initially, it was a way to start thinking about breaking the LMS (Integrated Learning Platform? aw, screw it, I don't know what the thing is called anymore) out of the box it's in, and communicating what the LMS does well to other parties. I thought it could be a way to communicate skills – to develop a short-hand language through the badge to communicate with other people. It's really a way to check all the boxes that get me excited currently. Open standards? Yep. Mutating a system to do something other than what was intended? Yep. Visually designing an image that communicates a value to another party? Yep. Exploring the value of a systematic education? Yep.

The problem is that I essentially stopped programming in 2004, when I really didn't need it anymore. Sure, I've done a few things since, like hacking together a Perl script to parse out values in a text file and dump them into a database, but to use badges at my institution I need to get up to speed with programming and with handling JSON and XML if I'm going to start tinkering with our LMS and implementing badges. Ouch. Thankfully, I've got a few friends and colleagues who'll help me get there.

For those of you who don't know, badging is a way of giving value to something by awarding an image that represents that value. At its simplest, it works like the Scouts – demonstrate that you know something and get a badge for it. It's basically the same proposition as grades in higher education. The neat thing is that the badge doesn't have to be tied to a number that's arbitrarily set by someone (a teacher) or something (a computer, a schooling system…). It can be tied to evidence or not, depending on the issuer of the badge and what they demand for earning it. That's where badging is cool for me.

When you earn a badge that conforms to the Open Badges Standard, it can be pushed to your backpack. This is the central repository of badges for you. I’ve embedded below a portion of my backpack for you to see how one might display achievements.

What makes badges a little better than a grade value is the evidence of learning that's listed as part of the criteria. Now, in many cases this is not as transparent as it should be. For instance, I've been working through CodeSchool's JavaScript introduction and jQuery courses, which issue badges. Their criteria display on a page that "confirms" I completed a module. Wouldn't this page be much better if it shared exactly what I did to earn the badge? That would be powerful. I realize that there are all sorts of considerations for student privacy, and ideally students would be able to control what is shared with the badge (maybe an on/off switch with each badge issuer to allow for a simple description of what earned the badge, or a more detailed results page). That might keep badges from being merely a symbol of learning that doesn't communicate clearly to the viewer what was learned.
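To make the metadata point concrete, here's a sketch of an Open Badges-style assertion as a Python dict. The field names follow the Open Badges (1.x) spec, but all the URLs and values below are invented. The point is that evidence is a first-class field in the standard, so an issuer could link exactly what earned the badge instead of a bare completion page:

```python
# Illustrative only: an Open Badges 1.x-style assertion with invented values.
assertion = {
    "uid": "badge-0001",
    "recipient": {
        "type": "email",
        "hashed": True,      # e-mail is salted and hashed so the badge can be public
        "salt": "deadsea",
        "identity": "sha256$...",  # hash of salt + e-mail (elided here)
    },
    # The badge class lives at its own URL and carries the criteria,
    # description and issuer -- the metadata behind the badge.
    "badge": "https://example.edu/badges/js-intro.json",
    "verify": {
        "type": "hosted",
        "url": "https://example.edu/assertions/badge-0001.json",
    },
    "issuedOn": "2014-11-15",
    # This is the field I'd like issuers to actually populate: a link to
    # the work that earned the badge, not just a completion page.
    "evidence": "https://example.edu/evidence/badge-0001",
}
```

A backpack or viewer that honoured the `evidence` field could then show the learning itself alongside the image, which is the transparency argued for above.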

The Flipped Classroom Meets a MOOC

This feels like some sort of joke – a flipped classroom walks into a bar and starts talking to a MOOC….

Here’s the quote:

“The use of videos with quizzes puts the learning directly in students’ hands,” said Gries. “If they don’t understand a concept, they can re-watch the videos and take the quizzes again. This allows students to learn at their own pace. It is exciting.”

I’m almost positive that was taken out of context, but if not, watching a video again, or taking a quiz again, in and of itself is not learning. It’s not good learning design, it’s not good thinking, it’s just drill and repeat in another form. It’s drill and repeat, at the student’s command…

I’m purposefully ignoring the good stuff that happens in this article, like the 8% grade increase (which is being attributed to the flipped model, but could just be the students in the class are better than the previous years, maybe the class size was smaller, leading to better instructional opportunities…). I also find it curious that the University of Toronto could find no student to add their voice to this puff piece. Undoubtedly, something is here – but what it is really isn’t shared in this piece. It will be interesting to see if any research is accomplished on this over the next few years.

This isn’t the first time that people have repurposed Coursera or other xMOOC platform content as elements of a flipped classroom. It strikes me that if this approach takes hold, all education has done with these MOOCs is that they’ve created another set of publishers with a repository of content. If so, the future of MOOCs (as content repositories) is pretty grim.

Fusion 2014 – Day Three Recap

After a long night and an early morning, it was surprising that I was as functional as I was. At this point in the conference, I realized I'm just human, can't do the million things I want to do in a city and at a conference, and should just relax. Instead I had a great conversation/rant about ePortfolio with some of the guys who actually develop the product. I told them about the long-term vision I'd have for the product (including learning goals that students can assess themselves on, with rubrics they design, with outcomes that they determine and assess) and some seemingly short-term improvements (like taking the improvements they made in 10.0 to the creation of homepages in courses, and grafting them onto the ePortfolio presentation tool – which seems like such a no-brainer). We'll see if my enthusiasm for getting the presentation-building piece of ePortfolio fixed is any help. I also intended to attend an 8:00 AM session, but felt that a run-through of our presentation might be time better spent.

I’d like to hear about the Executive Platform Panel if anyone got up at 8:00 to share their story of what they heard.

Implementing Learning Outcomes and Collecting Insightful Data in Higher Education

My co-worker Lavinia and I co-presented this one, and again it went well. This time it was a more traditional presentation, which I’ve embedded below:

We got some really interesting questions, and got into the nitty-gritty of Competencies in D2L. I still have a huge issue with the language of “competency” (does not achieving a competency mean that you are incompetent?), and I suppose that could be addressed with language packs if I were clever enough to think of what the structure should be called.

Expanding Widget Functionality Using Desire2Learn Interoperability Tools

Lone Star College uses jQuery UI (which works with LE 10.1, but not 10.3) and JavaScript with inline CSS to alter widget functionality. You can use D2L replace strings ({OrgUnitId}, {RoleId}) to change the content of a widget depending on who is viewing it. They found that the most taxing part was combing through the Inspect Element panel in Chrome or Firefox to determine what a given element was named. The Valence documentation will include this information going forward, which will be really helpful and speed up development time.
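To make the replace-string idea concrete, here is a minimal sketch of the kind of script a custom widget might carry. D2L substitutes replace strings like {OrgUnitId} and {RoleId} server-side before the widget HTML reaches the browser; the function and URL below are hypothetical, not anything from the D2L or Valence APIs.

```javascript
// Hypothetical widget helper: build the URL for an embedded external form so
// it knows which course (org unit) and which role is viewing the widget.
// In the actual widget source, the literals would be the replace strings
// themselves, e.g. buildRequestUrl("https://example.edu/form", "{OrgUnitId}",
// "{RoleId}"), which D2L substitutes before the page is served.
function buildRequestUrl(baseUrl, orgUnitId, roleId) {
  // Compose a query string carrying the course and viewer-role context
  const params = new URLSearchParams({ orgUnitId, roleId });
  return baseUrl + "?" + params.toString();
}

// After substitution, the browser would effectively run something like:
const url = buildRequestUrl("https://example.edu/form", "12345", "678");
```

The point is that the substitution happens before your JavaScript runs, so the script itself just sees ordinary literal values.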

Lone Star College also used the custom variables feature in External Learning Tools to pass the content of the replace strings to their project. They use it to embed request forms that are typically hosted outside the system – our ticketing system would be a local example. D2L CSS classes will also be made available to developers, so that custom developments can look more like native D2L solutions.
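On the receiving end, the LTI 1.1 spec says custom variables arrive in the launch POST prefixed with "custom_" and lowercased. A hedged sketch of pulling those out of a launch payload (the parameter names and payload shape here are illustrative, not from Lone Star’s actual tool):

```javascript
// Hypothetical sketch: collect the "custom_" parameters from an LTI 1.1
// launch payload, stripping the prefix so the tool gets clean names.
function extractCustomParams(launchParams) {
  const custom = {};
  for (const [key, value] of Object.entries(launchParams)) {
    if (key.startsWith("custom_")) {
      custom[key.slice("custom_".length)] = value;
    }
  }
  return custom;
}

// Example launch payload as the tool provider might see it, with the
// replace-string values D2L filled in on the way out:
const payload = {
  lti_message_type: "basic-lti-launch-request",
  custom_orgunitid: "12345",
  custom_roleid: "678",
};
const custom = extractCustomParams(payload);
```

From there the external form can prefill or route the request based on the course and role it was launched from.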

Continuous Delivery Model

So, like everyone else who’s a D2L admin, it strikes me that Continuous Delivery might be the best thing ever, or the worst. I guess we’ll find out next August. Monthly releases will occur on the first Tuesday of the month, followed by five application dates: the first two waves apply the update to test instances, the last three apply it to production. So if you are scheduled for wave 2 and wave 3, you’ll have a week with the update on your test instance. As an admin you are locked into your waves except in odd circumstances. Honestly, I don’t particularly like that idea, but it’s what’s happening. I guess I’ll have to see how other schools feel about the upgrade paths.

The language around the Continuous Delivery Model is changing as well: an Upgrade is your last upgrade, the one to LSOne/10.4, while Updates are what happen under the Continuous Delivery Model. Similarly, Release Updates are more of a roadmap (what’s coming and when it’s expected to impact you), while Release Notes cover what’s changing that particular month and will be released three weeks prior to the application date.

I wasn’t any more scared of this than I was before. As things stand, when we go through an upgrade I spend two months poring over documentation, distilling it down to four or five pages, rewriting it for our user group, then pushing it out, applying the update, and training people on it. While major changes will be visible up to 12 months in advance, and are up to the admins to enable, it strikes me that this changes training and local documentation significantly for us, because we’ve customized our instance, particularly around language use, pretty heavily. I’m not asking D2L to do our documentation work – they’ve even offered all clients their own videos – but I suspect I’m going to get a whole lot busier with the Continuous Delivery Model. We’ll see.

Closing Keynote – LeVar Burton

I am not a Trekkie and don’t really care about celebrities, but I enjoyed LeVar’s talk, although I struggled to make a connection between Reading Rainbow’s move from public resource to private enterprise and D2L. Talk about a missed opportunity: John Baker could have come out at the end and said, “all D2L clients have access to Reading Rainbow’s library of video assets starting now.” I wasn’t let down, per se, but it was a pillow-soft ending to a good conference. When the gold standard of this sort of thing is Apple announcing that all OS upgrades are free, you need that kind of oomph.

Actually, that wasn’t my end to the conference; that was a beer with my favourite D2L’er, Barry Dahl. Of course, losing my passport was another story…