AAEEBL Conference Notes – Day One

As part of my personal effort to broaden my scope beyond just LMS work (which is what I've done for the last three years or so), I'm trying to attend all sorts of different educational-technology conferences. A lot of the work I'm doing sits at this weird intersection of outcomes-based assessment and evidence, which the ePortfolio community really understands. Coincidentally, McMaster has a Learning Portfolio initiative going on. Strange how these things line up. So, going back a few months, Tracy Penny Light was working with McMaster as a visiting professor on how we can better integrate the Learning Portfolio across campus, and on how McMaster can start participating with ePortfolio campuses around the world – one of the suggested ways to start was to attend AAEEBL's annual conference.

So after the brilliant people at Passport Canada got me an emergency passport (I had left mine in a car service the week before – and the car service was in Buffalo, and unable to return my passport in time), off to Boston I went. The sessions I attended were typically 20 minutes long, so the notes won't be as extensive as what I did for Fusion. Here are my notes:

ePortfolio in Study Abroad: A Model for Engaged Intercultural Learning

A couple of interesting ideas here – Indiana University–Purdue offers 70+ study abroad programs, and ePortfolio use in those programs is widespread across disciplines (Humanities, Biology and Liberal Arts were mentioned specifically). These study abroad programs are aligned with graduation outcomes – I didn't catch whether or not they were assessed for graduation, but certainly they could be. I wonder what a university or college would be like if it built in some experiential component that required students to document what they've learned, and to show evidence of that learning as part of graduating?

ePortfolios in Graduate Education – Developing Critical Reflection and Lifelong Learning

Athabasca University uses ePortfolios at the Master's level in their Distance Education program to assess for PLA (prior learning assessment) and as a program-long piece. One of the big takeaways was that they have to work really hard to steer students away from a showcase-style ePortfolio toward a more reflective, critical-practice portfolio. I wonder: if the end goal for us is to have users engage in this critical practice, do we have to get away from the showcase-style stuff we're doing already? Or can we accept that cognitive dissonance and really push students to use the Learning Portfolio for more than one reason? That is going to be a tough task.

Keynote: Catalyst For Learning

So I've heard about the Catalyst for Learning website before – it comes up fairly frequently in ePortfolio circles – and it really is a valuable resource. Some interesting ideas came up during the keynote. The one that really resonated was preliminary research suggesting that ePortfolio use on campus can aid retention by 20% – a huge number. Another was the sub-site on the use of ePortfolios as a curriculum change agent. The keys to success in implementing ePortfolios are to find opportunities that use inquiry and reflection, and to make the ePortfolio component of those teaching acts meaningful (beyond grades).

A small portion of the keynote was spent on scaling up – and that's something I've struggled to get my head around. There's the typical "connect ePortfolio use to institutional goals" and "engage students" (well, duh!), but two of the scalability points resonated and bode well for what we've done at McMaster. The first was "develop an ePortfolio team" – which I think we've done very well. We're forming a Learning Portfolio advisory committee that will include students and student government as well as faculty and staff. The second was really nice to hear: "make use of evidence of the impact of ePortfolio use". That's the stuff we're digging into this year.

Building a Sound Foundation: Supporting University Wide Learning Portfolios

My name in blurry lights.

This was my presentation about the technical supports we put in place for campus-wide ePortfolio use. We did some informal data collection around the effectiveness of the stuff we built – what students used – and the resources I expected to be popular were used less than the ones I thought wouldn't be used at all. Basically, I'm a bad judge of what people will use.

Two tough questions I got that I couldn't answer: Is there evidence that faculty attendance at the workshops helps delivery? Does faculty taking the workshops filter down to the students?

Make It Do What It Do: The Intersection of Culturally Relevant Teaching, Digital Rhetoric in Freshman Writing Classrooms

I will say, this pairing of presentations was definitely the odd couple of the conference. Unsurprisingly, this session blew my mind. I was drawing comparisons to all sorts of communication theory – Walter Ong's oral tradition, cultural studies, bell hooks, Public Enemy songs… just a cornucopia of stuff firing off. Also, the quote of the conference right here:

“Uploading Word Documents to a predefined template emulates a violent history of technology that reinforces existing power paradigms”

So what was my takeaway from all this? Being a white dude, I have to remember that this technology and initiative comes from a white dude perspective. How do we diversify this initiative in a way that is respectful and not tokenizing? I guess there’s some element of diversification of ePortfolios – remembering that they are not some panacea, but come from a specific perspective, and while they may be used by any person, the pedagogy that surrounds them is almost certainly from a particular perspective.

How to Design an Assessment Program Using an ePortfolio: Linking Mission to Learning

While this session was stacked at the end of what was an exhausting day, it reinforced a lot of the things we're doing at McMaster:

  • ePortfolios allow a channel of communication between institution and student
  • data from the assessment of (program-oriented) ePortfolios isn't useful for deep analysis, but can reveal opportunities for curriculum improvement
  • rubrics used to assess ePortfolios can be linked to program-level outcomes

Badging

I've been involved, somewhat peripherally, with the Open Badges Initiative for the last six months or so. Initially, it was a way to start thinking about breaking the LMS (Integrated Learning Platform? Aw, screw it, I don't know what the thing is called anymore) out of the box it's in, and communicating what the LMS does well to other parties. I thought it could be a way to communicate skills – developing a shorthand language, through the badge, to communicate with other people. It's really a way to check all the boxes that get me excited currently. Open standards? Yep. Mutating a system to do something other than what was intended? Yep. Visually designing an image that communicates a value to another party? Yep. Exploring the value of a systematic education? Yep.

The problem is that I essentially stopped programming in 2004, when I really didn't need it anymore. Sure, I've done a few things since, like hacking together a Perl script to parse values out of a text file and dump them into a database, but to use badges at this point – at least at my institution – I need to get back up to speed with programming, and with handling JSON and XML, if I'm going to start tinkering with our LMS and implementing badges. Ouch. Thankfully, I've got a few friends and colleagues who'll help me get there.

For those of you who don't know, badging is a way of giving value to something by awarding an image that represents that value. At its simplest, it works like the Scouts – demonstrate something and get a badge for demonstrating that you know it. It's basically the same proposition as grades in higher education. The neat thing is that the badge doesn't have to be tied to a number that's arbitrarily set by someone (a teacher) or something (a computer, a schooling system…). It can be tied to evidence or not, depending on the issuer of the badge and what they demand for earning it. That's where badging gets cool for me.

When you earn a badge that conforms to the Open Badges Standard, it can be pushed to your backpack. This is the central repository of badges for you. I’ve embedded below a portion of my backpack for you to see how one might display achievements.

What makes a badge a little better than a grade value is the evidence of learning listed as part of the criteria. In many cases, though, this is not as transparent as it should be. For instance, I've been working through CodeSchool's JavaScript introduction and jQuery courses, which issue badges. Their criteria display as a page that "confirms" I completed a module. Wouldn't this page be much better if it shared exactly what I did to earn the badge? That would be powerful. I realize that there are all sorts of considerations for student privacy, and ideally students would be able to control what is shared with the badge (maybe an on/off switch with each badge issuer to allow either a simple description of what earned the badge or a more detailed results page). That might make badges more than a symbol of learning that doesn't communicate clearly to the viewer what was learned.
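
To make the evidence idea concrete, here's a minimal Python sketch of reading an Open Badges-style assertion and checking whether evidence is attached. The field names follow my understanding of the Open Badges assertion format, and all of the values (URLs, e-mail address) are invented for illustration:

```python
import json

# A sample Open Badges-style assertion. The field names follow the Open
# Badges assertion format as I understand it; all values are invented.
assertion_json = """
{
  "uid": "abc123",
  "recipient": {"type": "email", "hashed": false,
                "identity": "learner@example.com"},
  "badge": "https://example.com/badges/intro-javascript.json",
  "issuedOn": 1406505600,
  "evidence": "https://example.com/evidence/abc123",
  "verify": {"type": "hosted",
             "url": "https://example.com/assertions/abc123.json"}
}
"""

def summarize_assertion(raw):
    """Pull out what a viewer actually cares about: who earned it,
    which badge it is, and whether any evidence is attached."""
    a = json.loads(raw)
    return {
        "badge": a["badge"],
        "recipient": a["recipient"]["identity"],
        "has_evidence": bool(a.get("evidence")),
    }

summary = summarize_assertion(assertion_json)
print(summary)
```

A viewer-facing display could use `has_evidence` to decide whether to show a "see what was done" link – which is exactly the transparency I'd like CodeSchool-style issuers to offer.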

The Flipped Classroom Meets a MOOC

This feels like some sort of joke – a flipped classroom walks into a bar and starts talking to a MOOC….

Here’s the quote:

“The use of videos with quizzes puts the learning directly in students’ hands,” said Gries. “If they don’t understand a concept, they can re-watch the videos and take the quizzes again. This allows students to learn at their own pace. It is exciting.”

I'm almost positive that was taken out of context, but if not: watching a video again, or taking a quiz again, in and of itself is not learning. It's not good learning design, it's not good thinking – it's just drill and repeat in another form. Drill and repeat, at the student's command…

I'm purposefully ignoring the good stuff that happens in this article, like the 8% grade increase (which is being attributed to the flipped model, but could just be that this year's students are better than previous years', or that the class size was smaller, leading to better instructional opportunities…). I also find it curious that the University of Toronto could find no student to add their voice to this puff piece. Undoubtedly, something is here – but what it is really isn't shared in this piece. It will be interesting to see if any research is done on this over the next few years.

This isn't the first time people have repurposed Coursera or other xMOOC platform content as elements of a flipped classroom. It strikes me that if this approach takes hold, all education has done with these MOOCs is create another set of publishers with a repository of content. If so, the future of MOOCs (as content repositories) is pretty grim.

Fusion 2014 – Day Three Recap

After a long night and an early morning, it was surprising that I was as functional as I was. At this point in a conference, I have to realize I'm just human, can't do the million things I want to do in a city and at a conference, and should just relax. Instead, I had a great conversation/rant about ePortfolio with some of the guys who actually develop the product. I told them about my long-term vision for the product (including learning goals that students can assess themselves on, with rubrics they design and outcomes they determine and assess) and some seemingly short-term improvements (like taking the improvements made in 10.0 to the creation of course homepages and grafting them onto the ePortfolio presentation tool – which seems like such a no-brainer). We'll see if my enthusiasm for getting the presentation-building piece of ePortfolio fixed is any help. I had also intended to attend an 8:00 AM session, but felt that a run-through of our presentation might be time better spent.

I’d like to hear about the Executive Platform Panel if anyone got up at 8:00 to share their story of what they heard.

Implementing Learning Outcomes and Collecting Insightful Data in Higher Education

My co-worker Lavinia and I co-presented this one, and again it went well. This time it was a more traditional presentation, which I’ve embedded below:

We got some really interesting questions and got into the nitty-gritty of Competencies in D2L. I still have a huge issue with the language of "competency" (does not achieving a competency mean that you are incompetent?), and I guess that could be addressed with language packs, if I were clever enough to think of what the structure should be called.

Expanding Widget Functionality Using Desire2Learn Interoperability Tools

Lone Star College uses jQuery UI (which works with LE 10.1, but not 10.3) and JavaScript with inline CSS to alter widget functionality. You can use D2L replace strings ({OrgUnitId}, {RoleId}) to change the content of a widget depending on who is viewing it. They found that the most taxing part was combing through the Inspect Element panel in Chrome or Firefox to determine the name of a given item. The Valence documentation will be releasing this information, which will be really helpful and should speed up development time.

Lone Star College also used the custom variables in External Learning Tools to pass values – basically the content of the replace strings – to the integrated project. They use this to embed request forms that are typically hosted outside the system – our ticketing system would be a local example. D2L CSS classes will also be made available to developers, so that custom developments can look more like D2L solutions.
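
A toy model of the replace-string mechanism, for my own reference. In the real system the LE does this substitution server-side before the widget or LTI launch renders; the template URL and viewer data below are invented:

```python
# Toy model of D2L replace-string substitution. The real LE swaps
# {OrgUnitId}, {RoleId}, etc. server-side before the widget renders;
# this just illustrates the mechanism. Viewer data is invented.
REPLACE_STRINGS = ["{OrgUnitId}", "{RoleId}"]

def substitute(template, viewer):
    """Fill each known token from the viewer's context, leaving
    unknown text untouched."""
    out = template
    for token in REPLACE_STRINGS:
        key = token.strip("{}")
        if key in viewer:
            out = out.replace(token, str(viewer[key]))
    return out

# A hypothetical request-form URL, like the ticketing example above.
template = "https://tickets.example.edu/request?course={OrgUnitId}&role={RoleId}"
student = {"OrgUnitId": 12345, "RoleId": 110}

print(substitute(template, student))
```

The payoff is that every viewer of the same widget gets a URL (or form) pre-filled with their own course and role context, with no per-course configuration.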

Continuous Delivery Model

So, like every other D2L admin, it strikes me that Continuous Delivery might be either the best thing or the worst thing ever. I guess we'll find out next August. Monthly releases will occur the first Tuesday of the month, and then there are five application dates – the first two waves apply to test, the last three to production. So if you are scheduled for wave 2 and wave 3, you'll have a week with the update on your test instance. As an admin, you are locked into your waves except in odd circumstances. Honestly, I don't particularly like that idea, but it's what's happening. I guess I'll have to see how other schools feel about the upgrade paths.

The language around what D2L is doing with the Continuous Delivery Model is changing as well: an Upgrade now means your last upgrade, the one to LSOne/10.4, while Updates are what happen under Continuous Delivery. Similarly, Release Updates are more of a roadmap – what's coming and when it's expected to impact you – and Release Notes cover what's changing in a particular month, released three weeks prior to the application date.

I wasn't any more scared of this than I was before. As it stands, for each upgrade I spend two months poring over documentation, distilling it down to four or five pages, rewriting it for our user group, then pushing it out, applying the upgrade and training people on it. While major changes will be visible up to 12 months in advance and are up to admins to enable, it strikes me that this changes training and local documentation significantly for us – because we've customized our instance, particularly around language use, pretty heavily. I'm not asking D2L to do our documentation work – they've even offered all clients their own videos – but I suspect I'm going to get a whole lot busier under a Continuous Delivery Model. We'll see.

Closing Keynote – LeVar Burton

I am not a Trekkie and don't really care about celebrities, but I enjoyed LeVar's talk, although I struggle to make a connection between Reading Rainbow's move from public resource to private enterprise and D2L. Talk about a missed opportunity: have John Baker come out at the end and say, "all D2L clients have access to Reading Rainbow's library of video assets starting now." I wasn't let down, per se, but it was a pillow-soft ending to a good conference. When the gold standard of this sort of thing is Apple announcing that all OS upgrades are free – that's an oomph.

Actually, that wasn't my end to the conference – that was a beer with my favourite D2L'er, Barry Dahl. Of course, losing my passport was another story…

Fusion 2014 – Day Two Recap

So, as anyone who has gone to a conference before will recognize, it's a bit more like a marathon than a sprint – you really have to pace yourself to get everything in and pay attention to the things you want to. I will say that, for the second year in a row, the food at the Fusion conference was really good. I ended up talking with Paul Janzen of D2L about our impending PeopleSoft integration and the summer of integrations (Blackboard Collaborate, Pearson, maybe McGraw Hill, iClicker and Top Hat) we're doing at McMaster.

Ken Chapman also joined us at breakfast and asked a little about what we'd like to see out of e-mail. Frankly, I hadn't thought about e-mail in years, because we've been mired in hell with IT trying to get the Google Mail integration working (not that it doesn't functionally work, but IT has stalled us on admin-level access since we asked a year and a half ago). I said that I'd personally prefer the system not do e-mail at all, but that would be a difficult change, considering we have people who keep their academic teaching e-mail on the LMS rather than their institutional e-mail. The problem for us is that we're currently not configured to allow external e-mail. It will be interesting to see if IMAP/POP3 support comes to Brightspace sometime in the future – which would make a lot more sense.

Insights Focus Group

I wasn't sure if I was going to be of any help in this, but I thought that, seeing as we've run some reports with the Insights tool, maybe I could glean better ways to deal with it. Basically, people had three concerns: with large data sets you can't run org-level reports (one of ours as well), the interface needs improvement, and the time it takes to create ad hoc reports is too long. So those issues were at least noted. Let's see how they get addressed going forward.

Blended Learning, Learning Portfolios and Portfolio Evaluation

Wendy Lawson and I co-presented this one – though it was mostly a Wendy show. She lived it, so she should have the floor. Basically, this presentation outlines what we collaborated on for Wendy's Med Rad Sci 1F03 course – a professional development course for first-semester, first-year science students going into the Medical Radiation Sciences (X-Ray, Sonography, etc.).

We used the ePortfolio tool as the presentation mechanism – which I think worked well. I'm not sure we had a good flow of what we were going to show on each page, but other than that, it was a risk we felt was small enough not to impede our presentation.

We talked about how this redesigned course could use the Learning Portfolio to deliver the course in a blended manner (using ePortfolio/Learning Portfolio activities as the one-hour blended component) and how the students did with it. After working with Wendy on this presentation, I can say the stuff her students did was miles above what we saw on average, and I think next year the weaknesses she acknowledged in the course will be addressed.

Vendor Sessions

I honestly skipped these because, well, I’m not interested in getting more spam in my work e-mail. Plus, my wife was having surgery (everything’s good!) at this time so I wanted to call and make sure we connected before surgery began.

Connecting Learning Tools to the Desire2Learn Platform: Models and Approaches

Attending this session was particularly self-serving – I wanted to say hi to the presenter, George Kroner, whom I've followed on Twitter for what seems like a million years, and the Valence material is stuff I feel I should know. I have a decent enough programming background – I can hack things together. So why am I not actually building this stuff?

George walked through UMUC’s process for integrating a new learning tool which can be broken down into three steps:

  1. Evaluate tool fit
  2. Determine level of effort to integrate
  3. Do it/Don’t do it

It seems so simple when I write out my notes again, but it's a really interesting set of steps. For instance, in determining the level of effort to integrate, you also have to think about post-integration support: who supports what once you integrate? Is it the vendor, or your support desk? What's the protocol for resolving issues – do people put in a ticket with you, and then you chase the vendor? Is the tool a one-time configuration that imports/exports nicely, or does it need to be configured every time it's deployed in a course?

We did a Valence exercise next, and I want to link two tools I'll need when I circle back to doing some Valence work (soon, I swear!):

https://apitesttool.desire2learnvalence.com/ and http://www.jsoneditoronline.org/. The API test tool is a no-brainer really – I knew about it before but never knew where it lived – and it's incredibly helpful for debugging your calls and seeing what comes back. JSON Editor Online is new to me, and something I really, really needed. I'm a JSON idiot – for some reason, JavaScript never resonated with me. I've always preferred Python or PHP for web scripting, despite the power of JavaScript. Guess I'll have to put on my big-boy pants and learn it all over again. Maybe Dr. Chuck will do a JSON course like he's done with Python?
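
For the simple cases, Python's standard library does much of what I'd paste into JSON Editor Online – pretty-print the blob, then walk it. The response body here is an invented, Valence-ish example, not a real API payload:

```python
import json

# An invented, Valence-ish response body -- NOT a real API payload.
raw = ('{"PagingInfo": {"Bookmark": "20", "HasMoreItems": false}, '
       '"Items": [{"OrgUnitId": 12345, "Name": "Sandbox Course"}]}')

data = json.loads(raw)

# json.dumps with indent does what I'd otherwise do in an online editor:
# turn a one-line blob into something readable.
print(json.dumps(data, indent=2, sort_keys=True))

# Once parsed, walking the structure is plain dict/list access.
names = [item["Name"] for item in data["Items"]]
print(names)
```

That won't replace the API test tool for authenticated calls, but it covers the "what on earth is in this response" half of the problem.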

Social Event at the Country Music Hall of Fame

The evening social event was great – the Hatch Show print is actually something I might just hang in my office. There was some shop talk, some fun stuff, even a drone…

Fusion 2014 – Unconference and Day One Recap

Instead of a big post I’m going to break my experiences up into three distinct posts because a) it’ll get me to post more frequently, b) that’s something I want to do and c) no one wants to read a monolithic block of text.

So I flew out of Buffalo, and crossing the border was an interesting time – I got the third degree about where I was going and what I was doing. I think they thought I was being paid to speak at a conference; next time I'll have to change my language to something like "attending a conference". After the border and the pornoscanners at the airport, I arrived in Nashville. Now, I'm not that worldly, but I've been to a few places, and Nashville is not one of my favourites – not because the city is particularly terrible, but because it's not particularly walkable and has, well, public transportation issues. Outside of those quibbles (which are big problems for me), it's a fine city with some fine people.

The Unconference

One of the best things to happen at Fusion over the last 5 or 6 years is the Unconference. I missed the first few because I was never able to actually get to Fusion, but the last couple of years I've been able to go. It's an event kick-off that is fun and social, and often leads to previously undiscovered ideas and new ways to break D2L. I didn't stick around for the full discussion because I was a bit tired, but the one thing I did learn was that VHS (Virtual High School) uses JavaScript to develop interactive elements of courses. That's not a particularly shocking example on its own, but combine it with the Valence API and maybe you could do some in situ testing and push results to the gradebook. Later, a few of us went out for a nightcap and a good time was had by all.
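
To sketch what that combination might look like: client-side activity results get handed to a small middleware script, which builds a Valence grade-value update. The route and body shape below are my (possibly stale) reading of the Valence docs, so treat them as assumptions to verify against the current API reference, not gospel:

```python
# Sketch of turning an in-page activity result into a Valence grade update.
# The route and body shape are my reading of the Valence docs and may be
# stale or wrong -- verify against the current API reference before using.

LE_VERSION = "1.4"  # assumption: whatever LE version your instance supports

def grade_update_call(org_unit_id, grade_object_id, user_id, points):
    """Build the (method, route, body) triple for a numeric grade update.
    A real script would sign and send this with the Valence auth scheme."""
    route = (f"/d2l/api/le/{LE_VERSION}/{org_unit_id}"
             f"/grades/{grade_object_id}/values/{user_id}")
    body = {
        "GradeObjectType": 1,          # 1 = numeric, per my notes
        "PointsNumerator": points,
        "Comments": {"Content": "Scored by in-page activity", "Type": "Text"},
        "PrivateComments": {"Content": "", "Type": "Text"},
    }
    return ("PUT", route, body)

method, route, body = grade_update_call(12345, 678, 90, 8)
print(method, route)
```

Separating "build the call" from "send the call" like this also makes the repetitive part trivially testable before anything touches a live instance.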

Fusion Day One

Typically the first day has a ton of beginner and introduction sessions in the morning, so I ended up meeting with my co-presenter to go over our session for the next day. The sessions I did attend were incredibly useful, and I learned a ton about how other places develop in-house solutions. In fact, most of my attendance was in sessions around External Learning Tools or the Valence API.

Keynote

So John Baker's keynote threw the audience a little; the big takeaway was that Desire2Learn is now D2L, and the Learning Environment – the LMS – is now called Brightspace. I guess there's a thinly veiled jab at the competitors being the dark side, but I can't say I understand the need to change names. Lots of people at the conference suggested that "Desire2Learn" seems a very 1990s thing, reminiscent of the dot-com boom/bust, and I can't say they're wrong. However, it would've been nice to have been told that officially. I'm a bit of a smartass when it comes to names, so my immediate instinct is to shorten this to its logical short form, BS. Not necessarily flattering. I don't think D2L is big enough to have gotten out in front of it and shortened it to B, which in and of itself is not a good acronym either (B product? B movie?).

I'm not the only person looking for a short form either. I still don't know the difference between Brightspace and the LE (so is the new version called Brightspace version 1? is the existing product still called LE 10.3?)… so many questions, none of them answered.

I’m sure many will talk about Chris Hadfield’s inspirational speech, it was great and all, and I certainly appreciate what he’s done. I just don’t see the connection to the conference that he brings.

Integrating Neat Tools and Activities into your Course through LTI

This session was all about External Learning Tools – which we've had a summer of dealing with so far. This particular session covered integrations with SoftChalk, SWoRD and TitanPad. I'm familiar with SoftChalk through a series of courses I'm taking at Brock University, and I can't say I've ever been particularly impressed with the product – perhaps that's the way Brock is using it, or the way the course was developed, or a limitation of Sakai (Brock's LMS). Either way, this session demonstrated SoftChalk activities hosted in Content connecting back to add grades to the Gradebook – certainly a more interesting way to deal with whatever you design in SoftChalk.

SWoRD was a particularly interesting case – although I don't know how robust or deep the integration was (I suspect D2L was merely passing enrollment data to SWoRD). SWoRD is a peer assessment tool that might be an alternative to something like peerScholar.

I'm always happy to see Etherpad clones, and TitanPad was used as the example here. If you host an Etherpad clone at your institution, you can pass user names to the pad for automatic attribution in the document. I'm not sure how robust Etherpad is for, say, classes of 600+, but that would be an interesting experiment.

One thing the D2L presenter said was that when you check the option to send User ID in the external tool configuration, it sends an anonymized version of the username. That's interesting, because the language in the External Learning Tools dialog would benefit from this tidbit – we've turned it off in most cases (with no apparent issue for students/instructors logging into the external tool) because we thought it would violate our university privacy rules.

The Secret to APIs

The second session of the day for me was on using Valence (D2L's API) to create personal discussions in a course – each with an enrollment of one student plus the instructor. The big takeaway: in courses with set enrollments, you can save a ton of time by writing a script to do the repetitive, boring stuff – create a group of one, enroll a student in it, then create a discussion topic and restrict it to that group. It was interesting to see C# used as the middleware language – I thought C# was out of favour, but maybe not? PHP would've been easier, and Perl/Python might've been faster to complete the task. Either way, this is the work that earned Ryan Mistura the Desire2Excel award in the student category. Cool stuff.
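
For my own notes, here's roughly how I'd lay out that repetitive sequence in Python. There are no real HTTP calls – the function just enumerates the per-student work – and the routes are paraphrased from memory of the Valence docs, so they're assumptions, not gospel:

```python
# Sketch of the "personal discussion per student" pattern. No network calls:
# this just enumerates the repetitive calls a script would make per student.
# Routes are paraphrased from memory of the Valence docs -- assumptions only.

def personal_discussion_calls(org_unit_id, forum_id, student_ids):
    """List the (method, route, body) calls needed for each student,
    assuming a pre-made group category (id 1 here, invented)."""
    calls = []
    for sid in student_ids:
        # A real script would take the new group's id from step 1's response.
        new_group = f"<group-for-{sid}>"
        # 1. Create a group of one.
        calls.append(("POST",
                      f"/d2l/api/lp/1.4/{org_unit_id}/groupcategories/1/groups/",
                      {"Name": f"Personal-{sid}"}))
        # 2. Enroll the student in it.
        calls.append(("POST",
                      f"/d2l/api/lp/1.4/{org_unit_id}/groupcategories/1/groups/{new_group}/enrollments/",
                      {"UserId": sid}))
        # 3. Create a topic restricted to that group.
        calls.append(("POST",
                      f"/d2l/api/le/1.4/{org_unit_id}/discussions/forums/{forum_id}/topics/",
                      {"Name": f"Journal for student {sid}", "RestrictedToGroup": True}))
    return calls

calls = personal_discussion_calls(12345, 42, [1001, 1002, 1003])
print(len(calls))  # three calls per student
```

Even as a sketch, it makes the value proposition obvious: three clicks' worth of tedium per student becomes one loop.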

Solution Spotlight/D2L Year Recap

Nick Oddson and Ken Chapman handled the recap of D2L's year, focusing on the extensibility of the platform. They did point out that there is a 40% faster time to resolution because they've increased their support and SaaS service teams – which is good, because their service was slow before. I have noticed that their support turnaround is probably the best it's been in years.

The looking forward part of their talk was interesting – it seems like they talked a lot about either 10.3 improvements (that were already announced last year, and available now), or stuff that we can’t see yet. Perhaps a chart:

10.3 features and features not yet released to the public, mixed together:
  • Wiggio
  • Discussions (Grid View restore)
  • Binder – Windows 8 support
  • Quizzing UI/UX improvements
  • Content Notifications
  • Student Success System
  • LEaP – Adaptive Path learning
  • Course Catalog (currently being used on Open Courses)
  • Visual Course Widgets (customization, I presume at a cost)
  • Built-in practice questions in Content (contextualized learning)
  • Gamification built into the Learning Environment (I assume 10.4/LSOne/Brightspace)

I suspect that all the talk about predictive modelling is something they want to build primarily for remedial use and for online courses. As a market strategy, that makes some sense: some of D2L's bigger clients are primarily online universities.

Blackboard Collaborate Integration with Desire2Learn, Uhh D2L, LE uhh Brightspace 10.3

I think I did that right?

Back in June we took a few weeks and integrated Blackboard Collaborate (our web conferencing tool) with our instance of the Learning Environment (Brightspace just doesn’t feel right). We are currently running 10.2 SP9 of the LE.

Reflections? Well, for such a simple integration (and really, the D2L interface is waaaaay better than the Blackboard Collaborate one), it took a hell of a long time. We had to purchase and get D2L to install the IPSCT pack – so if you're entering into an agreement with D2L and may want to do this later, definitely spend the cash up front. From start to unveil it was over six weeks – and no, that's not solid work on just this. After D2L installed the IPSCT pack, we had to contact Blackboard support to get our credentials. Seeing as we've had total turnover in who supports Blackboard Collaborate, our new Collaborate support person was not on the list of approved contacts – which is funny, because she's the one who does all the tickets. So we contacted our account manager. No response. It turns out they are no longer our account manager – that's why we hadn't heard from them in over nine months. Great. So support can't do anything, and neither can our phantom account manager. Finally we got to the bottom of who our new Blackboard account manager is, they straightened out the mess, and our person is now an approved contact. After that, it still took a week to get our credentials for test and prod.

Configuration on test went smoothly enough – if you've ever worked with External Learning Tools in the LE, it's the same as any other configuration in that tool: enter the address to make the connection and the key and secret, check a few more boxes, and off you go. Now, everyone who gets enrolled in the LE gets a Default Role at the org level and is then assigned a more applicable role at the course offering level, which means that for us, you have to configure not only the Instructor/Student and TA roles, but the Default Role as well. Not only is this a pain to do, it's easy to forget to do – and forget is what we promptly did. A day or two was spent tearing out what's left of my hair, until lightning struck and sparked the engine enough to get it firing again.

Fast forward a couple of weeks: we got some time to implement it on prod, and yet again forgot what we had done to make it work. A week later we said something to the effect of "Fudge, Default Role…", ran off to the LE and fixed our error. Sometimes it's not the technology that fails you…

What I Want in an xMOOC

Listen, I hear you – many will respond to my title and say “nothing, I want nothing from an xMOOC and I hope they all become the passing fad that they are”. I share your sentiment, but I think there’s value in xMOOCs, as bad as the pedagogy behind them is.

1. As a training program, which most xMOOCs are, they can be incredibly useful. For base knowledge and introductory subjects, these are appropriate tools to get learners to the next step. What I want to see from the Courseras and Udacities of the world is more flexibility – open enrollments, work at your own pace, maybe even mini-credentials per unit (via badging?) and most importantly, multiple pathways for learning. I would love to see an online course develop multiple methods of instruction that, as a student, I could opt into if I’m having trouble with a topic. That would signal to me that it’s about learning, not about profit margins – because frankly, developing multiple methods of instruction for an online course is incredibly expensive. Putting on a venture capitalist’s hat (and it’s ill-fitting on me, I will admit), this could give a company an advantage: being able to leverage the data to determine which learning method is best suited to a student/learner, based on prior successes.

2. Better mechanisms for assessment. All the MOOC platforms have great test banks, but not much else in the way of advancing assessment. Coursera’s peer evaluation is a step towards something good, but it relies on peer assessment without any repercussions – I could do a garbage job of assessing someone and it won’t affect me. There needs to be a balancing here to ensure that peer assessment is valued as important. Programmatically, it’d be easy to do: make sure that feedback exists, make sure it’s longer than x characters, and make sure that there’s a mechanism for providing ways to improve (could be as simple as an explicit “ways to improve this criterion”).

3. Speaking of open, for the courses that are free, I’d like to see a commitment to openness, meaning that the materials created are able to be repurposed in other contexts, easily acquired, clearly labelled and ideally in a repository. Yeah, like that will happen. The only open in xMOOC is open enrollment.
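To make point 2 concrete: the kind of feedback gate I have in mind really is a few lines of code. This is my own sketch – the length threshold and the “improvement cue” phrases are arbitrary stand-ins, not anything a MOOC platform actually ships:

```python
# Arbitrary threshold for "substantive" feedback; a real platform
# would tune this (and probably do something smarter than substrings).
MIN_LENGTH = 50

# Crude signals that the reviewer offered a way to improve.
IMPROVEMENT_CUES = ("improve", "suggest", "next time", "consider")

def feedback_is_acceptable(text: str) -> bool:
    """Gate peer feedback: it must exist, be long enough, and
    contain at least one hint at how the work could be improved."""
    if not text or not text.strip():
        return False
    if len(text.strip()) < MIN_LENGTH:
        return False
    lowered = text.lower()
    return any(cue in lowered for cue in IMPROVEMENT_CUES)

feedback_is_acceptable("Good job.")  # too short, no suggestion -> False
```

A gate like this wouldn’t guarantee *good* feedback, but it would make the lazy garbage-job review cost more effort than a real one – which is most of the battle.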

Learning Portfolio Writing Prompts

One of the problems with asking first year students to “reflect” is that, typically, they don’t know how to reflect. It’s not a skill many students arrive at University with, nor one they’ve had much chance to develop. Yet it’s a critical skill to have – to think about what you’ve done, and identify what worked well, what needs improvement and what can change.

Reflection is really not a simple process, but it’s crucial to learning, and really important to deep learning. Think of all the life lessons you’ve learned, and I bet you’ve thought about them often, and sometimes deeply. They change you. Similarly, good educational experiences (whether that’s reading a book, attending a lecture, practicing a lab, or just trying something out) cause you to think about them, and again, sometimes deeply. It’s that deep learning that the Learning Portfolio wants to get at.

The activities we unveiled in the first year of the Learning Portfolio were good – but mostly course based. Anecdotally, we didn’t see a lot of extracurricular activity, or if we did, it was part of a program. One potential reason is that we didn’t give students a reason to actually use the tool. So one way to solve that will be to post writing prompts, to offer students something to reflect on and a reason to use the tool on their own. Each writing prompt will help students connect their academic work to their “outside” life, connecting academia to reality. I consider this sort of thing “translational” – an effort to break academia down into understandable language for the average person. In the process, hopefully students will engage with thinking about what they’re doing, set a goal for themselves and maybe get a little bit more out of their experience here.