Polling In Desire2Learn’s Learning Environment

The process to install a polling widget on your institution’s homepage is fairly straightforward. I tend to prefer self-hosted solutions, and open source ones at that – thankfully, in my job we have that luxury. If you’re attempting this with no knowledge of PHP or servers, you might have some issues. I’ll try to explain as best as possible, but comment if you get lost in the process, and I’ll be happy to clarify what I can.

The first step is to find a polling software solution; basically, any polling software that creates an HTML/PHP page can be embedded. It’s preferred that the page lives behind HTTPS (a secure HTTP connection) – so if you’re self-hosting the polling solution as we are, you should put it behind the extra security. Why? Well, Internet Explorer doesn’t handle mixed secure and insecure content and will give the end user a pop-up with some unclear language that, in the end, only adds more hurdles for the user to answer the poll. In fact, Firefox now has similar behaviour (with an even less apparent notification that needs intervention before fixing).

We’re using this polling software: http://codefuture.co.uk/projects/cf_polling/ which serves our purposes quite nicely. It doesn’t allow for question types other than multiple choice, so if you need that functionality, you’ll have to choose something else. For our polls, we’ve worded the questions so that they fit this mold. The extra bonus of this one is that it stores all the data in a flat file – not in a database – so you only have one thing to maintain.

Within the PHP code, you can edit the options – the PHP file is well commented and shouldn’t give you any issues. One trick I’ve run into is that the D2L widget editor doesn’t refresh the data well – so if you make an error in the PHP, you should create a new file to upload rather than trying to overwrite the old one. I couldn’t figure out why it wasn’t letting me reset the collected data (I suspect that the flat file is named after the PHP file, so updating the PHP won’t force a reset of the data captured. Of course, why it wouldn’t overwrite the typo in the one answer, I’m not sure).

Another downside – and it’s a big one if you want to use these numbers as more than a general indicator – is that this solution does not track users. So, if you do choose this route, be aware that this poll sets a cookie on the computer that answers the poll, not necessarily attached to the user who answered it – so the same person could answer the poll multiple times. We don’t particularly care about that, only because we’re using it for a general sense of how the community feels on these issues. With large enough data, even with some mischievous numbers, we’d be OK.

You’ll need some basic CSS skills as well to edit how the page will look – there are three style options by default – but I’ve trimmed the script down to exclude the extra options we aren’t using. I’ve rewritten the CSS to more accurately reflect the branding and colour scheme that we use at my institution.

I’ve included the text of the script listed above as an example of what we run and how we customize it. If you can’t see it, visit the text on pastebin.

<?php
///////////////////////////////////////////////

// include the cf polling class file
include('cfPolling/cf.poll.class.php');

// your poll question
$poll_question = 'How well did the Discussion tool stimulate a conversation that improved understanding of the course material?';

// In this variable you can enter the answers (voting options),
// which are selectable by the visitors.
// Each vote option gets its own array entry. Example:

$answers[] = 'did not use';
$answers[] = 'a little bit';
$answers[] = 'a lot';
$answers[] = 'was crucial';

// Make a new poll
$new_poll = new cf_poll($poll_question, $answers);

// (Option)
// if you do not want to use cookies to log whether a user has voted.
// If you are not using one_vote there is no need to use this.
// $new_poll->setCookieOff(); // (new 0.93)

// (Option)
// One vote per IP address (and cookies if not off)
$new_poll->one_vote();

// (Option)
// Number of days to run the poll for
$new_poll->poll_for(28); // end in 28 days
// $new_poll->endPollOn(02,03,2010); // (D,M,Y) the date to end the poll on (new 0.92)

// (Option)
// Set the poll container id (used for CSS)
$new_poll->css_id('cfpoll2');

// check to see if a vote has been cast
// used if the user has JavaScript off
$new_poll->new_vote($_POST);

// echo/print poll to page
echo $new_poll->poll_html($_GET);

?>

So that’s the backend of things. We currently set up each polling question manually, and will rotate through six different questions (which means six separate PHP scripts) in a semester. Every three weeks, we prepare a new script page by copying the previous one, editing the end date, question and answers, and uploading it to the server.
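
We haven’t automated this, but the rotation logic is simple enough that a single script could choose the active question itself. Here’s a hypothetical sketch of that date arithmetic (in Python purely for illustration – the question list, function names and three-week interval are my assumptions, not part of cf_polling):

```python
from datetime import date

# Hypothetical rotation: pick the active question from a list based on
# how far into the term we are. One list entry per question in the cycle.
QUESTIONS = [
    "How well did the Discussion tool stimulate a conversation that "
    "improved understanding of the course material?",
    "How useful did you find the News widget for keeping up with the course?",
    # ...and so on, one entry per question in the semester's rotation
]

def current_question(term_start: date, today: date, interval_days: int = 21):
    """Return (index, text) of the question active on `today`."""
    periods_elapsed = (today - term_start).days // interval_days
    index = periods_elapsed % len(QUESTIONS)
    return index, QUESTIONS[index]

# e.g. four weeks into a term starting Sept 3 puts us on the second question
print(current_question(date(2013, 9, 3), date(2013, 10, 1)))
```

The same modulo trick would port directly to PHP, which would cut the six-copies-of-one-script maintenance down to editing a single list.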

Now getting it into a widget in a course (or at the organization level) is dead simple. Create a new widget, edit that widget and get to the HTML code view for the content of that new widget. Once there, put in this code:

<p style="text-align: center;"><iframe src="LOCATION OF YOUR FILE HERE" height="340" width="280" scrolling="no"></iframe></p>

Of course, you’ll substitute the location of the PHP script you’re using where I’ve written “LOCATION OF YOUR FILE HERE”. Click Save to save the widget. You won’t be able to preview this widget, so you’ll have to have a bit of faith that your code (and my code) is correct. Add the widget to your homepage, and you’re home for dinner.

Our experience with this was pretty surprising. The first time we ran the polls there were 36 responses in 10 minutes (during exams), 1450 in 24 hours and 2655 after one week. After three weeks the final tally was 3598. Now remember, that’s votes, not individuals. Even so, consider that each student might only have voted an average of 1.4 times; even if that skews the numbers somewhat, that’s still pretty representative (and corresponds with our internal numbers for the tool we surveyed about).

Here’s what the Poll looks like:

screenshot of D2L polling widget at work.

What do we hope to find with this? Well, personally I wanted to see how the Analytics tool use numbers would compare with users self-reporting. Does use of the tool make for an impression of using the tool? Are students even aware of the different tools in D2L?

UPDATE: It looks like the polling tool that I used is no longer around. I looked for a mirror but couldn’t find one in my extensive search. There are alternatives – I found one through this blog post on polling with PHP without databases, which pointed to this site: http://www.dbscripts.net/poll/ – though it may not work for you, because it requires server access to the .htaccess file. I’ll continue to update this post if other alternatives present themselves.

Video Killed The Audio Lecture?

So as part of our summer initiatives, I had the brilliant idea to replicate something we used to do at another institution where I worked before: video tutorials for the LMS. I know full well that this may be a futile exercise. There’s no possible way to keep up with however many videos we do produce, and no way to put out the super high quality work I feel is required, because we just can’t afford that kind of production time. However, the quickest way I could figure to make sure there’s no technical reason for this initiative to fail is to crank out several videos and post them to YouTube, so that 20,000 students have some sort of access to information they can use.

For those of you who haven’t done some sort of screen capture demonstration, here’s what I do. It may not make sense for you, or it may be downright wrong. It works for me – feel free to comment if you have ways that I can improve.

Write the script. 

The script is really the most important thing. It’s what makes you sound professional – you can’t just wing this (unless you’re brilliant). Write the script for what you’re going to say, then record it into your computer. Listen to it. Does it make sense? Good. Is it too long? Do you stumble over phrases? Fix them. Do this months in advance. About two weeks before you actually record, figure out if you can read/recite the script while doing something else. The something else can be typing on the computer, watching a movie… almost anything (I wouldn’t suggest brushing your teeth or having a meal). Can you get through without major issues? Good. It’s good enough to say out loud. Rewrite if you need to. Remember to say why anyone would want to do whatever it is you’re demonstrating. A script, even if it’s simple, will help guide you when you actually do the video. If you think you can skip this step, go ahead. However, I used to feel this way too, and would skip the script – until I was forced to work with one and it made the actual recording process simple.

Know how to do whatever you’re demonstrating.

Really, this is insultingly obvious, but it’s amazing how many videos I’ve seen stumble around what they were trying to show. Admittedly, I’m not 100% perfect – I often have to line my mouse up with the things I’m clicking (as my attention is on reading the script). If you happen to say “uhhh, how do I do that again?”, stop recording, shut off whatever software you have running and practice the steps. Then practice them again. Make sure you know them inside and out. Do them instinctively.

Decide on what you’re going to record with.

The simplest setup for the best quality is Camtasia. I may be biased because I like the tool a lot, but it’s not open source. I’ve used a ton of screen capture tools, but I know I’ll need the ability to fix audio in post and edit video. You may like CamStudio, which is pretty damn good – in fact, all of the webcasts I did for my early online courses (in 2007/8) used that software. Of course, it has no post-editing options. EZVid also has similar functions to Camtasia, so I’m interested in trying it out.

Nobody wants to see you.

The caveat here is that nobody wants to see me; you, however, may look like George Clooney and should be seen. Honestly, people are not looking for a video introduction, so don’t waste time making one. Get to the point. I used to say that there’s a place for a picture-in-picture talking head. Now? I’m not so sure. In certain Distance Education classes it may make some sense; however, the time it takes to do a decent talking head, mix the audio so it matches, and add in the pressure of having to do whatever it is you’re demonstrating in one take, is, well, a lot. Cut the extraneous stuff out. Make your video simple and to the point.

Get it done.

I like to record using Camtasia, with a USB Snowball microphone. It’s not a wonderful super duper mic, but it is a good USB microphone at a decent price point. I place the microphone as if I were going to speak into it, and then move it to the right of my mouth, so I’m not speaking directly into the mic. It’s a cheap trick to reduce smacking lips, pops and other annoying audio things. You can also make a DIY windscreen if it suits your needs to MacGyver something. I test the audio, and then hit record. Inevitably the first take is usually the one with the most energy, and usually also useless. Listen to it before moving on. Be critical of your performance. Slow down.

One other thing I like to do is take a bit of a break while recording – not a 20 minute break, but 15 seconds or so. Don’t move the mouse; don’t do anything, in fact. Let your mind reset. Get a bit of a breather and then go on. You can always easily cut out segments with no movement and no audio; as long as you don’t move the mouse, no one will be the wiser. Make sure your voice is enthusiastic. As soon as you can’t convince yourself this is important or fun, stop for the day. Go do something else.

Did you go off script? Chances are you did. Make notes where you left the script (you’ll need to transcribe it for captioning if you care about accessibility). YouTube’s captioning ability is pretty amazing.

If you have more energy, go again. You’ll probably find your rhythm. Keep mining that feeling because it is fleeting and you will find it beneficial to make up the time you lost earlier (either in setup, or some other place).

Cut yourself.

No, not with a sharp instrument – but be brutal with your performance. Find three seconds of silence and no movement on screen? Cut it. Find an awkward phrase that can be done without? Cut it. Students – your audience – don’t want to muddle through it. Is the instruction clear? No? Cut it (and in this case, do a voiceover). Tighten up the overall rhythm of the video. Do your best to make it flow. Editing yourself will be painful; listening to yourself is, well, up there with many uncomfortable things. I think it helps to imagine it’s someone else, but maybe that’s delusional.

Small things are important.

Taking the time to get all the small things right is important. They may not mean much individually, but every removal of a potential distraction from the content will help your learners. That means every “um” you remove, every breath that you can hear, every awkward pause – whenever you reduce those, you’re making a better product. Personally, I like to have a 15 to 30 second introduction – simply a couple of pages that I build in Photoshop (you could use GIMP or Paint Shop Pro). Those pages have some sort of identifier and the topic being covered. I also add a 30 second to one minute bumper at the end with my institution’s logo on it. If I have time I’ll craft something quick using FruityLoops and Audacity to give the beginning some pep in the audio department. A little four-note introduction can help make things seem uplifting and bouncy, and set the tone for the rest of the video.

Once the video editing is done, you’re really left with tying up all the loose ends. Captioning is a big thing – basically I listen and re-listen and type out the words I actually say. Make sure you export as large a file as possible, in MP4 format with the H.264 codec as a best practice. Why MP4 and that particular codec? Well, MP4 has become a universal file format, has better compression than MPG/MPEG, and has none of the compatibility issues of QuickTime or AVI. That particular codec is also widely used and will be very forgiving. If you’re uploading to YouTube or Vimeo, both sites will take MP4 files encoded with H.264.
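
For the curious, the same MP4/H.264 target can be expressed as an ffmpeg invocation. This is a sketch, not part of my actual process – ffmpeg and the specific flag choices are my assumptions here, since Camtasia’s own export dialog produces this format directly – written as a Python argument list so each piece can be commented:

```python
# Sketch: assemble an ffmpeg command line for an H.264-in-MP4 export.
# Assumes ffmpeg is installed; flag choices are illustrative defaults.
def h264_export_cmd(src, dst, crf=18):
    return [
        "ffmpeg",
        "-i", src,              # source file from your editor
        "-c:v", "libx264",      # H.264 video codec
        "-crf", str(crf),       # quality: lower = bigger/better; 18 is near-transparent
        "-pix_fmt", "yuv420p",  # widest player compatibility
        "-c:a", "aac",          # AAC audio, the usual MP4 pairing
        dst,
    ]

print(" ".join(h264_export_cmd("lecture.mov", "lecture.mp4")))
```

Exporting big and letting YouTube or Vimeo re-encode downward matches the “as large a file as possible” advice above.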

That’s it. After the hours it takes to capture and edit, your two minute video will be seen by, well, however many people see it. If you get a year out of it until the next upgrade, you’ll be in good shape.

Fusion 2013 Recap

So I went to Fusion (Desire2Learn’s conference around their products and tools), presented a fairly well received workshop on how to embed an RSS feed into a widget or content page (thanks again to Cogdog aka Alan Levine, Barry Dahl and The Clever Sheep aka Rodd Lucier, for having some part in my ability to do that – perhaps even unbeknownst to them). I also presented how my institution added a Polling widget to our Org level homepage at the Unconference (thanks to Kyle Mackie and his band of very merry helpers in setting that up).

Most of all, I stressed about travelling for the first time without my wife since, well, we got married (in 1995). Usually I fill a role in travel – that of planner, navigator, organizer – but she’s the fun and my social mediator. So frankly, I was worried that I would get to Boston and, well, not know what to do, or be the wallflower that I usually am. Thankfully, after arriving early enough on Sunday and getting oriented to the city (a bit), I fell into my usual travel routine and sort of discovered that I still know how to interact on my own. This year’s Unconference, my first, was, well, pretty much what I expected. I didn’t expect weirdness galore – though there was enough of that – but it was the perfect start to my experience at the conference. I got into a pretty good discussion of the whys and workarounds and issues we’ve had with the Desire2Learn Learning Platform with Andy Freed and Dave Long.

I met a whole bunch of people I follow on Twitter at the Unconference – further proving that Twitter is my most important network of connections. Of course, I finally got a chance to meet Barry Dahl in person, and of course, we hit it off. I have to admit, I was a bit scared to meet people in person. I always worry that real life is different than online, and it may just be… well, awkward. I have to say that Barry is the same person online as he is in my in-person interactions with him. Meeting the people I’ve interacted with online was the best thing that happened during the conference.

DAY ONE

I arrived at the conference hotel proper, signed in and was assigned to the “Red Socks” team (others were the “Bobby Orrs”, the “Larry Birds”, etc.). The Twitter hashtag for the Red Socks was #RS – not #BS, as I wanted to put in a bunch… I ran into our D2L account manager, Lee, who’s honestly one of the best account managers I’ve known. Had a good chat with him, and moved on to talking to the ePortfolio team about all the different ways we want to employ ePortfolio at my institution. Got a really, really good sense of where the product is going, and if it works as easily as it should, the tool should be really, really beneficial to students.

I attended an introductory session on Analytics (now rebranded Insights), because I’m still a bit boggled by the tool – how it does great reports at the course level, while the interesting stuff, for me anyway, is at the organizational level, and often I find that the damn tool doesn’t run. I don’t know if that’s me not really understanding the tool, or the tool not working. Either way, this session didn’t really help, as it was truly an overview.

Lunch rolled around with an OK keynote by Michael Horn, talking about how education is ripe for disruption (like the Auto industry, Music industry or other industries). I guess the analogy doesn’t work in Canada where there’s a level of government involvement in the “competition” between institutions and how education is not a product to be purchased like music or automobiles. Also the charts he showed made no sense to me and communicated even less. John Baker had some suits from other corporations talk with him about education – which I guess was fine. Frankly, I am not a fan of suits, and while I’m sure I could’ve gleaned something from the discussion, all I kept thinking was “these guys are figuring out ways to sell me some product I don’t need”.

Checked out the new Document Templates in a session as well, which was interesting, but we won’t have the time post-upgrade to do anything with them. Perhaps down the road – but knowing how things work, it’s unlikely we’ll be able to find the time to do anything interesting with them.

Ended the day in a session with Jason Thompson from Guelph about their in-house PEAR tool – Peer Assessment and Review – which talks to D2L through the API. Probably the most interesting thing I learned today, mostly about the peer review process, and something that I think will be important as a long-term goal for McMaster and its Learning Portfolio project.

In the evening we went bowling and played pool. I’m more of a people watcher, but I got to hang out with my new friends from Guelph and some old friends from Mohawk College; it was good overall, but slightly overwhelming. Walking back to the hotel was probably the most interesting thing I did – in the process I went by the oldest firehall in Boston. The walk back to the Newbury Guest House was winding, as I took an unexpected detour, but it all ended up fine. Part of the fun of being in a different city is those weird explorations down unexpected roads. This was a good one.

DAY TWO

Up early, to the conference early… and, well, nerve-wracked from the anticipation of presenting. I’m never calm about presenting, no matter how familiar I am with the subject matter – I suspect that comes from my constant analysis of “what could go wrong?”. More on that later.

The sessions started really early – or maybe it was just me. Of course, I arrive and grab some stuff to eat, start to pour a coffee, and some people exiting the main hall point out that I was on the big screen, to which I responded with a truly confused “huh?”. What a way to kill your appetite – having my mug up on screen twenty feet tall. My wife did say to take pictures of myself in Boston, so I did…

It was only a brief moment of celebrity. Note to self: hide better when Barry has a camera. Another note: compose your shots indoors and check to see if they work. As for the sessions on day two:

I started with the Heutagogy session, which was interesting – it talked a lot about self-directed learning. I think one of the things that gets in the way with Learning Management Systems in general is that there are no mechanisms for students to determine pacing. This is something that I’ve come up against a fair bit – especially in MOOCs – where you would think that students being able to determine their own pacing might be a good thing. I wonder if something like this could be structured using the Checklist tool: students could opt in to a voluntary “section” to graduate with, and then use restrictions to manage different dropboxes and quizzes? This session was an interesting starter to the day.

The next session I attended was Ohio State’s session on expanding the LMS, which delved into some of the issues of using third-party (mostly publisher) platforms integrated with the LMS. They did note that the Pearson and McGraw-Hill integrations were the most technically challenging, which makes sense given that those publishers have developed their own environments. While my institution isn’t thinking about this sort of stuff yet, it might get there sooner rather than later. It was interesting to hear, and unfortunately I couldn’t attend the follow-up session, which was more technical in nature.

I then attended the ePortfolio lightning round – which may have been the best thing on Tuesday. There were a ton of ways that ePortfolio is being used, but all of them use it as a reflective tool. Many find that they scaffold reflective practice at first with forms that define “how to reflect”, and then, as the course develops, bring in less structured reflections. I think this is really valuable for our use in courses – in fact, it’s information I’ve passed on to a couple of instructors in discussions about how they can use the Learning Portfolio (as we’ve called it) at McMaster.

Lunch was next. Delicious. I have to say, the food was excellent throughout the conference. The keynote was from Karen Cantor, and to be honest it didn’t resonate at all because I was presenting right after lunch. Had some interesting conversations with my friends at Mohawk College again – not about work but about life in general.

I did my workshop on RSS feeds, using Feed2JS and a bunch of other open source tools, right after lunch. I hit the wifi cutoff switch on my laptop mid-demonstration, which led me to switch to the house laptop for the finish. Panic was coursing through my veins, but I think I held it together pretty well.

After I finished, it was a blur again, but I rounded out the day with the “Free and Funky” Web 2.0 tools session. There were a ton of tools listed, but three were new to me: Quizlet, Quietube and Twine. Out of all these tools, I should maybe document using some of them for our faculty – just to broaden their horizons as to what content can include.

At the close of day, we had a police escort to the JFK Library/Museum, which was an awesome building. I ended up seeing 5% of it because, well, I was chatting with a bunch of people. There was more drinking, eating, some dancing (not by me) and, after an ill-advised stop at another bar, it was time for sleep.

DAY THREE

The sessions every day seemed to start earlier (or maybe bedtime was later?). After a quick breakfast and only one incident of me on screen, I headed off to the sessions.

The first session I attended was a bit out of my wheelhouse – it was on how ePortfolio is being delivered at the K-12 level. One of the best quotes I got from this session was “Course design is like playing chess”. Indeed it is. There was a lot of talk about nuts and bolts – one interesting concept was rubrics being embedded in forms that are used as exit cards for each week. I wondered where the rubric information goes – back to the student, obviously, but can it be connected to a dropbox?

The second was a session on Rubrics and Competencies best practices – incredibly useful in my context, as not a lot of faculty use Rubrics or Competencies – and I think we’ll need a Rubrics workshop and a Competencies workshop. In fact, I hope the language around the tool changes, and Competencies shift to Learning Objectives. The nice thing about this session was the takeaway: we got some pre-built rubrics. I think we’ll be designing some basic rubrics (by taking common assessment methods like essays and proposals, and common criteria like critical thinking, spelling and structure) and distributing them through the org level of the Learning Environment.

I attended the Respondus LockDown Browser session, which was an interesting thing to think about. I know that issues of academic integrity (which is in and of itself a weird buzzword) in blended and online delivered courses are something that my institution might have to think about moving forward as they look at more blended learning projects. I don’t know that it was immediately valuable, but we’ll see if there’s something we can work with going forward. I’m always looking for things that are easy to integrate with the LMS, and this is one thing. I’m not particularly happy with the idea that it’s built off of Internet Explorer, because that browser frankly blows, but I understand their logic.

Day three’s lunch was again, delicious, but distinctly messy. I escaped unscathed, but man, I could imagine dropping pulled pork or baked beans on my shirt no problem… Alec Couros delivered the closing keynote, and even though I’ve seen most of the elements that Alec ran through – I really enjoyed seeing the whole thing put into context. He had a well deserved standing ovation. His keynote was entertaining and informative. A great way to close the conference.

EXCEPT there was one more session – and the last session was incredibly useful and unfortunately poorly attended. It was all about optimizing images for the web to make them more mobile-friendly, and I learned a ton from it – mostly about the amazing tool TinyPNG, and the optimization trick for JPG files on Retina displays (double the pixel size, and use media queries to shrink) to allow for higher pixel density. I’ve always been pretty staunch about keeping JPG optimization at a higher quality level (80% or higher) because of the lossy compression that happens. The presenters were saving images at 40% and getting comparable quality for great filesize improvements. While that kind of nerdery is not necessarily important for anyone outside of developers, it is important for almost everyone who is putting a picture in a course, because that picture is going to be seen on a mobile device.
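
The Retina trick boils down to simple arithmetic: export the image at twice the pixel dimensions you intend to display it at, compress it aggressively, and let CSS scale it back down. A quick sketch of that arithmetic (the function names and the `img.retina` selector are mine, purely illustrative – not from the session):

```python
# Sketch of the "double the pixels, halve the display size" Retina trick.
# Export at 2x the CSS display size; the extra pixel density hides the
# aggressive JPEG compression (the presenters were saving at ~40% quality).

def retina_export_size(display_w, display_h, scale=2):
    """Pixel dimensions to export for a given CSS display size."""
    return display_w * scale, display_h * scale

def css_for(display_w, display_h):
    """CSS that shrinks the 2x export back to its intended display size."""
    return f"img.retina {{ width: {display_w}px; height: {display_h}px; }}"

print(retina_export_size(400, 300))  # a 400x300 slot gets an 800x600 export
print(css_for(400, 300))
```

So a picture destined for a 400×300 slot in a course gets saved at 800×600 and heavy compression, then displayed at 400×300 – sharper on high-density screens, and often smaller on disk than the 80%-quality 1x version.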