Show Me

Just finished reading It’s A Visual Thing: Audio-Visual Technology in Education, which is about the unexpectedly sophisticated visual literacy of young children and how this can be leveraged for better learning environments. One quote really stuck with me:

To his surprise, he found that their search engine of choice was not Google, but YouTube, because it provided them with a clear, visual set of results rather than a series of short paragraphs.

I guess this explains why Google got into the video sharing business in the first place… A counterpoint might be that students choose YouTube over Google because of a simple usability problem – there are fewer clicks to get to the information, and on a mobile platform such as the iPod Touch, which was used in the assignment, clicks are a painful process.

Google understands this, and Google Goggles is an attempt to begin organizing visual data by likeness – acting somewhat like Wolfram Alpha but in a much more visual way. Taken further, Goggles seems to be a computational way to act like the human brain – see something and grab some information about it.

What I get from this is that we can’t bury educational content deep if users are going to access it from mobile devices. We shouldn’t ask them to click two or more times, and that in and of itself might be damning evidence against an LMS, which can bury content deep under a mass of links.

Wikis Revisited

So back in the day (July 9 and July 10, precisely), I was having trouble with the wiki not embedding in D2L – it would pop out and not stay in the frameset. What it came down to, and something I wasn’t 100% sure of at the time, was that the MediaWiki version we were using was fairly old. After attempting two upgrades, we now have a wiki that stays in the box, but IE 7/8 break the login (especially on the deep-frozen computers in our labs). That problem comes from the browser’s default settings: you have to override the privacy settings through Tools > Internet Options > Privacy tab > Advanced > Override Automatic Cookie Handling.

[Screenshot: settings for cookies in IE to stay logged into the wiki]
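For anyone debugging a similar setup, here’s a rough diagnostic sketch of my own (the wiki URL below is a placeholder, not our actual install): it fetches the wiki’s login page and prints the headers that usually decide whether a page will stay in a frameset and whether IE will keep its cookies.

```python
# A rough diagnostic sketch with a placeholder URL: fetch the wiki login page
# and inspect the headers that typically govern framing (X-Frame-Options) and
# session cookies (Set-Cookie), which is where D2L embedding and IE logins
# tend to go wrong.
import urllib.request

WIKI_LOGIN = "http://wiki.example.edu/index.php?title=Special:UserLogin"  # placeholder

with urllib.request.urlopen(WIKI_LOGIN) as resp:
    headers = resp.headers
    print("X-Frame-Options:", headers.get("X-Frame-Options", "(not set - framing allowed)"))
    for cookie in headers.get_all("Set-Cookie") or []:
        # IE 7/8 silently drop third-party cookies inside a frameset unless the
        # privacy settings are overridden as described above.
        print("Set-Cookie:", cookie)
```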

So for now, we’re still having trouble with legacy versions of MediaWiki, but on my personal page, the latest MediaWiki install is great and works fine with most browsers. Thanks again to Barry Dahl for the inspiration to do this.

An Alternative Approach To Crap Detection

I haven’t written a lot about what Howard Rheingold calls critical consumption of information on the web, mostly because I’m paid to teach about it, and somehow I feel that giving things away when I’m paid to do that is a violation of “trade secrets” or something. After finishing a video for Alec Couros, though, I feel that I should put this part out there because it’s more of an essential skill than a key feature of any one course I teach. In fact, it’s a key feature of every course I teach… but that’s neither here nor there.

So I mentioned the five questions approach rather than crap detection or the CRAP test. The five questions (who, what, where, when, why) come from the journalistic inquisitiveness we all intuitively apply when we smell something fishy. I’ve brought those questions to bear on the websites we view daily. Here are some questions you might want to ask:

Who?

Who is writing this stuff? Does the author identify themselves? If they don’t, can you find an author through tools like whois.net? Generally, if a person signs a name to an article or column, it is more reliable. If you find a name and an e-mail address, do you get a response if you send them something? Can you find a phone number in an online white pages? On top of those questions, ask yourself who this person is and what credentials they have. Is that important? Sure, being a lawyer in California is an impressive credential, but if you only practice California law, does that make you credible on issues in other jurisdictions? In some cases, the law might be the same everywhere; in others, maybe not. Who links to this author, or writes about them? In many cases, links come from like-minded people, so if it is difficult to determine the author’s intentions, who links to them is a good barometer of their viewpoint.
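For the mechanically inclined, here’s a minimal sketch of that kind of lookup (my own illustration, not something from the course): it shells out to the standard whois command-line tool, which it assumes is installed, and scans the record for registrant and contact lines. The domain is just a placeholder.

```python
# A minimal sketch: pull a domain's registration record to see who is behind a
# site when no author is named. Assumes the `whois` command-line tool is
# installed; example.com is a placeholder domain.
import subprocess

def whois_lookup(domain):
    """Return the raw whois record for a domain as text."""
    result = subprocess.run(["whois", domain], capture_output=True, text=True, check=False)
    return result.stdout

if __name__ == "__main__":
    record = whois_lookup("example.com")
    # Registrant, organization, and e-mail lines are the fields worth scanning.
    for line in record.splitlines():
        if any(key in line.lower() for key in ("registrant", "org", "email")):
            print(line.strip())
```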

What?

What are they saying? Does it ring true with your internal sensibilities? Does it sound reasonable? There are many cases where a reasonable-sounding statement is false, precisely because it’s close to the truth but not quite there. Check with Snopes to see if it’s an urban legend. We also know how the masses can amplify this effect, where we all come to a consensus on what truth is – if we say something often enough, loudly enough, and repeat it enough, it becomes truth. “What” attempts to combat this with evidence and fact checking. See what other people say about the subject. Do they agree, or is it cut-and-paste agreement? Is this person trying to sell you something? What is their motivation (which is quite often tied to who they are)?

When?

When was this written? If there’s no date, you can use the last time Google accessed the site, available on the results page through the Cached link. That gives you an idea of at least the last time Google saw it. When is tricky, because its importance depends on the context of the subject matter. Does it matter whether information about the War of 1812 was written in 1995 or 2005? Only if something has been discovered in that time period. You can use the Wayback Machine to see if the website or webpage has changed in that period. Now ask whether it matters if information on hip replacement surgery was written in 1995 or 2009.
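The Wayback Machine also has a small public availability API, and here’s a minimal sketch (my own illustration, not part of the original post) that asks for the archived snapshot closest to a given date; the URL and timestamp are placeholders.

```python
# A hedged sketch of checking the Internet Archive's Wayback Machine for the
# closest archived snapshot of a page around a given date, using the public
# availability endpoint at archive.org.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp):
    """Return the snapshot URL closest to timestamp (YYYYMMDD), or None."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

if __name__ == "__main__":
    # Compare how a page looked a decade ago with how it looks today to judge
    # whether "when" matters for the subject at hand.
    print(closest_snapshot("example.com", "19990101"))
```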

Where?

Where was the article written? Does it matter? Where really only contributes to the case you’re building; location rarely destroys credibility on its own. It might play a part if liability and copyright are at issue. An article written in a country without strong libel laws might be a little less credible. You can determine where the site is hosted through the whois registration, and a brief glance at the domain name might tell you something. Of course, there are some obscure domain extensions (.tk anyone?), so refer to this handy list of domain extensions. Is the site hosted for free? If someone is willing to pay for hosting, chances are they aren’t just kidding around. That said, hosting is very inexpensive now, certainly not as imposing a cost as it once was, so this on its own is not a magic bullet to kill the claims of a website. Websites with the extensions .org, .edu and .gov (depending on your view of the government) tend to be good sources of information.
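Here’s one more hedged sketch (again my own illustration, with a placeholder domain): it pulls apart where a site appears to live – its domain extension, the IP address it resolves to, and the reverse DNS name, which often hints at the hosting company.

```python
# A small illustrative sketch: gather the "where" clues for a site as one more
# piece of the case you're building. Uses only the standard library.
import socket
from urllib.parse import urlparse

def where_is(url):
    host = urlparse(url).hostname or url   # accept bare domains too
    tld = host.rsplit(".", 1)[-1]          # crude check: country-code vs generic extension
    ip = socket.gethostbyname(host)        # where the site resolves to
    try:
        reverse_name = socket.gethostbyaddr(ip)[0]  # often hints at the hosting company
    except socket.herror:
        reverse_name = None
    return {"host": host, "tld": tld, "ip": ip, "reverse_dns": reverse_name}

if __name__ == "__main__":
    print(where_is("http://example.com"))
```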

Why?

After you’ve put together all the other pieces, you now know a little more about the author. Of course, you can never know why someone writes something, but you can examine what they have to gain from writing the piece. Are they trying to sell something? Is it a blog post or review about a product by an employee of the same company? These sorts of testimonials are hard to decipher because they can be very well written.

If you take the five questions into consideration whenever you need to confirm information on the Internet, you should be able to build a case that justifies the website’s inclusion in your work.

Rubrics for Discussions and Wiki-Based Work

I’ve been refreshing a course I created in 2004 about Searching the Internet. I’ve replaced the four antiquated handout assignments with a wiki – each learner will contribute links, text, audio, images and video to it. I decided I can’t let them loose to work in it without giving them some expectations, so I spent a couple of hours drafting this wiki rubric. I also cut 10% out of the mark and added it to a discussion component to drive people to talk on the wiki’s talk pages and in the Discussion tool in the LMS. Here’s that discussion rubric.

The course generally draws people who are older, may have a job during the day, and are aspiring web designers – I would say most of them are carving out a second career. The course is taught at a distance, but I intend to tweak the rubric for a continuing education delivery (maybe take out the discussion element, or reduce it, for a face-to-face class).

If you see any glaring errors, or something I didn’t consider in my language, I’d appreciate feedback in the comments or on Twitter (@dietsociety). Thanks.

The Value of Well Designed Edu-Spaces

I began working on reviewing and revamping a history module for the Searching The Internet course today, as well as providing some basic training on Audacity for a faculty member. It was humbling, because often I forget how much I know and how much I can help.

While I was gathering sites, notes and other ephemera for the history piece, I asked myself why I was including it. Who cares about the history of the internet? Is it important? Maybe. I think it’s important, but declaring that for everyone is a pretty dictatorial thing to do. How is it important? Well, history dictates the future in ways we can’t always predict (although my RRSP (401k for you Americans) wishes someone had been more on the ball in spotting the economic conditions before the Great Depression and this recession). Someone out there is going to want to know this stuff. It informs us better. Doesn’t it?

While I’m gathering content, I’m also mulling over a couple of other things. Design of LMS learning spaces, for one – I’m not going to have the same look to all my courses; I’m putting my hypotheses into action. My thinking is that with so many well-designed websites out there, we need better-designed education spaces. Not to show off, or embarrass others, but to make a statement. We need better designed, better looking spaces. I’ve seen how cleverly designed websites can distort truths (or my truths). Education should be fighting back with our view. We need better looking materials. We have the means now – it’s easier and quicker than ever to put together something that looks good. Why isn’t this policy at institutions?

Isn’t better design, and better eye candy, a way to combat the Huxleyan Brave New World/Postman Amusing Ourselves To Death scenario? Attract people to education, and in the process get them to ask tough questions? Isn’t that the case with Michael Wesch and his students’ work on “The Machine is Us/ing Us” (embedded below)? It’s nearing 10 million views in a year. That’s nothing to sneeze at. If it were a lesser product, would it have had less impact?

I suspect the answer is that no one really thinks about this. Or maybe they don’t think much of it. Much like the connections between ARPANET, the history of the Internet and its sharing of information that has influenced our daily lives. Is that enough of a hook?

Crap Detector Part 2

Continuing with yesterday’s theme, I wanted to add that Howard Rheingold has written a succinct piece on crap detection, which pretty closely mirrors what I’ve said over the last year in my Searching the Internet course. I’ll embed last year’s video lectures from that week’s work.

We work on the same principles. In fact, when I teach this face-to-face, I try to accentuate that you have to think like a detective or private investigator: build a case for or against the website’s information. That has been interesting, because I’ve had cops in my class who said that taking a bunch of disparate pieces and putting them together is a lot of what detective work consists of. I’m glad we’re on the same path.

Built In Crap Detector

Every man should have a built-in automatic crap detector operating inside him. It also should have a manual drill and a crank handle in case the machine breaks down.

Ernest Hemingway, in a 1954 interview with Robert Manning, published in The Atlantic, August 1965

Howard Rheingold had mentioned this quote a couple of times, and it really stuck with me – so much so that I had to look it up, and I’ll be using it in my teaching next fall. I teach a course called “Searching The Internet Effectively” and wanted to overhaul the content, as it was largely designed five years ago, with content refreshes every semester to reflect the fluid nature of the beast. I hadn’t really approached the social side of the web – mainly because I was busy keeping up with changes. There were, and are, elements missing from the course.

I realized last year that I hated the method of delivery, which consisted of me lecturing and the class doing squat until I was done talking. Part of the problem is that they’re in rows in classrooms. I can’t make things much better; the politics of furniture, or rather the politics of furniture in a computer lab, restrict me.

The content, while adequate for the majority of students, is not as engaging as I’d like. I never seemed to get to the stuff I really enjoyed, which was talking about discerning bullshit from the good stuff on the web. So I’ve spent the last four months, off and on, collecting data and sites that will help inform learners. I think making the content a “treasure hunt” of sorts can help with student engagement, and I’ll still “lecture,” but more as a way to ensure that learners who have no prior experience with web searching (which strikes me as odd) still participate and can contribute.

I’m planning on replacing the crappy assignments with wiki work. If people outside the class contribute, great; if not, I think it’ll still be worthwhile. I’ll still have a final exam, as that’s a mandatory item, and I’ll have one assignment that is a culmination of all the skills I hope students acquire. Remember, this is only a six-week course, so it’s not as lengthy as a “normal” course.

Which brings me to the point: students are going to have a hard time with this if it isn’t done well. Debunking authority, whether it be subject authority or any other kind, unsettles people and screws with their expectations. But building this sort of crap detector into someone’s life is a critical skill. It’s amazing how many people are very trusting of content they get on the web, and a bit frightening when you extrapolate that to how it can affect them in real life. Unscrupulous hucksters certainly have the ability to bilk someone of money; hopefully skepticism prevails for the people in my class.

I really appreciated this post, which begins to illuminate the new construction of authority in a distributed environment. Objectivity, trust, authority… all related and tied up. Hopefully none of this sets off any crap detectors.

Google Jumping the Shark?

I read this article about Google courtesy of the Wall Street Journal – frankly, I’m surprised by this action. Google wants to leverage its position as the premier search engine by getting Internet Service Providers to give it priority – a fast lane – on the information superhighway (how’s that for an antiquated phrase?). Clearly, this is not the most neutral position a company can take. If this practice were to become commonplace, it’s easy to see how it could lead to a multi-tiered system, where voices that have been empowered through the internet (most recently through social networking sites like YouTube) are disadvantaged once more, as the individual loses their equal footing to the media giants.