Google Is Not My Curator

I don’t want anyone else to curate search results. If I want to curate my own experience, I want to do it on my desktop, on my terms, with my data and my experience. Not with their data, with their choices, with their algorithms and frankly their shit. I can curate my own search results, thank you very much. I guess I’ve gone from Google fanboy (basing an entire course around it) to disgruntled search addict. You know you’re a bad company (yes, you Google) when Microsoft and Yahoo look good in comparison. I wonder if DuckDuckGo is any better? It sure as hell couldn’t be any worse.

Here’s a link to a PDF of the FTC document filed by the Center for Digital Democracy.

Image Matching

I’ve been working on videos the past two weeks, spending a lot of time in a bubble examining clips in a sort of detail that is probably excruciating for everyone else. I like this sort of in-depth analysis of what I look at. It also brings up how flawed we are in how we process images online. We generally assign a bunch of words, meanings and descriptors to an image instead of trying to mathematically or logically sort images. Google gets it right by being able to match images, but their similar-image algorithm would be much more useful if we could upload an image and say “match that!” It’s further complicated when you factor in the 60 images a second that you get with video. It would be great to be able to upload a clip and get back a bunch of clips: YouTube videos that match not only the actions but the content as well. Want to see Devo’s live performance of Whip It? Just whip up a video of yourself playing Whip It on Rock Band 3, upload it and get matches not only for the band, but for others also playing Devo on Rock Band.
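Sorting images mathematically instead of by words is often done with perceptual hashing. Here’s a minimal sketch of an “average hash,” assuming each image has already been reduced to an 8x8 grayscale grid (a real implementation would do that resize with a library like Pillow; the grids below are synthetic, not Google’s actual algorithm):

```python
def average_hash(pixels):
    """Return a 64-bit fingerprint: 1 where a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits; a small distance suggests similar images."""
    return bin(a ^ b).count("1")

# Two nearly identical gradients, and one inverted image.
grid = [[16 * (r + c) for c in range(8)] for r in range(8)]
similar = [[min(255, p + 4) for p in row] for row in grid]
inverted = [[255 - p for p in row] for row in grid]

d_similar = hamming_distance(average_hash(grid), average_hash(similar))
d_inverted = hamming_distance(average_hash(grid), average_hash(inverted))
print(d_similar, d_inverted)  # the similar pair scores far lower
```

The appeal of this kind of fingerprint is that “match that!” becomes a nearest-neighbour lookup over bit strings, with no tags or descriptions involved.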

Making Sense of Old Media in New Ways

I’ve been reading about sensemaking lately, mostly about dealing with information abundance, filter failure and information overload. Most of the articles deal with text and some other form of media, be it video or audio. We have a few tools to help with text-based websites, like Google and Bing, as well as trust-based networks like the ones we build with blogs and Twitter. But we haven’t really dealt with information abundance in video. Recall back less than a decade ago, prior to Google, when AltaVista was the best search engine and ranking was based on a series of on-page items. There was a wealth of deception based on the ability of unscrupulous webmasters to spam keywords unrelated to the content on the page to boost ranking. Google changed that by factoring in links to a webpage, with the text of each link serving as an annotation of the page being linked to. Which leads us to the number one form of user-created content on the web… video.

Specifically, how are we going to make sense of these thousands of hours of video, and assess them for quality? Adaptive Path seems to think that video needs a Flickr-like revolution, an interpretive layer on top of the mass of tools we have to share videos. The problem I see is the same as the problem with AltaVista: keywords and tags can be gamed and made irrelevant. So we’re relying on text to describe video. Is there a better way? Maybe some way of visually linking instead of describing a video? Maybe a selection of key frames from a video to describe its profile?
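The key-frame idea could work something like this sketch: treat each frame as a grayscale grid and keep a frame whenever it differs enough from the last frame kept. The tiny synthetic frames and the threshold value here are illustrative assumptions, not any production algorithm:

```python
def frame_diff(a, b):
    """Total absolute pixel difference between two grayscale frames."""
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def key_frames(frames, threshold):
    """Return indices of frames that differ enough from the last key frame."""
    keys = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        if frame_diff(frames[keys[-1]], frames[i]) > threshold:
            keys.append(i)
    return keys

# A synthetic six-frame "video": a static dark scene, then a cut to a bright one.
dark = [[10] * 4 for _ in range(4)]
bright = [[200] * 4 for _ in range(4)]
frames = [dark, dark, dark, bright, bright, bright]

print(key_frames(frames, threshold=500))  # prints [0, 3]
```

A handful of frames picked this way could serve as the visual profile of a video, something a viewer (or a matching algorithm) can scan without trusting anyone’s tags.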

Searching and Learning

We have already changed how we get information: instead of reading books or taking a course, a lot of us just use Google or other search engines to grab the information we need. Unfortunately, there’s a contextual issue, where grabbing information off the Internet doesn’t necessarily provide the linear context for that information that may be important. Sure, with books you can use the index, find the pages the information is on and scan for just what you need, but inevitably you end up reading at least a few paragraphs before and after and getting some context. Searching the Internet is different: we have Google as a situational context provider (even if it is false context, like Google lending its authority to search results). I’ve been thinking about how this ties into education, specifically higher education, and I think the way we informally learn information through Google will trickle up to higher education. In ways, we already see this in how students use the Internet for research.

I’m not the only one thinking about this either; Futurelab released a poll a couple of days ago asking (primarily K-8 teachers) which search engine they used. I answered the poll even though I’m not in that demographic; I figured the more data the better. I think this poll indicates that others are beginning to use search engines in their teaching, which further moves the teacher away from being the sage on the stage and towards the guide on the side (with thanks to my friend Otto, who had a habit of saying this at least a couple of times a semester and burned the phrase into my head).

Also, I was turned on to the book Search Patterns and the accompanying Search Patterns website, both of which examine the patterns of how people search, which has tremendous implications for how people learn using the Internet.

Howard Rheingold’s Crap Detection 101

This may be one of the most important skills determining success or failure in the future: not just figuring out who is telling us bullshit, but what their motivations might be in telling us bullshit. Otherwise, we’ll end up sending personal information to Nigeria to help out princes, which is this generation’s prime real estate in Florida and bridge in Brooklyn.

Reflections on My Use of Wikis in the Classroom

Wikipedia has fundamentally and finally altered epistemology itself—our commonly held ideas about knowledge. For the academy at large, the significance of Wikipedia is roughly equivalent to that which the Heisenberg uncertainty principle had in the sciences in the 1920s—stating what is not possible rather than what is. It is no longer possible to plan, tax, and budget for universities as if their model of knowledge creation is the only epistemological path. No matter how improbable it might seem that a Web page that anyone can edit would lead to valuable knowledge, Wikipedia makes clear that there is now another model for knowledge creation. And it also recasts the comments of the diplomatic chancellor in a supremely ironic light: here is the leader of a massive state system for knowledge creation stating that “when every one is responsible no one is responsible,” while he, and certainly everyone in that audience, has probably relied upon a knowledge acquisition path—from Google to Wikipedia—for which everyone is responsible and no one is responsible at once. — Robert E. Cummings, Wiki Writing: Collaborative Learning in the College Classroom (online book) (link to quote)

I’ve written previously about my wiki problems and assessing the wiki work, but never really assessed the impact of my decision to turn the Searching the Internet course into a guided research course. Now is a good time to do this, as the second iteration of the course is done and I’m handing it off to someone else. For the most part, people embraced the technology once they understood the purpose of using the wiki, which was hard to explain to some people. It was important to understand that user-created content needs some critical consumption before you trust it. It constantly amazes me that many people don’t think to question broadcast news, newspapers or media in general, and getting people to question media really was the main goal I hoped to get across. I think in some ways I failed; more on that later.

One major hurdle that I’m still not sure how to get around (through?) is student expectations of what should go on in the classroom. Using the wiki for everything was conceptually difficult for those who attended lectures in the face-to-face offering. They wanted to discuss the issues in class, and I didn’t persuade them otherwise. It’s the thing I love about classrooms: the discussions therein. I should’ve made a better attempt at summarizing these in-class discussions in the wiki, so that there would be a digital record of what was discussed, what the decisions were and where to go post-discussion. Of course, having the discussions in class robbed them of a crucial piece of the collaborative work: the discussions on talk pages. That discussion serves two purposes. The first is the revelation that the general public has a democratic say in the content published. The second is the realization that if they’re editing the content, other non-experts are editing content too, which means you have to take everything written with a grain of salt (sometimes a pound).

Another classroom expectation that I had trouble with was that a small minority of students were just not comfortable doing their own research. They wanted specific instructions from me as to what to do. I was clear that this course would be unlike other courses they may have taken. I didn’t want the authority of the teacher (and considering the subject matter, let’s face it, people have to get over this authority complex they have; it’s decentralized just like the Internet) and was looking for ways to bust ye olde teacher-as-authority. I tried telling people that I was not an expert and that my role was as a guide through the material laid out before them. Yes, I wrote it and yes, I researched it. Yes, it could be wrong too. I tried telling people that it wasn’t a course, that they weren’t students, and that they weren’t doing assignments, they were doing exercises. Of course, the exam at the end was real. I tried telling students that I only know this stuff because I looked it up on the Internet. That didn’t work out so well, and I never repeated that one. Nothing will devalue a course faster than telling the truth. In the end I didn’t manage to break this power structure, and it’s one of the reasons I won’t be teaching after this semester.

I disagree with Cummings’s assertion that everyone and no one is responsible for the content. It’s neither: it’s you who is responsible for assessing the information you consume. I think that’s where I’ve failed, in not getting this point through, that every piece of information you consume has a bias, a history and a reason. Nobody publishes a story in a newspaper or on a blog without a reason. Some reasons are transparent, some are difficult to read. While I’ve given the students of Searching the Internet over the last seven years the tools and some experience in using them, I’m not sure anyone stayed with it.

With that said, it wasn’t an all-around failure. I did become a better teacher, more confident in the skills I do have (and able to improve the ones I’m lacking). The content on the wiki was well crafted and well thought out, and it showed that when students engaged with the subject, they could become subject matter experts on their own.

What If….

In the spirit of the old Marvel comics series What If…, I bring you:

What if Bloom’s Taxonomy is right?

Well, first a brief primer on Bloom’s. Bloom’s Taxonomy is a tiered structure, like a staircase, that illustrates increasingly complex or difficult cognitive tasks, particularly in an educational setting. At the bottom is Knowledge (knowing the facts), scaling up to Evaluation (the ability to weigh several arguments, select the best option and defend that selection). In between are steps that each build on the previous one. There are criticisms of Bloom’s, including many arguments about how learning is not sequential and about the semantic framing of Bloom’s steps.

What if Bloom had it wrong for learning but right for evolution? On pages 6-18 of the new Pew Report on the Future of the Internet (PDF), people responded to whether or not Google is making us smarter. I think if you apply this transformation to Bloom’s, you see that we’re skipping from Knowledge and Comprehension on Bloom’s scale to something higher, likely Application. The danger, of course, is if we ignore the critical consumption part of the equation. We no longer have to evaluate the information we receive, only the source of that information. If the source is trustworthy, then it is likely that the information is trustworthy. If we are unable to make this evaluation in a split second, then we are destined for Idiocracy.

Who’s Watching The Wikimen, Or Wikipeople

I just found, in a random search (for editing Wikipedia), a Wired article about an effort to see who’s editing the world’s largest encyclopedia. I have some privacy reservations about this sort of third-party monitoring, especially if corporations are turning the screws on people writing about their excesses. I guess, though, if everyone can do it, everyone should. Of course, corporations are the sort of bodies that have people who can spare the time for this sort of activity, which could lead to that sort of misuse. Now, I’m sure that’s not happening, because corporations never behave badly. Right?

Where Journalism Can Go From Here

Happy new year!

There’s been a lot of talk about the death of the newspaper over the last year. In fact, the postings and articles range from the dire to the hopeful to the almost dismissive (midway down the page). The main culprit is, of course, “the Internet”. Really, this economic downturn has been a chance for further consolidation of corporate assets. It’s not the Internet that has killed these small papers, it’s the (profit) margins. Here’s an idea of where journalism (and newspapers) can go from here.

First thing, for full disclosure: I’m not educated in journalism, although I use a lot of its tenets in my Searching The Internet Effectively course when speaking about verifying information and trust. Trust is a very fickle friend that only comes with time, and those who trust implicitly are likely to be burned somewhere along the way. Hopefully those experiences come early enough, and without any major damage, that the person gains experience with those situations. As an educator, and a human being, truth is very important to me. Journalism should be the attempt to discover truth, although I suspect that journalism (…not truth) currently resides in the realm of entertainment or, at a minimum, distraction.

So with Google working on better search results for you, personally, and a world of apps for the iPhone that focus on geolocation, you’d think local news would be important. Local news is important. So much so, it saved the Birmingham Eccentric from the axe. Yes, the paper was converted to a weekly, but newspapers bringing breaking news died in the ’80s with the refocusing on TV news. Certainly the rise of cable news, with CNN Headline News as a 24-hour channel for the headlines, helped nail the coffin shut for breaking news in newspapers. News from your newspaper should contain stories tailored to the location. Yes, I know this is taught to journalism students everywhere, but it seems to be ignored. I know that corporate media recycle their wire stories for several different communities, and I’m sure it’s a fairly commonplace activity. Why?

Newspapers aren’t breaking immediate news anymore, so why focus on what isn’t their strength?

Newspapers should be bringing more in-depth news, the “why” in the stories. Part of the “why” should be the reason an article is appearing in the local paper. In “Made to Stick“, the book by Chip and Dan Heath, they talk about relevance and how important it is to transmit the relevance of information to an audience. One of their examples of relevance to an audience was a local paper that focused almost exclusively on local news. If this simple idea of making things relevant to people works, why aren’t people using it? This focus on the “why” in a story has become part of what’s called slow news. Much like the local food and slow food movements, slow news can bring a better and deeper understanding of ideas relevant to people in a community (you can get your Jane Jacobs texts out now to define community). By pausing to reflect on an incident, newspapers can provide in-depth clarification of, and corrections to, the initial news “outbreak” via cable news and online sources that are, ahem, questionable.

You can even have spicy taglines, like “News you can really trust”, and prove it. From a business sense, people are looking for trust and honesty, things we are sorely lacking from our public institutions. Perhaps a refocused and brave cadre of journalists can bring that to society. Plus, it’ll save the paper they used to spend printing corrections (that no one read anyway).

Show Me

Just finished reading It’s A Visual Thing: Audio-Visual Technology in Education, which is about the unexpectedly sophisticated visual literacy of young children and how this can be leveraged for better learning environments. One quote really stuck with me:

To his surprise, he found that their search engine of choice was not Google, but YouTube, because it provided them with a clear, visual set of results rather than a series of short paragraphs.

I guess this explains why Google got into the video sharing business in the first place… A counterpoint might be that students choose YouTube rather than Google as a simple usability matter: there are fewer clicks to get to the information, and on a mobile platform such as the iPod Touch, which was used in the assignment, clicks are a painful process.

Google understands this, and Google Goggles is an attempt to begin to organize visual data by likeness, acting somewhat like Wolfram Alpha but in a much more visual way. Expanded, Goggles seems to be a computational way to act like the human brain: see something and grab some information about it.

What I get from this is that we can’t bury educational content deep if users are going to access it from mobile devices. We shouldn’t ask that they click two or more times, and that in and of itself might be damning evidence against an LMS, which can bury content deep under a mass of links.