Making Sense of Old Media in New Ways

I’ve been reading about sensemaking lately, mostly about dealing with information abundance, filter failure and information overload. Most of the articles deal with text, with only some touching on other forms of media like video or audio. We have a few tools to help with text-based websites, like Google and Bing, as well as trust-based networks like the ones we build with blogs and Twitter. But we haven’t really dealt with information abundance in video. Think back a decade or so, before Google took over, when AltaVista was the best search engine and ranking was based on a series of on-page factors. There was a wealth of deception, because unscrupulous webmasters could spam keywords unrelated to the content of a page to boost its ranking. Google changed that by factoring in the links pointing to a page, with the text of each link serving as an annotation of the page being linked to. Which leads us to the number one form of user-created content on the web: video.
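
Just to make that idea concrete, here’s a toy sketch (nowhere near Google’s real algorithm; the pages, links and query are made up) of what ranking by inbound links and anchor text looks like:

```python
# Toy sketch: rank pages by inbound links, treating the anchor text of each
# link as an annotation of the target page. Illustrative data only.
from collections import defaultdict

links = [
    # (source_page, target_page, anchor_text)
    ("blogA.com", "recipes.com", "great bread recipes"),
    ("blogB.com", "recipes.com", "bread recipes I trust"),
    ("blogC.com", "spamsite.com", "free stuff"),
]

def rank(query, links):
    scores = defaultdict(float)
    terms = set(query.lower().split())
    for source, target, anchor in links:
        scores[target] += 1.0  # every inbound link counts as a small vote
        # anchor text matching the query acts as an annotation of the target
        scores[target] += len(terms & set(anchor.lower().split()))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank("bread recipes", links))
# recipes.com outranks spamsite.com even if spamsite.com stuffs its own
# keywords, because the signal comes from other pages, not the page itself.
```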

Specifically, how are we going to make sense of these thousands of hours of video, and assess them for quality? Adaptive Path seems to think that video needs a Flickr-like revolution, an interpretive layer on top of the mass of tools we already have for sharing video. The problem I see is the same one AltaVista had: keywords and tags can be gamed and made irrelevant. So we’re relying on text to describe video. Is there a better way? Maybe some way of linking visually instead of describing a video in words? Maybe a selection of key frames that serve as a visual profile of the video?
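
As a rough illustration of that last idea, here’s a minimal sketch of pulling key frames out of a video whenever a frame looks different enough from the last one kept. It assumes OpenCV is installed (pip install opencv-python); the threshold and file names are arbitrary choices, not a recommendation.

```python
import cv2

def key_frames(path, threshold=30.0, max_frames=10):
    """Keep a frame whenever it differs visibly from the last kept frame."""
    cap = cv2.VideoCapture(path)
    kept, last = [], None
    while len(kept) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # keep the first frame, and any frame whose mean pixel difference
        # from the last kept frame exceeds the threshold
        if last is None or cv2.absdiff(gray, last).mean() > threshold:
            kept.append(frame)
            last = gray
    cap.release()
    return kept

frames = key_frames("some_video.mp4")
for i, f in enumerate(frames):
    cv2.imwrite(f"profile_{i}.jpg", f)  # these stills become the video's visual "profile"
```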

Reflections on ETC 2010

So here are a few ideas I took away from the ETC 2010 conference. Digital literacies aren’t even on the roadmap for a lot of people at this conference, which is a shame but also an opportunity. Any time I suggested in conversation that there needs to be a digital literacies course for students (and faculty as well), one that looks at evaluating information online and at developing skills for creating media in this new paradigm, people thought it was a good idea but weren’t sure how to proceed beyond that.

Adobe is seriously making a play to solidify their position in education, and in a smart way: from the student’s perspective. They’ve given away their software to students at several institutions, presumably as a loss-leader, pitching it as an enrollment perk to attract students. The other thing is that Adobe is really good at analyzing a market and identifying gaps, which their new ePortfolio tool somewhat addresses. ePortfolio is part of the Acrobat product; it lets you grab a folder of stuff (really, they claim any file will work), import it, and export the whole thing as a PDF. So your SWF? Plays inside the PDF. Your 3D drawing from AutoCAD? Imports and acts as a 3D object in the PDF. My first thought was that this was a way around the Flash issue on the iPhone, but after asking a few questions it seemed that wasn’t the goal. It’s a neat side effect though, if it works.

There was a lot of talk about time management, filtering, and how to manage information and information overload (or filter failure, as Will Richardson put it). Both keynotes mentioned it, but neither talked about tools to help you aggregate information in any depth. A missed opportunity in my presentation would have been to pick up that thread and run with that angle. I did see a presentation that did the opposite, covering search engines that aren’t Google and video sites that aren’t YouTube. I’m not sure people want more sources of information; that’s why they stay with Google and YouTube, the ones they trust. It’s going to be very, very hard to compete with those properties because of how entrenched those two sites are.

Something I overheard: “we’ve been told for years that Wikipedia is a bad source!” That statement seemed a bit odd, seeing as we’ve seen a study saying that half of the people who edit Wikipedia have a Master’s degree or better. We’ve also seen corporate entities sanitize their own pages. I think Wikipedia is fine as a starting point, but the really interesting discussion to have is about what it means when everyone is both a consumer and a producer, and, even more importantly, what happens to our notion of quality in this new paradigm.

Always-On/Off/On?

As we move to a pervasive, constantly connected state (and isn’t mobile just another word for everywhere?), what does this mean for us as a whole? We’re already struggling for work-life balance, and the stresses are showing at the margins. People who are overwhelmed and unable to keep up simply give up. These people are the new impoverished: impoverished in the sense that they can’t control, manage, and then articulate themselves in an information-rich environment. Will this mean a backlash? What will people do?

One possibility is that people will create areas of their lives that are unavailable to networks: in essence a “safe room”, something along the lines of a sensory deprivation tank. I can see futuristic home builders already working on this, a bedroom that no wireless or cellphone network can reach. Where else would you want this? Anywhere your privacy might be important. We’ve seen what two people in a long-distance relationship will do to show their love (lust), and then later embarrass each other in a revenge plot. Emotions can get in the way. I can see the Japanese inventing private hotels, much like the love hotels that already serve young couples who have no privacy at home.

As the Internet of Things evolves, we’ll need these sorts of strategies to allow, say, our fridge, stove and phone to talk, but not our fridge and furnace. And this opens the door to hackers having a real effect on human lives. If we trust devices to tell us we’re out of milk, does that mean a hacker could get us to buy two dozen bags of milk by spoofing a message from the fridge? It brings a whole new meaning to “your fridge is running… you’d better go catch it”.
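
For what it’s worth, the spoofing worry has a fairly standard answer: devices sign their messages with a key they share, and the receiver ignores anything that doesn’t verify. A toy sketch of that idea (the key, message format and device names are invented for illustration, not from any real appliance protocol):

```python
import hmac, hashlib, json

SHARED_KEY = b"fridge-and-phone-only"  # provisioned once, never sent over the network

def sign(message: dict) -> dict:
    """The fridge wraps each message with an HMAC over its contents."""
    body = json.dumps(message, sort_keys=True).encode()
    return {"body": message,
            "mac": hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()}

def verify(envelope: dict) -> bool:
    """The phone recomputes the HMAC and drops anything that doesn't match."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["mac"])

genuine = sign({"device": "fridge", "item": "milk", "needed": 1})
spoofed = {"body": {"device": "fridge", "item": "milk", "needed": 24}, "mac": "00" * 32}

print(verify(genuine))  # True: the phone acts on this
print(verify(spoofed))  # False: two dozen bags of milk averted
```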

This post was inspired by: