I’ve been reading about sensemaking lately, mostly about dealing with information abundance, filter failure, and information overload. Most of the articles deal with text or some other form of media, be it video or audio. We have a few tools to help with text-based websites, like Google and Bing, as well as trust-based networks like the ones we build with blogs and Twitter. But we haven’t really dealt with information abundance in video. Recall less than a decade ago, prior to Google, when AltaVista was the best search engine and ranking was based on a series of on-page factors. There was a wealth of deception: unscrupulous webmasters would spam keywords unrelated to the content of a page to boost its ranking. Google changed that by factoring in links to a page, with the text of each link serving as an annotation of the page being linked to. Which leads us to the number one form of user-created content on the web: video.
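The shift described above, from on-page keywords to link-based authority, can be sketched in a few lines. This is a minimal, hypothetical PageRank-style power iteration, not Google's actual algorithm: each page passes a share of its score along its outgoing links, so a page ranks well only when other reasonably-ranked pages point at it, which is much harder to spam than stuffing keywords into your own page.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue                         # dangling page: sketch ignores its mass
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share             # each link passes a share of score
        rank = new
    return rank

# Hypothetical three-page web: two pages link to "c", so it ranks highest.
graph = {"a": ["c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # prints "c"
```

Note that "b", which nobody links to, ends up near the bottom no matter what keywords it carries; that is the sense in which links act as third-party annotations.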
Specifically, how are we going to make sense of these thousands of hours of video, and assess them for quality? Adaptive Path seem to think that video needs a Flickr-like revolution: an interpretive layer on top of the mass of tools we already have for sharing videos. The problem I see is the same as the problem with AltaVista: keywords and tags can be gamed and made irrelevant. So we’re relying on text to describe video. Is there a better way? Maybe some way of linking visually instead of describing a video in words? Maybe a selection of key frames to serve as a video’s visual profile?
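The key-frame idea can be sketched simply. This is a toy illustration under an assumed approach (threshold on frame-to-frame difference), with frames modeled as flat lists of grayscale pixel values rather than decoded video: keep a frame whenever it differs enough from the last frame kept, so a handful of frames ends up summarizing the scene changes.

```python
def frame_distance(a, b):
    """Mean absolute pixel difference between two grayscale frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def key_frames(frames, threshold=30):
    """Return the indices of frames that differ from the last kept key frame."""
    if not frames:
        return []
    keep = [0]                      # always keep the opening frame
    for i in range(1, len(frames)):
        if frame_distance(frames[i], frames[keep[-1]]) > threshold:
            keep.append(i)          # a scene change: keep this frame too
    return keep

# Synthetic "video": three dark frames, then a hard cut to three bright ones.
video = [[10] * 4] * 3 + [[200] * 4] * 3
print(key_frames(video))  # prints [0, 3] — the opening frame plus the cut
```

A real system would decode actual frames (e.g. with a video library) and use a more robust distance, such as a color-histogram comparison, but the shape of the idea is the same: the selected frames become the visual profile that a viewer, or a search index, can skim instead of text tags.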