Alright, maybe my last suggestion on podcasting went down like a lead balloon with Dave Winer, but I’m going to give another idea a shot. I’ve been ruminating about buying a Tivo, if only to stem the amount of time I spend in front of the TV. But the more I think about it, the more I realize that a lot of the most interesting stuff isn’t on TV. It’s on the net.

I know I’m not alone in thinking the combination of RSS and Tivo (or a Windows Media Center-equipped PC, for that matter) would be a desirable thing to have. With the proliferation of cheap and easy-to-use digital video cameras, video production software, 3D animation tools, and Flash animation tools, I’d wager that the majority of content in the world is currently produced by independent artists. And they just keep getting better, partially because they are unencumbered by the traditional economics of distribution; the artists can rev fast, get good fast, and build audiences fast – all the things traditional broadcast video media can’t do. All that’s required now is a simple medium to enable Jill and Joe Couch Potato to access it easily.

I don’t know about you, but there are numerous online Flash cartoons that I’d love to follow regularly (StrongBad and Red vs. Blue, to name two). They’re not only free, they’re high quality (never mind what Craig Palmer might say). But it’s a pain to check back regularly for new releases, and I’d like to watch them on a TV, not a laptop. On the other side of the equation, the bandwidth demands of serving video are likely a disincentive that prevents artists from sharing a lot of content – adding BitTorrent capabilities into the mix would enable the audience for an artist’s work to contribute value by partially shouldering the bandwidth load.
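The syndication half of this already exists in podcasting: RSS 2.0’s `enclosure` element can point at any media file, video included. As a minimal sketch (the feed contents below are made up for illustration), a subscriber client would just need to scan a feed for video enclosures and hand the URLs to a downloader:

```python
# Sketch: extract video enclosure URLs from an RSS 2.0 feed, the same
# mechanism podcast clients use for audio. The sample feed below is a
# hypothetical example, not a real service.
import xml.etree.ElementTree as ET


def video_enclosures(rss_xml: str) -> list[str]:
    """Return URLs of enclosures whose MIME type marks them as video."""
    root = ET.fromstring(rss_xml)
    urls = []
    for item in root.iter("item"):
        for enc in item.findall("enclosure"):
            if enc.get("type", "").startswith("video/"):
                urls.append(enc.get("url"))
    return urls


sample = """<rss version="2.0"><channel>
  <item><title>Episode 1</title>
    <enclosure url="http://example.com/ep1.mov"
               type="video/quicktime" length="12345"/>
  </item>
  <item><title>Audio only</title>
    <enclosure url="http://example.com/ep1.mp3"
               type="audio/mpeg" length="999"/>
  </item>
</channel></rss>"""

print(video_enclosures(sample))  # -> ['http://example.com/ep1.mov']
```

A real client would poll the feed on a schedule and, as suggested above, could fetch each enclosure over BitTorrent instead of HTTP so the audience shares the bandwidth cost.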

All the elements are there. All it needs is a little software to kick-start the revolution. The same explosion in personal websites that resulted when blogging software and syndication came on the scene could kick-start another revolution in the visual arts.

Of course, once you’ve got this in place, it’s only a hop, skip and a jump from there to a future where people are using the same technology to scarf down the same content and sync it between their home media center and their portable video players. It’s inevitable that video will follow the same path as audio: from a broadcast medium to websites to syndication feeds to personal media devices. Is Video On Demand via Podcasting (vodcasting) an appropriate way to describe this phenomenon?

Book To The Future

Back when I wrote my book, I was surprised at the lack of sophistication in the publishing industry. I had always figured that the desktop publishing revolution would have streamlined the publishing industry – I envisioned elaborate templates and tools that would enable a publisher to easily choke down text and automatically pump out a finished book. Instead, the tools provided by my publisher consisted of a Word template that rendered everything (titles, headings, body text, etc.) as monospaced Courier – all of which was later laid out in QuarkXPress by hand.

Fast-forward to last week at Web 2.0: Brewster Kahle presented the seductive vision of universal access to knowledge that could be achieved by scanning the entirety of the Library of Congress for a pitiful $260 million. This revelation followed the announcement of Google Print – Google’s answer to Amazon.com’s Search Inside the Book feature – which will enable users to find information in books as part of their Google search experience.

While I applaud both Google’s and Brewster’s visions, I sense a gap: Brewster’s proposal will give digital access to books from the past; Google’s service will give (limited) digital access to books from the present. All I can wonder is: who will give digital access to books in the future?

While it is obvious that digitizing the Library of Congress is a manual procedure, it might come as a surprise that Google’s efforts are equally manual. Google generously offers to scan publishers’ content, thereby making it available via the Google Print service while protecting the publishers’ content. Scanning. Just like Amazon.com. By hand. This means that 75 years from the date of a future author’s death, Brewster’s organization will have to scan the author’s books by hand – books that Google will probably already have in digital form.

All of these undertakings smack of massive amounts of physical (i.e. non-digital) labour. So, if Amazon.com and Google are both doing it, why not cut out the middleman? Why not just have the publishers provide the PDFs (or whatever the appropriate digital format is) of their content directly to Google or Amazon.com? Or, better yet, why not have the Library of Congress solicit electronic versions of books directly from publishers and escrow them for the time when they enter the public domain, just as it does for physical copies? Aside from the efforts of the Library of Congress to digitize rare books, I’m not aware of whether or not it does this already – does anyone know?

My fear here is that Google and Amazon.com will amass a digital library of scanned books that will remain gated off from the public even once the books within it have entered the public domain. Do we really want to still be running Project Gutenberg in another hundred or so years? Probably not.

If the Library of Congress isn’t already cooperating with publishers to escrow electronic copies of books, wouldn’t it make sense for Google and Amazon.com to pledge to release the electronic copies to the public, the Library of Congress, or Brewster Kahle’s organization once they enter the public domain? After all, it’s not like they even have to fulfill the pledge for another seventy-five years.

Does anyone know if this is already part of Google/Amazon/Brewster’s plans?