Why The Labels Will Fall

Saturday night, Ashley and I were wandering home along Castro Street in Mountain View, when I heard the strains of a bluegrass jam in full swing. We took a quick detour over to the Dana Street Roasting Cafe to see what was going on.

The place was, in a word, packed. Every chair filled, every patron’s eyes and ears glued to the band of the evening, Houston Jones, as Glenn “Houston” Pomianek ripped through a babbling bluegrass solo. We grabbed a coffee and stuck around for a half dozen songs and then toddled home to watch Saturday Night Live.

And that’s when Ashlee Simpson proved, once and for all, why the mainstream music labels are wholly unsuitable to be the stewards of culture.

Ashlee, as you may or may not know, is the younger sister of Jessica Simpson. She’s been following the trend of other equally untalented starlets expanding their empires into the world of music. She (or at least the genetic pool that gave rise to her) can’t spell, I hear you saying, but surely she can play an instrument? Of course not, says I! Fair enough, you think, I guess she’s a vocalist.

Or so you would hope. Unfortunately, you’d be wrong.

In her first performance of the evening, Ashlee demonstrated that even the task of lip-synching stretched the limits of her meager abilities. And it only got worse: the lip-synch track for the second song was for the wrong song, forcing her to rush off-camera. Of course, the all-seeing eye of the Internet caught it all on video.

There are millions of independent bands out there like Houston Jones, stocked with real musicians, with real talent, and original material that they actually wrote. Previously, these bands were unable to reach an audience without the help of a label. But that’s changing. I expect that over the next ten years, the labels’ grip will weaken, driven in part by dissatisfaction with the quality of product available, but also by the sheer amount of much better (however the listener chooses to define “better”) material available from independent musicians.

The question now is one of discovery. Chris Anderson’s Long Tail article did a good job of demonstrating the value that lives outside the mainstream – all we need now is a way for people to easily find the stuff. Amazon.com recommendations can only do so much, as Amazon.com is ultimately limited by what it can carry. Bloggers will probably carry some of the weight, though I’d feel a little more confident in this turning the tide if there were some way to reward bloggers for directing traffic to artists’ sites, especially when such redirection resulted in a sale. With that kind of assistance, hopefully the money currently imprisoned in mainstream acts would get smeared across a much larger number of people. Nobody should be making millions for crap music; but any artist with talent and even a modest following should be able to make a living.

Of course, rewarding those who recommend products via their blog comes with its own set of issues.

WWPFD?

Yesterday, I received a note from Perry, the former MBAS President for my year at the Sauder School of Business.

It is with great sadness that I inform you that Peter Frost passed away at 3:00 AM on Monday morning. This was Peter’s third fight against cancer – he was apparently comfortable and attended by his family at the end.

Perry

Peter Frost was my favourite professor during the MBA. He projected an aura of calm at all times; he had a genuine gift for drawing out the best in people; and he’s the only professor I know who was cool enough to warrant his own action figure (“With Kung-Fu grip and toxic handling action!!!”)

Peter specialized in handling organizational “toxins” – the hurt and pain that people experience during the toil and grind of their careers. He trained managers to act as “toxin handlers” to prevent, when possible, and consume, when necessary, organizational pain to help people achieve their best results.

The effect of Peter’s classes on me was always dramatic. I always came out of Peter’s classes with a sense that I could do better – it’s as if the very essence of Peter’s being were some form of airborne psychoactive virus. You couldn’t help but come out of his classes infected by Peter. For me, the infection took my black-and-white view of the world (informed by the lens of my engineering training) and smeared the colours together. Unfortunately, this effect was only temporary – eventually my engineering immune system reasserted itself, and flushed the virus until the next class re-infected me with a new mutant strain.

More recently, I’ve found myself trying to remember how Peter would look at a situation, and asking myself to inspect people’s emotions and motivations more closely. Written in the back of my lab book, I have a set of questions that serve to remind me to focus on the core issues: What are we trying to accomplish? Why do you ask? I realize that I forgot to add one question that I found myself asking during the MBA: What would Peter Frost do?

Tomorrow I will add that question to my list.

Entrepreneur Meetup

Yet another interesting entrepreneur meetup in Santa Clara with some local budding and established entrepreneurs. More discussion along the lines of the last meeting I attended in August, primarily focused on how to overcome the hurdle of finding customers.

Attendees

  • Bego Gerber: a regular attendee at the Santa Clara meetup, Bego is an independent business development agent working on a “pro-sumer” (as opposed to consumer) product that enables individuals to buy directly from companies at wholesale prices, as well as receive rebates on the products they purchase. (You’ll need a password for his web site: ebiz)
  • Zhi-Hong Liu: an electronics engineer currently working in the financial services industry.
  • Sunil Tagare: Sunil is CEO of recently-launched Research4, a firm focused on providing information that fills the gap between the blank piece of paper provided by CRM systems (like Salesforce.com) and sales teams. Sunil is a serial entrepreneur with past successes in the telecommunications industry (Flag Technologies and Project Oxygen).
  • Tyson Favaloro: Tyson is a Business Analyst with TechStock, a San Jose venture capital firm focusing on finding and funding seed-stage ventures. He was here pounding the pavement to see what kind of entrepreneurs the meetup attracts.
  • Brendon Wilson: a product manager at PGP who has recently moved to Silicon Valley from Canada to establish himself in the area, build his entrepreneurial skill set, and put the pieces in place that will help him eventually start his first venture.

Topics Discussed

  • The Long Tail: I raised some of the items discussed in the recent Wired article on the opportunity presented by the non-mainstream markets being opened by digital/Internet-based delivery. Discussion of advertising and the “I want it now” society – if the market lies in “the long tail”, and providers of content, services, and products are exploding, how will you overcome the barrier to acquire customers? Sunil recounted his current attempt to use Google Adwords, and just how hard it is to make your product visible. We’re talking non-trivial amounts of money to be made – just look at ring tone sales ($3 billion globally in 2003).
  • Bego proposed an interesting idea: take the Amazon Associates program and augment it. If you brought a customer to Amazon, shouldn’t you not only get a cut of the first purchase, but a smaller cut of every subsequent purchase?
  • Evolution of the “I want it and I want it now” society: things like Scanbuy, a solution to allow users to capture barcodes with their camera phone, will ultimately tie the everyday physical world to the digital marketplace. Meanwhile, more applications will be built on web services to entice users to buy immediately, such as Delicious Monster, an application which allows users to track their CD/DVD/book library and find other stuff they might like (all built on Amazon’s Web Services API).
  • When the drugs aren’t profitable: Bego brought up the interesting problem posed by a new, more effective typhus (or was it typhoid?) vaccine that’s been created, but won’t be profitable for its creators. And hence, won’t ever make it to market, despite all the good it could do in the third world. Are we doomed to only cure problems that are profitable? And what’s to stop drug companies from creating new viruses that they can “cure” – it sounds crazy, but it’s currently happening in the world of spyware. Perhaps it’s time for a “Chemists without Borders”? Or an open source license for the vaccine? Or a DropCash campaign to raise funds to get the vaccine out there? Then again, maybe Bill Gates has some money to put to this cause?


Book To The Future

Back when I wrote my book, I was surprised at the lack of sophistication in the publishing industry. I had always figured that the desktop publishing revolution would have streamlined the publishing industry – I envisioned elaborate templates and tools that would enable a publisher to easily choke down text and automatically pump out a finished book. Instead, the tools provided by my publisher consisted of a Word template that rendered everything (titles, headings, body text, etc.) as monospaced Courier – all of which was later laid out in QuarkXPress by hand.

Rewind to last week at Web 2.0: Brewster Kahle presented the seductive vision of universal access to knowledge that could be achieved by scanning the entirety of the Library of Congress for a pitiful $260 million. This revelation followed the announcement of Google Print, Google’s answer to Amazon.com’s Search Inside the Book feature, which will enable users to find information in books as part of their Google search experience.

While I applaud both Google and Brewster’s vision, I sense a gap: Brewster’s proposal will give digital access to books from the past; Google’s service will give (limited) digital access to books from the present. All I can wonder is: who will give digital access to books in the future?

While it is obvious that digitizing the Library of Congress is a manual procedure, it might come as a surprise that Google’s efforts are equally manual. Google generously offers to scan publishers’ content, thereby making it available via the Google Print service while protecting the publishers’ content. Scanning. Just like Amazon.com. By hand. This means that 75 years after the death of a future author, Brewster’s organization will have to scan the author’s books by hand – books that Google will probably already have in digital form.

All of these undertakings smack of massive amounts of physical (i.e. non-digital) labour. So, if Amazon.com and Google are both doing it, why not cut out the middleman? Why not just have the publishers provide the PDFs (or whatever the appropriate digital format is) of their content directly to Google or Amazon.com? Or, better yet, why not have the Library of Congress solicit electronic versions of books directly from publishers and escrow them for the time when they enter the public domain, just as they do for physical copies? Aside from the efforts of the Library of Congress to digitize rare books, I’m not aware of whether or not they do this already – does anyone know?

My fear here is that Google and Amazon.com will amass a digital library of scanned books that will remain gated off from the public even once the books within it have entered the public domain. Do we really want to still be running Project Gutenberg in another hundred or so years? Probably not.

If the Library of Congress isn’t already cooperating with publishers to escrow electronic copies of books, wouldn’t it make sense for Google and Amazon.com to pledge to release the electronic copies to the public, the Library of Congress, or Brewster Kahle’s organization once they’re in the public domain? After all, it’s not like they even have to fulfill the pledge for another seventy-five years.

Does anyone know if this is already part of Google/Amazon/Brewster’s plans?

Podcasting Conversations

The other day, I forwarded my thoughts on annotating podcasts over to Dave Winer and Adam Curry. The response I got from Dave was a little surprising – I think Dave thinks I don’t “get it” – but it did stir some other thoughts on the subject.

Dave had replied to me, Adam, and Wes Felter:

With all due respect, you’re thinking about it wrong. You’re trying to turn a podcast into something you use at a computer. Look at the first three letters in the name and think about where they came from. Annotation, if it’s going to happen, will be in voice, and implemented in the ipod. It’s easy if you just use it. Wes Felter says he won’t use something he can’t read on a computer. I wonder if Wes ever drives a car or rides a subway or takes a plane flight. And Wes if you don’t have an iPod yet, get one! It’ll change your life, in a nice way. ;-> Dave

Maybe I did a bad job explaining the way it would work (or I may just be totally wrong in my thinking).

I totally agree that podcasting should be maintained as an iPod-centric/portable-device-centric user experience, something that you can listen to while you do something else (walk, work, drive, whatever). What I’m proposing is that there needs to be a way for someone to dock their iPod and not only download an audio file (a podcast) via an RSS feed, but also aggregate a number of annotations for that podcast from their usual RSS feeds in such a way that they can view them/listen to the corresponding excerpts of the podcast without having to listen to the whole recording. Dave did make a good point – maybe the annotations are voice/audio recordings rather than text comments. I’d argue both are useful – after all, audio is basically invisible to search engines, and linking to a source file isn’t really specific enough – we need an audio equivalent to the HTML anchor tag.

In a world of exploding content we still have a finite amount of time – I can’t spend all my time listening to every podcast, and then listening to commenters’ audio annotations separately; and commenters are unlikely to take the time to rip down a full audio post on which they wish to comment, remix it with their comments and repost it on their own site – that takes way too much effort. Commenters need an easy way to annotate podcasts. Listeners need an easy way to scan podcasts.

We need a podcast equivalent to what we have in the blogging world today. When someone posts a blog entry, someone can easily add value to the entry by linking to the entry from their own blog and providing additional information; or a reader can immediately post a comment on the entry itself. We need blog entries and hyperlinks for audio, but in a way that maps to the portable world and the audio world. Example: maybe Lessig only said one really new thing in his speech at Web 2.0 – a mechanism is required to help direct listeners to that segment of the recording (rather than have them listen to the whole thing) and add additional commentary.

It’s likely that to do this, you really would want the functionality of iPodder integrated into your regular RSS aggregator – after all, you don’t know who might annotate a given podcast, so you’d want all the RSS aggregation in one app, rather than maintaining two apps/feed lists. In an ideal world, this application would slice up the original source file to allow splicing of the source audio with the comments aggregated from other feeds, and generate a number of playlists to allow the listener to choose how to consume the source audio and the commentary:

  • Original source playlist: This playlist would play the original content, uninterrupted. (Remember, the application has to pre-slice audio in order to permit the other playlists I’m about to describe, so there needs to be a playlist to knit together all the original source bits into one continuous audio stream)
  • “Greatest Hits” playlist: This playlist would take the commentary audio from other feeds pointing to the source and splice together the commentary audio with the sections of the original source audio to which the annotation relates. Perhaps it would even make sense to allow the user to choose whether to play the comment audio before or after the section of the source podcast audio to which it relates (or perhaps the commentator could signal this in some way in their “link” to the podcast). This playlist would allow the user to simply hear about the sections of the podcast that other people judged to be most important, and skip the rest.
  • “Call-In Show” playlist: This playlist would take the original source audio and intersplice the aggregated commentary from other feeds at the appropriate point in the audio. With this playlist, the original podcast would be augmented with the ongoing commentary aggregated from other sites.
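The splicing behind the last two playlists is simple enough to sketch. Here’s a minimal Python sketch of how an aggregator might assemble them – the Annotation fields, the bracketed segment notation, and the filenames are all my own invention, and a real implementation would cut actual audio rather than emit segment labels:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A comment on a slice of a source podcast (hypothetical fields)."""
    start: float        # seconds into the source audio
    end: float
    comment_file: str   # the commenter's audio clip

def greatest_hits(source: str, annotations: list) -> list:
    """Play each annotated slice followed by its comment; skip the rest."""
    playlist = []
    for a in sorted(annotations, key=lambda a: a.start):
        playlist.append(f"{source}[{a.start:.0f}-{a.end:.0f}]")
        playlist.append(a.comment_file)
    return playlist

def call_in_show(source: str, duration: float, annotations: list) -> list:
    """Play the full source, splicing each comment in after its slice."""
    playlist, cursor = [], 0.0
    for a in sorted(annotations, key=lambda a: a.start):
        if a.start > cursor:                        # un-annotated stretch
            playlist.append(f"{source}[{cursor:.0f}-{a.start:.0f}]")
        playlist.append(f"{source}[{a.start:.0f}-{a.end:.0f}]")
        playlist.append(a.comment_file)
        cursor = a.end
    if cursor < duration:                           # tail of the source
        playlist.append(f"{source}[{cursor:.0f}-{duration:.0f}]")
    return playlist
```

The resulting segment list could then be handed to whatever part of the application does the actual audio slicing.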

The idea here is to do for podcasting what blogging has done for newspapers – if blogging is a faster, better equivalent of “letters to the editor”, then podcasting should be a faster, better equivalent of the radio call-in show. It’s all about conversations – remember?

Annotating Podcasts?

O’Reilly’s Web 2.0 conference has landed in the midst of Adam Curry and Dave Winer’s exploding podcasting meme. With Web 2.0 providing MP3s of the conference proceedings, it becomes readily apparent what is missing in the podcast world: a way of representing and distributing podcast clippings and annotations.

Think about it – if everyone’s recording content all the time, especially conference content, there comes a point of saturation. There’s a theoretical limit to the amount of data you can enjoy. Enabling audio syndication through RSS and iPodder doesn’t just open a flow of information – it opens a fire hose. Nobody has that much time on their hands: RSS, regular web sites, newspapers, radio, magazines, books, conversations, yadda yadda yadda, and now you want to add audio to the mix? And then video (which is the inevitable next step, given the impending release of cheap personal video players)? Something’s got to give. If there’s going to be an abundance of readily available content, there’s going to need to be an easier way to navigate it.

I envision a hybrid solution to the problem:

  1. One part of the solution is already provided by the existing podcasting solution: RSS feeds use enclosures to provide media files either directly or via BitTorrent to subscribers. This gives the subscribers the raw source material to work with.
  2. The second part of the solution is an annotation format to mark up the source file – a way for bloggers to create a simple blog entry that identifies the source of the media file on which they are commenting, the section of the media file on which they are commenting (start time, stop time, duration, etc.), and the comment itself. Of course, these annotations would be distributed via RSS syndication.
  3. The final part of the solution would require an additional user interface element on the iPod (or other media player) itself – a way for the user to peruse the annotations for a particular source file and jump immediately to the point in the source media to which the annotation is related. This may be the trickiest part to achieve – then again, it might be achievable in other ways. Perhaps the podcast aggregator could divvy up the media files according to the comments it had aggregated for a given media file, embed the comments in the media file’s metadata, and only load the clippings onto the device?
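To make the second piece concrete, here’s one guess at what such an annotation might look like embedded in an ordinary RSS item, along with the trivial parsing it would require. The annotation element, its attribute names, and the URL are invented purely for illustration – a real format would need to be hashed out by the community:

```python
import xml.etree.ElementTree as ET

# A hypothetical annotation element inside an ordinary RSS <item>;
# the element and attribute names are invented for illustration only.
entry = """
<item>
  <title>Lessig's one new point</title>
  <description>I really liked what he said about free culture...</description>
  <annotation src="http://example.com/lessig-web20.mp3"
              start="00:14:30" stop="00:16:05"/>
</item>
"""

def parse_annotation(xml_text: str) -> dict:
    """Pull out the source file and the slice being commented on."""
    item = ET.fromstring(xml_text)
    ann = item.find("annotation")
    return {
        "source": ann.get("src"),
        "start": ann.get("start"),
        "stop": ann.get("stop"),
        "comment": item.findtext("description").strip(),
    }
```

An aggregator subscribed to the commenter’s feed could parse entries like this, fetch the referenced source file, and build the clippings described above.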

Doing this would eliminate the need for users to listen to an entire audio blog entry in order to get the information they want. It would also put the “conversation” back into podcasting – otherwise, what is podcasting, but radio (a broadcast, one-way medium) in disguise? With this kind of annotation system, the user experience would start to resemble the real world a little more – it would be akin to saying, “Hey, I know you caught the latest Gillmor Gang, and you know what I think? I really liked what Dan was saying about X, but in addition I feel <insert commentary>”.

So, who’s going to build it? Or does it already exist?

Oh, I’ve Wasted My Life!

I read about Evan Williams (CEO of Pyra Labs, creator of Blogger) making the decision to leave Blogger/Google and move on. It was the most depressing thing I’d ever read.

From Evan’s site:

Six years is a long time. Or a little. Depending. For me, it’s a little under 20% of this life on Earth.

For the math-challenged in my audience: that’s a little over 30. This guy is only a little older than me (and he’s also a Tragically Hip fan). And undoubtedly quite well off after the IPO of Google. And a founder in a company that has played a core role in developing and nurturing a new wave of a democratizing technology that is set to be (if it isn’t already) The Next Big Thing. It’s The Next Big Thing, and he’s already Been There and Done That, made his money and is moving on.

What the hell have I been doing with my time?

Last week, I had coffee with an enthusiastic entrepreneur looking to change the world. He’s twenty-five. He grew up in India, worked in Australia, and, at the age of 21, was the youngest executive at News Corp.

What the hell have I been doing with my time?

Four years ago at FC 2000 I met Max Levchin. He was CTO of a new little startup called PayPal – PayPal was bought by eBay for $1.5 billion in a stock swap.

WHAT THE HELL HAVE I BEEN DOING WITH MY TIME?!?

I spend my days working, making money, paying bills, and trying to learn what I think I need to learn for whatever the future holds. Don’t get me wrong, I like my job and I’m learning a lot – but is it the right stuff? Do I have the Right Stuff? I come home to try to figure out what I want to do, where the opportunities are, and What Matters. Working on something that Matters is of central importance to me. I go to tech events to chat and network, but I’m growing increasingly uncertain that there’s much point if I haven’t figured out what I want to work on. Nothing’s popping out at me. I grow increasingly uncertain.

What have I been doing with my time?

My biggest fear is that somewhere down the road, I’m going to turn around and ask myself this same question and be equally unsatisfied with the answer: Life happens – but is this it? Oh, I’ve wasted my life!

B-b-b-Bill!

Bill Gates (Bill!) dropped by the Computer History Museum on Friday for a brief congress with the Microserfs of Silicon Valley and a conversation with John Hennessy (President of Stanford). Topics covered by the conversation: security, DRM, malware/adware, making computers even easier, and The Future.

But first some fun.

The event opened with Microsoft’s “Behind the Technology” video, a spoof of VH1’s “Behind the Music” that charted the rise of Microsoft from the days of the Altair to present day. In a particularly hilarious sequence, Anthony Michael Hall recounted the stress of playing Bill Gates in “Pirates of Silicon Valley”:

Anthony Michael Hall: Preparing for that role was challenging – the caffeinated drinks, the cold pizza, the late nights, the lack of showers – it was hell. I mean, this guy was a geek.

Video cuts to Bill Gates

Bill Gates: He’s supposed to look like me?!? Come on – that guy’s a geek!

The video was rounded out with comedy ranging from the absurd (P. Diddy rapping about DOS and his all-DOS rap album project, “DOS Forever”) to the downright scary (Steve Ballmer reprising his “Monkey Boy” antics as he hawks Microsoft Bob in a pitch that would put Ron Popeil to shame). Bill even got in on the comedy:

Bill Gates: It was very clear to me that the Internet was where everyone was going to be. It was especially clear to me after everybody had already gone there.

The video wrapped up with a preview of the fictitious next episode of the program, focusing on the exciting world of databases while cutting to a shot of Ellison aboard his yacht.

Right – humour aside – what was on Bill’s mind? Here’s the summary from my notes.

Natural Interfaces

John Hennessy put the question to Bill: what do you see as the biggest failure of computers? Bill responded that we’d been working on speech recognition since the mid-sixties and were still having a difficult time getting it to work reliably. The need for speech recognition, from his point of view, is being driven by the need to provide more natural user interfaces which enable people to interact with computers in an intuitive way. The same thinking also applied to digital ink and handwriting recognition – and in both cases, Bill believed that Asia would be on the forefront of these technologies, driven by the unsuitability of keyboards for handling Asian languages.

Privacy and Security

When asked about the tradeoff between privacy and usability, Bill started by talking about the threat of spam and phishing attacks. In the case of spam, he felt the current solutions were about halfway to solving the problem – he noted that in the case of Microsoft’s internal network, he’s never received a piece of spam. In contrast, he viewed the threat of malware and adware to be on the rise – and revealed that Microsoft intends to provide a solution. This is rather ironic, given that security holes in its own products, primarily Internet Explorer and the Windows operating system, are providing the means to infiltrate users’ computers and propagate this menace.

The conversation turned to talk about the threat of allowing arbitrary code to run on a computer. Bill explained the difficulty Microsoft had in trying to simplify the concept of security for the user – initially, they thought it would be enough to have a popup ask the user if they wanted to allow a script or embedded executable on a web page to run on the user’s computer. Unfortunately, Microsoft soon learned that users simply clicked “OK” for everything! Going forward, Bill believes there needs to be tools to “prove” code, to show or describe contracts between code modules.

Preventing bad code from being installed in the first place only provides part of the solution. Another part is isolation to ensure that any infected machine is unable to propagate its infection to other machines. Part of the problem, according to Bill, is that the Internet is an open system. Unlike biological systems, where the spread of a virus is limited by the local environment, a machine on the Internet can contact just about any other machine – infection runs rampant. In the future, Bill believes we need to build systems that enforce isolation by default – systems that decide whom to accept connections from (or data, as the recent JPG decoding flaw illustrated so effectively). In short, the Internet is missing some form of guarantee; whether this is achieved by layering something on top of the existing system or by establishing a new system remains open to debate.

Digital Rights Management (DRM)

Inevitably, the discussion turned to focus on DRM, which is only natural as any system that can “prove” code might just as easily be used to ensure that the user can’t access media for which they haven’t paid. Most interesting was how Bill focused this discussion on privacy of tax records, patient records, and other private information, instead of media. When asked later by Brad Templeton about the feasibility of DRM in light of the analog hole, Bill was quick to contrast the DRM requirements of media with those of other private information. Undoubtedly, Microsoft is going to pursue DRM for applications like health records – and in that domain, he argued that there is no equivalent to the analog hole (though I would argue otherwise – copying the information by hand counts, at least in my mind).

When it comes to media, Bill viewed this mostly as a consumer issue. There will always be leakage, but the key to successful DRM would be removing the barriers to transportability. I should be able to move my music around without a problem – the rights and the music should be held separately, in fact, he argued. The transportability of secured media will be the determining factor in where the balance between free and paid media settles – make it easy for the user, and they’ll pay for that convenience rather than scrounging to rip off the content producers (a point hit on in “The Long Tail” in this month’s Wired).

Open Source

During the questions, one of the audience members asked about how Microsoft would proceed in light of the threat of Linux and Free/Open Source Software, especially in developing markets like China. Bill got a little out of joint here when the person posing the question mentioned that more than 50% of all servers were running Linux. “First, start with the facts,” Bill quipped, and proceeded to explain that Windows was still dominant in the server market.

Bill then pointed out that China already has free software – they’re running pirated versions of Windows! The key to the future for Microsoft in these markets was proving the value of the software, the system, the support, and the ongoing innovation required to meet customer needs that Linux was not capable of delivering. In countries where Microsoft faced high piracy rates, this strategy had brought compliance rates into line with those in North America, and he seemed convinced that the same would happen in China.

Bill went on to point out that Linux was mainly serving to unite the fragmented UNIX market – something that the UNIX manufacturers had been unable to do (“Every week, they’d all get up on a stage somewhere and swear to work together, and then the HP guy or the IBM guy would go back to the engineers and demand they make their version better than everyone else’s!”). In his view, in the future there will only be two operating systems: Windows and Linux. As for the others? Bill got a bit cocky here, saying:

Bill Gates: Microsoft has had clear competitors in the past. It’s good that we have museums to document them.

Then again, the Computer History Museum just happens to be located in a former SGI building, so perhaps the cockiness is justified.

E-Voting

This topic came up somewhere in the DRM discussion, but was touched on only briefly. Bill contrasted the difficulty of securing software with that of securing the electoral system, in terms of the problem of having to convince the public at large that a system is secure. As he put it quite succinctly:

Bill Gates: Software is magic. People don’t want magic involved in ensuring the integrity of the voting process.

A-freakin’-men!

Conclusion

Unfortunately, I never had the opportunity to ask Bill a question provided by my co-worker:

When you and Paul Allen wrote the first version of Basic for the Altair on a home-made software emulator, the legal system was, shall we say, less mature than it is today. Your hacking led to one of the largest creations of jobs, wealth, and technological progress in this country’s history.

Do you feel that what you did then would be possible in today’s intellectual property framework, and do you see that as a good or bad thing?

Overall, I was pretty impressed with the event. Although I had seen Gates speak before at a Microsoft conference (where he delivered a keynote speech and introduced a few demos), he seemed more engaged in this discussion. Though nothing he said was especially surprising, the breadth and depth of his apparent knowledge was impressive. It’s worthwhile to try and see him if you ever get the chance.

DMV-Brand Glue

I thought the previous experience with the DMV was the most aggravating experience I’d ever have to endure. I was wrong. Why? Because I got a letter from the DMV today requesting more information to support my previous car registration application. For the third time. Since January.

When I first arrived in California, I dutifully attempted to register my car at the California DMV. California requires you to register your car within 20 days of arrival – something that is impossible to do given the DMV’s totally inconvenient hours of operation. Nevertheless, Ashley and I trudged into the DMV, armed with our vehicle, proof of vehicle ownership, smog certificate, proof of identity, proof of compliance with US safety standards, et cetera, et cetera, et cetera. After filling out the paperwork and letting a DMV employee verify the car’s VIN (vehicle identification number), we got our new California plates and were done. A few days later we got our temporary registration sticker in the mail. Easy, right? A little too easy…

A few weeks later, it started.

We got a letter in the mail from the DMV stating that we had failed to provide proof that the car was compliant with Federal Motor Vehicle Safety Standards and US EPA emission regulations. Despite the fact that we had provided them with the requisite letter from Toyota, as instructed. Oh, and we also forgot to provide a smog certificate – except that we had provided it to them in person, as instructed. Oh, and that we needed to provide a Customs form that showed that the car had passed Customs inspection – despite the fact that we drove into California, and had been told by a US Customs officer no such declaration would be necessary.

Fine. We gathered up the paperwork required. I even went out to SFO to get US Customs to provide the required Customs declaration – even though they didn’t know what form was required, why the DMV would require that form, and lost our application for that form. But we got it all together and sent it in.

A few weeks later, we got another letter. This time, the DMV required our VIN to be verified by a peace officer – despite the fact that they already had the VIN in their computer, and that it had been put there by a DMV employee. I took a quick trip to the Mountain View Police Department, interrupted a police officer from doing real work, got the form filled out, and sent the paperwork back to the DMV. Again.

Then today, we got another letter from the DMV. This time, the DMV wants the original application for vehicle registration. The original application that we handed to the DMV employee, and that got returned to us with our temporary registration sticker? The same.

This is ridiculous. California is struggling to recover from crippling debt, debt that has required a $15 billion bond offering to keep the state afloat. I think I now understand the source of the problem – then again, it’s the same problem everywhere. Extremely Stupid Bureaucracy™: the glue that holds together the gears of our economy.

Software: The New Law?

There is a theory that the language you speak affects the way you think – that the structure of language itself affects cognition, the basis of civilized society. Computer languages are believed to exert a similar effect on software – the type of solution that programmers can create is ultimately limited by the tools they choose to use to sculpt their digital golems. Hence, it should come as no surprise that software is having a profound effect on society. That’s not to say it’s limiting the form of solutions our society can create through software – if anything, software is breaking through artificial boundaries created by our system of law that should have died a long time ago.

I have been mulling this for a little while, but a recent post by Jeff Jarvis prompted me to consider how quickly software is making government irrelevant. And you too, Big Media (consider this my obligatory blogger slag against the creaking institution of the fourth estate). People are being empowered by software at light speed. It is providing tools that allow them to quickly and easily route around the self-interested, non-functional chunk of brain damage that is our current political and legal system. Software is rewiring our value systems faster, better, and more fairly than what currently exists – and the changes it is wreaking are only accelerating, incorporating each new advance into the next cycle of innovation.

Remember Napster (the original, not the current bastardized incarnation)? No sooner had Napster been sued by the Recording Industry Association of America than Gnutella sprang up and increased the magnitude of effort required to stop filesharing. The history of filesharing since then reads like a chapter of the Bible – Napster begat Gnutella, who begat Limewire, who begat…ad nauseam. Meanwhile, the RIAA continues to fumble along and fall further and further behind the innovation curve, suing filesharers, promoting crappy DRM solutions, and backing flawed legislation, oblivious to the fact that new software has rendered their fight not only futile, but also irrelevant. Copyright protection schemes are being cracked literally hours after their release, legal assaults are being thwarted by software that shields users’ identities, and a new generation of file-sharing systems is enabling users to slurp down large files and distribute them in a fashion that encourages everyone to contribute their resources to spreading data as fast as possible. Welcome to the new form of democracy.

What’s amazing is the scale of resistance to this change. Look at what’s happening in the burgeoning voice over IP (VOIP) space: legislators are trying to use antiquated legislation, originally designed to ensure rural access to analog phone service, to impose taxes on the emerging technology. Give it up, guys – the jig is up; move on and find a new game. I mean, how can you even enforce this tax? Any device with access to bandwidth and a microphone can effectively be transformed into a VOIP solution – what are they going to do, tax them all?

Which brings up a good question: how is government going to enforce just about any of the rules anymore? In a world of software and bits, a world where a person can work from one country but get paid in another, where intellectual “property” is easily transported and duplicated at zero cost, how is it possible for governments to hold onto power? After all, the law is only the law if you can enforce it – something Arnold needs to figure out before signing any more bogus legislation.

If everything in the world is composed of either bits or atoms, as Nicholas Negroponte pointed out in his book, then the unmanageable nature of bits leads me to the inevitable conclusion that atoms are the sole possible source of government or corporate power. Come to think of it, is this really a change? Historically, the government’s ability to take your land, take your stuff, or restrict your movement by encasing you in a prison made of atoms gave it the power it required to tax citizens and to encourage the formation of a civil society. I guess it’s a case of “meet the new world, same as the old world” – at least until we have the technology to bridge the world of bits and the world of atoms, to construct and reproduce physical objects in a digital fashion. I shudder to think about the social discontinuity that technology will bring.