YouTube/Google Lemonade?

Mark Cuban published a truly fascinating third-party analysis of the inner workings of the YouTube/Google deal and how it was structured to address copyright-infringement issues. Whether or not it's true (Mark includes a disclaimer of his own), it has the twists and turns of a political thriller.

  • Nearly $500 million of the $1.65 billion purchase price is not being disbursed to shareholders but is instead being held in escrow.
  • YouTube approached the media companies, who smelled a transaction when YouTube radically changed its initial revenue-sharing offer to one laden with cash. The media companies, thinking short-term and not guessing how big this deal would become, settled for about $50 million each, to be paid from the Google buyout money.
  • To avoid paying artists' royalties, the media companies structured this as an equity transaction.
  • YouTube negotiated a six-month moratorium on infringement litigation during which time its competitors will continue to be sued, giving Google a huge window for further growth and dominance.
  • This, in turn, chokes off venture-capital funding of YouTube's competitors.

The details, speculative or otherwise, are well worth the read. Thanks, Jake Shapiro.

This Week in Law

There’s a great new “netcast” by IT Conversations’ host Denise Howell, called This Week in Law (TWiL), over on Leo Laporte’s TWiT network. The first episode had some fairly severe audio problems, however, and a few “misinformed” individuals are blaming The Levelator. From a comment I just posted on the TWiL site:

Re comments by 'Nation': The issue is that they ran the audio through "The Levelator™" (see: http://www.gigavox.com/levelator ).

Just stop and think for a moment. You all know that Leo is a pro and committed to great audio and content. Do you *really* think that if The Levelator made the audio worse that he would use it? And how can you judge the output without comparing it to the original?

Yes, The Levelator was successfully used for TWiL 1, and the output was superior to the input.

Software by Bounty: The Demand Side Works

Many of us with OS X and BlackBerry 8700s wanted a way to combine them for web and email access. No such application existed, so Alex King organized a bounty fund. I and others contributed, and the fund eventually reached US$675. Now Daniel Pasco has stepped up to write that application and claim the bounty. We contributors are looking forward to a test release in the next few days.

This is a tangible manifestation of what Doc Searls has described as the demand side taking control, and I think it bodes well for the further formalization of a demand-driven economy of software products.

The Levelator Abroad

Glowing reviews for The Levelator are coming in from all over, but perhaps none is as entertaining as this one in German, as translated by Google:

Abbott Levelator – the Tool for Podcasts et al.
In the current expenditure of TWiT Doug Kaye, inventors of ITConversations and founders of Gigavox, a Softwaretool places forwards named Levelator. The thing makes exactly that, according to which it means: It pegelt unbalanced audio photographs out. Ex post office. Speak: You take up your thing only and leave then the file by the Levelator. The result is really amazingly well, hab’s tried out. Who perfect always times again with less than clay/tone photographs to do has (parts too loud, parts too quietly and both itself alternating in arbitrary consequence…), which/that can facilitate the life for Levelator much. Matter of expense: $ 0. – Hot Go GET it, while it’s! Levelator is written in Java, what has the advantage that the program is platform independent. On my Mac hat’s perfectly functions!

I love “ex post office.”

Here’s the original: http://infam.antville.org/stories/1487538/

Personal Backup on Amazon S3

Jeremy Zawodny is running some interesting tests and comparisons using Amazon’s S3 storage service for remote backup of his personal computers. Read the comments, too.
For me, a good backup solution has one more requirement: versioning. I wrote about the problem in my first book, Strategies for Web Hosting and Managed Services. I referred to the problem as Propagation of Corrupted Data:

Consider what happens if the original version of a file becomes corrupted (perhaps due to faulty hardware or software) but the corruption goes undetected. The corrupted version of the file will be copied to the archive server at the next backup interval. That's okay, as long as there's still at least one other, older archive that contains an uncorrupted copy of the file. Otherwise, the corrupted copy will replace the only remaining uncorrupted version.

There are three backup use cases. First is the one we think about most often, catastrophic failure, in which you probably want to recover an entire drive. This is infrequent but high value. Second, much more frequent, is recovering a file because of human error. Like everyone else, I sometimes delete a file (or directory — aargh!) by accident and I need to get it back.

But what happens when a file is damaged for one reason or another? Most backup systems will simply replace the good copy of the file with the "new" damaged copy, so there's no way to recover the good copy. If you run rsync or another backup once a day, you've got a 24-hour (or less) window to discover the corruption and recover from your backup. That's why I always use rotating backups. I don't overwrite one backup with another. I create a new copy or image. I'll then keep weekly, monthly, and annual versions of everything, so there's a good chance I can recover a pre-corruption version. It's not as robust as a fully versioned system (like svn), but it has saved my butt.
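The rotating-snapshot idea above can be sketched in a few lines of Python. This is not my actual rsync setup, just a minimal illustration; the directory names and the `snapshot`/`snapshots` helpers are hypothetical. The key point is that each backup lands in a fresh time-stamped directory instead of overwriting the previous one, so an older, uncorrupted copy survives even after a corrupted file gets backed up:

```python
import datetime
import pathlib
import shutil
import tempfile

def snapshot(src: pathlib.Path, backup_root: pathlib.Path) -> pathlib.Path:
    """Copy src into a new time-stamped directory under backup_root.

    Each call creates a fresh snapshot rather than overwriting the
    previous one, so older (possibly uncorrupted) versions survive.
    """
    stamp = datetime.datetime.now().strftime("%Y-%m-%d-%H%M%S.%f")
    dest = backup_root / stamp
    shutil.copytree(src, dest)
    return dest

def snapshots(backup_root: pathlib.Path) -> list:
    """All snapshots, oldest first (the timestamps sort lexicographically)."""
    return sorted(p for p in backup_root.iterdir() if p.is_dir())

# Demo with throwaway directories (placeholder paths, for illustration only).
work = pathlib.Path(tempfile.mkdtemp())
src = work / "docs"
src.mkdir()
root = work / "backups"
root.mkdir()

(src / "notes.txt").write_text("good data")
snapshot(src, root)

# Later, the file is silently corrupted -- and backed up again.
(src / "notes.txt").write_text("CORRUPTED")
snapshot(src, root)

# The earlier snapshot still holds the good copy.
print((snapshots(root)[0] / "notes.txt").read_text())
```

A real setup would also prune old snapshots on a weekly/monthly/annual schedule, and tools like rsync with `--link-dest` make each snapshot cheap by hard-linking unchanged files, but the recovery property is the same: the corrupted copy never clobbers the last good one.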

My ideal remote-backup solution would be (a) automatic, (b) unattended, and (c) one that includes at least some form of versioning to keep different versions of modified files.