Email Gremlins

So I’ve been having this really strange problem. I use OS X’s Mail app along with SpamSieve for spam filtering. But recently I’ve noticed that the spam detection has been hyperactive: way too many false positives. I tried re-training SpamSieve. No help. So then I shut it down altogether. Whoa! I was *still* getting messages sent to the spam folder. Next, all the usual steps: rebooting, re-initializing this and that. Still no help. With absolutely no spam filtering turned on, stuff was still being flagged and moved. (Any of you email geeks starting to get a clue here?)

For a totally separate reason I pulled out my MacBook Pro, and that’s when it hit me. I even caught the nasty gremlin in the act. What was it?

I use Google as my inbound and outbound email server. Yes, I use their spam filtering, too — it’s much better than SpamSieve — but that wasn’t it. Because I have three different email clients (if you count the iPhone), I use IMAP4 instead of POP3 to communicate between those clients and the Google server and keep things in sync. So here’s what was happening: My MacBook Pro had been on and running its own instances of Mail and SpamSieve. Messages would come into Google and, in some cases, my laptop would grab them. The copy of SpamSieve on that computer decided some of them were spam and moved them to the spam folder. And because I’m using IMAP4, that change was sent to the server and then to the email client running on the desktop. It was my laptop, running this other instance of my spam-filtering software, that was moving messages around on the email server and hence in my desktop client. It was downright spooky to see the messages moving without a clue as to why, but as soon as I realized my laptop was also running email, it became instantly clear.
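
If you want to see the mechanics, here’s a rough sketch of what any IMAP client effectively does when it files a message as spam. This is just Python’s standard imaplib, not anything Mail or SpamSieve actually runs, and the account details, folder name and search string are all placeholders:

```python
import imaplib

# Placeholder account details. "[Gmail]/Spam" is the usual name of Gmail's
# spam folder over IMAP, but treat that as an assumption about the account.
imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login("user@example.com", "app-password")
imap.select("INBOX")

# Pretend the local spam filter just decided this message is junk.
_, data = imap.search(None, 'SUBJECT "totally legitimate offer"')
for num in data[0].split():
    # The classic IMAP "move": copy to the other folder, flag the original
    # as deleted, then expunge.
    imap.copy(num, "[Gmail]/Spam")
    imap.store(num, "+FLAGS", "\\Deleted")
imap.expunge()
imap.logout()

# Because the move happens on the server, every other client syncing the
# same account (the desktop Mail, the iPhone) sees the message vanish from
# the inbox and show up in the spam folder.
```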

Adventures in Full-Text Search

SpokenWord.org calls itself a site for “finding and sharing audio and video spoken-word recordings.” Sounds great, but our “finding” capabilities (search, in particular) have been pretty bad. In mid-March I started writing a fancy new full-text search module that worked across database tables and allowed all sorts of customization and advanced-search features. Six weeks and a few thousand lines of code later, I had a new system that…well, sucked. There are all sorts of reasons why, but it sucked. Bottom line: It just didn’t do a decent job of finding stuff.

I then considered implementing something like Solr, based on Lucene. But the more I thought about it, the more I realized that would be only marginally better.

Searching for audio and video programs in a database that will hit 250,000 entries in the next few hours comes down to a few architectural issues:

  • You’ve got to search the text of titles, descriptions, keywords, tags and comments, which in our case are stored in separate database tables.
  • There are three ways of doing this: (1) read the database tables in which these strings are stored, in real time; (2) in background/batch, build a separate table of the integrated text from the separate tables, then search this integrated table in real time; or (3) build the integrated table by scraping/crawling the site’s HTML pages and then, as in #2, search that table in real time. (A minimal sketch of #2 appears after this list.)
  • Make your search smart by ignoring noise words, tolerating (or correcting) spelling mistakes, understanding synonyms, etc.
  • Develop a ranking algorithm to display the most-relevant results first.
  • Provide users with advanced-search options such as Boolean logic and the ability to restrict the search to a subset of objects (e.g., only programs or only feeds).
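
To make #2 concrete, here’s a minimal sketch of the batch-plus-integrated-table approach. It uses SQLite’s FTS5 purely as a stand-in, and every table and column name is hypothetical rather than SpokenWord.org’s actual schema:

```python
import sqlite3

def rebuild_search_index(db: sqlite3.Connection) -> None:
    """Batch job (technique #2): flatten the separate tables into one searchable table."""
    db.executescript("""
        DROP TABLE IF EXISTS search_index;
        CREATE VIRTUAL TABLE search_index USING fts5(program_id UNINDEXED, body);
    """)
    db.execute("""
        INSERT INTO search_index (program_id, body)
        SELECT p.id,
               p.title || ' ' || p.description || ' ' ||
               COALESCE((SELECT group_concat(t.tag, ' ')
                         FROM tags t WHERE t.program_id = p.id), '') || ' ' ||
               COALESCE((SELECT group_concat(c.comment, ' ')
                         FROM comments c WHERE c.program_id = p.id), '')
        FROM programs p
    """)
    db.commit()

def search(db: sqlite3.Connection, query: str, limit: int = 20) -> list:
    """Real-time search: one query against the integrated table, ranked by relevance."""
    return db.execute(
        "SELECT program_id FROM search_index "
        "WHERE search_index MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()
```

The point is that the messy cross-table work happens offline in a batch job; the real-time query touches a single table.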

My fancy search code used technique #1, and the resulting code generated some of the longest, most confusing and slowest SQL queries I’ve ever seen. And it was buggy. Solr uses technique #2, and that’s clearly better for all sorts of reasons. #3 seemed like a particularly poor solution because (a) you lose track of the differences between titles and tags, for example, and (b) it’s kludgy. Or so I thought.

But I’ve now implemented technique #3 by outsourcing the whole thing to Google Custom Search, and the initial results are spectacular. Here’s why:

  • Scraping HTML may sound kludgy, but it works.
  • Google knows how to scrape web pages better than anyone.
  • So long as you keep the text you want searched in the page (e.g., not served via Ajax), Google will find it.
  • Google’s smart-search, advanced-search and relevance-ranking are better than anything you can write or find elsewhere.
  • Google does all of this with their CPU cycles, not ours, thereby eventually saving us an entire server and its management.
  • Google allows educational institutions and non-profit organizations to disable ads.
  • Google does a better job of finding what you want than is possible using an in-house full-text search with lots of customized filtering options.

This last one is important. I spent a lot of time giving users tools for narrowing their searches. For example, I provided radio buttons to distinguish between programs, feeds and collections. But it annoyed even me that users had to check one of those buttons. People would search for “IT Conversations” and find nothing, because the default was to search individual programs, not feeds, and there are no individual programs with that string in their titles or descriptions. Annoying and confusing.

Then I had a moment of clarity. Rather than giving users control of the object type up front, I came up with another scheme. I changed the HTML <title>s of the pages so that they now start with strings like Audio:, Video:, Feed: and Collection:. This way (once Google re-scrapes all quarter-million pages), the search results will let you immediately and clearly distinguish programs (by media type) from RSS/Atom feeds and personal collections. I’ve tried it on my development server and it’s great. Because of the value of serendipity and the fact that Google’s search is so good, I find it’s much more valuable to discover objects this way than to specify a subset of the results in advance.
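
The change itself is tiny. Here’s a hypothetical sketch of the kind of helper involved; the mapping and function name are made up, not the site’s actual code:

```python
# Hypothetical sketch of the <title> prefixing scheme described above.
TITLE_PREFIXES = {
    "audio": "Audio:",
    "video": "Video:",
    "feed": "Feed:",
    "collection": "Collection:",
}

def page_title(object_type: str, name: str) -> str:
    """Build the HTML <title> so Google's result listings reveal the object type."""
    prefix = TITLE_PREFIXES.get(object_type, "")
    return f"{prefix} {name}".strip()

# page_title("feed", "IT Conversations")  ->  "Feed: IT Conversations"
```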

Finally, I’ve discovered that Custom Search supports a feature from regular Google search: you can specify part of a URL as a filter. For example, if you want to search only for feeds, you can start your search string with “http://spokenword.org/feed”. The results will include only our feeds. Same for /collections, /members and /programs. How cool is that? (Thank goodness for RESTful URLs!) I have yet to integrate that into the web site — a weekend project — but it means we can offer users the ability to restrict the search to a particular type of object if that’s what they want.
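
That integration is mostly a matter of prepending the right prefix to the query string before handing it to Google. A hypothetical sketch, using the URL prefixes mentioned above:

```python
# Hypothetical helper: restrict a Custom Search query to one object type by
# prepending the matching RESTful URL prefix, per the trick described above.
OBJECT_PREFIXES = {
    "feed": "http://spokenword.org/feed",
    "collection": "http://spokenword.org/collections",
    "member": "http://spokenword.org/members",
    "program": "http://spokenword.org/programs",
}

def restricted_query(user_query: str, object_type: str = "") -> str:
    """Prepend the URL prefix for the requested object type, if any."""
    prefix = OBJECT_PREFIXES.get(object_type, "")
    return f"{prefix} {user_query}".strip()

# restricted_query("IT Conversations", "feed")
#   -> "http://spokenword.org/feed IT Conversations"
```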

I’m so glad that Google Custom Search works as well as it does that I’ve decided not to brood about the six weeks of my life wasted designing, coding and debugging my own search. It was another one of those learning experiences.

Note: Not all of the features described above appear on SpokenWord.org yet, and the maximum benefit won’t be visible until Google re-scrapes the site, but if you use the Search box at the top of the right-hand column you’ll get the idea. Very cool.

The Submission Wizard

Making it easier to submit content to SpokenWord.org has always been high on the to-do list. For the past seven weeks I’ve been working on a Submission Wizard, which I hope goes a long way towards that goal. It’s a wizard because it takes what you give it and tries to figure out what you meant. If you supply the URL of a media file, it will then ask you for an associated web page from which it will suggest the title, description and keywords. If you start by supplying a web-page URL, the wizard will scrape that page looking for RSS/Atom and OPML feeds. And whether it finds those feeds or you explicitly supply a feed’s URL, the wizard will give you choices of what to submit and what to add to your collection(s) before showing you all the steps it takes to follow your instructions.
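
The feed-discovery step, at least, is standard RSS/Atom autodiscovery via <link> tags. Here’s a minimal sketch of that one piece using Python’s standard library; it’s not the wizard’s actual code, and it ignores OPML, relative URLs and plenty of edge cases:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    """Collect hrefs from <link rel="alternate" type="application/rss+xml" ...> tags."""

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = {name: (value or "") for name, value in attrs}
        if a.get("rel", "").lower() == "alternate" and a.get("type", "").lower() in FEED_TYPES:
            if a.get("href"):
                self.feeds.append(a["href"])

def discover_feeds(page_url: str) -> list:
    """Fetch a web page and return the feed URLs it advertises."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    finder = FeedLinkFinder()
    finder.feed(html)
    return finder.feeds
```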

After the RSS/Atom feed parser, which continues to be a maintenance challenge, the Submission Wizard is probably the most-complex single piece of code for the site. It weighs in at about 6,000 lines of new code and it’s certainly not done. Give it a try, and if it doesn’t do what you think it should, let me know. I’m particularly interested in finding more web pages that the Submission Wizard can learn how to scrape.

Trying to Crack YouTube Videos

Anyone out there have an idea how to solve this?

Over at SpokenWord.org we’re trying to figure out how to scrape YouTube pages (or pages with embedded YouTube players), then hack together a video or Shockwave Flash URL that we can include in the <enclosure> element of RSS feeds. We’ve been able to do this for programs in YouTube EDU such as this page (http://www.youtube.com/watch?v=Y1XpTc1-lh0), which we convert to this media-file URL (http://www.youtube.com/v/Y1XpTc1-lh0&f=user_uploads&app=youtube_gdata). The latter URL can be played by standard Flash players, so we can include it in RSS feeds. But this only works for certain special cases such as YouTube EDU, not for mainstream YouTube pages.
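
For the cases that do work, the transformation itself is simple. Here’s a sketch in Python; the query parameters are copied from the example above, the function name is made up, and (as noted) the resulting URL pattern only holds for special cases like YouTube EDU:

```python
from urllib.parse import parse_qs, urlparse

def edu_enclosure_url(watch_url: str):
    """Convert a YouTube EDU watch URL into the Flash-playable form described above.

    e.g. http://www.youtube.com/watch?v=Y1XpTc1-lh0
      -> http://www.youtube.com/v/Y1XpTc1-lh0&f=user_uploads&app=youtube_gdata

    Returns None if no video id can be found.
    """
    video_id = parse_qs(urlparse(watch_url).query).get("v", [None])[0]
    if not video_id:
        return None
    return f"http://www.youtube.com/v/{video_id}&f=user_uploads&app=youtube_gdata"
```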

I’m a TWiT Again

Had a lot of fun Sunday. Drove up to the TWiT Cottage in Petaluma to be on Leo Laporte’s This Week in Tech (TWiT) episode 199. (Wow, the last time I was on was over a year ago!) Leo and the chat room seemed to think it was a pretty good show. The big treat for me was getting to meet Wil Harris, who was in the studio this time instead of participating via Skype as usual. Leo is a real pro, and it’s always an honor to be invited to join the show.

Happy Birthday to You

Today is the 6th anniversary of the first IT Conversations program, which pre-dated podcasting by about 15 months. And who was our second guest? None other than Phil Windley, who is now Executive Producer of the channel. What you may not realize is that Phil has actually presided over IT Conversations for longer than I did — he began his stint in April 2006 — and has certainly published the majority of the channel’s 1,895 programs to date.

Behind Phil is TeamITC: our worldwide gang of 40 audio engineers, website editors and series producers, headed by Paul Figgiani (audio) and Joel Tscherne (producers), who do all the heavy lifting to bring you new high-quality programs every day. The same team also brings you Social Innovation Conversations, in collaboration with the Center for Social Innovation at the Stanford Graduate School of Business (Bernadette Clavier, Executive Producer), and the soon-to-be-launched CHI Conversations (Steve Williams, EP).

We’re so used to doing this day in and day out that it’s easy to forget our own history. For example, as Ian Forrester pointed out this morning, we were one of the first (perhaps *the* first) to publish conferences online for free. It began with our live audio streams from the O’Reilly Digital Democracy Teach-In and Emerging Technology Conference in February 2004.

Here’s a special thanks to everyone on the team including those who have helped us and moved on to other endeavors. Approximately 145 people have been members of TeamITC at one time or another. And thanks to all our listeners, fans and particularly donors and supporters who help us pay the bills.

Happy Birthday to You.