Collection Limits for SpokenWord.org

Because SpokenWord.org collections can subscribe to feeds and even follow other collections, they can grow to a size that is unmanageable. We’ve therefore added three ways in which you can keep your collections under control.

  1. Limit the number of programs.
  2. Limit the age of programs.
  3. Limit the size of a collection’s RSS feed.

On your collection’s page, click the Info link under “Edit This Collection”.

1. “Remove oldest programs when there are more than [count] or [age].” The default value for [count] is 1,000, the maximum number of programs any collection can contain. If you want to keep your collection smaller, select another value: 10, 25, 100 or 250. As you add new programs, earlier-added programs will be removed in order to maintain the maximum size you specify.

2. Likewise, [age] tells us how long to keep programs from the date you collect them. The default is never to delete them (by age), but you can change this to automatically remove programs that have been in your collection for more than one week, one month or one year. (A sketch of this pruning logic appears after item 3.)

3. “Most-recent programs to include in RSS feed: [count].” By default, we’ll include up to 100 programs from your collection in its RSS feed. But you can use this option to change that value to 10, 25, 50, 100, 250 or “all”.
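
For the curious, here’s a minimal sketch of how the count- and age-based pruning (items 1 and 2) might work, assuming each program carries the date it was collected. The function and field names are hypothetical, not the site’s actual code.

```python
from datetime import datetime, timedelta

def prune_collection(programs, max_count=1000, max_age_days=None):
    """Drop the oldest programs so a collection stays within its limits.

    `programs` is a list of (collected_at, program_id) tuples; the names
    and shapes here are hypothetical, not SpokenWord.org's code.
    """
    # Keep the newest first so the oldest fall off the end.
    programs = sorted(programs, key=lambda p: p[0], reverse=True)

    # Limit 1: cap the number of programs (default maximum is 1,000).
    programs = programs[:max_count]

    # Limit 2: optionally drop programs collected too long ago.
    if max_age_days is not None:
        cutoff = datetime.now() - timedelta(days=max_age_days)
        programs = [p for p in programs if p[0] >= cutoff]

    return programs
```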

Note: Although you can set all of these values now, only #3 (RSS limits) is operational. We won’t turn on #1 and #2 until at least Wednesday morning (July 15) at 9am Pacific time, to allow you time to modify any collections that may be affected by the change.

Facebook Connect for SpokenWord.org

Yesterday I rolled out Facebook Connect for SpokenWord.org, and if you have a Facebook account I urge you to stop by, give it a try, and let us know if it works for you. The integration is about two-thirds done, but you probably won’t notice the missing one-third. It has been an interesting process so far. I previously implemented OpenID, and I expected something similar, but that’s not the case. The concepts of the two systems are similar, but the realities are quite different. For example:

  • Facebook’s documentation is awful. Rather than one or two coherent documents, there are dozens of wiki pages written, as far as I can tell, by the developers themselves rather than by good tech writers. Each page is written in a different style and documents (usually incompletely) one small piece of the big picture. To actually integrate Facebook into an existing identity system, there are many moving parts, more than necessary.
  • Although a FB user explicitly authorizes your application, FB refuses to supply his or her email address through the API. Instead, there’s a very baroque system by which you send FB hashed versions of the email addresses of all your existing registered members in advance, so that Facebook can let you know when one of them matches a FB user at the time that user authorizes your application. But if a new (to you) FB user logs into your site, you don’t have that existing data. (OpenID’s API gives you an email address if the user approves.) A sketch of the hashing step appears after this list.
  • The Facebook Terms of Service are oppressive. They must have been written by Facebook’s Business Prevention Division. For example, you are not allowed to store (in a database) any personal data you receive from Facebook Connect. When a user authorizes our app, FB sends us the user’s first and last names. We’re allowed to display those while the user is connected, but not thereafter. (We get around this by asking the user to give us this data independently.) I noticed that TechCrunch uses Facebook Connect for comments, so I was curious what would happen if I left a comment on their blog and then de-authorized the TechCrunch app. Sure enough, my comments disappeared from their site, and when I re-enabled the app, the comments re-appeared. Weird.
  • The email thing is particularly nasty, for while we’re not sending FB our users’ email addresses unencrypted (which would violate our own Privacy Policy), we are sending an MD5 hash of those addresses. This means FB can compare the hashes we send them to the 100+ million email addresses they already have, allowing them to determine that someone is a registered member on our site even before that person authorizes the use of his/her FB identity to access our site.
  • FB requires that if a user is logged in via Facebook, you display that user’s Facebook photo on every page they view. No reason is given for this requirement, and very few Facebook Connect sites do so. (Digg is an exception.) Note that this (and other ToS issues) requires that you load FB’s supporting JavaScript on every page.
  • Oh, did I mention how bad their documentation is?
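
To make the hashed-email scheme concrete, here’s a minimal sketch of the hashing done on our side before anything is sent to FB. The function name and the trim-and-lowercase normalization are my illustrative assumptions, not necessarily what Facebook’s spec requires.

```python
import hashlib

def email_hash(address):
    # Normalize, then hash; FB only ever sees the MD5 digest, never the
    # address itself. The trim+lowercase normalization is an assumption.
    return hashlib.md5(address.strip().lower().encode("utf-8")).hexdigest()

# Pre-register all existing members' hashes with FB in advance, so FB
# can match them when one of those users authorizes the app:
members = ["alice@example.com", " Bob@Example.com "]
print([email_hash(m) for m in members])
```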

All of that said — and there are many more issues — we’ve had many requests for this integration as a way to make it easier to register for and log in to SpokenWord.org. I hope you find it valuable.

Adventures in Full-Text Search

SpokenWord.org calls itself a site for “finding and sharing audio and video spoken-word recordings.” Sounds great, but our “finding” capabilities (search, in particular) have been pretty bad. In mid-March I started writing a fancy new full-text search module that worked across database tables and allowed all sorts of customization and advanced-search features. Six weeks and a few thousand lines of code later, I had a new system that…well, sucked. There are all sorts of reasons why, but it sucked. Bottom line: It just didn’t do a decent job of finding stuff.

I then considered implementing something like Solr, based on Lucene. But the more I thought about it, the more I realized that would be only marginally better.

Searching for audio and video programs from a database that will hit 250,000 programs in the next few hours comes down to a few architectural issues:

  • You’ve got to search the text of titles, descriptions, keywords, tags and comments, which in our case are stored in separate database tables.
  • There are three ways of doing this: (1) read the database tables in which these strings are stored in real time; (2) in background/batch, build a separate table of the integrated text from the separate tables, then search this integrated table in real time; or (3) build the integrated table by scraping/crawling the site’s HTML pages and then, as in #2, search that table in real time. (Technique #2 is sketched just after this list.)
  • Make your search smart by ignoring noise words, tolerating (or correcting) spelling mistakes, understanding synonyms, etc.
  • Develop a ranking algorithm to display the most-relevant results first.
  • Provide users advanced-search options such as Boolean logic and the ability to restrict a search to a subset of objects (e.g., only programs or only feeds).
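
Here’s a toy sketch of technique #2, with SQLite and its FTS5 full-text module standing in for the real database: a batch step folds the text scattered across tables into one integrated table, and real-time search hits only that table. All table and column names are hypothetical.

```python
import sqlite3

# Requires an SQLite build with the FTS5 extension (most are).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE programs (id INTEGER PRIMARY KEY, title TEXT, description TEXT);
    CREATE TABLE tags (program_id INTEGER, tag TEXT);
    CREATE VIRTUAL TABLE search_index USING fts5(program_id, body);
""")
db.execute("INSERT INTO programs VALUES (1, 'IT Conversations', 'Interviews and talks')")
db.execute("INSERT INTO tags VALUES (1, 'podcast')")

# Batch step: fold each program's title, description and tags into one row.
for pid, title, desc in db.execute("SELECT id, title, description FROM programs").fetchall():
    tags = " ".join(t for (t,) in
                    db.execute("SELECT tag FROM tags WHERE program_id = ?", (pid,)))
    db.execute("INSERT INTO search_index VALUES (?, ?)",
               (pid, f"{title} {desc} {tags}"))

# Real-time step: one simple, fast query against the integrated table.
print(db.execute("SELECT program_id FROM search_index "
                 "WHERE search_index MATCH 'conversations'").fetchall())
```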

My fancy search code used technique #1, and the resulting code generated some of the longest, most confusing and slowest SQL queries I’ve ever seen. And it’s buggy. Solr uses technique #2, and that’s clearly better for all sorts of reasons. #3 seemed like a particularly poor solution because (a) you lose track of the differences between titles and tags, for example, and (b) it’s kludgy. Or so I thought.

But I’ve now implemented technique #3 by outsourcing the whole thing to Google Custom Search, and the initial results are spectacular. Here’s why:

  • Scraping HTML may sound kludgy, but it works.
  • Google knows how to scrape web pages better than anyone.
  • So long as you keep the text you want searched in the page (e.g., not served via Ajax), Google will find it.
  • Google’s smart-search, advanced-search and relevance-ranking are better than anything you can write or find elsewhere.
  • Google does all of this with their CPU cycles, not ours, thereby eventually saving us an entire server and its management.
  • Google allows educational institutions and non-profit organizations to disable ads.
  • Google does a better job of finding what you want than is possible using an in-house full-text search with lots of customized filtering options.

This last one is important. I spent a lot of time giving users tools for narrowing their searches. For example, I provided radio buttons to distinguish between programs, feeds and collections. But it annoyed even me that users had to check one of these buttons. People would search for “IT Conversations” and find nothing, because the default was to search for individual programs, not feeds, and there are no individual programs with that string in their titles or descriptions. Annoying and confusing.

Then I had a moment of clarity. Rather than proactively providing users control of the object type up front, I came up with another scheme. I changed the HTML <title>s of the pages so that they now start with strings like Audio:, Video:, Feed: and Collection:. This way (once Google re-scrapes all quarter-million pages) the search results will allow you to immediately and clearly distinguish programs (by media type) from RSS/Atom feeds and personal collections. I’ve tried it on my development server and it’s great. Because of the value of serendipity and the fact that Google’s search is so good, I find it’s much more valuable to discover objects in this way than to specify a subset of the results in advance.
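
The scheme itself is trivial. Something like this (illustrative only, not the site’s actual code) runs wherever we render a page’s <title>:

```python
# Illustrative only: prefix each page title with its object type so the
# type is visible right in Google's search results.
PREFIXES = {
    "audio": "Audio:",
    "video": "Video:",
    "feed": "Feed:",
    "collection": "Collection:",
}

def page_title(object_type, name):
    return f"{PREFIXES[object_type]} {name}"

print(page_title("feed", "IT Conversations"))  # Feed: IT Conversations
```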

Finally, I’ve discovered that Custom Search supports a feature from regular Google search. You can specify part of a URL as a filter. For example, if you want to search only for feeds, you can start your search string with “http://spokenword.org/feed”. The result will only include our feeds. Same for /collections, /members and /programs. How cool is that? (Thank goodness for RESTful URLs!) I have yet to integrate that into the web site — a weekend project — but it means we can offer the user the ability to restrict the search to a particular type of object if that’s what they want.
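
The integration could be as simple as prepending the matching URL prefix to whatever the user typed before handing the query to Google. The paths mirror the ones above; everything else in this sketch is an illustrative assumption.

```python
from urllib.parse import quote_plus

# Illustrative sketch: restrict a search to one object type by
# prepending the matching RESTful URL prefix to the user's query.
PREFIX_BY_TYPE = {
    "feed": "http://spokenword.org/feed",
    "collection": "http://spokenword.org/collections",
    "member": "http://spokenword.org/members",
    "program": "http://spokenword.org/programs",
}

def build_query(terms, object_type=None):
    if object_type:
        terms = f"{PREFIX_BY_TYPE[object_type]} {terms}"
    return quote_plus(terms)

print(build_query("IT Conversations", "feed"))
```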

I’m so glad that Google Custom Search works as well as it does that I’ve decided not to brood about the six weeks of my life wasted designing, coding and debugging my own search. It was another one of those learning experiences.

Note: Not all of the features described above appear on SpokenWord.org yet, and the maximum benefit won’t be visible until Google re-scrapes the site, but if you use the Search box at the top of the right-hand column you’ll get the idea. Very cool.

The Submission Wizard

Making it easier to submit content to SpokenWord.org has always been high on the to-do list. For the past seven weeks I’ve been working on a Submission Wizard, which I hope goes a long way towards that goal. It’s a wizard because it takes what you give it and tries to figure out what you meant. If you supply the URL of a media file, it will then ask you for an associated web page from which it will suggest the title, description and keywords. If you start by supplying a web-page URL, the wizard will scrape that page looking for RSS/Atom and OPML feeds. And whether it finds those feeds or you explicitly supply a feed’s URL, the wizard will give you choices of what to submit and what to add to your collection(s) before showing you all the steps it takes to follow your instructions.
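
As one example of what the wizard does under the hood, here’s a minimal sketch of RSS/Atom feed autodiscovery: scan a page’s <link rel="alternate"> tags for feed MIME types. This is a simplified stand-in for the wizard’s actual scraping code.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    """Collect feed-autodiscovery links from a page. A simplified
    stand-in for the wizard's scraper, not the real thing."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in FEED_TYPES and a.get("href")):
            self.feeds.append(urljoin(self.base_url, a["href"]))

finder = FeedFinder("http://example.com/")
finder.feed('<link rel="alternate" type="application/rss+xml" href="/feed.xml">')
print(finder.feeds)  # ['http://example.com/feed.xml']
```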

After the RSS/Atom feed parser, which continues to be a maintenance challenge, the Submission Wizard is probably the most complex single piece of code on the site. It weighs in at about 6,000 lines of new code, and it’s certainly not done. Give it a try, and if it doesn’t do what you think it should, let me know. I’m particularly interested in finding more web pages that the Submission Wizard can learn how to scrape.

Trying to Crack YouTube Videos

Anyone out there have an idea how to solve this?

Over at SpokenWord.org we’re trying to figure out how to scrape YouTube pages (or pages with embedded YouTube players), then hack together a video or Shockwave URL that we can include in the <enclosure> element of RSS feeds. We’ve been able to do this for programs in YouTube EDU such as this page (http://www.youtube.com/watch?v=Y1XpTc1-lh0), which we convert to this media-file URL (http://www.youtube.com/v/Y1XpTc1-lh0&f=user_uploads&app=youtube_gdata). The latter URL can be played by standard Flash players, so we can include it in RSS feeds. But this only works for certain special cases such as YouTube EDU, not for mainstream YouTube pages.
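
For reference, the YouTube EDU conversion that does work is just a video-ID swap, along these lines (a sketch of the conversion shown above, which again fails for mainstream YouTube pages):

```python
import re

def edu_media_url(watch_url):
    """Turn a YouTube watch-page URL into the Flash media URL that has
    worked for YouTube EDU programs, per the example above. Returns
    None when no video ID can be found."""
    m = re.search(r"[?&]v=([\w-]+)", watch_url)
    if not m:
        return None
    return ("http://www.youtube.com/v/%s&f=user_uploads&app=youtube_gdata"
            % m.group(1))

print(edu_media_url("http://www.youtube.com/watch?v=Y1XpTc1-lh0"))
```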

Terms of Service

I guess it had to happen sooner or later. Someone submitted a hard-core porn video feed to SpokenWord.org. (No, don’t go looking for it!) Maybe we’ve just been lucky thanks to keeping a fairly low profile. We do accept RSS feeds with content tagged as ‘explicit’, but there’s explicit (perhaps just audio with adult language) and then there’s really explicit. I’m thinking of dealing with it in a few ways.

  • No content tagged as ‘explicit’ on the home page.
  • Content tagged as ‘explicit’ is invisible to those who have not opted in via their profiles. (A sketch of this rule follows the list.)
  • In order to opt-in, you must read and agree to the Terms of Service *and* you must claim to be 18 years old or older.
  • You can register without explicitly agreeing to the Terms of Service, but the first time you submit content, you’ll have to agree explicitly. (Sorry for the re-use of the word ‘explicit’.)
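
In code form, the rules I’m considering amount to something like this sketch (all field and function names are hypothetical):

```python
def visible_to(program, member):
    """Proposed rule: explicit content is hidden unless the member has
    opted in, which itself requires agreeing to the ToS and claiming to
    be 18 or older. All field names here are hypothetical."""
    if not program.get("explicit"):
        return True
    return bool(member and member.get("explicit_opt_in"))

def home_page_programs(programs):
    # Proposed rule: nothing tagged 'explicit' on the home page, ever.
    return [p for p in programs if not p.get("explicit")]
```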

I’ve got no experience with this, and I welcome suggestions. In particular, if you can recommend another site that handles occasional explicit content well and/or has good Terms of Service, let me know.

Ratings Now in SpokenWord.org RSS Feeds

All RSS feeds generated by SpokenWord.org now include program rating data according to the conversationsNetwork namespace. If a program has been rated, the <item> for that program now includes the ratingAverage and ratingCount elements. If the feed was requested with authentication (identifying a particular SpokenWord.org member) it will also include the ratingIndividual element. The ratingTimestamp element has not yet been implemented. I’m still trying to figure out if it’s worthwhile and whether it should reflect the last rating by the authenticated individual or by anyone.
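
Here’s a sketch of what an <item> carrying these elements looks like, generated with Python’s ElementTree. The element names come straight from our feeds; the namespace URI below is a placeholder, since I haven’t reproduced the actual declaration here.

```python
import xml.etree.ElementTree as ET

# The ratingAverage/ratingCount/ratingIndividual names come from the
# feeds themselves; the namespace URI below is a placeholder, not the
# real declaration.
CN = "http://example.org/conversationsNetwork-namespace"
ET.register_namespace("conversationsNetwork", CN)

item = ET.Element("item")
ET.SubElement(item, "title").text = "Example Program"
ET.SubElement(item, "{%s}ratingAverage" % CN).text = "4.2"
ET.SubElement(item, "{%s}ratingCount" % CN).text = "17"
# Present only when the feed is requested with authentication:
ET.SubElement(item, "{%s}ratingIndividual" % CN).text = "5"

print(ET.tostring(item, encoding="unicode"))
```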

Users Tell Us What’s Wrong

You can always count on loyal website visitors to tell you what you really need to hear. I emailed a survey to the registered members of SpokenWord.org this morning and already have some great responses. Here’s a sample of the answers to “What do you NOT like about SpokenWord.org?”

  • collections
  • No Comment
  • Not sure I totally understand how it works.
  • I do not understand how I am supposed to use SW, and the web pages don’t make it manifest. What is “collect” and what does “subscribe” imply? Unclear. Yes, I can spend a lot of time clicking “help” to “FAQ” to “Advanced: Collection” but I still feel that I’m using a Swiss Army Knife as a club. Awkward and unfamiliar.
  • – the search is annoying; I wish it would search both feeds and episodes, without having to go to a separate page. – the look is quite cluttered and visually messy – it’s not good for discovery; I don’t find the homepage content useful — it’s rarely something that I want to listen to, and doesn’t change frequently enough. I haven’t looked at other’s collections much, maybe that would be helpful.
  • Jason Ponten
  • Too early to tell
  • Too much data entry required to add a single program. The feed reader should be more forgiving. I’ve tried to add several feeds that failed, but I assume they work find with iTunes or other feed readers.
  • Removing individual programs from collection (that was added trough feed) was not working, but now that seems to be fixed.
  • Nothing
  • I’m not so sure about the Stack Overflow-type badges and such, though I’m always a late adopter in the social media thing.
  • I used to get these emails from ITC, and I just manually downloaded each one and put it in a directory. Now, I don’t know where to find that stuff. there seems so much, it’s confusing to a simpleton like myself.
  • Same as above
  • not friendly for new users. not clear what should you do there..
  • I am not sure how to use it.
  • – No audio podcasts of video talks available – The time lag in updating my feed after making changes to my collection. – The strictness in parsing RSS feeds has not allowed me to move all of my podcasts over to Spoken Word.
  • to much mumbo jumbo and does not seem smooth
  • It was difficult to figure out at first.
  • Wasn’t obvious how to subscribe to a feed, although I just went back and found it.
  • Not live. New feeds sometimes take days to add programs to my collections. When a new feed is added you should give the ability to add a small number of older programs immediately to test the feed.
  • I just didn’t find a lot of podcasts that I hadn’t already found. It’s been a while since I checked. I’ll look again and maybe this opinion will change.
  • Cluttered UI.
  • Some of the feed parsing is pickier than I thought was necessary. If I make a collection on SpokenWord and subscribe to its feed on my PC its not always easy to differentiate which original feed an episode is from. So I don’t use this feature. it’s not really your fault but there isn’t an easy way to contribute to SpokenWord other than adding feeds. I download podcasts with gpodder on my PC and I don’t tag or rate podcasts because it’s a lot of extra effort.
  • can’t get to it ALL!
  • Off the top of my head? Nothing.
  • Too hard to find things. I am pretty new at this. I also visited TED. I felt it was easier to find interesting stuff there.
  • Can’t think of anything…
  • Well, for one thing, the feedback link to tell you what didn’t work didn’t work. And here’s an obvious but unappreciated idea: I signed up to hear a progran that wasn’t there. Can you build some kind of machine that’ll delete busted or cancelled links? Also, I had a little too much difficulty finding the actual link to the program. In fact, more than a little too much.
  • Search is so broken! I search for my own podcast and it doesn’t show up in search results – even though I’ve submitted it. I have to type the exact url. Related keywords are useless. Also, I would really, really encourage you to create multiple lists of podcasts broken down by various categories, topics, niches, sub-sub niches, brand-new podcasts, etc. Even if these lists are in a separate section of the site (not taking up valuable home page real estate) these would be invaluable to finding/discovering podcasts I haven’t heard about previously.
  • there is an empty yellow popup area on the home page that just says “close window”
  • The layout and searching for new podcasts. Not much Canadian Content.

What do you think? Maybe we have a UI problem?