First we need an indexer: some code that is smart about ETags, Last-Modified headers, redirects, and all that jazz. We also need something to parse all that incoming data into something useful. Mark's Universal Feed Parser seems to be capable of all that. So we set that up as the piece of the puzzle that grabs the data, parses it, and, with a little more glue, stores it.
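To make the "smart about ETags and Last-Modified" part concrete, here's a minimal sketch of the conditional-GET bookkeeping the indexer would do. The function names and the shape of the cache dict are my own illustration, not anything from the post (and for what it's worth, the Universal Feed Parser can do this bookkeeping itself if you pass it the saved `etag` and `modified` values):

```python
# Sketch of the "polite fetching" glue: remember the validators a server
# sent last time, send them back, and treat 304 Not Modified as "skip it".

def conditional_headers(cache_entry):
    """Build HTTP request headers that let the server answer 304."""
    headers = {}
    if cache_entry.get("etag"):
        headers["If-None-Match"] = cache_entry["etag"]
    if cache_entry.get("last_modified"):
        headers["If-Modified-Since"] = cache_entry["last_modified"]
    return headers

def update_cache(cache_entry, status, response_headers):
    """After a fetch, store the new validators.

    Returns (new_cache_entry, changed): on a 304 the feed is unchanged,
    so we keep the old entry and skip re-parsing entirely.
    """
    if status == 304:
        return cache_entry, False
    new_entry = dict(cache_entry)
    new_entry["etag"] = response_headers.get("ETag")
    new_entry["last_modified"] = response_headers.get("Last-Modified")
    return new_entry, True
```

The payoff is that a feed polled every half hour costs the publisher almost nothing when it hasn't changed.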
Where to store that data? MySQL seems like a nice place. So, we use the Universal Feed Parser to fill up MySQL with all this raw data, and where do we go from there? Anywhere we want. Confused? Here’s what I’m thinking… Once we’ve got all this raw data, we can write code that does all sorts of crazy stuff with it, and that code can be Perl, Python, PHP, or even Java – anything that can talk to MySQL really.
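Here's a rough sketch of that store-and-query loop. The post proposes MySQL; I'm using Python's built-in sqlite3 here only so the example runs on its own, and the table and column names are my own guess at what a raw-entries table might look like:

```python
import sqlite3

# Stand-in for MySQL so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE entries (
        feed_url  TEXT,
        title     TEXT,
        link      TEXT,
        summary   TEXT,
        published TEXT
    )
""")

# Pretend this dict just came out of the Universal Feed Parser:
entry = {
    "title": "Hello World",
    "link": "http://example.org/post/1",
    "summary": "First post.",
    "published": "2004-01-01T00:00:00Z",
}
conn.execute(
    "INSERT INTO entries VALUES (?, ?, ?, ?, ?)",
    ("http://example.org/feed", entry["title"], entry["link"],
     entry["summary"], entry["published"]),
)

# From here on it's just SQL, so the consumer could be Perl, PHP,
# Java, whatever -- anything with a database driver:
row = conn.execute("SELECT title, link FROM entries").fetchone()
```

The point is the hand-off: once the fetch-and-parse code has filled the table, every downstream tool speaks plain SQL and never needs to know what a feed is.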
Ideally it would be cool if MySQL had a base table with the raw data, and then different people could write different code in different languages that did different things. Think Different and all that.
Anyway, that’s what I’ve been thinking about. I think this would give people a common base, namely data stored in a SQL database, and at that point it doesn’t matter whether it came from RSS 0.91, RSS 2.0, Atom 0.3, or whatever. As long as we can figure out some sort of base data model, we’re set.
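As a sketch of what that base data model might mean in practice: each entry, whatever wire format it arrived in, gets mapped onto the same small set of fields before it hits the database. The field names and aliases below are illustrative guesses, not a proposed standard:

```python
# Map a parsed entry onto one common shape, whether it started life as
# RSS 0.91, RSS 2.0, or Atom 0.3. Missing fields become None.

BASE_FIELDS = ("title", "link", "summary", "published")

# Different formats use different names for the same idea:
# RSS calls it "description", Atom calls it "summary"; RSS has
# "pubDate", Atom 0.3 had "issued".
ALIASES = {
    "title": ("title",),
    "link": ("link",),
    "summary": ("summary", "description"),
    "published": ("published", "pubDate", "issued"),
}

def normalize(entry):
    """Reduce a parsed entry dict to the common base model."""
    return {
        field: next((entry[k] for k in ALIASES[field] if k in entry), None)
        for field in BASE_FIELDS
    }
```

With something like this in the glue layer, the SQL table only ever sees one schema, and the format wars stay out of everybody's queries.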
I don’t know if this would work, or who else would be interested in such a thing, but figured I should throw it out there instead of just rolling it around in my head.