I just posted several example scraping sessions that may be of help to those starting out with screen-scraper: http://www.screen-scraper.com/support/examples/scrapbookfinds_examples.php.
Back when screen-scraper was just a babe in my arms I used to include scraping sessions in the download. The scraping sessions extracted stuff from Slashdot, Freshmeat, and Weather.com. The trouble was, the sites would change from time to time, and it was always a pain keeping up with them. What was worse, occasionally people would download screen-scraper, run the scraping sessions, and find that they didn’t work (because the sites had changed). They’d then report back that our software stunk because it didn’t even work with the very examples we provided.
After all of that I decided it simply wasn’t worth providing examples using sites we didn’t have control over. That’s why we set up this mock e-commerce web site on our server. We wanted to provide a “real world” example, but still needed to have control over the site so that we didn’t need to continually update it.
When we started doing ScrapbookFinds, it occurred to me that we could share those scraping sessions with others. We don’t control the sites, but we’re constantly monitoring the scraping sessions and updating them as the sites change. The hope is that these scraping sessions will provide templates and examples that both help people learn screen-scraper and serve as boilerplates they can tweak to create their own scraping sessions.
As a side note, if it’s of interest, we probably average about 15 minutes per week updating scraping sessions, and we’re scraping about 15 sites (i.e., the sites either don’t change that often, or we’ve set up our scraping sessions to be fuzzy enough that they don’t break when minor changes are made).