Or so insinuates Rupert Murdoch, who claims he’ll start blocking Google from indexing his papers’ websites. Why? Because he thinks search engines are stealing his revenue – and we all know how much revenue there is to be stolen. (Hint: Not much.)
He has a point – if all you’re going to read is the first few grafs of a story, and they’re available for free via a quick search, that will cut into your revenue. So blocking Google – and Bing, Yahoo, and all the rest – makes perfect sense.
The flip side, though, is that then only your dedicated readers, your subscribers, will be reading your pieces. Which is pretty much the way newspapers (and magazines, excepting those left in doctors’ offices for 13-27 months) have operated for years: giving up the casual reader, or someone looking for a specific piece of information.
So is there a compromise? Maybe, and maybe it already exists. (Web gurus, fill in my gaps, if you will.) How about building some code that lets search engines index page information (as they do now) but display only a specific bit of it, like a headline and subhead, or a short summary? All of the content gets used to find the page, but only the info the creator chooses gets displayed in the search results.
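If I understand it right, something close to this already exists in the form of the robots meta tag. A sketch of how a publisher might use it (exact support varies by search engine, so check each one’s documentation):

```html
<head>
  <!-- The page can be crawled and ranked on its full content... -->
  <!-- ...but "nosnippet" asks engines not to show a text excerpt in -->
  <!-- results, and "noarchive" asks them not to offer a cached copy. -->
  <!-- Readers see only the headline-style title and the link. -->
  <meta name="robots" content="index, nosnippet, noarchive">
  <title>Exclusive: The Story You’ll Have to Pay For</title>
</head>
```

Whether publishers can count on every engine honoring those directives is another question, of course.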
An online equivalent to the sky box, anyone?