I wonder whether we'll soon see a breakthrough in AI enter the body of scientific literature not by being discovered by an AI researcher but by being dissected out of a captured bot by some sort of cyberzoologist, its principles gradually revealed as its innards are studied, while the real discoverers remain forever anonymous (if perhaps comfortably well-off) in the spam underworld.
There have been stories of "computers making use of human processing power", with spammers paying legions of teenagers to translate the distorted letters - a task computers are no good at - so they can sign up for scores of mail accounts.
I thought that was likely to happen too, with either teenagers or call centres in third-world countries dedicated to signing up for webmail/IM accounts. Though a well-designed website should be able to prevent the images from being sent off separately and the answers used to fill in the form later (for instance by tying each image to the signup session and expiring it after a short interval), or at least make that hard.
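A minimal sketch of what I mean by tying the image to the session: issue each CAPTCHA with a signed, short-lived token, and only accept an answer presented together with its own unexpired token. All names here (the functions, the 90-second TTL, the secret) are hypothetical illustrations, not anything a real webmail provider does.

```python
# Hypothetical sketch: bind a CAPTCHA to one signup attempt with a short
# expiry, so the image can't be farmed out and the answer replayed later.
import hashlib
import hmac
import secrets
import time

SECRET = b"server-side secret"  # assumed: key known only to the server
TTL = 90                        # assumed: seconds the challenge stays valid

def issue_challenge(answer: str) -> dict:
    """Return a token the signup form must echo back with the user's answer."""
    nonce = secrets.token_hex(8)            # unique per challenge
    expires = int(time.time()) + TTL
    msg = f"{nonce}:{expires}:{answer}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {"nonce": nonce, "expires": expires, "mac": mac}

def verify(token: dict, answer: str) -> bool:
    """Accept only a matching answer for an unexpired token."""
    if time.time() > token["expires"]:
        return False                        # too late: image was relayed?
    msg = f"{token['nonce']}:{token['expires']}:{answer}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, token["mac"])
```

The short expiry is what hurts the relay scheme: the image has to be solved within the window, not stockpiled. A real deployment would also record each nonce server-side and reject reuse, so a solved challenge can't be replayed even inside the window.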
Baffletext? I'll say. It's not just bots who are going to have difficulty with Baffletext - I can't for the life of me figure out what it says in the second example given.
Ethics and AI, the game: http://sl4.org/bin/wiki.pl?GurpsFriendlyAI