A programmer named Nigel Leck got a little tired of arguing with global warming denialists. For years, he had been debating climate change with people on the Internet, and recently it dawned on him that he was making the same arguments, and referencing the same resources, over and over again.

So what did he do? He automated it.

He built a bot and gave it its own Twitter account. Every five minutes, the bot sweeps through Tweets, looking for keywords and phrases that are characteristic of denialists, and it auto-responds to them with links to articles that are relevant to the original Tweet.
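The core logic described above, matching keywords and replying with a canned link, can be sketched in a few lines. This is a minimal illustration, not the actual bot: the trigger phrases, URLs, and function names below are all hypothetical, and the real Twitter polling and posting code is omitted.

```python
# Hypothetical mapping of denialist catchphrases to rebuttal links.
# These phrases and URLs are illustrative, not the bot's real rules.
RESPONSES = {
    "climate has always changed": "https://example.org/natural-cycles",
    "global cooling in the 70s": "https://example.org/70s-cooling-myth",
    "co2 is plant food": "https://example.org/co2-plant-food",
}

def pick_reply(tweet_text):
    """Return a rebuttal link if the tweet contains a known phrase,
    or None if nothing matches (in which case the bot stays silent)."""
    lowered = tweet_text.lower()
    for phrase, link in RESPONSES.items():
        if phrase in lowered:
            return "Actually, see: " + link
    return None
```

In practice, a function like this would run inside a loop that polls Twitter's search API every five minutes and posts the chosen reply back to the matching user.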

Twitter users have been arguing with the bot, not knowing that they're arguing with a machine. (The 140-character limit in the Twittersphere makes everyone's thoughts seem stunted, so it's actually the perfect place for a machine to hide.) Other people have gotten wind of the bot, and many have started playing with the account as a way of testing it. Its failures are attributable to the limits of natural language processing, the area of artificial intelligence concerned with parsing meaning from human language, which remains remarkably difficult. One problem, for example: the bot cannot detect sarcasm, so it treats tongue-in-cheek Tweets as serious.

As funny as the project is, and as impressive a feat of programming, it still strikes me as having been created with the wrong idea in mind: that we can change people's minds by spewing facts at them. If they don't change their minds, the solution seems to be to spew more facts, or to repeat them again and again, until the person's will breaks down and they admit they were wrong.

I'm just waiting to see what happens when someone creates a bot to argue the other side of things. Then the two can argue with each other endlessly, back and forth, while everyone else watches: an annoying reminder of the myriad political arguments I've witnessed between coworkers at my jobs.