To Truly Fake Intelligence, Chatbots Need To Be Able To Change Your Mind

Could you ever imagine yourself in a heated argument with a chatbot? Like, really passionate, deeply reasoned position-taking—argument, counterargument, countercounterargument, countercountercounterargument. Could you imagine a chatbot convincing a jury of a defendant’s guilt? Or even just trying? And if you could imagine it, what would that mean?

Questions like these are at the core of a paper published recently in AI Matters by Samira Shaikh, a computer science-slash-psychology researcher at the University of North Carolina-Charlotte. In it, she considers the notion of social competence in artificial intelligence agents—that is, the ability of a machine to accomplish goals that are social in nature. A key example of a social goal is the ability to persuade others, to change minds. Thus, a bot capable of persuasion would represent a significant advance in artificial social intelligence.

To that end, Shaikh set about automating “the very process of persuasive communication, by designing a system which can purposefully communicate, without any restrictions on domain or genre or task, and which has the clear intention of persuading the recipients of its messaging.” So: a lawyer-bot, basically. Or a politician-bot. (I’m not sure which is more frightening.) Shaikh’s task comes down to two core questions. First, can…


