I spoke to Noah Goodman and Michael Frank – cognitive scientists at Stanford University – about the first mathematical model of shared context between speakers and why it’s the key to making computers better at conversation.
“Our best AI systems right now tend to be employed by companies as phone-answering services,” says Michael Frank, head of Stanford University’s Language and Cognition Lab. “But try explaining to these things that you want to cancel a fraudulent charge on your credit card.” He laughs. “You can see there’s a long way to go. They’re weak at understanding what you said, but they’re weakest at going from what you said to what you actually meant.”
When communicating, context is king. Meaning is often conveyed as much by what isn’t said as by what is. For instance, the proposal “Would you like to go for a coffee?” can be turned down with the words “I have to return some library books”, even though, taken out of context, those words do not constitute a rejection. Computers, by contrast, can be literal to a fault, and such indirectness is lost on them.
Frank and colleague Noah Goodman, also a cognitive scientist at Stanford, have developed a mathematical encoding of what they call “common knowledge” and “informativeness” in human conversation. “We have a vastly powerful predictive model of the world,” says Goodman. “When somebody goes to understand a statement that somebody else has made, they’re making the best guess about the meaning of that statement, incorporating all these factors like informativeness and context.”
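The idea Goodman describes can be illustrated with a small sketch in the style of a rational speaker–listener model: a literal listener interprets words at face value, a speaker prefers more informative words, and a pragmatic listener inverts the speaker's choices with Bayes' rule. The objects, words, and uniform priors below are hypothetical examples chosen for illustration, not details taken from Frank and Goodman's actual model.

```python
# A minimal sketch of pragmatic interpretation via Bayesian inversion.
# The scenario (three objects, four words) is an invented example.
objects = ["blue square", "blue circle", "green square"]

# Literal semantics: which objects each word is true of.
meanings = {
    "blue":   {"blue square", "blue circle"},
    "green":  {"green square"},
    "square": {"blue square", "green square"},
    "circle": {"blue circle"},
}

def literal_listener(word):
    """Uniform belief over the objects the word is literally true of."""
    true_of = [o for o in objects if o in meanings[word]]
    return {o: 1 / len(true_of) for o in true_of}

def speaker(obj):
    """Speaker picks words in proportion to informativeness:
    how strongly a literal listener would recover the right object."""
    scores = {w: literal_listener(w).get(obj, 0.0) for w in meanings}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items() if s > 0}

def pragmatic_listener(word):
    """Bayes' rule with a uniform prior over objects:
    P(object | word) is proportional to P(word | object)."""
    scores = {o: speaker(o).get(word, 0.0) for o in objects}
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items() if s > 0}
```

Hearing the ambiguous word “blue”, the pragmatic listener favours the blue square over the blue circle, because a speaker referring to the circle would more likely have said “circle”; the interpretation goes beyond what the word literally says, which is the kind of inference the model formalises.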
Read the whole interview at Wired.co.uk.