Artificial Intelligence Challenge 4: Closing the gap between AI-agent and human behavior by bringing more social scientists into the field.

Why is it difficult?

Humans interact with their environment and with other humans in fundamental ways that are hard for a machine to mimic. Can we train an AI agent to learn from human social behavior and develop a machine common sense of what is acceptable and what is not? Developing that cognitive architecture for an AI agent is hard because it requires a good model of human intelligence. In a social gathering, an AI agent has to not only predict the next word from a corpus but also learn from the social context, the surrounding environment, and the kind of people present. We don't have models for training an AI agent to pay attention to all the information available around it; for example, it should not use offensive language in certain contexts. Social scientists who understand human behavior generally don't know AI technology, and AI engineers may not be well versed in human behavior, so solving this problem requires collaboration between the two.
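To make the problem concrete, here is a minimal, purely illustrative sketch of what "paying attention to social context" might look like in code. Every name in it (SocialContext, BANNED_BY_SETTING, choose_reply) is hypothetical, and the hard-coded rules are only stand-ins for the kind of social knowledge that, in practice, would have to be learned from data curated with social scientists rather than written by hand.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

# Hypothetical illustration: a conversational agent that scores candidate
# replies not only by language-model likelihood but also by how well they
# fit the social context. All names, tags, and scores here are invented.

@dataclass
class SocialContext:
    setting: str                                         # e.g. "funeral", "office"
    audience: List[str] = field(default_factory=list)    # e.g. ["children", "colleagues"]

# Toy "common sense" rules; a real system would learn these from human
# behavior rather than hard-code them.
BANNED_BY_SETTING = {
    "funeral": {"joke", "congratulations"},
    "office": {"profanity"},
}

def context_penalty(reply_tags: Set[str], ctx: SocialContext) -> float:
    """Penalize replies whose tags clash with the social setting."""
    banned = BANNED_BY_SETTING.get(ctx.setting, set())
    return float(len(reply_tags & banned))

def choose_reply(candidates: List[Tuple[str, float, List[str]]],
                 ctx: SocialContext) -> str:
    """Pick the reply with the best combined language + social-fit score."""
    best, best_score = "", float("-inf")
    for text, lm_score, tags in candidates:
        score = lm_score - context_penalty(set(tags), ctx)
        if score > best_score:
            best, best_score = text, score
    return best

if __name__ == "__main__":
    ctx = SocialContext(setting="funeral", audience=["family"])
    candidates = [
        ("Here's a funny story...", 0.9, ["joke"]),
        ("I'm so sorry for your loss.", 0.7, ["condolence"]),
    ]
    print(choose_reply(candidates, ctx))  # -> "I'm so sorry for your loss."
```

The sketch also shows why the challenge is hard: the interesting part is not the scoring loop but the contents of the rule table, which is exactly the social knowledge we do not yet know how to model or learn.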

What is the impact?

AI is transforming every aspect of our life and work. From personal assistant devices to digital healthcare, many interactive applications would benefit from improved social behavior in AI agents. For example, an AI-powered bot with better social awareness could support the treatment of patients with mental illness and become a genuinely good conversational partner for humans.