Artificial Intelligence Challenge 6: Human-robot interaction raises important social and ethical questions

Why is it difficult?

Robots and autonomous agents interact with humans in many ways: they hold meaningful conversations with us and even perform complex tasks such as driving a car. Much research effort has gone into making these agents autonomous. Yet an important ethical question remains: are these agents effective team players in scenarios where multiple agents interact with each other (many self-driving cars on the road) or with humans? One key requirement for such interactions is that agents work towards meaningful long-term goals rather than short-term benefits. These goals may also need to be revised at times to repair faulty mutual knowledge, beliefs, and assumptions when such faults are detected. Finally, these autonomous agents should be mutually predictable in their actions, directable, and able to maintain common ground. Developing autonomous agents that are genuinely effective team players is a long-term challenge.
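To make these teamwork ingredients concrete, here is a minimal, hypothetical Python sketch of a teammate agent that maintains common ground, repairs a faulty mutual belief when an observation contradicts it, and announces its intentions so that it stays predictable to teammates. All names here (Agent, common_ground, announce_intent) are illustrative assumptions, not part of any real robotics framework.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """A toy teammate that tracks the beliefs it assumes the team shares."""
    name: str
    common_ground: dict = field(default_factory=dict)

    def observe(self, fact: str, value) -> None:
        # Repair faulty mutual knowledge: if an observation contradicts the
        # shared belief, update it (a real system would also broadcast the fix).
        if self.common_ground.get(fact) != value:
            print(f"{self.name}: repairing belief {fact!r} -> {value}")
            self.common_ground[fact] = value

    def announce_intent(self, action: str, team: list) -> None:
        # Stay mutually predictable and directable: tell teammates what you
        # will do next so their expectations match your actual behaviour.
        for mate in team:
            if mate is not self:
                mate.common_ground[f"{self.name}_next_action"] = action


# Two self-driving cars coordinating on the road.
car_a, car_b = Agent("car_a"), Agent("car_b")
team = [car_a, car_b]

car_a.announce_intent("change_lane", team)  # car_b now expects the manoeuvre
car_b.observe("lane_2_blocked", True)       # car_b repairs its faulty belief
print(car_b.common_ground)                  # the shared picture both cars rely on
```

In a deployed multi-agent system the repair step would of course travel over a communication channel and be reconciled across teammates, rather than being applied locally as in this sketch.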

What is the impact?

Autonomous agents can become intelligent and effective team players, much as humans are. Working in tandem, they can solve complex tasks that are not feasible for a single agent acting alone. For instance, multiple drones may coordinate to tackle a forest fire, or many self-driving cars on the road could coordinate to avoid accidents and save human lives. These robots can also enhance interaction with humans and provide support such as elder care, with a better understanding of the environment and of the needs of the elderly. All in all, such well-coordinated autonomous agents could truly justify the saying, “Two heads are better than one.”