Imagine that you wake up and get ready for work. You leave your home and begin your commute in an autonomous vehicle. You are greeted by a virtual assistant, who asks how you are feeling today and where you’d like to travel. During this journey, you and the autonomous vehicle are expected to work as a team, and each of you may control the vehicle at different stages of the journey. Ultimately, you are both partially responsible for the vehicle’s safe operation – depending on who is in control and who holds liability (a tricky topic, and one best left for another blog post!).
What does this virtual assistant look like? How does it communicate? How emotionally connected are we to this technology? In an emergency, how does the assistant handle the situation to keep you and others safe? Many of these questions are yet to be answered, and the research community is divided over whether the ways humans naturally communicate with one another can be applied to answer them.

Key works such as ‘The Media Equation’ by Reeves and Nass (1996) suggest that we treat machines as social agents, and that we often exhibit feelings and behaviours analogous to those in our interpersonal relationships, such as empathy, frustration, and politeness. Others argue that how we treat humans is fundamentally different from how we treat machines, for example from a harm-reduction perspective (i.e., we are less concerned about harming a machine than another human; Bartneck et al., 2005). Those on this side of the debate state that communication between humans cannot be readily replicated by technology. In the middle ground, many influential works began by investigating interpersonal communication and were later repurposed for human-computer interaction as technology developed and the benefits of this work were realised (e.g., Clark, 1996; Klein et al., 2004; 2005).
We are fundamentally limited to current or past technology to guide this conversation. But what does this debate mean for the future of AI and autonomous technology? As virtual assistants become smarter, more efficient, and perhaps more aware, the proposition that interpersonal communication can be beneficial to the human-robot interaction community may become more prevalent – if only to understand how we can improve etiquette, communicate information effectively, and promote natural communication.
During my doctoral research, I investigated how humans communicate with one another when handing over safety-critical tasks in areas such as healthcare, aviation, and control rooms (Clark et al., 2019a). I wanted to understand how professionals, such as ambulance staff handing over a patient to an intensive-care unit, used language, what strategies they preferred, and ultimately, what information they thought was critical to operational safety. I replicated a handful of these strategies in an autonomous vehicle simulation and found that the lessons I had learnt from human-human communication, specifically in healthcare, were beneficial not only to human-computer interaction but to an entirely different domain of study (Clark et al., 2019b). The source material of human communication had provided me with communication strategies that have now taken the form of an in-vehicle automation assistant.
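To make this concrete, here is a minimal sketch of how a structured vehicle-to-driver handover might be encoded in software, loosely modelled on clinical handover protocols such as SBAR (Situation, Background, Assessment, Recommendation). The class, field names, and phrasing below are hypothetical assumptions for this post, not the specific strategies evaluated in the research:

```python
from dataclasses import dataclass

# Hypothetical sketch: a vehicle-to-driver handover message structured
# in the spirit of clinical handover protocols such as SBAR. The fields
# and wording are illustrative assumptions, not the strategies from
# Clark et al. (2019b).

@dataclass
class HandoverMessage:
    situation: str       # what is happening right now
    background: str      # relevant context leading up to the handover
    assessment: str      # the automation's judgement of the situation
    recommendation: str  # the action the driver is asked to take

def announce(handover: HandoverMessage) -> str:
    """Render the structured handover as a single spoken utterance."""
    return (
        f"{handover.situation} {handover.background} "
        f"{handover.assessment} {handover.recommendation}"
    )

if __name__ == "__main__":
    msg = HandoverMessage(
        situation="Roadworks ahead in about 400 metres.",
        background="Lane markings are obscured, so I cannot steer reliably.",
        assessment="I will need to hand over control before we reach them.",
        recommendation="Please place your hands on the wheel and take over.",
    )
    print(announce(msg))
```

The appeal of a structure like this is that the same ordered, predictable format clinicians rely on under pressure can be reused to pace how an automation assistant delivers safety-critical information to a driver.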

My new book, Human-Automation Interaction Design: Developing a Vehicle Automation Assistant, is expected to be published later this year. In it, you’ll find the details of my journey from human communication to an in-vehicle interface, and all the steps in between, including literature reviews, user workshops, and experiments.
Keep an eye on my work, and I look forward to bringing you more content soon!
References
Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press.