
Autonomous Vehicle Communication
How might autonomous vehicles communicate with human beings in urban environments?

Human beings can wave, hold eye contact, and make all sorts of gestures to communicate their intentions on the road.

Autonomous vehicles can’t — at least not in the same way.

Given this, how might autonomous vehicles establish trust and clear communication with pedestrians in noisy & unpredictable urban environments? 

We used a variety of ethnographic studies, simulator studies, lab interviews, and Wizard-of-Oz prototyping to understand what people really do on urban roads, and to explore how to best design for human trust in autonomous vehicles. 

Who: Jim Hollan, Don Norman, Colleen Emmenegger, Ben Bergen, Malte Risto, Tavish Grade, Melissa Wright
When: November 2015 – March 2017
Where: The Design Lab, UC San Diego

In collaboration with the Nissan Silicon Valley Research Center.

Methods Used: Ethnography of urban road environments (downtown La Jolla, Pacific Beach, university campus intersections, senior centers), Wizard-of-Oz prototyping, multi-modal video coding, participant interviews, surveys

Tools: ChronoViz (similar to TechSmith Morae), iMovie, GoPro cameras + mounts

Using Ethnography to Build a Vocabulary of Road User Behavior

Challenge: There’s a rich layer of communication on the road, much of it habitual and outside of conscious awareness (e.g. making eye contact with other drivers, shifting your body position to indicate you are about to run, waiting for someone else to claim the right-of-way at an intersection). But there doesn’t seem to be an adequate, standardized language for describing these rich behaviors.

Goal: We wanted to understand how context shapes the meaning of a signal, and uncover what kinds of patterns emerge from hours of collected traffic interaction footage.

Method: To begin developing this vocabulary, we conducted an ethnographic study, setting up video cameras throughout urban environments with a focus on intersections not regulated by stoplights. Our presupposition was that in ambiguous traffic situations (including intersections without traffic lights), drivers and pedestrians are more likely to resort to overt, directed signaling to negotiate safe passage on the road.
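One way to make such a vocabulary concrete is as timestamped annotations on the collected footage. Here is a minimal sketch of what a coding record might look like (the field names and values are illustrative assumptions, not ChronoViz’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class BehaviorAnnotation:
    """One coded road-user behavior in a video clip.

    Field names are illustrative, not the actual coding scheme.
    """
    video_id: str    # which clip the behavior appears in
    start_s: float   # seconds into the clip where the behavior begins
    end_s: float     # seconds where it ends
    actor: str       # "pedestrian", "driver", "cyclist", ...
    behavior: str    # e.g. "eye_contact", "hand_wave", "body_shift"
    context: str     # e.g. "uncontrolled_intersection"

# A hypothetical annotation of the jaywalking pedestrian in Figure 1.
ann = BehaviorAnnotation(
    video_id="voigt_dr_clip03",
    start_s=12.4,
    end_s=15.1,
    actor="pedestrian",
    behavior="eye_contact",
    context="uncontrolled_intersection",
)
```

Coding footage into records like this is what lets patterns (which behaviors co-occur, and in which contexts) be counted across hours of video.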

Role: My role as the research assistant was to accompany my research sensei on recording trips, collect the video data, and then sit with the team to analyze it. I used the video data to make the figures.

Result: We synthesized the findings into a full poster for the Autonomous Vehicle Symposium (AVS 2016), arguing for the need for standards in autonomous-vehicle-to-human communication. Here is the brief 1-page paper associated with the poster.

Figure 1, above: A pedestrian jaywalking & running across Voigt Dr. on the university campus. The pedestrian makes eye contact with the bus driver before proceeding to run across the road. 

Interviewing Road Users on Road User Behavior

After conducting our ethnographic research, we brought participants to our lab and gave semi-structured interviews.

Goal: We had built this rich vocabulary, but we wanted to validate it and understand how everyday people talk about behavior and interaction on the road. What would they notice that we hadn’t?

Role: I conducted 20 lab interviews, with each session lasting between 30 minutes and 1 hour. I was responsible for participant recruitment, setting up the lab equipment and study, and transcribing the interviews with the team.

Methods Used: talk-aloud procedure, stimulated recall, semi-structured interviewing, participant recruitment.

Using Driving Simulator Experiments to Quantify the Effects of Advice + Downstream Information on Behavior

Scenario: You are a driver in an unfamiliar place, with seen and unseen dangers (construction zones, slippery road conditions, oncoming ambulances). What role could an Intelligent Driver Support System play in helping you navigate the road safely and efficiently?

(This simulator experiment was a research collaboration with Toyota).

Approach: We repurposed an old police training simulator and created different traffic events within the simulator route. We made three experimental conditions and one control. The experimental conditions varied the types of location- and time-sensitive messages participants would hear as they drove through the simulator world:

1. Advice Only (e.g. “slow down”, “merge to the left lane”)
2. Information Only (e.g. “slippery road conditions ahead”)
3. Information + Advice
4. Control (no messages). 
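A between-subjects design like this needs participants spread evenly across the four conditions. As a hedged sketch (the condition names come from the list above; the function, seed, and block-randomization scheme are my illustrative assumptions, not the study’s actual procedure):

```python
import random

# The four between-subjects conditions from the simulator study.
CONDITIONS = [
    "advice_only",         # e.g. "slow down", "merge to the left lane"
    "information_only",    # e.g. "slippery road conditions ahead"
    "information_advice",  # both message types
    "control",             # no messages
]

def assign_conditions(participant_ids, seed=0):
    """Randomly assign participants to conditions in balanced blocks.

    Each block of four participants covers all four conditions exactly
    once, so group sizes never differ by more than one.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    assignments = {}
    block = []
    for pid in participant_ids:
        if not block:                 # start a new shuffled block of four
            block = CONDITIONS[:]
            rng.shuffle(block)
        assignments[pid] = block.pop()
    return assignments

# Example: 40 participants -> 10 per condition.
groups = assign_conditions(range(1, 41))
```

Block randomization (rather than fully independent coin flips) is what guarantees the 40+ participants land in equal-sized groups.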

Role: I recruited 40+ participants and ran 40+ simulator experiments. I was responsible for data collection, simulator troubleshooting (the machine was buggy), and synthesizing the survey data. After the experiment, I worked with the rest of the team to craft a narrative out of the raw data.

Result: A full report and presentation to Toyota, and an extended abstract based on the simulator experiment to HFES 2017.

Methods Used: qualitative + quantitative surveys, participant recruitment, Wizard of Oz prototyping

“The Self-Driving Golf Cart”

Using Wizard-of-Oz Techniques to Explore AV Communication

Challenge: Autonomous vehicles won’t be able to gesture and maintain eye contact like human beings can, leaving them with one less method of resolving ambiguous traffic situations on the road. In the absence of any direct communication from the autonomous vehicle, how will human pedestrians react?

Approach: Since we did not have access to an autonomous vehicle, we outfitted a golf cart with one-way mirrors and asked the driver to drive in a “robotic manner” (no erratic movements, following a pre-determined track), thereby concealing the fact that there was a human operator.

Goal #1: A moveable art exhibit. Stir public interest in the design lab’s research, and showcase the diverse projects being explored in San Diego as part of the city’s first public Design Forward event. The event was attended by families, designers, and industry leaders in San Diego county. In this sense, the self-driving golf cart was “an art exhibit”.

Goal #2: Generate research questions. A Wizard-of-Oz pilot experiment like this can’t falsify a hypothesis, but initial observations can help generate new questions to pursue in future experiments. We didn’t want to waste this opportunity to gauge public reactions to “autonomous vehicles”. Would people even notice that the golf cart was “self-driving” in the first place? And if they did, how would they behave differently? Or not?

Result: Most people crossed in front of the cart as if nothing unusual was going on. Perhaps direct communication between driver and pedestrian is not as critical to negotiating safe passage as we once thought? Our observations served as the inspiration for the next iteration of the self-driving golf cart: the “seat suit” project.

“The Seat Suit”

Using Wizard-of-Oz Techniques to Explore AV Communication

After the self-driving golf cart experiment, we wanted to expand our pilot study beyond Broadway Pier (a closed environment with pedestrian traffic), and into public roads with cars, bicyclists, and pedestrians all in the same place.

Why: After conducting our ethnography of communication in urban road environments, we had a hunch that vehicle motion and position played a larger role in road user communication than we originally expected. Eye contact and direct signaling via gesture (e.g. hand waving) were not as common as we originally thought. Many people don’t even look at the driver or confirm that the vehicle has made a full stop before deciding that it’s safe to cross.

Method: We made a homemade seat suit out of wire mesh and seat fabric, and put our lead researcher Colleen Emmenegger inside. (Malte and I wouldn’t have fit anyway.) Colleen would then drive around campus roads, downtown La Jolla, and Clairemont (a suburban San Diego neighborhood) while wearing the seat suit. Malte Risto and I would sit in the back row of the car, watching for incoming traffic and pedestrians. We attached three cameras to the roof of the car to gauge public reactions: one camera facing the rear, and two cameras facing the left and right halves of the front bumper.

Result: What’s surprising is that most pedestrians were not surprised at all by the vehicle setup — they crossed the street as they normally would, even waving to the “empty” seat. Drivers in four-way stop intersections would do the same. Pedestrians that noticed there was “no one” in the front seat usually only noticed after they had crossed the street, or if they were in a position where negotiation of safe passage on the road was not critical. 

This exploratory study provided further evidence for the primacy of vehicle movement and position in communicating intent on the road, as opposed to human gesturing and eye contact. This part of the project was also featured in a San Diego Union-Tribune article, with interviews with Colleen Emmenegger and Don Norman.

Bonus: Only one pedestrian flipped the car off during the video recording.

Side Project: New York City – Bicyclist Communication (Nov. 2016)

What: Bicyclists are among the most vulnerable people in an urban road environment. They ride fast, have little armor, and often lack cultural and infrastructural protection in cities within the United States.

Given this, how does an expert bicyclist navigate and communicate with human drivers in New York City traffic? And how might those communication patterns differ from a novice bicyclist’s?

Special thanks to my dear friend and collaborator Brian Marron (Trinity University, Ireland) for participating as the expert bicyclist, and to Sean Marron for letting me borrow his bike.

Method: For a first-person perspective, I attached a camera to the handlebar of the chase bicycle. I rode the chase bicycle and followed Brian as he went on his daily commute to work. He had lived in the city for over a year at that point, and was thus the “expert bicyclist”. My only prompt to Brian: “Just bicycle as you normally would.”

Over 1 hour of cycling footage was collected. We started in Brooklyn, made our way over the Williamsburg Bridge into Manhattan, and ended in Central Park.  

Takeaways: In high-traffic areas and areas with multiple directions of movement, there is often not enough time for the bicyclist to make directed signals to drivers. Upon reviewing the collected footage, I saw direct signaling between the expert bicyclist and New York City drivers in two scenarios:

1. When the bicyclist was stopped at an intersection and needed to make a turn through oncoming traffic (e.g. making a left turn).

2. When the bicyclist broadcast their intent to make a lane change by extending the corresponding arm. Sometimes the bicyclist would check behind them to confirm that the path was clear, but whether they were trying to make eye contact with drivers or only checking for the presence of a vehicle is unclear.

Summary: Signaling and eye contact with drivers were minimal, and it seems that expert bicyclists rely more on the motion and direction of the vehicle as a cue to proceed than on directed gestures and established eye contact.

Future studies involving first-person eye-tracking glasses (glasses that could capture both the bicyclist’s field of vision and wherever their eyes are focused) could address that question. Following more bicyclists than just one would also be necessary to do more than generate potential research questions.