Product Design

Alexa, Fake Laughter, and Real Fear

On Voice Interfaces and the Importance of Context

Lisanne Binhammer

March 9, 2018

It’s late in the evening. You’re home alone, warming up some leftovers, and all of a sudden you hear the sound of maniacal laughter trickling in from your bedroom, making the hairs on your neck stand on end. A gleeful burglar? A deranged murderer? You peek around the corner, heart-in-throat, to see: Alexa. Her blue lights are swirling and she’s pointlessly, oddly, creepily chuckling to herself.

For the past few days, Alexa owners have been reporting untriggered laughter emerging from their devices, documenting their experiences across social media. Of all of the emotions Alexa regularly provokes — from joy to rage — complete and utter fear is a new one. How did this happen?

<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Lying in bed about to fall asleep when Alexa on my Amazon Echo Dot lets out a very loud and creepy laugh... there’s a good chance I get murdered tonight.</p>&mdash; Gavin Hightower (@GavinHightower) <a href="https://twitter.com/GavinHightower/status/967999257398702083?ref_src=twsrc%5Etfw">February 26, 2018</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

According to an Amazon spokesperson, the laughter is meant to be triggered exclusively by the command “Alexa, laugh,” but because it’s such a short utterance, it’s easy for the assistant to mistake a wide range of statements for the prompt to chuckle. And unfortunately, Alexa’s unasked-for merriment has come at rather inopportune times. One user reported that her laughter erupted in the middle of a confidential conversation he was having about work-related issues.

<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Having an office conversation about pretty confidential stuff and Alexa just laughed. Anybody else ever have that?<br><br>It didn&#39;t chime as if we had accidentally triggered her to wake. She simply just laughed. It was really creepy.</p>&mdash; David Woodland 🌴 (@DavidSven) <a href="https://twitter.com/DavidSven/status/969353683350667266?ref_src=twsrc%5Etfw">March 1, 2018</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

In response to this debacle, Amazon announced on March 7 that it is changing the original utterance to “Alexa, can you laugh?” — a longer prompt less likely to produce false positives. For good measure, the company also added the response “Sure, I can laugh” before any actual tee-hee-ing begins, so users (presumably) won’t jump out of their skin.
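Why would a longer prompt help? The intuition is that a short phrase leaves less acoustic evidence to match against, so more unrelated speech scores as “close enough.” Here’s a toy sketch of that idea — this is not Amazon’s actual matching logic, just a string-similarity stand-in, and the phrases and threshold are invented for illustration:

```python
# Toy illustration: a shorter target phrase is more easily confused
# with unrelated speech. We score each heard utterance against a
# target phrase with a simple similarity ratio and "trigger" above
# a fixed threshold.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def triggers(heard: str, target: str, threshold: float = 0.7) -> bool:
    return similarity(heard, target) >= threshold

heard_phrases = ["Alexa, last", "Alexa, rough", "Turn off the lights"]

short_target = "Alexa, laugh"          # the original prompt
long_target = "Alexa, can you laugh?"  # the revised prompt

short_hits = [p for p in heard_phrases if triggers(p, short_target)]
long_hits = [p for p in heard_phrases if triggers(p, long_target)]

print(short_hits)  # misheard phrases that trip the short prompt
print(long_hits)   # the longer prompt rejects the same phrases
```

With the short target, near-misses like “Alexa, last” clear the threshold; against the longer target, the same phrases fall short, because the extra words dilute the accidental overlap. Real wake-word and intent matching works on audio features rather than strings, but the trade-off is the same shape.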

The solution seems like a straightforward one. We want the laughter to happen, well, only when we intend it. So making it more difficult for unasked laughter to happen makes sense, right?

Not exactly. Matching user intentions to explicit commands is a complex business, and simply changing the command after identifying a failure in the experience is indicative of a problem in the way we are designing conversational interfaces.

Because ultimately, the issue isn’t that Alexa isn’t hearing us correctly. The issue is that we aren’t designing conversations that people actually want to have with their assistants. And as a result, we aren’t providing meaningful experiences for users.

The numbers speak for themselves: as of November 2017, 8.2 million people owned an Amazon Echo device, and there are currently almost 25,000 skills in the Alexa store. Yet 97% of voice applications go unused after the first two weeks. If we don’t build desirability into these experiences from the bottom up, we can’t expect assistants to become embedded in our day-to-day lives.


So, instead of troubleshooting how we can simply prevent laughter in the midst of a serious work conversation, how can we encourage Alexa to read the room, so to speak, and contribute accordingly? Can we program our devices to track changes in human pitch, volume, and speed (in addition to words and sentences) in order to better deduce the situation? How might these modulations trigger a better, more nuanced type of response? In short, how can we design contextually, so that a conversational UI’s (CUI’s) responses start aligning closely with user needs?

This is a matter of beginning the design process with research and discovery activities geared toward better understanding the human contexts in which various voice features may be of value. Ultimately, we can imagine our sensitivity to context reaching the point that users don’t have to ask for what they want — when you tell a joke, your assistant laughs.

When we think contextually, we begin to design voice assistants that are robust, thoughtful articulations of our expectations: we design assistants that actually assist.

We might be a long way off from designing contextually. We might be stuck in the Alexa-makes-some-tasks-marginally-better phase for a while, treating our assistants as grab-bags of party tricks or frustrating anecdotes over the water cooler. But by acknowledging what is holding us back — not the correct or incorrect utterance, but the actual experience itself — we can start to make strides in the right direction. And hopefully our devices will stop laughing at us from the next room over. Hopefully.
