Voice technologies are set to reach billions of people in the decades ahead. But what impact will they have on our lives?
Whenever we seek to understand the impact of an emerging technology on our shared future, we can’t look at it in isolation. Instead, we need to view the technology through the lens of human nature.
Human beings are motivated by a set of core needs and values that, at their most fundamental, don’t change over time. Think convenience, security, distraction, social connection and more. This shared, unchanging human nature provides a stable perspective from which we can assess the impact of emerging technologies.
One core truth can guide us here: powerful new trends in human behavior and mindset emerge when new technologies unlock new ways to serve age-old human needs.
Personalization and meaning
So when assessing the potential future impact of voice technologies, which fundamental human needs should we pay attention to?
Two stand out. First, voice technologies are unlocking new ways to serve the desire for personalization: that is, the desire for the world to serve us objects and experiences that are relevant, just right, a perfect fit. Second, and relatedly, voice technologies are unlocking new ways to serve the deep-rooted human need for meaningful relationships.
We can understand this journey via its three phases: let’s call them simple personalization, advanced personalization, and the quest for meaning.
Together, these three stages constitute a journey that will see a fundamental shift in the nature of our relationship with AI-fueled digital voice technologies. That relationship will shift from one that is primarily transactional – “Alexa, order me some washing powder” – to one founded in the desire for self-knowledge, companionship and meaningful relationships.
Given that research firm Ovum estimates there will be more voice assistants on the planet than people by 2021, this will constitute a huge shift in the nature of our relationship with digital technology.
Let’s take a closer look at the three stages:
Simple personalization
Across the last three decades, the story of digital technology has gone hand in hand with the story of data-fueled personalization. Every day, billions of people are served a bespoke experience of the online space based on their tastes, preferences and past behavior, all as embodied by their personal data.
The rise of AI-fueled voice digital assistants is now presenting clear opportunities to extend this kind of personalization.
A range of home assistants can now, or will soon be able to, distinguish between multiple regular users and serve each of them appropriately. If a specific user says, for example, “Play me some music”, that user will be served music from their personal playlist.
Home assistants can leverage other factors of relevance – time of day, local weather, and so on – to deliver a more relevant experience.
In more evolved versions of this stage, voice assistants will be able to leverage simple inferred information about any user – even one using the service for the first time – to offer a more relevant experience. For example, Amazon recently filed a patent for new features that will see its Alexa home assistant become able to offer a more personalized service based on a user’s age, gender, language, accent and more.
The simple personalization stage already raises clear concerns about the data privacy of voice users (more on this later). When it comes to privacy concerns, evidence suggests that transparency and accountability are key. According to Accenture Interactive, 83% of consumers are willing to share their data to enable a personalized experience, as long as businesses are transparent about how they will use it.
Advanced personalization
As voice assistants evolve, the kinds of personalization they are able to offer will become deeper and more complex. This will include personalization around the emotional state of users, and around key aspects of and metrics relating to health and well-being.
For example, Pillo is a voice-activated home robot that helps users manage their health. The device can issue reminders to take medication and dispense pills on a regular schedule, offer healthy living advice, and sync to wearable devices such as the Apple Watch in order to help monitor a user’s personal health and well-being metrics.
Meanwhile, sensors that can detect human heart rate and respiration from across a room will enable health assistants to deliver even deeper, real-time personalization around well-being – for example, alerting the user or a medical professional if a heart rhythm becomes abnormal.
We all know that when a person speaks, a great deal of information is conveyed non-verbally, including by tone of voice. Chinese technology giant Huawei says it is developing technology that can accommodate this fact. It plans to leverage data on the emotional state of the user – gained via facial and voice recognition – to fuel an AI smartphone assistant that can relate to users on an emotional level.
We can expect to see the emergence of voice technologies with this kind of advanced personalization in a wide range of contexts. For example, an in-car voice assistant that can discern and respond to the emotional state of the driver: “It looks as though this traffic is making you stressed; let’s try some breathing exercises.”
The quest for meaning
As AI-backed voice assistants become able to understand their users in deeper and more complex ways, the nature of the relationship that users have with these technologies will change. This relationship will become about more than just functionality and transaction. Users will come to see virtual entities as companions, counsellors, even friends.
The idea that anyone would see a virtual entity as something akin to a friend might sound like something out of science fiction. And we’re still a long way from that. But it’s already possible to see the first glimmerings of this shift.
Millions of people already attempt conversations with popular virtual assistants that go beyond the merely functional. Apple reportedly hired counsellors and psychologists for the Siri team after data revealed that users often talk to Siri about their personal problems.
A range of startups are also attempting to create virtual entities that act as counsellors or companions. Woebot is an AI-fueled mental health chatbot intended to deliver a form of cognitive behavioral therapy. Evidence on the effectiveness of these kinds of chatbots is very limited, but a randomized controlled trial by Woebot and Stanford University found that the chatbot helped students dealing with depression.
Meanwhile, Replika is a chatbot billed as “the AI companion who cares”. Users can engage in conversation with the chatbot, which over time comes to know their personality, preferences and life story. The idea is that a highly evolved Replika can act as a conversational companion that allows the user new self-insight. Replika has been downloaded over 2.5 million times, and the makers have said they will add voice functionality to the next iteration.
For thousands of years, humans have dreamed about the sage or guru who via ordinary dialogue could offer deep and life-changing insight. Could the next piece of technology to rival the desirability, impact and adoption rate of the smartphone be a bespoke AI companion that acts as a personal guru, counsellor and friend?
The journey from here
This journey from simple to advanced personalization and then on to meaningful relationships comes with clear opportunities, but also risks.
At every stage there are clear risks around privacy and data misuse. There are also risks that personalization around age and gender, for example, will serve to consolidate pre-existing stereotypes.
Meanwhile, the idea that virtual entities might serve as companions takes us into uncharted territory. What could this mean for the mental and social well-being of users?
The collision between new technologies and human nature is a story as old as our species. When it comes to voice technologies, personalization and meaning, we need further work to develop frameworks of understanding, education and governance.
If we meet that challenge, we can help ensure that we capture the amazing potential of voice technologies in the decades ahead.
This article was written on behalf of the Global Future Council on Consumption.