How older Americans are using AI

Artificial intelligence is being discussed enthusiastically in classrooms and offices, which can make it seem as though it is mainly the preserve of younger people.

Yet older Americans are adopting AI too, prompting obvious questions: how are they using it, and how do they feel about it?

I study ageing, disability and how people use technology. Working with the University of Michigan’s National Poll on Healthy Aging, I helped run a survey of nearly 3,000 Americans aged 50 and over. We asked whether they use AI, the ways they use it, and what worries they have.

Among the older adults who took part, 55% said they had used an AI tool they could interact with by voice, such as Amazon’s Alexa voice assistant, or by typing, such as OpenAI’s ChatGPT chatbot.

Voice-based tools were far more common than text-based chatbots: half said they had used a voice assistant in the past year, while 1 in 4 reported using a chatbot.

Popular, among some

For many older Americans, continuing to live independently is a central aim, either because they do not want to move into long-term care communities or because the cost puts that option out of reach. AI may help support that goal. Our results suggest that older adults who use AI at home often find it useful for staying independent and feeling safe.

Most people reported using these tools for entertainment and for looking up information. Even so, some answers pointed to more imaginative applications, including generating text, producing images and organising holiday plans.

Almost 1 in 3 older adults said they used AI-enabled home security products, such as video doorbells, outdoor cameras and alarm systems. Of those users, 96% said the devices made them feel safer.

There has been concern about privacy when cameras are used inside the home to monitor older people. By contrast, cameras directed outdoors appear to offer reassurance for those ageing at home alone or without family living nearby.

Looking more closely at who was using AI showed that background characteristics matter. In particular, older adults in better health, and those with more education and higher incomes, were more likely to have used AI voice assistants and AI-powered home security devices over the past year. This mirrors how other technologies, such as smartphones, tend to be adopted.

Trusting AI is tricky

As more becomes known about how accurate, or inaccurate, AI can be, uncertainty about whether it should be trusted has grown as well. In our survey, older Americans were divided over trusting AI-generated material: 54% said they trust AI, while 46% said they do not. Those who expressed greater trust were also more likely to have used some form of AI technology within the last year.

At the same time, AI-generated output can appear plausible while still being wrong. Being able to spot errors matters when deciding whether, and how, to rely on AI search results or chatbots. However, only half of the older adults we surveyed said they were confident they could tell when AI content was incorrect.

Confidence in identifying inaccuracies was more common among respondents with higher levels of education. Meanwhile, older adults who reported poorer physical and mental health were less likely to trust AI-generated content.

What to do?

Taken together, these findings reflect a familiar pattern in technology uptake, seen even among younger groups, in which people who are healthier and more highly educated are typically earlier adopters and become aware of new tools sooner. That raises practical questions about how to reach all older adults with clear information on both the advantages and the downsides of AI.

How can older people who are not currently using AI get help to learn enough to make informed choices about whether to use it? And how can institutions build stronger training and awareness resources so that older adults who do trust AI do not trust it excessively, or use it inappropriately for important decisions without understanding the hazards?

Our results point to potential starting places for building AI literacy tools for older adults. Nine in 10 older people said they want to know when information has been generated by AI. Labels are beginning to appear in search results-for example, Google search’s AI snippets.

Michigan and other states have introduced requirements to disclose AI content in political adverts, but such notices could be made more prominent in other settings too, including non-political advertising and across social media. In addition, nearly 80% of older people said they wanted to understand more about AI risks: where it can fail and what steps to take when it does.

Policymakers could prioritise enforcing clear notices that indicate when content is AI-generated, especially at a time when the U.S. is considering revising its AI approach in the opposite direction, removing language about risk, discrimination and misinformation under a new executive order.

Overall, our findings indicate that AI can play a role in supporting healthy ageing. However, both overtrust and mistrust could be reduced through better training and policies that make risks easier to see.

Robin Brewer, Associate Professor of Information, University of Michigan

This article is republished from The Conversation under a Creative Commons licence. Read the original article.
