
Most Americans Don’t Trust AI-Powered Election Information: AP-NORC/USAFacts Survey



 Jim Duggan uses ChatGPT almost daily to draft marketing emails for his carbon removal credit business in Huntsville, Alabama. But he’d never trust an artificial intelligence chatbot with any questions about the upcoming presidential election.

“I just don’t think AI produces truth,” the 68-year-old political conservative said in an interview. “Grammar and words, that’s something that’s concrete. Political thought, judgment, opinions aren’t.”


Duggan is part of the majority of Americans who do not trust artificial intelligence-powered chatbots or search results to give them accurate answers, according to a survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts. About two-thirds of U.S. adults say they are not very or not at all confident that these tools provide reliable and factual information, the poll shows.

Earlier this year, a gathering of election officials and AI researchers found that AI tools did poorly when asked relatively basic questions, such as where to find the nearest polling place. Last month, several secretaries of state warned that the AI chatbot developed for the social media platform X was spreading bogus election information, prompting X to tweak the tool so it would first direct users to a federal government website for reliable information.

Large AI models that can generate text, images, videos or audio clips at the click of a button are poorly understood and minimally regulated. Their ability to predict the most plausible next word in a sentence based on vast pools of data allows them to provide sophisticated responses on almost any topic — but it also makes them vulnerable to errors.
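As a rough illustration of that next-word mechanism, here is a minimal toy sketch in Python. It is not any real chatbot's code; the tiny vocabulary, the probabilities, and the function name are invented for illustration. The point is that the model picks continuations by plausibility, not by checking facts.

```python
import random

# Toy next-word predictor (illustrative only, not a real language model).
# Each two-word context maps to a probability distribution over possible
# next words, standing in for what a large model learns from vast text data.
next_word_probs = {
    ("polling", "places"): {"open": 0.55, "close": 0.30, "moved": 0.15},
    ("places", "open"): {"at": 0.7, "until": 0.3},
}

def continue_text(words, steps=2):
    """Extend a word list by sampling plausible continuations."""
    words = list(words)
    for _ in range(steps):
        context = tuple(words[-2:])
        dist = next_word_probs.get(context)
        if dist is None:
            break
        # Sampling favors the most plausible word, whether or not it is true.
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["polling", "places"]))
```

Because the sketch optimizes only for plausibility, a confident-sounding but wrong continuation is always possible, which mirrors why such tools can produce fluent yet inaccurate election answers.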

Griffin Ryan, a 21-year-old college student at Tulane University in New Orleans, said he doesn’t know anyone on his campus who uses AI chatbots to find information about candidates or voting. He doesn’t use them either, since he’s noticed that it’s possible to “basically just bully AI tools into giving you the answers that you want.”

The Democrat from Texas said he gets most of his news from mainstream outlets such as CNN, the BBC, NPR, The New York Times and The Wall Street Journal. When it comes to misinformation in the upcoming election, he’s more worried that AI-generated deepfakes and AI-fueled bot accounts on social media will sway voter opinions.

A relatively small portion of Americans — 8% — think results produced by AI chatbots such as OpenAI’s ChatGPT or Anthropic’s Claude are always or often based on factual information, according to the poll. They have a similar level of trust in AI-assisted search engines such as Bing or Google, with 12% believing their results are always or often based on facts.

There already have been attempts to influence U.S. voter opinions through AI deepfakes, including AI-generated robocalls that imitated President Joe Biden’s voice to convince voters in New Hampshire’s January primary to stay home from the polls.

More commonly, AI tools have been used to create fake images of prominent candidates that aim to reinforce particular negative narratives — from Vice President Kamala Harris in a communist uniform to former President Donald Trump in handcuffs.

Bevellie Harris, a 71-year-old Democrat from Bakersfield, California, said she prefers getting election information from official government sources, such as the voter pamphlet she receives in the mail ahead of every election.

“I believe it to be more informative,” she said, adding that she also likes to look up candidate ads to hear their positions in their own words.

The poll of 1,019 adults was conducted July 29-Aug. 8, 2024, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.0 percentage points.
