Jim Duggan uses ChatGPT almost daily to draft marketing emails for his carbon removal credit business in Huntsville, Alabama. But he’d never trust an artificial intelligence chatbot with any questions about the upcoming presidential election.
“I just don’t think AI produces truth,” the 68-year-old political conservative said in an interview. “Grammar and words, that’s something that’s concrete. Political thought, judgment, opinions aren’t.”
Duggan is part of the majority of Americans who do not trust AI-powered chatbots or search results to give them accurate answers, according to a survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts. About two-thirds of U.S. adults say they are not very or not at all confident that these tools provide reliable and factual information, the poll shows.
Earlier this year, a gathering of election officials and AI researchers found that AI tools did poorly when asked relatively basic questions, such as where to find the nearest polling place. Last month, several secretaries of state warned that the AI chatbot developed for the social media platform X was spreading bogus election information, prompting X to tweak the tool so it would first direct users to a federal government website for reliable information.
Griffin Ryan, a 21-year-old college student at Tulane University in New Orleans, said he doesn’t know anyone on his campus who uses AI chatbots to find information about candidates or voting. He doesn’t use them either, since he’s noticed that it’s possible to “basically just bully AI tools into giving you the answers that you want.”
The Democrat from Texas said he gets most of his news from mainstream outlets such as CNN, the BBC, NPR, The New York Times and The Wall Street Journal. When it comes to misinformation in the upcoming election, he’s more worried that AI-generated deepfakes and AI-fueled bot accounts on social media will sway voter opinions.
A relatively small portion of Americans — 8% — think results produced by AI chatbots such as OpenAI’s ChatGPT or Anthropic’s Claude are always or often based on factual information, according to the poll. Trust in AI-assisted search engines such as Bing or Google is similarly low, with 12% believing their results are always or often based on facts.
There already have been attempts to influence U.S. voter opinions through AI deepfakes, including AI-generated robocalls that imitated President Joe Biden’s voice to convince voters in New Hampshire’s January primary to stay home from the polls.
More commonly, AI tools have been used to create fake images of prominent candidates that aim to reinforce particular negative narratives — from Vice President Kamala Harris in a communist uniform to former President Donald Trump in handcuffs.
Bevellie Harris, a 71-year-old Democrat from Bakersfield, California, said she prefers getting election information from official government sources, such as the voter pamphlet she receives in the mail ahead of every election.
“I believe it to be more informative,” she said, adding that she also likes to look up candidate ads to hear their positions in their own words.
The poll of 1,019 adults was conducted July 29-Aug. 8, 2024, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.0 percentage points.