More Americans report using artificial intelligence tools, yet fewer express confidence in the technology's outputs, according to a Quinnipiac University poll released Monday. Fifty-one percent of 1,399 respondents said they use AI for research, writing, school, work projects, or data analysis, up from prior surveys, while 76% said they trust its output only rarely or sometimes. Just 21% said they have confidence in AI-generated data frequently or more often.
The results reveal widespread experimentation paired with limited confidence, and researchers report deep concern about the accuracy and reliability of AI-generated data. Most users turn to AI to boost productivity on routine tasks, but use it far less often for higher-stakes applications such as medical advice or financial planning. Concerns about reliability center on three areas: hallucinations, biased data, and a lack of transparency in how the systems reach their conclusions.
A major generational divide shapes how AI is used: Gen Z is the generation most likely to use AI (68%) but the least likely to trust it (15%), compared with 28% of baby boomers. Urban professionals are the most likely to use AI for coding and content development, while rural respondents are the least likely to use it, citing fears of job losses through automation.
Broader surveys echo the disconnect. Verasight found 64% usage nationwide but 53% anxiety over societal impacts. TELUS's 2026 AI Trust Atlas reported 89% American adoption paired with 77% demands for pre-release harm reviews. YouGov pegged weekly usage at 35% against 41% outright distrust.
Business implications loom large as enterprises deploy AI amid employee skepticism. KPMG noted 70% workforce enthusiasm tempered by 75% concern about downsides and 44% use of unauthorized tools. Harvard Business Review highlighted a mere 6% corporate trust in autonomous agents for core processes.
Policymakers face pressure for transparency mandates as adoption accelerates. Respondents overwhelmingly back regulation, with 90% seeking accountability frameworks. Tech firms tout safeguards yet grapple with "black box" perceptions eroding user faith.
The adoption-trust gap signals maturation pains for generative AI, where utility collides with reliability shortfalls. Near-term fixes hinge on verifiable outputs and ethical guardrails that convert casual users into confident adopters.