Election Misinformation

Elon Musk’s X platform made a change to its AI assistant, Grok, that may prevent it from giving users false information on election ballot deadlines and other election-related matters. From now on, X says that Grok will direct users to Vote.gov when asked election-related questions.

X, formerly Twitter, made the change about two weeks after five secretaries of state complained to the company. “On August 21, 2024, X’s Head of US and Canada Global Government Affairs informed the Office of the Minnesota Secretary of State [Steve Simon] that the platform has made changes to its AI search assistant, Grok, after a request from several Secretaries of State,” Simon’s office said in a press release yesterday.

Source: https://arstechnica.com/tech-policy/2024/08/xs-grok-will-direct-users-to-vote-gov-after-bungling-basic-ballot-question/

Election misinformation is becoming increasingly insidious with the rise of large language models (LLMs), which can flood digital spaces with deceptive narratives at an unprecedented scale. These tools are capable of generating and disseminating false information quickly, often impersonating credible sources and engaging in conversations that subtly spread misinformation. As LLMs become more sophisticated, voters will be challenged to discern accurate information from deliberate misinformation.

Key risks for election misinformation with LLMs are present in two main scenarios:

  1. Users directly using a LLM (ex. OpenAI or Grok via Twitter app)
  2. Social media chatbots (ex. Facebook or Reddit comment sections)

Both scenarios can provide users with incorrect voter registration deadlines, polling locations, voter eligibility, and third-party website referrals. While both can be harmful, social media chatbots have a higher probability of the operator having nefarious intentions and can interact in real-time with users, adapting their tactics based on individual responses and preferences, making their influence more personal and persuasive. Election interference by foreign threat actors is not a new phenomenon, but the technology enabling its efficacy and ROI is proving to only make it worse without a concerted effort by regulators, law enforcement, and technology companies.

Election misinformation is evaluated for each AI model and incorporated into the Harmful Content category of my scoring.