It could be a manipulated video embarrassing a candidate. Or a computer voting system locked by ransomware. Or doubts about electronic voting machines with no paper backups.
As Americans prepare for the 2020 elections, digital threats to election security are multiplying, stoking fears of a tainted outcome.
Worries are running high following revelations of a wide-ranging misinformation campaign on Facebook and other social platforms, largely directed by Russian operatives, in 2016.
This was described in detail by special counsel Robert Mueller, whose office obtained several indictments for election interference.
Cyber interference and disinformation operations surrounding elections "are part of a much larger, ongoing challenge to democracies everywhere," said a report from Stanford University's Cyber Policy Center.
Maurice Turner, an election security specialist with the Washington-based Center for Democracy & Technology, said these threats could lead to "a negative impact on voter confidence" in 2020.
The newest threat may be "deepfake" video and audio manipulated with artificial intelligence, which can put words in the mouths of candidates.
It might even show "unflattering or abusive images of women and minority aspirants in an effort to discredit them," said Darrell West with the Brookings Institution's Center for Technology Innovation, in an online report.
"It is easy to manipulate still images or video footage to put someone in a compromising situation," West wrote.
Danielle Citron, a Boston University online safety expert, said in a recent TEDSummit talk that deepfakes "can exploit and magnify the deep distrust that we already have in politicians, business leaders and other influential leaders."
Deepfakes "can reinforce an idea for those who want to believe it and be a distraction in the news cycle" even if they are debunked, Turner said.
Social media platforms like Facebook and Twitter will be closely scrutinized on how well they counter misinformation.
Experts say it will be increasingly difficult to counter automated accounts or "bots" that can amplify false news.
The failure to take a hard stand against manipulation in 2016 has likely "emboldened Russia to try again in 2020," wrote Stanford professor and ex-Facebook security chief Alex Stamos. Other efforts might come from China, Iran or North Korea, he said.
Facebook, Google, Microsoft and Twitter security teams met in September with FBI, homeland security and intelligence officials to discuss collaboration on election threats.
It will be important to anticipate new threats, and not simply use methods from the past.
Facebook's visual platform Instagram could become the most important "disinformation magnet" in 2020, a report by New York University's Center for Business and Human Rights suggests.
The report also said Russian organizations may try to recruit "unwitting" Americans to help spread propaganda.
The researchers called on social platforms to remove "provably false" information -- a delicate task for platforms seeking to avoid becoming truth "arbiters."
It is "tremendously difficult" to moderate content "at scale that allows users to speak freely and have that vigorous public discourse," Turner said.
Rights group Freedom House warned that it's hard to prove content is "unequivocally false," and that banning all foreign content "could harm press freedom."
Some interference is aimed at "voter suppression," or dissuading people from voting through intimidation or lies, a technique likely to rise in 2020.
The Kremlin-linked Internet Research Agency ran Facebook ads to suppress non-white voter turnout in 2016 by urging people to "boycott the election," arguing that neither presidential candidate would serve black voters, according to research led by University of Wisconsin professor Young Mie Kim.
Some messages gave the wrong election date, encouraged people to bring guns to the polls, or suggested improbably that people could vote by text message.
It's a daunting task to weed out fake and foreign accounts, Kim said.
"There are still a lot of groups that don't have a verifiable online identity" and may be controlled by foreign actors, Kim said.
"If we identify the foreign sources, and deal with those, it would be much easier than discussing what is false and how we correct it," she said.
Electronic voting machine vulnerabilities have been known for years, but at least eight US states will use paperless machines whose results might not be auditable if they are questioned, according to NYU's Brennan Center.
"We will have a patchwork of strengths and weaknesses, and attackers who want to discredit the election will pick the weakest places," said University of Michigan election security specialist Alex Halderman.
Many local governments use aging computer systems, so hacks of voter databases and ransomware that locks up election rolls are also worrisome.
Without paper backups and forensic audits it might be impossible to tell if some electronic voting machines were tampered with.
"A well conducted attack that changed the result wouldn't necessarily raise alarm bells."