- TXLege News
The Rise of AI in Politics: Risks, Regulation, and Reward


It’s called the Liar’s Dividend, and it’s a growing problem. The term, coined by Bobby Chesney, Dean of the University of Texas Law School, together with law professor Danielle Citron, describes how bad actors erode trust online. Their goal isn’t just to influence you about a single candidate or issue, but to make you question everything you see online—even (and sometimes, especially) the truth.
Think of it as the Schrödinger’s Cat of political communication. Today, everything online is both true and false, and without a universally accepted standard for truth, there’s a significant opportunity for bad actors to spread falsehoods without consequence.
Consider this example: In 2024, Google’s Pixel 9 introduced a photo feature called “reimagine,” allowing users to manipulate images—adding or removing elements at will. The feature could be put to benign use, removing unwanted photo-bombers or inserting someone who was left out, but the darker reality looms. As The Verge writer Sarah Jeong bluntly states: “We are fucked.”
Jeong explains:
“Everyone reading this article in 2024 grew up in an era when a photograph was, by default, a representation of truth. A staged scene with movie effects, a digital photo manipulation, or more recently a deepfake - these were potential deception to take into account, but they were outliers in the realm of possibility.”
But now with this tool and countless others like it, photographs no longer offer a realistic look at what took place at a particular moment in time. If photographs can’t be trusted, then what do we have left as objective historical reality? They say history is written by the victors; now it can be rewritten by anyone with a smartphone.
Regulating AI in Elections and Lobbying
These are the questions that state legislatures across the country are grappling with as they undertake the daunting task of regulating AI. But that raises the question: why isn’t the federal government doing something about it? Why is it being left to each state to decide?
“The Cavalry Isn’t Coming”
A former Federal Election Commission (FEC) attorney recently warned that the federal government is not prepared to regulate AI in elections. When the fake Biden robocall surfaced during this year’s primaries, the FCC tried to create rules to limit voice manipulation in robocalls. However, the FEC intervened, claiming jurisdiction over election-related issues. While the FEC’s involvement makes sense, it has yet to issue any rules, and none appear forthcoming. The reason? First Amendment concerns.
Many argue that regulations governing AI in elections would not withstand judicial scrutiny due to the expansive protections offered by the First Amendment. The Constitution protects anonymous speech, false speech, and parody—all forms of expression implicated in AI-generated content for campaign ads. With a presidential election approaching, it’s also unlikely that Congress will act on AI regulation in time.
State Legislatures Taking Action
In contrast, state legislatures introduced as many as 50 AI-related bills a week in 2024, many focused on synthetic media.
Most states are still in the early stages of addressing AI’s impact. Their standard approach follows a pattern:
Step 1 is to establish study committees to offer recommendations.
Step 2 involves defining AI.
Step 3 typically addresses criminal acts like deepfakes related to child pornography, revenge porn, or other illicit activities.
Step 4 finally begins to address AI in the context of campaigning or lobbying.
Step 4 typically requires political campaigns to disclose when they use synthetic media in advertising. In response, some tech companies have started adding digital watermarks to AI-generated political ads, making them easier to identify.

Texas and Minnesota currently have the strictest deepfake laws in the country, but these also have limitations. For example, Texas law states:
Sec. 255.004. TRUE SOURCE OF COMMUNICATION.
(c) An offense under this section is a Class A misdemeanor.
(d) A person commits an offense if the person, with intent to injure a candidate or influence the result of an election:
(1) creates a deepfake video; and
(2) causes the deepfake video to be published or distributed within 30 days of an election.
(e) In this section, "deep fake video" means a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.
Here we see that “deep fake” is limited to videos (it does not cover photos, voice or other audio manipulation, or depictions of fictitious events), and the prohibition applies only during the final month before election day. While innovative when passed five years ago, advances in AI have since exposed loopholes, enabling campaigns and opponents to easily circumvent these restrictions.
AI’s Role in Campaigns
AI is already transforming campaign operations in several key ways:
AI Voice-Calling: AI can serve as a virtual campaign volunteer, making calls and engaging in natural-sounding conversations instead of following a rigid script. While this could revolutionize outreach, overuse may backfire if voters are bombarded with too many calls or texts.
Campaign Strategy: Given basic race information (e.g., whether the race is competitive or under-the-radar, fundraising totals, etc.), AI tools can generate comprehensive strategy memos and update them as conditions change.
Hyper-personalized Messaging: AI mines voter data, tailoring ads and outreach messages to individuals based on behavior, interests, and location.
These tools could revolutionize how campaigns connect with voters and could level the playing field between fledgling campaigns and well-funded incumbents. But they also risk deepening the divide, particularly if AI technology becomes too expensive for smaller campaigns to afford.
AI’s Impact on Lobbying
AI is also revolutionizing lobbying, where tools are being used for:
Targeted Communication: AI assists in pinpointing key decision-makers and delivering tailored messages.
Policy Monitoring and Prediction: AI tracks legislative changes and predicts the outcomes of pending bills.
Influence Mapping: Algorithms can identify influential players in policy-making, helping lobbyists better target their efforts.
Automation: AI automates administrative tasks, allowing lobbyists to focus on relationship-building—a fundamental aspect of lobbying.
However, overreliance on AI could also breed laziness in lobbying, resulting in poorly drafted bills, inaccurate bill analyses, or even fabricated data that misleads legislators.
USLege, Inc. is an example of how companies can use AI responsibly, by automating mundane, repetitive tasks while focusing on ethical and effective tools to help you work.
Conclusion: AI as a Tool, Not a Threat
They say AI won’t replace jobs; instead, those who use AI will replace those who don’t. Nowhere is this truer than in the high-stakes world of politics. But AI isn’t the enemy. With responsible regulation and ethical practices, AI can help refocus political communication on its core purpose—relationship-building and meaningful dialogue, which should always form the foundation of our democratic process.

Andrew Cates is the Owner of Cates Legal Group PLLC, specializing in legal counsel for candidates, political action committees, and nonprofits in election and campaign law. He authored Texas Ethics Laws Annotated, the only comprehensive legal annotation of Texas campaign finance and lobby laws, now in its 8th edition. One of fewer than twenty U.S. attorneys with a certification in Legislative & Campaign Law, Andrew contributes frequently to news publications and serves as a founding faculty member at the Healing Politics Campaign School at Duke University. He also leads the Professional Advocacy Association of Texas.

Previously, Andrew was General Counsel and Director of Government Affairs for the Texas Nurses Association, securing over $25 million in state funding to address the nurse shortage. He also served as legislative attorney for the Texas Association of REALTORS® and lead attorney for its PAC. Before his current role, Andrew worked at the Texas Capitol, lobbied for solar energy and healthcare, and practiced in areas such as mergers & acquisitions, healthcare, and criminal law.

Andrew is a Founding Member and Board Member of the State Bar of Texas Legislative & Campaign Law Section and was instrumental in establishing the nation's first legal specialization in this field. He holds a B.A. in International Politics from Trinity University and a J.D. from Texas Tech School of Law (2007). Andrew is the General Counsel for USLege.
We hope you enjoyed today’s read!
Stay connected with TXLege on X @TXLege_
Find what you need in committee hearings, floor debates, and state agency meetings faster with USLege. Say goodbye to tedious tasks!