Election ads are using AI. Tech companies are figuring out how to disclose what’s real.

November 16, 2023 | The Hill

Meta and YouTube are crafting disclosure policies for use of generative artificial intelligence (AI) in political ads as the debate over how the government should regulate the technology stretches toward the 2024 election.

The use of generative AI tools, which can create text, audio and video content, has been on the rise over the past year since the explosive public release of OpenAI’s ChatGPT.  

Lawmakers on both sides of the aisle have shared concerns about how AI could amplify the spread of misinformation, especially regarding critical current events or elections.  

The Senate held its fifth AI Insight Forum last week, covering the impact of AI on elections and democracy.  

As Congress considers proposals to regulate AI, leading tech companies are crafting their own policies that aim to police the use of generative AI in political ads.

In September, Google announced a policy that requires campaigns and political committees to disclose when their ads have been digitally altered, including through AI.

Google CEO Sundar Pichai speaks to college students about strengthening the cybersecurity workforce during a workshop at the Google office in Washington, D.C., on Thursday, June 22, 2023. (AP Photo/Jose Luis Magana)

What do campaigns and advertisers have to disclose?

Election advertisers are required to “prominently disclose” if an ad contains synthetic content that has been digitally altered or generated and “depicts real or realistic-looking people or events,” according to Google’s policy, which went into effect this month. 

Meta, the parent company of Facebook and Instagram, announced a similar policy that requires political advertisers to disclose the use of AI whenever an ad contains a “photorealistic image or video, or realistic sounding audio” that was digitally created or altered in potentially deceptive ways.

Such cases include if the ad was altered to depict a real person saying or doing something they did not, or a realistic-looking event that did not happen.

Meta said its policy will go into effect in the new year.

Robert Weissman, president of the consumer advocacy group Public Citizen, said the policies are “good steps” but are “not enough from the companies and not a substitute for government action.”

“The platforms can obviously only cover themselves; they can’t cover all outlets,” Weissman said.

Senate Majority Leader Chuck Schumer (D-N.Y.), who launched the AI Insight Forum series, has echoed calls for government action.

Schumer said the self-imposed guardrails by tech companies, or voluntary commitments like the ones the White House secured from Meta, Google and other leading AI companies, don’t account for the outlier companies that could drag the industry down to the lowest regulatory threshold.

Weissman said the policies also fail to address the use of deceptive AI in organic posts that are not political ads.

Several 2024 Republican presidential candidates have already used AI in high-profile videos posted on social media.

How is Congress regulating artificial intelligence in political ads?

Several proposals have been introduced in Congress to address the use of AI in ads.  

A bill from Sens. Amy Klobuchar (D-Minn.), Josh Hawley (R-Mo.), Chris Coons (D-Del.), and Susan Collins (R-Maine) introduced in September would aim to ban the use of deceptive AI-generated audio, images or video in political ads to influence a federal election or fundraise.  

Another measure, introduced in May by Klobuchar, Sens. Cory Booker (D-N.J.) and Michael Bennet (D-Colo.), and Rep. Yvette Clarke (D-N.Y.), would require a disclaimer on political ads that use AI-generated images or video.  

Jennifer Huddleston, a technology policy research fellow at the Cato Institute who attended last week’s AI Insight Forum, said the requirement of disclaimers or watermarks was raised during the closed-door meeting.

Huddleston, however, said those requirements could run into roadblocks in instances where generative AI is used for beneficial reasons, such as adding closed captions or translating ads into different languages.  

“Are we going to see legislation constructed in such a way that we wouldn’t see fatigue from warning labels? Is it going to be that everything is labeled AI the same [way] everything is labeled as a risk under certain other labeling laws in a way that it’s not really improving that consumer education?” Huddleston said. 

Senate Rules and Administration Committee Chair Amy Klobuchar (D-Minn.) speaks during a business meeting to consider S.R. 444, providing for the en bloc consideration of military nominations.

Misleading AI remains a major worry after the last two presidential elections

Meta and Google have crafted their policies to target the use of misleading AI.  

The companies said advertisers will not need to disclose the use of AI tools to adjust the size or color of images. Some critics of dominant tech companies have questioned how the platforms will enforce the policies.

Kyle Morse, deputy executive director of the Tech Oversight Project, a nonprofit that advocates for reining in tech giants’ market power, said the policies are “nothing more than press releases from Google and Meta.” He said the policies are “voluntary systems” that lack meaningful enforcement mechanisms.  

Meta said ads without proper disclosures will be rejected, and accounts with repeated nondisclosures may be penalized. The company did not share what the penalties may be or how many repeated offenses would warrant one. 

Google said it will not approve ads that violate the policy and may suspend advertisers with repeated policy violations but did not detail how many policy violations would lead to a suspension.

Weissman said concerns about enforcing rules against misleading AI are “secondary” to establishing those rules in the first place.

“As important as the enforcement questions are, they are secondary to establishing the rules. Because right now, the rules don’t exist to prohibit or even dissuade political deepfakes — with the exception of these actions from the platforms — and now more importantly action from the states,” he said.  

Sen. Josh Hawley (R-Mo.) ranking member of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, speaks during a hearing on artificial intelligence, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky, File)

Consumer groups are pushing for more regulation

As Congress mulls regulation, the Federal Election Commission (FEC) is considering clarifying a rule to address the use of AI in campaigns after facing a petition from Public Citizen. 

Jessica Furst Johnson, a partner at Holtzman and Vogel and general counsel to the Republican Governors Association, said the policies from Meta and Google “probably feel like a good middle ground for them at this point.”

“And that sort of prohibition can get really messy, and especially in light of the fact that we don’t yet have federal guidelines and legislation. And frankly, the way our Congress is functioning, I don’t really know when that will happen,” Furst Johnson said.

“They probably feel the pressure to do something, and I’m not entirely surprised. I think this is probably a sensible middle ground to them,” she added.

New DOJ Files Reveal Naomi Campbell’s Deep Ties to Jeffrey Epstein

In early 2026, the global conversation surrounding the “Epstein files” has reached a fever pitch as the Department of Justice continues to un-redact millions of pages of internal records. Among the most explosive revelations are detailed email exchanges between Ghislaine Maxwell and Jeffrey Epstein that directly name supermodel Naomi Campbell. While Campbell has long maintained she was a peripheral figure in Epstein’s world, the latest documents—including an explicit message where Maxwell allegedly offered “two playmates” for the model—have forced a national re-evaluation of her proximity to the criminal enterprise.

The Logistics of a High-Fashion Connection

The declassified files provide a rare look into the operational relationship between the supermodel and the financier. Flight logs and internal staff emails from as late as 2016 show that Campbell’s travel was frequently subsidized by Epstein’s private fleet. In one exchange, Epstein’s assistants discussed the urgency of her travel requests, noting she had “no backup plan” and was reliant on his jet to reach international events.

This level of logistical coordination suggests a relationship built on significant mutual favors, contrasting with Campbell’s previous descriptions of him as just another face in the crowd.

In Her Own Words: The “Sickened” Response

Campbell has not remained silent as these files have surfaced, though her defense has been consistent for years. In a widely cited 2019 video response that has been recirculated amid the 2026 leaks, she stated, “What he’s done is indefensible. I’m as sickened as everyone else is by it.” When confronted with photos of herself at parties alongside Epstein and Maxwell, she has argued against the concept of “guilt by association,” telling the press:

“I’ve always said that I knew him, as I knew many other people… I was introduced to him on my 31st birthday by my ex-boyfriend. He was always at the Victoria’s Secret shows.”

She has further emphasized her stance by aligning herself with those Epstein harmed, stating,

“I stand with the victims. I’m not a person who wants to see anyone abused, and I never have been.”

The Mystery of the “Two Playmates”

The most damaging piece of evidence in the recent 2026 release is an email where Maxwell reportedly tells Epstein she has “two playmates” ready for Campbell.

While the context of this “offer” remains a subject of intense debate—with some investigators suggesting it refers to the procurement of young women for social or sexual purposes—Campbell’s legal team has historically dismissed such claims as speculative. However, for a public already wary of elite power brokers, the specific wording used in these private DOJ records has created a “stop-the-scroll” moment that is proving difficult for the fashion icon to move past.

A Reputation at a Crossroads

As a trailblazer in the fashion industry, Campbell is now navigating a period where her professional achievements are being weighed against her presence in some of history’s most notorious social circles. The 2026 files don’t just name her; they place her within a broader system where modeling agents and scouts allegedly groomed young women under the guise of high-fashion opportunities. Whether these records prove a deeper complicity or simply illustrate the unavoidable overlap of the 1% remains the central question of the ongoing DOJ investigation.

Google Accused Of Favoring White, Asian Staff As It Reaches $28 Million Deal That Excludes Black Workers

Google has tentatively agreed to a $28 million settlement in a California class‑action lawsuit alleging that white and Asian employees were routinely paid more and placed on faster career tracks than colleagues from other racial and ethnic backgrounds.

How The Discrimination Claims Emerged

The lawsuit was brought by former Google employee Ana Cantu, who identifies as Mexican and racially Indigenous and worked in people operations and cloud departments for about seven years. Cantu alleges that despite strong performance, she remained stuck at the same level while white and Asian colleagues doing similar work received higher pay, higher “levels,” and more frequent promotions.

Cantu’s complaint claims that Latino, Indigenous, Native American, Native Hawaiian, Pacific Islander, and Alaska Native employees were systematically underpaid compared with white and Asian coworkers performing substantially similar roles. The suit also says employees who raised concerns about pay and leveling saw raises and promotions withheld, reinforcing what plaintiffs describe as a two‑tiered system inside the company.

Why Black Employees Were Left Out

Cantu’s legal team ultimately agreed to narrow the class to employees whose race and ethnicity were “most closely aligned” with hers, a condition that cleared the path to the current settlement.

The judge noted that Black employees were explicitly excluded from the settlement class after negotiations, meaning they will not share in the $28 million payout even though they were named in earlier versions of the case. Separate litigation on behalf of Black Google employees alleging racial bias in pay and promotions remains pending, leaving their claims to be resolved in a different forum.

What The Settlement Provides

Of the $28 million total, about $20.4 million is expected to be distributed to eligible class members after legal fees and penalties are deducted. Eligible workers include those in California who self‑identified as Hispanic, Latinx, Indigenous, Native American, American Indian, Native Hawaiian, Pacific Islander, and/or Alaska Native during the covered period.

Beyond cash payments, Google has also agreed to take steps aimed at addressing the alleged disparities, including reviewing pay and leveling practices for racial and ethnic gaps. The settlement still needs final court approval at a hearing scheduled for later this year, and affected employees will have a chance to opt out or object before any money is distributed.

Google’s Response And The Broader Stakes

A Google spokesperson has said the company disputes the allegations but chose to settle in order to move forward, while reiterating its public commitment to fair pay, hiring, and advancement for all employees. The company has emphasized ongoing internal audits and equity initiatives, though plaintiffs argue those efforts did not prevent or correct the disparities outlined in the lawsuit.

For many observers, the exclusion of Black workers from the settlement highlights the legal and strategic complexities of class‑action discrimination cases, especially in large, diverse workplaces. The outcome of the remaining lawsuit brought on behalf of Black employees, alongside this $28 million deal, will help define how one of the world’s most powerful tech companies is held accountable for alleged racial inequities in pay and promotion.

Luana Lopes Lara: How a 29‑Year‑Old Became the Youngest Self‑Made Woman Billionaire

At just 29, Luana Lopes Lara has taken a title that usually belongs to pop stars and consumer‑app founders.

Multiple business outlets now recognize her as the world’s youngest self‑made woman billionaire, after her company Kalshi hit an 11 billion dollar valuation in a new funding round.

That round, a 1 billion dollar Series E led by Paradigm with Sequoia Capital, Andreessen Horowitz, CapitalG and others participating, instantly pushed both co‑founders into the three‑comma club. Estimates place Luana’s personal stake at roughly 12 percent of Kalshi, valuing her net worth at about 1.3 billion dollars—wealth tied directly to equity she helped create rather than inheritance.

Kalshi itself is a big part of why her ascent matters.

Founded in 2019, the New York–based company runs a federally regulated prediction‑market exchange where users trade yes‑or‑no contracts on real‑world events, from inflation reports to elections and sports outcomes.

As of late 2025, the platform has reached around 50 billion dollars in annualized trading volume, a thousand‑fold jump from roughly 300 million the year before, according to figures cited in TechCrunch and other financial press. That hyper‑growth convinced investors that event contracts are more than a niche curiosity, and it is this conviction—expressed in billions of dollars of new capital—that turned Luana’s share of Kalshi into a billion‑dollar fortune almost overnight.

Her path to that point is unusually demanding even by founder standards. Luana grew up in Brazil and trained at the Bolshoi Theater School’s Brazilian campus, where reports say she spent up to 13 hours a day in class and rehearsal, competing for places in a program that accepts fewer than 3 percent of applicants. After a stint dancing professionally in Austria, she pivoted into academics, enrolling at the Massachusetts Institute of Technology to study computer science and mathematics and later completing a master’s in engineering.

During summers she interned at major firms including Bridgewater Associates and Citadel, gaining a front‑row view of how global macro traders constantly bet on future events—but without a simple, regulated way for ordinary people to do the same.

That realization shaped Kalshi’s founding thesis and ultimately her billionaire status. Together with co‑founder Tarek Mansour, whom she met at MIT, Luana spent years persuading lawyers and U.S. regulators that a fully legal event‑trading exchange could exist under commodities law. Reports say more than 60 law firms turned them down before one agreed to help, and the company then spent roughly three years in licensing discussions with the Commodity Futures Trading Commission before gaining approval. The payoff is visible in 2025’s numbers: an 11‑billion‑dollar valuation, a 1‑billion‑dollar fresh capital injection, and a founder’s stake that makes Luana Lopes Lara not just a compelling story but a data point in how fast wealth can now be created at the intersection of finance, regulation, and software.
