Business

Election ads are using AI. Tech companies are figuring out how to disclose what’s real.

Published on November 16, 2023 | The Hill

Meta and YouTube are crafting disclosure policies for use of generative artificial intelligence (AI) in political ads as the debate over how the government should regulate the technology stretches toward the 2024 election.

The use of generative AI tools, which can create text, audio and video content, has surged in the year since the explosive public release of OpenAI’s ChatGPT.

Lawmakers on both sides of the aisle have shared concerns about how AI could amplify the spread of misinformation, especially regarding critical current events or elections.  

The Senate held its fifth AI Insight Forum last week, covering the impact of AI on elections and democracy.  

As Congress considers proposals to regulate AI, leading tech companies are crafting their own policies that aim to police the use of generative AI in political ads.

In September, Google announced a policy that requires campaigns and political committees to disclose when their ads have been digitally altered, including through AI.

Google CEO Sundar Pichai speaks to college students about strengthening the cybersecurity workforce during a workshop at the Google office in Washington, D.C., on Thursday, June 22, 2023. (AP Photo/Jose Luis Magana)

What do campaigns and advertisers have to disclose?

Election advertisers are required to “prominently disclose” if an ad contains synthetic content that has been digitally altered or generated and “depicts real or realistic-looking people or events,” according to Google’s policy, which went into effect this month. 

Meta, the parent company of Facebook and Instagram, announced a similar policy that requires political advertisers to disclose the use of AI whenever an ad contains a “photorealistic image or video, or realistic sounding audio” that was digitally created or altered in potentially deceptive ways.

Such cases include ads altered to depict a real person saying or doing something they did not, or to show a realistic-looking event that did not happen.

Meta said its policy will go into effect in the new year.

Robert Weissman, president of the consumer advocacy group Public Citizen, said the policies are “good steps” but are “not enough from the companies and not a substitute for government action.”

“The platforms can obviously only cover themselves; they can’t cover all outlets,” Weissman said.

Senate Majority Leader Chuck Schumer (D-N.Y.), who launched the AI Insight Forum series, has echoed calls for government action.

Schumer said the self-imposed guardrails by tech companies, or voluntary commitments like the ones the White House secured from Meta, Google and other leading companies on AI, don’t account for the outlier companies that could drag the industry down to meet the lowest threshold of regulation.

Weissman said the policies also fail to address the use of deceptive AI in organic posts that are not political ads.

Several 2024 Republican presidential candidates have already used AI in high-profile videos posted on social media.

How is Congress regulating artificial intelligence in political ads?

Several proposals have been introduced in Congress to address the use of AI in ads.  

A bill from Sens. Amy Klobuchar (D-Minn.), Josh Hawley (R-Mo.), Chris Coons (D-Del.) and Susan Collins (R-Maine), introduced in September, would ban the use of deceptive AI-generated audio, images or video in political ads meant to influence a federal election or raise funds.

Another measure, introduced in May by Klobuchar, Sens. Cory Booker (D-N.J.) and Michael Bennet (D-Colo.), and Rep. Yvette Clarke (D-N.Y.), would require a disclaimer on political ads that use AI-generated images or video.  

Jennifer Huddleston, a technology policy research fellow at the Cato Institute who attended last week’s AI Insight Forum, said the requirement of disclaimers or watermarks was raised during the closed-door meeting.

Huddleston, however, said those requirements could run into roadblocks in instances where generative AI is used for beneficial reasons, such as adding closed captions or translating ads into different languages.  

“Are we going to see legislation constructed in such a way that we wouldn’t see fatigue from warning labels? Is it going to be that everything is labeled AI the same [way] everything is labeled as a risk under certain other labeling laws in a way that it’s not really improving that consumer education?” Huddleston said. 

Senate Rules and Administration Committee Chair Amy Klobuchar (D-Minn.) speaks during a business meeting to consider S.R. 444, providing for the en bloc consideration of military nominations.

Misleading AI remains a major worry after the last two presidential elections

Meta and Google have crafted their policies to target the use of misleading AI.  

The companies said advertisers would not need to disclose the use of AI tools that merely adjust the size or color of images. Some critics of dominant tech companies have questioned how the platforms will enforce the policies.

Kyle Morse, deputy executive director of the Tech Oversight Project, a nonprofit that advocates for reining in tech giants’ market power, said the policies are “nothing more than press releases from Google and Meta.” He said the policies are “voluntary systems” that lack meaningful enforcement mechanisms.  

Meta said ads without proper disclosures will be rejected, and accounts with repeated nondisclosures may be penalized. The company did not say what the penalties would be or how many violations would trigger them.

Google said it will not approve ads that violate the policy and may suspend advertisers with repeated policy violations but did not detail how many policy violations would lead to a suspension.

Weissman said concerns about enforcing rules against misleading AI are “secondary” to establishing those rules in the first place.

“As important as the enforcement questions are, they are secondary to establishing the rules. Because right now, the rules don’t exist to prohibit or even dissuade political deepfakes — with the exception of these actions from the platforms — and now more importantly action from the states,” he said.  

Sen. Josh Hawley (R-Mo.) ranking member of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, speaks during a hearing on artificial intelligence, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky, File)

Consumer groups are pushing for more regulation

As Congress mulls regulation, the Federal Election Commission (FEC) is considering clarifying a rule to address the use of AI in campaigns, in response to a petition from Public Citizen.

Jessica Furst Johnson, a partner at Holtzman Vogel and general counsel to the Republican Governors Association, said the approach Meta and Google have taken “probably feels like a good middle ground for them at this point.”

“And that sort of prohibition can get really messy, and especially in light of the fact that we don’t yet have federal guidelines and legislation. And frankly, the way our Congress is functioning, I don’t really know when that will happen,” Furst Johnson said.

“They probably feel the pressure to do something, and I’m not entirely surprised. I think this is probably a sensible middle ground to them,” she added.


Business

Google Accused Of Favoring White, Asian Staff As It Reaches $28 Million Deal That Excludes Black Workers


Google has tentatively agreed to a $28 million settlement in a California class‑action lawsuit alleging that white and Asian employees were routinely paid more and placed on faster career tracks than colleagues from other racial and ethnic backgrounds.

How The Discrimination Claims Emerged

The lawsuit was brought by former Google employee Ana Cantu, who identifies as Mexican and racially Indigenous and worked in people operations and cloud departments for about seven years. Cantu alleges that despite strong performance, she remained stuck at the same level while white and Asian colleagues doing similar work received higher pay, higher “levels,” and more frequent promotions.

Cantu’s complaint claims that Latino, Indigenous, Native American, Native Hawaiian, Pacific Islander, and Alaska Native employees were systematically underpaid compared with white and Asian coworkers performing substantially similar roles. The suit also says employees who raised concerns about pay and leveling saw raises and promotions withheld, reinforcing what plaintiffs describe as a two‑tiered system inside the company.

Why Black Employees Were Left Out

Cantu’s legal team ultimately agreed to narrow the class to employees whose race and ethnicity were “most closely aligned” with hers, a condition that cleared the path to the current settlement.

The judge noted that Black employees were explicitly excluded from the settlement class after negotiations, meaning they will not share in the $28 million payout even though they were named in earlier versions of the case. Separate litigation on behalf of Black Google employees alleging racial bias in pay and promotions remains pending, leaving their claims to be resolved in a different forum.

What The Settlement Provides

Of the $28 million total, about $20.4 million is expected to be distributed to eligible class members after legal fees and penalties are deducted. Eligible workers include those in California who self‑identified as Hispanic, Latinx, Indigenous, Native American, American Indian, Native Hawaiian, Pacific Islander, and/or Alaska Native during the covered period.

Beyond cash payments, Google has also agreed to take steps aimed at addressing the alleged disparities, including reviewing pay and leveling practices for racial and ethnic gaps. The settlement still needs final court approval at a hearing scheduled for later this year, and affected employees will have a chance to opt out or object before any money is distributed.

Google’s Response And The Broader Stakes

A Google spokesperson has said the company disputes the allegations but chose to settle in order to move forward, while reiterating its public commitment to fair pay, hiring, and advancement for all employees. The company has emphasized ongoing internal audits and equity initiatives, though plaintiffs argue those efforts did not prevent or correct the disparities outlined in the lawsuit.

For many observers, the exclusion of Black workers from the settlement highlights the legal and strategic complexities of class‑action discrimination cases, especially in large, diverse workplaces. The outcome of the remaining lawsuit brought on behalf of Black employees, alongside this $28 million deal, will help define how one of the world’s most powerful tech companies is held accountable for alleged racial inequities in pay and promotion.


Business

Luana Lopes Lara: How a 29‑Year‑Old Became the Youngest Self‑Made Woman Billionaire



At just 29, Luana Lopes Lara has taken a title that usually belongs to pop stars and consumer‑app founders.

Multiple business outlets now recognize her as the world’s youngest self‑made woman billionaire, after her company Kalshi hit an 11 billion dollar valuation in a new funding round.

That round, a 1 billion dollar Series E led by Paradigm with Sequoia Capital, Andreessen Horowitz, CapitalG and others participating, instantly pushed both co‑founders into the three‑comma club. Estimates place Luana’s personal stake at roughly 12 percent of Kalshi, valuing her net worth at about 1.3 billion dollars—wealth tied directly to equity she helped create rather than inheritance.
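As a quick sanity check on those figures, the article’s own estimates of a roughly 12 percent stake and an 11 billion dollar valuation are consistent with the reported net worth:

\[ 0.12 \times \$11\ \text{billion} \approx \$1.32\ \text{billion} \]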

Kalshi itself is a big part of why her ascent matters.

Founded in 2019, the New York–based company runs a federally regulated prediction‑market exchange where users trade yes‑or‑no contracts on real‑world events, from inflation reports to elections and sports outcomes.
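For readers unfamiliar with event contracts, here is a minimal sketch of how a binary yes-or-no contract of this kind settles. The prices, quantities, and function name are illustrative assumptions, not Kalshi’s actual interface; the key mechanic is that a contract trades between $0 and $1 and settles at $1 if the event occurs, $0 if it does not.

# Illustrative sketch (not Kalshi's API) of binary event-contract settlement.
def contract_payoff(entry_price: float, contracts: int, resolved_yes: bool) -> float:
    """Profit or loss on a YES position bought at `entry_price` per contract."""
    settlement = 1.0 if resolved_yes else 0.0  # contract settles at $1 or $0
    return contracts * (settlement - entry_price)

# Buying 100 YES contracts at an assumed price of $0.40 each:
print(contract_payoff(0.40, 100, resolved_yes=True))   # +60.0 if the event happens
print(contract_payoff(0.40, 100, resolved_yes=False))  # -40.0 if it does not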

As of late 2025, the platform has reached around 50 billion dollars in annualized trading volume, up more than a hundredfold from roughly 300 million the year before, according to figures cited in TechCrunch and other financial press. That hyper‑growth convinced investors that event contracts are more than a niche curiosity, and it is this conviction—expressed in billions of dollars of new capital—that turned Luana’s share of Kalshi into a billion‑dollar fortune almost overnight.

Her path to that point is unusually demanding even by founder standards. Luana grew up in Brazil and trained at the Bolshoi Theater School’s Brazilian campus, where reports say she spent up to 13 hours a day in class and rehearsal, competing for places in a program that accepts fewer than 3 percent of applicants. After a stint dancing professionally in Austria, she pivoted into academics, enrolling at the Massachusetts Institute of Technology to study computer science and mathematics and later completing a master’s in engineering.

During summers she interned at major firms including Bridgewater Associates and Citadel, gaining a front‑row view of how global macro traders constantly bet on future events—but without a simple, regulated way for ordinary people to do the same.

That realization shaped Kalshi’s founding thesis and ultimately her billionaire status. Together with co‑founder Tarek Mansour, whom she met at MIT, Luana spent years persuading lawyers and U.S. regulators that a fully legal event‑trading exchange could exist under commodities law. Reports say more than 60 law firms turned them down before one agreed to help, and the company then spent roughly three years in licensing discussions with the Commodity Futures Trading Commission before gaining approval. The payoff is visible in 2025’s numbers: an 11‑billion‑dollar valuation, a 1‑billion‑dollar fresh capital injection, and a founder’s stake that makes Luana Lopes Lara not just a compelling story but a data point in how fast wealth can now be created at the intersection of finance, regulation, and software.


Business

Harvard Grads Jobless? How AI & Ghost Jobs Broke Hiring


America’s job market is facing an unprecedented crisis—and nowhere is this more painfully obvious than at Harvard, the world’s gold standard for elite education. A stunning 25% of Harvard’s MBA class of 2025 remains unemployed months after graduation, the highest rate recorded in university history. The Ivy League dream has become a harsh wakeup call, and it’s sending shockwaves across the professional landscape.

Jobless at the Top: Why Graduates Can’t Find Work

For decades, a Harvard diploma was considered a golden ticket. Now, graduates send out hundreds of résumés, often from their parents’ homes, only to get ghosted or auto-rejected by machines. Only 30% of all 2025 graduates nationally have found full-time work in their field, and nearly half feel unprepared for the workforce. The old promise of “go to college, get a good job” is slipping away, even for the smartest and most driven.

Tech’s Iron Grip: ATS and AI Gatekeepers

Applicant tracking systems (ATS) and AI algorithms have become ruthless gatekeepers. If a résumé doesn’t perfectly match the keywords or formatting demanded by the bots, it never reaches human eyes. The age of human connection is gone—now, you’re just a data point to be sorted and discarded.
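To make that gatekeeping concrete, here is a minimal, hypothetical sketch of the kind of literal keyword screen described above. The keyword list, threshold, and function name are illustrative assumptions, not any real vendor’s system; actual ATS products vary widely. The point it demonstrates is that a strong résumé missing the exact phrases can be filtered out before a human ever sees it.

# Hypothetical keyword screen (illustrative only; not a real ATS product).
REQUIRED_KEYWORDS = {"python", "sql", "stakeholder management"}  # assumed posting terms

def passes_keyword_screen(resume_text: str, required: set[str], threshold: float = 0.8) -> bool:
    """Reject unless the résumé literally contains most of the required phrases."""
    text = resume_text.lower()
    hits = sum(1 for keyword in required if keyword in text)
    return hits / len(required) >= threshold

resume = "Harvard MBA with experience in data analysis and Python."
print(passes_keyword_screen(resume, REQUIRED_KEYWORDS))  # False: exact phrases missing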

AI screening has gone beyond basic qualifications. New tools “read” for inferred personality and tone, rejecting candidates for reasons they never see. Worse, up to half of online job listings may be fake—created simply to collect résumés, pad company metrics, or fulfill compliance without ever intending to fill the role.

The Experience Trap: Entry-Level Jobs Require Years

It’s not just Harvard grads who are hurting. Entry-level roles demand years of experience, unpaid internships, and portfolios that resemble a seasoned professional’s, not a fresh graduate’s. A bachelor’s degree, once the key to entry, is now just the price of admission. Overqualified candidates compete for underpaid jobs, often just to survive.

One Harvard MBA described applying to 1,000 jobs with no results. Companies, inundated by applications, are now so selective that only those who precisely “game the system” have a shot. This has fundamentally flipped the hiring pyramid: enormous demand for experience, shrinking chances for new entrants, and a brutal gauntlet for anyone not perfectly groomed by internships and coaching.

Burnout Before Day One

The cost is more than financial—mental health and optimism are collapsing among the newest generation of workers. Many come out of elite programs and immediately end up in jobs that don’t require degrees, or take positions far below their qualifications just to pay the bills. There’s a sense of burnout before careers even begin, trapping talent in a cycle of exhaustion, frustration, and disillusionment.

Cultural Collapse: From Relationships to Algorithms

What’s really broken? The culture of hiring itself. Companies have traded trust, mentorship, and relationships for metrics, optimizations, and cost-cutting. Managers no longer hire on potential—they rely on machines, rankings, and personality tests that filter out individuality and reward those who play the algorithmic game best.

AI has automated the very entry-level work that used to build careers—research, drafting, and analysis—and erased the first rung of the professional ladder for thousands of new graduates. The result is a workforce filled with people who know how to pass tests, not necessarily solve problems or drive innovation.

The Ghost Job Phenomenon

Up to half of all listings for entry-level jobs may be “ghost jobs”—positions posted online for optics, compliance, or future needs, but never intended for real hiring. This means millions of job seekers spend hours on applications destined for digital purgatory, further fueling exhaustion and cynicism.

Not Lazy—Just Locked Out

Despite the headlines, the new class of unemployed graduates is not lazy or entitled—they are overqualified, underleveraged, and battered by a broken process. Harvard’s brand means less to AI and ATS systems than the right keyword or résumé format. Human judgment has been sidelined; individuality is filtered out.

What’s Next? Back to Human Connection

Unless companies rediscover the value of human potential, mentorship, and relationships, the job search will remain a brutal numbers game—one that even the “best and brightest” struggle to win. The current system doesn’t just hurt workers—it holds companies back from hiring bold, creative talent who don’t fit perfect digital boxes.

Key Facts:

  • 25% of Harvard MBAs unemployed, highest on record
  • Only 30% of 2025 grads nationwide have jobs in their field
  • Nearly half of grads feel unprepared for real work
  • Up to 50% of entry-level listings are “ghost jobs”
  • AI and ATS have replaced human judgment at most companies

If you’ve felt this struggle—or see it happening around you—share your story in the comments. And make sure to subscribe for more deep dives on the reality of today’s economy and job market.

This is not just a Harvard problem. It’s a sign that America’s job engine is running on empty, and it’s time to reboot—before another generation is locked out.
