Business
New York Times–ChatGPT lawsuit poses new legal threats to artificial intelligence
The Hill, January 9, 2024
After a year of explosive growth, generative artificial intelligence (AI) may be facing its most significant legal threat yet from The New York Times.
The Times sued Microsoft and OpenAI, the company behind the popular ChatGPT tool, for copyright infringement shortly before the new year, alleging the companies impermissibly used millions of its articles to train their AI models.
The newspaper joins scores of writers and artists who have sued major technology companies in recent months for training AI on their copyrighted work without permission. Many of these lawsuits have hit road bumps in court.
However, experts believe The Times’s complaint is sharper than earlier AI-related copyright suits.
“I think they have learned from some of the previous losses,” Robert Brauneis, a professor of intellectual property law at the George Washington University Law School, told The Hill.
The Times lawsuit is “a little bit less scattershot in their causes of action,” Brauneis said.
“The attorneys here for the New York Times are careful to avoid just kind of throwing up everything against the wall and seeing what sticks there,” he added. “They’re really concentrated on what they think will stick.”
Transformation vs. Reproduction
Generative AI models require mass amounts of material for training. Large language models, like OpenAI’s ChatGPT and Microsoft’s Copilot, use the material they are trained on to predict what words are likely to follow a string of text to produce human-like responses.
Typically, these AI models are transformative in nature, said Shabbi Khan, co-chair for the Artificial Intelligence, Automation, and Robotics group at law firm Foley & Lardner.
“If you asked it a general query … it doesn’t do a search and find the right passage and just reproduce the passage,” Khan explained. “It will try to probabilistically create its own version of what needs to be said based on a pattern that it sort of picks up through parsing billions of words of content.”
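The next-word mechanism Khan describes can be illustrated with a toy bigram model, a deliberately simplified sketch: real large language models learn statistical patterns with neural networks over billions of parameters, not raw word counts, but the underlying idea of predicting a likely continuation from training data is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word most often seen after `word` during training."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny illustrative corpus; a real model trains on billions of words.
corpus = "the court ruled that the court may hear the case"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "court" follows "the" most often here
```

Note how the model does not store or retrieve whole passages; it only keeps statistics about what tends to follow what. The Times's allegation is that, at sufficient scale, such statistical models can nonetheless end up "memorizing" and reproducing long verbatim passages.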
However, in its suit against OpenAI and Microsoft, the Times alleges the AI models developed by the companies have “memorized” and can sometimes reproduce chunks of the newspaper’s articles.
“If individuals can access The Times’s highly valuable content through Defendants’ own products without having to pay for it and without having to navigate through The Times’s paywall, many will likely do so,” the lawsuit reads.
“Defendants’ unlawful conduct threatens to divert readers, including current and potential subscribers, away from The Times, thereby reducing the subscription, advertising, licensing, and affiliate revenues that fund The Times’s ability to continue producing its current level of groundbreaking journalism,” it adds.
In response to the lawsuit, an OpenAI spokesperson said in a statement that the company respects “the rights of content creators and owners” and is “committed to working with them to ensure they benefit from AI technology and new revenue models.”
Brauneis said some of the “most impressive” portions of the Times case are its repeated examples of the AI models simply regurgitating its material, nearly verbatim.
Earlier copyright lawsuits haven’t been able to show such direct reproductions of their material by the models, Khan noted.
In recent months, courts have dismissed claims from plaintiffs in similar lawsuits who argued that the outputs of particular AI models infringed on their copyright because they failed to show outputs that were substantially similar to their copyrighted work.
“I think [the Times] did a good job relative to what maybe other complaints have been put out in the past,” Khan told The Hill. “They provided multiple examples of basically snippets and quite frankly more than snippets, passages of the New York Times as reproductions.”
Khan suggested the court could decide that particular use cases of generative AI are not transformative enough and require companies to limit certain prompts or outputs to prevent AI models from reproducing copyrighted content.
While Brauneis similarly noted the issue could result in an injunction against the tech companies or damages for the Times, he also emphasized it is not an unsolvable issue for generative AI.
“I think that the companies will respond to that and develop filters that dramatically reduce the incidence of that kind of output,” he said. “So, I don’t think that’s a long-term, huge problem for these companies.”
In an October response to an inquiry from the U.S. Copyright Office, OpenAI said it had developed measures to reduce the likelihood of “memorization” or verbatim repetition by its AI models, including removing duplications from its training data and teaching its models to decline prompts aimed at reproducing copyrighted works.
The company noted, however, “Because of the multitude of ways a user may ask questions, ChatGPT may not be perfect at understanding and declining every request aimed at getting outputs that may include some part of content the model was trained on.”
The AI model is also equipped with output filters that can block potentially violative content that is generated despite other safeguards, OpenAI said.
OpenAI also emphasized in a statement on Monday that memorization is a “rare bug” and alleged that the Times “intentionally manipulated prompts” in order to get ChatGPT to regurgitate its articles.
“Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” the company said.
“Despite their claims, this misuse is not typical or allowed user activity, and is not a substitute for The New York Times,” it added. “Regardless, we are continually making our systems more resistant to adversarial attacks to regurgitate training data, and have already made much progress in our recent models.”
How the media, AI can shape each other
Carl Szabo, the vice president and general counsel of the tech industry group NetChoice, warned that lawsuits like the Times’ could stifle the industry.
“You’re gonna see a bunch of these efforts to kind of shakedown AI developers for money in a way that harms the public, that harms public access to information and kind of undermines the purpose of the Copyright Act, which is to promote human knowledge at the end of the day,” Szabo told The Hill.
Eventually, Khan said he thinks there will be a mechanism in place through which tech companies can obtain licenses to content, such as articles from the Times, for training their AI models.
OpenAI has already struck deals with The Associated Press and Axel Springer — a German media company that owns Politico, Business Insider and other publications — to use their content.
The Times also noted in its lawsuit that it reached out to Microsoft and OpenAI in April to raise intellectual property concerns and the possibility of an agreement, which OpenAI acknowledged in its statement about the case.
“Our ongoing conversations with the New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development,” a spokesperson said.
The OpenAI spokesperson added that the company is “hopeful that we will find a mutually beneficial way to work together.”
“I think most publishers will adopt that model because it provides for additional revenue to the company,” Khan told The Hill. “And we can see that because New York Times tried to enter into [an agreement]. So, there is a price that they’re willing to accept.”
Business
Google Accused Of Favoring White, Asian Staff As It Reaches $28 Million Deal That Excludes Black Workers

Google has tentatively agreed to a $28 million settlement in a California class‑action lawsuit alleging that white and Asian employees were routinely paid more and placed on faster career tracks than colleagues from other racial and ethnic backgrounds.
A Santa Clara County Superior Court judge has granted preliminary approval, calling the deal “fair” and noting that it could cover more than 6,600 current and former Google workers employed in the state between 2018 and 2024.

How The Discrimination Claims Emerged
The lawsuit was brought by former Google employee Ana Cantu, who identifies as Mexican and racially Indigenous and worked in people operations and cloud departments for about seven years. Cantu alleges that despite strong performance, she remained stuck at the same level while white and Asian colleagues doing similar work received higher pay, higher “levels,” and more frequent promotions.
Cantu’s complaint claims that Latino, Indigenous, Native American, Native Hawaiian, Pacific Islander, and Alaska Native employees were systematically underpaid compared with white and Asian coworkers performing substantially similar roles. The suit also says employees who raised concerns about pay and leveling saw raises and promotions withheld, reinforcing what plaintiffs describe as a two‑tiered system inside the company.
Why Black Employees Were Left Out
Cantu’s legal team ultimately agreed to narrow the class to employees whose race and ethnicity were “most closely aligned” with hers, a condition that cleared the path to the current settlement.

The judge noted that Black employees were explicitly excluded from the settlement class after negotiations, meaning they will not share in the $28 million payout even though they were named in earlier versions of the case. Separate litigation on behalf of Black Google employees alleging racial bias in pay and promotions remains pending, leaving their claims to be resolved in a different forum.
What The Settlement Provides
Of the $28 million total, about $20.4 million is expected to be distributed to eligible class members after legal fees and penalties are deducted. Eligible workers include those in California who self‑identified as Hispanic, Latinx, Indigenous, Native American, American Indian, Native Hawaiian, Pacific Islander, and/or Alaska Native during the covered period.
Beyond cash payments, Google has also agreed to take steps aimed at addressing the alleged disparities, including reviewing pay and leveling practices for racial and ethnic gaps. The settlement still needs final court approval at a hearing scheduled for later this year, and affected employees will have a chance to opt out or object before any money is distributed.
Google’s Response And The Broader Stakes
A Google spokesperson has said the company disputes the allegations but chose to settle in order to move forward, while reiterating its public commitment to fair pay, hiring, and advancement for all employees. The company has emphasized ongoing internal audits and equity initiatives, though plaintiffs argue those efforts did not prevent or correct the disparities outlined in the lawsuit.
For many observers, the exclusion of Black workers from the settlement highlights the legal and strategic complexities of class‑action discrimination cases, especially in large, diverse workplaces. The outcome of the remaining lawsuit brought on behalf of Black employees, alongside this $28 million deal, will help define how one of the world’s most powerful tech companies is held accountable for alleged racial inequities in pay and promotion.
Business
Luana Lopes Lara: How a 29‑Year‑Old Became the Youngest Self‑Made Woman Billionaire

At just 29, Luana Lopes Lara has taken a title that usually belongs to pop stars and consumer‑app founders.
Multiple business outlets now recognize her as the world’s youngest self‑made woman billionaire, after her company Kalshi hit an $11 billion valuation in a new funding round.
That round, a $1 billion Series E led by Paradigm with Sequoia Capital, Andreessen Horowitz, CapitalG and others participating, instantly pushed both co‑founders into the three‑comma club. Estimates place Luana’s personal stake at roughly 12 percent of Kalshi, putting her net worth at about $1.3 billion—wealth tied directly to equity she helped create rather than inheritance.

Kalshi itself is a big part of why her ascent matters.
Founded in 2019, the New York–based company runs a federally regulated prediction‑market exchange where users trade yes‑or‑no contracts on real‑world events, from inflation reports to elections and sports outcomes.
As of late 2025, the platform has reached around $50 billion in annualized trading volume, a jump of more than a hundredfold from roughly $300 million the year before, according to figures cited in TechCrunch and other financial press. That hyper‑growth convinced investors that event contracts are more than a niche curiosity, and it is this conviction—expressed in billions of dollars of new capital—that turned Luana’s share of Kalshi into a billion‑dollar fortune almost overnight.
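The yes‑or‑no contracts Kalshi trades have simple settlement arithmetic, sketched below under the common convention that each contract settles at $1.00 if the chosen side is correct and $0 otherwise. This is an illustrative simplification: Kalshi’s actual contract terms, tick sizes, and fees differ.

```python
def contract_pnl(side, price_cents, outcome_yes, contracts=1):
    """Profit or loss, in dollars, for a binary event contract.

    side: "yes" or "no" (which side of the event you bought)
    price_cents: purchase price per contract, between 1 and 99
    outcome_yes: True if the event occurred, False otherwise
    Each contract settles at $1.00 if your side was right, $0 if not.
    """
    cost = price_cents / 100 * contracts
    won = (side == "yes") == outcome_yes
    payout = 1.00 * contracts if won else 0.0
    return round(payout - cost, 2)

# Buy 10 "yes" contracts at 60 cents each and the event happens:
# payout 10 * $1.00 minus cost 10 * $0.60 = $4.00 profit.
print(contract_pnl("yes", 60, True, 10))
```

Because a price of, say, 60 cents implies the market collectively rates the event at roughly a 60 percent probability, these prices double as crowd forecasts, which is why the exchanges are called prediction markets.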
Her path to that point is unusually demanding even by founder standards. Luana grew up in Brazil and trained at the Bolshoi Theater School’s Brazilian campus, where reports say she spent up to 13 hours a day in class and rehearsal, competing for places in a program that accepts fewer than 3 percent of applicants. After a stint dancing professionally in Austria, she pivoted into academics, enrolling at the Massachusetts Institute of Technology to study computer science and mathematics and later completing a master’s in engineering.
During summers she interned at major firms including Bridgewater Associates and Citadel, gaining a front‑row view of how global macro traders constantly bet on future events—but without a simple, regulated way for ordinary people to do the same.

That realization shaped Kalshi’s founding thesis and ultimately her billionaire status. Together with co‑founder Tarek Mansour, whom she met at MIT, Luana spent years persuading lawyers and U.S. regulators that a fully legal event‑trading exchange could exist under commodities law. Reports say more than 60 law firms turned them down before one agreed to help, and the company then spent roughly three years in licensing discussions with the Commodity Futures Trading Commission before gaining approval. The payoff is visible in 2025’s numbers: an 11‑billion‑dollar valuation, a 1‑billion‑dollar fresh capital injection, and a founder’s stake that makes Luana Lopes Lara not just a compelling story but a data point in how fast wealth can now be created at the intersection of finance, regulation, and software.
Business
Harvard Grads Jobless? How AI & Ghost Jobs Broke Hiring

America’s job market is facing an unprecedented crisis—and nowhere is this more painfully obvious than at Harvard, the world’s gold standard for elite education. A stunning 25% of Harvard’s MBA class of 2025 remains unemployed months after graduation, the highest rate recorded in university history. The Ivy League dream has become a harsh wakeup call, and it’s sending shockwaves across the professional landscape.

Jobless at the Top: Why Graduates Can’t Find Work
For decades, a Harvard diploma was considered a golden ticket. Now, graduates send out hundreds of résumés, often from their parents’ homes, only to get ghosted or auto-rejected by machines. Only 30% of all 2025 graduates nationally have found full-time work in their field, and nearly half feel unprepared for the workforce. “Go to college, get a good job”—that promise is slipping away, even for the smartest and most driven.
Tech’s Iron Grip: ATS and AI Gatekeepers
Applicant tracking systems (ATS) and AI algorithms have become ruthless gatekeepers. If a résumé doesn’t perfectly match the keywords or formatting demanded by the bots, it never reaches human eyes. The age of human connection is gone—now, you’re just a data point to be sorted and discarded.
AI screening has gone beyond basic qualifications. New tools “read” for inferred personality and tone, rejecting candidates for reasons they never see. Worse, up to half of online job listings may be fake—created simply to collect résumés, pad company metrics, or fulfill compliance without ever intending to fill the role.
The Experience Trap: Entry-Level Jobs Require Years
It’s not just Harvard grads who are hurting. Entry-level roles demand years of experience, unpaid internships, and portfolios that resemble a seasoned professional, not a fresh graduate. A bachelor’s degree, once the key to entry, is now just the price of admission. Overqualified candidates compete for underpaid jobs, often just to survive.
One Harvard MBA described applying to 1,000 jobs with no results. Companies, inundated by applications, are now so selective that only those who precisely “game the system” have a shot. This has fundamentally flipped the hiring pyramid: enormous demand for experience, shrinking chances for new entrants, and a brutal gauntlet for anyone not perfectly groomed by internships and coaching.
Burnout Before Day One
The cost is more than financial—mental health and optimism are collapsing among the newest generation of workers. Many come out of elite programs and immediately end up in jobs that don’t require degrees, or take positions far below their qualifications just to pay the bills. There’s a sense of burnout before careers even begin, trapping talent in a cycle of exhaustion, frustration, and disillusionment.
Cultural Collapse: From Relationships to Algorithms
What’s really broken? The culture of hiring itself. Companies have traded trust, mentorship, and relationships for metrics, optimizations, and cost-cutting. Managers no longer hire on potential—they rely on machines, rankings, and personality tests that filter out individuality and reward those who play the algorithmic game best.
AI has automated the very entry-level work that used to build careers—research, drafting, and analysis—and erased the first rung of the professional ladder for thousands of new graduates. The result is a workforce filled with people who know how to pass tests, not necessarily solve problems or drive innovation.
The Ghost Job Phenomenon
Up to half of all listings for entry-level jobs may be “ghost jobs”—positions posted online for optics, compliance, or future needs, but never intended for real hiring. This means millions of job seekers spend hours on applications destined for digital purgatory, further fueling exhaustion and cynicism.
Not Lazy—Just Locked Out
Despite the headlines, the new class of unemployed graduates is not lazy or entitled—they are overqualified, underleveraged, and battered by a broken process. Harvard’s brand means less to AI and ATS systems than the right keyword or résumé format. Human judgment has been sidelined; individuality is filtered out.

What’s Next? Back to Human Connection
Unless companies rediscover the value of human potential, mentorship, and relationships, the job search will remain a brutal numbers game—one that even the “best and brightest” struggle to win. The current system doesn’t just hurt workers—it holds companies back from hiring bold, creative talent who don’t fit perfect digital boxes.
Key Facts:
- 25% of Harvard MBAs unemployed, highest on record
- Only 30% of 2025 grads nationwide have jobs in their field
- Nearly half of grads feel unprepared for real work
- Up to 50% of entry-level listings are “ghost jobs”
- AI and ATS have replaced human judgment at most companies
This is not just a Harvard problem. It’s a sign that America’s job engine is running on empty, and it’s time to reboot—before another generation is locked out.