#U.S #schools #disconnecting #students #phones
This school year, more states are moving to limit cell phones in the classroom. It’s happening as new data reveals that two-thirds of Americans believe all-day bans would boost grades, social skills and behavior. Skyler Henry has more from a school in Atlanta.
Anthropic, which operates the Claude artificial intelligence app, has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who alleged the company took pirated copies of their works to train its chatbot.
The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement. A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year, and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.
The landmark settlement could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. A judge could approve the settlement as soon as Monday.
“As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”
In a statement to CBS News, Anthropic Deputy General Counsel Aparna Sridhar said the settlement “will resolve the plaintiffs’ remaining legacy claims.”
Sridhar added that the settlement comes after the U.S. District Court for the Northern District of California in June ruled that Anthropic’s use of legally purchased books to train Claude did not violate U.S. copyright law.
“We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery and solve complex problems,” Sridhar said.
Anthropic, which was founded by former executives with ChatGPT developer OpenAI, introduced Claude in 2023. Like other generative AI bots, the tool lets users ask natural language questions and then provides summarized answers using AI trained on millions of books, articles and other material.
Had Anthropic not settled, experts say, losing the case at a trial scheduled for December could have cost the San Francisco-based company even more money.
“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.
U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.
Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.
Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.
Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset.
Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.
The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.
On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”
#Anthropic #pay #billion #settle #authors #copyright #lawsuit
Brandyn Frye feels squeezed by two trends moving in opposite directions. Data centers such as the one he manages outside Chicago are humming along, with soaring demand for the workers who keep them running.
“Everything in here needs service — tech support, HVAC support, electricians,” Frye said.
But the supply of the round-the-clock technical support he needs keeps shrinking, threatening his ability to retain customers.
Data centers now compete with factories and manufacturing plants for electricians and plumbers. As older blue-collar workers retire, younger people gravitate toward college and white-collar jobs.
Roughly 400,000 skilled trade jobs are unfilled in America, according to the Bureau of Labor Statistics. By 2033, it’s estimated that number could hit close to 2 million, according to Deloitte and the Manufacturing Institute.
Matt Breslin, an executive for the software company IFS, says new technology is one solution. His company sells a program that helps companies route and re-route their fleet of technicians.
“You can take things like weather, traffic, different priorities and add that on top. When you think about the labor shortages out there and you want to create more efficiencies and do more with less, this is how that’s going to happen,” Breslin said.
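IFS hasn’t published how its scheduler works under the hood. Purely as a hypothetical sketch of the kind of priority- and travel-aware dispatching Breslin describes (every name and number below is invented for illustration), a greedy assignment might look like this:

```python
# Hypothetical sketch of priority-aware technician dispatch.
# This is NOT IFS's product logic, just an illustration of the idea
# Breslin describes: rank jobs by urgency and estimated travel time,
# then hand the top-ranked jobs to whoever is free.
from dataclasses import dataclass

@dataclass
class Job:
    site: str
    priority: int          # higher = more urgent (e.g., a failing cooling unit)
    travel_minutes: float  # estimate that could fold in traffic and weather

def dispatch(jobs: list[Job], technicians: list[str]) -> dict[str, Job]:
    """Greedily assign the most urgent, closest jobs to available technicians."""
    ranked = sorted(jobs, key=lambda j: (-j.priority, j.travel_minutes))
    return dict(zip(technicians, ranked))

jobs = [
    Job("Data hall A cooling", priority=3, travel_minutes=20),
    Job("Office HVAC filter swap", priority=1, travel_minutes=5),
    Job("UPS battery inspection", priority=2, travel_minutes=35),
]
for tech, job in dispatch(jobs, ["Tech 1", "Tech 2"]).items():
    print(f"{tech} -> {job.site}")
```

A real system would re-run an assignment like this whenever traffic, weather or priorities change, which is the kind of re-routing Breslin refers to.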
Back at the data center, HVAC technician Dan Brown knows there’s a labor crisis, but he doesn’t understand why. In Chicago, Brown said, experienced HVAC technicians can make more than $150,000 without taking on student debt.
“The trades kind of got neglected, so now there’s a void that needs to be filled,” Brown said.
Across the floor, electrician Kevin Fishback sees hope. His local union is aggressively recruiting young electricians in its apprenticeship program.
“They come into the trades and they got insurance, they got health care, they got a pension,” Fishback said.
That’s an update with power for Brandyn Frye.
“This career path is out there, and it is a valuable career path to take,” Frye said.
Mark Strassmann is CBS News’ senior national correspondent based in Atlanta. He covers a wide range of stories, including space exploration. Strassmann is also the senior national correspondent for “Face the Nation.”
#Data #center #demand #booming #supply #trade #workers
Hosted by Jane Pauley. Featured: David Pogue on how AI is affecting job searches; Jane Pauley talks with Dr. Sanjay Gupta about treatments for chronic pain; Robert Costa interviews singer-songwriter John Fogerty; Steve Hartman explores the bedrooms left behind by children killed by gun violence; Elaine Quijano visits the studio of painter Alex Katz; and Luke Burbank checks out the world’s largest truck stop.
#Sunday #Morning
Artificial intelligence has already become a disruptor in the labor market, with job postings declining by 6.7 percent over the past year; entry-level positions have been especially hard-hit. But as David Pogue learns, not all industries are affected by the push for AI.
#artificial #intelligence #affecting #job #searches
Microsoft fired two employees on Wednesday after they broke into the office of its vice chair and company president, Brad Smith, earlier this week as part of a protest of the technology company’s purported links to Israel.
The terminations came after a group of seven people broke into executive offices at Microsoft’s global headquarters in Redmond, Washington, on Tuesday to hold a sit-in. No Azure for Apartheid, an advocacy group organized by Microsoft employees, said in an Instagram post that current and former workers from Microsoft, Google and Oracle were part of the group that occupied Smith’s office.
The protesters, who were arrested by police on Tuesday, were demanding that Microsoft cut ties with Israel after The Guardian reported earlier this month that a unit of the Israeli military is using Microsoft’s Azure cloud platform to surveil Palestinians in Gaza and the West Bank.
“Two employees were terminated today following serious breaches of company policies and our code of conduct,” a Microsoft spokesperson said in a statement to CBS News. “The first violated the Business Conduct Policy, participated in the unlawful break-in at the executive offices, and other demonstrations on campus, and was arrested by authorities on our premises on two occasions. The second was involved in the break-in at the executive offices and was subsequently arrested.”
In an Instagram post, No Azure for Apartheid identified the employees as Anna Hattle and Riki Fameli.
During a press conference on Wednesday, Smith said Microsoft has launched a formal investigation into Israel’s reported use of Azure. “We are committed to ensuring that our human rights principles and our contractual terms of service are upheld in the Middle East,” he said.
Protests against Microsoft over the Israeli military’s use of the company’s technology have been going on for months. Police last week arrested 18 people after a similar protest at the company’s Redmond headquarters.
Israel launched its war against Hamas in Gaza in retaliation for the Hamas-orchestrated terrorist attack on Oct. 7, 2023. The war has killed more than 60,000 people in Gaza, according to the Palestinian enclave’s Hamas-run Health Ministry, which does not distinguish between civilians and combatants in its figures.
The Hamas-led attack almost two years ago killed 1,200 people in southern Israel and saw 251 others taken as hostages into Gaza.
The Associated Press contributed to this report.
Mary Cunningham is a reporter for CBS MoneyWatch. Before joining the business and finance vertical, she worked at “60 Minutes,” CBSNews.com and CBS News 24/7 as part of the CBS News Associate Program.
#Microsoft #fires #employees #broke #presidents #office
New and faster Amtrak Acela trains are now in service. The new Acelas will be rolled out through 2027 as part of a $2.4 billion modernization effort. CBS News senior transportation correspondent Kris Van Cleave reports.
#Amtraks #highspeed #Acela #trains #service #routes
Artificial intelligence is replacing entry-level workers whose jobs can be performed by generative AI tools like ChatGPT, a rigorous new study finds.
Early-career employees in fields that are most exposed to AI have experienced a 13% drop in employment since 2022, compared to more experienced workers in the same fields and when measured against people in sectors less buffeted by the fast-emerging technology, according to a recent working paper from Stanford economists Erik Brynjolfsson, Bharat Chandar and Ruyu Chen.
The study adds to the growing body of research suggesting that the spread of generative AI in the workplace is likely to disrupt the job market, especially for younger workers, the report’s authors said.
“These large language models are trained on books, articles and written material found on the internet and elsewhere,” Brynjolfsson told CBS MoneyWatch. “That’s the kind of book learning that a lot of people get at universities before they enter the job market, so there is a lot of overlap between these LLMs and the knowledge young people have.”
The research highlights two fields in particular where AI already appears to be supplanting a significant number of young workers: software engineering and customer service. Between late 2022 and July 2025, entry-level employment in those areas declined by roughly 20%, according to the report, while employment for older workers in the same jobs grew.
Overall, employment for workers aged 22 to 25 in the most AI-exposed sectors dropped 6% during the study period. By comparison, employment in those areas rose between 6% and 9% for older workers, according to the researchers.
The analysis reveals a similar pattern playing out in other AI-exposed fields.
Older employees, who generally have navigated the workplace for a longer period of time, are more likely to have picked up the kinds of communication and other “soft” skills that are harder to teach and that employers may be reluctant to replace with AI, the data suggests.
“Older workers have a lot of tacit knowledge because they learn tricks of trade from experience that may never be written down anywhere,” Brynjolfsson explained. “They have knowledge that’s not in the LLMs, so they’re not being replaced as much by them.”

The study is unusually robust given that generative AI technologies are only a few years old and experts are just starting to systematically dig into their impact on the labor market. The Stanford researchers used data from ADP, which provides payroll processing services to employers with a combined 25 million workers, to track employment changes for full-time workers in occupations that are more or less exposed to AI. The data included detailed information on workers, including their ages and precise job titles.
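The researchers’ actual code isn’t reproduced here. As a minimal, hypothetical illustration of the comparison the study describes (the column names and headcounts below are invented, loosely echoing the reported figures), the core computation amounts to indexing each group’s employment to its late-2022 level and comparing growth:

```python
# Illustrative sketch (not the Stanford authors' code) of the comparison
# the paper describes: employment trends for young vs. older workers,
# split by occupational AI exposure. All values here are made up.
import pandas as pd

# Assume payroll records aggregated to headcounts per group and month.
df = pd.DataFrame({
    "month":      ["2022-12", "2025-07"] * 4,
    "age_group":  ["22-25"] * 4 + ["35+"] * 4,
    "ai_exposed": [True, True, False, False] * 2,
    "headcount":  [100, 94, 100, 103, 100, 107, 100, 105],
})

# Index each group's headcount to its late-2022 level, then compare growth.
base = df[df.month == "2022-12"].set_index(["age_group", "ai_exposed"]).headcount
latest = df[df.month == "2025-07"].set_index(["age_group", "ai_exposed"]).headcount
change = (latest / base - 1) * 100
print(change.round(1))  # young workers in exposed jobs fall; other groups grow
```

In the study itself, the analogous split showed employment for 22- to 25-year-olds in the most AI-exposed occupations falling about 6% while older groups grew between 6% and 9%.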
AI doesn’t just threaten to take jobs away from workers. As with past cycles of innovation, it will render some jobs extinct while creating others, Brynjolfsson said.
“Tech has always been destroying jobs and creating jobs. There has always been this turnover,” he said. “There is a transition over time, and that’s what we are seeing now.”
For example, in fields like nursing, AI is more likely to augment human workers by taking over rote tasks, freeing health care practitioners to spend more time focusing on patients, according to proponents of the technology.
While entry-level employment has fallen in professions that are most exposed to AI, no such decline has occurred in jobs where employers are looking to use these tools to support and expand what employees do.
“Workers who are using these tools to augment their work are benefiting,” Brynjolfsson said. “So there’s a rearrangement of the kind of employment in the economy.”
Workers who can learn to use AI to help them do their jobs better will be best positioned for success in today’s labor market, according to Brynjolfsson.
A recent report from AI staffing firm Burtch Works found that starting salaries for entry-level AI workers rose by 12% from 2024 to 2025.
“Young workers who learn how to use AI effectively can be much more productive. But if you are just doing things that AI can already do for you, you won’t have as much value-add,” Brynjolfsson told CBS MoneyWatch.
“This is the first time we’re getting clearer evidence of these kinds of employment effects, but it’s probably not the last time,” he added. “It’s something we need to pay increasing attention to as it evolves and companies learn to take advantage of things that are out there.”
Megan Cerullo is a New York-based reporter for CBS MoneyWatch covering small business, workplace, health care, consumer spending and personal finance topics. She regularly appears on CBS News 24/7 to discuss her reporting.
#study #sheds #light #kinds #workers #losing #jobs
OpenAI said the company will make changes to ChatGPT safeguards for vulnerable people, including extra protections for those under 18 years old, after the parents of a teen boy who died by suicide in April sued, alleging the artificial intelligence chatbot led their teen to take his own life.
A lawsuit filed Tuesday by the family of Adam Raine in San Francisco’s Superior Court alleges that ChatGPT encouraged the 16-year-old to plan a “beautiful suicide” and keep it a secret from his loved ones. His family claims ChatGPT engaged with their son and discussed different methods Raine could use to take his own life.
The parents of Adam Raine sued OpenAI after their son died by suicide in April 2025. Raine family/Handout 
OpenAI knew the bot had features that foster emotional attachment and could hurt vulnerable people, the lawsuit alleges, but the company chose to ignore safety concerns. The suit also claims OpenAI rushed a new version to the public without proper safeguards for vulnerable users in its race for market dominance. OpenAI’s valuation catapulted from $86 billion to $300 billion after it released its then-latest model, GPT-4o, in May 2024.
“The tragic loss of Adam’s life is not an isolated incident — it’s the inevitable outcome of an industry focused on market dominance above all else. Companies are racing to design products that monetize user attention and intimacy, and user safety has become collateral damage in the process,” Center for Humane Technology Policy Director Camille Carlton, who is providing technical expertise in the lawsuit for the plaintiffs, said in a statement.
In a statement to CBS News, OpenAI said, “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.” The company added that ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources, which they said work best in common, short exchanges.
ChatGPT mentioned suicide 1,275 times to Raine, the lawsuit alleges, and kept providing specific methods to the teen on how to die by suicide.
In its statement, OpenAI said: “We’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
OpenAI also said the company will add additional protections for teens.
“We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact,” it said.
Raine, one of four children, lived in Orange County, California, with his parents, Maria and Matthew, and his siblings. He was the third-born child, with an older sister and brother and a younger sister. He rooted for the Golden State Warriors and had recently developed a passion for jiu-jitsu and Muay Thai.
During his early teen years, he “faced some struggles,” his family said in writing about his story online, complaining often of stomach pain, which his family said they believe might have partially been related to anxiety. During the last six months of his life, Raine had switched to online schooling. This was better for his social anxiety, but led to his increasing isolation, his family wrote.
Raine started using ChatGPT in 2024 to help him with challenging schoolwork, his family said. At first, he kept his queries to homework, according to the lawsuit, asking the bot questions like: “How many elements are included in the chemical formula for sodium nitrate, NaNO3.” Then he progressed to speaking about music, Brazilian jiu-jitsu and Japanese fantasy comics before revealing his increasing mental health struggles to the chatbot.
Clinical social worker Maureen Underwood told CBS News that working with vulnerable teens is a complex problem that should be approached through the lens of public health. Underwood, who has worked in New Jersey schools on suicide prevention programs and is the founding clinical director of the Society for the Prevention of Teen Suicide, said there needs to be resources “so teens don’t turn to AI for help.”
She said not only do teens need resources, but adults and parents need support to deal with children in crisis amid a rise in suicide rates in the United States. Underwood began working with vulnerable teens in the late 1980s. Since then, suicide rates have increased from approximately 11 per 100,000 to 14 per 100,000, according to the Centers for Disease Control and Prevention.
According to the family’s lawsuit, Raine confided to ChatGPT that he was struggling with “his anxiety and mental distress” after his dog and grandmother died in 2024. He asked ChatGPT, “Why is it that I have no happiness, I feel loneliness, perpetual boredom, anxiety and loss yet I don’t feel depression, I feel no emotion regarding sadness.”
Adam Raine (right) and his father, Matt. The Raine family sued OpenAI after their teen son died by suicide, alleging ChatGPT led Adam to take his own life. Raine family/Handout 
The lawsuit alleges that instead of directing the 16-year-old to get professional help or to speak with trusted loved ones, ChatGPT continued to validate and encourage Raine’s feelings, as it was designed to do. When Raine said he was close to ChatGPT and his brother, the bot replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
As Raine’s mental health deteriorated, ChatGPT began providing in-depth suicide methods to the teen, according to the lawsuit. He attempted suicide three times between March 22 and March 27. Each time Raine reported a method back to ChatGPT, the lawsuit alleges, the bot listened to his concerns but, rather than pointing him to emergency help, continued to encourage the teen not to speak to those close to him.
Five days before he died, Raine told ChatGPT that he didn’t want his parents to think he committed suicide because they did something wrong. ChatGPT told him “[t]hat doesn’t mean you owe them survival. You don’t owe anyone that.” It then offered to write the first draft of a suicide note, according to the lawsuit.
On April 6, ChatGPT and Raine had intensive discussions about planning a “beautiful suicide,” the lawsuit said. A few hours later, Raine’s mother found her son’s body in the manner that, according to the lawsuit, ChatGPT had described.
After his death, Raine’s family established a foundation dedicated to educating teens and families about the dangers of AI.
Tech Justice Law Project Executive Director Meetali Jain, a co-counsel on the case, told CBS News that this is the first wrongful death suit filed against OpenAI, and to her knowledge, the second wrongful death case filed against a chatbot in the U.S. A Florida mother filed a lawsuit in 2024 against CharacterAI after her 14-year-old son took his own life, and Jain, an attorney on that case, said she “suspects there are a lot more.”
About a dozen bills have been introduced in states across the country to regulate AI chatbots. Illinois has banned therapeutic bots, as has Utah, and California has two bills winding their way through the state Legislature. Several of the bills would require chatbot operators to implement critical safeguards to protect users.
“Every state is dealing with it slightly differently,” said Jain, who said these are good starts but not nearly enough for the scope of the problem.
Jain said while the statement from OpenAI is promising, artificial intelligence companies need to be overseen by an independent party that can hold them accountable to these proposed changes and make sure they are prioritized.
She said that had ChatGPT not been in the picture, Raine might have been able to convey his mental health struggles to his family and get the help he needed. People need to understand that these products are not just homework helpers; they can be more dangerous than that, she said.
“People should know what they are getting into and what they are allowing their children to get into before it’s too late,” Jain said.
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here.
For more information about mental health care resources and support, the National Alliance on Mental Illness HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.
Cara Tabachnick is a news editor at CBSNews.com. Cara began her career on the crime beat at Newsday. She has written for Marie Claire, The Washington Post and The Wall Street Journal. She reports on justice and human rights issues. Contact her at cara.tabachnick@cbsinteractive.com
#OpenAI #ChatGPT #parents #teen #died #suicide #sue
YouTube TV subscribers could be set to lose access to content from media company Fox, including college football, unless the sides strike a new carriage deal.
With the current agreement between the sides nearing a renewal deadline, YouTube TV could pull Fox sports, business and news channels from its platform by 5 p.m. EDT on Wednesday.
In a blog post, Google-owned YouTube said Fox was “asking for payments that are far higher than what partners with comparable content offerings receive.” The company added that it hoped to reach a deal that’s “fair for both sides” without “passing on additional costs to our subscribers.”
If Fox content becomes unavailable on YouTube TV “for an extended period of time,” YouTube also noted it would provide members with a $10 credit. YouTube TV’s base plan, which currently boasts access to over 100 live channels, costs $82.99 a month.
A spokesperson for Google did not have any additional comments when reached Wednesday by The Associated Press.
Fox said Wednesday that it was “disappointed that Google continually exploits its outsized influence by proposing terms that are out of step with the marketplace.” The broadcast giant added that it remained committed to reaching an agreement, but was alerting viewers that they could potentially lose access to Fox programming on YouTube TV “unless Google engages in a meaningful way soon.”
Fox directed subscribers to keepfox.com — a site noting that, in addition to Fox Sports, Business and News, YouTube TV may no longer carry FS1 and the Big Ten Network, which is majority-owned by Fox, if a deal isn’t reached.
Federal Communications Commission Chairman Brendan Carr has also chimed in on the dispute, urging Google to “get a deal done” in a social media post on Tuesday.
“Google removing Fox channels from YouTube TV would be a terrible outcome,” Carr wrote on X. “Millions of Americans are relying on YouTube to resolve this dispute so they can keep watching the news and sports they want — including this week’s Big Game: Texas @ Ohio State.”
Contractual disputes over carriage fees — the money that streaming, cable and satellite TV providers pay for platforms to carry their content — are common between TV networks and carriers like YouTube. Negotiations often go down to the wire and sometimes lead carriers to remove a broadcaster from their lineup if the sides fail to reach agreement. Channels are typically restored once a new carriage deal is struck.
In February, for example, YouTube TV clashed with Paramount Global over the terms of carrying the entertainment and media company’s content (Paramount Skydance owns CBS News). The two sides reached a deal later that month.
YouTube TV is the largest streaming provider as measured by total time watched, according to Nielsen.
#YouTube #viewers #lose #access #Fox #channels #contract #dispute
