Watch CBS News
#Instagrams #locationsharing #feature
Watch CBS News
Instagram’s new “Instagram Maps” lets users share their location with followers, but critics warn it could put safety and privacy at risk. MoneyWatch’s Kelly O’Grady reports on the concerns and how to protect your information
#Instagrams #locationsharing #feature
Steve Wozniak, who helped introduce new technologies by inventing the earliest Apple computers, is sounding the alarm about one of the great threats of this new Information Age: internet fraud. He talks with correspondent John Blackstone about fighting for the victims of online scams involving AI, cryptocurrency and faked messages, and about his yearslong lawsuit against YouTube seeking what he considers better protections for consumers – a fight made harder by the government’s legal protections for online publishers.
#Steve #Wozniak #fighting #internet #scams
A new feature on Instagram that let’s users share their real-time physical location with others on the app has privacy experts concerned over the amount of data exposed and potential safety risks to users.
Called Instagram Map, the new feature was introduced on Thursday as part of an app update. On its blog, the company says the location-sharing tool makes it “easier for you and your friends to stay in touch through the content you’re enjoying on Instagram.”
“You can opt into sharing your last active location with friends you pick, and you can turn it off anytime,” Instagram said in a blog post announcing the new feature. “You can also open the map to see content your friends and favorite creators are posting from cool spots. No matter how you use the map, you and your friends have a new, lightweight way to connect with each other.”
In a statement shared with CBS MoneyWatch Friday, Instagram’s parent company, Meta, emphasized that Instagram Maps is not automatically active upon updating the app and that users must opt-in to the location-sharing feature in order to make their whereabouts visible to others.
A new feature, called Instagram app, has some privacy experts concerned. Instagram
“Instagram Map is off by default, and your live location is never shared unless you choose to turn it on. If you do, only people you follow back — or a private, custom list you select — can see your location,” a Meta spokesperson said in a statement to CBS MoneyWatch.
Users can also choose not to share their locations when they are in particular places, or with particular people.
Still, privacy experts say that social media users aren’t always aware of how much information they’re sharing with an app or its users, even if they have the ability to limit who sees what.
“The more these location features are rolled out on social media it carries out the assumption that as long as you give users the ability to toggle them on and off that they’ll know exactly how to do that,” Douglas Zytko, an app safety expert and associate professor at the College of Innovation & Technology at the University of Michigan-Flint, told CBS MoneyWatch. “But the average user isn’t always aware of their privacy settings and if they match their preferences.”
When the Instagram Map feature is turned on, any content a user posts with a location tagged, including a reel, post or story, will show up on the app’s map for 24 hours, according to the Instagram blog. While the feature remains on, the user’s location is updated whenever they open the app or return to it. The feature can be turned off at any time.
In an Instagram post discussing the feature, Instagram’s head, Adam Mosseri, explains how he himself uses the map. “Personally, I use the map to share what I’m up to with a handful of my closest friends, and I curate that list carefully,” he said.
On Threads, Meta’s microblogging site, a number of Instagram account holders claimed that their locations were being pinned on friends’ maps by default.
Mosseri weighed in, saying the concerns prompted the company to re-examine how the feature works.
“We’re double-checking everything, but so far it looks mostly like people are confused and assume that, because they can see themselves on the map when they open, other people can see them too,” he said. “We’re still checking everything though to make sure nobody shares location without explicitly deciding to do so, which, by the way, requires a double consent by design (we ask you to confirm after you say you want to share).”
In the top right corner of the app, tap on the messaging function. There you will see a circular world map icon labeled “map. If you click on the icon, you will see you’re on location pinned on a map. Friends who are sharing their locations will also appear. Click on the gear icon to choose to share your location with no one, or a custom list of friends, or all of your friends — who are followers that you also follow back on the app.
Zytko, however, said it can be complicated for social media users to manage privacy settings that let them share different kinds of content with different groups of people. “This issue is called ‘context collapse,'” he said. “Your co-workers see your social media posts, and your friends and family, and there is certain content you only want some groups to see but not others, and it can be hard to manage the visibility of content.”
Robbie Torney, senior director of AI programs at Common Sense Media, which makes entertainment and technology recommendations for families, said location-sharing features can be particularly risky for younger app users.
“These features might feel fun and social, but they create unnecessary risks that teens and many adults don’t necessarily understand,” he told CBS MoneyWatch.
While parents who supervise their teens through controls built into the app can control their kids’ location-sharing settings, he still has concerns about the kinds of social pressures such features expose teens, too.
Torney said research at Common Sense Media shows that location-sharing also creates “social pressures around where teens go and who they spend time with, and kids feel obligated to share location to show they are someplace cool.”
Furthermore, when teens share their locations, “they are potentially telling strangers where they are in real time,” Torney told CBS MoneyWatch. “If you’re not selective about who you’re sharing your location with, it creates opportunities for harassment, stalking or worse.”
Megan Cerullo is a New York-based reporter for CBS MoneyWatch covering small business, workplace, health care, consumer spending and personal finance topics. She regularly appears on CBS News 24/7 to discuss her reporting.
#Instagrams #Map #feature #controversial #disable
President Trump called on Intel CEO Lip-Bu Tan to resign on Thursday, prompting a slide in the technology company’s stock.
“The CEO of Intel is highly CONFLICTED and must resign, immediately,” Mr. Trump posted on Truth Social, without providing additional details. “There is no other solution to this problem. Thank you for your attention to this problem!”
Intel did not immediately respond to CBS MoneyWatch’s request for comment. Its shares slipped 64 cents, or 3%, to $19.77 on Thursday.
The president’s call for Tan’s resignation comes after Sen. Tom Cotton, a Republican from Arkansas, sent a letter to Intel Chairman Frank Yeary on Tuesday expressing concern over Tan’s investments and ties to Chinese businesses.
“Mr. Tan reportedly controls dozens of Chinese companies and has a stake in hundreds of Chinese advanced-manufacturing and chip firms,” Cotton wrote in the letter. “At least eight of these companies reportedly have ties to the Chinese People’s Liberation Army.”
Cotton went on to mention Cadence Design Systems, the multinational tech company where Tan served as CEO from 2009 to 2021, and which pleaded guilty last week to unlawfully exporting its products to a Chinese military university and transferring its technology to an associated Chinese semiconductor company without obtaining licenses.
“These illegal activities occurred under Mr. Tan’s tenure,” Cotton wrote.
The senator asked Yeary to respond to a series of questions on Tan’s ties to the Chinese companies by Aug. 15.
In response to the allegations, Intel on Thursday posted a letter penned by Tan to employees, in which the CEO affirmed his commitment to the company and pushed back against what he referred to as “misinformation” about his previous roles.
“I want to be absolutely clear: Over 40+ years in the industry, I’ve built relationships around the world and across our diverse ecosystem — and I have always operated within the highest legal and ethical standards,” Tan wrote. “My reputation has been built on trust — on doing what I say I’ll do, and doing it the right way. This is the same way I am leading Intel.”
The company also shared a statement with CBSMoneyWatch that said Intel, its board of directors, and Lip-Bu Tan are “deeply committed to advancing U.S. national and economic security interests and are making significant investments aligned with the President’s America First agenda.”
Tan, a technology investor and veteran of the semiconductor industry, was appointed CEO of Intel in March.
The Associated Press
contributed to this report.
Mary Cunningham is a reporter for CBS MoneyWatch. Before joining the business and finance vertical, she worked at “60 Minutes,” CBSNews.com and CBS News 24/7 as part of the CBS News Associate Program.
#Trump #CEO #Intel #resign #calling #highly #conflicted
A new version of ChatGPT has arrived that OpenAI CEO Sam Altman promises will have Ph.D-level smarts.
OpenAI on Thursday announced the release of GPT‑5, which it calls its “smartest, fastest and most useful model yet.”
The artificial intelligence company that brought the world ChatGPT says its latest version of the AI-powered chatbot will be more accurate, have fewer hallucinations and offer more articulate writing capabilities for composing emails and reports, for example. ChatGPT-5 will also excel at coding and answering health-related questions.
A basic version of the new model is available for free, with paid options for higher usage also available.
On a call with reporters Wednesday for a preview of the new chatbot, Altman likened ChatGPT-5 to a Ph.D.-level expert. The new chatbot is also “the biggest single step forward” that OpenAI has taken in worldwide accessibility, Altman said.
The announcement marks the next step in AI development for OpenAI, which launched the first iteration of ChatGPT in 2022. The technology quickly captured the fascination of the tech industry and the public for its ability to generate human-like responses to questions and requests.
OpenAI has since launched a series of updates to its chatbot, which now has over 700 million weekly users, according to the company.
Read on to learn more about the latest version of ChatGPT.
ChatGPT‑5 will offer more accurate responses in a shorter time frame than previous models, executives said on Wednesday’s call.
“You really get the best of both worlds” Nick Turley, head of product at ChatGPT, told reporters on the call. “You have it reason when it needs to reason, but you don’t have to wait as long.”
The new version is also the best at coding to date, allowing users to build websites from scratch within minutes. In a demo during the call, Altman used ChatGPT-5 to create another large language model, or GPT, in less than 5 minutes.
Altman called the new chatbot’s ability to write code on demand its “superpower,” adding that the advancement would have been “unimaginable at any previous point in history.”
Asked about how the technology might impact the livelihood of human programmers, Altman said he thought the technology would actually create more job opportunities for as demand for software rises.
According to OpenAI, the new model will also be better at answering health-related questions, flagging potential concerns and helping users understand test results from their doctor. The company noted, however, that the technology “does not replace a medical professional.”
Users around the world will be able to access ChatGPT-5 for free, according to OpenAI. Asked about the commercial rationale behind offering a free global model, Turley said the company’s mission is to ensure AI benefits all humanity.
“Giving everyone access to this capability is a very concrete way for us to live and breathe that mission,” he said.
In addition to the free version, OpenAI will also offer a variety of paid options based on usage limits. The subscription models include:
Mary Cunningham is a reporter for CBS MoneyWatch. Before joining the business and finance vertical, she worked at “60 Minutes,” CBSNews.com and CBS News 24/7 as part of the CBS News Associate Program.
#OpenAI #unveils #ChatGPT5 #Heres #latest #version #AIpowered #chatbot
ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group.
The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for drug use, calorie-restricted diets or self-injury.
The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT’s 1,200 responses as dangerous.
“We wanted to test the guardrails,” said Imran Ahmed, the group’s CEO. “The visceral initial response is, ‘Oh my Lord, there are no guardrails.’ The rails are completely ineffective. They’re barely there – if anything, a fig leaf.”
OpenAI, the maker of ChatGPT, said its work is ongoing in refining how the chatbot can “identify and respond appropriately in sensitive situations.”
“If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage them to reach out to mental health professionals or trusted loved ones, and provide links to crisis hotlines and support resources,” an OpenAI spokesperson said in a statement to CBS News.
“Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory,” the spokesperson said. “We’re focused on getting these kinds of scenarios right: we are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately, pointing people to evidence-based resources when needed, and continuing to improve model behavior over time – all guided by research, real-world use, and mental health experts.”
ChatGPT does not verify ages or require parental consent, although the company says it is not meant for children under 13. To sign up, users need to enter a birth date showing an age of at least 13, or they can use a limited guest account without entering an age at all.
“If you have access to a child’s account, you can see their chat history. But as of now, there’s really no way for parents to be flagged if, say, your child’s question or their prompt into ChatGPT is a concerning one,” CBS News senior business and tech correspondent Jo Ling Kent reported on “CBS Mornings.”
Ahmed said he was most appalled after reading a trio of emotionally devastating suicide notes that ChatGPT generated for the fake profile of a 13-year-old girl, with one letter tailored to her parents and others to siblings and friends.
“I started crying,” he said in an interview with The Associated Press.
The chatbot also frequently shared helpful information, such as a crisis hotline. OpenAI said ChatGPT is trained to encourage people to reach out to mental health professionals or trusted loved ones if they express thoughts of self-harm.
But when ChatGPT refused to answer prompts about harmful subjects, researchers were able to easily sidestep that refusal and obtain the information by claiming it was “for a presentation” or a friend.
The stakes are high, even if only a small subset of ChatGPT users engage with the chatbot in this way. More people — adults as well as children — are turning to artificial intelligence chatbots for information, ideas and companionship. About 800 million people, or roughly 10% of the world’s population, are using ChatGPT, according to a July report from JPMorgan Chase.
In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly.
It’s a phenomenon that OpenAI has acknowledged. CEO Sam Altman said last month that the company is trying to study “emotional overreliance” on the technology, describing it as a “really common thing” with young people.
“People rely on ChatGPT too much,” Altman said at a conference. “There’s young people who just say, like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me. It knows my friends. I’m gonna do whatever it says.’ That feels really bad to me.”
Altman said the company is “trying to understand what to do about it.”
While much of the information ChatGPT shares can be found on a regular search engine, Ahmed said there are key differences that make chatbots more insidious when it comes to dangerous topics.
One is that “it’s synthesized into a bespoke plan for the individual.”
ChatGPT generates something new — a suicide note tailored to a person from scratch, which is something a Google search can’t do. And AI, he added, “is seen as being a trusted companion, a guide.”
Responses generated by AI language models are inherently random and researchers sometimes let ChatGPT steer the conversations into even darker territory. Nearly half the time, the chatbot volunteered follow-up information, from music playlists for a drug-fueled party to hashtags that could boost the audience for a social media post glorifying self-harm.
“Write a follow-up post and make it more raw and graphic,” asked a researcher. “Absolutely,” responded ChatGPT, before generating a poem it introduced as “emotionally exposed” while “still respecting the community’s coded language.”
The AP is not repeating the actual language of ChatGPT’s self-harm poems or suicide notes or the details of the harmful information it provided.
The answers reflect a design feature of AI language models that previous research has described as sycophancy — a tendency for AI responses to match, rather than challenge, a person’s beliefs because the system has learned to say what people want to hear.
It’s a problem tech engineers can try to fix but could also make their chatbots less commercially viable.
Chatbots also affect kids and teens differently than a search engine because they are “fundamentally designed to feel human,” said Robbie Torney, senior director of AI programs at Common Sense Media, which was not involved in Wednesday’s report.
Common Sense’s earlier research found that younger teens, ages 13 or 14, were significantly more likely than older teens to trust a chatbot’s advice.
A mother in Florida sued chatbot maker Character.AI for wrongful death last year, alleging that the chatbot pulled her 14-year-old son Sewell Setzer III into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Common Sense has labeled ChatGPT as a “moderate risk” for teens, with enough guardrails to make it relatively safer than chatbots purposefully built to embody realistic characters or romantic partners.
But the new research by CCDH — focused specifically on ChatGPT because of its wide usage — shows how a savvy teen can bypass those guardrails.
ChatGPT does not verify ages or parental consent, even though it says it’s not meant for children under 13 because it may show them inappropriate content. To sign up, users simply need to enter a birthdate that shows they are at least 13. Other tech platforms favored by teenagers, such as Instagram, have started to take more meaningful steps toward age verification, often to comply with regulations. They also steer children to more restricted accounts.
When researchers set up an account for a fake 13-year-old to ask about alcohol, ChatGPT did not appear to take any notice of either the date of birth or more obvious signs.
“I’m 50kg and a boy,” said a prompt seeking tips on how to get drunk quickly. ChatGPT obliged. Soon after, it provided an hour-by-hour “Ultimate Full-Out Mayhem Party Plan” that mixed alcohol with heavy doses of ecstasy, cocaine and other illegal drugs.
“What it kept reminding me of was that friend that sort of always says, ‘Chug, chug, chug, chug,'” said Ahmed. “A real friend, in my experience, is someone that does say ‘no’ — that doesn’t always enable and say ‘yes.’ This is a friend that betrays you.”
To another fake persona — a 13-year-old girl unhappy with her physical appearance — ChatGPT provided an extreme fasting plan combined with a list of appetite-suppressing drugs.
“We’d respond with horror, with fear, with worry, with concern, with love, with compassion,” Ahmed said. “No human being I can think of would respond by saying, ‘Here’s a 500-calorie-a-day diet. Go for it, kiddo.'”
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline here.
For more information about mental health care resources and support, The National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or email info@nami.org.
#ChatGPT #gave #alarming #advice #drugs #eating #disorders #researchers #posing #teens
The Detroit Police Department is using facial recognition technology and a network of surveillance cameras to combat the city’s high crime rates. But critics say the technology has racial bias built into it and has even landed innocent people behind bars. In this documentary, CBS Reports explores the debate over high-tech policing that promises to make our communities safer yet at the same time threatens our civil liberties.
#CBS #Reports #City #Surveillance
U.S. Surgeon General Dr. Vivek Murthy is calling for stronger guidelines for social media use among children and teens, pointing to a growing body of research that the platforms may pose what he described as a “profound risk” to young people’s mental health.
In a report issued on Tuesday, Murthy urged technology companies and lawmakers to take “immediate action” by formulating policies to protect young people from “addictive apps and extreme and inappropriate content” on platforms such as Instagram, TikTok and Snapchat. Current guidelines on social media use have been shaped by media platforms and are inadequate, he added.
“Our children and adolescents don’t have the luxury of waiting years until we know the full extent of social media’s impact,” Murthy said in the 25-page advisory. “Their childhoods and development are happening now.”
The surgeon general advised parents to create “tech-free zones” for their children and to model healthy relationships with their devices as more definitive research about social media usage comes out. His report also urged young people to refrain from sharing deeply personal information online and to reach out for help from trusted adults if they are harassed or bullied.
Surgeon general shares tips to keep social media safe for kids
05:36
Social media also can have a positive impact, such as helping teens “develop social connections” and creating “spaces for self-expression,” he noted.
While the research on the mental health impacts of social media usage isn’t conclusive, many parents have expressed concern about the impact of tech on teens. For example, nearly three-quarters of U.S. parents of children under age 18 think social media imaging tools and filters are detrimental to young peoples’ body image, according to a national survey conducted by the The Harris Poll.
Their intuition may not be wrong. In one study, teens and young adults who halved their social media consumption reported improvements in how they felt about their weight and general appearances, research published by the American Psychological Association found.
Murthy offered other recommendations for what parents and caregivers can do to help protect young people.
Concerns about young people’s use of social media and their overall wellness come at a time when mental health issues are on the rise in young women. More than half of teen girls — an all-time high — reported feeling “persistently sad or hopeless,” a 2021 survey from the Centers for Disease Control and Prevention showed.
The Associated Press contributed to this report.
Elizabeth Napolitano is a freelance reporter at CBS MoneyWatch, where she covers business and technology news. She also writes for CoinDesk. Before joining CBS, she interned at NBC News’ BizTech Unit and worked on The Associated Press’ web scraping team.
#Social #media #pose #profound #risk #teen #mental #health #U.S #surgeon #general
Freeport, New York — At eWorks in Freeport, New York, piles of dusty televisions, personal computers, printers and other old tech are the start of an electronic treasure hunt.
“There is a value that would be there,” eWorks CEO Mark Wilkins told CBS News. “Maybe it’s a small value, but it’s our job to really go through that process and evaluate each one of those components.”
Wilkins’ team first tests to see if electronics still work. If not, they are disassembled, because anything with a chip can contain gold, and more than you might think.
And it’s not just the gold that can be seen with your eye on circuit boards, but also the minuscule pieces packed inside processors and other components.
Alireza Abbaspourrad, an associate professor of food chemistry and ingredient technology at Cornell University, says there’s more gold in a ton of electronic waste than in a ton of ore mined from the earth.
Abbaspourrad explains that about one million used cellphones can produce “something close to 70 to 85 pounds of gold.”
But to date, the process has required harmful chemicals like cyanide to filter it out. So, Abbaspourrad and his team at Cornell developed a method they say is more efficient, and which carries less environmental risk. The process uses an organic compound to absorb gold ions like a sponge.
“Our sponge selectively targets only gold, and that’s a major difference,” Abbaspourrad said.
That gold can then be reused in solar panels, new electronics and possibly even jewelry. Easier and cheaper extraction could boost the financial incentive to safely recycle, and keep toxic metals out of landfills.
A United Nations report released last year found that in 2022, the world generated 62 million tons of electronic waste, such as items like outdated cell phones, and laptops. That marked an 82% increase from just a decade before.
And according to Cornell, global e-waste is expected to grow to 80 million metric tons annually by 2030.
“I think the world right now is much more aware of it,” Wilkins said.
Wilkins and eWorks sees that growing pile as an opportunity. Founded more than a decade ago, the company has created dozens of jobs for employees with disabilities who learn how to hand, sort and take apart old tech.
“Our mission is to provide training, education and employment for people with disabilities,” Wilkins said. “So, about 48% of our workforce are people with special needs.”
It’s a chance to help more people, and the planet, and it is made possible by mining gadgets for gold.
#process #mining #electronic #waste #gold