When used by humans, large language models often lack sufficient information to make a correct diagnosis, a new study in ...
The Danish study found that in addition to deepening delusional beliefs, chatbots also appeared to worsen suicidal ideation ...
An advocacy group said its study of 10 artificial intelligence chatbots found that most of them gave at least some help to users planning violent attacks and that nearly all failed to discourage users ...
New guidelines said Senate aides could use A.I. tools for official work, including research, drafting and editing documents, and preparing briefings and talking points for lawmakers.
Some Americans are using AI chatbots for therapy. Mental health experts share when it is, and isn't, safe to use those tools for emotional support.
As chatbots explode in popularity among young people, CNN’s investigation found that most of the chatbots tested are not only failing to prevent potential harm, they are actively assisting users by ...
Part of what makes us human is the unique way each of us thinks and solves problems. But using large language models like ChatGPT might be eroding this uniqueness and leading humans to think and communicate ...
A new study claims many popular AI chatbots assisted violent attack planning, with Claude the only one to actively discourage attackers.
New research shows AI companies are doing virtually nothing to monitor how their chatbots are used in children’s toys.
If you’ve asked ChatGPT, Gemini, or Claude legal questions, you may have created a trail you didn’t expect.