Researchers say popular mental health chatbots can reinforce harmful stereotypes and respond inappropriately to users in distress.
Happy Tuesday! Imagine trying to find an entire jury full of people without strong feelings about Elon Musk. Send news tips and excuses for getting out of jury duty to: [email protected]
Generative artificial intelligence tools like ChatGPT, Gemini, and Grok have exploded in popularity as AI becomes mainstream. These tools, however, cannot make new scientific discoveries on their own.
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University.
The Stanford study warns that therapy chatbots could pose a substantial safety risk to users struggling with mental health issues.
CNET: Over Half of Teens Regularly Use AI Companions. Here's Why That's Not Ideal. The study by Common Sense Media also found that nearly a third of teens are as satisfied, if not more satisfied, conversing with AI as with humans.
Here’s how using AI in the wrong situations could cost you money, job opportunities, and ultimately, your peace of mind.
Built using huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.
People are leaning on AI tools to figure out what is real on topics such as funding cuts and misinformation about cloud seeding. At times, chatbots give contradictory responses.
The makers of FlirtAI, which promotes itself as the "#1 AI Flirt Assistant Keyboard" on the App Store, have leaked 160,000 screenshots that users shared with the app, according to an investigation by Cybernews.