AI

Secretaries of state urge X to stop its Grok chatbot from spreading election misinformation

Grok, not to be confused with the homophonic AI startup Groq that this morning raised over $600 million, spread false information about Vice President Kamala Harris on X, the social network formerly known as Twitter. That’s according to an open letter penned by five secretaries of state and addressed to Tesla, SpaceX and X CEO […]

OpenAI tempers expectations with less bombastic, GPT-5-less DevDay this fall

Last year, OpenAI held a splashy press event in San Francisco during which the company announced a bevy of new products and tools, including the ill-fated App Store-like GPT Store. This year will be a quieter affair, however. On Monday, OpenAI said it’s changing the format of its DevDay conference from a tentpole event into […]

YouTuber files class action suit over OpenAI’s scrape of creators’ transcripts

A YouTube creator is seeking to bring a class action lawsuit against OpenAI, alleging that the company trained its generative AI models on millions of transcripts from YouTube videos without notifying or compensating the videos’ owners. In a complaint filed Friday in the U.S. District Court for the Northern District of California, attorneys for David […]

Many safety evaluations for AI models have significant limitations

Despite increasing demand for AI safety and accountability, today’s tests and benchmarks may fall short, according to a new report. Generative AI models — models that can analyze and output text, images, music, videos and so on — are coming under increased scrutiny for their tendency to make mistakes and generally behave unpredictably. Now, organizations […]

OpenAI pledges to give U.S. AI Safety Institute early access to its next model

OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing. The announcement, which Altman made in a post on […]

Google releases new ‘open’ AI models with a focus on safety

Google has released a trio of new, “open” generative AI models that it’s calling “safer,” “smaller” and “more transparent” than most — a bold claim, to be sure. They’re additions to Google’s Gemma 2 family of generative models, which debuted back in May. The new models, Gemma 2 2B, ShieldGemma and Gemma Scope, are designed […]

This Week in AI: Companies are growing skeptical of AI’s ROI

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. This week in AI, Gartner released a report suggesting that around a third of generative AI projects in the enterprise will be abandoned after the proof-of-concept phase by year-end 2025. The reasons are many — poor data quality, inadequate risk controls, escalating infrastructure costs and so on.

Canva acquires Leonardo.ai to boost its generative AI efforts

Canva has acquired Leonardo.ai, a generative AI content and research startup, as the company looks to deepen its investments in its AI tech stack. The financial terms of the deal weren’t disclosed, but Canva co-founder and chief product officer Cameron Adams said it’s a mix of cash and stock. All of Leonardo.ai’s 120 employees will […]

Making AI models ‘forget’ undesirable data hurts their performance

So-called “unlearning” techniques are used to make a generative AI model forget specific and undesirable info it picked up from training data, like sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: They could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]
