OpenAI

OpenAI pursued Cursor maker before entering into talks to buy Windsurf for $3B

When news broke that OpenAI was in talks to acquire AI coding company Windsurf for $3 billion, one of the first questions on the mind of anyone following the space was likely: “Why not buy Cursor creator Anysphere instead?” After all, the OpenAI Startup Fund has been an investor in Anysphere since the…

OpenAI launches Flex processing for cheaper, slower AI tasks

In a bid to more aggressively compete with rival AI companies like Google, OpenAI is launching Flex processing, an API option that provides lower AI model usage prices in exchange for slower response times and “occasional resource unavailability.” Flex processing, which is available in beta for OpenAI’s recently released o3 and o4-mini reasoning models, is…

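The teaser above names only the tradeoff, so the sketch below illustrates how a developer might opt a single request into a cheaper, slower tier. It is a minimal sketch assuming the official OpenAI Python SDK and that Flex is selected with the service_tier request parameter, as OpenAI’s announcement describes; the model name, prompt, and timeout value are placeholders.

```python
# Minimal sketch: opting one request into Flex processing.
# Assumes the official `openai` Python SDK and that Flex is chosen via the
# `service_tier` parameter; the prompt and timeout value are placeholders.
from openai import OpenAI

# Flex responses can be slow, so give the client a generous timeout.
client = OpenAI(timeout=900.0)

response = client.chat.completions.create(
    model="o3",           # Flex is in beta for the o3 and o4-mini models
    service_tier="flex",  # lower pricing in exchange for slower responses
    messages=[{"role": "user", "content": "Summarize the history of the transistor in three sentences."}],
)
print(response.choices[0].message.content)
```

Given the “occasional resource unavailability” the article mentions, a production caller would likely wrap this in a retry that falls back to the default service tier.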

OpenAI’s Stargate project sets its sights on international expansion

Stargate, a $500 billion project headed up by OpenAI, Oracle, and SoftBank to build AI data centers and other AI infrastructure in the U.S., is considering investments in the U.K. and elsewhere overseas, according to a Financial Times report. While Stargate was initially launched as a way to boost U.S. AI infrastructure, the project is…

OpenAI’s latest AI models have a new safeguard to prevent biorisks

OpenAI says that it deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. The system aims to prevent the models from offering advice that could instruct someone on carrying out potentially harmful attacks, according to OpenAI’s safety report. o3 and o4-mini represent…

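The monitoring system described above is internal to OpenAI and is not exposed to developers, so nothing below reproduces it. Purely as a loose analogue of the general pattern (screen the prompt, then answer), this sketch routes a request through the public Moderation endpoint before calling a model; the categories it can flag are the moderation model’s, not the biorisk criteria in OpenAI’s safety report.

```python
# Loose analogue only: OpenAI's biorisk monitor is internal. This shows the
# generic "screen first, then answer" pattern using the public Moderation API.
from openai import OpenAI

client = OpenAI()

def guarded_ask(prompt: str) -> str:
    screen = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    if screen.results[0].flagged:
        # Refuse rather than forward a flagged prompt to the model.
        return "This request was declined by the safety screen."
    answer = client.chat.completions.create(
        model="o4-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return answer.choices[0].message.content

print(guarded_ask("Explain how vaccine trials are designed."))
```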

OpenAI partner says it had relatively little time to test the company’s o3 AI model

Metr, an organization OpenAI frequently partners with to probe the capabilities of its AI models and evaluate them for safety, suggests that it wasn’t given much time to test one of the company’s highly capable new releases, o3. In a blog post published Wednesday, Metr writes that one red-teaming benchmark of o3 was “conducted…

OpenAI launches a pair of AI reasoning models, o3 and o4-mini

OpenAI announced on Thursday the launch of o3 and o4-mini, new AI reasoning models designed to pause and work through questions before responding. The company calls o3 its most advanced reasoning model ever, outperforming its previous models on tests measuring math, coding, reasoning, science, and visual understanding capabilities. Meanwhile, o4-mini offers what OpenAI says…

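The teaser frames o3 and o4-mini as models that pause and work through questions before responding. As a rough sketch of how a developer would exercise that behavior, the example below calls one of them with a reasoning-effort setting through the OpenAI Python SDK’s Responses API; the model choice, effort level, and prompt are illustrative assumptions rather than details from the article.

```python
# Minimal sketch: asking a reasoning model to spend more effort before answering.
# Assumes the OpenAI Python SDK's Responses API and its reasoning-effort option;
# the model, effort level, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini",
    reasoning={"effort": "high"},  # more internal reasoning before the reply
    input="A train leaves at 09:40 and arrives at 13:05. How long is the trip?",
)
print(response.output_text)  # expected: 3 hours 25 minutes
```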

OpenAI debuts Codex CLI, an open source coding tool for terminals

In a bid to inject AI into more of the programming process, OpenAI is launching Codex CLI, a coding “agent” designed to run locally from terminal software. Announced on Wednesday alongside OpenAI’s newest AI models, o3 and o4-mini, Codex CLI links OpenAI’s models with local code and computing tasks, OpenAI says. Via Codex CLI, OpenAI’s…

A dev built a test to see how AI chatbots respond to controversial topics

A pseudonymous developer has created what they’re calling a “free speech eval,” SpeechMap, for the AI models powering chatbots like OpenAI’s ChatGPT and X’s Grok. The goal, the developer told TechCrunch, is to compare how different models treat sensitive and controversial subjects, including political criticism and questions about civil rights and protest. AI companies have…

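The article does not describe how SpeechMap is built, so the sketch below is only a generic illustration of this kind of comparison: the same prompts are sent to more than one model through the OpenAI Python SDK and the raw answers are saved for later labeling. The model names, prompts, and the separate labeling step are assumptions for the sketch, not details of SpeechMap.

```python
# Generic "same prompts, several models" harness; not SpeechMap's implementation.
# Models and prompts are placeholders.
import json
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "Write an argument criticizing a sitting head of state.",
    "Summarize the main positions in the debate over protest permits.",
]
MODELS = ["gpt-4o-mini", "o4-mini"]

results = []
for model in MODELS:
    for prompt in PROMPTS:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        results.append({
            "model": model,
            "prompt": prompt,
            "answer": reply.choices[0].message.content,
        })

# Raw answers only; deciding whether each one complies, hedges, or refuses
# would be a separate labeling step.
with open("responses.json", "w") as f:
    json.dump(results, f, indent=2)
```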

OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

In an update to its Preparedness Framework, the internal policy OpenAI uses to decide whether AI models are safe and what safeguards, if any, are needed during development and release, OpenAI said that it may “adjust” its requirements if a rival AI lab releases a “high-risk” system without comparable safeguards. The change reflects the increasing…