This Week in AI: Seeking balance in the deluge of news



Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

Longtime readers of the newsletter might’ve noticed that we skipped a week last week. That wasn’t our intent, and we do apologize.

The reason is that we've reached an inflection point in the AI news cycle. We're a small team, and we're stretched thin. It's becoming next to impossible to cover every announcement, controversy, academic paper, trend, open model release, lawsuit, and so on.

Take this month, for example. OpenAI is in the midst of what’s effectively a 12-day press engagement. Google is gearing up to launch major new AI products, as is Elon Musk’s company, xAI. And that’s only the news from the biggest AI players.

To better deal with the deluge, we're making a small change to This Week in AI. Going forward, the newsletter will be a little shorter. It won't be a drastic reduction (you might not even notice it), but the idea is to make This Week in AI more concise while ensuring it reaches your inbox on a regular cadence.

We hope you find the improved newsletter more digestible. As always, we’re open to feedback — drop me a line with thoughts anytime.

News

A test for AGI: A well-known test for artificial general intelligence (AGI) is getting close to being solved, but the test's creators say this points to flaws in the test's design rather than a bona fide research breakthrough.

Amazon’s new lab: Amazon says that it’s establishing a new R&D lab in San Francisco, the Amazon AGI SF Lab, to focus on building “foundational” capabilities for AI agents.

OpenAI’s video generator launches: Most subscribers to OpenAI’s ChatGPT Pro and Plus plans got access to Sora, OpenAI’s video generator, starting Monday. Folks in Europe were out of luck, however.

China investigates Nvidia: China’s market regulator has reportedly opened an antitrust probe into Nvidia’s acquisition of Mellanox, an Israel-based company working on high-performance chips for supercomputers.

Yelp adds AI: Yelp released several new features this week, including AI-powered review insights. The platform's AI attempts to analyze the sentiment of reviews and to surface highlights from them by category (e.g., food quality).

Google’s renewable energy spree: Google has signed a deal to spin up enough carbon-free power to drive several gigawatt-scale data centers. Altogether, the investment in renewable power will run about $20 billion.

Reddit debuts conversational AI: Reddit’s newest AI-powered feature, Reddit Answers, lets users ask questions and receive curated summaries of relevant responses and threads across the platform.

X’s new image generator: X (formerly Twitter) has gained a new image generator courtesy of xAI, Elon Musk’s AI startup. It’s called Aurora, and it’s tuned for “photorealistic rendering.” You’ll find it in X’s Grok assistant.

Research paper of the week

A team of computer scientists from Ai2 and UC San Diego say they’ve created an AI model that can predict 100 years of climate patterns in 25 hours.

The model, called Spherical Dyffusion, starts off with knowledge of basic climate science and then applies a series of transformations to predict future patterns. Unlike many state-of-the-art climate prediction models, Spherical Dyffusion can run on relatively modest hardware, the team claims.

The model has its limitations, but the researchers plan to continue refining it. The next version will simulate how the atmosphere responds to carbon dioxide, they say.

Separately, Ai2 released the second generation of its climate-modeling AI, Climate Emulator.

Model of the week

Sora might be getting all the attention, but a new video-generating model out of MIT CSAIL and Adobe Research is potentially more exciting.

The model, called CausVid, can start playing videos the moment it begins to generate them — providing a sort of preview of the finished clip. That’s in contrast to models like Sora, which can’t show clips in progress.

The researchers plan to release an open source implementation soon.

Grab bag

The group of artists who leaked access to Sora last November has published a series of essays explaining why they did it.

The essays are well worth the read, but the gist is that the group wanted to denounce what they saw as the exploitation of creatives for R&D and public relations.

“We called on artists to think beyond proprietary systems,” the group wrote in a post, “and the limitations of prompting a model mediated by big tech.”



