What’s new: OpenAI is testing a new way to expose the complicated processes at work inside large language models. Researchers at the company can make an LLM produce what they call a confession, in which the model explains how it carried out a task and (most of the time) owns up to any bad behavior.
Why it matters: Figuring out why large language models do what they do—and in particular why they sometimes appear to lie, cheat, and deceive—is one of the hottest topics in AI right now. If this multitrillion-dollar technology is to be deployed as widely as its makers hope it will be, it must be made more trustworthy. OpenAI sees confessions as one step toward that goal. Read the full story.
—Will Douglas Heaven
How AI is uncovering hidden geothermal energy resources
Sometimes geothermal hot spots are obvious, marked by geysers and hot springs on Earth’s surface. But in other places, they’re obscured thousands of feet underground. Now AI could help uncover these hidden pockets of potential power.
A startup called Zanskar announced today that it has used AI and other advanced computational methods to uncover a blind geothermal system—meaning there are no signs of it on the surface—in the western Nevada desert. The company says it’s the first blind system in more than 30 years to be identified and confirmed as a commercial prospect. Read the full story.
—Casey Crownhart
Why the grid relies on nuclear reactors in the winter
In the US, nuclear reactors follow predictable seasonal trends. Summer and winter tend to see the highest electricity demand, so plant operators schedule maintenance and refueling for other parts of the year.
This scheduled regularity might seem mundane, but it’s quite a feat that operational reactors are as reliable and predictable as they are. Now a growing pool of companies is aiming to bring new technologies to the nuclear industry. Read the full story.