The latest update to Google DeepMind’s flagship Gemini model introduces a dial that lets developers control how much reasoning the system applies to a response. The feature is meant to save developers money by curbing overthinking in reasoning models, which drives up both cost and energy consumption. Reasoning models, a focus of AI development since late last year, can cost as much as $200 to complete a single task, and outputs are roughly six times more expensive to generate when reasoning is enabled.

The shift toward reasoning models marks a change in the AI industry: rather than relying on scaling laws, companies are betting on longer thinking time at inference to get better responses. AI companies now spend more on inference than on training, a trend expected to grow as reasoning models spread, and the extra inference carries a larger environmental footprint.

Despite the dominance of proprietary reasoning models, cheaper alternatives exist, such as the open-weight DeepSeek model, whose release triggered a stock market dip of nearly $1 trillion. For tasks that demand high accuracy and precision, however, such as coding, math, and finance, proprietary models may still be preferred.
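In practice, the "dial" amounts to a per-request cap on how many tokens the model may spend on internal reasoning before answering. Below is a minimal sketch of what that might look like from a developer's side, assuming the google-genai Python SDK and its ThinkingConfig/thinking_budget option; the model name and budget value are illustrative, not prescribed by the article.

```python
# Sketch only: assumes the google-genai SDK exposes a ThinkingConfig with a
# thinking_budget parameter; adjust names to the SDK version you actually use.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# A small thinking budget (in tokens) limits how long the model "reasons"
# before responding, trading some accuracy on hard problems for lower cost.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the trade-offs of test-time reasoning in one paragraph.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=512),
    ),
)
print(response.text)
```

Setting the budget to zero would approximate a non-reasoning model for easy queries, while a larger budget buys more deliberation on tasks like coding or math where the extra tokens are worth the expense.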
Source: www.technologyreview.com

Related X Posts
AshutoshShrivastava (@ai_for_success) · Mar 22
Advanced reasoning AI is “dirt cheap” compared to a top human doing the same work.
Noam Brown (Research Scientist at OpenAI)
Haider. (@slow_developer) · Apr 11
“Reasoning models are dirt cheap compared to humans.” OpenAI Research Scientist Noam Brown: reasoning models may seem costly compared to GPT-4o… but they are more cost-effective than human experts, especially as they start to surpass human performance in certain areas.
Brian Chau (SF April 15th-20th) (@psychosort) · Apr 16
The way they measure compute cost is even more insane – counting the cumulative compute cost of a *company* instead of the compute cost of a model. This means a company that trained twenty 5.1 million dollar models would be liable under the law.
Colin Fraser (@colin_fraser) · Mar 6
Like the $200/month OpenAI Deep Research agent can’t figure out which NBA players are on which teams with better than about 15% accuracy but in 2 years it’s supposed to be a billion times smarter than Albert Einstein? Seems hard to believe.
Jason Whaling (@Jason_Whaling) · Apr 11
AI was built on a lie. If OpenAI had announced: “We’re scraping your work, training an AI on it, and charging $200/month for access…” People would’ve lost their minds. But that’s not what happened.
Anil Kumar Tulsiram (@Anil_Tulsiram) · Apr 17
Many individuals continue to rely on the free version and then criticize AI for providing inaccurate or suboptimal responses. As full-time investors, it is essential that we recognize the value of investing in our own operations. If we are unwilling to make such investments