
Will AI scaling continue?

12 Mar 2025
Ankit Solanki
Co-founder at Clear. Exploring all possibilities of AI.

The scaling hypothesis is the observation that AI capabilities continue to grow as the model size, training data, and compute used to train the model grow. In other words, you can predict the capabilities of a model from the resources used to train it.
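To make "predictable" a little more concrete, here is a minimal sketch of the kind of scaling law this refers to. It uses the power-law form fit in the Chinchilla paper (Hoffmann et al., 2022), with approximate published coefficients; the exact numbers depend on the fit, and the step from pre-training loss to downstream "capability" is much fuzzier in practice.

```python
# Sketch of a Chinchilla-style scaling law: predicted pre-training loss
# as a function of parameter count and training tokens.
# Coefficients are the approximate fitted values from Hoffmann et al. (2022).

def predicted_loss(params: float, tokens: float) -> float:
    """Predicted loss for a model with `params` parameters trained on `tokens` tokens."""
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss + fitted constants (approximate)
    alpha, beta = 0.34, 0.28       # fitted exponents (approximate)
    return E + A / params**alpha + B / tokens**beta

# Doubling both parameters and data lowers the predicted loss in a
# predictable way -- this is what "predictable scaling" means in practice.
print(predicted_loss(70e9, 1.4e12))    # roughly a Chinchilla-scale run
print(predicted_loss(140e9, 2.8e12))   # 2x params, 2x data -> lower predicted loss
```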

If you follow developments in AI, this is one of the biggest questions on people's minds: will scaling continue, or will we hit a plateau? So far, bigger models trained with more data and compute have consistently performed better, and the ROI of training bigger models has been worth it.

This is why companies are buying up GPUs and running huge training runs, and why model sizes keep increasing.

[Figure: Epoch AI's chart showing the growth of model sizes over time]

We have two possible worlds coming up:

  • Scaling continues, and AI gets better the more resources we throw behind it.

    • In this world, we will soon have the first billion-dollar training run, with gigawatt-scale datacenters.
    • Model capabilities keep growing faster than we can figure out use cases for them.
    • Intelligence becomes ‘too cheap to meter’.
  • Scaling hits a wall, and the ROI for bigger models isn’t worth the cost.

    • Future AI capabilities grow more slowly than they have over the past 2-3 years.
    • We still have a huge amount of capability to unlock with current-generation frontier models, though. Each bubble plants the seeds for the next S-curve of innovation.

This is a crazy moment in history. I wouldn’t bet against scaling — but I can’t predict what happens next.

Some resources for further reading: