Is it expected that LLMs will continue to improve over time? All the recent articles like this one seem to describe this technology's faults as fixed and permanent, basically saying "turn around and go no further". Honestly asking, because their arguments seem to depend on improvement never happening and the faults never being overcome. It feels shortsighted.
On one hand, recent models seem to be less useful than the previous generation, the scale needed to train improved networks seems to be following the expected quadratic curve, and we don't have enough new data to train larger models.
On the other hand, many people claim that tooling integration is the bottleneck, and that the next generation of LLMs is much better than anything we have seen up to now.