Hacker News

Is it expected that LLMs will continue to improve over time? Recent articles like this one seem to treat the technology's faults as fixed and permanent, basically saying "turn around and go no further". Honestly asking, because their arguments seem to depend on improvement never happening and the faults never being overcome. It feels shortsighted.


> Is it expected that LLMs will continue to improve over time?

By whom?

Your expectations aren't the same everybody has.


I don't have any expectations. That's why I was asking.


Well, expectations vary widely.

On one hand, recent models seem less useful than the previous generation, the scale needed to train improved networks seems to be following the expected quadratic curve, and we don't have more data to train larger models.
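The diminishing-returns point can be sketched with a Chinchilla-style scaling law (Hoffmann et al., 2022). The constants below are that paper's published fits, used purely for illustration; whether they extrapolate to current frontier models is exactly what's being debated:

```python
# Chinchilla-style scaling law: loss(N, D) = E + A/N^alpha + B/D^beta
# Constants are the published Hoffmann et al. (2022) fits -- an assumption
# for illustration, not a claim about any particular current model.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def loss(params: float, tokens: float) -> float:
    """Predicted pre-training loss for N parameters and D training tokens."""
    return E + A / params**ALPHA + B / tokens**BETA

# Each step scales both model size and data 10x, so compute (~6*N*D)
# grows ~100x per step -- while the loss improvement per step shrinks.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e} D={d:.0e} loss={loss(n, d):.3f}")
```

The shrinking gap between successive losses is the "faults may persist" side of the argument; the "tooling is the bottleneck" side holds that raw loss is no longer the limiting factor.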

On the other hand, many people claim that tooling integration is the bottleneck, and that the next generation of LLMs is much better than anything we have seen up to now.


The LLM can't actually use the product and realise that the description is wrong.



