Hacker News

These models are clearly capable of doing this. There is no theoretical reason to expect them to fail at it. One day they will be able to do this perfectly, and nobody will get the silly idea of generating a program to do it anymore. There is no need for another bitter lesson in which "clever" AI researchers and engineers waste their careers adding a hundred different workarounds for these minor problems.


I don't know... the ability to write code to solve an otherwise ill-suited problem seems pretty general to me. It seems like a big step in a concrete direction, as opposed to a lot of Goedelian navel-gazing about arithmetic and Peano axioms and whatnot.
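The pattern being described — having the model delegate a task it handles poorly (say, exact arithmetic) to a generated program — can be sketched roughly as below. `fake_model` is a stand-in for an LLM call, not a real API; a real model would produce the program text from the user's question:

```python
# Sketch of the "write code instead of answering directly" pattern.
# fake_model is a hypothetical placeholder for a model call that
# turns a natural-language question into a small program.

def fake_model(question: str) -> str:
    # Pretend the model emitted this program for the question.
    return "result = 123456789 * 987654321"

def answer_via_code(question: str) -> int:
    program = fake_model(question)
    scope: dict = {}
    exec(program, scope)     # run the generated program
    return scope["result"]   # read back its computed answer

print(answer_via_code("What is 123456789 * 987654321?"))
```

The point is that the model never has to compute the product itself; it only has to produce a correct program, and the interpreter supplies the exactness.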

Agreed that generalized architectures will ultimately win out over hand-tweaked ones. But the patent wars that will eventually be fought over this stuff are where the real bitter lessons will come into play. At some point, we'll be forced back into the hand-optimization business because someone like OpenAI (or another Microsoft proxy) will have locked down a lot of powerful basic techniques.




