Hacker News

Real people might not speak every dialect of English. They may not be well versed in local grammatical oddities.


Doesn't seem to be much of a problem in practice?

If someone knew all the math and science in Wikipedia, for example, I think they'd probably be forgiven for not knowing every regionalism.


Unfortunately, models aren't always good at knowing what they don't know (handling "out of distribution" data), so leaving something out could lead to confidently wrong answers.

And if you want it to be superhuman then you're by definition not capable of knowing what's important, I guess.


Btw, models like GPT-4 can express that they are not confident.

But internally that looks like a "smeared-out" probability distribution over the next token, not like the text an unsure human would produce.
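To make the "smeared-out" idea concrete: one rough proxy for this kind of token-level uncertainty is the entropy of the next-token distribution. A toy sketch (not any particular model's API; the logits here are made up):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_entropy(logits):
    # Shannon entropy (in bits) of the next-token distribution.
    # High entropy means the probability mass is "smeared" across
    # many candidate tokens; low entropy means the model is piling
    # mass onto one token.
    probs = softmax(logits)
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy vocabulary of 4 tokens with invented logits.
confident = [10.0, 0.0, 0.0, 0.0]  # mass piled on one token
unsure = [1.0, 1.0, 1.0, 1.0]      # mass spread evenly

print(next_token_entropy(confident))  # near 0 bits
print(next_token_entropy(unsure))     # exactly 2 bits (log2 of 4)
```

The point of the comment stands: this number lives in the sampling machinery, and nothing forces it to surface as hedged wording in the generated text.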




