
> A Stack Overflow question matching my problem cleared that up for me.

Perhaps if no matching question had already existed, you'd have had a different experience. Getting clearly written, specific questions promptly closed as duplicates of related yet distinct issues was part of the fun.

I find that AI hallucinates the way a person can be very confident and wrong at the same time, with the difference that the feedback is almost instant and there are no difficult personalities to deal with.



> someone can be very confident and wrong at the same time

And sometimes that someone can be you, and AI is notoriously bad at telling you that you're wrong (because it's trained to please people).


I've found recent Claude Code to be surprisingly good at dispelling false assumptions and incorrect framing. I say this as someone who experimented with it last summer and found it kind of stupid; since December last year it's turned a corner - it's not the sycophantic nonsense it used to be.



