> A Stack Overflow question matching my problem cleared that up for me.
Perhaps if no matching question had already existed you'd have had a different experience. Getting clearly written, specific questions promptly closed as duplicates of related yet distinct issues was part of the fun.
I find that AI hallucinates the way a person can be confidently wrong, with the difference that the feedback is almost instant and there are no difficult personalities to deal with.
I've found recent Claude Code to be surprisingly good at dispelling false assumptions and incorrect framing. I say this as someone who experimented with it last summer and found it kind of stupid; since December last year it's turned a corner - it's no longer the sycophantic nonsense it used to be.