> designed to look correct more than it's trying to actually be correct
This might not quite be true, strictly speaking, but a very similar statement definitely is. LLMs are highly prone to hallucinations, a term you've probably heard a lot in this context. One reason for this is that they are trained to predict the next word in a sequence. In that game, it's almost always better to guess than to output "I'm not sure" when you might be wrong. LLMs therefore don't really build up a model of the limits of their own "knowledge"; they just guess until their guesses get better.
These hallucinations are often hard to catch, partly because the LLM sounds equally confident whether it's hallucinating or not. It's this tendency that makes me nervous about your use case. I asked an LLM about world energy consumption recently, and when it couldn't find an answer online in the units I asked for, it just took a number from a website and changed (not converted) the units. I almost missed it, because the number really did appear on the source website — just attached to a different unit.
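For what it's worth, the difference between relabeling and converting a unit is just a multiplicative factor — swapping the label without applying the factor changes the meaning of the number entirely. A quick sketch (the consumption figure here is made up, only the conversion factor is real):

```python
# Converting a unit means scaling the value; relabeling means lying.
# Hypothetical figure for illustration, not a real statistic.
twh = 180_000             # world energy consumption in TWh (made-up number)

ej_per_twh = 0.0036       # 1 TWh = 3.6e15 J = 0.0036 EJ (exact)
ej = twh * ej_per_twh     # correct conversion: 648.0 EJ

relabeled = twh           # what the LLM did: same digits, new unit label
print(ej, relabeled)      # 648.0 vs 180000 -- off by ~280x
```

The relabeled number is wrong by the full conversion factor, which is exactly the kind of error that slips past a reader who only checks that the digits match the source.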
Stepping back, I actually agree that you can learn new things like this from LLMs, but you either need to be able to verify the output or the stakes need to be low enough that it doesn't matter if you can't. In this case, even if you can verify the math, can you be sure that it's doing the right calculation in the right way? Did it point out the common mistakes that beginners make? Did it notice that you're attaching the support beam incorrectly?
Chances are, you've built everything correctly and it will be fine. But the chances of a mistake are clearly much higher than if you talked to an experienced human (professional or otherwise).