Unfortunately, models aren't always good at knowing what they don't know (the "out-of-distribution" problem), so leaving something out can lead to confidently wrong answers.
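A toy sketch of why that happens: a plain sigmoid/softmax classifier's confidence grows with distance from the decision boundary, so a point far outside the training data gets *more* confident, not less (the weights here are made up for illustration):

```python
import math

# Hypothetical linear classifier trained on two clusters near the origin.
# logit = w.x + b; confidence = sigmoid distance from the boundary.
w, b = (2.0, -1.0), 0.5

def confidence(x):
    logit = w[0] * x[0] + w[1] * x[1] + b
    p = 1 / (1 + math.exp(-logit))
    return max(p, 1 - p)  # confidence in whichever class it picks

print(confidence((0.3, 0.2)))     # near the training data: modest confidence
print(confidence((50.0, -50.0)))  # far out of distribution: confidence ~ 1.0
```

Nothing in the model flags the second point as alien; the linear score just keeps growing, which is why leaving data out tends to produce confident nonsense rather than "I don't know".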
And if you want it to be superhuman, then by definition you're not capable of knowing what's important, I guess.