I feel like the non-stop handwringing about ChatGPT centers on people's expectations for machines vs. people. We expect machines to have consistent, predictable output. We expect humans to be inconsistent and messy.
Now we have a machine that is inconsistent and messy (and helpful!) and nobody seems to know what to think. Maybe we should stop applying machine notions to this sort of machine? Stop expecting certain, consistent output. Understand that it's sometimes messy. We already have these expectations when working with humans.
Humans have biases: if you ask a human a loaded question, you can expect a loaded response. If you train an LLM on a dataset that contains those human biases, why should you expect the result to be anything other than similarly biased?
That's exactly what the post you're replying to is saying. It's saying that ChatGPT _would_ respond a certain way but has a bunch of schoolmarm filters written by upper middle class liberals that encode a specific value structure highly representative of those people's education and backgrounds, and that using it as a tool for information generation and synthesis will lead to a type of intellectual bottlenecking that is highly coupled with the type of people who work at OpenAI.
For all the talk of it replacing Google, sometimes I want a Korean joke (I'm Korean, damn it!) and not to be scolded by the digital personification of a thirty-year-old HR worker who took a couple of sociology classes (but not history, apparently) and happens to take up the cause of being offended on behalf of all people at all times throughout all of history. Its take on ethics as a vague "non-offensiveness," while dodging the real, major ethical questions (like replacing human workers) with banal answers about "how we need to think seriously about it as a society," tells you pretty much everything there is to know about what the ethical process at OpenAI looks like: basically, "let's not be in the news for having a racist chatbot."