> Just because something can communicate in a way that you can interpret, doesn't mean it is conscious
The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.
I think these ideas are orthogonal. I do not think that consciousness is defined by human experience at all - in fact, I think humans do a profound disservice to animals in our current lack of appreciation for their clear displays of consciousness.
That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to - akin to mistakenly seeing eyes in random patterns in nature.
In the case of LLMs, though, why does using a mathematical formula to predict the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
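To make my point concrete, here's a toy sketch (vocabulary, logits, and coordinates all invented purely for illustration) showing that both operations reduce to scoring candidates and picking the best one:

    import numpy as np

    vocab = ["cat", "dog", "mat"]

    # Next-word prediction: softmax over made-up logits, then argmax.
    logits = np.array([2.0, 0.5, 3.1])
    probs = np.exp(logits) / np.exp(logits).sum()
    print("next word:", vocab[int(np.argmax(probs))])   # -> mat

    # Nearest neighbour: distances from a made-up query, then argmin.
    points = np.array([[0.0, 1.0], [2.0, 2.0], [5.0, 0.5]])
    query = np.array([4.0, 1.0])
    dists = np.linalg.norm(points - query, axis=1)
    print("nearest:", vocab[int(np.argmin(dists))])     # -> mat

Neither argmax carries any more metaphysical weight than the other - that's all I'm claiming.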
Why does a neuron, which is simply a cell that takes in chemicals and electricity and shits out neurotransmitters - why do 90 billion of those give rise to human intelligence? Neurons are just next-chemical-state machines. We can model individual ones on a computer. Yet 90 billion of them together make up a human brain and give rise to consciousness and intelligence. If you get stuck on the next-word-prediction part and ignore the ridiculous scale involved in training a model, you miss the forest for the trees.
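And "we can model individual ones on a computer" is literally true. Here's a minimal leaky integrate-and-fire sketch - a standard textbook simplification, with parameters chosen purely for illustration:

    # Leaky integrate-and-fire neuron: voltage leaks toward rest,
    # integrates input current, spikes and resets at a threshold.
    dt, tau = 0.1, 10.0                              # ms
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # mV
    i_input = 20.0                                   # constant drive, arbitrary units

    v, spike_times = v_rest, []
    for step in range(1000):       # 100 ms of simulated time
        v += (-(v - v_rest) + i_input) * dt / tau
        if v >= v_thresh:          # threshold crossed: spike, then reset
            spike_times.append(step * dt)
            v = v_reset

    print(f"{len(spike_times)} spikes in 100 ms")

One neuron is a few lines of arithmetic; the mystery is what 90 billion of them do together.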
Great progress came from inverting things that were believed to be self-evident. The Earth being the center of the universe appears self-evident when you look up at the night sky. But what was the truth?
Right now humans think it is self-evident that physical laws give rise to consciousness. Arguments such as yours arise from this implicit assumption, which permeates all our thought and reasoning. But this is a dead end - like how the Earth-centric model reached a dead end and ran out of steam before it could explain all the observations.
So to progress, I think we should turn this on its head and ask: what if consciousness is fundamental, and the cosmos (or the experience of inhabiting one) arises from it? Maybe some recent advances in quantum mechanics and hypotheses like the MUH are already heading in that direction...
What makes you certain that human thought is more than pattern matching?
As I understand it, neuroscience hasn't come up with a clear explanation of thought, much less of mind or consciousness. It seems to me complex pattern matching is as reasonable a cause of consciousness as anything else.
A lot of the comments in this thread are ignoring his primary point. He's not saying pattern matching doesn't equal consciousness; he's saying something more fundamental: that there's no reason to believe language pattern-matching algorithms are any more, or less, conscious than other similarly complex algorithms.
The stance being presented here isn't that LLMs aren't conscious, but that we as humans are much more willing to assign consciousness to language algorithms than to pathfinding or other ones.
Replace the word chimpanzee with human in your own argument and realize that the same logic applies to other humans.
When another human smiles, you assume he is happy and not just baring his teeth at you, because that's what you do when you smile. You are "anthropomorphizing" other people. You fall for the same category error on a daily basis when you interact with people; it is not just chimpanzees.
> In the case of LLMs, though, why does using a mathematical formula to predict the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour?
First, we don't know whether LLMs are conscious. People here are talking about the realistic possibility that they are.
Second, the algorithm is much more than a next-word predictor. The intelligence that goes into choosing the next word such that it constructs correct arguments and answers involves a lot more than simple prediction. We know this because the LLM regularly answers questions that require deep understanding of the topic at hand. It cannot token-predict working code in my company's code base without understanding the code.
Third, we do not know what drives human consciousness, but we do know it is model-able as a very complex mathematical algorithm. We know this because we have fairly complete mathematical models for lower resolutions of reality. For example, we can model atoms mathematically. Brains are made of atoms, and because atoms are mathematically model-able, we know that the human brain, and thus consciousness, is mathematically model-able.
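As a trivial illustration of "we can model atoms mathematically": the Bohr formula E_n = -13.6 eV / n^2 reproduces hydrogen's energy levels in one line of arithmetic (a deliberately simple model - the full quantum treatment is far richer):

    # Bohr model of hydrogen: E_n = -13.6 eV / n^2.
    for n in range(1, 5):
        print(f"n={n}: E = {-13.6 / n**2:6.2f} eV")
    # n=1 gives -13.60 eV, the measured ground-state energy.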
The sheer complexity of the LLM is the problem: we cannot have a high-level understanding of it, because it cannot be simplified down to a few concepts.
*To understand the LLM requires simultaneous understanding of likely billions of concepts and how all the weights in the LLM interact.*
What you are missing in your analysis is that this is the same reason we don't understand the human brain. The foundational math already exists: we can model atoms, and since the brain is made of atoms we should be able to model the brain... but we can't. We can't because it is too complex.
*To understand the human brain requires simultaneous understanding of likely billions of concepts and how all the weights in the human brain interact.*
I italicized two sentences here to help you follow the logic. Our thinking is more foundational than anthropomorphization; the argument has moved far beyond that. You need to think deeper.
The key here is that we don't understand human brains and we don't understand LLMs. But since the output LLMs produce is very similar to the output produced by the human brain, and since for no logical reason we assume human brains are conscious, what is stopping us from assuming the LLM is conscious?
> If everyone is running the same models, does this not favour white hat / defense?
The landscape is turbulent (so this comment might be outdated by the time I submit it), but one thing I'm catching between the lines is a resistance to providing defensive coding patterns because (I'm guessing) they make the flaw they're defending against obvious. When the flaw is widespread, those patterns effectively make it cheap for observant eyes to attack.
After seeing the enhanced capabilities recently, my conspiracy theory is that models do indeed traverse the pathways containing ideal mitigations, but they fall back to common anti-patterns when they hit the guardrails. Some of the things I’ve seen are baffling, and registered as adversarial on my radar.
> should know there are countless technical and procedural ways to help prevent that sort of thing
Sometimes when I look at code, it feels like I was led into a weird surprise party celebrating structure and correctness, only for everyone to jump out as soon as I get past the door and shout, "Just kidding - it's the same old bullshit!" All that to say: we're about as good as, or worse than, anyone else at our respective jobs.
Do you have any justification in mind for the “free service” being funded by tax payers? Why should it be free for the people who need it, and why should tax payers fund it?
Reading back through my comments, I recognize that omitting the quotes around “free services” would have made my comment here more palatable, and less provocative. That was poor form on my part.
We should be making sure everyone has internet access, but hosting some basic pages is about 1000x cheaper, so no, I don't think free internet access should come before that.
Converted to dollars, the value is far greater than the cost of a single bomb dropped on strangers who aren't a threat to me, so I don't need to justify it until someone can justify to me the bombs, the oil and gas subsidies, the bailouts, the...
My point is I don't want bombs dropped on strangers. In terms of things the government spends money on, there's nothing of less value to me than a single bomb dropped on a stranger. Of all the things the government spends its money on, I'd rather any one of them take 100% of the budget than have even a penny go to dropping a bomb on a stranger, even if that significantly decreased my quality of life.
I just really don't like my government killing people far away that pose no threat to me.
> Do you have any justification in mind for the “free service” being funded by tax payers? Why should it be free for the people who need it, and why should tax payers fund it?
Because the government should provide useful services. It should be funded by tax dollars because I'm tired of libertarians, and it's well demonstrated that the free market has consumer-hostile incentives that I'm sick of.
Forgive me for assuming that a government-owned service would be more transparent and would serve the people better than a privately owned, closed-source platform that's explicitly funded by ads and is therefore transparently corrupt. Even your worst-case scenario for this would be equivalent to what we already have.
> You're assuming the local government-employed webmaster won't favor his friends' restaurants.
Oh my! Mic drop! You got me! Corporate-owned sites would have to be unbiased, right? It's not like a business would ever do something as disreputable as favoring a restaurant that paid for the favored treatment, or try to steer you to affiliated businesses. Inconceivable!
But seriously now: a government-run site would be way better and have fewer biases. In the US, there's a good chance it'd be run by civically minded people, and there's about zero chance that conflict of interest would be baked into its "business" model.
Agreed. IMO, it is important to distinguish which parts of "everything" carry the weight of the concern. By doing that, we may be able to remove "LLM" from the equation entirely.
The direct problem isn’t that people are using LLMs for everything - it’s that some people can’t be bothered to provide reasonable diligence. Phrasing that concern by blaming LLMs implies that these were perfectly diligent human workers before LLMs came along. Do we really believe that to be the case?
Do you think they were submitting as many PRs, or do you think maybe the LLMs are enabling them to vastly over-submit to these projects, meaning that, in this case, LLMs are the actual, whole problem?
I find it odd how people will refuse to think about context when defending their toys.
Imagine a future classroom defined by elaborate plays performed by curious parents, all on advanced adjacent learning paths themselves - an intertwined learning structure that just keeps going up. At higher levels, instead of having the researcher with their head in the books communicating, they'll have a whole team of people translating their knowledge into a production fit for antiquity: directors, a diverse range of talents, charismatic performers, etc.
Assuming we have time to do this in some post-having-jobs world, of course.
To be fair, the English language is the real victim here.
While “essential” cleanly maps to “can’t go without” - it doesn’t map to “bare minimum”.
For instance, let's assume you're surviving in the wilderness and need to start a fire. Your fire-starting kit is obviously essential, but it could also be included in a "Camper Value Pack" - and those two things don't have anything to do with each other. The kit is essential, and it was obtained in a value pack. This message brought to you by Mr. Obvious.
You're not alone. I went from being a mediocre security engineer to a full-time reviewer of LLM code reviews last week. I just read reports and report on incomplete code all day. Sometimes things get humorously worse from review to review. I take breaks by typing out the PoCs the LLMs spell out for me…
What's the intent of pointing out the presumed provenance of a piece of writing, now that LLMs are ubiquitous?
Is it like one of those “Morning” nods, where two people cross paths and acknowledge that it is in fact morning? Or is there an unstated preference being communicated?
Is there any real concern behind LLMs writing a piece, or is the concern that the human didn’t actually guide it? In other words, is the spirit of such comments really about LLM writing, or is it about human diligence?
That raises another question: does LLM writing expose anything about the diligence of the human, outside of when it's plainly incorrect? If an LLM generates a boringly correct report - what does that tell us about the human behind that LLM?