
A structured language without ambiguity is not, in general, how people think or express themselves. In order for a model to be good at interfacing with humans, it needs to adapt to our quirks.

Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.

Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, whether it's on, how much fuel/battery remains, whether it thinks it's dirty and needs to be washed, etc.
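A rough sketch of what that "just ask the car" interconnectivity could look like: the model resolves ambiguity by querying structured vehicle state instead of guessing from conversation. All names here (VehicleState, get_vehicle_state) are hypothetical, not any real car or assistant API, and the telemetry call is stubbed.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    latitude: float
    longitude: float
    ignition_on: bool
    battery_pct: int
    needs_wash: bool

def get_vehicle_state() -> VehicleState:
    # Stub: a real assistant would call the car's telemetry API here.
    return VehicleState(latitude=59.33, longitude=18.07,
                        ignition_on=False, battery_pct=82, needs_wash=True)

def answer_charge_question() -> str:
    # Ambiguity is resolved by querying structured state, not by guessing.
    state = get_vehicle_state()
    ignition = "on" if state.ignition_on else "off"
    return f"Battery at {state.battery_pct}%, ignition {ignition}."

print(answer_charge_question())  # Battery at 82%, ignition off.
```

The point being that the fuzzy natural-language layer sits on top of unambiguous structured queries, rather than replacing them.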



>Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.

I think there's a substantial subset of tech companies and honestly tech people who disagree. Not openly, but in the sense of 'the purpose of a system is what it does'.


I agree but it feels like a type-of-mind thing. Some people gravitate toward clean determinism but others toward chaotic and messy. The former requires meticulous linear thinking and the latter uses the brain’s Bayesian inference.

Writing code is very much “you get what you write” but AI is like “maintain a probabilistic mental model of the possible output”. My brain honestly prefers the latter (in general) but I feel a lot of engineers I’ve met seem to stray towards clean determinism.


Yep, humans have had a remedy for the problem of ambiguity in language for tens of thousands of years; otherwise there never could have been an agricultural revolution giving birth to civilization in the first place.

Effective collaboration relies on iterating over clarifications until ambiguity is acceptably resolved.

This beats spending orders of magnitude more effort moving forward on bad assumptions from insufficient communication, then starting over from scratch every time you encounter the results of each misunderstanding.

Most AI models still seem deep into the wrong end of that spectrum.


> in order to better service ai

That wasn't the point at all. The idea is about rediscovering what always worked to make a computer useful, and not even using the fuzzy AI logic.


I think it's very likely that machine intelligence will influence human language. It already is influencing the grammar and patterns we use.


I think such influence will be extremely minimal, like confined to dozens of new nouns and verbs, but no real change in grammar, etc.

Interactions between humans and computers in natural language are, for your average person, much much less frequent than the interactions between that same person and their dog. Humans also speak in natural language to their dogs: they simplify their speech and use extreme intonation and emphasis, in a way we never do with each other. Yet, despite having been with dogs for 10,000+ years, it has not significantly affected our language (other than giving us new words).

EDIT: just found out HN annoyingly transforms U+202F (NARROW NO-BREAK SPACE), the ISO 80000-1 preferred way to type a thousands separator
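For anyone curious, a quick Python illustration of that character (a sketch of the formatting convention, not of HN's behavior):

```python
NNBSP = "\u202f"  # NARROW NO-BREAK SPACE

# Format 10000 with the default comma grouping, then swap in U+202F,
# the thin non-breaking separator style ISO 80000-1 recommends.
n = 10_000
formatted = f"{n:,}".replace(",", NNBSP)
print(formatted)              # "10 000", joined by a narrow no-break space
print(f"U+{ord(NNBSP):04X}")  # U+202F
```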


> I think such influence will be extremely minimal.

AI will accelerate “natural” change in language like anything else.

And as AI changes our environment (mentally, socially, and inevitably physically) we will change and change our language.

But what will be interesting is the rise of agent-to-agent communication via human languages. As that kind of communication shows up in training sets, it will be a powerful eigenvector of change we can’t predict, other than that it will follow the path of efficient communication for them; and we are likely to pick up on those changes as from any other source of change.


Seems very unlikely. My parent said the effects have already started (but provided no evidence), so I assume you mean within less than a generation. I am not a linguist, but I would like to see evidence of such rapid shifts ever occurring anywhere in the history of languages before I believe either of you.

I have a feeling you only have a feeling, but not any credible mechanism in which such language shifts can occur.


I am a little confused. Every year language changes. Young people, tech, adapting ideologies, words and concepts adopted from other languages, the list of language catalysts is long and pervasive.

Language has never stood still.


GP's original claim was “[Machine Intelligence] already is influencing the grammar and patterns we use.”

Your claim above was “AI will accelerate “natural” change in language like anything else.”

Now these are different claims, but I assumed you were backing up your parent’s claim. Those are far stronger claims than what you write now, in a way that feels like a motte-and-bailey argument.

First of all, if GP's claim is true, linguists should be able to find evidence of it and publish their findings in peer-reviewed papers. To my knowledge, they have not done that.

Second, your claim about AI “accelerating” changes in natural language is also unfounded, unless you really mean “like everything else”, in which case your claim is so weak as to be a non-claim (meaning you are not even wrong[1]).

1: https://en.wikipedia.org/wiki/Not_even_wrong


> These are far stronger claims than what you write now

> unless you really mean “like everything else”, in which case your claim is extremely weak

Language responds to changes in context. Books, the printing press, radio, the web, social media, mobile web, all changed how people used language and impacted language.

AI is a dramatic new context, with unique properties:

1. It is the first artifact to actively participate in realtime natural-language communication, a striking break with those predecessors.

2. AI language capabilities evolve quickly, and are unlikely to stop soon.

3. As learning during inference becomes prevalent, we will be co-adapting communication with AI in realtime.

4. Model to model communication is in its infancy, but is an entirely new category of language use, by entirely new users.

No preceding change to language context or purpose comes close.

Holding out for studies to determine the level of change is reasonable. But declaring this "not even wrong" makes no sense: the default is that changes in communication context and purpose drive changes in language.

Language has never been static or unresponsive to new contexts.


My not even wrong argument was contingent on the weakest interpretation of your argument, where AI would change the language exactly like anything else in human society changes language.

> Changes in how we use language with AI will change even faster when AI starts learning continuously during inference.

This is a stronger claim, and shows that the weakest interpretation of your argument does not apply. I take "not even wrong" back, as this is a testable hypothesis that offers a solid prediction; I can in fact be wrong.

That said, I am skeptical of your claims for the reason stated above. People don’t interact with LLMs nearly as much as they do with their dogs, and I am not aware of any research showing that people who interact with dogs a lot simplify their language in human-to-human communication. To the contrary, there is ample research that humans are in fact quite good at context switching. You can speak extremely poorly in a second language you are currently learning, and then in the next sentence speak fluently without hesitation in your native language.


I suggest that dogs are not a good comparison.

Interaction with language models involves a significant use of language and thought, and is not repetitive. Many users (myself included) continually find new ways to use them.

Others may take their time adopting language models, or be slower to branch out into many kinds of use, but young people in particular will be very fast adopters and adapters. That will be the place to watch.

"Even faster" with respect to inference learning, wasn't an attempt to undersell changes happening now. Teachers are experiencing a lot of new issues with how students respond to the availability of models today. One being the potential for students to put less effort into their own communications. If that continues, it won't just be a "dumbing" of literacy, it will have its own impact on vocabulary and grammar.

But looking forward is unavoidable. Models are not going to stay still long enough to say what stage impacted what changes. Model changes are too fast and fluid.

Well, this era is just getting started, so a diversity of expectations makes sense.


I think you might be underestimating human-to-dog interactions. Interacting with dogs requires a whole lot of empathy and thought.

But really this is beside the point. I didn’t offer dog interactions as an analogy; I offered them as a counterpoint. We speak differently to dogs than we speak with each other, and have done so for thousands of years. I see no reason why LLMs would have any more profound an effect on our language. We will continue to speak with each other in a normal manner just like before.


> We will continue to speak with each other in a normal manner just like before.

We may just be operating on different versions of "significant" change. Because I do agree with that statement.

I just think there will be language changes directly tied to the adoption of, and adaptation to, models in our lives, in addition to the normal drift and adaptation. And the rate of language change is likely to be faster, both due to interaction with models and indirectly due to accelerated change in general.


> Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.

I'm on the spectrum and I definitely prefer structured interaction with various computer systems to messy human interaction :) There are people not on the spectrum who are able to understand my way of thinking (and vice versa) and we get along perfectly well.

Every human has their own quirks and the capacity to learn how to interact with others. AI is just another entity that stresses this capacity.


Speak for yourself. I feel comfortable expressing myself in code or pseudo code and it’s my preferred way to prompt an LLM or write my .md files. And it works very effectively.


> Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc

So no abstract reasoning.



