I'm not a fan of how LinkedIn operates ... or the culture there in general.
At the same time, I wonder what happens when users realize everything they look at is now more visible than ever. Will people just make fake accounts for browsing?
Maybe it should be that way, but there's an interesting dynamic to "what you look at (even if not a full picture) is visible to some people".
The summary-but-no-content thing is interesting. I’ve seen it in many forms and I’m not sure why it plays out that way. Maybe the summary is tied tightly to the prompt and the rest isn’t?
I saw some bots on Reddit that were very odd: if anyone asked a question about something like a news article, some account would respond with a non-answer that was a sort of summarized bit of the article. If you responded “that’s not really what I asked,” you got an even odder response.
This isn’t that strange, as people will do that in a way… but I noted it because I saw a flurry of those accounts on Reddit and then they vanished.
> The summary but no content thing is interesting. I’ve seen it in many forms and I’m not sure why it plays out that way.
I would guess that it's because the incentives and goals are different.
The point of a summary is to entice a listener to begin the podcast. So it has to offer the promise of interesting depth.
Once they've started listening, all the body of the podcast has to do is be soothing enough to get the user to keep listening until the next ad comes on. It has no need to actually keep the promise unless the listener is paying enough attention to hold it accountable.
A coworker of mine got his first job at IBM after graduating from what was effectively an early version of a tech trade school when tech trade schools were not common.
He showed up to work at an IBM hardware factory in the US, and as soon as everyone walked in the door they were called into a meeting. IBM announced they were all laid off, effective immediately. IBM, having almost no experience with layoffs to that point and still styling itself as a company you go to work at for life, seemed legitimately unsure what to do.
So they gave everyone a minimum of one year's pay plus benefits, assigned HR people who were VERY involved in trying to place people elsewhere, paid many to relocate, and threw in what amounted to a 4-year scholarship if they wanted to use it.
Dude had been there less than an hour and decided to just go back to school for 4 years ...
I was looking for something to use for documentation recently.
Every dang tour wanted to show me its endless litany of features, often leaning into enterprise stuff, so much so that I never got a chance to actually use the tool for what I wanted. I just wanted to try documenting something to see how fast and easy it was, but every form of tour wanted to sidetrack me.
Yeah, I'd like to know in a solid way WHY Claude kept changing a file that I explicitly told it not to. The .md files and Claude's plan all said not to touch that file, and Claude just kept at it. I've had it happen repeatedly lately. Really basic failures.
The idea being that as frustrating as it is, if I knew why I might be able to do something about it.
But no, we have the black box, where sometimes what comes out is just brain-dead, and the rate at which you get bad output is a mystery...
I am by no means an expert, but I'd like to offer my mental model - up to you to decide if it is solid or not, but it works for me.
I think the core intuition is that, like any other "rasterized" system with finite memory, an LLM cannot encode the absence of something (a relation, a concept, an entity) in its internal weights. You can have "Product" or "Order" tables in your database, but you cannot have "NotAProduct" or "NotAnOrder" tables, for the obvious reason that such relations would be infinite and uncountable. So to establish the absence of a Product or an Order, your application must execute a "search" operation over the relevant tables.

But in LLM-space, a "search" operation does not exist; it is mathematically undefined. An LLM arrives at its output ("what to do") through a sequence of projections of the input token vector through its "latent space". It "moves toward" high-probability clusters, fundamentally unable to "move away". So the success of any negation in the prompt ("don't touch this file", "draw me a ballot box without a flag on it") depends on how heavily such a scenario is represented in the training data / model space. And again, the absence of something may be hard to impossible to usefully encode, especially if the "something" is not fixed.

Therefore, expecting a "don't touch this file" sentence to result in, well, not touching the file is pure gambling. Sometimes it may look like it works, albeit for the wrong reasons, and other times the LLM may do exactly the opposite, because its weight matrix statistically pushes it toward "touch this file", completely ignoring the (nonexistent in its latent space) "don't".
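The database analogy can be made concrete in a few lines (toy names, nothing from a real system):

```python
# Toy illustration of the analogy: a system with finite memory can only
# record what exists, so "absence" can only be established by an
# explicit search over the records that DO exist.
products = {"widget", "gadget"}

def is_product(name: str) -> bool:
    # There is no "NotAProduct" table to consult; the only way to
    # answer is to search the finite set of things that ARE products.
    return name in products

print(is_product("widget"))     # True
print(is_product("doohickey"))  # False, established only by searching
```

The claim above is that an LLM's forward pass has no analogue of that membership search, so "not X" has no first-class representation.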
There is no way to reliably know what will work, and no "skill" or "art" in this. Well, no more than in dice rolling or horoscope casting.
I'd like to add that, for the above reason, I find the usefulness of "agentic development" on par with reading avian remains. But when I explored it, two practical pieces of advice seemed to help in nudging the LLM around the negation problem:
- Omit the "don't" prompt completely, thus not creating a false "attractor" for the LLM; and
- Provide an alternate positive directive ("what to DO", not "what to NOT DO") to act as an "escape hatch" for when the LLM might "want" to touch the sacred file or drop the production DB.
While this looked like it somewhat worked, I think it is trivially obvious that trying to predict all the nonsense an LLM might want to perform, and coming up with possible "escape hatches" for everything, very quickly becomes utterly impractical.
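A concrete sketch of those two tips, with hypothetical file names, shown simply as the two prompt phrasings:

```python
# Hypothetical illustration of the two tips above. The file names
# (utils.py, config.yaml) are made up for the example.

# Negation-based prompt: "config.yaml" sits right next to a verb, and
# per the argument above the "don't" may carry little weight, acting
# as a false attractor toward exactly the unwanted edit.
negative_prompt = "Refactor utils.py. Don't touch config.yaml."

# Positive rephrasing: the forbidden file is never mentioned, and an
# escape hatch tells the model what to DO instead of editing it.
positive_prompt = (
    "Refactor utils.py. Limit every edit to utils.py. "
    "If a change seems to require another file, stop and "
    "describe it in your summary instead."
)
```

Whether this actually nudges any given model is, as argued above, not something you can rely on; it only removes the unwanted action from the prompt's surface form.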
That was always my thing about early EVs. Thumbs up to the early adopters, but it took a while for them to take off, as a lot of the early ones were just "a car, but with a battery": not very polished and with a lot of downsides.
I like it. I don't really need 20 cards in my apple wallet, so I just don't ... I just keep what I need like my existing physical wallet (now smaller because of my apple wallet).
Farming history seems to be boom and bust, and the golden ages of farming seem to have been both not quite what they're remembered as and surprisingly short.
A local university professor did a study on homesteading in my state and determined that even then the land offered to immigrants was actually too small to regularly turn a profit; to some extent, that seems to continue to this day.