If it continues to be a numbers game - the more resources you throw at it, the better it gets - then on-device models will always lag behind. I suppose they might be good enough for some uses, though?
I kind of loathe the move away from a world where we could control our own computers and run our own software on them.
California is trying to ban the sale of 3D printers that don't detect and block "gun parts" from being printed[1]. All Anthropic and friends need is some kind of safety rationale, and we won't be able to buy computers that can run local models.
The plausible way to do this is to force all software through some kind of signing process. This would be trivial for Apple to pull off and not much harder for Microsoft. On the Linux side, I expect the systemd folks would be happy to add some kind of signature checking to "head off the inevitable".
They're already nearly there. Once everybody's computers are rooted for age verification, with government-approved OSes, it's barely even a step to only allow government-approved AI on them.
For some reason, people assume that mandatory age/identity verification on people's machines is primarily aimed at enforcement across the network. If you wanted to age-restrict across the network, there are any number of trivial ways to do it that they have never even entertained. The reason the first victims of this legislation are operating systems is that governments, like Apple, Google, and Microsoft before them, want to restrict the software you can run.
They have to make sure that you can't have AI that can make pictures of naked children, or naked anything, or that might "underreport" the number of casualties in the Tiananmen Square protests, or could cast doubt on the vaccine (whichever vaccine), or say anything that could be interpreted as anti-Semitic (like that the Palestinians aren't savage animals), etc., etc....
This is already easy to build public support for. In the same way they whipped up bizarre mind-control allegations against genuinely evil social media companies to throw the public off the scent, the public is being groomed with bizarre and incoherent predictions about the evils of generative AI in order to distract from the actual evils of the people behind it. Just as the anti-social-media agitprop resulted only in TikTok being sold to explicit propagandists during a genocide, plus age attestation (while the social media giants do business without interruption), the AI scaremongering is just going to result in physical restrictions on individuals running AI - it will be tracked like explosives or nuclear material. The giant AI companies will be sold as the solution, just as the closed platform software "stores" from Apple and Google are sold as consumer protection.
> When has that ever happened though?
Microsoft has to sign the shim bootloader so Linux can be installed under Secure Boot on most machines. Encryption was once regulated as an arms export, and is fully under attack again.
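For the curious, here's a minimal sketch (Linux only, and assuming an EFI system) of how a machine reports that state: the Secure Boot flag lives in an EFI variable, which is what tools like mokutil --sb-state read under the hood.

    # Read the SecureBoot EFI variable; the first 4 bytes are attribute
    # flags, and the 5th byte is the value (1 = enabled).
    from pathlib import Path

    var = Path("/sys/firmware/efi/efivars/"
               "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")
    if var.exists():
        enabled = var.read_bytes()[4] == 1
        print("Secure Boot:", "enabled" if enabled else "disabled")
    else:
        print("No EFI variable found (legacy BIOS, or efivarfs not mounted)")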
Yes, this is the main point: employees will have less and less leverage (I'm even seeing AI conducting interviews now; good luck). Soon we'll be explaining to an AI why we aren't as productive as we were two weeks ago.
"Cast My Apps" - did they, uh, use AI to make that actually work? Because it's very flaky on my Chromebook, which I am otherwise very, very pleased with (especially given the price)
As for those claiming to be developers who code no more than 5% of their time and who fall back on arguments like "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?": that isn't commenting, it's shilling for the AI corpocracy on HN.
>> "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?"
I never got that argument. Compilers are deterministic, rigorously specified algorithms. If you understand what a compiler does, you can have a pretty good idea of what it will produce. If it doesn't do that, it's a bug. Correctness is well defined by semantic equivalence.
LLMs are none of that. An LLM is a fuzzy system that approximates your intent and does its best. I can make my intent more and more specific to get closer to what I want, but given that it's all just ordinary natural language, it's still open to interpretation. All of that is still quite useful, but I don't get the assembly language comparison.
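A minimal sketch of what I mean, using CPython's bytecode compiler as the stand-in compiler (just an illustration, not anyone's production toolchain): compile the same source twice and you get byte-identical output; an LLM at nonzero temperature gives no such guarantee.

    import hashlib

    src = "def add(a, b):\n    return a + b\n"

    # Compile the same source twice and hash the resulting bytecode.
    h1 = hashlib.sha256(compile(src, "<src>", "exec").co_code).hexdigest()
    h2 = hashlib.sha256(compile(src, "<src>", "exec").co_code).hexdigest()

    assert h1 == h2  # deterministic: same input, same output, every run
    # An LLM sampled at temperature > 0 offers no such guarantee: the same
    # prompt can produce a different program on every call.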
Because compilers are only deterministic with ahead-of-time compilation, no profiling data, and always the same set of compiler flags.
Introduce dynamic compilation, profiling data, optimization passes, multiple implementations, and ML-driven heuristics, and getting deterministic assembly output from a compiler becomes much harder to achieve.
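A tiny illustration of the flags point, using CPython's optimization levels as a stand-in for compiler flags (the same effect you'd see diffing gcc -O0 output against -O2):

    src = 'def f(x):\n    """doc"""\n    assert x >= 0\n    return x + 1\n'

    # Same source, different "flags": optimize=0 keeps the assert and the
    # docstring, optimize=2 strips both, so the emitted bytecode differs.
    f0 = compile(src, "<src>", "exec", optimize=0).co_consts[0]
    f2 = compile(src, "<src>", "exec", optimize=2).co_consts[0]

    assert f0.co_code != f2.co_code  # identical input, different output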
You're right about that, but that's about what you generate, not what the output does. My point is that compilers are still designed to preserve semantic equivalence. Semantic equivalence makes sense here because the semantics are well defined for both input and output. That part is supposed to be deterministic; if something breaks it, that's a bug.
I just don't think comparing with compilers is a good argument.
And hence reading code is unnecessary, because how well LLMs understand and convert my prompts is almost equivalent to how well compilers understand programs and turn them into assembly. My prompts carry exactly as much ambiguity as the code I would otherwise write to define the behavior I want.
If you spend 95% of your time on that stuff, you'd better be working on something like critical infrastructure where nothing can go wrong; otherwise you're in an incredibly dysfunctional company.
I agree it would be absurd for it to take 95% of your time.
I have, however, seen that it takes a lot more time than one would think.
I did some contracting work for a severely dysfunctional, meeting-heavy organization, and it was about 2 hours of meetings for every hour of real technical work!
Even when it’s not dysfunctional, you spend a lot of time on communication and reading stuff other people wrote (including code). It’s very rare to work in isolation.
I guess it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code. If we say it's just typing, then 95% is not absurd, no.
> it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code
And that would be where we disagree. I don’t read code to look at code. When I’m reading code, I’m looking for the contracts to follow when interacting with a system. It would be nice if it were documented, but more often than not you have to rely on code.
It’s very rare that I plan with a technical mindset. Yes I use the jargon, but it’s all about the business needs. Which again create contracts.
Same with writing code. Code is like English for me. If I don’t have a clear idea on what to write, I stop and do research (or ask someone). But when I do, it’s as straightforward as writing a sentence.
Huh? So you don't research whether something is technically feasible before you promise your stakeholders a delivery time / price estimate?
We all do the same stuff; the disagreement is just about what you feel coding is and whether you think technical work is the same thing or a superset. If you as a software dev aren't hands-on with planning or implementation more than 5% of your time, you're basically a PO with a programming hobby.
> So you don't research whether something is technically feasible before you promise your stakeholders a delivery time / price estimate
I believe 99% of requests are not about what’s technically feasible. And the rare times I’ve encountered one of those, my answer has mostly been “you don’t have enough resources to try solving that problem”.
If you know your fundamentals well, you will very often find the same common building blocks everywhere. People much smarter than me have solved a lot of the fundamental issues, and it’s rare that I see a business request that doesn’t reuse the same familiar stuff.
That’s why coding is mostly boring. You follow the same patterns again and again. But what dictates the flows are the business parameters, and that’s why most seniors spend so much time gathering good requirements: the code is straightforward after that.
Ah yes, agreed; if it's more than 90%, it just signals to me that a developer's skills are probably being wasted too heavily on business/coordination stuff.
But I guess if we mean actual time tapping the keyboard producing code, then it's true some days for senior+ devs, but definitely not for technical work overall.
He can absolutely be counted on to do stupid, corrupt shit. We saw plenty of it during his 4 years in office, when he was somewhat held in check by having hired some more or less 'normal' Republicans.
It was entirely predictable that he would fuck things up in some way. He's demented (although the press stopped caring about that sort of thing when Biden dropped out), deeply corrupt, narcissistic, and was never particularly intelligent to begin with.
Yeah, electronic shifting is so fast, easy, and precise, but I don't love how many batteries are spread across some of my bikes. I partly enjoy bikes as a way to get away from screens and tech and all that.
Yeah, where I live, one guy was like "You know what, I've had it". He then started organizing within the community and got a big crowd to show up at a city council meeting, and we ended up getting rid of the Flock cameras. Yay!
I'm looking to do something interesting with good people. I'm thinking something pretty niche that's a real product and not some AI-based deal. Not that I have anything against AI, I kind of just don't want to chase the fad, you know.
The last place like that where I worked, iCare (formerly Centervue), makes fundus cameras for eye exams, and it was a really productive environment.
I don't need to chase a really high salary; I'm looking for something fun, challenging and meaningful.
HN isn't likely the place to look for it, but I'm also kind of interested in something around housing policy, or otherwise branching out from software. I've immensely enjoyed my volunteer work with a local YIMBY group (well, OK, I founded the group).
Depends. A professor told me AI is really good at writing bad pandas code because it's seen a lot of bad pandas code, so starting from scratch isn't necessarily the worst thing.
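A hypothetical before/after, with made-up column names, to illustrate what "bad pandas code" usually means here:

    import pandas as pd

    df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

    # The row-by-row style models often pick up from low-quality examples:
    totals = []
    for _, row in df.iterrows():
        totals.append(row["price"] * row["qty"])
    df["total"] = totals

    # The idiomatic, vectorized equivalent:
    df["total"] = df["price"] * df["qty"]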