For a long time in the dialup era my "answering machine" was a US Robotics voice-capable modem attached to my home Linux PC, with some scripts to make it pick up after N rings, play a message, record whatever the caller said, and then email me the resulting sound file. The Linux support for it included DTMF tone recognition, so I added in a quick hack so that if I sent it the right pin code during the "please leave a message" part it would wait for me to hang up and then dial my ISP, so I could ssh in to it from wherever I was...
I wrote some stuff in Turbo Pascal for DOS to do something like this (though it didn't email files -- it just dumped them into a directory on the disk). My parents had two phone lines so making test calls from a real phone was easy. I just had to go around the house and turn off the ringers on all the phones so I wouldn't wake anybody doing test calls in the wee morning hours.
I didn't understand the sample format so all my playback was via the phone handset. I was in over my head, at that time, when it came to grokking audio codecs.
My grand vision was to make some kind of voice-based bulletin board system.
TellMe was one: call the number, ask it questions, get answers. For a time, part of my normal commute involved calling TellMe on a Nokia dumb phone to get the day's weather once I got settled into the drive.
Goog411 was another one. You could call Google, ask it questions, and get summaries of search results along with answers to specific questions (a lot like LLMs do today). I distinctly recall standing in the supermarket looking at a large and inexpensive hunk of meat that was labelled as a "Boston Butt Roast", and calling Goog411 to find some common uses for it. (It did give me confidence to buy it, and we did cook it and eat it. It was lovely.)
These things worked well for that brief moment in time when cellular calling minutes were either plentiful or unlimited, when smartphones didn't commonly exist, and when mobile data was ludicrously expensive.
I was going to use that to start a private voicemail company in my little rural town. Had the name and everything ready to go (mailvox!) but I was too broke to afford the second phone line xD.
Plus in retrospect I'm sure it would have been used almost exclusively for illicit purposes. But that wasn't really something I had thought of back then.
It's funny to think that not that long ago businesses would pay a premium for that feature set from their voice service provider. Maybe not the SSH part lol, but I worked with plenty of small and medium businesses in my career that paid for a voicemail to email service.
Unfortunately the 70% price rise on the JR pass back in 2023 made it much less likely to be economic for most people compared to just buying tickets as you go, even for trips that visit more than one city. Last time I was there I did a loop up from Tokyo to Hokkaido and back by rail, and it was still cheaper to buy individual tickets. (There are obviously still some itineraries where it works out cheaper, but it's much less of an "obviously good idea for most people" than it was back before 2023.)
Having followed some tourists coming to Japan, I'd say a large proportion of them appreciate convenience, and the rail pass gives them that. The price is secondary.
Hell, there are even people paying the equivalent of 100 USD just to have someone pick them up from Haneda airport and accompany them to the hotel. Not even a taxi service -- just someone to be with them, buy them the train tickets, etc.
Also, "why these 5 in particular" is definitely not obvious -- there are a great many possible "obvious in some sense but also true in an important way" epigrams to choose from (the Perlis link from another comment has over a hundred). That Pike picked these 5 to emphasise tells you something about his view of programming, and doubly so given that they are rather overlapping in what they're talking about.
If the team is that small and working on things that are that disparate, then it is also very vulnerable to one of those people leaving, at which point there's a whole part of the project that nobody on the team has a good understanding of.
Having somebody else devote enough time to being up to speed enough to do code review on an area is also an investment in resilience so the team isn't suddenly in huge difficulty if the lone expert in that area leaves. It's still a problem, but at least you have one other person who's been looking at the code and talking about it with the now-departed expert, instead of nobody.
A fairly large category of the flaky CI jobs I see is "dodgy infrastructure". For instance one recurring type for our project is one I just saw fail this afternoon, where a gitlab CI runner tries to clone the git repo from gitlab itself and gets an HTTP 502 error. We've also had issues with "the s390 VM that does CI job running is on an overloaded host, so mostly it's fine but occasionally the VM gets starved of CPU and some of the tests time out".
We do also have some genuinely flaky tests, but it's pretty tempting to hit the big "just retry" button when there's all this flakiness we can't control mixed in there.
If you're going to set a firm "no AI" policy, then my inclination would be to treat that kind of PR in the same way the US legal system does evidence obtained illegally: you say "sorry, no, we told you the rules and so you've wasted effort -- we will not take this even if it is good and perhaps the only sensible implementation". Perhaps somebody else will eventually re-implement it later without looking at the AI PR.
How funny would it be if the path to actually implementing that thing is then cut off because a PR with the exact same patch was already submitted. I'm honestly sitting here grinning at the absurdity demonstrated here. Some things can only be done a certain way. Especially when you're working with 3rd party libraries and APIs. The name of the function is the name of the function. There's no way around it.
It follows the same reasoning as when someone purposefully copies code from one codebase into another whose license doesn't allow it.
Yes it might be the only viable solution, and most likely no one will ever know you copied it, but if you get found out most maintainers will not merge your PR.
That's why I said "somebody else, without looking at it". Clean-room reimplementation, if you like. The functionality is not forever unimplementable, it is only not implementable by merging this AI-generated PR.
It's similar to how I can't implement a feature by copying-and-pasting the obvious code from some commercially licensed project. But somebody else could write basically the same thing independently without knowing about the proprietary-license code, and that would be fine.
You not realizing how ridiculous this is, is exactly why half of all devs are about to get left behind.
Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.
Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
> Shortly, no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.
1) Most people still don't use TDD, which absolutely solves much of this.
2) Most people end up leaning too heavily on the LLM, which, well, blows up in their face.
3) Most people don't follow best practices or designs, which the LLM absolutely does NOT know about NOR does it default to.
4) Most people ask it to do too much and then get disappointed when it screws up.
Perfect example:
> you can't trust anything they put out
Yeah, that screams "missing TDD that you vetted" to me. I have yet to see it fail to correctly pass a test that I've vetted (at least in the past 2 months). Learn how to be a good dev first.
> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.
This is a non-argument. All of the cloud LLMs are going to move to things like micro-nuclear. And the scientific advances AI might enable may also help avoid downstream problems from the carbon footprint.
Even if there isn't any 3rd party code, the whole process of going through the codebase to confirm there really isn't any 3rd party code, and generally getting the legal department to sign off on it, is a lot of work in itself. My impression is that this kind of "historic source" release typically only happens if somebody sufficiently senior in the company cares enough to actively push it through. The default is that nobody does care that much, and it doesn't happen.
"Do nothing" has essentially zero downside for a big company that happens to have something of niche interest like this in its vaults.
Third-party code is one thing, political correctness is another. What was acceptable in 90s brogrammer culture may not be considered acceptable by PR-obsessed corporate types now.
To put this more charitably, the only reason to release something like this is to get some good PR, but if not carefully controlled, such a release could create more bad PR than good PR.
I don't recall which product it was -- it may have been Microsoft's -- that needed its code sanitized before releasing it. There were a lot of not-so-nice comments about other companies, and oh so much swearing. Not really the type of language a company would want its name attached to.
I think Coccinelle is a really cool tool, but I find its documentation totally incomprehensible for some reason. I've read through it multiple times, but I always end up having to find some preexisting script that does what I want, or else to blunder around trying different variations at random until something works, which is frustrating.
"The specification describes bits as combinations of 0, 1, and x, but also sometimes includes (0) and (1). I’m not sure what the parenthesized versions mean"
The answer is that the (0) and (1) are should-be-zero and should-be-one bits: if you set them wrongly then you get CONSTRAINED UNPREDICTABLE behaviour, where the CPU might UNDEF, NOP, ignore that you set the bit wrongly, or set the destination register to garbage. In contrast, plain 0 and 1 are bits that have to be that way to decode to this instruction, and if you set them to something else then the decode will take you to some other instruction (or to UNDEF) instead.
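To make the distinction concrete, here's a toy decoder sketch in Python. The bit pattern, masks, and bit positions below are invented purely for illustration and don't correspond to any real Arm instruction encoding:

```python
# Toy sketch: fixed 0/1 bits select which instruction you decode to,
# while (0)/(1) "should-be" bits give CONSTRAINED UNPREDICTABLE if wrong.
# All patterns here are made up for illustration.

def classify(encoding: int, fixed_mask: int, fixed_val: int,
             sbz_mask: int, sbo_mask: int) -> str:
    """Classify a 32-bit encoding against one hypothetical pattern.

    fixed_mask/fixed_val: the plain 0/1 bits that must match to decode here.
    sbz_mask: should-be-zero bits, i.e. '(0)' in the spec.
    sbo_mask: should-be-one bits, i.e. '(1)' in the spec.
    """
    if (encoding & fixed_mask) != fixed_val:
        # Fixed bits differ: this word decodes to some other instruction
        # (or to UNDEF), not to this one.
        return "other-instruction"
    if (encoding & sbz_mask) != 0 or (encoding & sbo_mask) != sbo_mask:
        # Should-be bits set wrongly: CPU may UNDEF, NOP, ignore it, etc.
        return "constrained-unpredictable"
    return "this-instruction"

# Hypothetical pattern: top byte fixed to 0xE5, bit 7 is '(0)', bit 6 is '(1)'.
FIXED_MASK, FIXED_VAL = 0xFF000000, 0xE5000000
SBZ, SBO = 0x00000080, 0x00000040

print(classify(0xE5000040, FIXED_MASK, FIXED_VAL, SBZ, SBO))  # this-instruction
print(classify(0xE50000C0, FIXED_MASK, FIXED_VAL, SBZ, SBO))  # constrained-unpredictable
print(classify(0x12000040, FIXED_MASK, FIXED_VAL, SBZ, SBO))  # other-instruction
```

The point of the split is that only the fixed bits consume encoding space; the should-be bits stay reserved, so a future architecture revision can give them meaning without colliding with existing instructions.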
This is an important ISA feature -- an instruction encoding that is wasteful of its encoding space is one that has no room for future new instructions (or which has to encode the new instructions in complicated ways to fit in whatever tiny "holes" are left in the encoding space).
The old 32-bit Arm encoding had this problem, partly because of the "all instructions are conditional" feature. Even after the clawback of the "never" condition that wasted 1/16 of the available instruction encoding space as NOPs, it was tricky to find places to put new features.