
I'm a big fan of stack-based languages conceptually, but they always seem to fall flat when it comes to basic reading comprehension. APL has the same issue, as does J. Factor did improve on this a little bit by eschewing the symbol fetish but it was still very difficult to rapidly scan the code for functionality. I'm not convinced the approach shown here is a good pairing with the human brain.


If (and I grant, this is a big if) you are used to them, the symbolic nature of APLs allows you to discuss code fragments (and even entire* algorithms) using inline elements instead of as separate interspersed blocks.

The difference between scanning Algol-style and APL-style code is a little like the difference between scanning history books and maths books: on the one hand, one must scan the latter much more slowly (symbol by symbol, not phrase by phrase), but on the other hand, there's much less to scan than in the former.

* Edit: as an example, compare the typical sub-10-character expression of Kadane's algorithm in an array language with the sub-10-line expressions one typically finds online.
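For comparison, a typical sub-10-line rendering in Python (a sketch of the standard maximum-subarray-sum scan, not taken from any particular source):

    def kadane(xs):
        # Classic linear scan: best run ending here vs. best run seen so far.
        best = cur = xs[0]
        for x in xs[1:]:
            cur = max(x, cur + x)    # extend the current run or start a new one
            best = max(best, cur)
        return best

    print(kadane([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6

The array-language version compresses essentially the same scan into a handful of glyphs.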


APL people often point to the definition of "average" as evidence for its economy of expression: +/÷≢

They make the argument "the word 'average' has more symbols than its definition, so why not just use the definition inline as a tacit function?"

There's some elegant beauty in this that I'm sympathetic to. However, I think it's fundamentally flawed for one big reason, which in my opinion is the core of the unreadability of APL (and other array languages that rely on custom symbology):

Humans think in words, not letters. There is never semantic content in individual letters; a word is not defined by its letters. Sometimes a word's definition can be deduced from its syllables ("un-happi-ness"), but the number of prefixes/suffixes is minuscule compared with the number of root words.

We naturally chunk concepts and create referential identifiers to the abstractions. The point of an alphabet is to make the identifiers generally pronounceable and distinguishable.

+/÷≢ is not pronounceable as a single word, even among APL experts ("add reduce divide tally" is not a word). We have a word for this concept in English, "average" (and synonyms "avg" and "mean"), in common usage among programmers and laymen alike. Using +/÷≢ to mean "average" would be like defining a function AddReduceDivideTally(x), which in any reasonable codebase would be an obviously bad function name.

The semantics of array languages, like stack languages, already lend themselves to extraordinary expressivity and terseness, even without a compressed symbology. What is wrong with this?

    sum := add reduce
    average := sum divide tally
I mean, that is it, the essence of "average"! Anyone who knows both English and Computer Science can look at that and understand it more or less immediately. Compressing this into an esoteric alphabet does nothing to further understanding; it only creates a taller Tower of Babel to separate us from our goal of developing and sharing elegant definitions of computational concepts.
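To make the comparison concrete, here is a rough Python sketch (the helper name `fork` is mine, purely illustrative) of how those word-based definitions and the tacit +/÷≢ express the same shape:

    import operator

    def fork(f, g, h):
        # APL-style 3-train: (f g h) applied to x means g(f(x), h(x)).
        return lambda x: g(f(x), h(x))

    # "sum divide tally", i.e. the fork  +/ ÷ ≢
    average = fork(sum, operator.truediv, len)

    print(average([1, 2, 3, 4]))  # 2.5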


But using +/÷≢ to mean average isn't like using AddReduceDivideTally; it's like +/÷≢. With substantial array programming experience I do read this as a single word and pronounce it "average"; why should the fact that each symbol has its own meaning prevent me from processing them as a group? It's purely a benefit in that I can (more slowly) recognize something like +/÷⊢/⍤≢ as a variant of average, in this case to average each row of an array.

My opinion on the overall question, which I've written about at [0], is that it's very widely acknowledged that both symbols and words are useful in programming. This is why languages from PHP to Coq to PL/I all have built-in symbols for arithmetic, and usually a few other things, and built-in and user-defined words. The APL family adds symbols for array operations, and often function manipulation. Perhaps not for everyone, but, well, it's kind of weird to see a proof that a language I use to understand algorithms more deeply couldn't possibly do this!

[0] https://mlochbaum.github.io/BQN/commentary/primitive.html


> Humans think in words, not letters

In some languages, e.g. Chinese, each symbol means more than a single letter would in a Latin alphabet. To me, these languages are just a bit more like that.


Each character in Chinese is a full word. The "letters" are the radicals, which do not contribute to the meaning.


Sure, but...that's my point. In APL those characters aren't letters either, they're words.

I agree with your overall point (which is, I think, DRY) but not this specific criticism.


> Humans think in words, not letters.

That's true. Just a nitpick: it's not about letters, it's about the parts that make up a word, and many words in natural languages are made up of smaller parts that carry meaning on their own.

But people do tend to forget them and just use the resulting word as a unit.

I can't think of a good example in English, but in Italian the word for "alarm" is "allarme", which comes from "alle armi!" ("to the weapons!"). At some point people used that word to mean you should actually grab your weapons, and later it came to be used only metaphorically. Most people today don't even make the connection between the two words, despite it being right in front of their faces.

What would be an equivalent example in English?


> +/÷≢ is not pronounceable as a single word, even among APL experts ("add reduce divide tally" is not a word)

I’m not entirely certain most programs will ever be pronounced.

> What is wrong with this? average := add reduce divide tally

Nothing, but it seems to imply that this: average := +/÷≢

… nicely resolves this issue of wanting to use words (i.e. sequences of pronounceable but semantically meaningless glyphs) as human-friendly (at least for English-fluent humans) mnemonics for programs (i.e. sequences of not-muscle-memory-pronounceable but semantically meaningful glyphs), yeah?

Mind you I haven’t looked at the language deeply enough to know you can actually define an alias like that.


> If (and I grant, this is a big if) you are used to them, the symbolic nature of APLs allows you to discuss code fragments (and even entire* algorithms) using inline elements instead of as separate interspersed blocks.

So I easily grant you this, though I don't think it's much different in practice from using subroutines.

But even if I grant you that a high level of experience allows rapid scanning, there's still a barrier of translating to and from English (or whatever language you develop in) that seems to be higher due to the symbolic nature of the language. This is also true of certain procedural languages, but there have also been an enormous number of think pieces about how best to write procedural programs so that they translate naturally into natural-language domains. I'd imagine that not being able to leverage those decades of discussion straightforwardly would be a loss.

Granted, this is also a problem with stuff like Haskell, but the type system goes a long way toward ameliorating the concern by classifying structures very rigidly.


I think for most people the learning curve is steeper with tacit languages, mainly because you have to hold the stack/array state in your head. Mathematicians will probably find it familiar, and, as with Forth, you could annotate with some sort of stack diagram as an aide-mémoire.

If you're looking for something more understandable, Rebol's syntax is phenomenal. It's a concatenative prefix-RL language like Uiua (Forth is postfix-LR), which you can use like a stack or array language by passing the stack/array as the far-right operand. http://blog.hostilefork.com/rebol-vs-lisp-macros/

Furthermore, it handles types and has (declarative) scoping, unlike, say, Forth, which is typeless (panmorphic) and global.

The idea with Rebol, similar to Joy, is that operating on a stack is analogous to passing a mutable stack to a function, so you get tacit programming with either approach. Prefix-RL allows more of a Lispy feel, especially when combined with blocks.
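A rough Python sketch of that analogy (illustrative only, not Rebol or Joy syntax): the same "add" word can either mutate a shared stack, Forth-style, or be a function from stack to stack, Joy-style, where a program is just composition over an explicit stack value:

    # Style 1: words mutate a shared, global stack (Forth-like).
    stack = []
    def push(x): stack.append(x)
    def add():
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)

    push(2); push(3); add()
    print(stack)  # [5]

    # Style 2: words are functions from stack to stack (Joy-like),
    # so a program is composition over an explicit stack value.
    def push_(x):
        return lambda s: s + [x]
    def add_(s):
        *rest, a, b = s
        return rest + [a + b]

    program = lambda s: add_(push_(3)(push_(2)(s)))
    print(program([]))  # [5]

Either way, no argument names appear in the program itself, which is the tacit part.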


I'm always wondering what kind of people prefer this syntax to more classical syntax.


Akshually APL can be considered more classical than algol-style languages, since it originated as mathematical notation for teaching programming on a blackboard in class.

Lisp beats APL by a few years, though, so it really depends on whether you value Lisp's seniority or APL's connection to math-notation traditions more when judging how "classical" a language's syntax is.


Those iterating quickly on experimental stuff, or those working on problems where arrays are the only collection needed (there are many such problems).


Hot take: people who don't touch type.


Oh, you can touch-type emoji and symbols today. There are all sorts of interesting Input Method Editors (IMEs) for them. Since Windows 10 there's one by default in Windows if you type Win+. or Win+; (whichever you prefer). You can type and it will filter the emoji by name. It's not quite as nice for mathematical symbol entry, but it does include them (on the tab labeled with an Omega) and is still better than many other input methods for them. macOS has a similar out-of-the-box IME (Control+Command+Space), though it differs in the amount of math symbol support. Linux's most-used IME subsystem `ibus` (specifically `ibus-emoji`), if correctly set up, should also provide an emoji experience by default (Super+. like Windows). There are third-party ones as well for most platforms.

Emoji are not just for tap-typing/swipe-typing mobile users. (Also, I think it is really handy as a developer to learn your local emoji IME: including emoji in test data is really useful for checking Unicode safety in your applications, and regularly using any IME at all in your applications helps you test some of the accessibility issues that might affect users who must use an IME for their language (such as CJK languages), Braille writers, and more. English-speaking software developers get to overlook a lot of how languages around the world work and can easily break accessibility needs with bad assumptions, so it is great that emoji are a grand field-leveling tool that brings those experiences to us English-speaking developers in a way we can easily "read"/"write".)
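A tiny illustration of the test-data point (plain Python, nothing project-specific): a single user-perceived emoji can be several code points, which immediately exposes naive length or truncation logic:

    s = "family: 👨‍👩‍👧‍👦"        # one "character" to the user, several code points joined by ZWJs
    print(len(s))                  # counts code points, not user-perceived characters
    print(len(s.encode("utf-8")))  # byte length is different again
    # Anything that truncates by len() or by byte count risks tearing the emoji apart.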


I meant "brevity at all costs" languages like J and Uiua strike me as the kind of contraptions that people who can't type fast would come up with. I speak from experience. I had a coworker who would write absolutely cryptic code and one day we were working together and I noticed he was typing painfully slow, and then it all clicked.

I'm well aware it's possible to type, e.g., Chinese rapidly, and I use the Windows emoji entry keyboard quite often.


I see these languages as resembling classic mathematical notation as much or more than I see them as "lazy typist" languages. To be fair, some of classic mathematics notation was designed to be easy to write on blackboards, but that's still very different from being easy to type.



