Code boilerplate: Is it always bad? (medium.com/shazow)
88 points by dmitshur on June 10, 2017 | hide | past | favorite | 126 comments


It's always bad. The second line lets you understand at a glance what it does, because it is declarative and explains what it does instead of how. The Go code is awful (personal opinion, though I hope it is widely shared): it is much more verbose, and it opens you up to a whole series of errors that are simply impossible in the Python version. Boilerplate code is by definition just duplication, and it is always bad because it opens you up to bugs in a specific piece of the boilerplate, whereas you can't have any bugs in boilerplate you don't have. Honestly, I can't see how anyone can prefer code that is more verbose and bug-prone to code that is more concise, clearer, and has fewer bugs.


> It's always bad.

Generalisation is always[tm] a bad idea. Taken to illogical extremes, at the other end of the spectrum we have the monstrosities of old Perl. After all, it was often said of Perl that you could fit most solutions into a one-liner.

In fact, I would go as far as to claim that using Python's n-tuple iterators inside branched comprehensions gets awfully close to the worst aspects of what Perl scripts were commonly criticised for.
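Concretely, the kind of density being criticised might look something like this contrived sketch (the data and names are invented for illustration); it's readable to a Python regular, but it edges toward Perl territory as the clauses pile up:

```python
# A contrived sketch: tuple unpacking inside a branched comprehension.
pairs = [(1, True), (2, False), (3, True), (4, True)]
dense = {j: ("odd" if j % 2 else "even") for j, k in pairs if k}

# The spelled-out version of the same logic:
spelled_out = {}
for j, k in pairs:
    if k:
        spelled_out[j] = "odd" if j % 2 else "even"
```

Both produce the same dictionary; the argument is about how much meaning each line is asked to carry.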


Perl golf is certainly an extreme example, but more commonplace is the convention-over-configuration in Ruby on Rails. Mostly this is useful in avoiding boilerplate and increasing "programmer happiness", but (in my opinion) the C-over-C is slightly overcooked. I believe a small amount of boilerplate declaration goes a long way towards assisting with debugging and production correctness. Two examples from Rails to illustrate:

1. RESTful routes leading to implied controller actions that render a view without any controller declaration at all. I've seen more than one codebase with routes that could be invoked by users despite never being linked in production and having no test coverage. It's basically undocumented code due to a total absence of declaration, and a case of needing to know the implied framework behaviour to understand a program.

2. I have an issue with ActiveRecord's schema behaviour. By avoiding attribute declaration in models and just defining ORM methods by asking the database for column names, it only takes one poorly handled migration or a DBA oops to lead to some very strange production behaviour. The situation is only partially offset by the (new) attributes API and/or a schema cache dump during CI, but one can never fully do away with the hazard. Philosophically speaking, sometimes Rails thinks it is dictating the database schema, sometimes Rails accepts what the database tells it, and out-of-the-box Rails has no model boilerplate that bridges the gap to guarantee a specific DB schema.

In both cases I add my own safeguards for production environments, but that's a matter of programmer discipline rather than mandatory declaration.


Convention most of the time doesn't reduce boilerplate at all, while it exponentially increases the headaches for anyone reading the code. A typical example is the awful Prism way of displaying a view in a WPF application. For some reason completely alien to me, the people at Microsoft patterns & practices (or whatever it was called at the time) explicitly suggested using the IoC container directly (or via a kind of service locator) to create a view that automatically tries to load a view model with the same name, if one exists, without throwing any error if it isn't found. This is the worst possible suggestion ever.

It is much easier, and much more readable, to just take your view model as a parameter in your main view model (the IoC container will create it for you), assign it to a public property, and have the view, defined in a template, automatically shown in a content presenter. This is extremely easy, much easier to do than to explain: there is no connection whatsoever to the container, if the view model is not found you receive a runtime error, the view model has no relation at all to the view, it can be named whatever you like, and the view has an explicit connection to the view model.

So in this case removing convention-based loading not only makes everything explicit, it also removes all the boilerplate needed to get a view from the container. I don't particularly like RoR for exactly the same reason I don't like the convention-driven approach just described. And as you can see, removing conventions and using different techniques can actually reduce boilerplate if done properly.


Perl's terseness is about ripping meaning away from the code's symbolic representation to save a few characters. That's a related issue to boilerplate, but it's not the same thing at all.

Boilerplate is usually maximized in projects which try to give users a huge amount of flexibility that they don't need or want.

This is partly why minimizing boilerplate is so hard - one thing that continually surprises me is what users want flexibility over and what they don't. I've lost count of the number of times I've given users flexibility to do something they don't give a shit about and not given them the flexibility to do something they really wanted.


>ripping meaning away from the code's symbolic representation to save a few characters

That's a great way to put it. I'll steal this one.


You can code golf in any language, and it will probably result in less comprehensible code. If there's one thing I agree with the original author on, it's that terseness is not an end unto itself. But I think the overall point, that because terseness can sometimes be bad we therefore shouldn't work at a higher level of abstraction, is really at odds with everything we've learned about programming in the last century.


On a related note: The single thing I absolutely love in Visual Studio is how it handles C#'s #region statements. While they traditionally seem to be used mostly to group class members (which doesn't really benefit the reader much, IMHO), one can use them to hide the fact that C# still has a tendency to be a bit verbose by reducing complicated methods to pseudo code.

I tend to use this pattern quite a lot:

  #region Find the first Foo with Bar
    var foo =
      foos.Where(f => f.Bar)
          .Distinct()
          .OrderBy(f => f.Baz)
          .FirstOrDefault();
  #endregion

I still miss a similar feature in Emacs and Vim. There are similar folding markers one could use (say {{{ }}}), but they don't seem to work together with syntax-based folding.


This would be easy to code up in Emacs. If interested, I'll try and get an implementation out. (Currently on my phone...)


If you know how to do this, I would indeed be interested.

I have already pondered whether it would be possible to merge multiple "folding strategies" if one built some kind of interface through which multiple routines could communicate folds in a buffer.

Say you have one routine scanning the syntax of the corresponding language, one looking for folding markers, and one for manual (Vim-style) folds. I guess it should technically be trivial to use all of them, provided they are nested and don't intersect in weird ways (the latter could be resolved with some simple priorities).


I wound up not getting the computer out. Looking at it right now to see if it would be as easy as I was thinking.

Simply put, knowing how advanced org-mode can get with folds, this is likely trivial to do. (If you haven't played with it, I'd recommend taking a look. I don't have perfect examples, but http://taeric.github.io/DancingLinks.org is an example. Open that in emacs with org-mode and you will see that all of the source blocks can be folded together. For me, the source blocks are correctly colored, as well.)

Quickly searching, I found https://www.emacswiki.org/emacs/FoldingMode, which seems to be a large start to this. There were also some other ideas that seem to get close.

The basic idea I had was to just make a few functions that would:

    - Fold from current location to next #endregion
    - List all region/endregion's in the current buffer
    - Fold all region/endregion's in the current buffer
The second of those felt trivial, since it is basically M-x occur #region\|#endregion. On top of that, I just need to learn how to actually make overlays and then the rest feels like it would fall into place easily enough.

Edit: It occurs to me that I didn't show how to actually hide text, which I think is your question. Fastest way I can see is this (run this in your scratch buffer to see it in action):

    (let ((o (make-overlay 5 120)))
        (overlay-put o 'display "---hidden---"))
To easily delete this particular overlay, if you make it, use:

    (let ((o (overlays-at 5)))
        (mapcar 'delete-overlay o))
So, the basic task that will need to be completed is finding the point position for the start and end of make-overlay.


I think that's really cool, although

    // Find the first foo with bar
    foos.filter(f => f.bar).uniq().orderBy(f => f.baz)[0]

does the same basic thing. Still, the ability to collapse and display intent is incredibly rad, for lack of a better word.


Honestly I can't really see how anyone can prefer code that is more verbose and bug-prone to code that is more concise, clear and with fewer bugs.

I agree with you, but seeing the number of proponents of Enterprise Java there are, it's clear that there are a significant number of programmers who prefer verbosity and extra abstractions --- although IMHO not necessarily because it's actually better.

In fact a trend I'm seeing with a lot of programming languages is more verbosity and unnecessary abstraction, leading me to wonder whether programmers actually know the languages they're using or are barely scraping by with the very minimal basics (and in the case of some languages, a bit of OOP cargo-cult dogmatism.)

Although I'm not really familiar with Python, and the order of the clauses in its conditional expression syntax is unusual and a little surprising, I think that second line probably took longer to write than it did for me to read.
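For reference, the unusual ordering in question is that Python's conditional expression puts the condition in the middle, rather than first as in C-family languages (a minimal illustration):

```python
# Python: <value-if-true> if <condition> else <value-if-false>
# C-style: <condition> ? <value-if-true> : <value-if-false>
x = 5
label = "odd" if x % 2 else "even"
```

A reader scanning left to right sees the "true" branch before knowing what is being tested, which is what makes it surprising at first.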


>I agree with you, but seeing the number of proponents of Enterprise Java there are, it's clear that there are a significant number of programmers who prefer verbosity and extra abstractions --- although IMHO not necessarily because it's actually better.

Even Enterprise Java wasn't verbose for the sake of it (I say "wasn't" because modern enterprise Java can be more like modern functional style than Go).

It was more verbose because it added tons of options and flexibility while keeping with the OO style, so everything was pluggable and configurable and factorable. But that's something else than Go style verbosity.


I don't know people who like verbosity per se. But I can sympathize with people who think that if removing verbosity introduces other burdens that might affect the understandability or refactorability of the code, it is not clear-cut whether it's worth it.

Say, with Scala implicit parameters, you get to remove code from the call site. Does it make the code better? Depends on the case, in my opinion. So I don't see the point of saying some people categorically want verbosity. They may prefer explicitness and unsurprising code, which may require verbosity. Or perhaps the removal of verbosity requires code that allocates more memory; in that regard Kotlin seems to be a good citizen. It has ways to make code concise without making the byte code entirely different.


> probably took longer to write than to read

That's the point. Reading is far more important.


> unnecessary abstraction, leading me to wonder whether programmers actually know the languages they're using or are barely scraping by with the very minimal basics

It is more that they have good abstract thinking, so abstraction is not an issue for them. Abstractions are an issue only for some people.

> I think that second line probably took longer to write than it did for me to read.

If it takes more time to read than to write, then it is not done yet and needs to be refactored. Good code is as obvious to read as possible.


Your comments clearly ring true for the example code snippets shown. But I think you have missed the point of the article, which is the benefit of the Go version when iteratively changing the structure.

The author could have brought the examples forward, instead of just providing links to urllib3 and the Set type.


>Your comments clearly ring true, for the example code snippets shown. But i think you have missed the point of the article, which is the benefit of the Go version when iteratively changing the structure.

You can iteratively change smaller, terser code much faster. There's a reason prototypes are often written in languages like Python.


Logical duplication tends to be bad. But code duplication -- that is not logical duplication (business rules, etc) -- might not be.
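A Python sketch of that distinction (the scenario here is hypothetical): two functions that are textually identical today but encode different business rules, so merging them would couple logic that may diverge later:

```python
# Same code by coincidence, not by logic. Deduplicating these would
# tie the display-name rule to the byline rule.
def user_display_name(user):
    return user.get("name") or "anonymous"

def author_byline(post):
    # May later diverge (e.g. pen names) while display names stay put.
    return post.get("name") or "anonymous"
```

Logical duplication would be writing the "fall back to anonymous" *rule* twice where it must always agree; that is the kind worth removing.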


I feel that this sort of 'cleverness' in packing a large amount of logic onto a small amount of code is treated quite differently between Python and C-type languages.

In C++, would you write the same structure using a single line for-loop and ternary operators?


I tend to avoid both 'cleverness' and writing ultra-terse code for a couple of reasons.

The first is that somebody else will eventually have to work with it, decipher what the heck I was trying to achieve, and generally get tripped up and slowed down by it.

The second is that eventually I may have to work with it again, and sit there scratching my head and wondering what I'd been smoking when I wrote it, instead of focusing on the changes I need to make.

The point is to code for maximum clarity to humans whilst still achieving the desired functional and non-functional outcomes on the machine. You will use varying terseness as part of doing this because terseness isn't the point: it's just one of many means to your end.


I completely agree. Writing a loop with several useless variables in scope, increment operators, exploding computational complexity is for sure more difficult for a human to understand compared to declaring what your code ought to do in a concise and well understood way. The maximum clarity for humans is always the goal and bug-prone, verbose code is for sure not the way to go.


Yes, I'd write that code in C++ in a single line if I could. At the moment it's not quite possible, but with C++17 and the ranges library the expressiveness of C++ will get quite close to that of Python.

It's not "cleverness", but rather the ability to express the intent as succinctly and clearly as possible. Higher-level languages allow you to do that; lower-level languages don't. Obviously Go is not a very high-level language, because it forces you to write explicit loops instead of using comprehensions or functional algorithms. A loop is a primitive concept. All other factors being equal (i.e. performance), you should always prefer comprehensions or algorithms over explicit loops.


If I were asked to do a similar thing in C it would probably look something like this --- I've renamed some of the variables to match a hypothetical situation and show that naming can have a significant effect on understanding what the code could be doing:

    int total = 0;
    for (int i = 0; i < resultlen; i++) {
        int price = results[i].price;
        total += results[i].selected && (price & 1) ? price : 0;
    }
In C++, someone would probably do the same using the standard library functional programming templates etc., approaching the conciseness of the Python (and coincidentally, likely generating even more code than the simple loop above.)


This argument can be abstracted to subjective thinking vs. objective. In subjective thinking, every situation is unique and even the best objectively-validated boilerplate can increase risk by lowering flexibility, control, and responsibility for & deep knowledge of the situation.

Objective thinkers tend to go looking for existing, established boilerplate or frameworks because of the immediate leverage that is provided. They may think of too much subjective thought or originality as ridiculous NIH or wheel-reinvention. Their struggle is in the weighty task of grokking and maintaining third-party resources.

Scott Adams has kind of unwittingly uncovered this dichotomy with his own inventive, subjective thinking. He considers more objective thinkers as "word thinkers" or some name like that, because they go looking up the established names of things rather than understanding the things themselves.

Also Chris Langan similarly trashed Jeopardy contestants in a news interview, because they aren't subjective thinkers like he is; their minds are just filled with objective facts about various things. Well, true for him; his bias dictated the way he developed his own unique universal model--he prefers and has a gift for subjective thought.

Really though, we need both kinds of approaches.


Except, subjective thinking is what leads to objective facts. If not for thinkers, there would be nothing to share.

And words are amazing. All these complex and nuanced concepts we're talking about can be expressed and shared almost instantly because of the words we share. If we had to invent words as we communicated, or worse yet, had no words, there would be no discourse because the sun would go down before we got anywhere.

It is true that words are set in their definitions and intuitions, much in the way that scientific models are and common sense is. These can be considered objective facts.

But challenging these facts and inventing new words is just an important part of the process. Some have more success with it than others, but not looking up what is already known is just wasting your time. Same with not using words that already mean what you mean.

Subjective thought leads to objective facts with fact checking. Objective facts are then better shared, and better learned, because learning is far easier than arriving at them yourself. Learning just takes communication, which is proportional to the complexity of the information. Inventing or discovering new facts requires the entire lengthy and laborious process, which will inevitably include experiments, fact checking, and evidence validation if we are to be scientific. If the problem can be solved with pure thought, chances are it was a philosophical one, although philosophy too relies heavily on existing language.


While I don't disagree with the distinction, I don't agree that it applies here.

The thing with code is that you can take those existing patterns and make them a first-class concept in your program. You're still in "objectively thinking" - still reusing existing code - just in a more compact and safe way.


The proportionality argument is bunk, straw-man-level bunk. Consider what goes on every time Go accesses an object, casts to an interface, or writes to a channel: huge, complex code under the covers, way more than in the Python examples he's given. What this article really says is "There are abstractions I'm not as comfortable processing as others, therefore those abstractions are bad."

If having code be proportional to complexity was a good idea, we'd all be writing in assembly and subroutines would be frowned on.


I think having code be proportional to 'complexity' is good, but having it be proportional to 'work' makes no sense. Personally I find things 'simple' when they only require one level of abstraction, such as accessing an object, or performing arithmetic operations on algebraic objects (e.g. multiplying matrices). And complex when they require more (e.g. manipulating the coefficients of two matrices to define their matrix product).

Boilerplate is when you require more code than the complexity demands, usually because the language doesn't provide a sufficient opportunity for abstraction.

Note that, if we define a '->' operator that roughly expands to the following:

    (x, err) -> f :
        if (err != nil) {
             return nil, err
        }
        return f(x);
Then the entire code he wrote as an example of boilerplate can be written as

    DoSomething -> SomethingElse -> MoreWork;
with some minor differences that could be optimized away in the right language.
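For what it's worth, the same short-circuiting pipeline can be sketched in Python with ordinary functions returning Go-style (value, err) pairs. Every name below is invented for illustration:

```python
# Thread a value through steps, stopping at the first non-nil error,
# roughly what the proposed '->' operator expands to.
def chain(x, *steps):
    for step in steps:
        x, err = step(x)
        if err is not None:
            return None, err
    return x, None

# Stand-ins for DoSomething, SomethingElse, MoreWork:
def do_something(x):
    return x + 1, None

def something_else(x):
    return x * 2, None

def more_work(x):
    return x - 3, None

def broken(x):
    return None, "boom"
```

With this helper, `chain(5, do_something, something_else, more_work)` plays the role of `DoSomething -> SomethingElse -> MoreWork`, and inserting `broken` anywhere short-circuits with its error.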


> "There are abstractions I'm not as comfortable processing as others, therefore those abstractions are bad."

This is the design philosophy of Go, as far as I can tell.


I have a Lua module I wrote that allows me to do stuff like:

    process.limits.soft.cpu = "4m" -- limit to 4 minutes of CPU time
    process.limits.soft.core = 0 -- and no core file
Yes, ultimately it ends up in setrlimit(), but the amount of code that goes into backing this up is a bit daunting. It more than makes up for it, though, by showing the intent (and it's not like this is done in the hot path of a program).
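For comparison, a rough Python analogue goes through the stdlib resource module directly, without the friendly attribute syntax of the Lua wrapper (Unix-only; the 4-minute value is just an example):

```python
import resource

# getrlimit returns the current (soft, hard) pair for a resource.
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)

# Setting a 4-minute soft CPU-time limit would look like:
#   resource.setrlimit(resource.RLIMIT_CPU, (240, hard))
# (left commented out so this sketch has no side effects)
```

The intent-revealing layer in the Lua module is essentially sugar over exactly these two calls.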


It seems he has been writing code so clever and compact that he didn't dare to change it when refactoring time came:

> I’d take any opportunity to write terse code. It feels good to be clever

> I noticed that when I wrote Python code, I would often go out of my way to avoid messing up a beautiful terse set of lines by moving logic to places other than where it might naturally belong.

I can see how a less expressive language might help with this problem, but I think a better solution would be to rein in the cleverness and focus on readability rather than shortness.


I think it's related to the iron law of abstractions. It's easier to change code that has no abstraction than the wrong abstraction. The right abstraction is better than both those, of course, but is often difficult to identify upfront.


Succinctly put. I have observed a tangentially related correlation: The code that needs least understanding to change will be the code changed, until it isn't the code needing the least understanding any more.

Together they sort of tell us why pragmatism is often good when we start writing code, but also why we must always try to push towards finding (good) abstractions as the work progresses.


> Any time we write similar-looking code over and over, that’s considered boilerplate.

That's not what boilerplate code is, and DRY is not the way you get rid of boilerplate (the article claims it is a part of the solution).

Boilerplate is a repeated preamble that's necessary and unchanging in order to get down to the business of actual logic.

An easy example of boilerplate would be include guards in a C/C++ header file, or 'use strict' in Perl.

For example, the author says this:

> When I write Go, I noticed that my tweaks almost always go exactly where they naturally belong. My boilerplate doesn’t stay untouched!

If you're "touching your boilerplate" while writing code that affects the logic of your system, then it isn't boilerplate.


Boilerplate can be common and still be tedious.

Those examples are just clear indications that languages sometimes have really terrible default behaviors. Namely, Perl should have been strict by default (with a "no strict" if this didn’t perform well enough), and C++ should have at least added #import.


This is a hilariously bad take. If we follow the argument to its natural conclusion then go is itself a poor choice of language. After all, go removes memory management boilerplate in favor of an automated gc system. By the author's argument go should force you to manage your memory yourself. In fact, we really should all just code in assembly because it has the clearest relationship between code and what is going on under the hood. True, coding in assembly requires a lot of needless boilerplate but by this article's argument that's actually a feature.

Go has so many truly bad design decisions (lack of generics being the most damning) and it's sad to see its fanboys try to argue that they're actually good.


In the article:

I was expecting a discussion about boilerplate, but the author writes about how ugly/bad code that does a lot of operations in one line is.

Then he introduces the concept of "proportionality", according to which assembler must be the best language, since its program statements are directly proportional to the operations to be performed...

Suddenly high level programming is no longer good stuff...

Meanwhile no real discussion about boilerplate!!

Whenever your code shows implicit patterns that get repeated but cannot be abstracted away using code, you have boilerplate and boilerplate is almost always a bad thing. Your choices are then:

1. Pretend those patterns are a good thing.

2. Switch to a more powerful language and destroy the boilerplate either because it isn't necessary anymore, or because you can abstract it away with, for example, a macro.


    > Then he introduces the concept of "proportionality", and
    > according to it, assembler must be the best language since
    > the program statements are directly proportional to the
    > operations to be performed...
    >
    > Suddenly high level programming is no longer good stuff...
Looks like you didn't understand what he meant in that section if you think he implied Assembly is better than high level languages. It's not that 1 line of code should do exactly 1 unit of work, but rather that if 1 line does 100 units, then 10 lines should do 1000.

Function calls are exempt from that, because everyone understands that a function can do an arbitrary amount of work, and one needs to know what it does (by reading its name and documentation). The cost is considered to be a "function call".

Yes, Assembly also has high proportionality, which is a benefit to it. But it doesn't mean all other languages are bad. These are tradeoffs.

The point is that it's a beneficial property of a language when you have code length that better corresponds to the amount of work it's doing.


> but rather that if 1 line does 100 units, then 10 lines should do 1000.

Why should they? There is no reason for it to be that way.

If anything, 1 line should do 1000, if possible. Some lines will do 1, others 10, others 5, others 100. "Proportionality" adds no value and, quite the opposite, discourages programming at the highest level possible.


> "Proportionality" adds no value

I think it helps improve readability of the code.


Hey HN, I wrote this post. I'm getting the sense that not a lot of commenters are taking the time to read the contents before reacting to the title, so I'll share a little spoiler/TL;DR:

Good boilerplate is boilerplate that usually gets changed over time (and in different ways each time) as we iterate over the code.

It took me a while to come to this realization (and I explain the reasoning and consequences in the post), so I hope it's helpful to other people who might not have found this intuitively obvious.

Also perhaps "terse" or "clever" are not the best words to describe the counterpoint—I welcome edit suggestions.


Code is meant to be read and incidentally to be executed. Code also should be read more often than it's written.

With allegedly idiomatic Go:

    b := 0
    for _, r := range results {
        j, k := r[0], r[1]
        if !k || j % 2 != 0 {
            continue
        }
        b += j
    }
... there are eight lines to read and then the reader is forced to draw the conclusion of what the code does at a higher level:

    // I see ... given an iterable of results pairs, we just sum the
    // first elements that are even for the pairs in which
    // the second element is truthy!
With Python, you can write the exact same code that asks for the same mental gymnastics.

Python:

    b = 0
    for j, k in results:
        if (not k) or (j % 2 != 0):
            continue
        b += j

    # I see ... given an iterable of results pairs, we just sum the
    # first elements that are even for the pairs in which
    # the second element is truthy!
But it also allows for a more direct expression of intent by using a higher level construct:

    b = sum(j                      # ... sum the first elements
            for (j, k) in results  # ... of the results pairs
            if k and j % 2 == 0)   # ... if first elt is even and second elt is truthy
The argument seems to be that "when reading and modifying code, certain lines may contain constructs that will require more attention to be paid to a line" is a very serious problem, the solution for which is to enforce that all lines are equal in that they do relatively little at the language level, which Blub (err.. Go) does.

It's further argued that lowest-level-constructs-available scaffolding that may occasionally need to be added in the moment while tinkering is worth keeping around forever despite any tax it imposes on the creation, the reading, and the ease of modification of the codebase and a great advantage of Blub (err... Go) is that it forces this to happen by only having such constructs in the first place.


As a non-Go/Python user, those two snippets look practically identical to me, although I understand just enough to have written the same comments. Unfortunately, the comments don't tell me what I'd really like to know about the code if I ran across it when debugging:

* What are b, j, and k?

* What's so significant about even numbers - or is 2 a magic number that could just as easily have been 3?

* What can I assume about results? Can it be empty, null, 10 elements, 10 million elements?

With proper naming and a good, brief chunk-level comment, I could tell at a glance whether this chunk is likely to be hosting my bug or not. 5 lines vs. 8 lines don't play much into it.

Actually, this really isn't what I think of when I hear "boilerplate". I think of, for example, the adapter pattern, or having to build up a finicky set of states every time some common task is done.


The first python snippet was meant to be identical to the Go one and the Go one is the one from the article.

> With Python, you can write the exact same code that asks for the same mental gymnastics.

The second Python snippet would idiomatically be written on one line, and that's what the article is railing against, for unclear reasons.

The comments would not appear in source code, they're supposed to illustrate the thought process of the reader as they encounter the source code.
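For concreteness, that idiomatic one-line form would be something like this (the sample data here is invented):

```python
# Sum the first elements that are even, for pairs whose second
# element is truthy, collapsed to a single expression.
results = [(2, True), (3, True), (4, False), (6, True)]
b = sum(j for j, k in results if k and j % 2 == 0)
```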


As a heavy daytime user of Go, I kind of get what you're getting at. You gave as examples two of the most typical cases of Go boilerplatitis: Error handling and for loops. Of that, error handling definitely gets refactored to set states, wrap errors, do cleanup (though less often, since go has defer) etc.

I find myself refactoring Go code a lot in patterns just like you describe (internal functions for error handling), but for me it feels like a throwback to my imperative programming era of the 90s, using Pascal and C. I'm still old enough that it automatically clicks in my mind as clean code, but I also know it's hopelessly verbose.

Error handling doesn't have to be this verbose. In other static languages that eschew exceptions you could do something like this to achieve the same result:

  let result = try!(DoSomething());
This form is clearly more readable once you understand what try! does (which doesn't have a steeper learning curve than the weird if syntax in Go).

Now, can you refactor it without throwing away all that nice terseness? Sure. Let's say we need to wrap the error value:

  let result = try!(DoSomething().map_err(|e| MyError.wrap(e)));
It takes time to get used to, but you state what you're doing instead of how you're doing it.

Go's verbosity is even more cringe-inducing with loops, and this is actually a place where I rarely need any refactoring in my experience. Is your experience different? Can you give an example of how you ended up refactoring loops?


I'm getting the sense that not a lot of commenters are taking the time to read the contents before reacting to the title

As evidenced by none of the discussion hitherto noticing that your Python and Go examples actually perform a slightly different process? Was that deliberate? ;-)

(I rarely use Python and never Go; yet I noticed quite easily that the Python code is summing the selected odd elements, while the Go code is summing the selected even elements.)
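A quick sketch of that difference, with made-up sample data (the article's actual snippets aren't reproduced in this thread):

```python
# Made-up sample data, chosen so the two results differ.
results = [(1, True), (2, True), (4, True), (5, False)]

# What the Python snippet computes: odd first elements whose flag is truthy.
odd_sum = sum(j for (j, k) in results if k and j % 2)

# What the Go snippet reportedly computed: the even ones instead.
even_sum = sum(j for (j, k) in results if k and j % 2 == 0)

print(odd_sum, even_sum)  # 1 vs 6
```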


> As evidenced by none of the discussion hitherto noticing that your Python and Go examples actually perform a slightly different process? Was that deliberate? ;-)

Good catch. I like abstract, declarative and structured code because if nothing else, it requires some level of engagement to write which has a side effect of reducing silly mistakes.


Good catch indeed! Maybe I should keep it as-is for fun. :)

Edit: I changed it.


I would add, code that changes over time but not often enough to warrant a new abstraction.

If you're only using the boilerplate for a new project every few weeks, it's probably overkill to set up some abstraction with variables and templating language.

Granted, I'm using "boilerplate" here in a more general sense than you did in your blog post.


To take an analogy, it is similar to saying that we should use small words when trying to communicate. But once you understand the meaning of the more complex words, it is more fluent to communicate in terms of the longer words which represent the abstract concept more concisely and correctly. Of course it actually holds true to some extent in case of spoken languages, but that is because your intended audience is not expected to understand all the words in the dictionary. But the programmer who is reading the code is expected to understand the whole dictionary of python, including all the keywords, constructs and idioms.

It gave me pause, but ultimately I don't think it is a correct argument.


To take your analogy to its conclusion, we don't want to communicate only using small words, true, but we don't want to communicate only using big words, either.

Ain't nobody (= most people) got time for Hegel :)

I suspect that's why some of the very "expressive", i.e. dense languages, aren't widely adopted.


I always write terse code, not only because it reduces the number of lines (which is important to me) but also because:

- You clearly state the atomicity: this thing happening in 1 long line is a specific thought, intent, desire - it shouldn't really be on multiple lines - it's easier to remove/comment out intent (as it is always self-contained), replace it, make it optional, and reuse it.

- Scrolling the code is damaging to the focus.

- Once you learned the specific methods used to create terse code seeing it repeated all over again you will become more proficient in reading the code all around.

- It's way more fun. It's like poetry in programming - why would we all write dull, boring stories? Language is everything (obligatory Babel-17 mention: https://en.wikipedia.org/wiki/Babel-17)


"By the time I’m done—adding more recovery scenarios, better logging, augmenting errors with more context, handle more edge cases—most of my boilerplate gets a lot more interesting. I did not realize that good boilerplate could be fertile ground for iteration."

This point makes sense, however I think I like to have my cake and eat it too. Java 8's stream constructions can be relatively terse, but if you need to switch back to a for loop, IntelliJ has nifty right-click menu items for doing so. I think the argument made by TFA is at least related to John Carmack's take on inlining vs. abstraction: http://number-none.com/blow/john_carmack_on_inlined_code.htm...


I followed Carmack's suggestion and never looked back. I write simpler code on average now, and I spend less time having to re-read it too because I'm changing the same lines less often. What I lose in terseness in the short-term, I regain in being able to spot key abstractions that are more crucial to long-term program health. It plays well with debugging since there are fewer places where state can act at a distance.

It's very unfashionable advice in some circles, though. It sounds just enough like "go back to writing unstructured copy-paste spaghetti" to trigger some people's kneejerk reactions. But the ground rules of "don't take subroutines, don't jump backwards" actually keep it from getting too far out of control, because they align you towards code that only uses enough flexibility for the task at hand, so there's a lower likelihood of missing something. Very good practice for greenfield code, where you might reflexively add a generalization that you don't use. I can still get better at it.

Edit: And one of the most important things it taught me, is that I can use scope blocks and a comment instead of a function, and it'll be sufficient.


Interesting take. I think everyone can find something in this with which to agree, so I don't think it's a particularly polarizing way of thinking, but I haven't seen terminology like this used to describe it before. I fall into the same psychological trap in C++ that the author falls into with Python: the "if this is over here instead it's more [elegant/right/concise/impressive/less boilerplatey]" feeling. Unfortunately, I find it too tempting to value beauty and elegance (which often mask real weight and complexity) over the in-your-face 'rawness' of simple loops, branches and so on. In Go this is harder to do in general, so you only try for so long before you learn not to fight it. This comment really spoke to me:

> Maybe this is subjective but I posit that the code’s shape looks more like what it is.

Indentation isn't merely stylistic or syntactic; it tells us useful things about code at these lower levels. A more complex code shape should convey more complex logic, and we should want that complexity apparent to us... at least sometimes. It looks great to have a nice, terse one-liner, but visually flattening the possible control flow paths feels really nasty.


I didn't agree with anything. The Python version is leagues better.


All of the examples are far better than the code I normally deal with.

That being said, the most tiresome code is this kind of step-by-step error checking.

  if (! is_numeric($id)) {
    exit('The ID is not a number.');
  }

  $db = open_database();
  if (! $db) {
     exit('There was an error connecting to the database.');
  }

  $query = db_query($db, 'select * from widgets where id = $1', $id);
  if (! $query) {
     $db_error = get_db_error_msg($db);
     exit('There was something wrong with the query: ' . $db_error);
  }
Zzzzzzz. It triples the code size. One third is the normal code path. Two thirds is all the error checking. I'm sorry, I just don't write it:

  $db = open_database();
  $query = db_query($db, 'select * from widgets where id = $1', $id);
I let errors bubble up to some general error handler, which stops the script in its tracks and gives the user a vague and unhelpful error message. It doesn't matter. Even if I gave them a very specific error message, all that they would do is log a ticket, and I would still go into the server log to uncover the line number the error was on, debug, and fix it.

Don't get me wrong. I write JavaScript to constrain user input, with friendly directions and error messages. And I set tight types in the database and add database constraints where I can. But I don't write all this step-by-step checking in the middle layer (PHP, Python, Go, what have you). So if the user somehow bypasses the JavaScript and tries to insert text into an integer column, the database will simply refuse, and the resulting web page will be a very ugly error message, which is what they deserve.

If the database is down, it's going to be ugly, regardless.

I admit that this strategy works okay for non-life-threatening, SQL-backed web apps and such. You may need to do something tighter in your work. But I wonder if there would still be something more elegant than the every-other-line error checking shown in the article.
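The bubble-up strategy described above, sketched in Python (the handler name and message text are illustrative, not from the article):

```python
import logging

# A miniature version of the bubble-up strategy: one catch-all handler,
# a vague message for the user, full details to the server log.
def handle_request(work):
    try:
        return work()
    except Exception:
        logging.exception("unhandled error")  # traceback lands in the log
        return "Something went wrong."        # vague and unhelpful, by design
```

Any failure inside `work` - bad input, database down - takes the same path, which matches the "it's going to be ugly regardless" stance above.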


This is one of the situations where the difference between scripting and "real" development becomes obvious.

Yes, it makes you zzzZZZz if you don't care about errors anyway. However, when developing a large, reliable software project you need to think through the possible error cases and handle them appropriately. Even if that just means logging them correctly and exiting so the error can be easily tracked down later (for example: was it expected/unexpected? What were the parts of the context that can't easily be collected by a runtime?).

And in a large project, it's not that bad. How many places are there where you set up the database connection, an OpenGL context, a network connection, etc.? Just build the abstractions appropriate for your project once. Simply crashing backwards through the function calls with a generic exception and a stack trace is not always good enough.


My dogma around this sort of pattern is similar, except that I don't just use one global error catching statement.

Basically, every module or function or whatever has a client. Sometimes those clients are other parts of your own system, sometimes they're end users outside of your control. Regardless, at significant barriers of abstraction, I try to catch errors, and translate them into new error types/representations that would be relevant or actionable by the intended client of that code, with the original error provided as a member. If you follow this pattern, the logic you write is free of error handling code, error handling code is centralized at abstraction barriers, and at any point you should have few errors that you wouldn't know how to respond to and probably just give up and bubble to the top.
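A minimal Python sketch of the translate-at-the-barrier pattern described above (StorageError and load_profile are made-up names for illustration):

```python
# Sketch of "translate errors at abstraction barriers" as described above.
class StorageError(Exception):
    """Error type meaningful to clients of this (hypothetical) module."""
    def __init__(self, message, cause):
        super().__init__(message)
        self.cause = cause  # the original error, kept as a member

def load_profile(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # Translate the low-level error into one the client can act on,
        # instead of letting a raw OSError bubble all the way to the top.
        raise StorageError(f"could not load profile from {path!r}", err) from err
```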


But I wonder if there would still be something more elegant than the every-other-line error checking shown in the article.

Factor out the error checking into a function:

    onerrexit(is_numeric($id), 'The ID is not a number.');
    onerrexit($db = open_database(), 'There was an error connecting to the database.');
    ...
I admit you still have to explicitly state you're checking, but that's not much overhead, makes the code somewhat clearer, and gets rid of all the if(...). If the error-action is more involved, like your "third paragraph", then you can make an errcheck that takes a function reference for the error-action instead (not sure if that's possible --- haven't used PHP in a long time --- but there's probably a similar method otherwise.)
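For comparison, a runnable Python analog of the onerrexit() idea (only the name is taken from the sketch above; the rest is illustrative):

```python
import sys

# Exit with `message` if `value` is falsy; otherwise pass the value through
# so it can still be assigned inline.
def onerrexit(value, message):
    if not value:
        sys.exit(message)
    return value
```

Because the value is passed through, the parent comment's usage still works, e.g. `db = onerrexit(open_database(), '...')`.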


Sometimes when I'm being very lazy writing C, I define a macro of this kind which doesn't even take any error message argument but just prints line number, textual representation of the code which failed and errno. Huge debugging aid at negligible cost.


Oh, well, Apache and PHP give me this out of the box. So even though there is not a great deal of custom error checking in my code as I said before, I always have enough information in Apache's logs to track down the problem. I guess with C and Go you have to roll your own.


You only write it once, in your Database class. The only thing that changes from call to call is the query string and parameters. Handle all errors gracefully there, throw an error to the calling code, and let it respond to the user with a 500 while the support channel gets alerted with the real error.


The catch-all top level error handler isn't typically a hallmark of good design but I think it's pretty relatable as a quick fix when you need to get something working on a deadline.


Given that software systems are at their heart gate configurations on silicon, and given that those gates need to interact with mechanical and electrical systems, there will always be different classes of recoverable error for which the recovery procedure differs.

So yes, outside of pure functions you will always need special-snowflake error handling.


I have no idea what the Go code does, but the python code is immediately clear. It reads like a line of English.

I can't see how anyone could prefer the Go version.

And I don't know if python allows this (it may not due to the significance of whitespace), but your proportionality problem can be easily resolved by placing each different clause on a different line, like this:

  sum((j if j % 2 else 0) 
    for (j, k) 
    in results 
    if k)
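Python does allow this: inside parentheses or brackets, line breaks are insignificant, so the comprehension can be split freely. A runnable version with made-up data:

```python
# Made-up sample data; pairs of (number, keep-flag).
results = [(1, True), (2, True), (3, False), (5, True)]

b = sum((j if j % 2 else 0)
        for (j, k)
        in results
        if k)

print(b)  # 1 + 0 + 5 == 6
```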


>if j % 2

This type conversion madness needs to stop. Integer 1 and Boolean true mean different things.


python does this to be consistent with C, not the other way around.


Java was a lot more concerned with being consistent with C, but in Java where a boolean is called for you have to actually supply a boolean.

It's one of those rare features that I wish other languages would adopt from Java. Possibly the only one. :/


i don't really get the problem.

especially in the case of modulo, it often makes sense to just treat the result like a boolean, because it practically says "divisible yes/no".

there's nothing i remotely like about java and it's great that python isn't trying to be like java, but those are just opinions.

is your complaint about the implicit typecast or about the fact that integer can be cast to boolean at all? if it's the former then python is just less pointlessly verbose than java. the latter may make sense to complain about.
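The two spellings under debate, side by side (sample data made up):

```python
# Both select the odd numbers; only the predicate style differs.
nums = [1, 2, 3, 4, 5]

implicit = [j for j in nums if j % 2]        # relies on 0 being falsy, 1 truthy
explicit = [j for j in nums if j % 2 != 0]   # Java-style explicit comparison

assert implicit == explicit == [1, 3, 5]
```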


this is what happens when people who "don't need math to program" write code.

if you want to sum over even numbers, just do:

sum([j*2 for j in range(0, end_range/2)])

and yes, brackets supersede python's "whitespace is syntax". if you need line breaks where, ordinarily, line breaks are not allowed, just use brackets.


What the code in the article is doing is:

Given an iterable of pairs, sum the first elements of the pairs for which the first element of the pair is odd and the second element is truthy.

What the code you're suggesting does is, to be polite, unrelated.


and no sane person would write it that way. whatever it is that it's trying to do.

i agree that i misinterpreted the code's intention.


That's silly!

Programming is more than that.

This is best: sum(range(0, end_range, 2))

;)


i don't know if that's better. but both solutions are a lot better than a statement that contains about half a dozen implied negations.


Why don't you think it is better? Easier to understand. I was mostly joking, but I think it's a little unfair to imply people who are not great at math are bad programmers.


they are certainly bad computer scientists, since it's a subdiscipline of mathematics.

can they be great artisans? sure.

why do i think it's not better? i said i'm not sure. i think the two options are about equal in readability, up to taste. yours is probably more efficient in terms of cpu cycles.


Is yours even correct, though? I can't prove it...


you can prove it by opening a terminal, typing "python", hitting enter, then copying my line and verifying the result.

what's your sticking point?


sum([j*2 for j in range(0, 11/2)]) ?

It's not right. It doesn't give you the value 10, as you'd "expect".


0

2

2+4 = 6

2+4+6 = 12

if you sum even numbers, starting from 0, "10" is not a possible result. i don't know what you're trying to do there.


I mean that the list [j*2 for j in range(0, 11/2)] does not give you the item "10" in the list.

So it skips 10...


yea, because 5 is the end of the range.

if you do sum(range(0,10,2)), 10 isn't reached either.

why are we having this discussion?


Yeah, but if you write "end_range" I'd expect that to be the end of the range. So your implementation should actually be something like:

[j*2 for j in range(0, (11+1)/2)]

Your code is wrong.

sum(range(0,11,2)) works as expected, [j*2 for j in range(0, (11)/2)] does not.
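Running both forms settles it (using // so the example also works under Python 3, where 11/2 is 5.5):

```python
end_range = 11

# The range-with-step version: even numbers below end_range.
a = sum(range(0, end_range, 2))                   # 0+2+4+6+8+10

# The comprehension variant from upthread (// keeps it an int in Python 3):
# range(0, 5) stops before 5, so 5*2 == 10 is never produced.
b = sum(j * 2 for j in range(0, end_range // 2))  # 0+2+4+6+8

print(a, b)
```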


congrats.


What?

You seem like you have issues.


"Man inadvertently discovers reason for monads and strong typing while writing Medium post"


Yes. By using higher level constructs, it is much easier to see what is going on in the code. If there are too many lines, it is much more difficult to understand a piece of code as a whole.


I iterate code like he does, then when it gets to where it's doing what I want it to do, I refactor it to make it more readable even if that reduces iterability. After the initial phase of writing the code, going back and forth, etc., the code is going to be read many more times than edited. If it needs to be edited again in the future, the future editor can refactor the nice-looking code into whatever shape works; the important thing is that the future editor understood my code in the first place. And I hope that future editor, when done editing, will refactor the code into being easily read for the next person, too.


I assumed the title was about boilerplate in terms of some skeleton or framework you download. Turns out this is different from what you're talking about (verbosity). It's interesting that even in programming jargon we have two uses for the same term. They are not totally separate, though. The repetitive DRY violations eventually get abstracted into a framework, at which point the boilerplate becomes OK. And if it's still too verbose, then another layer of boilerplate may arise to encompass it.


Well, when you have several projects using the same framework/skeleton, and the latter is downloaded and copied over and over again in every project - it becomes verbose at the highest level and can be called boilerplate.


The author doesn't understand what boilerplate means.


>> Any time we write similar-looking code over and over, that’s considered copy-pasta.

is what i'd call it.


Meaningful names are critical for understanding. Without good names, it's incomprehensible, regardless of the complexity.


Exactly.

If you abstract things but don't give meaningful enough names, you lose everything (because your callers are still forced to look up the definitions of your abstractions, and now this definition can be hard to search for).


    b = sum((j if j % 2 else 0) for (j, k) in results if k)
This definitely is borderline unreadable but that isn't necessarily because it does a lot of things. It is because it does a lot of things without order and with a lot of line noise. Compare it with

    sum [j | (j, True) <- results, odd j]
or

    filter snd >>> map fst >>> filter odd >>> sum
The bigger problem is that this carries a bool around. If it is only used for filtering, filter it beforehand.

The "Opportunity for correction" is an interesting idea, but the author's issues mostly come from a lack of referential transparency. In a pure language, refactoring wouldn't be an issue, so one could argue this is a symptom of too little abstraction, not too much.
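The filter-it-beforehand suggestion, in Python rather than Haskell (data made up for illustration):

```python
# Made-up data: (number, keep-flag) pairs, as in the thread's example.
results = [(1, True), (2, True), (3, False), (5, True)]

# Strip the bool out in one early step...
values = [j for (j, k) in results if k]

# ...so the rest of the pipeline never sees it.
total = sum(j for j in values if j % 2)

print(total)  # 1 + 5 == 6
```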


If you named your variables, all three would be perfectly readable, I bet.


Technically the third has no variables...

Slightly less wise-ass, I couldn't come up with a use case for this code that'd give sane variable names. Third attempt:

    sum $ do 
        (element, True) <- results
        guard (odd element)
        return element
That's pretty far from imperative languages, though, which I kind of tried to avoid.


The title is significantly more ambitious than the content. I think "Go: I have found my Blub" would be more accurate.


So expressiveness of a language is now seen as a bad thing?


Among golang developers, yes.


From the maintenance perspective I would prefer the more verbose code in most cases, because it is easier to debug. Moving the verbose code block to a method or function for reuse will fulfill the DRY principle as well. Then you can decide during debugging to step over or step into the method. Doing too much in one step would also contradict another rule: the single responsibility principle.


A counterpoint is that more declarative code can often remove the need for tedious step by step debugging because the behavior of the code is more transparent. I do most of my work in Haskell and find I don't typically need to debug it like I would for a less declarative language because the meaning of the code is much more clear.


I come from a C# background and have little experience with Haskell, so YMMV. C# got many new features over the years, like LINQ or lambda expressions. My experience is that you get the most out of them if they're used in moderation. Like splitting complex LINQ into smaller parts and introducing intermediate results instead of one big query. They are also easier to test this way. Another important aspect of this topic is whether you are maintaining your own code or code others have written. I used to maintain code written by another team, which only did feature development and barely any code maintenance. While doing only code maintenance on code written by others for a longer period, I gained a new perspective on what makes good code in the long term.


You can write code like Hemingway (simple) or like Shakespeare (dense). Hard to say whether one is better than the other without some attempt to quantify and measure things. Seems more like just a preference.

In regard to boilerplate the author brings up an interesting point that with a good language, the boilerplate can effectively disappear once all the code has been written. If people are worried that boilerplate violates DRY, I think they worry too much.

The best way I've found to determine whether a codebase is good or not is to take someone who's never worked with the code before and ask them to fix a bug or make an enhancement. If that's an excruciating task but would have been easy in a greenfield project, the code sucks.


https://vimeo.com/9270320 (Greg Wilson - What We Actually Know About Software Development, and Why We Believe It’s True).

It seems we have scientific data showing that the number of lines of code one can review per day doesn't depend on the programming language. This means one would spend a lot more time reviewing a given piece of C++ code if one were looking at a manually-rewritten-to-assembly version of it instead.

This is a very good argument supporting programming at the highest possible level of abstraction, which is the exact opposite of boilerplate and verbosity.

So please let's not rationalize/Stockholm-syndrome over the limitations of any programming language; boilerplate and verbosity are always bad.


I think most of the complaints about the python version go away when you do something like this....

    is_even = lambda n: n % 2 == 0
    score_result = lambda j, k: j
    is_interesting_result = lambda j, k: is_even(j) and k
 
    a = some_variable + 42
    b = sum(map(score_result, filter(is_interesting_result, results)))

or

    b = sum(j for (j, k) in results if is_even(j) and k)
List comprehensions only become difficult when you start doing conditional execution within the result side. Use if for filtering or turn your complex operation into a meaningfully named function/lambda expression.


The first one is horrible and super un-Pythonic (it even violates PEP8[1]). But perhaps even more important, it doesn't even work, since map passes each element as a single argument, while those lambdas expect two.

Using functional programming paradigms like this is often advised against in Python.

[1] https://www.python.org/dev/peps/pep-0008/#programming-recomm... (search for "Always use a def statement")


> The first one is horrible and super un-pythonic (it even violates PEP8[1]).

I'm not really concerned with PEP. I'm concerned with whether the idea being expressed is cleaner than the original. My first iteration was cleaner than the original comprehension (despite the comprehension being PEP8-compliant). It also let me see what I was really doing, and I was able to turn my map and filter into a much nicer list comprehension.

Also, pointing out lambdas instead of defs is silly. It was done for an example. Most of the time you'll be operating on data structures with built in functions or supporting functions you will have written making it cleaner to use the map and filter paradigms.

    money = sum(user.get_payed_balance() for user in users if
                                           user.has_paid_bills() and 
                                           user.is_still_subscribed())
vs.

    clients = sum(map(User.get_payed_balance,
                  filter(User.is_still_subscribed,
                      filter(User.has_paid_bills, users))))
I much prefer the filters to the list comprehension in this case. I'll think "user's money for user in users if user has paid their bill and is subscribed". I'm going to think "sum the get_payed_balance for every user who is_still_subscribed and who has_paid_bills" for the functional implementation. If I had complex tuple arrangements (like in the OP's post), then I agree LCs will be the better way forward. If you're dealing with objects or functional-fitting problems, I prefer map and filter.

> But perhaps even more important, it doesn't even work since a map-function can only take one argument (not two).

The concept still stands. Just use `from itertools import starmap`. starmap is like map, except it unpacks each element into the function's arguments with the * operator.

    >>> a = lambda a, b: 10
    >>> list(map(a, ((1, 2), (3, 4))))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: <lambda>() missing 1 required positional argument: 'b'
    >>> list(starmap(a, ((1, 2), (3, 4))))
    [10, 10]
> Using functional programming paradigms like this often advised against in Python.

I'm not concerned with whether it's "advised against". I'm concerned with a) will someone be able to understand this better than the implementation that currently exists, and b) will I be able to come back in 10 years and tell what this is doing if need be.

You should check out this talk [1] before going around trying to use PEP8's inconsequential clauses to call something "horrible" code.

[1] - https://www.youtube.com/watch?v=wf-BqAjZb8M


It's not true for all languages, but I actually like Erlang's boilerplate. Almost every OTP project has the same structure, and a very similar set of primitives that drive the setup and structure of the application. It can be annoying to set up and get right, but it's a common set of structures and patterns which makes it much easier to dive into a large project.

So for me it is not always bad, just usually, with some specific exemptions that I think are done well.


More bad excuses for not having exceptions in your language ...


I find the number of mistakes I make scales at least linearly with the number of lines I write. Boilerplate is bad because it forces me to write more.


>In Python (and many other languages), it’s very easy to unintentionally hide tons of work in a single line of built-in syntax. Some lines do almost no work, other lines do a moderate amount of work, other lines do what I’d describe is tens of lines worth of work—a lot of work. It varies a lot, even in beginner code. In Go, I’ve found that this variability is much more diminished.

I call BS. Consider these 2 Go lines:

  heck_if_I_know_what_this_does()
  heck_if_I_know_what_this_does_either()
Just because Go has less powerful primitives, it doesn't mean that you don't use opaque code, libs, etc. all the time. Sure, you can look into what each function does -- but then again, you can also immediately tell that a list comprehension in python is doing at least O(N) work compared to a simple addition, without even having to open any other source file.

>Is this good? I believe it is. I believe this code is more proportional to the complexity of the work that it’s doing.

We invented programming languages on top of machine code so that our code can be LESS proportional to the "complexity of the work that it’s doing".

>When reading proportional code, it’s easier to notice where the interesting bits are.

Not really. It's the inverse: the terse code only keeps the interesting bits. The "proportional code" gives you all the non-interesting, and getting in the way, mechanics.

>When a friend was reading through the source code for ssh-chat, he was surprised that I implemented my own Set type. I explained that while Go doesn’t have a built-in Set, it’s just a few lines of boilerplate to make your own on top of a map. In fact, I noticed that my version of Set evolved to be fairly specific to how it was being used. In retrospect, I’m glad that I was tweaking my own implementation iteratively rather than spending time working around whichever limitations a generic library might have had.

Pray tell, what "limitations" would a generic Set have?

And why do these kinds of posts remind one of Stockholm syndrome?


> Just because Go has less powerful primitives, it doesn't mean that you don't use opaque code, libs, etc. all the time.

True, but that's not what the article's arguing.

> We invented programming languages on top of machine code so that our code can be LESS proportional to the "complexity of the work that it’s doing".

No, we invented programming languages so that our code can be smaller and less arcane, not less proportional. The actual point of the article, which I believe you missed, is that if we focus on 'cleverly' compressing large amounts of logic into dense nuggets of code, it interferes with our ability to appropriately factor the code so that it produces the desired behaviour in the simplest and most readable way.

The article also states (and I agree) that if your code is wildly variable in the amount of semantic content per line of text, it's much harder to properly grok and (more importantly) to modify what's going on.


> True, but that's not what the article's arguing.

Reading it twice, I think this is a very veiled argument against generics in Go on the grounds that well, uh... proportionality?

I agree with the post above you. I do not like the premise of this or of Go. I see it as a deliberate attempt to walk away from empowering programmers. It's not solid technology, it's the technical manifestation of business decisions.


>No, we invented programming languages so that our code can be smaller and less arcane, not less proportional

Smaller also means less proportional to the work that it's doing, since the abstraction, sugar and techniques we use to make code smaller do not yield the same compression ratio from the underlying instructions in all cases.

So, smaller = less proportional -- unless you suggest we should also strive to artificially constrain how succinct we make our code so that there's a constant ratio everywhere.

That is, so that 10 lines of code in a language always correspond (or at least always strive to correspond) to 10×n lines of machine code. I don't see how that's feasible -- one abstraction could compress the underlying machine code 100:1 on a single line, while another yields a mere 5:1 or 10:1.

But even if we could achieve that, I don't see it being even useful. Having higher-level code strive to be proportional to machine instructions doesn't necessarily tell us much about the cost of such code. It might tell us that it's complex and slower than another piece, but that doesn't mean much. What matters is the role it plays in the overall runtime of our program. Just because something A is 5 times bigger than something else B doesn't mean A needs more optimizing. A could be just fine as it is, e.g. if A is called rarely and B is in some tight loop.

Gauging code by "code proportionality" (the only semi-legitimate use I could think of such a feature) is bogus. We should always profile.

>The article also states (and I agree) that if your code is wildly variable in the amount of semantic content per line of text, it's much harder to properly grok and (more importantly) to modify what's going on.

On the contrary. The part of "what's going on" that matters when it comes to modification is the high-level stuff -- which terser code shows better, as it doesn't hide it with boilerplate.

In the end, that point seems both trivial and nonsensical: yes, boilerplate is easier to modify. That's because it doesn't matter much -- it's the stuff that you want to do, hidden inside the boilerplate, that matters.

e.g. I can write a loop to filter 100 numbers in tons of ways. And I can always modify it from one to one of the other ways.

But something like:

  value = filter(my_array, (val) => val > 5)
Just does what it says it does, and since it encapsulates the intention 100%, there's not much room for wiggling.


In general, we're less concerned about "the work that it's doing" in the sense of raw processor instructions, and more concerned about the semantic complexity for the programmer. So if a line says "set A to true if B or C are true", that's simple. If it says "filter array A to only include elements where this inlined lambda function returns 5 or more, and then return the results with an even index" it's a bit harder to figure out the intent behind it.


>If it says "filter array A to only include elements where this inlined lambda function returns 5 or more, and then return the results with an even index" it's a bit harder to figure out the intent behind it.

I'd argue the contrary though.

That the "semantic complexity" is less when you only keep intent-code, than when you spread it over 20 lines of boilerplate which you have to visually parse to get to the point.

Your examples are not analogous, because they do different things.

Let's make them implement the same example, Go/C/etc style and "terse" style:

  array_2 := make([]int64, 0)
  over_five_index := 0
  for _, val := range my_array {
    if val >= 5 {
      if over_five_index % 2 == 0 {
        array_2 = append(array_2, val)
      }
      over_five_index += 1
    }
  }
vs:

  array_2 = my_array
              .filter((val) => val >= 5)
              .filter((_, i) => i % 2 == 0)
I don't see how there's less "semantic complexity" in the former for the programmer to parse. There's just less per line, but more overall -- the reader now has to reconstruct the full intention from all the individual lines.

As for the first example, it would be:

  a = b || c
in both approaches.


I think you have discovered and defined the "Stockholm syndrome for programming".

Indeed the author is suffering from it.


When I see the error-handling examples, I think that the problem with this kind of "boilerplate" is that it's easily forgotten. This results in errors being eaten and that's bad. But I agree with the main point that "proportionality" of code is a good thing.
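A minimal Go sketch of the "forgotten boilerplate" failure mode (the `parsePort` helper is hypothetical): nothing in the language stops you from discarding the error, at which point the program silently carries on with a zero value.

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort parses a port number from a string.
// Hypothetical helper for illustration.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("bad port %q: %w", s, err)
	}
	return n, nil
}

func main() {
	// Correct: the boilerplate is present, the error is handled.
	if _, err := parsePort("not-a-number"); err != nil {
		fmt.Println("handled:", err)
	}

	// Forgotten: the error is assigned to _ and silently eaten.
	// Execution continues with the zero value, port == 0.
	port, _ := parsePort("not-a-number")
	fmt.Println("port:", port) // prints "port: 0"
}
```

The compiler complains about an entirely unused `err`, but the `_` escape hatch (or reusing an `err` variable that's checked elsewhere) is all it takes for an error to vanish.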


Yes but it is avoidable if you use a Lisp. Every part of the code can be compacted into a smaller form. I know when I write in Clojure it's always the most concise code I've ever written, with minimal effort too.


This is true in Elixir as well, which has hygienic macros very much like a Lisp. There's a balance, of course, but it really is quite nice to be able to just rewrite the AST as you need. And one must always remember the adage data > functions > macros.


Call me old-fashioned, but I remember when encapsulation was lauded as a virtue because it insulated the programmer from whatever programming model happened to be popular that year.


I bet Fortran has an even lower 'proportional' variableness.



