Hacker News

I'm not sure why you got downvoted for this as I think it is a fair assessment. I think it assumes that the current state of development will continue on its current path, though. The increasing complexity of applications demands that we come up with better models for development that allow us to think about problems more abstractly. One could make the argument that this is only truly possible with formally verified applications.


Historically, we SOTSOG (stand on the shoulders of giants) - using abstractions, so complexity is pushed down to a lower layer (language, libraries, OS). I think this will continue to be how we cope with complexity - one specific case at a time, rather than with a new general way to address all complexity issues (that sounds like magic).

If the rate of complexity increase itself increases (i.e. accelerates), then we might need a different methodology altogether, such as formal verification, as you suggest. However, we have had the need for better ways to handle complexity since before Fred Brooks's Mythical Man-Month (IBM 360), and we've had the tool of formal verification techniques for a long time. It hasn't happened yet, suggesting there is some problem with it.

In my admittedly very limited experience of formal verification, it makes problems more complex, rather than less (this is why Dijkstra said that you can't prove software the way it is currently written - you have to write it in a simpler way, with clearer interactions between modules, and so on). Its benefit is that you know it's 100% correct (assuming your assumptions, your understanding of the real-world problem, your understanding of interactions with other components, your proof, and any proof automation tools you use are all correct). OK: it gives you a higher probability that it is correct.

Ah, this looks like the opposite cause and effect from your suggestion (that formal methods are a way to handle complexity), and instead that trying to formally verify applications forces us to write them in a simpler way, which is... simpler. Or, we could just write them in a simpler way, and not worry about the formal verification part. It still leaves the problem: what is this way of writing apps in a simpler way? Isn't the only way to do this by dividing it into abstractions with well-defined behaviours and clear interfaces? In other words: Historically, we SOTSOG - using abstractions, so complexity is pushed down to a lower layer (language, libraries, OS).

PS: I would love it if there was a new general way to approach complexity. But as Fred said, "no silver bullet".

"better models for development that allow us to think about problems more abstractly"

That's possible, I guess; but I don't think we lack the ability to think about problems abstractly. It's finding the right abstraction that is tricky, and this amounts to a search through the space of abstractions. That sounds straightforward to automate, except that the search space increases exponentially with the complexity of the problem. So we need a heuristic to guide that search.

So far, the best heuristic is us. If we can automate this guide through the search space - the ability to make connections, have intuitions and insights - it would be a huge leap towards Strong AI. My Masters was an approach to this, and the start of my PhD. Quite possible, I think, but hard.
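To make the search framing concrete, here is a minimal sketch (in Python) of heuristic-guided best-first search. The toy states and the arithmetic goal are my own stand-ins for "candidate abstractions"; the point is only that the `heuristic` argument is exactly the part we currently supply by human intuition, and everything else is mechanical.

```python
import heapq

def best_first_search(start, neighbors, is_goal, heuristic):
    """Greedy best-first search: always expand the candidate the
    heuristic scores lowest. The heuristic stands in for the human
    intuition discussed above; a poor one degrades the search toward
    brute-force enumeration of the whole space."""
    frontier = [(heuristic(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None  # space exhausted without finding a goal

# Toy usage: reach 37 from 1 using +1 / *2 moves, guided by distance.
found = best_first_search(
    1,
    neighbors=lambda x: [x + 1, x * 2],
    is_goal=lambda x: x == 37,
    heuristic=lambda x: abs(37 - x),
)
```

With a good heuristic the frontier stays small; with `heuristic=lambda x: 0` the same code visits candidates in effectively arbitrary order, which is the exponential blow-up described above.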


This is a well-put argument, and I won't attempt to go into all of your points, other than to say that I pretty much agree, and I step back somewhat from my argument that formal analysis necessarily helps abstraction.

I guess my point was that in order for larger and larger abstractions to be implemented, there needs to be trust in those abstractions. One problem you point out is that the right abstractions can be difficult to come up with, which will not be improved by formal analysis. The other problem is that as you build greater and greater abstractions on top of each other, "holes" in those abstractions can cause odd problems in the system. Here, formal verification can at least provide the trust that the abstraction works exactly how you expect it to, so you can implement your system on top of it without concern for odd effects.

One analogy could be with the computer hardware on which software is implemented. It goes without saying that while software is often expected to have flaws, the hardware is essentially assumed to be flawless for all intents and purposes. This isn't a perfect analogy because you have firmware, etc., but hopefully it illustrates my point. You don't worry about the processor not correctly implementing

  add ax, bx
It is taken as a given. At almost no point in software development today can you make that assumption about any code.

Maybe formal analysis isn't the solution to this particular problem. I do think it is important to research the topic as fully as we can to figure out if it is or isn't.


"It is by logic we prove, it is by intuition that we invent" - Henri Poincaré

I emphasized the invention, but you're quite right that we also need the proving. I agree with your hardware example (things like the Pentium bug are rare...), but I think we similarly rely on other layers (OS, languages, libraries), though I concede with eroding confidence as we rise.

For hardware, it's also that we have more trust in the common pathways (Dijkstra wouldn't like it - it all should be perfect. I can relate to this, but that's not how it is - today, anyway). Popular hardware is used extensively, and very importantly, I think it tends to be designed to be pretty flat, in that almost all of it gets exercised by almost all uses (i.e. there are few rare pathways). So flaws will show up quickly.

We also don't worry about the language not correctly implementing

   a+b
So I think you can make that assumption about code, if the code is well exercised (e.g. a fundamental part of the language runtime).
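The "well exercised" idea can be made slightly more concrete with randomized property checking, the informal cousin of formal proof: instead of proving `a+b` correct, we hammer it with varied inputs and check properties we expect of addition. This sketch is purely illustrative (the function name `exercise_add` and the chosen properties are mine, and in Python these checks are trivially true); the point is the mechanism by which heavy, varied use builds trust.

```python
import random

def exercise_add(trials: int = 100_000) -> int:
    """Randomly exercise `a + b` and check algebraic properties we
    expect of addition. Each trial is one more 'use' of the common
    pathway; flaws in a flat, heavily-shared pathway would surface
    quickly under this kind of load."""
    for _ in range(trials):
        a = random.randint(-2**31, 2**31 - 1)
        b = random.randint(-2**31, 2**31 - 1)
        assert a + b == b + a      # commutativity
        assert (a + b) - b == a    # subtraction inverts addition
        assert a + 0 == a          # additive identity
    return trials  # number of trials completed without a failure
```

This gives confidence rather than certainty, which matches the earlier point: exercise raises the probability of correctness; only proof (with all its own assumptions) claims more.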

Concessions: Dr. Gosling apparently did write some (informal) proofs for Java, but it was in the byte-code verifier, not for the typical language stuff. But I've heard that Intel does some formal verification :-).

I agree that formal analysis is a worthwhile endeavour. I just wish they would do more work on interesting, useful problems. It used to be that computer scientists would invent something incredibly cool (like Boyer-Moore substring matching, or arithmetic coding), and then also prove it (well, really, prove some specific qualities about it). But these days, there seems to be a lot of proving things just because they can be proven - not because they are worth proving. It's quite... insular.

They focus on proof, not invention.
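Since Boyer-Moore comes up as the example of invention-then-proof: here is a short sketch of the Horspool simplification of it (bad-character shifts only), in Python. This is an illustrative implementation of the simplified variant, not the original algorithm as published; the function name is mine.

```python
def horspool_find(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text,
    or -1 if absent. Boyer-Moore-Horspool keeps only the bad-character
    shift table, dropping Boyer-Moore's good-suffix rule."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # For each char in pattern (except the last), how far we may slide
    # the window when that char is under the window's last position.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i + m <= n:
        # Compare right-to-left, as Boyer-Moore does.
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1
        if j < 0:
            return i  # full match at window start
        # Slide by the shift for the char under the window's last slot.
        i += shift.get(text[i + m - 1], m)
    return -1
```

The sublinear-on-average behaviour (skipping over chunks of text without examining every character) is the "incredibly cool" part; the proof of correctness and of the shift table's safety came with it, which is the invention-plus-proof pairing being praised above.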



