It doesn't make economic sense because VMware and Microsoft don't have to pay out of pocket when a box in a datacenter is compromised and data is leaked because of bugs in their hypervisors and kernels that are only possible because of C.
This is like hiring somebody to build a bridge, having it collapse due to incompetence and kill hundreds of people, and then watching nobody suffer any consequences. "Eh, bridges fall, what can you do."
Make people pay for buffer overflows and C is gone tomorrow.
> This is like hiring somebody to build a bridge, having it collapse due to incompetence and kill hundreds of people, and then watching nobody suffer any consequences. "Eh, bridges fall, what can you do."
You can't really use these analogies in a security context, because real world engineers aren't liable if their designs fail due to sabotage, overloading or other unexpected conditions outside the standards they designed to.
Software is different from real world 'analog' engineering in that a single very small mistake can bring down the whole thing. Most civil engineering designs would be riddled with small mistakes like that, but the overall system has (multiple layers of) factors of safety built in and the ability of other parts to compensate. You can see how concrete buildings in war zones (even places with shoddy engineering standards to begin with) can take large fractions of their structure being shot away before failing. A software vulnerability, though, is like finding the one perfect place where a single bullet can bring the whole thing down.
Good point - some rights can't be signed away by contracts and license agreements, no matter what those say. For example, automobile manufacturers can't hand you a sales agreement that exempts them from (a whole list of) responsibilities, because governments recognize a public interest and pass laws insisting on safety standards, availability of parts for 20 years, etc. Governments should now catch up with the modern world and recognize the public interest in computer infrastructure safety, by ensuring that IT can't license away its responsibility to take all reasonable measures to reduce risk (thus opening up liability as well). At some point, using Rust - in many cases - even if it's more expensive, will be such a reasonable safety measure enforced by tort law or statute. Not quite yet, perhaps, because the language is young. But soon.
There will always be software bugs. What I'm trying to say is that people should be made liable for avoidable bugs due to incompetence or misplaced personal preferences ("Real programmers write in C!11").
There are times when C is unavoidable, unfortunately, but when somebody chooses a technology regardless of better, safer alternatives, "just because", they should be made liable, absolutely.
There are safer, better alternatives for almost everything we do, but economics dictates that we end up with a compromise. Rust would be just another compromise: a slightly different trade-off, no huge difference, and potentially a huge cost.
Silver bullets in software development do not exist; Rust is no exception, and the irrational hyping of Rust as a silver bullet actually has the opposite effect.
If Rust is that much better at all aspects of software development (and not just at preventing one class of bugs), then it will find mainstream adoption. But you don't get that effect by ramming it down other people's throats, you get that effect by showing it in practice. And this is where Rust - at least so far - is underwhelming.
Also, and this is another point of irritation for me, the Rust community makes it seem as if theirs is the only language that will avoid this kind of bug, which is far from true; there are other platforms/languages with far wider adoption that have these traits as well.
I'm not promoting Rust, nor have I actually written anything in it. I do think though it's a step in the right direction.
> But you don't get that effect by ramming it down other people's throats, you get that effect by showing it in practice.
I disagree. I've seen many C programmers who think that C is the be-all and end-all of programming languages. Those people will not be convinced by showing them another technology that is better in practice. The fact that it's actually unbelievably hard to write correct software in C is somehow a point of pride for many people, though I speculate that few of them can actually follow through.
Of course there will always be logic bugs in software, even with formal systems and whatnot (i.e. bugs in the specifications). But memory bugs and the resulting security exploits could in many cases be a thing of the past already.
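To make that concrete, here's a minimal sketch (variable names are my own) of the classic out-of-bounds read that is a staple exploit vector in C but simply doesn't get past safe Rust:

```rust
fn main() {
    let buf = [0u8; 4];
    let i = 7; // past the end of the buffer, like a C off-by-N indexing bug

    // Safe Rust bounds-checks every access: `.get()` returns None
    // instead of silently reading whatever sits next to `buf` in memory.
    assert_eq!(buf.get(i), None);

    // A direct `buf[i]` would panic at runtime rather than corrupt
    // memory, so the bug becomes a crash, not an exploit.
    println!("out-of-bounds read rejected");
}
```

The difference in failure mode is the whole point: the same mistake that leaks memory contents in C turns into a recoverable `None` or a deterministic panic here.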
Many industries have similar regulations, e.g. seatbelts in cars. If somebody dies because of a faulty or non-existent seatbelt, there will be consequences. If somebody dies because they were talking on the phone and the car didn't prevent it, well, you can't control everything.
Is this a buffer overflow? Why was this written in C? No good answer - pay up.
> There are safer, better alternatives for almost everything we do, but economics dictates that we end up with a compromise. Rust would be just another compromise: a slightly different trade-off, no huge difference, and potentially a huge cost.
I think the argument is that the economics need to change. People need to stop being irresponsible - we need liability when someone is negligent, just like in any other engineering discipline.
> Silver bullets in software development do not exist; Rust is no exception, and the irrational hyping of Rust as a silver bullet actually has the opposite effect.
I keep seeing this, and yet I have not once seen anyone call it a silver bullet.
> Also, and this is another point of irritation for me, the Rust community makes it seem as if theirs is the only language that will avoid this kind of bug, which is far from true; there are other platforms/languages with far wider adoption that have these traits as well.
Show me another language with no garbage collection, memory safety, C/C++ level performance that has been anything other than academic.
> I think the argument is that the economics need to change. People need to stop being irresponsible - we need liability when someone is negligent, just like in any other engineering discipline.
I totally agree with that and have been a long time proponent of liability for damage caused by software bugs.
But that would have to be all bugs, not just some classes of bugs.
> Show me another language with no garbage collection, memory safety, C/C++ level performance that has been anything other than academic.
D.
And by the way, rust is only 'memory safe' as long as you don't disable the safety mechanisms, so I think 'memory safe by default' would be a better way to describe it.
> I totally agree with that and have been a long time proponent of liability for damage caused by software bugs.
But that would have to be all bugs, not just some classes of bugs.
Why? That goes against everything we've done to categorize bugs - we do not treat all bugs the same, and we would not consider all bugs to be due to negligence.
> D.
Only with a garbage collector or the '@safe' annotation. Rust has the opposite - a safe language by default.
> And by the way, rust is only 'memory safe' as long as you don't disable the safety mechanisms, so I think 'memory safe by default' would be a better way to describe it.
That's fine, memory safe by default is an acceptable way to refer to it. It's worth noting that I have written thousands of lines of rust code and never published code with a single line of unsafe. None of my projects have required it. So 'by default' is pretty powerful, since I've never ever needed to opt out.
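For what it's worth, the explicitness of the opt-out is the point. A minimal sketch of what 'memory safe by default' means in practice - the unchecked access has to be fenced inside a greppable `unsafe` block:

```rust
fn main() {
    let v = vec![10u8, 20, 30];

    // Default: every access is bounds-checked; out-of-range reads
    // yield None rather than touching adjacent memory.
    assert_eq!(v.get(99), None);

    // Opting out is explicit and auditable: `get_unchecked` skips the
    // bounds check, and the `unsafe` block marks exactly where the
    // compiler's guarantees now rest on the programmer instead.
    let first = unsafe { *v.get_unchecked(0) };
    assert_eq!(first, 10);
}
```

So auditing a codebase for memory-safety risk reduces to reviewing the `unsafe` blocks, rather than every pointer and index in the program.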
> Why? That goes against everything we've done to categorize bugs - we do not treat all bugs the same, and we would not consider all bugs to be due to negligence.
That would be something you could only know by evaluating this on a bug-by-bug basis. The important thing to realize is that to an end user it simply does not matter what class of bug caused their data loss, loss of privacy or loss of assets.
My point is that not all bugs lead to data loss or loss of privacy. But yes, I agree.
That said, I feel that we're now talking about memory safety vs semantic safety.
We can certainly prove a program is entirely memory safe. Rust's type system has been proven sound, so the remaining work is to prove the unsafe code safe (actively being researched).
There is no way to prove all semantics of a program (Rice's theorem). Therefore I would argue that a bug due to a semantic issue is not necessarily negligence, whereas we could easily see memory safety issues from using C as a case of negligence.
But in terms of liability they would likely both fall into the same bucket.
Negligence in the legal sense of the word revolves around care. If someone is maintaining some codebase and causes a memory safety related bug you would have to look at that bug in isolation and what the person did to avoid it. Saying 'you should have rewritten this in rust in order to avoid liability' is not a standard of care that anybody will ever be held to.
So the ideal as you perceive it is so far from realistic that I do not believe pursuing that road is fruitful.
What we can do is class bugs (or rather, the results of bugs: the symptoms that cause quantifiable losses) in general as a source of (limited) liability. This will put out of business those parties that are unable to get their house in order, without mandating one technology over another (which would definitely be seen as giving certain parties a competitive edge, something I don't think will ever be done).
So that's why I believe that solution is the better one.
But in a perfect world your solution would - obviously - be the better one; unfortunately, we live in this one.
> Saying 'you should have rewritten this in rust in order to avoid liability' is not a standard of care that anybody will ever be held to.
Why is 'you should not have exposed C code to the internet' an unreasonable basis for care?
I'm not saying your approach is wrong, I think they're not mutually exclusive - your solution would provide incentive to not use C in a technology agnostic way. But at some point isn't it just irresponsible to write critical infrastructure, or code that exposes user information, in a language that has historically been a disaster for security?
> Why is 'you should not have exposed C code to the internet' an unreasonable basis for care?
Because it does not mesh with reality. There are hundreds of millions of lines of C code exposed to the internet in all layers of the stack. So if you start off with a hardcore position like that, you virtually ensure that your plan will not be adopted.
It's the legacy that is the problem, not the future.
That's one of the arguments for SaferCPlusPlus[1]. It's a (high performance) option for retrofitting memory safety to existing C/C++ codebases. It requires (straightforward) modifications to the existing code, but involves much less effort than a complete rewrite. A tool to (mostly) automate the required modifications is being worked on. (But more resources might hasten its progress, if any of those well-resourced entities wanted to redirect some of their efforts from vulnerability detection and mitigation to prevention :)
Just out of curiosity, what kind of performance penalty are people willing to accept in exchange for memory safety?