There is plenty of room for debate over this advice. Prefer do notation if the effects of the flow are more important than the data structure, and prefer using Applicative style if the data structure layout is more important to understand than the effects used to build it. In the example in the article, the software is easier to understand from the perspective of how data was acquired to populate the record. However, this isn't true in a general sense; often records are purpose-built in higher-level languages to reflect effects and constrain them.
Example: when using Parsec or similar, Applicative style is far easier to understand when examining how records are parsed. In this case, the records will likely reflect the AST, which in turn will reflect the grammar. The two reinforce each other: additional constraints on syntax will naturally flock to data constructors.
> Prefer do notation if the effects of the flow are more important than the data structure, and prefer using Applicative style if the data structure layout is more important to understand than the effects used to build it.
I could spot the clone because I'm familiar with the form factor of the FTDI IC, and I'm familiar enough with the datasheet to spot the expected passives.
I'm not too keen these days on FTDI, given their reputation for manipulating their Windows device drivers to brick clones. So, while I'm familiar with their IC, I don't give them any more money. The next time I need a USB-to-serial cable, I'll bust out KiCad and build one around one of the ubiquitous ARM microcontrollers with USB support built in. Of course, this is easier for me, since I can write my own Linux or BSD device driver as well. Those using OSes with signing restrictions on drivers would have a harder time, unless they chose to disable driver signing.
I think that's what happened here. I spotted the fake because it has a large number of unused pins, which would not be the case with an FTDI chip that was literally made for this.
I think it's just some generic microcontroller emulating FTDI's protocol in software, but it can't keep up with high-speed transfers of course, and that's how they noticed there was a problem.
We need more people in this world willing to do their own thing, even if others might find it intimidating or silly. The important thing is to have fun and learn things. Compiler hacking is as good a hobby as any other, even if it's done in jest.
Sometimes, these things become real businesses. Not that this should be the intent, but it shows that what some consider silly, others will pay good money for.
Example: Cards Against Humanity started as a bit of a gag game between a small group of friends and eventually became something that has pop culture relevance.
Example: The founder of FedEx actually wrote a business pitch paper for an overnight shipping company. This paper was given a low grade by his professor. He went on to form this company, which became a success, despite this low grade. I like to think that he did this out of spite, and that Christmas letters to his old professor must've been fun.
You can't have paradigm shifts by following the paradigm.
The way I think of it: we need a distribution of people (shaped like a power law, not a normal).
Most people should be in the main body, doing what most people do. They're probably the "most productive".
Then you have people in the mid tail who innovate, but it's incremental and not very novel. They produce frequently (our current research paradigm optimizes for this), but there aren't leaps and bounds. Critically, it keeps pushing things forward, refining and improving.
But then there's those in the long tail. They fail most of the time and are the "least productive". Sometimes never doing anything of note their entire lives. But these are also the people that change the world in much bigger ways. And sometimes those that appeared to do nothing have their value found decades or centuries later.
Not everyone needs to be Newton/Leibniz. Not everyone should be. But that kind of work is critical to advancing our knowledge and wealth as a species. The problem is that it is often indistinguishable from wasting time. But I'm willing to bet that the work of Newton alone has created more value for all of human civilization than every failed long-tail person has cost us.
In any investment strategy you benefit from having high-risk investments. Most lose you money, but the ones that win reward you with much more than you lost. I'm not sure why this is so well understood in the investment world but so controversial in the research/academic/innovation world.
Hah. I think of it as a slime mold. There's the main body (bodies?), but it's always shooting out little bits of itself that try weird stuff - founding underwater communes, or climbing mountains in Crocs or something. Most of these offshoots don't have that much of an impact, but occasionally one lucks out and discovers America or peanut butter and the main body saunters off that way.
Yeah, we find this type of optimization all over nature. Even radiation is important for creating noise. We need it in machine learning: a noisy optimizer is critical for generalized learning. Too much noise and you learn nothing, but no noise and you only memorize. So there's a balance.
I'm reminded a bit of a robotics professor I had at uni, a very long time ago in the 1990s. When he was at uni in the late 70s he and a couple of friends were working on a robot arm, which didn't work because it kept dropping stuff. They made a more precise linkage for the gripper, and if anything it was worse. They made ever more precise bushings for the pivots, and it just didn't help or made it worse.
Eventually after a conversation in the pub with one of his friends who was studying sports physiotherapy, they ripped out the 47th set of really precise little Teflon bushings and put in new ones made of medical silicone rubber tubing.
Now all the joints were a bit sticky and squashy and wobbly, and it picked everything up perfectly every time.
I get the "outliers are useful" thing you're trying to emphasis. But as someone from a mountainous country, please dont "climb[ing] mountains in Crocs", we regularily get media reports of hopelessly underequipped people having to be rescued with a whole team of people, in the middle of the night, with horrible weather, usually also endangering the people that do the rescue. I guess what I am trying to say is, there is a limit to how silly you can/should be.
What is the expected value of some dude at university spinning dinner plates in the cafeteria? What a silly, pointless thing to do! Of course, if you're physics professor Feynman, you get a Nobel out of it, so do the silly, pointless things after all!
If the bongo man taught me anything, it's that you should do something for the pursuit itself. The utility can always be found later. But chasing utility first only limits your imagination.
I don't understand how this is a power law and not a normal. The "long tail" is usually mentioned in the context of a normal distribution, as the right-most end of it.
I think you have a misunderstanding: a heavy tail is essentially one that is not exponentially bounded, and a long tail is a subclass of that.
There's kind of a big difference between the characteristics of a normal distribution and a power law, and I think explaining that will really help.
In a normal you have pressure from both ends, which is why you find it in things like height. There's evolutionary pressure not to be too small, but also pressure not to be too large: being tall is advantageous but costly. Technically the distribution never ends (and that's in either direction!), though you're not going to see micro people or 100' tall people because the physics gets in the way. Also, mind you, actual height can't be less than zero, even though the normal technically extends there.
It is weird to talk about a "long tail" with normal distributions, and flags should go up when you hear this.
In a power-law distribution you don't have that bounding pressure, so they are scale-free. A classic example of this is wealth. It's easier to understand if you ignore the negative case at first, so let's do that (it still works with negative wealth). There's no upper bound to wealth, right? So while most people will be near the main "mode", there is a long tail. We might also say "heavy tail" when the variance is even moderate. So this tail is long, and the median value isn't really representative of the distribution. Funnily enough, power laws are incredibly common in nature. I'm really not sure why they aren't discussed more.
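To put a rough formula behind the "exponentially bounded" distinction (a hand-wavy sketch, constants and normalization omitted): a normal tail falls off like P(X > x) ~ exp(-x^2 / (2σ^2)), while a power-law tail falls off like P(X > x) ~ (x / x_min)^(-α). For any α, the power-law tail is eventually far larger than the exponential one, which is why extreme values remain likely enough to matter and why the mean and median stop being representative.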
I think Veritasium did a video on power distributions recently? Might be worth a check.
Before I watched this I would default to thinking about most distributions as normal. It's really fun to think about whatever "game" you are playing - whether you are building a business or trying to win a carnival game - and consider whether the results follow a normal distribution or a power law.
> The founder of FedEx actually wrote a business pitch paper for an overnight shipping company. This paper was given a low grade by his professor. He went on to form this company, which became a success, despite this low grade.
Was the paper given a low grade because it was a bad idea or because Fred Smith wrote a bad paper? If his pitch didn’t work, did feedback from the professor help Smith sharpen his idea so he was in a better position to make FedEx a success?
Allegedly, it was given a low grade because, in the professor's estimation, it was not a feasible business plan. Of course, this forms part of the legend behind Fred Smith and FedEx, so it should be taken with a grain of salt.
That still feels a bit off, as you are "having fun" because it ultimately is the road to success.
There is a deeper hurt in the tech world, which is that we have all been conditioned to crave greatness. Every employer tries to sell us on how important what they do is, or how rich everyone will become. We can't even vacation without thinking how much better we will perform once we get back. That struggle with greatness is something every human grapples with, but for workers in tech it is particularly difficult to let it go. The entire industry wants us to hold onto it until we are completely drained.
Anyway the result is sentiments like this, where having fun, exploring and learning can't just exist for the inherent rewards.
As per my original comment, these examples are only indicative that profitable endeavors can come out of these things in unexpected ways, but that's not the point of doing these things. I'm never going to profit from, nor recoup the costs I've sunk into most of the mad science I do. That's not the point. I do it because it's fun and because I like building cool things.
These examples are one justification for why we should embrace these kinds of hobbies, and not the desirable outcome for these kinds of hobbies.
Josef Pieper wrote a book called "Leisure: the Basis of Culture"[0] - published in 1948 - in which he discusses the meaning of leisure, which is not what we mean by it today, and criticizes the "bourgeois world of total labor" as a spiritually, intellectually, and culturally destructive force.
Today, we think of "leisure" as merely free time from work or recreation, something largely done to "recharge" so that we can go back to work (in other words: modern "leisure" is for the sake of work). This is not the original meaning. Indeed, etymologically, the word "school" comes from σχολή ("skholē"), which means "leisure", but with the understanding that it involves something like learned discussion or whatever. (Difficult to imagine, given how hostile modern schooling is, resembling more of a factory than a place of learning.) The purpose of work was to enable leisure. We labored in order to have leisure.
What's also interesting is that unlike us, who think of "leisure" in terms of work (that is, we think of it as a negation of work, "not-working"), the Greeks viewed it in exactly the opposite way. The word for "work" is ἀσχολίᾱ ("askholíā"), which is the absence of leisure. The understanding held for most of history and explains why we call the liberal arts liberal: it freed a man to be able to pursue truth effectively, and was contrasted with the servile arts, that is, everything with a practical aim like a trade or a craft.
This difference demonstrates an important shift and betrays the vulgar or nihilistic underbelly of our modern culture. Work is never for its own sake. It is always aimed at something other than itself (praxis and associated poiesis). This distinguishes it from something like theory (theoria) which is concerned with truth for its own sake.
So what do we work for? Work for its own sake is nihilistic, a kind of passing of the metaphysical buck, an activity pursued to avoid facing the question of what we live for. Work pursued merely to pay for sustenance - full stop - is vulgar and lacks meaning. Sustenance is important, but is that all you are, a beast that slurps food from a trough? Even here, only in human beings is food elevated into feast, into meal, a celebration and a social practice that incorporates food; it is not merely nutritive. Are you merely a consumerist who works to buy more crap, foolishly believing that ultimate joy will be found in the pointless chase for them?
Ask yourself: whom or what do you serve? Everyone aims at something. What are the choices of your life aiming at?
That is an excellent way of considering both leisure and work, and certainly a testament to the importance of studying the humanities.
Aristotle famously developed the Greek concept of εὐδαιμονία (eudaimonia), which dovetails with what you wrote. Roughly, the concept translates to "human flourishing" or "living well". While Aristotle's conception of what best constitutes this differed a bit from more ancient Greek concepts passed down through their oral tradition, and definitely differs from what we may consider today, it bears investigation. I definitely think that education and personal research fit into my conception of it, but tastes differ. Nietzsche gave what I consider some excellent responses to Aristotle, especially when it comes to finding / making meaning in our lives with respect to the modern world. The Transcendentalist school, in particular Henry David Thoreau and Ralph Waldo Emerson, also provided some interesting flavor.
I think that your questions should be asked continuously. We should all adjust our life trajectories based on our own flourishing, in ways that challenge us and lead to growth. There aren't clear answers to these questions. In fact, they should lead to a bit of discomfort, like sand in one's clam shell. Much as this sand will eventually form a pearl, these questions should drive us to better ourselves, little by little.
I have been formally verifying software written in C for a while now.
> is that only for some problems is the specification simpler than the code.
Indeed. I had to fall back to using a proof assistant to verify the code used to build container algorithms (e.g. balanced binary trees), because the problem space gets really difficult for a SAT solver when you need to verify, for instance, memory safety for any arbitrary container operation. Specifying the problem and proving the supporting lemmas takes far more time than proving the code correct with respect to this specification.
> If you do this right, you can get over 90% of proofs with a SAT solver
So far, in my experience, 99% of code that I've written can be verified via the CBMC / CProver model checker, which uses a SAT solver under the covers. So, I agree.
I only need to reach for CiC when dealing with things that I can't reasonably verify with the model checker by squinting my eyes. For instance, proving containers correct with respect to the same kinds of function contracts I use in model checking gets dicey, since these involve arbitrary and complex recursion. But verifying code that uses these containers is actually quite easy to do via shadow methods. With containers, we only really care about whether we can verify the contracts for how they are used, and whether client code properly manages ownership semantics: placing an item into the container or taking an item out of it, referencing items in the container, not holding onto dangling references once a lock on a container is released, and so on. In these cases, simpler models of these containers that can be trivially model checked can be substituted in.
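As a rough sketch of what such a shadow might look like (the list API and names here are invented for illustration, not the author's actual code), the shadow ignores the real container machinery and models only what the contract promises, so client code can be checked cheaply:

    #include <stddef.h>

    typedef struct list list;

    /* shared header: both the real implementation and the shadow conform
     * to this declaration and its contract. */
    int list_insert(list *l, void *item);   /* on success, takes ownership */

    /* shadow_list.c -- linked in place of the real list when model checking
     * client code with CBMC. */
    struct list
    {
        void  *last_inserted;   /* just enough state to check ownership use */
        size_t count;
    };

    int nondet_int(void);       /* CBMC treats undefined functions as nondet */

    int list_insert(list *l, void *item)
    {
        __CPROVER_assert(NULL != l, "list must be valid");
        __CPROVER_assert(NULL != item, "item must be valid");

        if (nondet_int())        /* also model the failure path */
        {
            return -1;           /* on failure, ownership stays with the caller */
        }

        l->last_inserted = item; /* on success, the list owns the item */
        l->count += 1;
        return 0;
    }

Model checking a caller against this shadow forces it to handle both outcomes and to respect the ownership transfer, without dragging the real tree or list logic into the SAT problem.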
> Now, sometimes you just want a negative specification - X must never happen. That's somewhat easier.
Agreed. The abstract machine model I built up for C is what I call a "glass machine": anything that might be UB or that could involve unsafe memory access causes a crash. Hence, quantified over any acceptable initial state and input parameters that match the function contract, these negative specifications must simply step over all instructions without hitting a crash condition. If a developer can single-step, and learns how to perform basic case analysis or basic induction, the developer can easily walk proofs of these negative specifications.
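In CBMC terms, a harness for such a negative specification can look something like this (a minimal sketch with invented names; real contracts are richer): constrain the nondeterministic inputs to the precondition, run the function, and let the checker's built-in UB and memory checks play the role of the crash conditions.

    #include <stddef.h>
    #include <stdint.h>

    size_t nondet_size_t(void);   /* CBMC: nondeterministic input */

    /* code under check: contract requires n <= capacity of dst and src */
    void copy_bytes(uint8_t *dst, const uint8_t *src, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
        {
            dst[i] = src[i];
        }
    }

    void harness(void)
    {
        uint8_t dst[16];
        uint8_t src[16];
        size_t  n = nondet_size_t();

        __CPROVER_assume(n <= sizeof(dst));   /* precondition from the contract */

        /* the negative spec: for every allowed n, no bounds or pointer
         * violation may be reachable. */
        copy_bytes(dst, src, n);
    }

    /* e.g. cbmc harness.c --function harness --bounds-check --pointer-check \
     *          --unwind 17 --unwinding-assertions */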
I'm starting to. One of the libraries I've started on recently is open source. I'm still really early in that process, but I started extracting a few functions for the optimized singly and doubly linked list containers and will be moving on to the red-black and AVL tree containers soon. Once these are done, I should be able to model check the thread, fiber, and socket I/O components.
Around 15 years ago, I built a barbecue controller. This controller had four temperature probes that could be used to check the temperature of the inner cooking chamber as well as various cuts of meat. It controlled servos that opened and closed vents, and had a custom-derived PID algorithm that could account for the delayed effect of oxygen reaching the charcoal.
Anyway, of relevance to this thread is that the controller connected to the local wireless network and provided an embedded HTTP server with an SVG-based web UI that graphed temperatures and provided actual knobs and dials so that the controller could be tweaked. SVG in the browser works nicely with JavaScript.
Sadly, I did not. I have the source code on an old laptop somewhere. I was disheartened when I considered productizing it and discovered just how deep of a patent tarpit I was dealing with.
It's on my list to revisit in the future. At this point, most of the patents are coming up on expiration, and it would make for a great open source project. Hardware has gotten much better over the subsequent years; there are nicer lower power solutions with integrated Bluetooth LE as well as other low power wireless technologies.
In C, you're correct. The problem is that, in C++, one must account for the fact that anything could throw an exception. If something throws an exception between the time that f is opened and f is closed, the file handle is leaked. This is the "unsafe" that Bjarne is talking about here. Specifically, exception unsafety that can leak resources.
As an aside, it is one of the reasons why I finally decided to let go of C++ after 20 years of use. It was just too difficult to teach developers all of the corner cases. Instead, I retooled my system programming around C with model checking to enforce resource management and function contracts. The code can be read just like this example and I can have guaranteed resource management that is enforced at build time by checking function contracts.
The function contracts are integrated into the codebase. Bounded model checking tools, such as CBMC, can be used to check for integer UB, memory safety, and to evaluate custom user assertions. The latter feature opens the door for creating function contracts.
I include function contracts as part of function declarations in headers. These take the form of macros that clearly define the function contract. The implementation of the function evaluates the preconditions at the start of the function, and is written with a single exit so the postconditions can be evaluated at the end of the function. Since this function contract is defined in the header, shadow functions can be written that simulate all possibilities of the function contract. The two are kept in sync because they both depend on the same header. This way, model checks can be written to focus on individual functions with any dependencies simulated by shadows.
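A minimal sketch of that structure (all names here, like widget_create and the contract macros, are invented for illustration and are not the author's actual code):

    /* ---- widget.h: the contract lives next to the declaration ---- */
    #include <stdlib.h>

    typedef struct widget { int refcount; } widget;

    /* precondition: out is non-NULL.
     * postcondition: returns 0 and sets *out on success; nonzero otherwise. */
    #define WIDGET_CREATE_PRE(out)           (NULL != (out))
    #define WIDGET_CREATE_POST(retval, out)  (0 != (retval) || NULL != *(out))

    int widget_create(widget **out);

    /* ---- widget.c: real implementation, single exit ---- */
    #include <assert.h>

    int widget_create(widget **out)
    {
        int retval;

        assert(WIDGET_CREATE_PRE(out));            /* checked by CBMC as well */

        widget *tmp = malloc(sizeof(*tmp));
        if (NULL != tmp)
        {
            tmp->refcount = 1;
            *out = tmp;
            retval = 0;
        }
        else
        {
            retval = 1;
        }

        assert(WIDGET_CREATE_POST(retval, out));   /* single exit: checked once */
        return retval;
    }

    /* ---- shadow/widget.c: substituted when model checking callers ----
     * It exercises every outcome the contract allows, and nothing more. */
    int nondet_int(void);

    int widget_create(widget **out)
    {
        __CPROVER_assert(WIDGET_CREATE_PRE(out), "caller met precondition");

        if (nondet_int())                          /* success path */
        {
            *out = malloc(sizeof(widget));
            __CPROVER_assume(NULL != *out);
            (*out)->refcount = 1;
            return 0;
        }

        return 1;                                  /* failure path */
    }

Because the real implementation and the shadow both reference the same contract macros from the header, a change to the contract shows up immediately in both the proof of the function and the proofs of its callers.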
The model checks are included in the same project, but are separate from the code under instrumentation, similar to how unit tests are commonly written. I include the shadow functions as an installation target for the library when it is installed in development mode, so that downstream projects can use existing shadow functions instead of writing their own.
I'm living with heart failure. I have 20-30 years before I'll need a transplant, if I live a perfect lifestyle and keep my other health issues under control. Due to my other health issues, I am not a good candidate for a human heart transplant. It's not that a human heart transplant would fail, but that when I'd be placed against others on the list for a new heart, my other health issues would reduce my priority such that there would always be someone with higher priority to receive a heart, up until the point at which I'm no longer healthy enough to receive a transplant. There are far too few human hearts, and far too many people who need one. All that the transplant boards can do is give hearts to those with the greatest momentary need and the best chance of surviving.
Xenotransplantation is one of the lifelines I'm counting on. I'm hoping that, by the time I need it, the issues that we currently have will be worked out. I have zero ethical issues with breeding and eventually culling pigs in order to save human lives. I hope that there will be other, better breakthroughs by then, but if not, the best I can hope for is that the pigs are raised in a sterile and enriching environment, and that the only bad day they have is their last one.
I have actually argued for the use of mailing lists for corporate engineering discussions. When that becomes the medium for code review or design discussions, there's a nice streamlined workflow. Further, it's practically trivial to write or customize a mailing list reflector; if you have a decent and secure mail client library, you're a weekend away from it just working. Contrast that with customizing or rolling your own IRC, Slack, Discord, or web forum clone. Mailing lists don't suffer from vendor lock-in, and anyone with a mail client who can follow basic rules can participate.
An invitation-only mailing list with a reflector that verifies PGP encryption and non-repudiation is just fine for most corporate discussions. For mailing lists open to the public, new users can be placed in a moderation queue for a period of time until it's clear that they understand list netiquette and formatting rules.
I think for internal corporate use, NNTP would work even better, if it were still supported in mail clients. I deployed it for exactly that purpose in the late 1990s and it was great.
Fortunately there are IMAP shared mailboxes, which we worked incredibly hard at CMU during the Cyrus project to ensure would work just as well as bboards did in CMU’s previous Andrew Mail System. (“Bulletin boards,” aka bboards, were basically CMU-local Usenet, with global Usenet under a netnews.* hierarchy via a gateway.)
Most IMAP clients worth their salt support shared mailbox hierarchies, even Apple Mail does, so it’s “just” a matter of setting up shared groups on a server.
I agree that NNTP would be better. I do have an NNTP server because of this, although my intention is that it can be used for public discussions and not only for internal corporate use; I did implement authentication in case some newsgroups are used for private discussions too.
(Note, the article in the IETF mailing list does mention Usenet too)
The applicability of Rice's theorem with respect to static analysis or abstract interpretation is more complex than you implied. First, static analysis tools are largely pattern-oriented. Pattern matching is how they sidestep undecidability. These tools have their place, but they aren't trying to be the tooling you or the parent claim they are. Instead, they are more useful for enforcing coding style. This can help with secure software development practices, but only by enforcing idiomatic style.
Bounded model checkers, on the other hand, are this tooling. They don't have to disprove Rice's theorem to work; in fact, they work directly with this theorem. They transform code into state equations that are run through an SMT solver, looking for logic errors, use-after-free, buffer overruns, etc. But they also fail code for unterminated execution within the constraints of the simulation: if abstract interpretation through SMT states does not complete in a certain number of steps, then this is also considered a failure. The function or subset of the program passes only if the SMT solver can't find a satisfying state that triggers one of these issues, through any possible input or external state.
These model checkers also provide the ability for user-defined assertions, making it possible to build and verify function contracts. This allows proof engineers to tie in proofs about higher level properties of code without having to build constructive proofs of all of this code.
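For instance (a sketch with invented names, not any particular project's code), a user-defined assertion can state a functional postcondition that the solver then checks over every possible input, instead of over a handful of unit-test samples:

    #include <stdint.h>

    uint32_t nondet_u32(void);   /* CBMC: nondeterministic input */

    /* saturating addition capped at `cap` */
    static uint32_t sat_add(uint32_t a, uint32_t b, uint32_t cap)
    {
        if (a >= cap || b >= cap - a)
        {
            return cap;
        }
        return a + b;
    }

    /* proof harness: the assertions are the contract's postconditions */
    void check_sat_add(void)
    {
        uint32_t a   = nondet_u32();
        uint32_t b   = nondet_u32();
        uint32_t cap = nondet_u32();

        uint32_t r = sat_add(a, b, cap);

        __CPROVER_assert(r <= cap, "result never exceeds the cap");
        __CPROVER_assert(r == cap || r == a + b, "exact unless saturated");
    }

If the property fails for even one input combination, the checker produces a concrete counterexample trace.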
Rust has its own issues. For instance, its core library is unsafe, because it has to use unsafe operations to interface with the OS, or to build containers or memory management models that simply can't be described with the borrow checker. This has led to its own CVEs. To strengthen the core library, core Rust developers have started using Kani -- a bounded model checker like those available for C or other languages.
Bounded model checking works. This tooling can be used to make either C or Rust safer. It can be used to augment proofs of theorems built in a proof assistant to extend this to implementation. The overhead of model checking is about that of unit testing, once you understand how to use it.
It is significantly less expensive to teach C developers how to model check their software using CBMC than it is to teach them Rust and then have them port code to Rust. Using CBMC properly, one can get better security guarantees than using vanilla Rust. Overall, an Ada + SPARK, CBMC + C, or Kani + Rust strategy, coupled with constructive theory and proofs regarding overall architectural guarantees, will yield equivalent safety and security. I'd trust such pairings of process and tooling -- regardless of language choice -- over any LLM-derived solutions.
Sure it's possible in theory, but how many C codebases actually use formal verification? I don't think I've seen a single one. Git certainly doesn't do anything like that.
I have occasionally used CBMC for isolated functions, but that must already put me in the top 0.1% of formal verification users.
It's not used more because it is unknown, not because it is difficult to use or impractical.
I've written several libraries and several services now that have 100% coverage via CBMC. I'm quite experienced with C development and with secure development, and reaching this point always finds a handful of potentially exploitable errors I would have missed. The development overhead of reaching this point is about the same as the overhead of getting to 80% unit test coverage using traditional test automation.
You're describing cases in which static analyzers/model checkers give up, and can't provide a definitive answer. To me this isn't side-stepping the undecidability problem, this is hitting the problem.
C's semantics create dead-ends for non-local reasoning about programs, so you get inconclusive/best-effort results propped up by heuristics. This is of course better than nothing, and still very useful for C, but it's weak and limited compared to the guarantees that safe Rust gives.
The bar set for Rust's static analysis and checks is to detect and prevent every instance of UB in safe Rust code. If any slips through, people file it as a soundness bug or a CVE. If you can make Rust's libstd crash from safe Rust code, even if it requires deliberately invalid inputs, it's still a CVE for Rust. There is no comparable expectation of having anything reliably checkable in C. You can crash the stdlib by feeding it invalid inputs, and it's not a CVE; just don't do that. Static analyzers are allowed to have false negatives, and that's normal.
You can get better guarantees for C if you restrict semantics of the language, add annotations/contracts for gaps in its type system, add assertions for things it can't check, and replace all the C code that the checker fails on with alternative idioms that fit the restricted model. But at that point it's not a silver bullet of "keep your C codebase, and just use a static analyzer", but it starts looking like a rewrite of C in a more restrictive dialect, and the more guarantees you want, the more code you need to annotate and adapt to the checks.
And this is basically Rust's approach. The unsafe Rust is pretty close to the semantics of C (with UB and all), but by default the code is restricted to a subset designed to be easy for static analysis to be able to guarantee it can't cause UB. Rust has a model checker for pointer aliasing and sharing of data across threads. It has a built-in static analyzer for memory management. It makes programmers specify contracts necessary for the analysis, and verifies that the declarations are logically consistent. It injects assertions for things it can't check at compile time, and gives an option to selectively bypass the checkers for code that doesn't fit their model. It also has a bunch of less rigorous static analyzers detecting certain patterns of logic errors, missing error handling, and flagging suspicious and unidiomatic code.
It would be amazing if C had a static analyzer that could reliably assure with a high level of certainty, out of the box, that a heavily multi-threaded complex code doesn't contain any UB, doesn't corrupt memory, and won't have use-after-free, even if the code is full of dynamic memory (de)allocations, callbacks, thread-locals, on-stack data of one thread shared with another, objects moved between threads, while mixing objects and code from multiple 3rd party libraries. Rust does that across millions lines of code, and it's not even a separate static analyzer with specially-written proofs, it's just how it works.
Such analysis requires code with sufficient annotations and restricted to design patterns that obviously conform to the checkable model. Rust had a luxury of having this from the start, and already has a whole ecosystem built around it.
C doesn't have that. You start from a much worse position (with mutable aliasing, const that barely does anything, and a type system without ownership or any thread safety information) and need to add checks and refactor code just to catch up to the baseline. And in the end, with all that effort, you end up with a C dialect peppered with macros, and merely fix one problem in C, without getting additional benefits of a modern language.
CBMC+C has a higher ceiling than vanilla Rust, and SMT solvers are more powerful, but the choice isn't limited to C+analyzers vs only plain Rust. You still can run additional checkers/solvers on top of everything Rust has built-in, and further proofs are easier thanks to being on top of stronger baseline guarantees and a stricter type system.
If we mark any case that might be undecidable as a failure case, and require that code be written that can be verified, then this is very much sidestepping undecidability by definition. Rust's borrow checker does the same exact thing. Write code that the borrow checker can't verify, and you'll get an error, even if it might be perfectly valid. That's by design, and it's absolutely a design meant to sidestep undecidability.
Yes, CBMC + C provides a higher ceiling. Coupling Kani with Rust results in the exact same ceiling as CBMC + C, not a higher one. Kani compiles Rust to the same goto-C that CBMC compiles C to. Not a better one. The abstract model and theory that Kani provides is far more strict than what Rust provides with its borrow checker and static analysis. It's also more universal, which is why Kani works on both safe and unsafe Rust.
If you like Rust, great. Use it. But, at the point of coupling Kani and Rust, it's reaching safety parity with model-checked C, not surpassing it. That's fine. Similar safety parity can be reached with Ada + SPARK, C++ and ESBMC, Java and JBMC, etc. There are many ways of reaching the same goal.
There's no need to pepper C with macros or to require a stronger type system with C to use CBMC and to get similar guarantees. Strong type systems do provide some structure -- and there's nothing wrong with using one -- but unless we are talking about building a dependent type system, such as what is provided with Lean 4, Coq, Agda, etc., it's not enough to add equivalent safety. A dependent type system also adds undecidability, requiring proofs and tactics to verify the types. That's great, but it's also a much more involved proposition than using a model checker. Rust's H-M type system, while certainly nice for what it is, is limited in what safety guarantees it can make. At that point, choosing a language with a stronger type system or not is a style choice. Arguably, it lets you organize software in a better way that would require manual work in other languages. Maybe this makes sense for your team, and maybe it doesn't. Plenty of people write software in Lisp, Python, Ruby, or similar languages with dynamic and duck typing. They can build highly organized and safe software. In fact, such software can be made safe, much as C can be made safe with the appropriate application of process and tooling.
I'm not defending C or attacking Rust here. I'm pointing out that model checking makes both safer than either can be on their own. As with my original reply, model checking is something different than static analysis, and it's something greater than what either vanilla C or vanilla Rust can provide on their own. Does safe vanilla Rust have better memory safety than vanilla C? Of course. Is it automatically safe against the two dozen other classes of attacks by default and without careful software development? No. Is it automatically safe against these attacks with model checking? Also no. However, we can use model checking to demonstrate the absence of entire classes of bugs -- each of these classes of bugs -- whether we model check software written in C or in Rust.
If I had to choose between model checking an existing codebase (git or the Linux kernel), or slowly rewriting it in another language, I'd choose the former every time. It provides, by far, the largest gain for the least amount of work.