The VeraCrypt Audit Results (ostif.org)
288 points by conductor on Oct 17, 2016 | hide | past | favorite | 92 comments


Those results do not really inspire confidence. Shipping an ancient version of zlib without patching, ever, is an avoidable mistake. The GOST 28147-89 cipher basically cannot be used with XTS due to the cipher's 64-bit block size, which should have been caught by a good crypto engineer.
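To make the block-size point concrete: XTS-class modes start leaking information as you approach the birthday bound of the underlying cipher, and for a 64-bit block that bound is tiny by disk-encryption standards. A rough back-of-the-envelope sketch (my own illustration, not from the audit):

```python
def birthday_bound_bytes(block_bits):
    # Collisions between ciphertext blocks become likely after roughly
    # 2^(n/2) blocks for an n-bit block cipher (the birthday bound).
    blocks = 2 ** (block_bits // 2)
    return blocks * (block_bits // 8)  # total bytes encrypted at that point

# GOST 28147-89 (64-bit blocks): trouble after ~32 GiB, i.e. less than
# one ordinary disk.
print(birthday_bound_bytes(64) / 2**30)   # 32.0 (GiB)

# AES (128-bit blocks): ~256 EiB, a non-issue in practice.
print(birthday_bound_bytes(128) / 2**60)  # 256.0 (EiB)
```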

Also, the most significant new work - the UEFI support - seems to have quite a few issues.

Note that, despite the color-coding, the list of fixed issues is not the list of critical (red) issues!


Valid points, though I would consider the requirements for writing really secure software quite high, so I do not think most open source projects could meet such standards. It could perhaps be compared to developing mission-critical software, as NASA does: the amount of resources and rigor they apply to get bugs out of the code is beyond the reach of most free/donation-funded projects. There are probably a few open source projects that come close to being considered secure, though; mostly the very widely used ones backed by big companies or foundations.

So we have VeraCrypt as the "good enough", easy-to-use option right now, but you wouldn't happen to know any secure alternatives? You seem to know a bit about this.


"though I would consider the requirements for writing really secure software quite high, so I do not think most open source projects could meet such standards. "

Data from prior work indicates it ranges considerably, from significantly harder to extremely hard. The LOCK system, built with an Orange Book A1 development process, was highly secure and gave a cost breakdown: A1 assurance activities added around 37% on top of regular labor cost. Altran/Praxis's Correct-by-Construction method, which builds highly-assured systems with a mix of Z specs, Ada, SPARK, reviews, and testing, carries about a 50% premium on top of normal development. Just using SPARK automatically knocks out whole classes of bugs. Galois built and open-sourced Cryptol so people can specify algorithms in an easy DSL and then generate C from it. There are also parser and protocol generators.

So, prior evidence shows it takes specialized skill and domain knowledge (at least two extra people, unless one person has both), but otherwise costs 30-50% more time. Like the Cleanroom methodology, it partly achieves this by saving you time on debugging and refactoring, thanks to fewer bugs overall plus fixes made earlier in the lifecycle.

Most OSS projects don't do this stuff because they don't know it's necessary, don't care, or don't have staff for both the features users demand and assurance activities. Interestingly, there are more high-assurance products in proprietary software than in FOSS, despite the huge labor advantage FOSS has. That there's little even medium-assurance work in the majority of both says even worse things about IT's priorities or apathy, given that medium assurance costs little to nothing. Microsoft is the one exception among the big software houses, via the SDL and MS Research's work. For FOSS, DJB, OpenBSD, and SQLite come to mind.


Thanks for the info. I did some searching based on your comment and managed to find a report by Altran/Praxis detailing the development process and technologies for a Tokeneer ID Station implementation they did for the NSA. Very interesting stuff. http://www.adacore.com/uploads/downloads/Tokeneer_Report.pdf

It really shows the amount of resources and knowledge required for actually having safe or secure systems.


Glad you enjoyed it. Remember Praxis's method the next time some fool says you can't engineer software; a few companies like them do. (They're now just called Altran.) The Tokeneer link was a good one, since they published the source code on the AdaCore website for people to learn from. However, the highest-security thing they did was a certification authority, built to the UK equivalent of EAL6/7: http://www.anthonyhall.org/c_by_c_secure_system.pdf

An example of a newer one, for model-to-code-to-ASM verification: https://www.umsec.umn.edu/sites/www.umsec.umn.edu/files/hard...

The LOCK project was a landmark in all that it accomplished, with the Sidewinder firewall and SELinux among its ripple effects: http://www.cyberdefenseagency.com/publications/LOCK-An_Histo...

The B method is one of the most successful in industry. You'll definitely see the engineering aspect in this one: http://www.methode-b.com/wp-content/uploads/sites/7/2012/08/...

Cleanroom was an early one, from the 1980s (start at p. 13): https://www.sei.cmu.edu/reports/96tr022.pdf

Note: Cleanroom had excellent results at low cost but disappeared for some reason. I recently saw someone combine Cleanroom with Python, with good results. Cleanroom's compositional style means something like Haskell + QuickCheck could be a great combo. If anyone tries it, let me know the results.


Really secure software isn't easy to write. But we're not holding the VeraCrypt developers to a higher bar than the one they've set for themselves.

For disk encryption, use whatever comes with your system. If you have no preference, Microsoft's BitLocker is excellent: it can incorporate hardware security features such as a TPM or a self-encrypting (OPAL) SSD, it's somewhat easy to administer, etc.


If you are on Windows 10, the Home edition doesn't come with BitLocker - so your choice is to upgrade to Pro or use something else, such as VeraCrypt.

I think it's probable that both can be broken/bypassed by state actors - I'd be happy to use either as a defence against everyone else though.


> For disk encryption, use whatever comes with your system.

I acknowledge this is likely a super-niche problem, but if you want to use an encrypted thumb drive (or external HDD/SSD) to access data from Windows, Mac, and Linux machines, there is almost nothing else. Is there any realistic alternative?

And BitLocker, even if you are on Windows, is not completely trouble-free: on Windows 7, for example, to my knowledge only the Ultimate/Enterprise editions support it. Windows 7 Pro does not.


Regarding BitLocker: even on Windows 10 Pro, you can't encrypt a virtual disk, put it on a DVD or Blu-ray, and later read it from there directly (why? Because MSFT). That means it's not suitable at all for longer-term archiving.


BitLocker is excellent in a commercial environment, but the question some people are concerned about is how it might handle state-level threats. Although I suspect that after Windows 10, techies would find that consideration moot, there are quite a lot of human-rights people still using it for the usual reasons.


If your threat model is that you assume your OS vendor is part of the attack then you can't use a proprietary OS.

There's probably no free OS either that can give you the confidence level you'd want, because you probably would want to have reproducible builds, a trustworthy update process (preferably with some kind of transparency log), a wide array of exploit mitigations (aslr/pie/pic, grsec kernel, maybe CFI). There's currently no free OS offering all of those.


Just out of interest: is there any OS that offers all of those? Libre/Open Source/Free or proprietary (latter with source access I assume)?


If BitLocker seems riskier to you than VeraCrypt, might I suggest you recalibrate your threat model? See e.g. https://news.ycombinator.com/item?id=8193364.


Open source or not shouldn't be a constraint on spending resources and effort on code; it would just mean development is a bit slower and more tedious. But we're dealing with encryption here; if there's anything that should be done slowly, tediously, and with an enormous amount of OCD, it's encryption software.


TrueCrypt inspired confidence in me because the way the dev team operated displayed true expertise and professionalism. Maybe I perceived that incorrectly, but that's how I felt.

VeraCrypt has always seemed like someone swooped down and forked TrueCrypt the moment it became abandoned, to grab all its faithful users.

Now, that's a bit dramatic, and probably not entirely true. But that's how it always felt to me -- ANYONE can just fork TrueCrypt. Why should I trust these guys in particular?


TrueCrypt went belly up in May 2014 [1].

VeraCrypt 1.0 was released in May 2013 [2], one year prior.

[1] http://truecrypt.sourceforge.net

[2] https://sourceforge.net/u/yanpas/veracrypt/ci/VeraCrypt_1.0b...


VeraCrypt is currently maintained by Mounir Idrassi, who is a real person with a name. VeraCrypt is also actively developed; the big issue solved in version 1.18 was adding support for UEFI GPT drives, which is actually a big deal.

TrueCrypt was developed by an anonymous developer who vanished, abandoning the project. Funny how trust works.


The way in which TrueCrypt "became abandoned" does not inspire confidence either, considering they left a cryptic and very strange exit note that many perceived as a notice of law-enforcement activity.


The strongest evidence I have seen is that the developer was arrested by the DEA on unrelated charges.[0]

The full story about him is fascinating and well worth the read.[1]

[0] http://www.newyorker.com/news/news-desk/the-strange-origins-...

[1] https://mastermind.atavist.com/


That was a developer of E4M, not of TrueCrypt. By the time TrueCrypt was being developed, Le Roux was busy committing crimes.

https://en.wikipedia.org/wiki/E4M

https://en.wikipedia.org/wiki/Paul_Le_Roux

Although TrueCrypt had existed since 2004, around 2007 he ordered his employees to use E4M (which would be improbable if he had really developed TrueCrypt). The developer (or developers) of TrueCrypt isn't publicly known.


If anything that cryptic abandonment actually gave me more confidence in TrueCrypt. Was it so hard to crack that the government shut it down?


I meant confidence in what came after it: a backdoored and/or weakened successor seemed likely.


Ah I get what you meant now, I agree. If some entity already had their eye on TrueCrypt and had shut it down, you wouldn't expect them to let a copycat pop up unless it was less secure.


This is the "NSAKEY" of open source crypto conspiracy theories.


As far as I recall, there are 8 bytes set to zero in the TC header at a very curious location.


That's how the worst backdoors always begin... with curious patterns of zeroes. How better to zero out a key than with actual zeroes? Nobody will ever suspect!


> Note that, despite the color-coding, the list of fixed issues is not the list of critical (red) issues!

yikes, one hell of an oversight


UEFI support is HUGE. So happy to see this. Finally we're nearing a feasible open-source solution for FDE on modern laptops running Windows.


The danger I caution against is that I've seen colleagues jump to obscure, and in some cases obviously broken, crypto products, because "I read an audit that VeraCrypt is insecure".

That some person's weekend project doesn't have an audit like this surfacing issues doesn't make it better; it makes it worse. Please consider that when reviewing the context of this paper.


OSTIF's financial support is sad. Why is the open source world so generous with time and so cheap with money? Kudos to DuckDuckGo for apparently being the only business that pulls their weight; another great reason to use them.

https://ostif.org/top-ostif-donors/

    Top OSTIF Donors

    These are the individuals and organizations that have given
    the most support to the OSTIF.

    Individuals:
    Derek Zimmer – $1947
    Zach Graves – $188
    Amir Montazery – $200
    Ben W – $30
    Nathan N – $10

    Groups:
    DuckDuckGo – $25,000
    VikingVPN – $1000
    A special thank you for website support from Mike from
    HTPCGuides.com

EDIT: On a second look, is that list really accurate? The fifth highest individual donor gave $10?

I read about the founder and long-time developer of a well-known, respected Linux distro, who had to move back to his hometown because he couldn't afford the Bay Area and has no health insurance. Maybe some of his millions of users could chip in a little for the incredible service he provides to them. How depressing.


It might also be about how they present things. For example, on the linked page the request for donations is in a small font at the end of the results, with no link to the donation page [1]. Edit: and on the donation page, the Amazon and PayPal links are not working.

I don't know how they collect money in general, but for me this works best: "We now have $xxx, but we still need $yyy to do this fairly specific great thing. Please donate!"

[1] https://ostif.org/donate-to-ostif/


I thought you were exaggerating at first, and then I went to try to donate. Wow. The "donate to OSTIF" link looks like plain text (black) while all other links are blue; as you said, the PayPal link isn't working (so that removes my first donation method); and the Bitcoin link requires me to copy/paste an address to my phone (there isn't even a QR code). So I just plain can't donate.


Like you said, that was surprisingly tough to donate with. I had to make a donation by sending money via Square Cash to $OSTIF.


I've often found that a problem with open source projects is that they aren't set up as non-profits, so you can't claim a donation on your taxes. Perhaps minor, but this could matter a lot to, for example, an organization looking for a donation target.


To back this up with an example, we (DuckDuckGo) tried to look for open source projects with 501(c)3 status or similar for donations, but many projects or organisations don't make it clear. The alternative was to look through online IRS records which not every donor will do.

Another problem was finding up-to-date info about whether funding is actually needed and what for. Donors usually want to donate to a project that is needy and in line with their values.

My advice for any OSS project looking for donations is:

* Have a dedicated easy-to-find donations page or donations.md file.

* Explain what you need funding for. Specific is better.

* Indicate how to donate and how to contact you for donation questions.

* Make your tax status clear, even if you're not a non-profit.

* To reduce bureaucracy, consider joining a non-profit umbrella organisation such as OSTIF ( https://ostif.org/ ) or Riseup Labs ( http://riseuplabs.org/ ).

* If you do get donations, report the effect of the funding either publicly or to the donor. This builds trust and increases the chance of repeat donations in future.

* Include non-financial donation ideas and link to your developer page or contributing.md file (or equivalent).

EDIT: The OpenBSD Foundation is a good example of easy-to-access information about donating, activities, and current and past fundraising campaigns: http://www.openbsdfoundation.org/


Just look at the openssl debacle.

One critical and almost ubiquitous piece of software, and the funding it was getting was ridiculous.


That bureaucracy is part of why joining an open source foundation such as KDE with your project can be worth it.

(KDE e.V. is a registered entity in Germany, and donations to it are tax-deductible.)


> long-time developer of a well-known, respected Linux distro, who had to move back to his hometown

Interesting. Do you remember what distro?


Cursory research seems to suggest Pat Volkerding, of Slackware fame.


So many bootloader fixes, this is awesome! For those using FDE, read this: http://spaceisdisorienting.com/when-fulldisk-encryption-goes...

I tend to use FDE only for non-mission-critical working environments, like casually surfing the web or just messing around with code. FDE can go wrong at the worst of times and can undo years of work if you let it.

That's why, if you're using FDE for anything important, you should be backing up crucial data to containers, or otherwise preparing for the entire disk to be scrambled beyond repair and/or bricked.


You should be doing regular backups, FDE or not. If you can, keep a "cold" copy in a remote location.

For day to day, I use the T3 SSD [1]. It is tiny, ultra-resistant, and not so expensive considering it's an SSD with USB 3.1 Type-C.

[1] https://www.amazon.com/Samsung-T3-Portable-SSD-MU-PT1T0B/dp/...


That article describes how the master encryption key stored on the drive (your password encrypts this key, AFAIK) got corrupted, which meant the author was unable to recover the files on the computer even though nothing was wrong anywhere else.

Now, I'm not familiar with the inner workings of OS X, as I don't personally use that system. However, with FDE (LUKS) on Linux, this is 100% avoidable if you back up the LUKS header: https://calum.org/posts/backup-your-LUKS-header-and-LVM-conf....

This does not negate the need for a real backup, though. You should back up your data regardless, however you want to do that.


I think there ought to be two copies of the header these days; each written out separately and atomically, and checksummed, similar to GPT (in the UEFI spec).
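A minimal sketch of that dual-header idea (a hypothetical layout for illustration, not how LUKS actually stores headers): each copy carries its own CRC32 trailer, and the reader falls back to whichever copy still validates, the way GPT treats its primary and backup headers.

```python
import zlib

def write_headers(header: bytes):
    """Produce two independent copies of a header, each with a CRC32
    trailer, so a single corrupted copy can be detected and skipped."""
    crc = zlib.crc32(header).to_bytes(4, "little")
    return (header + crc, header + crc)  # primary and backup copies

def read_header(copies):
    """Return the payload of the first copy whose checksum validates."""
    for blob in copies:
        data, crc = blob[:-4], blob[-4:]
        if zlib.crc32(data).to_bytes(4, "little") == crc:
            return data
    raise IOError("both header copies corrupted")

# Wiping the primary copy still leaves the volume openable:
copies = write_headers(b"luks-like header bytes")
damaged = (b"\x00" * len(copies[0]), copies[1])
assert read_header(damaged) == b"luks-like header bytes"
```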


I've read several times that VeraCrypt is not 100% TrueCrypt-compatible. Any experiences? I switched most of my stuff to EncFS anyway (and stopped using containers, though the reason for that is more about cloud backup), but I'm not sure whether I would upgrade from the old TrueCrypt 7.1a (for old containers) to VeraCrypt in the future.


VeraCrypt's on-disk format is NOT compatible with TrueCrypt's on-disk format; they are two totally different things and must be handled differently.

The VeraCrypt binary can unlock TrueCrypt volumes but CANNOT create them. The application has a setting to default to TrueCrypt volumes, so if all you do is unlock your TrueCrypt volumes, you can conveniently use VeraCrypt for that.

I think it would be better if you switch to VeraCrypt and set this option. An alternative "native Linux" tool you could use to manage your TrueCrypt volumes on Linux is zuluCrypt [3].

EncFS has its own issues [1], and there are a number of newer projects that seek to replace it; SiriKali [2] is a GUI tool to manage all of those newer tools.

[1] https://defuse.ca/audits/encfs.htm

[2] https://mhogomchungu.github.io/sirikali/

[3] http://mhogomchungu.github.io/zuluCrypt/


> I think it would be better if you switch to VeraCrypt and set this option. An alternative "native Linux" tool you could use to manage your TrueCrypt volumes on Linux is zuluCrypt [3].

You can mount TrueCrypt volumes on Linux without any special tools (only cryptsetup).

    cryptsetup open --type tcrypt [volume] [name]
It even works with hidden partitions.


When you try to decrypt a partition encrypted using TrueCrypt, you need to tick a checkbox that says "TrueCrypt compatibility mode". It has never failed me.

I think that what they mean by "not 100% TrueCrypt compatible" is that it doesn't work the other way — you (probably, I've never tried it) can't decrypt VeraCrypt-encrypted partition with TrueCrypt.


It didn't work in my case. I stayed with the last version of TrueCrypt.


I have a TrueCrypt 7.1a container, and VeraCrypt was not able to open it even with "compatibility mode". So I am staying with TrueCrypt for now.


They can pry TrueCrypt 7.1a from my cold, dead fingers.

https://www.grc.com/misc/truecrypt/truecrypt.htm


How do we request OSTIF to audit a project? For instance, Tox [0] claims to be secure. Is there a way to request them to audit Tox?

[0] https://tox.chat/


Next they are planning to audit OpenVPN and then OpenSSL (according to today's AMA[0] answers on Reddit).

[0] https://www.reddit.com/r/privacy/comments/57yfla/veracrypt_h...


In my experience, VeraCrypt doesn't actually support GPT (GUID) partitioning. I had to re-install Windows after using VeraCrypt to encrypt my new machine.

FWIW, I'd recommend changing your partition type to MBR and using TrueCrypt 7.1a instead.

I don't trust veracrypt after such a negative experience.


VeraCrypt has not been usable for me since upgrading (if I can call it that) to macOS Sierra. There seems to be an issue with FUSE for macOS that prevents me from mounting anything.


Works for me. OSXFUSE 2.8.3 installed - as the VeraCrypt installer instructed. Installed on El Capitan, works after Sierra Upgrade.


Same thing happened to me with macOS Sierra, but it started working after installing the latest Fuse and then rebooting.


I wish macOS had 1st party FUSE (or equivalent) support.


Thanks for the info. Not only am I more sure that VeraCrypt is a good alternative to the discontinued TrueCrypt (which I used and loved for years), but I learned about a new version with important bug fixes.

I should donate to more open source projects; I use them every day. Yesterday I found the "paste" program and almost broke down crying that somebody, 40 years earlier, provided a free solution to problems I didn't even know I had. I love that about free software.


Anyone with experience with DiskCryptor? I've been using it for a while and it seems stable and well-built, but development appears to have halted, and now I'm looking to possibly jump ship to VeraCrypt.


I was a big fan of TrueCrypt before its still-unexplained end.

Glad we finally have a security audit of this fork. True/VeraCrypt has been essential in defending the rights and freedoms of many people. Hope this project continues.


Interesting that you make that point. Do we have anything good to back this up, like court cases, etc.? Cases where use of TC/VC stopped LE from snooping into people's belongings?



IIRC one of its main uses was in places where one needed protection from government spying, like Tor. I believe, though I don't recall specifics, that there were some instances like the ones you mention.


If an audit finds this many critical bugs, there is no doubt that there are more, since the count is indicative of the overall code quality. Still, what they're doing is good work, and crypto remains hard. I am glad they're doing it.


That reasoning strikes me as flawed. It would be like submitting a term paper to a teacher after he proofread your draft and you made all of his suggested fixes, only for his final response to be: "Oh sure, you fixed all the problems I found. But I found so many before; what about the ones I didn't find? C+"

This was a third-party audit, and VeraCrypt fixed all of the critical ones that they found. All software has bugs. To find some that are critical is not necessarily indicative of the quality of code on the whole.


Actually, it would be more like students submitting 100,000 term papers to a teacher after the teacher proofread the drafts and the students made all his suggested fixes.

Auditing a large codebase certainly isn't as easy as proofreading a paper; it's not a "one and done" scenario, especially when you also factor in the security requirements of encryption software. The world won't end if there's a missed spelling mistake or two in a term paper, but if there are critical vulns in VeraCrypt, the entire software could be rendered useless.


I do not think your analogy applies here. Finding 8 critical-risk bugs in a project from an audit is bad news. It is highly suggestive that the codebase is riddled with flaws.

Smoke, fire, yadda yadda.


To add onto the admiral's statement, I mentally sort critical bugs into two kinds:

Critically dangerous but understandable (complex crypto / interactions). Those do not indicate low code quality.

Critically stupid/unwise (you know what these are; e.g., the LibreSSL rant list). These do.


If there were more bugs, why weren't they found?

This is a form of "slippery slope" argument that holds no weight as to the security of VeraCrypt, one way or another.


It's not slippery slope. The slippery slope fallacy covers arguments of the form "if we allow x, then y and z will necessarily occur, therefore we can't allow x". (Note that the slippery slope fallacy is not a pure logical fallacy; the argument is valid if the slope is, in fact, slippery.)

Between two pieces of software with the same function, if the same audit process finds 10 critical bugs in software A, and 100 critical bugs in software B, which do you think is more likely to have undiscovered bugs?

I'd say B, since no bug discovery process is perfect. For example, assuming a ~90% hit rate, you'd expect there is ~1 undiscovered bug in A, and ~10 in B.

(Note that I am not saying VeraCrypt is bad or that you shouldn't use it. I have no idea how it compares to similar software. All I'm arguing is that, ceteris paribus, discovering more critical bugs is at least weak evidence that more undiscovered critical bugs exist, since no bug-discovery process is perfect.)
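For what it's worth, the arithmetic behind that estimate is just the following (a sketch, with the ~90% hit rate as an assumed parameter, not a measured one):

```python
def expected_remaining(found, hit_rate):
    # If the audit catches each bug independently with probability
    # `hit_rate`, the estimated true total is found / hit_rate, and
    # the difference is what's expected to remain undiscovered.
    return found / hit_rate - found

print(expected_remaining(10, 0.9))   # ~1.1 bugs left in software A
print(expected_remaining(100, 0.9))  # ~11.1 bugs left in software B
```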


"If we found bug x, y and z will necessarily be found, therefore VeraCrypt is insecure."

That's the argument being made implicitly, and the argument you're reiterating.

Saying "Who knows if there are more bugs?" is a useless statement; you simply can't do anything with it. That's the nature of bugs: if you can't identify or find them, you can't claim they exist.

100 bugs in software B does not imply the 101st bug, is my point. There's no such thing as a bug "hit rate".


>If we found bug x, y and z will necessarily be found, therefore VeraCrypt is insecure.

I did not make that argument, I made a probabilistic argument. Trying to apply deductive logic fallacies to probabilistic arguments is a type error.

>if you can't identify or find them, you can't claim they exist.

I made no absolute claim about the existence or nonexistence of bugs, only about the expected number of bugs, that is, a probabilistic estimate.

> There's no such thing as a bug "hit rate".

Of course there is: it is, trivially, the number of bugs found by the audit divided by the total number of bugs.

Look at it this way: I have two auditors, Alice and Charles. Charles is known to be pretty sloppy; he looks over code quickly, skips parts, etc., but the bugs he finds are usually actual bugs. Alice is extremely rigorous and misses very few bugs.

I have 2 pieces of software, SuperCrypt and UltraCrypt.

Charles audits SuperCrypt and finds 137 critical bugs. He audits UltraCrypt and finds only one.

I know Alice will review both code bases next week. Where do you think she'll find more bugs?


It doesn't matter what I think, it matters what can be supported by evidence, and your emotional narrative aside, the number of bugs found in a piece of software does not indicate there are more bugs to be found.

You did make the argument I presented above, even if you didn't mean to. It is not a type error, it is a conclusion that you're trying to imply but don't want to directly state because it's indefensible, and you know it.

There is no such thing as a bug "hit rate", because it would require knowledge of unknown bugs, which isn't possible.

To be abundantly clear, I find your emotional narrative entirely unhelpful to this conversation, but to illustrate how unhelpful, I will play along:

UltraCrypt clearly wasn't audited correctly, and will have more bugs in it given the complexity of the problem space and the lack of issues found. SuperCrypt was clearly audited well and will therefore have fewer bugs if Charles continued looking and when Alice looks next week.

This, I repeat, is wholly unhelpful, because the same information can be used to create arguments for either point, and is therefore not useful.


>There is no such thing as a bug "hit rate", because it would require knowledge of unknown bugs, which isn't possible.

It is possible to estimate the number of things you have not seen; the German Tank Problem [0] is a famous example. For software: if Alice and Bob independently look at a piece of code for bugs, and Bob finds 17, Alice finds 21, and 16 overlap, then you have some knowledge of Alice's and Bob's rates of finding bugs, and you can estimate the number of bugs expected to remain. As the sample sizes increase, the confidence intervals shrink, giving quite good knowledge of the count of unknown things.
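That Alice-and-Bob estimate is the Lincoln index from [1]; plugging in the numbers from this example (a sketch, assuming the two searches are independent):

```python
def lincoln_index(e1, e2, overlap):
    # Estimate of the total bug count from two independent searches:
    # e1 and e2 are the bugs each tester found, `overlap` the bugs both found.
    if overlap == 0:
        raise ValueError("no overlap: this estimator gives no finite bound")
    return e1 * e2 / overlap

total = lincoln_index(17, 21, 16)   # Bob: 17, Alice: 21, both: 16
distinct_found = 17 + 21 - 16       # 22 distinct bugs actually seen
print(total)                        # 22.3125 estimated in total
print(total - distinct_found)       # ~0.3 expected still undiscovered
```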

You are assuming once an audit is done, all bugs are found. This is rarely the case, and I doubt has ever been the case on software as complex/large as this piece.

He is assuming an audit can find bugs, each with some probability. This is a quite reasonable position, one borne out by practice.

His view is true in practice (and in theory, since proving software correct is tremendously difficult except in rare cases). There is ample literature on how many bugs are likely left after an audit, and this audit itself shows that these bugs remained after people (presumably) looked over the code previously.

More effort/skill usually turns up more bugs. It's that simple. Do another audit with more skilled eyes, paid more money, given more time, and they will likely find bugs that this one did not.

There is a large literature on estimating the number of bugs left after some are found. Here [1] is an example of how this is done.

[0] https://en.wikipedia.org/wiki/German_tank_problem

[1] http://www.johndcook.com/blog/2010/07/13/lincoln-index/


His view is not correct, nor are the links you've provided directly relevant to this discussion. The "German Tank Problem" requires knowledge of variables that aren't present in the VeraCrypt security audit.

In fact, your second link begs the question, as it requires the prior knowledge of a "bug hit rate", which is defined in the blog post as a fairly arbitrary number, with plenty of caveats, and requires multiple independent audits.

In fact, from your linked blog, "Maybe the Lincoln estimate is not at all accurate, but it does tell you to be worried that there may be a lot more bugs to find since the overlap between the two bug lists was so small."

I begin to wonder if you actually read your own links...


From the link [1]

"There’s a simple statistic called the Lincoln Index that lets you estimate the total number of errors based on the number of errors found"

"Suppose two testers independently search for bugs. Let E1 be the number of errors the first tester finds and E2 the number of errors the second tester finds. Let S be the number of errors both testers find. The Lincoln Index estimates the total number of errors as E1·E2/S."

E1 = count from searcher 1
E2 = count from searcher 2
S = number of overlapping bugs

If S != 0, then a good estimate of the number of bugs in the code is E1·E2/S. One can derive error bounds and confidence intervals if desired.

Nowhere does this formula need prior knowledge. All it needs is at least two testers counting the bugs each found and the overlap between them. This is precisely the example I gave above.

As one analyzes how this works, one finds that this estimate is likely lower than the true bug count. A simple reason is explained in "Beginning Software Engineering" by Rod Stephens, p. 197, which you can find on Google Books and read.

Alternatively, try searching for "estimate number of software bugs" in Google scholar and you can see a vast literature based exactly on refining methods like this.


> Nowhere does this formula need prior knowledge

I never said the Lincoln Index required prior knowledge, I said the German tank problem required prior knowledge that isn't available in the VeraCrypt security audit.

Your example may have been copied from the blog post, but you failed to include any of the caveats that were present in the blog post, which are also present in the "vast literature" in Google Scholar.

Rod Stephens even says, in the very citation you attempt to make, "Like all the other bug estimation techniques, this one isn't perfect. It relies on the assumption that the testers have an equal chance to find any particular bug, and that's probably not true." It does go on to say the Lincoln Index "probably underestimates the true number of bugs," but in that aspect Rod Stephens disagrees with the body of work available both in your own links and in the Google Scholar articles. He then cites Wikipedia, so he's basically admitting he's not an expert.

I am not and have not been saying there are no further bugs in VeraCrypt.

I am saying the likelihood of there being "many" more bugs in VeraCrypt is not available, given the data we currently have. We don't have E2, for example, since you're focused so intently on the Lincoln Index.


>I never said the Lincoln Index required prior knowledge

You wrote

>In fact, your second link begs the question, as it requires the prior knowledge of a "bug hit rate",

so you wrote exactly that the Lincoln index "requires the prior knowledge of a bug hit rate".

>which are also present in the "vast literature" in Google

Ah, so you read all the literature. Good. Then you'll realize there are other methods than Lincoln, so your complaint about not knowing E2 is mitigated.

>I am not and have not been saying there are no further bugs in VeraCrypt.

You wrote above

> If there were more bugs, why weren't they found?

What did you mean here except to imply you believe there are no more bugs?

>I am saying the likelihood of there being "many" more bugs in VeraCrypt is not available, given the data we currently have.

Here [1] is the report.

It states in multiple places not all code was checked. For example, page 6, states that "Some components of VeraCrypt have not been investigated" including "the OS X and Linux versions of VeraCrypt". Want to bet if there are bugs in there?

Of the parts checked, bugs were found by multiple independent groups, including Open Crypto Audit Project, Fraunhofer Institute for Secure Information Technology, and James Forshaw. These bugs have very little overlap.

OCAP phase I found 11 bugs, page 4. OCAP Phase II found 4, page 16, two of which overlap previously known bugs. James Forshaw found 2 more bugs, page 18. Sections 5, 6, and 7 each list more bugs by (presumably) the two people listed working on this portion.

Given less than 100% code coverage and the lack of overlap between the bug sets found by independent groups, it is extremely likely there are more bugs.

This paper itself demonstrated audits by one group followed by later audits by another group, each finding more bugs.

Yet you want people to believe there are no more bugs since we have not located them, despite the vast evidence and literature above pointing to more bugs lurking.

As you stated, "If there were more bugs, why weren't they found?"

[1] https://ostif.org/wp-content/uploads/2016/10/VeraCrypt-Audit...


Thanks for catching my mistake, I was referring to the first link when I said it "requires the prior knowledge of a bug hit rate".

I think you've misread the report, as there weren't independent groups; it was two people from QuarksLab who performed the audit. Right there on page 6, the page you cite. There weren't multiples of the same bugs found by these two people in the report (why would they both report it?), OCAP phases I and II weren't done on VeraCrypt, and James Forshaw hasn't looked at VeraCrypt at all. This was E1. There is no E2.

The alternative methods for determining bugs are all as flawed in this context as the one you've weirdly chosen to champion, the Lincoln index. What other specific method would you like to attempt to apply here? I'll gladly walk you through why it won't make sense in this context.

I'll say it again: I never said there were no more bugs. You've built this strawman because it's easy to dismantle, however it isn't real.

What I asked was, as you've quoted, "If there were more bugs, why weren't they found?" This draws an important line between the theoretical and the actual; the accusation of bugginess can be levied at any program. It is of no use or value to anyone to say such vague things as, "VeraCrypt may have more bugs!" Of course it could, that's a worthless statement. What's valuable is actually demonstrating those bugs.

It's painfully clear you didn't actually understand the report, and you quote me twice, despite admitting you didn't understand the quoted text.

I'll repeat here for you: I don't think and never said there are no remaining bugs in VeraCrypt. I do, however, believe it is useless to essentially exclaim to everyone that VeraCrypt is a buggy piece of software (that's what you're doing when you say it might have other bugs).

If there are other bugs, where are they? Surely you can find them if you know they exist.


>Thanks for catching my mistake, I was referring to the first link when I said it "requires the prior knowledge of a bug hit rate".

Odd, since the rest of that sentence discussed a blog post, and the first link is a Wikipedia article, while the second is a blog post. Nothing in the German Tank problem corresponds to prior knowledge of a rate of any kind. The word "rate" does not even appear on the page. It does appear in the blog post.

Your repeated intellectual dishonesty makes further discussion pointless. You are bizarrely resistant to admitting there is an entire field called software defect estimation, or that future bug trends are not independent of current bugs. I give up, like everyone else in your thread.


I honestly don't even think you know what I'd be wrong about, since you said you don't know the meaning of the comment you disagreed with.


I don't understand how you still can't get it. Let us try again.

Suppose you're a professor and you had a super-productive student who wrote 100 very interesting papers. You only have time to read 50 of them, selected at random. However, you notice that 10 of these 50 papers contain mathematical errors. What do you expect of the remaining papers: how many errors will they contain? This is basic statistics, and nothing about this argument is emotional.

What if you read 50 papers and only one contained mistakes? Would you expect more or fewer errors now in the remaining papers?
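A minimal sketch of that extrapolation, assuming a uniform random sample and at least one error per flawed paper (the counts are from the hypothetical above, not from any real audit):

```python
def extrapolate_errors(sampled, flawed, total):
    """Estimate flawed items among the unread portion.

    Assumes 'sampled' items were drawn uniformly at random from
    'total' items, so the observed flaw rate carries over to the rest.
    """
    rate = flawed / sampled      # observed flaw rate in the sample
    remaining = total - sampled  # items not yet examined
    return rate * remaining

# Read 50 of 100 papers, 10 flawed -> expect ~10 flawed among the rest.
print(extrapolate_errors(50, 10, 100))  # 10.0
# Read 50 of 100 papers, only 1 flawed -> expect ~1 more.
print(extrapolate_errors(50, 1, 100))   # 1.0
```

The same logic is what makes a partial audit informative: the flaw rate in the sampled code is the best available estimate for the unsampled code.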


Your analogy isn't relevant to this conversation.


Do you really think they read through the entire source code and understood every facet of it? Code audits are nothing but sampling. Just like in the example I mentioned. I don't mind people disagreeing with me, but you could at least make an effort to tell me why. Unless this bores you, of course.


I don't think you're understanding the conversation that's going on, and I do find it boring to re-state things I've already written.


I respect that. So, let us not continue and I hope we didn't bore you too much. I wish you a pleasant day.


Uh, on the contrary, I've found your narrative helpful in showing how one might arrive at the conclusion that finding more bugs is evidence that more won't be found later.

I often try to back up abstract arguments with a concrete illustration, it makes things clearer for me.

I've clearly failed to make myself understood, I don't think it's useful to continue this conversation.


I'm saddened how you'd rather end conversation than change your beliefs.


The audit confirms my suspicions that VeraCrypt is relatively unsafe.

I would also suggest taking into account the fact that, despite being open source, it is primarily developed by a French security company whose web pages https://www.idrix.fr/Root/ do not inspire confidence, in combination with France's history of dealing with encryption. A conflict of interest can easily arise in such companies between the companies or authorities that pay them and the interests of the general public.

"You'll find below a partial list of those who gave us their full trust, as some of our customers prefer to remain discrete about our collaboration."


I don't think that's a particularly fair reading of the audit. First, they had the audit done at all, which is something a very short list of software products can even claim. E.g., and I am not casting aspersions on the project, I don't think `dm-crypt` has done a formal 3rd-party audit to date, and it's probably the next-best alternative to VeraCrypt, not counting the last version of TrueCrypt itself.

About the only really unfortunate thing that I saw in the audit was the addition of the old Soviet 64b block cipher that they need to remove. They shouldn't have added that, noble though the goal of adding non-Western-derived algos might have been. But the remainder of the issues noted in the audit are likely there from TrueCrypt, and as the VC team fixes them, it will mean that VeraCrypt is definitively better than TrueCrypt. That's a good piece of information to know, as I suspect there are lots of people around still using TrueCrypt because nobody has really produced anything better yet.


> do not inspire confidence in combination with France's history of dealing with encryption

So which country would inspire confidence then? Reading the news these days it seems most developed countries are equally guilty of mass surveillance and trying to meddle with encryption technologies.

The only way to really be sure is to have independent audits like they just did, and the VeraCrypt developers are apparently keen to keep making these audits happen.


First of all, mass surveillance has nothing to do with this topic. Second, trying to meddle with encryption technologies is not the same as state regulation of encryption and prohibiting encryption, which is what France's lawmakers did in the past and tried to re-institute recently.

I'm not the kind of guy who compiles the source code himself and vets it against the audited code (who is, frankly speaking?), so I only run encryption software that has been developed by individuals and non-profit groups of individuals that do not run a private security company and that I consider trustworthy based on personal criteria.

In cryptography, private companies have proved again and again that they cannot be trusted. They invariably mess it up due to incompetence or government meddling (see, e.g., Crypto AG as a typical example). Open source is good, of course, but it is only a necessary step, not a sufficient one. Security audits will not help you at all if you download binaries from untrusted sources.

It's fine if you trust Idrix, but it's also fine and fully rational if I don't, based on what I know about France's intelligence apparatus and how they operate.

Yes, I do consider it possible with a low but significant enough subjective probability that Idrix is a government setup.



