Wouldn't this attack be better eliminated by fixing the timing leak that is potentially allowing people to guess valid MACs on packets?
The reality is that you probably can dick around with things in your deployment and your app to make timing attacks prohibitively expensive/annoying; if you understand that you're not eliminating the timing leak, but rather masking it, you can take advantage of the additional measurements required to unmask the leak to give your MAC enough of a buffer to last for its whole useful lifetime.
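To make the masking idea concrete, here's a hedged sketch (names, the floor value, and the jitter range are all invented for illustration): pad every MAC verification to a fixed response-time floor plus random jitter, so the per-byte timing difference of a leaky comparison drowns in noise. The leak is still there; an attacker just needs many more measurements to average it back out.

```python
import hmac
import secrets
import time


def masked_verify(key: bytes, message: bytes, tag: bytes,
                  floor_seconds: float = 0.005) -> bool:
    """Illustrative masking wrapper. The underlying comparison
    (ordinary ``==``) still leaks timing; the padding and jitter
    only raise the number of samples needed to exploit it."""
    start = time.monotonic()
    expected = hmac.new(key, message, "sha256").digest()
    ok = expected == tag  # the leaky comparison being masked
    elapsed = time.monotonic() - start
    # Pad up to the floor, plus up to ~1 ms of random jitter.
    time.sleep(max(0.0, floor_seconds - elapsed)
               + secrets.randbelow(1000) / 1_000_000)
    return ok
```

This is exactly the "masking, not eliminating" trade-off described above: the code buys measurement budget, not safety.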
But when you do this, you're really playing on the razor's edge of what we currently know about side channel attacks on crypto, and you're probably going to end up putting more effort into your workaround than you would in just fixing the underlying bug.
The idea of trying to detect people employing timing attacks on your cryptography and block them individually by IP address is so obviously retarded that the comment I'm replying to is indistinguishable from trolling.
I'm not an IT guy, so no, I wasn't trolling. Why exactly is it "retarded" to build your system to reject (or at least flag) access patterns that are unlikely to be due to legitimate activity?
I'd recommend using the word "retarded" with a bit more circumspection. Obviously the incoming IP address doesn't uniquely identify a client who's likely to be on the other side of a NAT gateway. But the idea that a system should just sit there silently and carry on business as usual while any one address or class-C block generates large numbers of failed access attempts seems like a good application of the word in question.
You're not an IT guy, but you are a programmer, and you know that leaving a vulnerability in your code, hoping the devops team catches attempts to exploit it, is a fucking retarded idea. I think you're just trolling.
Who said anything about leaving a vulnerability in the code? If your security model depends on a suboptimal implementation of strcmp(), you have bigger problems than timing attacks.
I'm talking about iptables rate limiting (on Linux; I assume other OSes' firewalls can implement this too). Fixing bugs in isolated code is part of the scope we face, but preventing the business and its customers from suffering due to a bug is also part of what we get paid for. If you are still in school or work in the scientific area, then perhaps you have not come across rate limiting?
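As a sketch of what that iptables rate limiting might look like (the port and thresholds here are invented for illustration, not a recommendation):

```shell
# Illustrative only -- port and limits are made-up example values.
# Drop sources exceeding 50 requests/minute to the service port,
# tracked per source IP, with a burst allowance of 20.
iptables -A INPUT -p tcp --dport 8443 \
    -m hashlimit --hashlimit-above 50/minute --hashlimit-burst 20 \
    --hashlimit-mode srcip --hashlimit-name mac_guess -j DROP
```

Note this is defense in depth against high-volume probing, not a fix for the leak itself; a patient attacker behind many addresses stays under any threshold you pick.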
Fixing the leak today is admirable but challenging, as the many comments have shown. The problem is that regressions can and will happen at any time. There was a problem on Ubuntu with SSH a couple of years ago: SSH had been fine, then someone made a change (I don't recall the details) that went unnoticed for, I think, 2 or 3 years. That change made SSH vulnerable to, I believe, timing attacks, and it could easily have been prevented. This was in SSH. SSH.
No, the change you're thinking of is when Debian's maintainers managed to comment out the "secure" part of its cryptographically secure random number generator, thus ensuring that SSH would only generate keys from a trivially small range of possible values. That change had nothing to do with the fundamental difficulty of generating random numbers.
The fix for not leaking timing from your comparison is trivial: either double-hash, or use a timing-independent comparison function like the XOR-accumulator function upthread.
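For concreteness, here's a sketch of both fixes in Python (function names are illustrative, and the double-hash variant assumes HMAC-SHA-256 as the hash):

```python
import hashlib
import hmac
import secrets


def xor_compare(a: bytes, b: bytes) -> bool:
    """Timing-independent comparison: XOR every byte pair into an
    accumulator, so the running time doesn't depend on where (or
    whether) the inputs first differ. Only the length check can
    short-circuit, and MAC lengths are fixed and public anyway."""
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y
    return acc == 0


def double_hash_compare(a: bytes, b: bytes) -> bool:
    """The double-hash approach: re-hash both sides under a fresh
    random key before comparing, so even a leaky comparison's timing
    reveals nothing about the attacker-visible bytes."""
    k = secrets.token_bytes(32)  # fresh key per comparison
    return (hmac.new(k, a, hashlib.sha256).digest()
            == hmac.new(k, b, hashlib.sha256).digest())
```

(In modern Python you'd just call `hmac.compare_digest`, which exists for exactly this reason; the above shows why it works.)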
It does nobody any favors to spread drama and FUD over what is in fact a simple and easily fixable problem.