I’m not sure that’s enough. robots.txt isn’t legally binding, so if serving the zip bomb would somehow be illegal, guarding it behind a robots.txt rule probably wouldn’t make it legal.
Neither is the HTTP specification. Nothing is stopping you from running a Gopher server on TCP port 80, should you get into trouble if it happens to crash a particular crawler?
Making an HTTP request to a random server is like uttering a sentence to a random person in a city: some can be helpful, some may tell you to piss off, and some might shank you. If you don't like the latter, then maybe don't go around screaming nonsense loudly at strangers in an unmarked area.
The law might stop you from sending specific responses if the only goal is to sabotage the requesting computer. I’m not 100% familiar with US law, but I believe intentionally sabotaging a computer system is illegal there.
No, why would they? If I voluntarily request your website, you can’t just reply with a virus that wipes my hard drive, even though I had the option not to send the request. I didn’t know you were going to sabotage me before I made the request.
Because you requested it? There is no agreement on what or how to serve things, other than standards (your browser expects a valid document on the other side etc).
I just assumed a court might see a difference between you requesting all guessable endpoints and finding the one endpoint that harms your computer (when there was _zero_ reason for you to access that page), and someone putting a zip bomb into index.html to intentionally harm everyone.
So serving a document exploiting a browser zero day for RCE under a URL that’s discoverable by crawling (because another page links to it) with the intent to harm the client (by deleting local files for example) would be legitimate because the client made a request? That’s ridiculous.
That is not the case in this context: robots.txt is the only thing that specifies the document URL, and it does so in a "disallow" rule. The argument that they did not know the request would be met with hostility could be moot here (possibly because a "reasonable person" would have chosen not to request the disallowed document, though I'm not really familiar with when that language applies).
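For illustration, a hypothetical robots.txt where the trapped URL appears nowhere except in the disallow rule itself (the path is made up):

```
User-agent: *
Disallow: /secret-trap.html
```

Any crawler that fetches /secret-trap.html in that scenario has necessarily read the rule telling it not to, which is the point being made about a "reasonable person".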
> by deleting local files for example
This is a qualitatively different example than a zip bomb, as it is clearly destructive in a way that a zip bomb is not. It's true that a zip bomb could cause damage to a system, but that isn't guaranteed, while deleting files is necessarily damaging. Worse outcomes from a zip bomb might result in damages worthy of a lawsuit, but the presumed intent (and ostensible result) of a zip bomb is to effectively force the recipient machine to shut down, which a court may or may not see as legitimate given the surrounding context.
Has any similar case been tried? I'd think a judge, on learning the intent of robots.txt and disallow rules, is fairly likely to be sympathetic. Still, it seems like it could go either way. (A jury is probably more of a crap-shoot.)
> can make an easy case to the jury that it is a booby trap to defend against trespassers
I don't know of any online cases, but the law in many (most?) places certainly tends to look unfavourably on physical booby-traps. Even in the US states with full-on “stand your ground” legislation, and in the UK where common law allows all “reasonable force” in self-defence, booby-traps are usually not considered self-defence or standing ground. Essentially, if it can go off automatically rather than being triggered by a person acting in defence, it isn't self-defence.
> Who […] is going to prosecute/sue the server owner?
Likely none of them. They might, though, take tit-for-tat action and pull that zip bomb repeatedly to eat your bandwidth, and they likely have more and much cheaper bandwidth than your little site. Best have some technical defences ready for that, as you aren't going to sue them either: they are probably running from a completely different legal jurisdiction, and/or the attack will come from a botnet with little or no evidence trail as to who kicked it off.
Booby-trapping your house appears to be illegal because of the potential threat to life and health. A zip bomb doesn’t threaten any people; at worst, it can fill up memory and storage on a device. I’m pretty sure it wouldn’t violate any of the same statutes, and it most likely wouldn’t fall neatly under any of the common-law jurisprudence you mentioned.
> pull that zipbomb repeatedly to eat your bandwidth, and they likely have more and much cheaper bandwidth than your little site.
Go read what a zip bomb is. There are nested ones only a few KB in size, which is comparable in server load and bandwidth to serving robots.txt itself.
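As a rough illustration of how nesting shrinks a bomb (a sketch using all-zero input and repeated gzip passes, not any particular published bomb; sizes are approximate):

```python
import gzip

# 10 MB of zero bytes; real bombs start from 1 GB or more,
# this is scaled down just to keep the demo quick.
payload = bytes(10 * 1024 * 1024)

# First pass: ~10 KB, since deflate encodes long zero runs very cheaply.
first_pass = gzip.compress(payload, compresslevel=9)

# Second pass: much smaller again, because the first pass's output is
# itself highly repetitive.
second_pass = gzip.compress(first_pass, compresslevel=9)

print(len(payload), len(first_pass), len(second_pass))
```

Each extra layer of nesting trades decompression effort on the client for size on the server, which is how a multi-gigabyte payload fits in a few KB on disk.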
No need to be a dick. Especially when you yourself are in the process of not understanding what others are saying.
I know full well what a zip bomb is. A large compressed file still has some size even in compressed form: without nesting, 1 GB of minimal-entropy data is roughly 1 MB gzipped. If someone has noticed your bomb and worked around it by implementing relevant checks (or isn't really affected because those checks were already in place), they can get a little revenge by soaking up your bandwidth downloading it many times.

OK, so nested it comes down to a few KB, but then they can throw a botnet at that, or at some other content on your site, and cause you some faff, if they wish to engage in tit-for-tat action.

Also: nesting doesn't work when you are using HTTP transport compression as your delivery mechanism, which is what is being discussed here. HTTP technically allows stacked codings (“Content-Encoding: gzip, gzip”), but “standard” libraries supporting compressed HTTP encodings don't generally unpack nested content.
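The single-pass ratio mentioned above is easy to check (a sketch using Python's standard gzip module; 100 MB of zeros stands in for “1G of minimal entropy data” to keep the run quick):

```python
import gzip

# 100 MB of zero bytes: about as low-entropy as data gets.
raw = bytes(100 * 1024 * 1024)

compressed = gzip.compress(raw, compresslevel=9)
ratio = len(raw) / len(compressed)

# Deflate tops out around 1030:1 on runs like this (258-byte matches
# encoded in a few bits each), so expect on the order of 1000:1.
print(f"{len(raw)} -> {len(compressed)} bytes (~{ratio:.0f}:1)")
```

That ~1000:1 ceiling per pass is exactly why a single-layer bomb served as Content-Encoding: gzip still costs the server about a megabyte per gigabyte delivered.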
Most, perhaps the vast majority, won't care to make the effort, so this could be considered hypothetical, but some might. There were certainly cases, way back in my earlier days online, of junk mailers and address scrapers deliberately wasting the bandwidth of sites that encouraged the use of tools like FormFucker or implemented scraper sinkholes.