
There has never really been a place for it; "Fragmentation Considered Harmful" is one of the original and most famous "Considered Harmfuls" and it's from the late 1980s. A lot of protocol engineering, from MSS options to EDNS0, goes into ensuring that you never hit the fragmentation case in the first place. It's been kept around as a mechanism of last resort for weirdo hops with bizarro MTUs.
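A minimal sketch of the arithmetic behind one of those mechanisms, MSS clamping: pick a TCP segment size small enough that segments never need IP-layer fragmentation on the path. The header sizes below are the standard fixed minimums; real stacks also subtract any TCP option bytes.

```python
IPV4_HEADER = 20   # bytes, no IP options
IPV6_HEADER = 40
TCP_HEADER = 20    # bytes, no TCP options

def tcp_mss(path_mtu: int, ipv6: bool = False) -> int:
    """Largest TCP payload that fits in one unfragmented IP packet."""
    ip_header = IPV6_HEADER if ipv6 else IPV4_HEADER
    return path_mtu - ip_header - TCP_HEADER

print(tcp_mss(1500))             # 1460 -- the classic Ethernet MSS
print(tcp_mss(1500, ipv6=True))  # 1440
```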


I’m surprised by the willful disregard of the feature, e.g., with some routers just dropping fragmented packets. Sending packets bigger than the MTU seems like a reasonable thing to want to do—especially given that IP by design won’t guarantee a stable path between endpoints. Was the reasoning that it’s always better handled at a higher protocol layer?


Yes, that's the reasoning. Fragmentation is kind of a performative half-measure. You're not really enabling delivery across varying network links, in that the performance is so bad (necessarily!) that it alters the service model for many protocols.

IPv6 moves to an end-to-end fragmentation model, but even then the right answer is to negotiate in a higher level protocol a maximum segment size for the path you're talking on, and then just avoid fragmentation entirely. Fragmentation is an absolutely wretched stream transport protocol!


the messy challenge here is that the trade-offs shifted, but the contrarian arguments persist. packets are too small to be efficient without excessively tight loop optimizations, which is what makes fragmentation "slow" (it's not much of a slowdown in a simple software implementation; it's a disaster in a device optimized for the common case). On the other side, to move away from those over-optimized systems while still delivering on user demand for higher throughput, we need larger payloads - but we can't get to larger payloads without transparent mechanisms for fixup (ICMP, fragmentation, etc.). Sprinkle in some things that have never been formally fixed (the bad parts of ICMP), and it's an ongoing recipe for ossified non-progress.


Fragmentation isn't slow because routers are bad at fragmenting (though: they are); it's slow because the loss of any one fragment forces the discarding of the whole packet. Because you can't possibly make a transport protocol as dumb as fragmentation reassembly fast, forwarders don't optimize it.
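A quick sketch of the loss amplification described above: if a packet is split into n fragments and the loss of any one fragment drops the whole packet, the effective packet loss rate grows with n. The numbers here are illustrative, not from the thread, and assume independent per-fragment loss.

```python
def effective_loss(frag_loss: float, n_fragments: int) -> float:
    """Probability the reassembled packet is lost, assuming each
    fragment is lost independently with probability frag_loss."""
    return 1 - (1 - frag_loss) ** n_fragments

# 1% per-fragment loss: unfragmented vs. split into 4 fragments.
print(round(effective_loss(0.01, 1), 4))  # 0.01
print(round(effective_loss(0.01, 4), 4))  # 0.0394 -- nearly 4x worse
```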


The picture in the case of losses is pretty complicated with delayed ACKs, SACK, congestion control, and so on. If loss is high enough for this to be the main concern, TCP performance is generally shot already.


I'm not a network engineer, but somehow I imagine that for at least some links fragmentation could also be handled transparently at a lower layer, i.e. limit fragmentation to single hops and always reassemble packets before passing them on.


It is. In my opinion that's what you should do if you can't provide 1500 MTU to your users directly.


How is fragmenting and reassembling a better option than just communicating what the real MTU is so higher-layer protocols can adjust to it? What is the extra mechanism buying you?


At this point lots of things don't work right otherwise. And assuming your users want to run WireGuard over this link, that takes them down to 1420; if you're already at 1420, it takes them down to 1340... Anyway, you can probably use a lot less header overhead if you do it at a lower layer too.
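A sketch of how that overhead stacks, using just the numbers in the comment above (1500 -> 1420 -> 1340, i.e. 80 bytes of outer-header and tunnel overhead per WireGuard layer):

```python
WG_OVERHEAD = 80  # bytes per encapsulation layer, per the thread's numbers

def nested_mtu(link_mtu: int, layers: int) -> int:
    """Inner MTU remaining after stacking `layers` tunnels on the link."""
    return link_mtu - layers * WG_OVERHEAD

print(nested_mtu(1500, 1))  # 1420
print(nested_mtu(1500, 2))  # 1340
```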



