Maybe moving the entire application to the browser/cloud wasn't the best idea for a large number of use cases?
Video streaming, sure, but we're already able to stream 4K video over a 25Mbit line. With modern internet connections being 200Mbit to 1Gbit, I don't see that we need the bandwidth in private homes. Maybe for video conferencing in large companies, but that also doesn't need to be 4K.
The underlying internet protocols are old, so there's no harm in assessing whether they've outlived their usefulness. However, we should also consider whether web applications and "always connected" are truly the best solution for our day-to-day application needs.
> With modern internet connections being 200Mbit to 1Gbit, I don't see that we need the bandwidth in private homes
Private connections tend to be asymmetrical. In some cases, e.g. old DOCSIS versions, that used to be due to technical necessity.
Private connections tend to be unstable, and the bandwidth fluctuates quite a bit. Depending on the country, the actually guaranteed bandwidth is anywhere from half of what's on the sticker down to nothing at all.
Private connections are usually used by families, with multiple people using it at the same time. In recent years, you might have 3+ family members in a video call at the same time.
So if you're paying for a 1000/50 line (as is common with DOCSIS deployments), what you're actually getting is usually a 400/20 line that sometimes achieves more. And those 20Mbps upload are now split between multiple people.
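To make the contention concrete, here's a back-of-the-envelope sketch. The per-call figure (~3.5 Mbps upload for an HD video call) is my own assumption, not a measurement:

```python
# Rough upload-contention math for the 1000/50 vs. 400/20 scenario above.
# CALL_UP_MBPS is an assumed figure for one HD video call's upload.

ADVERTISED_UP_MBPS = 50
REALISTIC_UP_MBPS = 20       # what the line often actually delivers
CALL_UP_MBPS = 3.5           # assumed upload per HD video call

def calls_supported(upload_mbps: float, per_call_mbps: float = CALL_UP_MBPS) -> int:
    """How many simultaneous calls fit in the given upload budget."""
    return int(upload_mbps // per_call_mbps)

print(calls_supported(ADVERTISED_UP_MBPS))  # on paper: 14
print(calls_supported(REALISTIC_UP_MBPS))   # in practice: 5
```

So the "3+ family members in a video call" case already eats most of the realistic upload budget.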
At the same time, you're absolutely right – Gigabit is enough for most people. Download speeds are enough for quite a while. We should instead be increasing upload speeds and deploying FTTH and IPv6 everywhere to reduce the latency.
This is a great post. I often forget that home Internet connections are frequently shared between many people.
This bit:
> IPv6 everywhere to reduce the latency
I am not an expert on IPv4 vs IPv6. Teach me: How will migrating to IPv6 reduce latency? As I understand it, a lot of home Internet connections are effectively shared IPv4 behind carrier-grade NAT. (Am I wrong? Or is that not relevant to your point?)
> Google has measured to most customers about 20ms less latency on IPv6 than on IPv4, according to their IPv6 report.
I've run that comparison across four ISPs and never seen any significant difference in latency... not once in the decades I've had "dual stack" service.
I imagine that Google is getting confounded by folks with godawful middle/"security"ware that is too stupid to know how to handle IPv6 traffic and just passes it through.
Here, I would say "need" is a strong term. Surely, you are correct at the most basic level, but if the bandwidth exists, then some streaming platforms will use it. Deeper question: Is there any practical use case for Internet connections above 1Gbit? I struggle to think of any. Yes, I can understand that people may wish to reduce latency, but I don't think home users need any more bandwidth at this point. I am astonished when I read about 10Gbit home Internet access in Switzerland, Japan, and Korea.
Zero trolling: Can you help me to better understand your last sentence?
> However, we should also consider in web applications and "always connected" is truly the best solution for our day to day application needs.
I cannot tell if this is written with sarcasm. Let me ask more directly: Do you think it is a good design for our modern apps to always be connected or not? Honestly, I don't have a strong opinion on the matter, but I am interested to hear your opinion.
Generally speaking, I think we should aim for offline first, always. Obvious things like Teams or Slack require an internet connection to work, but a working internet connection shouldn't be a hard requirement even for a web browser.
I think it is bad design to expect a working internet connection, because in many places you can't expect bandwidth to be cheap, or the connection to be stable. That's not to say that something like Google Docs (others seem to like it, but everyone in my company thinks it's awful) shouldn't be a thing; there's certainly value in the real-time collaboration features, but it should be able to function without an internet connection.
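As a sketch of what "offline first" means in practice: commit every edit locally first, and sync opportunistically when a connection happens to be available. All the names here are hypothetical, and a real app would persist the queue (e.g. to SQLite) and handle conflicts:

```python
# Minimal offline-first sketch: local state is the source of truth,
# the network is an optimization. Hypothetical names, for illustration.

from collections import deque

class OfflineFirstDoc:
    def __init__(self):
        self.text = ""
        self.pending = deque()   # edits not yet acknowledged by a server

    def edit(self, chunk: str) -> None:
        # Always apply locally first -- no network required.
        self.text += chunk
        self.pending.append(chunk)

    def sync(self, send) -> int:
        """Flush pending edits via `send`; keep them queued on failure."""
        flushed = 0
        while self.pending:
            try:
                send(self.pending[0])
            except OSError:
                break            # offline: try again later, data is safe
            self.pending.popleft()
            flushed += 1
        return flushed
```

The point is that `edit` never blocks on the network; losing connectivity only delays `sync`, it never loses work.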
Last week someone was complaining about the S3 (sleep) feature on laptops, and one thing that came to my mind is that despite these being portable, we somehow expect them to be always connected to the internet. That just seems like a somewhat broken mindset to me.
Note that in deeper sleep states you typically see more aggressive limiting of what interrupts can take you out of the sleep state. Turning off network card interrupts is common.
I don't have access to the article, but they're saying the issue is due to client-side ACK processing. I suspect they're testing at bandwidths far beyond what's normal for consumer applications.
Gigabit fiber internet is quite cheap and increasingly available (I'm not from the US). I don't just use the internet over a 4/5g connection. This definitely affects more people than you think.
But that is not what I was replying to. I was replying to the claim that this affects regular 4G/5G cell phone speeds. The data is clear that it does not.
The problem is that the biggest win by far with QUIC is merging encryption and session negotiation into a single packet, and the kernel teams have been adamant about not wanting to maintain encryption libraries in kernel. So, QUIC or any other protocol like it being in kernel is basically a non-starter.
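The handshake win can be put in rough numbers. The round-trip counts below are the standard ones for TCP + TLS 1.3 versus QUIC's combined transport/crypto handshake; the RTT value itself is an assumption:

```python
# Rough connection-setup comparison implied above. The RTT is assumed;
# the round-trip counts are the standard handshake costs.

RTT_MS = 50  # assumed round-trip time to the server

def setup_ms(round_trips: int, rtt_ms: float = RTT_MS) -> float:
    """Time before application data can flow."""
    return round_trips * rtt_ms

tcp_tls13 = setup_ms(2)   # 1 RTT TCP handshake + 1 RTT TLS 1.3
quic      = setup_ms(1)   # transport and crypto negotiated together
print(tcp_tls13, quic)    # 100.0 50.0
```

On a 50 ms path that's half the setup latency, which is exactly the part the kernel teams can't absorb without shipping crypto in-kernel.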
The flexibility and ease of changing a userspace protocol IMO far outweighs anything else. If the performance problem described in this article (which I don't have access to) is in userspace QUIC code, it can be fixed and deployed very quickly. If similar performance issue were to be found in TCP, expect to wait multiple years.
No, but it depends on how QUIC works, how Ethernet hardware works, and how much you actually want to offload to the NIC. For example, QUIC has TLS encryption built-in, so anything that's encrypted can't be offloaded. And I don't think most people want to hand all their TLS keys to their NIC[0].
At the very least you probably would have to assign QUIC its own transport, rather than using UDP as "we have raw sockets at home". Problem is, only TCP and UDP reliably traverse the Internet[1]. Everything in the middle is sniffing traffic, messing with options, etc. In fact, Google rejected an alternate transport protocol called SCTP (which does all the stream multiplexing over a single connection that QUIC does) specifically because, among other things, SCTP's a transport protocol and middleboxes choke on it.
[0] I am aware that "SSL accelerators" used to do exactly this, but in modern times we have perfectly good crypto accelerators right in our CPU cores.
[1] ICMP sometimes traverses the internet, it's how ping works, but a lot of firewalls blackhole ICMP. Or at least they did before IPv6 made it practically mandatory to forward ICMP packets.
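The "only TCP and UDP reliably traverse the Internet" point is also why a userspace transport is feasible at all: any unprivileged process can exchange UDP datagrams. A minimal loopback sketch (the payload is just a stand-in, not a real QUIC packet):

```python
# Any unprivileged process can open a UDP socket and exchange datagrams,
# so a userspace protocol like QUIC ships without kernel or driver
# changes. Loopback demo; no special privileges needed.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # OS picks a free port
receiver.settimeout(1.0)                 # defensive; loopback won't drop
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"pretend this is a QUIC Initial packet", addr)

data, _ = receiver.recvfrom(2048)
print(data)                              # arrived with no root, no drivers

sender.close()
receiver.close()
```

Everything above the datagram boundary (streams, retransmission, encryption) then lives in the application, which is the whole QUIC design.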
SCTP had already solved the problem that QUIC proposes to solve.
Google, of all companies, has the influence to properly implement and accommodate other L4 protocols. QUIC seems like doubling down on a hack, and it breaks the elegance of the OSI model.
The OSI model? We're in the world where TCP/IP won. OSI is a hilariously inelegant model that doesn't map to actual network protocols in practice. To wit: where exactly is the "presentation layer" or "session layer" in modern networking standards?
IP didn't originally have layering. It was added early on, so they could separate out the parts of the protocol for routing packets (IP) and the parts for assembling data streams (TCP). Then they could permit alternate protocols besides TCP. That's very roughly OSI L3 and L4, so people assumed layering was ideologically adopted across the Internet stack, rather than something that's used pragmatically.
Speaking of pragmatism, not everyone wants to throw out all their old networking equipment just to get routers that won't mangle unknown transports. Some particularly paranoid network admin greybeards remember, say, the "ping of death", and would much rather have routers that deliberately filter out anything other than well-formed TCP and UDP streams. Google is not going to get them to change their minds; hell, IPv6 barely got those people to turn on ICMP again.
To make matters worse, Windows does not ship SCTP support. If you want to send or receive SCTP packets, you either use raw sockets and run as admin (yikes), or you ship a custom network driver to enable unprivileged SCTP. The latter is less of a nightmare, but you still have to watch out for conflicts; I presume you can only have one kind of SCTP driver installed at a time, e.g. if Google's SCTP driver is installed and you then switch to Firefox, which only works with Mozilla's SCTP driver, you'll get weird conflicts. Seems like a rather invasive modification to the system to make.
The alternative is to tunnel SCTP over another transport protocol that can be sent by normal user software, with no privileged operations or system modification required. i.e. UDP. Except, this is 2010, we actually care about encryption now. TLS is built for streams, and tunneling TLS inside of multiple SCTP substreams would be a pain in the ass. So we bundle that in with our SCTP-in-UDP protocol and, OOPS, it turns out that's what QUIC is.
I suppose they could have used DTLS in between SCTP and UDP. Then you'd have extra layers, and layers are elegant.
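The contrast above (SCTP needs OS support, UDP never does) is easy to demonstrate from userspace. A sketch whose result depends on the OS it runs on:

```python
# Probe whether the running OS exposes SCTP to unprivileged processes.
# UDP, by contrast, works everywhere without privileges -- which is
# exactly the property QUIC relies on.

import socket

def has_kernel_sctp() -> bool:
    """True if an unprivileged process can open an SCTP socket here."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                          socket.IPPROTO_SCTP)
    except (AttributeError, OSError):
        return False      # stock Windows (and macOS) land here
    s.close()
    return True

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # never fails
udp.close()
print(has_kernel_sctp())
```

On a typical Linux box with the SCTP module this prints True; on stock Windows it prints False, which is the deployment problem in a nutshell.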
As others in the thread summarized it, the paper says the issue is ACK offload. That has nothing to do with whether the stack is in kernel space or user space. Indeed, there's some concern about the opposite scenario: because the kernel moves so slowly, updates take much longer to propagate to the applications that need them, whereas userspace stacks can be updated as quickly as the endpoint applications need.