
I don't see how Rust will manage to fix async/await without making a v2 of the language. While working with sync Rust is often a very pleasant experience, every time I used async/await it was awful. To the point that I avoid using Rust for web/networking stuff; any other option is better. It's also very discouraging to read some people saying there's no issue with async/await. Of course, if you use thousands of libraries (that are more or less hacks), and never use traits and generics, it's probably a decent experience half of the time, I'd say.

It's something I've only encountered with the Rust community; in Java, when a feature sucks, we just say it sucks. Even the architect who approved it will say it sucks.



I recently built a little Rust project, which I wanted to integrate into a Web App using WebAssembly (wasm-bindgen). Writing the Rust code was super fun; however, when I execute one of my functions (which takes around 10 seconds to complete), the whole Web App freezes. Well, this should be an easy fix, just make the Rust function async - right? Nope. Making the Rust function async to return a Promise: Web App freezes. Wrapping the call to the Rust code in Promises: Web App freezes. Using Web Workers to run any Rust code in a separate thread: great, now you can’t share ANY state whatsoever with the UI thread (and no, you can’t pass any Rust objects to another Web Worker thread, because wasm objects only consist of _bindings_ which need an initialised WASM instance in their own thread). To fix this problem, people on GitHub redirected me to a parallel raytracer example, which requires nightly Rust and some other hacky tricks to get running. Maybe I will figure out how to integrate this into my project, maybe I won’t. All I wanted to do was run one simple async function using WASM. But as it turned out, I will have to implement a frikin raytracer first.


I think your issue has nothing to do with Rust, and is all about WASM being executed on the main event loop thread.

Rust can't magically create parallelism (this is not even a concurrency thing, which is what async provides) where there is none.


Yeah, you'd have the same issue if you ran the expensive computation in JavaScript. What you need is worker threads, and yes, you can't share objects with worker threads easily.


The core problem I see is that Rust tried too hard to avoid allocations/copies and a heavy async runtime. The result is confusing and difficult types and very unclear errors, due to APIs producing ridiculous types like Future<Arc<Box<dyn Thing>>>.

The language should have defined a consistent async model (like coroutines with message passing) and kept "raw async" confined to high performance special libraries and embedded use cases.


That's not Rust's fault, but the fault of library creators / programmers who often paint themselves into a corner by refusing to use a heap allocation or a clone here and there. Once you are ok with occasional boxing, you can easily do a lot of stuff that's otherwise hard - e.g. async traits or storing/passing/moving futures like any other value.

It is quite unfair to compare the convenience of Rust async used with no heap allocations against other languages which don't offer an option to avoid boxing at all. In e.g. Java, when you return a Future, it is always on the heap. In Golang, you don't even have a Future concept. So you basically can't do the same thing in other languages at all.


One thing I am afraid of is that people who make Rust applications or libraries won't be responsible with its advanced features (which are great!).

Like, I am positive we'll be getting GAT misuse/abuse just by sheer accident, because people don't understand them (they're a complex language feature), just like we have libraries going async needlessly.


Funnily enough, this is discussed at length in the GAT stabilization thread.


There's only the `type Future` mess because GATs aren't exposed (they're currently compiler-internal), so that's easy to fix. They could stabilize it already, but they're listening to community feedback in the respective issue (which is a fantastic read, btw).


It would only be a partial fix. It will allow async interfaces - but if just "plain GATs" are exposed, then those interfaces would transform into something where only Rust language experts know what's actually going on. And the majority of developers are not that, and care more about working on their actual tasks/projects and less about extremely unique features of programming languages (like GATs).

It will also not fix the "lifetimes for async functions are special" issue, due to the lazy evaluation of futures, and the "async functions might sometimes stop executing in the middle" challenge.


I can confirm this behaviour. I once made a video about the downsides of Rust[1] where I also talk about the problems you encounter when using `async`. Even though it's clear that there are fundamental problems (e.g. missing support for async traits), people usually comment that I "should've used a higher-level library" to overcome this problem.

[1]: https://www.youtube.com/watch?v=pmt0aaquh4o


> To the point that I avoid using rust for web /networking stuff, any other option is better

How can you write concurrent networking code in Golang then? It doesn't support async traits either. Well, it doesn't support async at all, only green threads ;) And somehow people are ok with it, and say it is simple.

Hint: you can use the same style of concurrency in Rust. Don't use futures. Spawn frequently and communicate with channels.


The whole ecosystem is using Tokio and async for web stuff. I agree that for those cases Go is much saner.


My point was that you can program in Rust in exactly the same style as Go, not hitting any of the problems mentioned in the linked article. Just use tokio::spawn and tokio channels and you get all the stuff you have in Go.

And btw, I did just that recently - I ported a proxy written in Go to Rust, and it actually turned out to be simpler than the original thanks to things like join!, select!, RAII, and being able to return a value from an async call easily.


If you have any pieces of code you can share, I'd be interested in seeing what Go code looks like in equivalent Rust with tokio.


Async/await is a syntactic burden. In Golang, everything is async, so there is no need to differentiate between async and sync methods.

Goroutines = coroutines with an implicit await on every function call.

> And somehow people are ok with it, and say it is simple.

Threads and queues are (IMHO) fairly easy to understand and work with. If you know how you'd write an application with threads and queues, goroutines and channels work pretty much the same way.


It is not a syntactic burden, because it provides valuable information to the code reader. It is very important to me that I can quickly spot all places where execution can wait. For example, the user interface should take that into account, because no one likes programs that freeze temporarily waiting on something.

Also, await points are places where execution can jump between threads - this is critical for thread safety analysis. Making them invisible in a language that doesn't offer reliable data race detection is a similar idea to making types invisible in a language with dynamic and weak typing.

And finally, a model where await is implicit and automatically inserted by the compiler/runtime is less expressive (more limited) than async/await. For instance, you wouldn't be able to do the following if await were automatically inserted on each async call, because then they'd run sequentially, and I don't want that:

    let future1 = do_one_async_thing();
    let future2 = do_another_async_thing();
    let (res1, res2) = join!(future1, future2);

> Threads and queues are (IMHO) fairly easy to understand and work with

1. Sure, goto is easier to understand than for loops, ifs and functions. And it has its place, and I agree sometimes it can make things simpler. But I wouldn't want to work in a language where it is the only control flow construct.

2. Threads and queues are present in Rust as well, so that's not the reason Go could have an edge here.


> It is not a syntactic burden, because it provides valuable information to the code reader

"Valuable" is subjective. I personally find async/await annotations annoying and pointless, and even leading to subtle bugs if you forget to put an "await" in the right place. Code should not depend on questions like "how long will this method run".

> Making them invisible in a language that doesn't offer reliable data race detection is a similar idea like making types invisible in a language with dynamic and weak typing.

Golang has a race detector: https://go.dev/blog/race-detector

> For instance you wouldn't be able to do this if await was automatically inserted on each async thing, because they'd run sequentially and I don't want to

You can do that in Golang with channels:

    ch1 := make(chan ResultType1)
    ch2 := make(chan ResultType2)
    go do_one_async_thing(ch1)
    go do_another_async_thing(ch2)
    res1 := <-ch1
    res2 := <-ch2

To me, this approach is much more intuitive because communication happens explicitly through a channel, instead of implicitly through a Future that is magically returned from async functions if you don't put "await".

> Sure, goto is easier to understand than for loops, ifs and functions. And it has its place, and I agree sometimes it can make things simpler.

This is a dishonest take. Threads and queues are the most basic concurrent mechanisms, and async/await just emulates threads of execution in userspace. Golang does that too. The only difference is the lack of special syntax, and channels are used for inter-task communication instead of Futures.

> Threads and queues are present in Rust as well, so that's not the reason Go could have an edge here.

Threads in Rust are OS threads, and they are heavy. That's why the whole "async" world exists in the first place - because launching many OS threads slows down your system and takes up a lot of memory.

Go takes the execution model of threads and emulates them in userspace with an implicit async runtime, so they are much faster and take much less memory.


> async/await annotations (...) leading to subtle bugs if you forget to put an "await" in the right place

In most cases, if you forget the await, the return type will be different and it will be caught by the compiler as a hard error. And if you don't use the result of something important, that also leads to a warning.

However, I partially agree with you here - I can see this could lead to a potential problem when you rely on side effects of a function that doesn't return anything. But a side-effecting, async, infallible function that doesn't return anything (so it can't even return an Err) is IMHO quite a weird thing in this world. You're much better off spawning a task in that case. And spawned tasks are executed eagerly; they don't have to be awaited to run.

> Golang has a race detector: https://go.dev/blog/race-detector

Which works only at runtime and detects only some races. https://eng.uber.com/data-race-patterns-in-go/

> To me, this approach is much more intuitive because communication happens explicitly through a channel...

So an occasional single await here and there is a syntactic burden, but having to create explicit channels to communicate the results back, instead of using function return values, is suddenly not a syntactic burden? Not to mention the performance impact: you need to allocate those queues and cross-thread communication channels and then let the runtime synchronize the communication properly, and also beware of the fact that your coroutines execute in parallel (not just concurrently), so the risk of data races is higher.

> async/await just emulates threads of execution in userspace

Technically yes, but the way one programs with async/await is fundamentally different from programming with threads. It is a much more structured and high-level approach. The data flow is basically the same as with normal sync calls - I run some async code, I get results back as a standard function return. Channels offer a lot more flexibility: I can have a channel that delivers data from an arbitrary point of thread A to an arbitrary point of thread B. Very much like goto can jump from arbitrary point A to point B. See: https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

> Threads in Rust are OS threads, and they are heavy.

Actually, OS threads are surprisingly fast and lightweight these days, and this claim could be challenged. I've seen many threaded programs outperform programs using async, and there are probably rather few use cases where green threads would offer a significant performance advantage. The difference is you're scheduling them in userspace vs kernelspace, but you're still scheduling them blindly, and you still need the same synchronization primitives to coordinate the threads. And that coordination is expensive. Spawning a green thread (a task on a threadpool), running it and returning a result over an MPSC queue is still orders of magnitude slower than modifying a variable on the caller's stack directly (which Rust async/await often reduces to).

But anyway, Rust also has green threads. Just use tokio::spawn + channels, and those are your lightweight threads, just as in Go.


> a side effecting, async, infallible function that doesn't return anything (so it doesn't return potentially an Err) is IMHO quite a weird thing in this world. You're much better off spawning a task in such case. And spawned tasks are executed eagerly, they don't have to be awaited to run.

I don't know. It just doesn't seem like "all async functions return a future by default" is a useful enough feature to warrant the async/await keyword littering. It seems to me that Futures are themselves a sync concept, used to fuse the sync world with the async one. Future methods (get, await, whatever) are all blocking.

If you discard "true" sync functions (everything that blocks an OS thread), then there's no need to fuse sync world with the async, so everything async can behave like sync, just like we're used to.

> So an occasional single await here and there is a syntactic burden

It's not the frequency of awaits that's the problem - it's the existence of await itself. The language is pushing the burden of differentiating between sync and async methods onto the programmer. Of course, you can get more performance that way, but so can you by writing in assembly.

> but having to create explicit channels to communicate the results back, instead of using function return values, is suddenly not a syntactic burden?

You create channels only when you want to communicate with another (concurrent) goroutine. It seems fair to me - wanna communicate? Create a channel. Otherwise, all goroutines keep to themselves.

> Not mentioning a performance impact of it.

Channels are not queues. They are very lightweight and fast. They just work like queues.

> Which works only at runtime and detects only some races

> and also beware of the fact that your coroutines execute in parallel (not just concurrently) and the risk of data races is higher.

You can pass around pointers through channels, implicitly creating the "ownership" pattern that Rust is so famous for. Of course, there's no compiler support for it, so Rust wins this round. But in my experience working with Golang, as long as you're not trying to be too clever, data races are quite rare. Channels are really good for many communication patterns, thanks to the `select` keyword, so most of the tricky patterns that lead to data races don't even have to appear.

> Actually OS threads are surprisingly fast and lightweight these days and this claim could be challenged.

I think this is completely false. Every new CPU vulnerability discovered slows down context switching. Default thread stack size for Windows, Linux and OS X is 1MB [0], 8MB [1] and 8MB [2], respectively (in contrast, default goroutine stack size is 8KB). Add to that the overhead of pre-emptive scheduling of threads, and you can see how they don't scale at all. After a certain fixed number of threads (depending on your hardware), your system will either run out of memory or grind to a halt, wasting all its time context switching. Meanwhile, on my laptop, Go doesn't have a problem working with hundreds of thousands of goroutines concurrently.

Here's the code if you wanna try it out for yourself:

    package main

    import "fmt"

    func doNothing(ch chan int) { <-ch }
    func main() {
        i := 0
        for {
            ch := make(chan int)
            go doNothing(ch)
            fmt.Println(i)
            i++
        }
    }

It took ~4 million goroutine+channel pairs to fill up my 16GB of RAM. How many OS threads + queues would it take to fill up 16GB?

The optimal way to do concurrent programming is to create as many threads as there are CPU cores, and cooperatively multiplex coroutines on them. That's what the Go runtime does.

> But anyway, Rust has also green threads. Just use tokio::spawn + channels and those are your lightweight threads just as in Go.

Do those use the async/await keywords too, or are they somehow made sync-like?

[0] https://docs.microsoft.com/en-us/windows/win32/procthread/th...

[1] https://unix.stackexchange.com/questions/127602/default-stac...

[2] https://developer.apple.com/library/archive/documentation/Co...


I think you focus only on one aspect of thread performance - when you need millions of threads. Yes, I agree, if you need millions of concurrent things running, then you quite likely don't want to use millions of threads. But many apps don't need such a high degree of concurrency, and OS threads are very performant up to a few thousand threads.

And in cases where you really want to run millions of concurrent things, and where performance and scalability are the top concern, Rust gives you a plethora of other tools.

> Do those use the async/await keywords too, or are they somehow made sync-like?

Sure they do, but now you seem to be dismissing a useful feature just because it has slightly different syntax than in your favorite language. But this is essentially the same feature.


I'm sorry if it sounded like I'm dismissing it. Async/await is very useful for massive IO in many programming languages. I've personally worked with a lot of async Python, which can be painful to write. Rust's static typing does make things much better.

However, the reason I like Go's approach so much is that it abstracts away all the mundane details about scheduling, and lets us focus on what we want. If we want 1000 threads of execution running concurrently and doing something complex, we can just `go` 1000 functions and pass the corresponding channels to them - the optimal way (M:N coroutine-thread mapping) is already invented, and it's been implemented inside the language.

Rust will probably always win when it comes to extreme optimizations. Golang's strength is in simplicity and straightforwardness in writing everyday concurrent code.


> in Java when a feature sucks, we just say it sucks. Even the architect that approved it will say it sucks.

Yup, isn’t that the truth. I like Rust, but this aspect is very off-putting; the author is one of the worst offenders.

It reminds me a lot of the Scala community where the detractor was always just not smart enough to understand their perfect abstraction.


> the author is one of the worst offenders.

Please explain? The author labors the point that all of this is way past what anyone should be expected to do to diagnose a compiler performance bug.


You might want to look at the work the Async Foundations Working Group[1] is doing, there are indeed lots of things in flight to make the "shiny future" experience[2] better than it is today.

When async/await landed in 2018, it was an MVP. Some things have improved since, but it is incomplete. I believe the pushback you see is not "denial that it sucks", but rather an attempt to add nuance to critiques. Claiming that it is unusable or useless, like some do, is hyperbolic. There are plenty of things to fix, and people are working on some of them[3].

I also believe that landing the feature in 2018 was the right call. The parts that are pain points are features that are missing, but nothing in the currently stabilized version of the feature precludes filling those gaps. If it hadn't stabilized, what you would see is 1) many more people on nightly (which isn't ideal) and 2) a less battle-tested language feature and crate ecosystem around this.

The developer experience of using async/await isn't as good as sync Rust, but it'll get better. I can point at a ton of merged PRs over the past four years as historical evidence to back that claim. And any improvement done to the underlying features that async/await uses results in a much better experience in sync Rust too! I remember back in 2017 I spent a ton of time improving trait bound diagnostics, because until then I saw them as an "advanced feature" which didn't need immediate focus, but async/await put them front and center. The resulting work improved things for everyone, and is still ongoing to this day.

[1]: https://rust-lang.github.io/wg-async/welcome.html

[2]: https://rust-lang.github.io/wg-async/vision/shiny_future/use...

[3]: https://rust-lang.github.io/wg-async/vision/roadmap.html


What's so wrong with Rust's async / await?


It is much more complex to use compared to GC languages.


You can use Servlin to make an HTTP server with sync Rust. I made it.

https://crates.io/crates/servlin


Could you be more specific about what’s bad about it?


Yesterday it took me and my pair about 3 hours to figure out how to iterate over a vector of things asynchronously. Eventually we found collecting into a FuturesUnordered, after first implementing a repeated collect into a vector of futures and then futures::join_all-ing them.

The error messages we encountered while going through this were not the amazing ones we're used to from sync Rust; these were... cryptic, to say the least.


> The error messages we encountered while going through this were not the amazing ones we're used to from sync rust, these were... cryptic, to say the least.

Please file tickets when you encounter these. A lot of the "great errors in rustc" are by their nature reactive: we need to see what people try when they hit corners that haven't gotten as much love. async/await uses multiple somewhat advanced features under the covers that have a tendency to produce either verbose or confusing errors. We're slowly improving them, but having good examples of real-world cases is incredibly useful to speed up that process. This is what allows us to add extra information with recommendations.


It's been 7 months since I last wrote async code, so it will be difficult for me to give you a proper answer, but I'll try.

The learning curve, the cryptic lifetime compiler errors, the lack of async traits, no async closures but only async blocks, different libraries using different runtimes, spending hours to compile trivial code, the Box::pin / dyn BoxFuture thingy, the compile times, issues with rust-analyzer, etc.

But most important is the fact that you can't write code just as you would in sync Rust. Most of the features of sync Rust don't compose well with async Rust. At every step, async Rust feels like a proof of concept, a half-baked feature that should never have reached stable in that state.

To me, rushing it was a mistake. You shouldn't need a PhD in async/await to use it properly. It should feel like writing normal Rust code. Unfortunately, it doesn't.


Async Rust is still considered an MVP, perhaps surprisingly.

> Most of the features of sync rust don't compose well with async rust.

There are two parts to that: missing features that are required to compose the features (GATs are a good example), and bugs/incoherencies/plain inconveniences.

The latter will hopefully get fixed eventually, although some of them are really hard; the recent work on a-mir-formality is going to help with this. The former will probably be solved too, but it'll be harder and perhaps require more time. As far as I can tell, nothing is made impossible by the design of the async/await MVP (it was designed for that, so this is not really surprising).

FWIW, both happen in non-async code too. It's just that async code takes the type system to its extreme, and so it has more surprises.


I hope you are right (about the MVP being flexible enough to be fixed in further releases).

The problem is that the ecosystem is already built on top of multiple workarounds for the problems of async Rust (a.k.a. the "higher-level libraries"); it will surely be a challenge to fix all the legacy code already written against the current MVP.

I don't know how they're gonna do that. That's also a reason why I don't want to depend on things that may be broken in a couple of years.

I hope they never make the same mistake again. It's better to keep a half-baked feature in nightly for a decade than to push it to stable and have to deal with it forever.


"Different libraries using different runtimes" is a quite unfair criticism; the actual criticism should be "no abstraction over the runtimes".

> Most of the features of sync rust don't compose well with async rust

True, but what do you suggest? Adding an effect system? Removing async?


So you wouldn't say you feel fearless about concurrent code?


> in Java when a feature sucks, we just say it sucks. Even the architect that approved it will say it sucks.

I mean, in my opinion the whole Java language sucks. Why anyone keeps using it is beyond me.


Just to put your opinion in perspective: Java is unique in being huge in both academia and industry, which bred more novel tooling around the language than any other language has (JML, many many static analyzers, etc). Java is also unique in being by far the best in the tuple of performance, observability and maintainability. Its ecosystem is one of the top 3 in size, and in many niches there is simply no alternative library on other platforms.

The language itself underwent huge changes while it remained basically fully backwards compatible. The same is true of the bytecode format.

All these make the language and platform the backbone of basically all Fortune 500 companies.


True, but I will always refuse to work with it; there's no amount of money you could pay me to work with Java.

I get that it became the labour programming language, but I would expect people on HN to be in it for the love of computer science. I really don't think there's pleasure in writing Java.


> I mean, in my opinion the whole language Java sucks. Why anyone keep using it is beyond me.

That's a hip opinion to hold, I know, but difficult to back up with facts.

If you don't need|want C|Rust and want to write extremely high-performance code in a language that has excellent tooling (debuggers, profilers, code analysis, etc) and top-shelf libraries, and is broadly known and understood (so you can actually hire people), what are your options? Basically just one: Java.



