
I, too, can easily use more tokens to achieve the same task. I can give worse prompts. I can fail to make it clear to the tools where to find the information they need. I can ask them to think hard when they don't need to and tell them not to think when they do need to. I can give vague, open-ended instructions. I can generate code that sucks and throw it away.

If I do all of this, do I get a promotion?


Even when I'm using the AI seriously, if I want to rename a variable I can't do it myself because it'll confuse the AI, so I have to tell it to do the rename. That seems pretty wasteful.

That sounds like too much effort. Better to have the AI write you a 20k word manifesto about how much you love your employer and then include that in the context of every request.

How much would you pay for 2 months of infrastructure engineer time? And how many millions / tens of millions / hundreds of millions are you imagining being spent on overpriced AWS services?

(Also, those AWS services are not engineering-free. I tried to migrate a system to RDS once and gave up after quite a few hours when I got to the part of the documentation that suggested that I edit my sql dump using sed to get it into a form that RDS would accept. No, thanks.)


> but fundamentally it is "what does this role have access to doing (action + resource)" + "who has access to this role". That is really it from a 10k foot level.

Hahahaha. No, fundamentally it is one input into a huge mess that you cannot actually see or audit from a 10k foot level.

AWS has produced a long, rambling and imprecise description of (some of?) what’s actually going on. You can read it here:

https://docs.aws.amazon.com/IAM/latest/UserGuide/access_poli...

Some of what they're describing doesn't even live within the IAM umbrella as far as I can tell. I'm not convinced that a concise, formal, and unambiguous specification exists anywhere, even within AWS's own development teams.

I’ve asked LLMs to write AWS “policy”. They get the grammar mostly right. They cannot explain what the effects are in a manner that they will stand by after they search the web for documentation. Since I have never found good documentation despite looking, I can’t personally do any better than the LLMs. I’d love to be pointed at real documentation or specs.


They are just slight variations of the fundamental idea. For example, a resource policy and an org SCP are just the same check at a different level (i.e. more of who has access to what). They are attached to the individual resource and the Organization respectively (vs. the Account), so they need to exist in a separate place. And then in use they are ALL checked before an access is granted.

I don't work for IAM, but I have worked for several other teams over the years, and IAM is actually one of the least confusing services. But I am definitely biased and have a more than average amount of experience on this particular subject. I still think the general idea is more sane than Azure Account, for example. I do think this reflects a philosophical question of whether clouds are building blocks or consulting projects. I personally think IAM is done right in that regard.


> And then in use they are ALL checked before an access is granted.

I know they're all checked. What I don't know is how the results of those checks are combined into the final result. As far as I can tell, the combination is not a simple OR or AND; it seems to be something exceedingly complex, and the output of the policy part may be more than just a Boolean value.

Maybe the underlying implementation is fantastic (and my distinct impression is that AWS takes this stuff far more seriously than Azure), but that doesn’t mean that the docs are easy to find or that the system actually makes sense in anything other than an agglomeration-of-backwards-compatible-layers sense.


I will add: if you have an operation that adds a record to a database (like a payment in the OP example), don’t forget to have some field that the client specifies that can be used to find and query the status of that record later. This field can be the idempotency key itself or another field.

Or you can completely forget this feature and make it really awkward for the client to reconcile their view of the world with yours and/or to check on the request later. *cough* Mercury *cough*.

It is, just barely, acceptable to generate the identifier server side and return it to the client.
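
As a minimal sketch of the first approach (hypothetical types and names, not any particular provider's API), the client-chosen idempotency key can double as the lookup handle:

    use std::collections::HashMap;

    #[derive(Clone, Debug)]
    struct Payment {
        idempotency_key: String, // chosen by the client, unique per logical request
        amount_cents: u64,
        status: String, // e.g. "pending", "settled"
    }

    #[derive(Default)]
    struct PaymentStore {
        by_key: HashMap<String, Payment>,
    }

    impl PaymentStore {
        /// Create the payment, or return the existing record if the key
        /// was seen before. Retries are therefore harmless.
        fn create_or_get(&mut self, key: &str, amount_cents: u64) -> &Payment {
            self.by_key.entry(key.to_string()).or_insert_with(|| Payment {
                idempotency_key: key.to_string(),
                amount_cents,
                status: "pending".to_string(),
            })
        }

        /// The reconciliation path: the client can always query by the
        /// key it chose, even if it never saw the create response.
        fn status(&self, key: &str) -> Option<&str> {
            self.by_key.get(key).map(|p| p.status.as_str())
        }
    }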


My current favorites on AWS, in no particular order:

1. IAM and policies. I’m not convinced that anyone knows how IAM rules and policy rules interact. There’s a flow chart that appears to be incomplete. There is not obviously a complete enough spec that one could, say, write a test suite to confirm that the actual behavior follows the spec. LLMs, of course, don’t know either because the training data does not exist.

2. Utter nonsense pricing. The cost of listing an S3 bucket goes up by an order of magnitude if you set the default storage class to archive despite this having nothing whatsoever to do with the operation in question. (But GCS adds two orders of magnitude for the same offense.) Conclusion: NEVER EVER set your default storage class to an archive tier.

3. Boto. It’s an Unbelievable Piece Of Crap. It’s not a library at all — it’s a meta-library that generates itself at runtime because someone had fun doing that and because Python didn’t stop them. Python type checkers, of course, just give up. And Boto is, um, a community project that AWS claims not to care about. Which is, of course, why its maintainers refused to fix an interop bug with GCS (I fully documented the entire bug for them, and the fix would have been the removal of a bit of pointless code).

4. Egress pricing. And the way it multiplies if you use any advanced VPC features. Why on Earth is it cheaper to send an object to S3 from my own machine than to send the same object to the same endpoint from within a different AWS region nearby?

5. Authentication. It’s so bad that they invented Identity Center to try to unsuck it. But if you use Identity Center you get logged out even while actively using the console, and you get a helpful link to the WRONG PLACE to sign back in. Because of course core AWS isn’t even aware that Identity Center exists.

I don’t even use AWS very much. I’m sure I would fall in love with more of it if I did.


An amusing, moderately expensive solution that might actually work would be to have a weekday system and a weekend system. Think of it as a spare D/R system that you intentionally swap twice a week :)

If done right, it would be a completely separate system. Separate IP addresses and all.


That's effectively time-based request sharding, which seems sensible, but you'd still have to reconcile trades and any open positions (etc.) across the time boundary where one system stops accepting requests and the other starts. And keep the databases synchronized (i.e. have some system to make sure they're in sync at the changeover time), or have a few minutes/hours of downtime between weekends and weekdays while you copy the whole production database from one system to the other. The devil is in the details!

For what it’s worth, in some financial markets, there is a sort of natural daily cutover time [0] across which you are often not trading quite the same instrument. For example, the settlement date may roll over, etc. And a lot of Very Serious Finance is already built on the idea that most parties do not instantaneously reconcile anything and don’t depend on real-time trade lists.

I really can imagine a system in which the Monday trading system runs all day and then turns off at a predetermined time. Then it has 15 minutes to produce and disseminate a final list of all transactions, after which it becomes completely unavailable and is ready for maintenance. Any subsequent amendment to Monday’s trading would be done out of band. Open orders at the end of Monday do not carry over immediately to Tuesday, although front ends are welcome to recreate them. Everyone would understand that liquidity would be thin for the first few seconds after the system rolls over.

For added fun, Monday and Tuesday could actually be allowed to overlap in a hypothetical trading system, although the market participants might not love this.

[0] which is not the same for all instruments, and holidays mean that not every instrument rolls over meaningfully every day.
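
To make the rollover concrete, here's a toy sketch of the window logic (made-up durations and names; a real market would pin these to its own calendar):

    #[derive(Debug, PartialEq)]
    enum SystemState {
        Trading,    // accepting orders
        Finalizing, // producing the final trade list; no new orders
        Offline,    // fully unavailable, safe to maintain
    }

    /// State of (say) the Monday system, given seconds elapsed since its
    /// predetermined cutoff (negative means the cutoff hasn't happened yet).
    fn state(seconds_since_cutoff: i64) -> SystemState {
        const FINALIZE_WINDOW_SECS: i64 = 15 * 60; // the 15-minute dissemination window
        if seconds_since_cutoff < 0 {
            SystemState::Trading
        } else if seconds_since_cutoff < FINALIZE_WINDOW_SECS {
            SystemState::Finalizing
        } else {
            SystemState::Offline
        }
    }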


Do you also think that f(x) = x^1 is exponential? How about f(x) = x^0?

Kind of irrelevant, because you could also ask "Do you also think that f(x) = x^1 is polynomial? How about f(x) = x^0?" The distinction was clearly between polynomial (specifically quadratic) and exponential, leaving those trivial cases out.

No. These are polynomials (in x).
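
For reference, the distinction in question, with c a fixed constant:

    f(x) = x^c  % polynomial: the variable is the base; x^1 and x^0 are
                % just the degree-1 and degree-0 cases
    f(x) = c^x  % exponential: the variable is in the exponent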

I would love to see someone challenge this as an anti-trust violation. Google is using its market power (as the provider of reCAPTCHA) to actively prevent devices that don’t use Google Play Services from competing effectively.

I'm not sure the definition of anti-trust matches what you're saying. Are there any retail android devices for sale without Google Play Services? Also, notably iPhones will be able to still work despite not having Google Play Services.

Retail phones for sale without Google Play Services:

All Huawei phones, which use Huawei AppGallery after the sanctions

Fairphone 6 with /e/OS

Practically all modern feature phones: Nokia phones, HMD phones, etc. As I understand it, these are predominantly used by the elderly and by kids, but the category is also gaining traction among millennials and Gen Z for digital detox and fighting mobile addiction.

Linux phones (Jolla Phone, PinePhone, FuriPhone, etc.): you probably won't find these in your local retail store, but this is another competing platform, built from effectively an entirely different lineage minus the kernel.


They're using their position to force people to buy a certified Android phone or iPhone in order to use millions of websites Google doesn't even own. People without a phone, people with dumb phones and alternative operating systems (deGoogled Android being just one example) can be totally cut off.

It's worse than forcing the Play Services: strict Play Integrity requires your system to be signed by Google. So if you use the Play Services on GrapheneOS, you're still locked out.

They're only doing that because the EU currently doesn't want to antagonize the US any more with their tech fines. Notice how there haven't been any recently?

https://www.cnbc.com/2026/04/10/google-meta-big-tech-6-billi... :

> April 2025: Apple fined €500 million for failing to comply with "anti-steering" obligations. Meta fined €200 million under the Digital Markets Act for requiring users to consent to sharing their data with the company or pay for an ad-free service.

> December 2025: X fined €120 million under the Digital Services Act for breaching transparency obligations.

(Sure, not this year, but that's pretty recent by most standards. And I'm not sure whether they're still being contested and unpaid.)

And recently, Google is working with the EU to avoid a fine: https://www.bloomberg.com/news/articles/2026-05-06/google-ma...


> because the EU currently doesn't want to antagonize the US any more with their tech fines

Yeah, I'd put it as "because the US bullies the EU to prevent them from doing it".


Alternative explanation: they're following the Meta playbook of releasing surveillance features during a "dynamic political environment" that's keeping their opponents distracted.

https://www.nytimes.com/2026/02/13/technology/meta-facial-re...


But one would have to explicitly choose to use unsafe Rust for this instead of ordinary safe Rust. And safe Rust has no particular difficulty writing to slots in an array or slice or vector specified by their index.

except nearly everyone uses unsafe rust

No they really don't. 95% of rust is safe rust[1].

Also unsafe rust doesn't remove bounds checks. arr[idx] is bounds checked in every context.

You can opt out of array bounds checking by writing unsafe { arr.get_unchecked(idx) }. But that's incredibly rare in practice.
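
A minimal illustration of the three flavors (made-up values):

    fn main() {
        let arr = [10, 20, 30];

        // Checked API: returns None instead of reading out of bounds.
        assert_eq!(arr.get(5), None);

        // Indexing is bounds checked in safe AND unsafe contexts;
        // `arr[5]` here would panic, not read past the array.

        // The explicit opt-out: undefined behavior if the index is out
        // of range, which is why it is only callable inside unsafe.
        let first = unsafe { *arr.get_unchecked(0) };
        assert_eq!(first, 10);
    }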

[1] https://cs.stanford.edu/~aozdemir/blog/unsafe-rust-syntax/


> 95% of rust is safe rust.

Based on the raw number of assorted crates, which has no bearing on kernel code. The more relevant question is: can a performant, cross-architecture kernel ring buffer be written in safe Rust?


Hubris, an embedded RTOS-like used in production by Oxide, has ~4% unsafe code in the kernel last I checked. There’s a ring buffer implementation that has one unsafe, for unchecked indexing: https://github.com/oxidecomputer/hubris/blob/master/lib/ring... (this of course does not mean that it is the one ring buffer to rule them all, but it’s to demonstrate that yes, it is at least possible to have one with minimum unsafe.)
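
For flavor, here's the shape such a thing can take (a sketch with made-up names, not Hubris's actual code): a fixed-capacity ring where the only unsafe lines are unchecked indexes whose invariant is maintained by the safe API around them.

    struct Ring<T: Copy + Default, const N: usize> {
        buf: [T; N],
        head: usize, // next slot to write
        tail: usize, // next slot to read
        len: usize,
    }

    impl<T: Copy + Default, const N: usize> Ring<T, N> {
        fn new() -> Self {
            Ring { buf: [T::default(); N], head: 0, tail: 0, len: 0 }
        }

        fn push(&mut self, v: T) -> Result<(), T> {
            if self.len == N {
                return Err(v); // full; the caller decides what to do
            }
            // SAFETY: head is always kept in 0..N by the wrapping
            // increment below, so the index is in bounds.
            unsafe { *self.buf.get_unchecked_mut(self.head) = v };
            self.head = (self.head + 1) % N;
            self.len += 1;
            Ok(())
        }

        fn pop(&mut self) -> Option<T> {
            if self.len == 0 {
                return None;
            }
            // SAFETY: tail is maintained in 0..N, same invariant as push.
            let v = unsafe { *self.buf.get_unchecked(self.tail) };
            self.tail = (self.tail + 1) % N;
            self.len -= 1;
            Some(v)
        }
    }

The point being that the unsafe surface is two lines with an explicitly stated invariant, not the whole data structure.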

It’s always a way lower number than folks assume. Even in spaces that have higher than average usage.


I've always had the impression that people who haven't actually tried to write low-level code in Rust, to find out where the boundary of needing unsafe really lies, tend not to realize how far you can push things and build safe abstractions on top. Almost every time I've had to wrap an unsafe API, I've been able to find a way to keep at least one of the invariants documented as needed for safety from propagating upwards, and plenty of times the specific circumstances of my use case let me eliminate it entirely.

The entirety of safe Rust is built upon unsafe Rust that's abstracted like this. The fact that you sometimes need unsafe isn't a mark against Rust, but literally the entire premise of the language and the exact problem it's designed to solve.


I doubt it, but you can probably get pretty close.

This is something a lot of people misunderstand about unsafe rust. The safe / unsafe distinction isn't at the crate level. You don't say "this entire module opts out of safety checks". Unsafe is a granular thing. The unsafe keyword doesn't turn off the borrow checker. It just lets you dereference pointers (and do a few other tricks).

Systems code written in rust often has a few unsafe functions which interact with the actual hardware. But all the high level logic - which is usually most of the code by volume - can be written using safe, higher level abstractions.

"Can all of io_uring be written in safe rust?" - probably not, no. But could you write the vast majority of io_uring in safe rust? Almost certainly. This bug is a great example. In this case, the problematic function was this one:

    static void io_zcrx_return_niov_freelist(struct net_iov *niov)
    {
        struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);

        spin_lock_bh(&area->freelist_lock);
        area->freelist[area->free_count++] = net_iov_idx(niov);
        spin_unlock_bh(&area->freelist_lock);
    }
At a glance, this function absolutely could have been written in safe rust. And even if it was unsafe, array lookups in rust are still bounds checked.
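
For illustration, a rough safe-Rust analogue of the same shape (hypothetical types standing in for the kernel's; a sketch, not a proposed patch):

    use std::sync::Mutex;

    // Hypothetical stand-in for the kernel structure.
    struct ZcrxArea {
        // The Mutex plays the role of freelist_lock: the freelist can
        // only be touched while the lock is held.
        freelist: Mutex<Vec<u32>>,
    }

    fn return_niov_to_freelist(area: &ZcrxArea, niov_idx: u32) {
        let mut freelist = area.freelist.lock().unwrap();
        // push() grows the Vec and cannot write past its end. Even an
        // indexed store like freelist[count] would be bounds checked
        // and panic instead of silently corrupting adjacent memory.
        freelist.push(niov_idx);
    }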

"unsafe Rust" is not a binary; you don't opt into it for every single line of code. Given that the entire premise behind the idea that using C instead of Rust is fine is that people should be able to pay close attention and not make mistakes like this, having the number of places you need to look be a tiny fraction of the overall code that's explicitly marked as unsafe is a massive difference from C where literally every line of the code could be hiding stuff like this.

> except nearly everyone uses unsafe rust

Really? Why? I've not used Rust outside of some fairly small efforts, but I've never found a reason to reach for unsafe. So why is "nearly everyone" else using it?


Let's say you want to call win32 (or Mac) OS functions; all of a sudden you're doing all kinds of wonky pointer stuff, because that's how these operating systems have been architected. Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

> Doing unsafe stuff is pretty inevitable if you want to do anything non-hello-world-ish.

So the vast majority of Rust projects involve writing at least one unsafe block? Is that really your claim?


And even if you do end up writing an unsafe block, that should be a massive flag that the code in said block deserves extra comments on why it is safe, and extra unit tests verifying that it does not blow up.

How do you know the unsafe operation is safe? What are the preconditions the code block has? Write it down, review it, test it.
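
A small example of what that looks like in practice (a made-up function; the `// SAFETY:` comment convention itself is standard in the Rust ecosystem):

    /// Returns the first byte of `bytes`, or 0 if it is empty.
    fn first_byte(bytes: &[u8]) -> u8 {
        if bytes.is_empty() {
            return 0;
        }
        // SAFETY: we just checked that `bytes` is non-empty, so index 0
        // is in bounds, which is exactly the precondition get_unchecked
        // requires.
        unsafe { *bytes.get_unchecked(0) }
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn empty_and_nonempty() {
            assert_eq!(first_byte(&[]), 0);
            assert_eq!(first_byte(&[7, 8]), 7);
        }
    }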


Exactly; I feel like a lot of people misunderstand what Rust is trying to solve. It's fundamentally not trying to make unsafe code impossible; it's making the number of places you need to audit a tiny fraction of your codebase, compared to needing to audit the entirety of a C or C++ codebase. When I'm doing code reviews, you'd better believe I'm going to spend extra time on any unsafe block I see, to figure out whether it's necessary and, if so, whether it's actually safe (with the default assumption for both being that they're not until I can convince myself otherwise).

The thing is, you can actually write quite good C code (see the OpenBSD project). The power of C is that it's pragmatic: it lets you write code, with you taking full responsibility for being a responsible person. To err is human, but we have developed a set of practices to handle this (like making sure the gun is unloaded and the safety is on before storing it, to avoid putting holes in feet).

I like type checking and other compile-time checks, but sometimes they feel very ceremonial. And all of them are inference based, so they still rely on the axioms being right and on the chain of rules not being broken somewhere. And in the end they are annotations, not the runtime algorithm.


> To err is human

Yes, which is precisely why I write in Rust, because the compiler errs less than I do.


It may, but it still requires careful annotations. So you should hope that you have not made an error there and described the wrong structure for the code.

It seems like you have this backwards. Messing up lifetimes in safe Rust can't cause unsafety; the compiler checks if the lifetimes are valid, and if they're not, you get a compiler error. You don't need to "hope" you did it right because the entire point is that you can't compile if you didn't.

On the other hand, when you're relying on your ability to "actually write quite good C code"...you'd better hope that you have not made an error there. In practice, some of the most widely used C libraries in the world still seem to have bugs like this, so I don't really understand why you'd think that's a winning strategy.
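
A tiny concrete example: this is the classic dangling-reference mistake, and in Rust it simply does not compile, so there is nothing to hope about.

    fn main() {
        let r;
        {
            let x = 5;
            r = &x; // error[E0597]: `x` does not live long enough
        }
        // In C this would be a silent use-after-scope; here the use below
        // is what makes the compiler reject the borrow above.
        println!("{}", r);
    }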


Making use of win32 functions doesn't turn off bounds checking in your rust code.

A tiny fraction of programs need to use win32 or Mac OS functions beyond the standard library or other safe wrappers for said functions.

And even in those programs, only a fraction of the code in them is actually directly making calls to those APIs! Having everything else in safe code still makes it easier to audit than if the entire codebase is in C or C++.

So what? Just because you used the keyword `unsafe` to call an unsafe API does not mean that you are going to use unsafe pointer access to write to a vector.

Mythos might be good at finding holes in an actual defined security boundary. But trying to audit Claude Code would be like trying to find the holes in Swiss cheese. Of course they’re there!

Oh is THIS the real reason they haven't released it?

It's the first thing people will point Mythos at.


I’m fairly confident that almost any decent LLM trained on agentic command line workflows could break out of Claude Code. It’s really not hard.
