Hacker News | raesene9's comments

This is kind of an odd article to me. It makes the point that podman may provide better isolation than Docker, but the Copy Fail part focuses on the sample exploit (which overwrote su), which isn't very applicable to containerised environments, rather than on the general effect of exploiting the vulnerability, which is to let a user overwrite a file they should only have read-only access to.

https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber... - This PoC is a good example of how Copy Fail might have an impact in a container-based environment: it exploits the shared layers in a pair of container images to overwrite a file in one image by running an exploit in another.

Whilst I've not directly tested podman for that kind of attack, I'd be a bit surprised if it stopped it, given how this vuln works.


The k8s exploit requires the stars to align for an external attacker: the pod you exploit (via an RCE?) must share some layers with a privileged pod running on the same node. Alternatively, you need the ability to run an arbitrary image as an unprivileged pod. There are surely some environments where internal teams have that kind of access, but getting there isn't easy for a remote attacker.

Thanks for the link. I tried the copyfail PoC in rootless podman yesterday and it didn't work, but I hadn't dug into it yet. This is great info.

I've had Claude knock up a basic podman PoC that seems to work OK: https://github.com/raesene/vuln_pocs/tree/main/CVE-2026-3143... . It just uses a read-only mount and then demonstrates overwriting the read-only file.

The key points for testing exploitability are the kernel version, package versions (in case they ship a patch), and loaded kernel modules. Some stripped-down environments don't have the relevant modules available.


I've not checked podman, but I believe moby/docker does now block this: https://github.com/moby/profiles/commit/7158007a83005b14a24f...

There is A/B testing going on, and for a while several pages on Anthropic's site did remove Code from Pro. See https://old.reddit.com/r/ClaudeAI/comments/1srzhd7/psa_claud... if you want a lot more details.


The install mechanism for the superpowers plugin for codex and opencode is .... interesting. From https://github.com/obra/superpowers

Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/head...

it's like curl|bash but with added LLM agents...


There are a... lot of forks already; no putting the genie back in the bottle for this one, I'd imagine.


Forks are easy for GitHub to shut down simultaneously. What you really want is to upload the code as a new repo (ideally under a different name from the original). Though in practice it probably isn't too hard to detect that an upload matches a taken-down codebase, if that's desired.


But once it's forked, people will have local copies that can be put up on other sites if GH takes the forks down.


I think the original repo OP mentioned decided not to host the code any more, but given there are 28k+ forks, it's not too hard to find again...


+1 to this. We had a set of HomePod minis for intercom, and not only do they not work reliably, but the diagnostics provided when they fail are non-existent, making it hard to improve the setup.


One of my main lessons after a decently long while in security is that most orgs care about security *as long as it doesn't get in the way of other priorities*, like shipping new features. So when we get something like agentic LLM tooling, where everything moves super fast, security is inevitably going to suffer.


And it's not just BYD. A couple of brands I'd literally never heard of until a year ago, Jaecoo and Omoda, now seem to be getting pretty popular; I saw quite a few when I was over in Glasgow.


There's a massive difference between launching a piece of software and launching a successful business.

Over the last couple of months I've seen a load of new "product launches" in my niche, but when you look at them they're largely vibecoded and don't show deep understanding or sustainability, so it's pretty likely they'll never become successful businesses.

Looking at some of the related places, like /r/sideproject/, there are a lot of releases, and I'd be willing to bet that most of them are built with LLMs.


Then, respectfully, what is the point? Does the trillions-of-dollars AI industry exist to support a few hobbyists building niche products to scratch their own itch? I thought the promise here is increased productivity, presumably in the economic sense.

There seems to be a lot of hype, and has been for years, but I’m not seeing it materialize as actual economic output. Surely by now there should be lots of businesses springing up to capture all of this value created by vibecoded software.


Whilst I have no special knowledge, my expectation is it'll do both. If you reduce the barriers to coding you'll get more code, both at the hobbyist/one-person level and also at the large corp level.

Whether that translates into more value for those larger corps is the trillion-dollar question :) Writing code is a small part of the process of finding and shipping features that customers want, so it remains to be seen how far gains from LLM tools will translate.

I think it's fairly widely accepted that from a financial standpoint we're in an AI/LLM bubble. There has been more investment than we're likely to see financial benefit from, but it's impossible to predict to what degree (if you can predict that, and the timing, you can make a lot of money!).

