BobbyTables2's comments

Those things scare the crap out of me…

Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…


I doubt your “distroless” container is any safer against this vulnerability.

Infecting sudo just makes for a quick demo.

If your container runs different processes under different user IDs, the exploit would still be effective.

It would likely also be able to “modify” read-only files mapped from the host.


Distroless, rootless containers don't have the syscalls enabled to do anything reasonable with this exploit.

United’s pre-flight safety notices make it appear as if they spared no expense…

Wouldn’t a guard page be readable in Linux with /proc/self/mem? (At least read-only pages are writable with it.)
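As a minimal sketch of that parenthetical claim (Linux-only, illustrative; the mapping, constants, and message are made up for the example), a write through /proc/self/mem does land in a page that was mapped read-only:

    import ctypes
    import os

    # Resolve mmap from the already-loaded C library.
    libc = ctypes.CDLL(None, use_errno=True)
    libc.mmap.restype = ctypes.c_void_p
    libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                          ctypes.c_int, ctypes.c_int, ctypes.c_long]

    PROT_READ, MAP_PRIVATE, MAP_ANONYMOUS = 0x1, 0x02, 0x20  # common Linux values
    PAGE = os.sysconf("SC_PAGESIZE")

    # Map one anonymous, read-only page; a normal store to it would segfault.
    addr = libc.mmap(None, PAGE, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
    if addr in (None, ctypes.c_void_p(-1).value):
        raise OSError("mmap failed")

    # Writing through /proc/self/mem at that virtual address succeeds anyway,
    # because the kernel forces the access past the page protection.
    fd = os.open("/proc/self/mem", os.O_RDWR)
    os.pwrite(fd, b"written via /proc/self/mem\x00", addr)
    os.close(fd)

    print(ctypes.string_at(addr))  # b'written via /proc/self/mem'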

That’s a neat feature, thanks for sharing.

Unfortunately, I find that codebases lacking auto-formatting are often littered with non-functional changes as developers temporarily instrument code, remove it, but leave whitespace changes behind.

In terms of tracking code changes, one really would have to rewrite the entire history with each commit reformatted.
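A hedged sketch of what that could look like (assuming a Python codebase and the Black formatter; this rewrites history, so it only makes sense on a fresh clone everyone agrees to move to):

    import subprocess

    # Check out every commit into a temp tree, reformat it, and re-commit it,
    # across all refs. The "|| true" keeps the rewrite going even if some old
    # commit doesn't parse for the formatter.
    subprocess.run(
        ["git", "filter-branch", "--tree-filter", "black . || true", "--", "--all"],
        check=True,
    )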


Feels like they were too nice. After 90 days of no response, why not just go full disclosure on them?

The CEO seems more interested in insulting people than securing his company’s product.


How is it that LLMs aren’t good at rendering the sequence of numbers but can reliably put the supplied pieces all in the right order?

Because the image generation is powered by a diffusion model that is only guided by the transformer model, and it still has a somewhat vague spatial representation, especially when it comes to coupling things like counting and complex positioning.

But by using the LLM to generate code (like the markup an SVG graphic is made of), and then using a rasterized image of that SVG as the input to the diffusion model, that image takes the place of the raw noise input and guides the denoising process to put the numerical parts in the right spots.

The LLM is putting the SVG in the right order because the code that drives the SVG is just that - code - and the numerical order is easily defined there, even if it has to follow something like a spiral.

Edit: LLMs may now also be using thinking modes, with their feedback during generation, to help with complex positioning when drawing something like an SVG. I just asked Claude to generate one such spiral-number SVG and it did so interactively via thinking, and the generated code is incredibly explicit about positions, so that must help. But the underlying idea of the two-step SVG-to-diffusion approach is the real key here.
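For illustration (a made-up sketch, not the code Claude actually produced), the kind of generation where the numerical order and the spiral positions are pinned down explicitly in code looks like this:

    import math

    def spiral_numbers_svg(count=20, size=400):
        cx = cy = size / 2
        parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
        for i in range(count):
            theta = 0.6 * i              # angle grows with each number
            r = 12 + 8 * theta           # Archimedean spiral radius
            x = cx + r * math.cos(theta)
            y = cy + r * math.sin(theta)
            parts.append(f'<text x="{x:.1f}" y="{y:.1f}" font-size="14" '
                         f'text-anchor="middle">{i + 1}</text>')
        parts.append('</svg>')
        return "\n".join(parts)

    print(spiral_numbers_svg())

Rasterizing that output and feeding it to the diffusion model is the second step described above.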


Multi-tenant hosting using containers? Thanks, I really needed a good laugh today…

Can’t even install Firefox without them…

One might wonder why LLMs were even trained with this information in the first place…

It wouldn’t need guardrails if the people training it had any of their own…


Because "put in all knowledge of chemistry that we have, except this specific recipe" isn't how knowledge works

The training data is not so specifically filtered, at least in pre-training. The point is to give them as much world knowledge as possible.

The OP is saying maybe that was a bad idea. I tend to agree given how badly these companies manage to sanitize outputs.

Maybe they want to sell it to law enforcement as a model that can identify suspicious activities. It needs to know how and why something is suspicious in order to flag it.

Or it’s just a “let’s gobble everything and figure out the guardrails later” kind of approach.

