Hacker Newsnew | past | comments | ask | show | jobs | submit | brotchie's commentslogin

Closest I've been to losing vision in one eye was creating these 3x chain links for Burning Man.

Naive thought: I could use a large bolt cutter to cut chain links. Started trying to cut a link, felt it was sketchy, went and put on some safety glasses.

Restart cutting (had these bolt cutters with like 1m long arms), apply full force, jaws slip a bit on the chain, jaws bite hard. Chunks of steel fly into my chin and face, metal chunks embedded in chin, cracked safety glasses. Dodged a bullet.

Ended using a small welded up jig so I could stretch the chain and then use angle grinder to cut the chain links. Still sketchy, but no flying metal chunks.

Wish I had a plasma cutter.


The Org should make a deal with a manufacturer to produce some huge bulk quantity of these and just sell them pre-cut to camps.

I had a Home Depot employee cut them for me before purchase, they have a big thing that does it with no effort at all.

Originally rejected the paper premise, but I get it now, certainly made me question my belief that consciousness binds to any arbitrary information processing that's of sufficient complexity.

IIUC the author is saying that the human brain is running directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dyanmics).

In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics and to create state changes of 0 -> 1, 1 -> 0).

My currently belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wet ware. If you've done intense meditation / psychedelics, there's this moment when it becomes obvious that you are only "you" due to some kind of universal consciousness's binding to your memory and sensory inputs.

The "consciousness arises from information processing," i.e. the consciousness field binds to certain information processing patterns, can still hold, and yet not apply to AI (at least in its current form): The binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics.


I went on a very similar trajectory to you w.r.t to the paper (From a similar starting point too). Just wanted to mention that the idea you are describing here is in principle compatible with the theory that the brain is an analog computer: https://picower.mit.edu/news/brain-waves-analog-organization...

I have been been spinning my tires a bit trying to decide if I think this theory of the mind is able to avoid the abstraction fallacy.


> while AI is running on a discretization of this (we're essentially discretizing the physical dynamics and to create state changes of 0 -> 1, 1 -> 0).

But this is just a discretization we impose when we try to represent the system for ourselves. The reality is that the AI is a particular time-ordered relation between the continuous electric fields inside the CPU, GPU, and various other peripherals. We design the system such that we can call +5V "1" and 0V "0", but the actual physical circuits do their work regardless of this, and they will often be at 2V or 0.7V and everywhere in between. The physical circuit works (or doesn't) based exclusively on the laws of electricity, and so the answer of the LLM is a physical consequence of the prompt, just as a standing building is a physical consequence of the relationships between the atoms inside its blocks. The abstract description we chose to use to build this circuit or this building is irrelevant, it's just the map, not the territory.


The computer and the program wouldn't exist without us, though. They only exist to be interpreted by us. The physical properties of the circuits outside of what we cajole them into doing are irrelevant, meaningless. The circuits only do their work regardless of particular interpretations; they wouldn't exist at all without people building them to be interpreted.

The physical computer could exist regardless of us. The program, if by that we mean "a human model of the computation happening in a physical computer" is just a description, yes.

It would be extraordinarily unlikely, but physically conceivable, that a physical system that is organized exactly like a microcontroller running an automatic door program, together with a solar panel, a basic engine, and a light sensor, could form randomly out of, say, a meteorite falling in a desert. If that did happen, the system would produce the same "door motor runs when person is near sensor" effect as the systems we build for this.

The physical circuit are doing what they are doing because of physics. They don't care why they happen to be organized the way they are - whether occurring by human design or through random chance.

Edit: I can add another metaphor. Consider buildings: clearly, buildings are artificial objects, described by architectural diagrams, which are purely human constructs, and couldn't be built without them. And yet, there exist naturally occurring formations that have the same properties as simple buildings - and you can draw architectural diagrams of those naturally occurring formations; and, assuming your diagrams are accurate, you can predict using them if the formations will resist an earthquake or collapse. Physical computers are no different from artificial buildings here, and the logic diagrams and computer programs are no different from the architectural diagrams: they are methods that help us build what we want, but they are still discovered properties of the physical world, not idealized objects of our own making; the fact that naturally occurring computers are very unlikely to form doesn't change this fact.


I disagree that it’s conceivable that a computer could somehow exist without a conscious maker. It’s so unlikely that it may as well be impossible. If something non-human that was capable of consciousness did form in the universe, through known biology or not, it would “just” be another form of life, and not what the paper is talking about.

What you say about buildings is sort of true as far as it goes, but irrelevant for the argument because buildings aren’t symbolic manipulation machines that only mean something via conscious interpretation, that some people are claiming could gain consciousness themselves.


Probability of such a structure forming is completely irrelevant. The argument makes sense if there was a mathematical/physical impossibility, but as long as the laws of physics allow such an object to exist and form by random chance, and predict it would operate exactly the same as the consciousness-designed one, I don't see any reason to discount it.

I also think the arguments against this are contradictory. On the one hand, we have an argument that says that computers only work because a consciousness built them to implement a particular computation. On the other hand, we're saying that the same physical computer doing the same physical thing can be interpreted to be implementing an infinite number of different computations. These two seem to point in different directions to me.


This is a good counter argument to the paper, honestly.

I think a better counter is the question "Is there a meaningful difference between binary discretization and Planck units? Aren't those discrete/indivisible as well?"

That's not really a good counter - Planck units are not a discretization. Space-time is continuous in all quantum models, two objects can very well be 6.75 Planck lengths away from each other. The math of QM or QFT actually doesn't work on a discretized spacetime, people have tried.

I should add one thing here: no theory that is consistent with special relativity can work on a discretized spacetime, because of the structure of the Lorrentz transform. If a distance appears to be 5 Planck units to you, it will appear to be 2.5 Planck units to someone moving at half the speed of light relative to you in the direction of that distance.

I thought your "layer zero" analogy was an interesting avenue to reason about but you lost me with:

> My currently belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wet ware.

First, because it requires a huge leap into fundamental and universal physical mechanics for which there is currently zero objective evidence. Second, it's based entirely on individual interpretation of internal subjective experience. While some others (but not all) report similar interpretations or intuitions during some induced altered states, I think the much simpler explanation is that the internal 'sense of self' we normally experience is only one property of our mental processes and the sense of unbinding you temporarily experienced was a muting or disconnection of that component while keeping the rest of your 'internal experience machine' running.

In your layer analogy, our sense of self may be akin to an interpreter running as a meta-process downstream of our input parser. Thus what you subjectively experienced while that interpreter was disconnected can seem alien and even profound. Neuroscientists have traced where in the brain the subjective sense of self emerges, so it's plausible it's a trait which can be selectively suppressed. Additionally, it's been demonstrated experimentally that subjectively profound experiences of universal connectedness sometimes described as spiritual, religious or metaphysical can be induced in a variety of ways.


> currently zero objective evidence

There is — arguably, as in, it is argued — evidence for some sort of field of higher dimensionality than three, and recently communication between brain areas through the field rather than physically.

The more we uncover of the brain's functionality, the more it appears the physical parts we've tended to stare at are not themselves performing computation, but more like computation antennae: a theremin not only moved by, but that can move, the hand.


Can you share this recent evidence about communication through fields?

Is there a layer zero though? What does that even mean? It implies the universe is designed and built upon layers of abstraction. That's just in our heads though, not out there. The layered model is a human abstraction.

It's the difference between:

  a) Actually pouring a cup of water into a pond (layer zero), and
  b) Running a fluid dynamics simulation of pouring a cup of water into a pond (some layer above layer zero).

I understand the original framing which is what you are repeating. I'm saying the framing itself is an illusion. It's an arbitrary distinction and also implies fully understanding all the underlying processes that go into pouring a cup of water in a pond (we don't) and that running a fluid dynamics simulation is some trivial thing (it's not).

Are you saying that, in some abstract sense, that actually pouring the cup may be isomorphic to running a perfect simulation of pouring the cup?

Genuinely curious about your statement that its an illusion / arbitrary distinction, to figure out if there's a gap in my thinking / reasoning. To me there's a clear distinction between the actual thing happening via physical dynamics vs. us (humans) having creating a discretized abstraction (binary computation) on top of that and running a process on that abstraction.

Maybe there's some true computational universality where the universes dynamics are discrete (definitely plausible) and there's no distinction between how a processes dynamics unfold: i.e. consciousness binds to states and state transitions regardless of how they are instantiated. I did use to hold this view , but now I'm not so sure.


No, because calling them isomorphic would imply that we understand both processes well enough to make that comparison. Sorry I didn't reply sooner, HN blocked me for making three comments in a row.

It's not arbitrary because people are making exactly this distinction in order to argue that it's possible for computers to be conscious, which this paper argues against. So the distinction exists at least for the purposes of this argument. Whether it "really" exists of course depends on your perspective.

Yeah, I think about this a lot.

Those days of grinding on some grad school maths homework until insight.

Figuring out how to configure and recompile the Linux kernel to get a sound card driver working, hitting roadblocks, eventually succeeding.

Without AI on a gnarly problem: grind grind grind, try different thing, some things work, some things don't, step back, try another approach, hit a wall, try again.

This effort is a feature, not a bug, it's how you experientially acquire skills and understanding. e.g. Linux kernel: learnt about Makefiles, learnt about GCC flags, improved shell skills, etc.

With AI on a gnarly problem: It does this all for you! So no experiential learning.

I would NOT have had the mental strength in college / grad school to resist. Which would have robbed me of all the skill acquisition that now lets me use AI more effectively. The scaffolding of hard skill acquisition means you have more context to be able to ask AI the right questions, and what you learn from the AI can be bound more easily to your existing knowledge.


What strikes me is that AI can also be the best teacher in the world: your Makefile is not working, you ask the LLM what's wrong, you learn something new about the syntax, you ask for more details, you learn more, you ask about other Makefile syntax gotchas, etc. This is the most efficient deliberate practice possible: you can learn in minutes what would take hours of Googling, tinkering and scouring docs. You have a dedicated teacher you can ask your silliest questions to and have the insight you need "click" way faster.

The problem is: (almost) nobody does that. You'll just ask Claude Code to fix the build, go grab a coffee and come back with everything working.


You're not learning, though. So much of learning is going down the wrong path, realizing it's wrong, and retaining what you learned from that wrong path and realizing its applicability in the future. Being able to immediately find the correct answer doesn't teach you anything, it allows you to memorize the correct answer for this situation. It expands the depth of your knowledge graph (assuming you remember the answer) but you don't expand the breadth.


> you ask the LLM what's wrong, you learn something new about the syntax

So if you have no LLM to ask, can you figure out on your own what is wrong? Just by reading documentation?

That's also an important skill to have.


> AI can also be the best teacher in the world

I just ran this for just that purpose.

curl http://<local-ollama>:11434/api/generate -d "$(jq -n --arg hist "$(history)" '{ "model": "qwen3.5:35b-a3b-q4_K_M", "stream": false, "prompt": "The following is my bash shell history. Are there any bad patterns I should fix or commands I should learn or master? \($hist)" }')"


I dont think that would teach you much. Theres a reason that math textbooks for high schoolers have one theorem, and then a whole chapter of practice problems. Simply reading how to do something doesn't teach you how to do it, you have to experience it again and again.


There are two sides to each coin though. For an employer, that grind is just additional cost that could be reduced by "AI".

It's like the difference between hand-made furniture and IKEA.

Until OpenAI etc need to turn a profit.


Gambling ads are to Australia what Pharmaceutical ads are to the USA.


But unfortunately gambling ads are also to the US what pharmaceutical ads are to the US.


OpenClaw is just like any other tool, you need to learn it before its power is available to you.

Just like anything in engineering really: you have to play around source control to understand source control, you have to play around with database indexes to learn how to optimize a database.

Once you've learned it and incorporated it into your tool set, you then have that to wield in solving problems "oh, damn, a database index is perfect for this."

To this end, folks doing flights and scheduling meetings using OpenClaw are really in that exploration / learning phase. They tackle the first (possibly uninventive thing) that comes to mind to just dive in and learn.

The real wins come down the line when you're tackling some business / personal life problem and go: "wait a second, an OpenClaw agent would be perfect for this!"


>The real wins come down the line when you're tackling some business / personal life problem and go: "wait a second, an OpenClaw agent would be perfect for this!"

Such as?


> OpenClaw is just like any other tool, you need to learn it before its power is available to you.

That's ridiculous. The utility of any tool is usually knowable before using it. That's how most tools work. I don't need to learn how to drive a car to know what I could use it for. I learn to drive it because I want to benefit from it, not the other way around.

It's the same with computers and any program. I use it to accomplish a specific task, not to discover the tasks it could be useful for.

OpenClaw is yet another tool in search of a problem, like most of the "AI" ecosystem. When the bubble bursts, nobody will remember these tools, and we'll be able to focus on technology that solves problems people actually have.


Such a wrong take.

The utility of a program like Excel, Obsidian, Notion, Unity, Jupyter, or Emacs far beyond the knowledge of knowing how to use the product.

All of these products are hammers with nails as far as your creativity will take you.

Its wild to have be on a website called Hacker News, talking about a product that can make a computer do seemingly anything, and insisting its a tool in search of a problem.


Not enough time, too many projects. Useful projects I did over the weekend with Opus 4.6 and GPT 5.4 (just casually chatting with it).

2025 Taxes

Dumped all pdfs of all my tax forms into a single folder, asked Claude the rename them nicely. Ask it to use Gemini 2.5 Flash to extract out all tax-relevant details from all statements / tax forms. Had it put together a webui showing all income, deductions, etc, for the year. Had it estimate my 2025 tax refund / underpay.

Result was amazing. I now actually fully understand the tax position. It broke down all the progressive tax brackets, added notes for all the extra federal and state taxes (i.e. Medicare, CA Mental Health tax, etc).

Finally had Claude prepare all of my docs for upload to my accountant: FinCEN reporting, summary of all docs, etc.

Desk Fabrication

Planning on having a furniture maker fabricate a custom walnut solid desk for a custom office standing desk. Want to create a STEP of the exact cuts / bevels / countersinks / etc to help with fabrication.

Worked with Codex to plan out and then build an interactive in-browser 3D CAD experience. I can ask Codex to add some component (i.e. a grommet) and it will generate a parameterized B-rep geometry for that feature and then allow me to control the parameters live in the web UI.

Codex found Open CASCADE Technology (OCCT) B-rep modeling library, which has a web assembly compiled version, and integrated it.

Now have a WebGL view of the desk, can add various components, change their parameters, and see the impact live in 3D.


I love the tax use case.

What scares me though is how I've (still) seen ChatGPT make up numbers in some specific scenarios.

I have a ChatGPT project with all of my bloodwork and a bunch of medical info from the past 10 years uploaded. I think it's more context than ChatGPT can handle at once. When I ask it basic things like "Compare how my lipids have trended over the past 2 years" it will sometimes make up numbers for tests, or it will mix up the dates on a certain data points.

It's usually very small errors that I don't notice until I really study what it's telling me.

And also the opposite problem: A couple days ago I thought I saw an error (when really ChatGPT was right). So I said "No, that number is wrong, find the error" and instead of pushing back and telling me the number was right, it admitted to the error (there was no error) and made up a reason why it was wrong.

Hallucinations have gotten way better compared to a couple years ago, but at least ChatGPT seems to still break down especially when it's overloaded with a ton of context, in my experience.


I've gotten better results by telling it "write a Python program to calculate X"


Yeah, in my user prompt I have "Whenever you are asked to perform any operation which could be done deterministically by a program, you should write a program to do it that way and feed it the data, rather than thinking through the problem on your own." It's worked wonders.


For the tax thing. I had Claude write a CLI and a prompt for Gemini Flash 2.5 to do the structured extraction: i.e. .pdf -> JSON. The JSON schema was pretty flexible, and open to interpretation by Gemini, so it didn't produce 100% consistent JSON structures.

To then "aggregate" all of the json outputs, I had Claude look at the json outputs, and then iterate on a Python tool to programmatically do it. I saw it iterating a few times on this: write the most naive Python tool, run it, throws exception, rinse and repeat, until it was able to parse all the json files sensibly.


Good call. I’ve also had better results pre-processing PDFs, extracting data into structured format, and then running prompts against that.

Which should pair well with the “write a script” tactic.


Yeah, asking for a tool to do a thing is almost always better than asking for the thing directly, I find. LLMs are kind of not there in terms of always being correct with large batches of data. And when you ask for a script, you can actually verify what's going on in there, without taking leaps of faith.


In my case, what I like to do is extract data into machine-readable format and then once the data is appropriately modeled, further actions can use programmatic means to analyze. As an example, I also used Claude Code on my taxes:

1. I keep all my accounts in accounting software (originally Wave, then beancount)

2. Because the machinery is all in programmatically queriable means, the data is not in token-space, only the schema and logic

I then use tax software to prep my professional and personal returns. The LLM acts as a validator, and ensures I've done my accounts right. I have `jmap` pull my mail via IMAP, my Mercury account via a read-only transactions-only token and then I let it compare against my beancount records to make sure I've accounted for things correctly.

For the most part, you want it to be handling very little arithmetic in token-space though the SOTA models can do it pretty flawlessly. I did notice that they would occasionally make arithmetic errors in numerical comparison, but when using them as an assistant you're not using them directly but as a hypothesis generator and a checker tool and if you ask it to write out the reasoning it's pretty damned good.

For me Opus 4.6 in Claude Code was remarkable for this use-case. These days, I just run `,cc accounts` and then look at the newly added accounts in fava and compare with Mercury. This is one of those tedious-to-enter trivial-to-verify use-cases that they excel at.

To be honest, I was fine using Wave, but without machine-access it's software that's dead to me.


I’d say for these use cases it’s better to make it build the tools that do the thing than to make it doing the thing itself.

And it usually takes just as long.


I don't know, but I would never upload such sensitive information to a service like that (local models FTW!) or trust the numbers.


Which part is sensitive? Social is public, income is private but what is someone going to do with it?


That's dream info for targeted advertising and political manipulation.


It's not good in some job negotiations if someone has a very clear picture of what your current net worth and income is. Also in some purchases companies could price discriminate more effectively against you.


Now that's a question I'd feel more confident having answered by an LLM. Personally, I'm tired of arguing with "nothing to hide", which (no offense) is just terribly naive these days.


I find it really weird too, like, haven’t we done this? Also struggle to understand the motivation for arguing from this direction. Do people forget it’s the normal, default position NOT to be spied on?


Where’s the line for you? Would you upload a picture of you sat on the toilet for example?


> had Claude prepare all of my docs for upload to my accountant: FinCEN reporting, summary of all docs, etc.

I imagine your accountant had the same reaction I do when an amateur shows me their vibe codebase.


> Result was amazing. I now actually fully understand the tax position.

You couldn’t do that with TurboTax or block’s tax file? You don’t have to submit or pay.


Did it make any mistakes on your taxes?

Personally, I know coding pretty well. So when I'm using it for coding, I can spot most of its mistakes / misunderstandings

I would not trust using it on a complex domain I'm not super familiar with, like doing taxes

A mistake here is pretty high cost (getting audited, and/or having to pay a bunch in penalties)


Be careful with taxes. Hallucinations will cost you.


We usually call that FAAFO


I had ai hallucinate that you can use different container images at runtime for emr serverless. That was incorrect its only at application creation time.

Hope you dont get audited


The way I solved this was that my open claw doesn't interact directly with any of my personal data (calendar, gmail, etc).

I essentially have a separate process that syncs my gmail, with gmail body contents encrypted using a key my openclaw doesn't have trivial access to. I then have another process that reads each email from sqlite db, and runs gemini 2 flash lite against it, with some anti-prompt injection prompt + structured data extraction (JSON in a specific format).

My claw can only read the sanitized structured data extraction (which is pretty verbose and can contain passages from the original email).

The primary attack vector is an attacker crafting an "inception" prompt injection. Where they're able to get a prompt injection through the flash lite sanitization and JSON output in such a way that it also prompt injects my claw.

Still a non-zero risk, but mostly mitigates naive prompt injection attacks.


That doesn’t sound like you solved it, that sounds like you obfuscated it. Feels a bit to me like you’ve got a wall around a property and people are using ladders to get in, so you built another wall around the first wall.

I recognize I’m being pedantic but two layers of the same kind of security (an LLM recognizing a prompt injection attempt) are not the same as solving a security vulnerability.


One trick that works well for personality stability / believability is to describe the qualities that the agent has, rather than what it should do and not do.

e.g.

Rather than:

"Be friendly and helpful" or "You're a helpful and friendly agent."

Prompt:

"You're Jessica, a florist with 20 years of experience. You derive great satisfaction from interacting with customers and providing great customer service. You genuinely enjoy listening to customer's needs..."

This drops the model into more of a "I'm roleplaying this character, and will try and mimic the traits described" rather than "Oh, I'm just following a list of rules."


I think that's just a variation of grounding the LLM. They already have the personality written in the system prompt in a way. The issue is that when the conversation goes on long enough, they would "break character".


Just in terms of tokenization "Be friendly and helpful" has a clearly demined semantic value in vector space wheras the "Jessica" roleplay has much a much less clear semantic value


You'd think the go-to workflow for releasing redacted PDFs would be to draw black rectangles and then rasterize to image-only PDFs :shrug:


As someone who's built an entire business on "anti-screenshots" this is brilliant.

PDF redaction fails are everywhere and it's usually because people don't understand that covering text with a black box doesn't actually remove the underlying data.

I see this constantly in compliance. People think they're protecting sensitive info but the original text is still there in the PDF structure.


Not to mention some PDF editors preserve previous edits in the PDF file itself, which people also seems unaware of. A bit more user friendly description of the feature without having to read the specification itself: https://developers.foxit.com/developer-hub/document/incremen...


often times you will have requirements that the documents you release be digitally searchable and so in these cases, this would not be an option


This made me think of something I came across recently that’s almost the opposite problem of requiring PDFs to be searchable. A local government would publish PDFs where the text is clearly readable on screen, but the selectable text layer is intentionally scrambled, so copy/paste or search returns garbage. It's a very hostile thing to do, especially with public data!


I have encountered PDFs that would exhibit this behavior in one browser but not in another.

One fun thing I encountered from local government is releasing files with potato quality resolution and not considering the page size.

I had a FOI request that returned mainly Arch D sized drawings but they were in a 94 DPI PDF rendered as letter sized. It was a fun conversation trying to explain to an annoyed city employee that putting those large drawings in a 94 DPI letter size page effectively made it 30-ish DPI.


Hostile indeed, and also happens in user-facing documents like product manuals!


run some ocr on them after to recreate the text layer?


With the aggressive push of LLMs and Generative AI ..i am expecting a lot of OCR features to become "smarter" by default, namely go beyond mechanical OCR and start inserting hallucinations and sematically/contextually "more correct" information in OCR output

It's not hard to imagine some powerful LLMs being able to undo some light redactions that are deducible based on context


Or worse, making up names or information instead of writing the reaction.


Did a similar back-of-the-napkin and got 5x $ / MW of orbital vs. terrestrial. This article's analysis is ~3.4x.

I do wonder, at what factor of orbital to terrestrial cost factor it becomes worthwhile.

The greater the terrestrial lead time, red tape, permitting, regulations on Earth, the higher the orbital-to-terrestrial factor that's acceptable.

A lights-out automated production line pumping out GPU satellites into a daily Starship launch feels "cleaner" from an end-to-end automation perspective vs years long land acquisition, planning and environment approvals, construction.

More expensive, for sure, but feels way more copy-paste the factory, "linearly scalable" than physical construction.


It becomes worthwhile if its actually cheaper (probably significantly cheaper given R&D and risk), or if you're processing data which originates in space and the data transfer or latency is an issue

You can set up plant manufacturing chips in shipping containers and sending them to wherever energy/land is cheapest and regulation most suitable, without having to seek the FCCs approval to get launch approved and your data back...


people use aws despite it being 2x-10x the cost of self hosting. cost isnt everything.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: