Hacker News | dariusj18's comments

That was the hardest part of learning PHP, all the code examples online were just awful.

Worked on a PHP project once. Every time I asked why something was done a certain way the answer was "dunno, we copy pasted this code snippet."

Certain popular PHP codebases appear to use a similar methodology.


I think that is the role speculative fiction, like sci-fi, takes.

As Asimov and Roddenberry envisioned it, yes. Certainly not as the drivel that carries the sci-fi label today.

On first read I thought this was an operator for k8s, but it is just comparing itself to k8s as an orchestration system.


Correct, it's not a k8s operator. Standalone binary, zero dependencies. Just uses the same mental model because clusters and namespaces map really well to multi-team agent management.


There are a number of other aspects that make Kubernetes what it is. In particular, the API surface is well defined at a very good level of abstraction.

For AI, I'm using ADK. It's my "kubernetes for agents" from this perspective of abstractions, with common task management handled by the system.

https://google.github.io/adk-docs/get-started/about/


Given the water needs of data centers and the ongoing and upcoming water scarcity, I imagine the problem of heat dissipation is easier to solve, long term, in space.


We can and do build data centres that don't use evaporative cooling; evaporation is just often the cheapest option in places with large natural water sources.


Wut?


But then if you pay for support, it only works in one account.


Assuming you're playing the "only pay for a business support plan when you actually need to file a ticket" game like me, with a very slight amount of effort this works in your favor instead of being a downside. Put your expensive-but-reliable stuff (e.g. large 24/7 EC2 instances, your S3 buckets) in one account and your cheap-but-fiddly stuff (e.g. your EKS cluster) in another account. When you need support on the fiddly stuff, you're only paying a percentage of that account's bill.

At work we did not follow this advice, so we have a single account and we're vulnerable to an unnecessarily high support bill if we happen to need to file a ticket in an expensive month. We could have avoided this with account segmentation; our expensive stuff tends not to be the stuff we need support on.
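To see why the split matters, here's a rough sketch of how a tiered percentage-of-spend support fee plays out. The tier breakpoints below are the Business support numbers as I recall them (10% of the first $10K, 7% up to $80K, 5% up to $250K, 3% beyond, $100 minimum); verify against current AWS pricing before relying on them.

```python
# Sketch of a tiered percentage-of-spend support fee (assumed Business
# support tiers; check AWS's current pricing page before trusting these).
TIERS = [(10_000, 0.10), (80_000, 0.07), (250_000, 0.05), (float("inf"), 0.03)]
MINIMUM = 100.0  # assumed monthly minimum charge

def business_support_fee(monthly_spend: float) -> float:
    fee, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        if monthly_spend > prev_cap:
            # Charge this tier's rate on the slice of spend inside the tier.
            fee += (min(monthly_spend, cap) - prev_cap) * rate
        prev_cap = cap
    return max(fee, MINIMUM) if monthly_spend > 0 else 0.0

# Single account with $30k/month spend vs. the same spend split so that
# only a $5k "fiddly" account needs support in a given month:
single = business_support_fee(30_000)  # 10% of first 10k + 7% of next 20k ≈ 2400
split = business_support_fee(5_000)    # 10% of 5k ≈ 500
```

Under these assumed tiers, segmenting so only the fiddly $5k account carries the plan cuts the month's support fee from roughly $2,400 to $500.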


This seems like a ton of work


That's always the case with AWS: reducing costs takes legwork, and by the same token, you can avoid legwork by accepting a higher bill.


Enterprise support agreements are organization-wide.

Although you can gamify Business support (which is priced as a percentage of your bill) to not include things like your CloudTrail account, which probably never requires support but can get expensive across a large enough organization.


I had a similar issue recently and was able to convince the AI agent to give me a phone number to talk to a support representative. They manually fixed my account and key and I was good to go in a few minutes.

What a PITA it was until I got a human, though.


Not a big fan of this kind of promo article that just links to several of their own learning resources instead of giving some actual examples inline.

I've worked in Mongo enough to know that whatever decision I make will end up being wrong.

What I will never understand is why Mongo doesn't have some simple means of document referencing that automatically updates the documents a doc is embedded in. If it's such an important pattern that every app needs to reinvent it for itself, just add it to the system.
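The fan-out every app ends up writing by hand looks roughly like this. It's sketched with plain dicts instead of real MongoDB collections so it's self-contained; the names (`authors`, `posts`, `rename_author`) are made up for illustration.

```python
# Canonical document plus denormalized copies embedded in other documents,
# sketched with plain dicts standing in for MongoDB collections.
authors = {"a1": {"_id": "a1", "name": "Ada"}}
posts = [
    {"_id": "p1", "author": {"_id": "a1", "name": "Ada"}, "title": "Hello"},
    {"_id": "p2", "author": {"_id": "a1", "name": "Ada"}, "title": "World"},
]

def rename_author(author_id: str, new_name: str) -> None:
    # 1. Update the canonical document.
    authors[author_id]["name"] = new_name
    # 2. Fan out to every document that embeds a copy -- the step
    #    MongoDB leaves entirely to application code.
    for post in posts:
        if post["author"]["_id"] == author_id:
            post["author"]["name"] = new_name

rename_author("a1", "Ada Lovelace")
```

With a real driver like pymongo, step 2 is roughly `db.posts.update_many({"author._id": author_id}, {"$set": {"author.name": new_name}})` — but the app still has to remember every collection that embeds the copy.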


I think there is confusion because coding is easy, software engineering is hard.


Coding was never the hardest problem, and it is hard to say why people are taking so long to realise it.


People who don't know how to code know they don't know how. They can look over your shoulder and see that it looks like gibberish, and they also have no interest in understanding it even if they could.

On the other hand, designing the software or engineering a solution to the problem seems like something they could do, as far as they know, because it's not something concrete that they can look at and see is beyond their abilities.


I've been wanting a local LLM appliance.


Tech is evolving too quickly; every year the hardware will be much more powerful at the same price (as LLM optimizations reach hardware), so you’d end up replacing the device frequently.


Not convinced. Are CPUs and GPUs killing it %/$ wise each year like it's 1996?

Models are killing it, but that is just an "ollama run" command away.


GPUs and NPUs are gaining optimizations for the transformer architecture. It’s not “GPU is 3x faster this year”, it’s “GPU has gates specifically designed to accelerate your LLM workload”

See for instance [0], which is just starting to appear in commercial parts.

This is continuing; pretty much every low cost SoC maker is racing to build and extend ML optimizations.

0. https://www.synopsys.com/blogs/chip-design/best-edge-ai-proc...


Like phones?


One of the most important jobs in an institution is knowing things. "Who do I talk to about ...?", "Why is X like this?"

