Hacker News

Getting proper (and official) ROCm support across their consumer GPU line will be big as well. Hobbyists aren't buying MI300s and their ilk. And surely AMD is better off if a would-be hobbyist (or low-budget academic/industrial researcher) chooses a Radeon card over something from NVIDIA!

I'm about to buy a high-end Radeon card myself, gambling that AMD is serious about this and will get it right, and that it won't be a wasted purchase. So yeah, if I seem like an AMD fan-boy (I am, somewhat) at least I'm putting my money where my mouth is. :-)

AMD’s software stacks for each class of product are separate: ROCm (short for Radeon Open Compute platform) targets its Instinct data center GPU lines (and, soon, its Radeon consumer GPUs),

They've been saying this for a while, and I'm encouraged by reports that people "out there" in the wild have actually gotten this to work with some cards, even in advance of the official support shipping. So here's hoping they are really serious about this point and make this real.



Yeah, don't. Buy an Nvidia and get shit done.


For some people, it's not just about getting results or "getting shit done" but about the journey and the learning along the way. Also, AMD's approach to openness tends to be a bit better than NVIDIA's, so there's that too. And since we're on Hacker News after all, an AMD GPU for the hacker betting on the future seems pretty fitting.


For someone using Linux, an AMD card may be even better suited for 'getting things done'

Wayland and many things outside of GPGPU work much better; e.g., power control/gating/monitoring are all available over sysfs. You can over/underclock a fleet of systems with traditional config management.
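As a rough sketch of what that sysfs interface looks like (paths assume the amdgpu kernel driver; the card and hwmon indices vary per machine, so treat the exact filenames as assumptions):

```shell
#!/bin/sh
# Sketch: read amdgpu power/clock state over sysfs, no vendor tools needed.
# Assumes the amdgpu driver; skips any card that doesn't expose these files.
for dev in /sys/class/drm/card*/device; do
  [ -e "$dev/power_dpm_force_performance_level" ] || continue
  echo "== $dev =="
  cat "$dev/power_dpm_force_performance_level"   # auto / manual / low / high ...
  cat "$dev/pp_dpm_sclk" 2>/dev/null             # shader clock states, '*' marks active
  for hwmon in "$dev"/hwmon/hwmon*; do
    [ -e "$hwmon/power1_average" ] && cat "$hwmon/power1_average"  # draw, microwatts
  done
done
scan_status=done
echo "sysfs scan: $scan_status"
```

Because it's all plain files, the same reads and writes drop straight into whatever config management already runs on the fleet.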

GPGPU surely deserves some weight given the context of the thread, but let's not ignore the warts Nvidia shows elsewhere.


> For someone using Linux, an AMD card may be even better suited for 'getting things done'

It seems like that on paper, but in practice I've been getting constant GPU crashes and freezes on both my personal and work PCs. No one seems to know what this is about, and it may be multiple issues, but it's been like this for a long time now.

https://gitlab.freedesktop.org/drm/amd/-/issues/1974#note_21...


I'm sorry to hear about the troubles you've seen. I did hedge slightly with 'may' :p

I've had the exact opposite experience; from way back when the 4870 series was common up to the RX6000 now, AMD has been great for me on Linux. Across more systems than I can really count, Intel/AMD have been great, while Nvidia, not so much.

Most recently I've not used the 'auto' method of DPM (mentioned in that issue).

I've deliberately set this to 'manual' since at least picking up RX6000 for undervolting/overclocking. Perhaps this is part of why I've been so pleased.
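For the curious, that switch is just a sysfs write. A hedged sketch of the manual-DPM plus undervolt flow on RDNA2 follows; the `s`/`vo`/`c` command syntax is what amdgpu's `pp_od_clk_voltage` accepts on those cards, but `card0` and the numbers are placeholders, not recommendations:

```shell
#!/bin/sh
# Sketch: force manual DPM and stage an undervolt via amdgpu's overdrive interface.
# Needs root and a kernel booted with the overdrive bit set in amdgpu.ppfeaturemask.
# card0 and the clock/voltage values below are placeholders - check your own hardware.
DEV=/sys/class/drm/card0/device
if [ -w "$DEV/power_dpm_force_performance_level" ]; then
  echo manual > "$DEV/power_dpm_force_performance_level"
  echo "s 1 2400" > "$DEV/pp_od_clk_voltage"   # max shader clock -> 2400 MHz
  echo "vo -50"   > "$DEV/pp_od_clk_voltage"   # voltage offset -50 mV (RDNA syntax)
  echo "c"        > "$DEV/pp_od_clk_voltage"   # commit the staged values
  dpm_note="applied"
else
  dpm_note="skipped (no writable amdgpu DPM interface here)"
fi
echo "$dpm_note"
```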

I'm curious what software versions you run - what distributions do you tend to prefer?


Agreed, AMD and Intel are much easier to rely on. I’ve never had it nicer on Linux than I do now with a primary AMD GPU and a secondary NVIDIA that I can use for games or CUDA, or pass to a VM.

It feels great finally having bleeding-edge kernels and Wayland compositors, with the fallback of a stable driver inside a Linux or Windows VM if something breaks in the NVIDIA blob, and my desktop stays operational regardless.


That setup is really nice, I miss doing VFIO. The demarcation point is truly a delight, and with hugepages/CPU pinning, the performance cost is negligible.


In principle I'm all for openness, but it doesn't mean anything if the thing doesn't work. I just haven't found AMD drivers to be reliable enough to use, on any platform, whereas with NVidia I install the proprietary drivers and then it just works, on both Linux and FreeBSD.


That's a shame. Do you tend towards the mobile side, by chance?

The vast majority of my experience has been with discrete (desktop) cards and very new kernels/mesa. It's been great, here - on a number of hardware configs.


Mostly laptops, but generally the chunky "gaming" kind with discrete GPUs, so IDK.


Ah, yea those 'dual GPU' systems have been truly awful for me; discrete + integrated.

I gave Linux/the ecosystem at large a chance with a couple of those and was generally disappointed.

No good way to be sure which card was used... the control mechanism was a bunch of glue/tape.


Nvidia is still much more reliable than Radeon on Linux.


That hasn't been my experience, but as with most choices, experiences vary. In my case, this has mostly been with desktop/discrete GPUs.

I've been burned by enough laptops with mobile cards that I just stick with integrated; Linux does/did so poorly with Optimus or whatever dual high/low power GPU tech that I never bought another.

I'm a little doubtful, largely because AMD contributes to the kernel/mesa far more than Nvidia. There's no Linux monolith to support this; not all distributions are equally current.

I've had discrete cards from all of the major vendors for the last few generations for VFIO testing on Linux on mainline kernels.

Intel/AMD have generally been more reliable (for me) and quicker to adopt standards.

If you run an LTS or something with generally older software, Nvidia is probably fine and dandy.

It's a regular routine to have to wait for them to support new kernels. Yes, I know about DKMS, no it isn't always sufficient.
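For reference, the DKMS flow looks roughly like this when it does work; `dkms status` and `dkms autoinstall` are the real subcommands, but whether the Nvidia module actually builds against a brand-new kernel is exactly the part that sometimes fails:

```shell
#!/bin/sh
# Sketch: check and rebuild out-of-tree modules (e.g. the Nvidia driver) with DKMS.
if command -v dkms >/dev/null 2>&1; then
  dkms status   # list registered module/version pairs and their build state
  # As root, rebuild everything registered for the running kernel:
  #   dkms autoinstall -k "$(uname -r)"
  dkms_note="dkms present"
else
  dkms_note="dkms not installed"
fi
echo "$dkms_note"
```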


AMD's debuggers and profilers let you disassemble kernel/shader machine code and introspect registers and instruction latency. That's something Nvidia's Nsight tools, at least, don't offer.


I get where you're coming from, and in fact I'm planning to build an NVIDIA-based ML box as well. But I pointedly want to support AMD here for a variety of reasons, including an ideological bias towards open-source software and a historical affinity for AMD dating back to the mid-'90s.


Oh, if you can afford it, of course, go for it. I was just afraid you'd spend money on a high-end card and then be disappointed.


Having come from Nvidia before recently switching to AMD, this is a naive take on it. Their compute software might be better but their Linux driver is abysmal to manage and takes the fun out of owning a PC. Never again. I'd take AMD over them even if the card burned my house down each time I used it.


A bit harsh, but I agree in that I'll only believe it when I see it. I've been burned by empty promises from AMD before.


Easier said than done, at least for H100.


They're talking about consumer cards, which is the point. You can learn CUDA on any consumer Nvidia card and have it translate to the fancier gear; that's part of why Nvidia has so much mindshare.

E.g., I can write my CUDA code on my 3090s, my boss can test it on his laptop's discrete GPU, and then we can take the time to bring it to our V100s and A100s, and nothing really has to change.


Apologies for the snark, but maybe it's better that so far AMD has had terrible consumer card support. What little hardware they have targeted seems to be barely stable and barely working for the very limited workloads that are supported. If regular consumers were told their GPUs would work for GPGPU, they might be rotten pissed when they found out the real state of affairs.

But if AMD really wants a market impact - which is what this submission is about - getting good support across a decent range of consumer GPUs is absolutely required. They cannot win this ecosystem battle with only datacenter mindshare.


Good luck man! It's your money to waste.



