It provides a filesystem abstraction, which agents are really good at interacting with. And because it's just a POSIX filesystem, you can put a SQLite database directly on it and get those same transactional capabilities too.
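To make that concrete, here's a minimal sketch (Python stdlib only; the file name and table are made up for illustration) of getting transactional guarantees just by dropping a SQLite file onto a filesystem path:

```python
import os
import sqlite3
import tempfile

# SQLite stores the whole database in one ordinary file, so any POSIX
# filesystem path works; writes inside a transaction commit atomically.
path = os.path.join(tempfile.mkdtemp(), "agent_state.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")

with conn:  # opens a transaction; commits on success
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))

try:
    with conn:  # this transaction rolls back on the exception
        conn.execute("INSERT INTO notes (body) VALUES (?)", ("discarded",))
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

rows = conn.execute("SELECT body FROM notes").fetchall()
print(rows)  # only the committed row survives: [('hello',)]
conn.close()
```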
Having been on both sides of games, I'd like to raise some points for people to consider about how this differs from standard B2B SaaS; for clarity, I'm not saying 30% is good:
- One chargeback on your $5 game can cost you $55 or more; a handful of them and you permanently lose the ability to accept payments anywhere, including for future businesses outside of games
- The number of people who will use their parents' cards is eye-watering
- The value of offline payment acceptance in the form of physical cards (kids don't have access to standard payment rails, but can buy your game on Steam with cash)
- Valve hasn't taken a flat 30% for almost a decade now
- You often don't get to use Stripe at 2-3%. Your cost is closer to 15% if you choose to process your own payments
A publisher gives you a development budget because most games aren't made by one person and you need money: at least $50,000-$150,000 for a small PC game.
Then the publisher takes 70-90% before recoup, and 50% after, of what remains after VAT, refunds, and Valve's 30%.
The problem is when you've spent $100,000 and sold, let's say, $400,000 worth:
* Valve gets $120,000
* Publisher gets $100,000 (recoup) + $90,000 (their half of the remainder)
And you get the other $90,000, and the real number would be much worse because of VAT, refunds, etc.
Oh, and don't forget to pay your taxes on that $90,000. Good luck!
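For anyone who wants to sanity-check the arithmetic, here's a rough sketch of the waterfall as described above (the function and parameter names are mine; VAT and refunds are ignored here, which only makes the developer's number worse):

```python
def publisher_waterfall(gross, budget, store_cut=0.30, post_recoup_split=0.50):
    """Sketch of the split: store cut off the top, publisher recoups the
    development budget, then the remainder is split with the developer."""
    store = gross * store_cut                 # Valve's cut off the top
    after_store = gross - store
    recoup = min(budget, after_store)         # publisher recoups the budget first
    remainder = after_store - recoup
    publisher_share = remainder * post_recoup_split
    developer_share = remainder - publisher_share
    return store, recoup + publisher_share, developer_share

store, publisher, developer = publisher_waterfall(400_000, 100_000)
print(store, publisher, developer)  # 120000.0 190000.0 90000.0
```

So on $400,000 of sales the developer who carried the project ends up with less than a quarter of what Valve and the publisher take combined, before taxes.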
> One chargeback on your $5 game can cost you $55 or more; a handful of them and you permanently lose the ability to accept payments anywhere, including for future businesses outside of games
This sounds like personal experience. Can you elaborate?
Edit: OHH perhaps you are saying this is one of the benefits of Steam; that it shields you from all this.
> Edit: OHH perhaps you are saying this is one of the benefits of Steam; that it shields you from all this.
Yes. In a similar way: regular companies get Stripe at commodity pricing; games get Xsolla, Paysafe, Tebex, and a massive compliance questionnaire. Games are software (to you) but closer to porn or gambling in risk profile (to MoRs and processors).
People are less likely to charge back Steam because their other games would be frozen, and Steam has the volume to dilute chargebacks, whereas you, starting out, may hit double-digit dispute rates. Whether this is fair is an exercise best left to the reader ;)
Likely what they're implying is that chargebacks have indirect costs that you can ballpark at around $50 per chargeback. So Steam would likely claw back the $5 of revenue from the developer for the $5 chargeback, but the cost of processing the chargeback is absorbed by Steam. I don't know if they charge developers a separate chargeback fee, but it wouldn't make sense to, as Steam is the one validating and processing payments.
> but no one does this for regular apps these days - never heard of it before
Everyone does this to check whether files are identical, be it SHA, MD5, or something else. I can't imagine any other method that would come to mind first for checking whether two files are the same.
I don't mean to offend, but I quite literally mean everyone does this. Every software updater, every game patcher, every check of whether two binary files are identical (pixel-perfect/lossless in this case: BMPs or PNGs created by the same encoder from the same inputs would qualify; JPEGs likely would not): all of them do exactly this.
GPT analysis, or similarity and image-chunk hashing, would not be the first thing you turn to if what you wanted was an exact, pixel-perfect match. I'm curious what your background is if this is the case.
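For reference, the standard trick looks something like this: a minimal sketch of chunked hashing to check byte-for-byte identity (function names are mine, not from any particular tool):

```python
import hashlib
import os
import tempfile

def file_digest(path, algo="sha256", chunk_size=1 << 20):
    """Hash a file in chunks so large files never need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_identical(a, b):
    # Equal digests => identical bytes (barring an astronomically unlikely
    # collision); any single-byte difference changes the digest completely.
    return file_digest(a) == file_digest(b)

# Tiny demo with throwaway files.
d = tempfile.mkdtemp()
for name, data in [("a.bin", b"same bytes"), ("b.bin", b"same bytes"),
                   ("c.bin", b"different")]:
    with open(os.path.join(d, name), "wb") as f:
        f.write(data)

same = files_identical(os.path.join(d, "a.bin"), os.path.join(d, "b.bin"))
diff = files_identical(os.path.join(d, "a.bin"), os.path.join(d, "c.bin"))
print(same, diff)  # True False
```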
No one that I've seen takes automated screenshots of webapps or games or what have you at pre-determined timestamps to make sure the app looks pixel-identical with every change.
(regardless of the method; the SHA'ing isn't the point here, the point is that it's a shortcut instead of "inspect the image for any regressions", since we don't need to inspect the image at all if it is identical)
> No one takes automated screenshots of webapps or games or what have you at pre-determined timestamps to make sure the app looks identical with every change.
I'm confused. We have done this at every place I've ever worked; it's very standard. Set timestamps, pre-action, post-action, and on dozens to hundreds of combinations of OS and rendering engine. This includes pre-LLM, using similarity and perceptual hashing, and screenshotting single DOM elements on hover and off hover, both fuzzy and pixel-perfect.
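For the fuzzy side, a perceptual hash can be sketched in a few lines. This is a toy average-hash (aHash), not any particular vendor's algorithm: visually similar images land at a small Hamming distance, very different ones far apart.

```python
def average_hash(gray, grid=8):
    """Toy perceptual hash: downsample a grayscale image (nested lists)
    to grid x grid block averages, then emit one bit per cell depending
    on whether it is above or below the overall mean."""
    h, w = len(gray), len(gray[0])
    bh, bw = h // grid, w // grid
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            block = [gray[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# 16x16 horizontal gradient, a near-copy with one noisy pixel, and its mirror.
base = [[x * 16 for x in range(16)] for _ in range(16)]
noisy = [row[:] for row in base]
noisy[0][0] = 30
mirror = [list(reversed(row)) for row in base]

h_base, h_noisy, h_mirror = (average_hash(im) for im in (base, noisy, mirror))
# Small distance for the near-copy, large for the mirror image.
print(hamming(h_base, h_noisy), hamming(h_base, h_mirror))
```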
Huh! Well, I stand corrected. I've never seen that done (but I've only worked at startups with < 20 headcount for my entire software career so far, so that might be why).
Huh. Were they anywhere that pixel perfection was necessary, such as games, or that required constant cross-browser testing for compliance, accessibility, or mandatory cross-platform support?
Have any of your workplaces used a service such as Sauce Labs or BrowserStack, or rolled their own in-house equivalent, or seen something like https://percy.io/how-it-works (random example; not affiliated with or recommending this)?
I hope I wasn't being too rude about it; not my intent. It's mostly surprising to me because a service like BrowserStack is a decade and a half old already, and the concept predates that.
I was wrong & you called me out on it, not rude, all good.
My first software job out of college was actually a QA Automation / SDET position; I wrote an automated framework with Ruby + Selenium + BrowserStack which did take screenshots of the app, but the app loaded dynamic content and there were frequent feature adjustments, so no two screenshots were ever identical.
All other jobs I've had since then have been writing smart contracts for Ethereum apps: 100% backend (I hate having to deal with frontend), so all our tests were just units & coverage & what have you.
I suppose if your environment holds constant and your features don't change frontend structure or behavior (eg refactors), then this is what you should expect.
Though, do note that this only works because my app is based on a tick/game-loop system without callbacks; if this was the standard game-development pattern of callbacks & message handling (especially w/ React / JS) to invoke events, it wouldn't work, because the timing would be slightly different each time, and an enemy would be a few pixels to the left/right of its position in the past run.
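A minimal sketch of why the fixed-tick style stays reproducible (the names and numbers are made up):

```python
def simulate(ticks, speed=3):
    """Fixed-tick update: the enemy advances exactly `speed` units per tick,
    with no wall-clock time anywhere in the loop, so after N ticks its
    position is identical on every run, which is what makes hashing
    screenshots viable. A callback/dt-based loop would advance by
    `speed * dt` with a slightly different measured dt each frame,
    putting the enemy a few pixels off between runs."""
    enemy_x = 0
    for _ in range(ticks):
        enemy_x += speed
    return enemy_x

print(simulate(60), simulate(60))  # 180 180 — bit-identical across runs
```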
Obligatory https://m.xkcd.com/1053/ reference, but you're taking this in good stride and that's excellent. :)
If you want to go further down this direction there are all kinds of cool things you can do. There are ways to like XOR bitmaps so pixels which aren't identical show up as white and the rest are black, and the like; if you're working with something else you can look into perceptual hashing although that's a lot more computationally expensive.
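A toy sketch of that diff-mask idea on nested-list "images" (real tools operate on actual bitmap buffers, and a literal XOR of pixel values works too; this version just emits white wherever pixels differ):

```python
def diff_mask(img_a, img_b):
    """Per-pixel diff of two same-sized images (nested lists of pixel
    values): 255 (white) where the pixels differ, 0 (black) where they
    match, so changed regions jump out visually."""
    return [[255 if pa != pb else 0 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

a = [[0, 10, 20],
     [30, 40, 50]]
b = [[0, 10, 99],   # one pixel changed
     [30, 40, 50]]
mask = diff_mask(a, b)
print(mask)  # [[0, 0, 255], [0, 0, 0]]
```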
Oh! And edge detection! Canny edge detectors are cheap and deterministic and wonderful for all manner of this sort of thing.
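If you want to see the deterministic core, here's a bare Sobel gradient sketch. Canny proper adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this, but the gradient step below is the heart of it:

```python
def sobel_magnitude(gray):
    """Gradient magnitude via 3x3 Sobel kernels over a grayscale image
    given as nested lists. Border pixels are left at 0."""
    h, w = len(gray), len(gray[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# 5x6 test image: black left half, white right half — a vertical step edge.
gray = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_magnitude(gray)
print(edges[2])  # strong response at the step, zero in the flat regions
```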
Oh yeah, I did a deep-dive into neural networks (both artificial and human) for vision processing, it's super dope stuff. The human vision processing system is remarkably similar to some of the AI stuff we've built for image processing!
You can run a charge with only the card number if you have sufficient trust. Each additional piece you add reduces liability and transaction fees (add expiry, add CVC, add 3DS, ...).
> However, there are lots of people in the world who live their whole life by vibing
Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?
I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.
> Why do they think they are "helping" with hallucinated rubbish that can't even build?
Because they can't tell the difference between what the machine is outputting and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have got, and they want that reward too.
"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.
AI is absolutely terrible for people like that, as it's the perfect enabler.
It's not about helping. It's about the feeling of clout. There are still plenty of people who look at GitHub profile activity to judge job candidates, etc. What gets measured gets repeated.
I believe that most of the ills of social media would disappear, if we eliminated the "like" and "upvotes" buttons and the view counts. Most open source garbage pull requests may likewise go away if contributions were somehow anonymous.
LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.
This is a good thing, it's an opportunity to make open source development processes robust to this kind of sabotage.
Yeah that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code, and more broadly, do one's own work. Most people however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or most writing, most emails, whatever), and get the widest exposure and attention, for the least effort.
Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.
I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people would want to not deal with the output.
This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything which might have touched an LLM or any generative model at some point. Their slur and expletive filled outbursts make every critical response look bad by vague association.
Having sensible explanations like in TFA for the rules and criticism clearly visible should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any eternal september.
GLM 5.1 and DeepSeek 4 are acceptable, but the hardware and energy costs are such that, depending on your use case, you may as well just purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.