The story is only trending because it’s an AI model and the internet is anti-AI right now. It’s a double standard.
It’s like how people are outraged that electricity is being used in data centers to power AI models. When you do the math, the power consumption is far, far less than all the other things you do all day without thinking twice. But again, anti-AI double standard.
>A product like Chrome probably has 10,000-ish features, maybe more.
It doesn't have 10,000-ish features that each take 4GB of space.
Chrome doesn't take 40TB on my hard drive.
The machine I'm typing this on has 10GB free right now, and that's after I cleaned it up. I noticed the hard drive filling up when I was doing nothing, but I didn't suspect Chrome of all things.
If someone doesn’t want AI on their devices, you think it’s a double standard that they’re annoyed when it’s installed anyway?
I’m not anti-AI by any stretch, but pretending their personal choices don’t matter is a bit too dismissive. It’s their choice, and we probably shouldn’t imply that other people having their own personal taste is hysteria or whatever it is you’re dancing around.
There are many technologies that begin in the corporate world on the enterprise level, and/or in research and education fields, and then trickle down to consumers. And basically anytime a tech reaches consumers, it's a fait accompli; it's ingrained in the business world 100%; scientists and defense contractors have blessed it.
The Avalanche Has Already Started. It is Too Late for the Pebbles to Vote. -- Ambassador Kosh Naranek
The funny thing about "AI Data Centers!!1!" is that they're unsurprising to anyone who knows the progression of this. First there were gigantic computers. Then telecom closets and machine rooms. Those machine rooms and closets got big and hungry! But they were hidden inside drab office space and far inside security perimeters and nobody really paid them mind, because it was part of doing business for the businesses.
Then came the cloud mania, and corporations began gutting their machine rooms and migrating to the cloud. Whether overall consumption and demand for resources ramped up, who knows, but it was transferred from a very distributed, scattered model to being centralized in a few big datacenters.
And now those datacenters are becoming an end unto themselves and everyone's gotta get one. Yeah, the scale and consumption of computing increases, but this has been evolutionary and it's only alarming because now, you can drive around a big city and pass several obvious data centers (and a few non-obvious ones) on your way. Did people freak out over AT&T constructing central offices? Dunno, those meant a lot of jobs. We all needed to reach out and touch someone.
The 'internet' is not an entity. Outrage and engagement drive ads. Beyond that, 'AI' has very little benefit for most people, and it's a straight loss if you look at consumer electronics (pricing people out of PCs) or energy prices.
I’m actually quite interested in this on-device scam detection and might be installing Chrome on my aunt’s computer. She’s an upper-70s millionaire widow who is constantly confused and attacked by a deluge of convincing scam emails.
Just fyi, this is not a temporary phenomenon, not a phase. People don't like spam, robocalls, persistent advertising, even as we use the tools that enable them. They definitely won't like massive job losses, if that actually comes to fruition. Constant surveillance, "slop" news and entertainment, significantly reduced human contact - not popular. Like most technologies, AI benefits a small group - those who control the means of production - but everyone else loses out.
Not just the Internet either. People are actively talking about data centres using up available electricity, and the constant push from employers to use AI for things it clearly isn't suited for. Not to mention the constant "Let me talk to a real person" requests -- people see AIs everywhere and often have no desire to interact with them.
It certainly makes me uncomfortable given the current capabilities of AI and what the tech CEOs have said about what they see AI becoming. It's not just like any other feature. Am considering uninstalling and no longer using Chrome on principle now.
Those disks have been too small to be a reasonable default for a decade, and they get even more unreasonable by the day. So while I agree that's a great reason to be quite peeved about this move, I'd be even madder at Apple.
I switched to the Kobo ecosystem about a year and a half ago and have been pretty happy. While the book availability and store aren't at complete parity, I've only had one situation where I couldn't get the book I wanted on Kobo even though it was available on the Amazon store (and I read a lot of books).
Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.
Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
> Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
It's not an either/or thing though. Compare it to something like combustion. Sure, it definitely improved productivity, but it also led to countless violent deaths.
You say that, but I've created plenty of production bugs because two different implementations diverge. Easier to avoid such bugs if we just share the implementation.
I've also seen a lot of production bugs because two things that appeared to be a copy/paste were actually conceptually different, and making them common made the whole thing much more complex, with the shared code trying to handle cases that diverged even though they started from the same place.
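A toy sketch of that tension (all names and rules here are hypothetical, just to illustrate): two formatters that start as copies of each other, where merging them into one shared function would force a flag parameter because one caller's rules quietly diverged.

```python
# Two "duplicate" formatters that began as the same copy/paste.
def format_invoice_total(amount_cents: int) -> str:
    """Invoices always show a dollar amount, even for zero."""
    return f"${amount_cents / 100:.2f}"


def format_cart_total(amount_cents: int) -> str:
    """Started identical to the invoice formatter, then the cart
    team decided a zero total should read "Free". Merging the two
    now means a shared function with a behavior flag."""
    if amount_cents == 0:
        return "Free"
    return f"${amount_cents / 100:.2f}"
```

Deduplicating these after the divergence would turn one conditional into a parameter every caller has to understand, which is the extra complexity the comment describes.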
I don't think that's true. Every iPhone user I've texted in at least the last 6 months has had RCS turned on, and that includes some very non-tech-savvy friends who I doubt did it manually.
What's wrong is the micromanaging, and also the "operationalization" of politeness into the metric of "these specific words and these specific times." Both are dehumanizing with or without AI - both on the employee side and my side - what is the point of politeness if it's basically at gunpoint?
I would equally have a problem with a manager who is threatening to write people up if they don't meet some count of saying the words "please" and "thank you."
I don't want AI to enable micromanagement of stuff that doesn't really need to be micromanaged. How it should be done is this: print a QR code on the receipt. If I feel the drive-thru conversation was bad, let me scan it and notify you. Then you can have AI review that conversation, and we'll also find out who the people are that just like to complain too much and ban them from the establishments.
Anyone who's been to a Chick-fil-A more than once has experienced how weird and off-putting this kind of micromanagement is, down to ensuring certain phrases are always used in particular situations: every conversation with them ends with a rote "my pleasure."
I definitely agree that is weird and off-putting, but I recently moved to an area with a grocery store that is the complete opposite: the cashiers stand there silently through the whole order. That's also off-putting despite my introversion. I think we need a middle ground with a simple mandatory polite greeting like "Welcome to Hank's" and then after that leave it up to being organic/authentic.
When a human makes sure employees are being polite, they're reinforcing the social contract that comes with employment. When you remove the human from the equation, it's literally dehumanizing. That's it. That's the why.
Netscape had a 90% market share in 1995. If OpenAI is metaphorically Netscape, what prevents its competitors from prying away customers every day? What prevents Google/Facebook/Microsoft from using their position to bundle chat experiences? Especially if the tech is a commodity and OpenAI's models are about as good as everyone else's?
In 1995 hardly anyone used the web yet. Sure, we all did, but it was pretty niche. I think you could argue that chatbots are niche as well, but the user base of OpenAI is way larger now than Netscape's was in 1995. Netscape had probably 25 million users at the end of 1995. ChatGPT has about 800 million.