Hacker News | joaogui1's comments

I believe their justification is in the first sentence:

> That sort of stuff causes pitchforks to rise up in other countries.

(Not that I agree)


Your countrymen applauded putting chains on foreign workers who were told by their company to build a factory in your country. Your union representatives were giddy. You absolutely deserve corpo-speak; that's the least you deserve.


I'm sure you can find some exceptions, but I don't remember anyone who celebrated that or thought it was remotely good policy. Assuming you're talking about the Hyundai plant in Georgia.


3.6 is the model number, 35B is the total number of parameters, and A3B means that only 3B parameters are activated per token. That has some implications for serving: either you shard the model across GPUs, or you keep the total parameters in RAM and only load into VRAM what you need to compute the current token, which makes it slower, but at least it runs.
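A rough back-of-the-envelope sketch of why the active-parameter count matters for serving (the byte-per-parameter figure assumes 16-bit weights; the numbers are illustrative, not the model's actual memory footprint):

```python
# Rough memory math for a 35B-total / 3B-active MoE model.
# Assumes fp16/bf16 weights (2 bytes per parameter); real deployments
# may quantize further, so treat these as illustrative figures only.

BYTES_PER_PARAM = 2

total_params = 35e9
active_params = 3e9

# Option 1: all weights resident in VRAM (fast, but needs big GPUs)
vram_all = total_params * BYTES_PER_PARAM / 1e9
print(f"All weights in VRAM: ~{vram_all:.0f} GB")  # ~70 GB

# Option 2: keep all weights in system RAM and stream only the experts
# activated for the current token into VRAM (slower, but it runs)
vram_active = active_params * BYTES_PER_PARAM / 1e9
print(f"Active weights in VRAM: ~{vram_active:.0f} GB")  # ~6 GB
```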


The models get deprecated after 1-2 years, so reproducibility is pretty hard anyway (but as others pointed out the paper does list the model versions)


During pre-training the model is learning next-token prediction, which is naturally additive. Even if you added DEL as a token it would still be quite hard to transform the data so that it can be used in a next-token prediction task. Hope that helps!
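A toy sketch of why the objective is "additive" (tokenization is just `str.split()` here for simplicity; the function name is mine, not from any library):

```python
# Every pre-training example is (prefix -> next token), so the data only
# ever teaches the model to append text, never to remove it.

def next_token_pairs(text):
    tokens = text.split()
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for prefix, target in next_token_pairs("the cat sat on the mat"):
    print(prefix, "->", target)

# There is no natural way to mine (prefix -> DEL) pairs from raw text:
# documents record what was written, not what was deleted, so even with
# a hypothetical DEL token you'd have to synthesize that training data.
```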


HN has been used to train LLMs for a while now; I think it was even in the Pile.


It may also have fetched the current page in the background, since the Jepsen post was recently on the front page.


I may die but my quips shall live forever


Probably figured out the exact cause of the bug but not how to solve it


It says Gemini App, not AI Overviews, AI Mode, etc.


They claim AI Overviews has "2 billion users" in the sentences prior. They are clearly trying as hard as possible to show the "best" numbers.


> They are clearly trying as hard as possible to show the "best" numbers.

This isn't a hot take at all. Marketing (iPhone keynotes, product launches) is about showing impressive numbers. It isn't the gotcha you think it is.


Sure, but the extent to which you bend the truth to get those impressive numbers is absolutely gotcha-able.

Showing a new screen by default to everyone who is using your main product flow and then claiming that everyone who is seeing it is a priori a "user" is absurd. And that is the only way they can get to 2 billion a month, by my estimation.

They could put a new yellow rectangle at the top of all Google search results and claim that the product launch has reached 2 billion monthly users and is one of the fastest-growing products of all time. Clearly absurd, and the same math as what they are saying here. I'm claiming my hot-take gotcha :)


Also bizarre that it got to the front page of HN while being so low quality :/


Well, I think that's why it got there: people really love hating :)


Anthropic has amazing scientists and engineers, but when it comes to results that align with the narrative of LLMs being conscious, intelligent, or having similar properties, they tend to blow the results out of proportion.

Edit: In my opinion, at least. Maybe they would argue that if models exhibit that stuff 20% of the time nowadays, then we're a few years away from that reaching >50%, or make some other argument that I would probably disagree with.


It's their lab notes, so it's exploring a general idea, but they're also referencing previous software they've built (like crosscut)

