Hacker News | tomku's comments

There's a section of his pay package that says:

  The Performance Hurdles will be adjusted by the Committee equitably and proportionately as determined by the Committee in a manner designed to preserve the economic opportunity provided under the Award, (a) higher to account for acquisition activity for which stock is provided as consideration; and (b) lower to account for a split-up, spin-off, dividend or other distribution (whether in the form of cash, shares, other securities, or other property) or divestiture activity, in each case, that could be considered material to the achievement of the Performance Hurdles, as applicable.
Matt Levine's recent opinion piece for Bloomberg ("GameStop Doesn’t Have Enough Stock", https://archive.ph/3h8wf) goes into a bit more detail about it, including why such an acquisition might still help him get there even if it doesn't instantly get him halfway.

HN has a "second chance queue" that sometimes revives posts and gives them another shot. Not entirely sure how it works, but it sounds like what happened here.

Click on the home page and look at their other headlines, read the "About" description. It's just an endless stream of clickbait AI slop. Their "archive" goes back two weeks and has over a hundred articles with the same stilted "Reasonable statement. Controversial twist." headline format. Please stop falling for this trash.


It's absolutely shocking how many people think that inverting all the quality metrics that we've traditionally used "because LLMs" will lead to good things. Nothing about this will end well.


I went down a bit of a rabbit-hole trying to figure out exactly who Matt Shumer is and why anyone should care what he thinks. The best information I found came from this article, which was from before he pivoted to being an AI startup bro:

https://www.newsweek.com/i-couldnt-play-rules-so-i-became-en...

It's kind of a sad read. He would benefit a lot from getting outside the startup bubble and talking to some people who do useful work for a living instead of riding internet fads and growthmaxxing via viral social media posts.


Thought this name sounded familiar... Matt Shumer was one of the people responsible for the "Reflection 70b" hoax a few years ago. There is no reason to take anything he writes seriously; he has a history of flat-out lying to go viral.

Edit: Summary for anyone who didn't follow this saga at the time: https://www.ignorance.ai/p/the-fable-of-reflection-70b

Shumer is at best a fool and at worst a con artist.


https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

Note that nothing about that depends on it being a local or remote model; it was just less of a concern for local models in the past because most of them did not have tool calling. OpenClaw, for all its cool and flashy uses, is also basically an infinite generator for lethal trifecta problems, because its whole pitch is combining your data with tools that can both read from and write to the public internet.
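The trifecta itself is mechanical enough to state as a predicate. A minimal sketch in Python (the `Agent` structure and its flags are hypothetical, not any real framework's API): the risk Willison describes only arises when all three capabilities are present at once, and removing any one leg breaks the exfiltration path.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Hypothetical capability flags for an LLM agent setup."""
    reads_private_data: bool         # e.g. local files, email, internal docs
    processes_untrusted_input: bool  # e.g. fetched web pages, inbound messages
    can_exfiltrate: bool             # e.g. making HTTP requests, sending email

def has_lethal_trifecta(agent: Agent) -> bool:
    # Dangerous only when all three are combined: a prompt injection
    # in the untrusted input can then steer the agent to send
    # private data out through the exfiltration channel.
    return (agent.reads_private_data
            and agent.processes_untrusted_input
            and agent.can_exfiltrate)

# A local model with no tool calling: reads private data, sees
# untrusted input, but has no channel to send anything out.
local = Agent(reads_private_data=True,
              processes_untrusted_input=True,
              can_exfiltrate=False)

# An agent whose tools both read and write the public internet.
networked = Agent(reads_private_data=True,
                  processes_untrusted_input=True,
                  can_exfiltrate=True)
```

This is why "just run it locally" doesn't help: the model's location isn't one of the three legs.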


It's using "Fortnite" as a synecdoche for Epic Games, because "I have to give an age verification company owned by Epic Games my passport to use Bluesky" isn't quite as effective at revving the outrage engines, even if it has the benefit of being true. Personally, I don't think people who are willing to do that are showing themselves to be trustworthy, but you might feel differently.


It is pretty funny that Epic Games is now enough of a platform empire to provide an authentication system used beyond gaming.


Two years ago there were also hundreds of people constantly panic-posting here about how our jobs would be gone in a month, that learning anything about programming was now a waste of time and the entire profession was already dead, with all other knowledge work guaranteed to follow. People were posting about how they were considering giving up on CS degrees because AI would make them pointless. The people who used language like "stochastic parrots" were regularly mocked by AI enthusiasts, and the AI enthusiasts were then mocked in return for their absurd claims about fast take-off and imminent AGI. It was a cesspool of bad takes coming from basically every angle, strengthening in certainty as they bounced off each other's idiocy.

Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.


Bro. You’re gonna have a hard time finding people panic posting about how they’re going to lose their jobs in a month. Literally find me one. Then show me that the majority of people posting were panicking.

That is literally not what happened. You’re hallucinating. The majority of people on HN were so confident in their coding abilities that they weren’t worried at all. Just a cursory glance at the conversations back then and that is what you will see OVERALL.


I was in one of those early cohorts that used Octave. One of the things the course had to deal with was that, at the time (I don't know about now), Octave did not ship with an optimization function suitable for the coursework, so we ended up using an implementation of `fmincg` provided along with the homework by the course staff. If you're following along with the lectures, you might need to track down that file; it's probably available somewhere.
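For anyone wondering what `fmincg` actually did: it's a minimizer that takes a function returning both the cost and its gradient, and iterates toward a minimum (fmincg itself uses conjugate gradients with line search; the toy below is plain gradient descent as a simplified stand-in, with a made-up quadratic cost, not anything from the course):

```python
def minimize(cost_grad, theta, lr=0.1, iters=200):
    """Toy gradient-descent loop standing in for what fmincg does:
    repeatedly step theta against the gradient of the cost.
    (fmincg proper uses conjugate gradients with line search.)"""
    for _ in range(iters):
        _, grad = cost_grad(theta)
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta

# Hypothetical cost J(theta) = sum_i (theta_i - 3)^2, minimized at theta_i = 3.
def cost_grad(theta):
    cost = sum((t - 3.0) ** 2 for t in theta)
    grad = [2.0 * (t - 3.0) for t in theta]
    return cost, grad

theta = minimize(cost_grad, [0.0, 0.0])
```

The same call shape (cost plus gradient in, optimized parameters out) is what the course's homework wrapped around logistic regression and neural network costs.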

Using Octave for a beginning ML class felt like the worst of both worlds: you got the awkward, ugly language of MATLAB without any of the upsides of MATLAB-the-product, because it didn't have the GUI environment or the huge pile of toolbox functions. None of that is meant as criticism of Octave as a project; it's fine for what it is. It just ended up being more of a stumbling block for beginners than a booster in that specific context.


I did that with Octave too. I didn't mind the language much, but it wasn't great. I had significant experience with both coding and simple models when doing it, so I wasn't a beginner; I can see it being an additional hurdle for some people. What are they using now? Python?


Believe Andrew Ng's new course is all Python now, yeah. Amusingly enough another class that I took (Linear Algebra: Foundations to Frontiers) kinda did the opposite move - when I took it, it was all Python, but shortly after they transitioned to full-powered MATLAB with limited student licenses. Guess it makes sense given that LAFF was primarily about the math.


It’s nice to know that someone else suffered this pain. And that I bet on PGMs, which really turned out to be the wrong horse…


ha! I took at least one PGM class myself. I had a difficult time with the material.

