
The clear move by OpenAI’s board is to let everyone resign to Microsoft and then release the tech as FOSS to the public. Any other move and Altman/Microsoft wins. By releasing it you maintain the power play and let the world control the end result of whatever advances come from these LLMs.

Why this happened, and whatever plans were originally in place, is irrelevant.



AI safety types don't want to release models (or in this case models, architecture, IP) to the public.


That doesn’t make sense. You mean companies like Microsoft and business types like Altman don’t want to release infra to the public. Microsoft and Altman may hide under the guise of “it’s not safe to share this stuff,” but their intent is capital gain, not safety.

True safety believers understand that safety comes from a general understanding by everyone and auditable infra; otherwise you have no transparency about the potential dangers.

Releasing the tech is only unsafe to those trying to create platform lock-in. By releasing it FOSS you equalize the playing field and destroy Altman’s billions.


You seem to be using a definition of "AI safety believer" that you've reasoned out and arrived at yourself. It seems like a reasonable definition, and I personally agree with it: the best path to maximal social good is for the ability to build and run these models to be as open and public as possible.

I think the "AI safety believers" actually sitting on the board of OpenAI, as well as in other adjacent influential positions, have a different view. I'm probably being a bit uncharitable since it's not a view I share, but they think that AI is so dangerous that the unwashed masses can't be trusted with it - only the elite and enlightened technocrats can handle that responsibility. "Let[ting] the world control the end result of whatever advances come from these LLMs" is a nightmare scenario for them.

If it's correct that this second "elitist" type of AI safety believer accurately describes the OpenAI board members, releasing anything out into the open as FOSS is a non-starter.


Do you have sources for these beliefs of the board, or is this something ascertained through the huffing and puffing of the last few days? I do understand that perspective, but to me it’s one painted by the capitalists trying to gain the tech for themselves, and we haven’t heard much from the board itself about safety.

My understanding was that the board wanted a company that would safely develop the tech and define what safety means. This would materialize over the next decade, once the dangers were understood. Eventually Altman convinced them to go private instead of following the original open-source plans. That change is where the “safety by elites” framing comes from, IMO. To this day we have no real definition of what exactly is so unsafe about GPT’s current, easy-to-identify offering, yet “safety” is thrown around a lot (again, primarily by Altman/Microsoft supporters).


You have to read between some lines and connect some dots here, since nothing is really publicly known and it's difficult to know if we can trust public statements anyway. Which of course means I could be completely wrong.

Still, an example I'd point to is this article where Dustin Moskovitz, who has close ties to the OpenAI board [1], is quoted saying "The thing I’m most interested in is making sure that state-of-the-art later generations, like GPT-5, GPT-6, get run through safety evaluations before being released into the world". [2] I'm not sure how else to read that other than that he doesn't think normal peons should have unfiltered access to this tech, and it would be unsafe if they did.

Beyond that, if the board shared my view (perhaps yours as well?) that the safest thing would be to make all of this FOSS, and either profits or elitist safetyism weren't important to them, why wasn't everything FOSS already?

Less concretely, this kind of elitist "we know best" attitude is exactly what I've come to expect from the effective altruism crowd, which several of the players involved here have publicly expressed alignment with.

[1] https://news.ycombinator.com/item?id=38353330

[2] https://www.cnbc.com/2023/06/24/asanas-dustin-moskovitz-is-b...


Thanks for the links. I agree that, yes, it’s all about connecting dots.

I think a lot of the definitions of safety are early-days and have been mangled by OpenAI’s privatization.

I also think the GPT vX formula will only ever be original for so long. Eventually Facebook or Google will catch up with something comparable, and if it’s Facebook, they’ll nullify the value of GPT by open-sourcing it anyway.

This, to me, is the reasoning that renders the safety-by-elites angle valueless, however you look at the word “safety.” In the end this battle is about two things: betterment of humanity, or making money. One will trump the other; it’s just a question of how much you want to gain in the short term. I’d personally love to see all these already-wealthy people accept a smaller paycheck if it means humanity gets a leg up a little earlier, safe or not. FOSS the damned thing.

PS - What a world where Zuckerberg may be the great equalizer…


The impression I get from what I read is that AI safety people want large, cutting-edge foundation models that they control and can do testing and research on. They have no interest in open-sourcing those cutting-edge models. Whether this makes sense in the long run, I don't know, but that is what I meant by AI safety types.

Of course the money makers want to keep it closed source to extract rent. In some ways that is an area of agreement between the capitalists and the safety people.


I agree with the definition everyone seems to be eating up, but I think it’d be healthier if we all started calling BS on the doublespeak and hidden profit motive behind the use of “safety” and made them be more upfront about their intent.



