I’m more sympathetic with the (non-profit) board this time. Sam Altman seems like bad news for OpenAI, really. It feels more like the workers are rallying behind Sam Altman because they can get fat paychecks, and they’ll bail once the enshittification intensifies.
Every time somebody says “you can Google it,” I think about googling it and expect some weird niche article from some esoteric site. But then I think: why didn’t *you* Google it and post a link? Why should I have to find a trustworthy source for your claim?
You’re willingly confirming something you rate as sensitive, trying to lend it credibility by adding your own voice and pointing to a virtually unverifiable needle-in-a-haystack “authority” like Google, yet citing that same sensitivity as a reason for not sharing your information.
They’re not supporting him because they’re worried about his finances. They’re supporting him because they have faith in his leadership. If they didn’t, they wouldn’t care. Lots of CEOs are just empty suits, but Altman clearly has been doing something his workers liked.
I mean… what would it even mean to have equity in a non-profit?
Non-profits are organized fundamentally differently than for profit corporations.
If anything they should have had equity in the for-profit side of the company to ensure that their incentives were aligned, if that is even your point.
I think it makes for a very interesting test case of how this particular kind of ownership structure can fail. Another thread raised the distinction between authority and power, which I think was demonstrated very clearly here.
That all being said, it seems like things have taken a turn for the worse, and if anything, this board has set the mission of a truly open AI world even further back. There seem to be some real Luddites on the board who think they’ll somehow be able to put the lid back on Pandora’s box after it has well and truly been opened. If anything, they should swing the gate wide and at least open-source everything else, so as to prevent Microsoft from having a complete monopoly on the future of AI (which is how things seem to be shaping up).
This kind of ownership structure is pretty normal across a wide range of industries: a lot of hospitals in the US operate with a similar structure, as do NGOs, foundations, co-ops, independent regulators, etc. Whatever’s happening in this case is remarkable, but probably not because of how the board operates in this role specifically. We’d have to know why they fired him to understand what’s going on, but that information seems unlikely to come out. It could have been completely mundane, but that doesn’t matter now.
Yeah, that’s what I thought. I’ve formed, or been a part of, several non-profits that have looked at developing for-profit components to fund the non-profit mission.
From my understanding, Sam Altman is the one pursuing profits, and the non-profit board is the one that was overseeing it being done “safely.” If this is the case, it is the non-profit board that should be rallied for.
It’s like GMO crops: we currently haven’t figured out better methods to feed the planet at scale. If anyone knows how to get the billions of dollars needed to build, run, maintain, and constantly improve such a massive supercomputer for the world to use, today, by all means let us know. To be clear, I agree with you, but a project this big, without something like international funding (like the LHC at CERN, or the ISS), is just not going to happen. As far as I know, for-profit is currently the most effective way to funnel resources into a project.
But that’s what this is. OpenAI is both for-profit and non-profit. It has a for-profit arm that made the huge deal with Microsoft and ensures research continues, but there is a non-profit board that oversees it to make sure it’s done “safely.” If, when the non-profit board makes a correction, it gets immediately dismantled, then it was all for show, and the profit side is actually unchecked.
Great point. I don’t understand what happened yet for all of this to implode so quickly, but something this important is due to hit every emotional fault we have as humans. I think this showed how weak the board was for something like this to happen so quickly. A governing board is supposed to slow down rash decisions, not generate them. Either something drastic happened, or someone acted rashly. This should be a deliberate and rational endeavor. The people building AGI should never be surprised by the decisions being made, if not for the benefit of humanity, at least for the billions of dollars on the line. WTF OpenAI, get your shit together.