
Blain: OpenAI's FUBAR Corporate Governance Is A Lesson For Every Company!

by Tyler Durden

Authored by Bill Blain via MorningPorridge.com,

“He hath a lean and hungry look; he thinks too much. Such men are dangerous….”

Every single corporate on the planet should be looking at its board composition and corporate governance structure to figure out just how vulnerable it may be. OpenAI’s self-immolation should be a lesson.

“I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and will do everything to re-unite the company.” Thus spake Ilya Sutskever, OpenAI’s chief scientist. He told co-founder and CEO Sam Altman to sling his hook on Friday, hired a replacement on Sunday, and pleaded with the board to take Altman back and sack themselves on Monday. So much for board-level consistency.

Readers might prefer I was writing about serious finance this morning: the UK’s autumn statement and likely tax-bribes to the electorate, the fact Joe Biden turned 81 yesterday (but looks and sounds older), or what the Fed, BoE and ECB might be thinking… but who can resist watching a train wreck!

This one matters.

The effective corporate suicide of OpenAI has serious implications across the global economy – there are lessons every single corporate board on the planet should be considering. Does the board understand its role, and does it understand its business and employees? OpenAI’s board has failed on all counts.

I am genuinely surprised OpenAI remains apparently functional as I write this at 7.00 am on Tuesday morning. I am sure that will change during the course of today. At close of play last night, 747 of the firm’s 770 staff, including stab-in-the-back Sutskever, had signed a letter of no confidence in the board, warning they are collectively off to Microsoft unless the board resigns.

All of which raises some critical issues around Corporate Governance.

Call it what you will, but what we are witnessing in the spectacular immolation of OpenAI is corporate insurrection, revolution, or mutiny. This path leads to destruction and madness. How will we survive in a world where workers call the shots? When mere technicians tell august board members what wrong-headed dunderheads they are? How will companies survive when Workers’ Soviets take control of the shop-floor?

Bandiera rossa la trionferà!

There is no point blaming the mob. It’s the conditions that trigger revolution that matter. I can’t help but picture the board of OpenAI as the Tsar in 1917, wondering what the serfs are upset about…

The convention – enshrined variously in corporate law – is the Board runs the company, and is responsible for its actions. What the board says is what the company does. A company is not a democracy. It exists for a purpose – defined loosely or succinctly by its documents or vested in the board. That purpose can be as clear and simple as make money for the shareholders, or as complex as “do good.” Any sergeant will tell you badly defined mission statements inevitably lead to bloody failure.

One question to ask is whether the properly constituted board of OpenAI had the right to kick off this governance disaster. The bylaws of OpenAI are quite clear. The board has the exclusive right to hire and fire directors and to set the board’s size and composition. It even allows a minority of the board to take action without notice. 4 of the 6 board members were therefore empowered to sack Altman and demote company president Greg Brockman (who then resigned).

It was no secret Altman and his party wanted to commercialise their valuable expertise in AI through OpenAI’s commercial subsidiary founded in 2019 (attracting $10bln from Microsoft for a 49% stake). Yet, the 3 independent board members – 2 of them fully paid-up “effective altruists” – were fully invested in the not-for-profit set-up of OpenAI, and their fiduciary duty was to its mission statement to “develop AI that will be safe and beneficial for humanity.”

There was clear conflict between the mission statement and the commercial ambitions of the insiders. And that is no surprise… back in 2015 Sam Altman and Elon Musk set up OpenAI because they feared other AI firms – notably Google – were already leading the race to the commercialisation and thus dominance of AI. Proposing a grandiose “save-humanity” mission statement was classic Musk gaslighting.

Musk – who later backed out and subsequently slammed the door on Altman when he declined to sell OpenAI lock, stock and barrel to Musk – wanted OpenAI to be his public avatar in the AI space. (And, possibly, because access to its bright minds might have helped his still struggling self-driving car effort.)

At one point Musk’s partner and mother of his twins, Shivon Zilis, was on the board. Others with varying degrees of knowledge and experience cycled through. Earlier this year the board had shrunk to 6 – the three insiders, Altman, Brockman and Sutskever, plus three independents: Adam D’Angelo, founder of Quora; Tasha McCauley, an entrepreneur; and Helen Toner, an AI safety researcher. These three are likely to find their roles in the destruction of OpenAI’s value under very close scrutiny.

Whatever happened during the conspiracy phase of the coup against Altman, it involved the three independents suborning Sutskever onto their side – made easy by his hurt feelings at being somewhat side-lined by Altman. When they struck on Friday morning, they triggered a corporate failure of stunning degree. An $86 billion valuation destroyed in the space of a few days ranks as about the swiftest self-destruction I’ve seen in the past 40 years. A “Ratners Moment” par excellence.

It would seem the conspirators had little idea of how to run their uprising. The first rule of a palace coup is to capture the king and control the key pieces. Don’t let them foment dissent. Yet, within moments of his dismissal by the board on Friday, Sam Altman had seized the headlines – his supporters dominated the news flow. Even the replacement temp CEO remained a consistent supporter of Altman, among the first to demand his reinstatement. The board said nothing except to issue a statement saying he had not been “consistently candid”.

Because the board has no stated duty to shareholders in its commercial subsidiary, investors in the commercialisation of OpenAI’s skillsets, like Sequoia, Andreessen and Khosla, had little influence or say on the board – but they have access to Altman and clearly share his ambitions. If the whole of OpenAI now moves to Microsoft, they are the financial losers.

The real issue is not just that the board of “effective altruists” was so ineffective it failed to anticipate the consequences of its actions, but that it clearly had absolutely no feel for, or understanding of, its employees. While the independents want to keep the world safe from the potential ravages of AI, they seem to have been blithely unaware of the staff’s ambitions to monetise their skillsets, and of their expectation of selling some $1 bln of their employee shares to become rich in time for Christmas.

The takeaways are clear.

  • In complex new businesses the expertise and experience of independents on the board is critical. OpenAI was short of both – and, critically, seems to have had zero appreciation of the staff’s ambitions or motivations.

  • The need for oversight to ensure safety in new technologies, including AI, is clear. We need to design governance structures fit for purpose.

  • Any company is only as good as the quality of its board.

Anthropic is another AI firm that has spotted the potential governance issues, and has set up a structure to address these inconsistencies; see this article on its website, or this one from the Harvard Law School. Not entirely sure they are there yet.

The bottom line is that D’Angelo, McCauley and Toner, the independent board members, will go down in history as numpties, while Altman and crew will ultimately get very, very rich. But who really wins?

And who will be protecting us from AI?
