Reflections on OpenAI
A preview of the next wave of generative AI, and some ethical and practical thoughts
I'm sitting in SFO airport reflecting on the past day, one I won't soon forget. Yesterday, I attended a closed-door developer enablement session for a group of CTOs at OpenAI's HQ in downtown San Francisco. We watched presentations from engineers, product managers, and execs as we delved into their upcoming features and dev tooling. Most striking was GPT's exceptional aptitude for software engineering: given just a few bullet points, it could troubleshoot complex bugs and draft hundreds of lines of code.
After sleeping on it, I've begun to unpack a few thoughts.
First, gentle anger. How is it that only two or three companies in the world have had access to this technology for years, ring-fencing it from the rest of us to drive advertising revenues? Leveraging it to draw our attention towards the highest bidder feels akin to having the cure to cancer but using it to make the nicotine in cigarettes more addictive. When my peers ask why public sentiment is hesitant to entertain a bailout of a bank branded "Silicon Valley," they need look no further than the damage we do to our own reputation with examples like this.
Second, a rising, nervous excitement. It's the kind you get before a very important meeting that might lead to something fantastic. Fed an Excel sheet, GPT-4 can draw conclusions in seconds that would have taken a human analyst hours. GPT-4's superpower is reasoning—breaking questions into logical steps and connecting dots. Feed it a tax code, and it instantly weaves the chaos into a sequence of bulletproof calculations. It's like the second half of your brain you've been missing since birth. I can see years of my training as a lawyer and an engineer vanishing before my eyes, and I'm delighted.
Third, I feel fear. OpenAI's original mission was to provide an ethics framework for this superpower. That quickly pivoted into a $10 billion investment from Microsoft and a pricing plan.
I think back to "Superintelligence," the Nick Bostrom book. He talks of inflection points, moments when AI moves into a realm of self-awareness with the means and momentum to expand its IQ beyond our control. Computers don't have the same limitations as flesh and bone. Biology has placed guardrails on us, but when neurons are a combination of silicon wafers and self-improving algorithms, evolution takes nanoseconds, and resources are nearly limitless.
The ethics issue bothers me for several reasons. AI has been a part of our lives for years, its power concentrated in a few targeted places. It builds our newsfeeds and feeds us a never-ending stream of meme bubblegum, handing the spotlight to those who can excite and hold our attention. Teams at big tech companies invest billions in "trust and safety," cleaning content and supposedly getting rid of the nasty stuff so we can enjoy hours of uninterrupted monkey-on-lawnmower content guilt-free! Yet, the public increasingly understands that the drive for profit pollutes algorithms, directly impacting our politics and mental health. If attempts to manage AI's second-order implications have failed so far, why would another collection of largely the same West Coast elites fare any better this time?
My second concern is rooted in contradictions in OpenAI's founding story. It started as a not-for-profit, but that is clearly no longer the case. Perhaps CEO Sam Altman felt that without commercial firepower, it would be nearly impossible to raise the cash needed to compete with big tech. Regardless, contradictions in their narrative make people nervous, especially when they've just joined forces with a tech giant that recently terminated its entire AI ethics team.
If I have one call to action, it's for those custodians of AI morality: open the rulebook to the world. There's a high chance you are authoring a constitution for a future era of human society—an era of abundance or Armageddon. Getting this wrong leaves no room for a do-over; there are no second drafts. Open the document for editing and let the world examine and challenge it. I commend you for breaking this into the open and forcing other players to open their platforms to the world. But if you're going to choose a path for humanity by injecting this technology into nearly every piece of software on Earth, then at least let us have a say.
Having arrived two days earlier, excited and curious about how best to leverage this new trend, I now leave with no doubt that this is the next supercycle of human evolution. Over the next 10 years, nearly every aspect of white-collar work will change, and the potential for humanity at this critical moment is mind-boggling.
With that, let me paste this into my GPT-4 preview playground. I'm sure I've made more than a few errors it can clean up for me before I take off.