Closed Source AI = Neofeudalism
Many of the best people working in AI did not join the field because they wanted power over others.
So this isn’t the original post I had here. The original post was AI slop, and let that be a lesson to me for posting it. It doesn’t matter if you read it and think it looks good. It’s still AI slop, and everyone else can see that. This rewritten post makes the same argument, slop-free.
Besides, as Audre Lorde put it, “the master’s tools will never dismantle the master’s house.”
Look, if you work at a frontier lab, I don’t blame you. You have a front-row seat to the hinge of history. But consider what you are building and who it’s for.
A small handful of secretive closed-source labs with a concentration of compute, talent, and deployment power will lead to a concentration of political legitimacy. You may think you want this, and that you are the good guys who will wield power well, but you won’t and you aren’t. Absolute power corrupts absolutely.
AI safety was always a question of whether safe AI could be built in theory, not whether a small group of anointed people could keep it safe for us. At least I respect Yudkowsky for consistently saying “If Anyone Builds It, Everyone Dies.”
The cat is out of the bag. We are building it. Either it’s true that if anyone builds it everyone dies, or it’s safe enough for everyone to have. That’s a fact about the world. I don’t accept a middle ground where a chosen few can have it – this isn’t like nuclear weapons, this is intelligence itself. A nuclear weapon can only destroy; intelligence is the greatest creative force in the world. If a small group of people has a monopoly on it, you are the permanent underclass, in the same way animals are.
From a more practical perspective: even if the APIs stay open, you aren’t going to be able to build a stable business on top of them. These companies have raised so much money that they won’t be happy with a cut of your business; they are going to come for the whole thing. This is why I maintain that the application layer will be worthless – it’s deployed intelligence itself that has value. They are happy to offer you the API for negative-ROI activities, but as soon as something is positive-ROI, they’ll adjust the deal until it’s barely marginal for you. Like a peasant working his plot of land. Why would they share?
Open source AI isn’t anti-safety. It’s anti-feudal. Every time some AI guy blathers on about how open source is dangerous, but how he can build AI and make it safe (but only if you purchase it through his API), he is calling you a serf.