AI Progress is about to get Brake Checked in the West
A couple of big trends are poised to pump the brakes on AI innovation in the West.
First – data hoarding and the smell of money
With the initial astonishment over generative AI wearing off, content creators now see big monetization opportunities via litigation. Organizations like the Wall Street Journal and Getty Images are among the latest to realize their copyrighted content has been scraped and used to train models like ChatGPT and Stable Diffusion. Naturally, they want to get paid. This would not present a problem if it were just the Wall Street Journal and Getty, but when a business model requires consuming all available text and imagery in the solar system, a lot of people and organizations will expect to be paid. Attorneys around the world are already drafting their pay-or-else letters.

OpenAI could find itself spending the summer opening ransom notes and responding to injunctions that require it to expunge copyrighted material from its training datasets, dramatically reducing the size and quality of the training data. If it acquiesces and pays the ransom, expectations will be set high for everyone else seeking compensation, and AI training data will forever be something only billionaire organizations can acquire at scale. However, if it refuses to pay and is forced to expunge significant sources of data, then we might never see the next major evolution of these large models. With today’s AI architectures, it doesn’t matter whether someone can build a trillion-parameter model if they cannot also scale the training data tonnage.
Of course, there’s nothing wrong with content creators asserting their ownership rights. However, we should expect the next epoch of AI innovation to advance more slowly than the last as large data sources are hoarded more jealously.
Second – regulation
Steely-eyed government regulators have their sights set on AI. These regulators are most active in Europe, but their U.S. counterparts are salivating at the opportunity to get in on the ground floor, regulating a young industry before it has time to develop bad habits. Incidentally, this will kill AI in the U.S. for all but a few big firms, but that is not the concern of regulators. Shockingly, OpenAI is encouraging regulation. Having achieved alpha-leader status with zero regulation, economic headwind, or copyright enforcement, it now supports restricting competition by creating regulatory barriers for small companies and others yet to rise.
The calls for regulation are echoing despite the small number of serious problems the industry has caused thus far. Regulation will punish entrepreneurship in this young AI industry, ensuring it remains mostly in the realm of billionaire organizations and adversarial foreign governments. This denies the U.S. a major historical advantage it enjoyed in prior conflicts with strategic adversaries: the power of industry. Americans risk falling behind in AI if regulators suppress progress at a time when adversaries are fully committed.

Even the U.S. military appears more concerned with controlling AI than exploiting it. At a time when adversaries see an advantage in militarized AI systems, the Pentagon is expending resources reassuring everyone that human-in-the-loop (HITL) will remain the policy throughout the 2020s. Make no mistake, China has a different policy. The PRC is not holding back AI development, not for marketplace regulators and not for battlefield nostalgia. Adversarial militaries are not squeamish about fully autonomous combat equipment (battle bots) in both kinetic and cyber domains. In the near term, the U.S. could find itself in a battlespace with adversarial AI systems that are impossible to counter with HITL systems. What do we do then? Holding the line with HITL systems is like issuing new swords and muskets after the invention of the machine gun, stupid.
Conclusion
In the past decade, AI engineering has advanced tremendously. Last year, the public got firsthand exposure to multi-billion-parameter generative models. Popular culture abruptly realized what those of us inside the industry have long known – everything is about to change. However, if we are not careful, the biggest advances of the next decade will take place in decidedly undemocratic places. Whether through ambitious regulation, ultra-sensitivity to AI ethics, or data hoarding, AI development in the West could get brake checked while nations in the East ramp up capability. Some of you might believe slowing AI could be a good thing, but I’ll remind you that extremely mean people are rapidly acquiring skill with the most advanced AI. History will not be kind to the West (especially the U.S.) if we do not lead the AI transformation.
Thanks for staying to the end; I hope it was interesting.