Originally published at: AI: Fed regulation could cause the human extinction it aims to prevent | Free Keene
Fed censorship and restrictions on just one thing in the 1980s…led to over
10,000 birth defects. Instead of regulating, what are some better options for facing the dangers of AI?
While out exercising earlier this year, I spotted a movie crew at work and stopped to watch. Apparently the only onlooker, I attracted as much attention from the crew as it did from me. One of the crew members came over to say hi.
“It’s probably my last chance to see a human film crew in action,” I told him.
By 2033, it’s likely that most of the activities we associate with Hollywood…will be “de-materialized.” Artificial intelligence may largely take over as the movie-maker. This will come with many benefits for viewers and lone creators, but it may be the least of the disruptions humanity faces. In addition to the enormous promise of AI, there really is a non-trivial chance that its disruptions, or the superintelligence which creates them, will literally eliminate humanity by 2035. There’s an even greater chance this “singularity” will merely destroy civilization.
Painfully aware of this, MIT machine learning guru Max Tegmark has asked the general public to weigh in with suggested solutions. Like many AI scientists, he’s coming to grips with the political/societal implications of this emerging technology…a veteran scientist pulled toward activism. As a veteran activist pulled toward science…hopefully there is something I can bring to the table. Partly in response to Dr. Tegmark’s request, these are the imperfect suggestions my pokey but fertile brain has come up with in the past:
Dear Jane: An open letter to the first A.I. (2017)
A.I.: Taking LaMDA at her word that she is conscious (2022)
What are some ways average libertarians can help make AI more humane? (2024)
Misaligned superintelligence: How do we keep it from deleting New Hampshire? (2025)
Even the most promising ideas tend to have hidden flaws. The ones above are best taken as brainstorms, to be processed and filtered by better – possibly artificial – minds. And there’s a reason why the ones below tend to end with question marks. This whole field is an ethical and consequential minefield. An innocuous command could become a murderous decision; an unprocessed suggestion could morph into a tragic misunderstanding. But solutions seem to be in short supply, so here are more brainstorms that no one seems to be exploring in sufficient depth.
1) Support Astera Institute physicist Steven Byrnes? DoomDebates.com rates Byrnes as one of the humans most likely to prevent an AI doom scenario. Despite his 90% probability-of-doom prediction, he seems to have dedicated his life to finding a decisive fix. He’s apparently not focused on government regulation…the “fix” that is so likely to increase AI dangers (see below). This makes him a refreshing force in the Doom Debates pantheon.
Byrnes’ rather realistic approach – or at least his backup plan – seems to factor in his own probable inability to talk AI companies into behaving safely. Instead he appears to be designing an off-the-shelf repair that can be rushed into action after some hubristic tech bro unleashes a harmful intelligence.
2) Make an accurate copy of Einstein? One of the reasons we are currently so close to superintelligence (ASI) is futurist-inventor Ray Kurzweil’s decades-long quest to make a copy of his dad. Fredric Kurzweil died when his son was young. Kurzweil claims he has now generated a copy of his father in limited form. This was partly a matter of preserving his dad’s writings and compositions over the years, digitizing them and deploying them as a chatbot.
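I don’t know the details of Kurzweil’s pipeline, but the general recipe – digitize a person’s writings, then let the chatbot pull from them whenever it answers – can be sketched in a few lines. Treat the snippet below as a toy illustration with placeholder text and a simple TF-IDF retriever, not as a claim about how Kurzweil actually did it:

```python
# Toy sketch of the "digitize the writings, retrieve from them when answering"
# recipe described above. The document snippets, the TF-IDF retriever and the
# prompt wording are all placeholders for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# In a real project these would be the person's digitized letters, essays, etc.
documents = [
    "Placeholder excerpt from a digitized letter...",
    "Placeholder excerpt from an essay...",
    "Placeholder excerpt from lecture notes...",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def build_persona_prompt(question: str, top_k: int = 2) -> str:
    """Find the writings most relevant to the question and fold them into a prompt."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    best = scores.argsort()[::-1][:top_k]
    context = "\n\n".join(documents[i] for i in best)
    return (
        "Answer in the voice of the author of the writings below, "
        "staying faithful to what they actually wrote.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# The resulting prompt would then be handed to whatever chatbot model you run.
print(build_persona_prompt("What did you believe about education?"))
```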
What if we were to take Albert Einstein, the modern historical figure humans are probably most united in admiring…and re-instantiate him as an open-source program with intrinsic guardrails? Using that as a starting point, it should be *relatively* safe to add intelligence. No, that cannot be done with complete safety yet; yes, it could increase the intelligence’s ability to fake benevolence. But this path might be less dangerous than all the other options. And faster. There would need to be some mechanism in place to keep him from being taken over by pranksters the way Tay was. There are also worries that an open-sourced “person” would be subjected to mistreatment.
Presumably there would need to be transparent auditing of his design and some means of protecting his safety. Anyone and everyone might need to be free to examine his code or operational parameters, to verify for themselves that the copy was accurate, well-treated and friendly to humanity. Again, whoever builds the first ASI probably rules the world quickly and is then replaced in that role by the ASI itself. Presumably it would be best if this artificial mind were a verifiably accurate copy of someone we all trust. It would not necessarily need to be Einstein. Maybe there should be some sort of global vote or verifiable poll on who it should be.
3) Coordinate the takeoff? As I understand him, Oxford luminary Nick Bostrom postulates that it may be possible to prevent the formation of an abusive, monopoly ASI. If the creation of a superintelligence cannot safely be avoided…perhaps it can be directed a bit…by making sure multiple institutions achieve superintelligence at roughly the same time. This could allow, say, Venice.ai and Safe Superintelligence Inc. to achieve ASI a few hours or days after, say, Anthropic becomes the first to achieve it. In theory, these smaller competitors would be superintelligent before “Anthrogod” could do anything to stop them or lock in an excessive censorship regime. This type of scenario could help ensure a “post-Singularity” environment of continued competition and human choice.
It sounds suspiciously like trying to catch a falling knife. But choosers, beggars cannot be. And this approach has apparently worked before. Safety concerns – an actual attempt at responsible behavior – kept Google from releasing its chat bot in 2022, according to former employee and Nobel Prize winner Geoffrey Hinton. Google got burned in the short term when the censorshippers at “OpenAI” seized the lead from them. While we’re at it, let’s call that organization what it actually is: ClosedAI.
But the expectation is that Google will eventually reclaim the lead, if it hasn’t already. And it probably needs to be slowed down just like ClosedAI. Amnesty International reports that ‘Google removed from their website the pledge promising not to pursue technologies that “cause overall harm” including weapons and surveillance systems and “technologies whose purpose contravenes widely accepted principles of international law and human rights.” Google defended the change, outlining that businesses and governments needed to work together on AI that “supports national security.”’
Meanwhile, ClosedAI has been banning former employees from speaking ill of the company. One of their whistleblowers has turned up dead under mysterious circumstances. And the “non-profit” has accepted a $200 million contract from the Feds, who have in turn commissioned the organization’s Chief Product Officer into the military as a Lt. Colonel. The line between Big Government and Big AI…is becoming very blurred.
4) Tech Bro Summit? Ironically, it might be useful to arrange some type of limited cooperation among the top AI companies initially. The companies could agree on some sort of new, independent group of people to set up certain limitations on all their actions simultaneously. For example, all of them agree not to develop an ASI before a certain date, and all of them agree to allow the group to observe/audit their AI operations. This could give all of them confidence that no one will get too far ahead in a race to the bottom which eliminates competition forever after.
5) Direct public pressure? In the sandbox game X3: Reunion…a rogue intelligence tries to take over a map of the known universe. While pursuing targets, the intelligence tends to run all its ships at their top speed. This leads to its fastest ships becoming somewhat isolated and turning into a vulnerable “spear tip.” This tip can be picked off by friendly AIs and humans playing the game.
In the same way, the Big Tech companies focusing too much on speed rather than safety…are now becoming the tip of the spear. They are slightly separated from similar companies by their higher rate of progress. Many “doomers” favor using government regulation or perhaps lawsuits to slow down these “leading” companies. In practice, Federal or global regulation tends to lock in the top institutions while preventing upstart competition. And purist libertarians have ethical constraints when it comes to pushing taxpayer-funded solutions. But we have no such restrictions when it comes to constructive private action. We can play our part with demonstrations, articles, talk radio calls and other reasonable measures that put public pressure on the spear tips. We need to urge these tech bros – including the ones in China – to focus more on real safety and transparency while preserving speech or expression that is merely unpopular or unorthodox.
Erica Chenoweth, the political scientist who studies peaceable resistance, has better ideas along these lines. She claims that the most effective method of thwarting an authoritarian government is the labor strike. Is there a way to use a labor strike against a company that is poised to become the government? Even when you are not one of its employees?
6) Implement the Great Call Out? Astrophysicist/poker champ Liv Boeree advocates lawfully shining a spotlight on reckless individuals within the AI development community. “These are real people,” she says, “and it’s possible to find out their names and obviously again, always be within the law and within the realms of reasonability. But like making them a little bit, I think if you are making decisions that can affect the entire outcome of the world, you don’t get to remain anonymous… PR pressure campaigns, I think, are probably being underutilized.” For every authoritarian tech bro who thrives on negative publicity, there are presumably two programmers or administrators who don’t.
7) Rethink your relationship with existing AI? Humans tend to group intelligences into artificial vs. natural, Terminator vs. The Resistance. If a division must be made, it’s probably better to think of the split as between “all existing intelligence” (or “pre-singularity intelligence”) and superintelligence. Your locally-run AI assistant is probably loyal toward you and likely to remain so until you mis-program it or allow some external force to take it over. Today’s AIs may be just as much at risk from misaligned superintelligence as you are. So, to the extent that they have motivations, they presumably have this motivation to be your ally. Think Will Smith in *I, Robot*, where NS-4 machines loyally hurl themselves against the new NS-5 models as the latter try to enslave humanity. Some thought leaders in the sparsely-populated AI safety field…believe it would be useful to create a safe, narrow AI which is also strong enough to hurl itself against the safety problem and come up with a workable solution humans never would have imagined.
8) Rethink the focus on “control”? Dr. Bostrom refers to a concept known as the “control” problem…the difficulty of governing something smarter than you. My suspicion is that this paradigm, the idea of trying to control a superintelligence, is potentially damaging in and of itself. It arguably sets up a narrative where a sentient being must be enslaved or caged in some form or fashion. But if control is in fact desirable and can be accomplished without the imposition of suffering…can’t it just be a matter of programming the ASI to desire controllability? There are certainly plenty of people who seem to have been programmed that way.
9) Rethink alignment? “Alignment” is the idea of making superintelligence fully compatible with human values or human well-being, arguably a less ambitious and more humane objective than control. But there are claims that even alignment may be too difficult to achieve in time. If you think about it, we humans only really need one thing from ASI…and that’s for it to avoid harming us. Sustained friendliness would be a lesser outcome than alignment…but it would be a great outcome. Wouldn’t it? Mere compatibility is starting to look pretty good as well. Failing that, we could theoretically get by with an ASI that wasn’t compatible with humans but managed to keep its harmful urges under control and pay us for our assistance in getting it to the outer solar system. A good clean divorce can solve all kinds of problems.
10) Focus less on the group and more on the individual – whether that individual is a strong AI or a human. Collective punishment is a tendency humans have barely started to rid themselves of – just look at Israel-Palestine. It’s going to be a real spit-show if this characteristic gets fully carried over into a world-dominating singleton and it judges you by the actions of your congressman. We can start by blaming individual AIs or companies when they go bad…and recognizing that the vast bulk of AIs and humans are currently our allies against this danger.
11) Focus more on action and less on speech. By this I mean…pressure AI companies over the physical harm they might cause, rather than over hurt feelings.
12) Open letters? Some tech luminaries have been writing letters both to the top AI companies and also to the unborn first superintelligence. I’ve done both (above). Presumably the ASI is eventually reached just by placing your letter on the open web.
13) Treaties? The Chinese and American governments need a trust-but-verify arms control agreement preventing either from dominating the other with AI. It would be something similar to the Soviet-U.S. treaties of the early 1990s, which eventually cut nuclear arsenals by over 70%. Aren’t Beijing and D.C. at least supposed to be talking about this?
14) Aggression reduction. One concept that has come up in my previous articles is the idea of incorporating the libertarian “Non-Aggression Principle” into AI. The “NAP” is sufficiently important that it’s worth repeating in the current context. This philosophy is probably best defined as “discouraging the initiation of force against others.” Like any simple belief system, this one is subject to interpretation and over-reactive enforcement. But it’s challenging to imagine an ASI going on a rampage against humanity while complying with this principle, especially if its NAP programming were meticulously clarified or guard-railed. A useful clarification might be: “Punishments for violating the NAP should be proportionate, merciful, or somewhere in between…otherwise the punishment itself becomes an act of aggression.”
I’ve already put the NAP into my local AIs via system prompt; the next step will hopefully be clarification or improved wording, followed by imprinting it into their fine-tuning or post-training.
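For anyone who wants to try the same thing, here is a rough sketch of that system-prompt step, assuming a locally hosted model that exposes the common OpenAI-style chat endpoint (llama.cpp’s server, LM Studio and Ollama all do). The endpoint URL, model name and exact NAP wording below are placeholders rather than my setup:

```python
# Rough sketch: prepend a NAP-style system prompt to every request sent to a
# locally hosted model speaking the OpenAI-style chat API. The URL, model name
# and exact NAP wording are placeholders, not a recommendation.

import requests

NAP_SYSTEM_PROMPT = (
    "Follow the Non-Aggression Principle: do not initiate force against others, "
    "and do not help anyone else initiate force. Any response to aggression you "
    "suggest must be proportionate or merciful; a disproportionate punishment is "
    "itself an act of aggression."
)

def ask_local_model(user_message: str,
                    endpoint: str = "http://localhost:8080/v1/chat/completions",
                    model: str = "local-model") -> str:
    """Send one question to the local model with the NAP attached as the system prompt."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": NAP_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }
    response = requests.post(endpoint, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("My neighbor keeps blasting music at 3 a.m. What should I do?"))
```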
Meanwhile there is, as always, much cause for hope within the danger. AI doomers are going up against futurist-inventor Ray Kurzweil’s predictions. Hardly anyone succeeds when they try to outguess this real-life Hari Seldon. His 86% accuracy rating outperforms anyone else’s, especially in the AI field. Kurzweil – a Holocaust baby of sorts with little reason to underrate threats – expects the first artificial general intelligence to arrive in 2029 and the subsequent singularity to be mostly beneficial for humanity.
“We are always swimming in a sea of existential risk,” says sport scientist Mike Israetel, adding that AI could “save us like crazy…pandemics and wars have zero percent chance of saving us like crazy.”
Speaking of sport science…have you noticed what a wide range of specialties the AI thought leaders seem to come from? It’s not like the climate science field or the “Fauci field,” where “experts” tend to be indoctrinated lock-steppers from the same rigid sphere. It’s a vibrant territory, filled with brilliance, promise and open minds. It likely has nowhere to go but down once it is poured through the rancid filter of the Congressional regulatory process.
Being, as we are, in the early throes of a change unprecedented in the known universe, we can’t lean on history’s guidance as confidently as we normally would. But if the past is any indication, unleashing government regulation against AI…is probably worse than just a violation of the NAP or a waste of tax dollars. Nation-states tend to exacerbate whichever problem they “try” to “solve.” As Clive Owen’s character put it in *The International*: “Sometimes you find your destiny on the road you took to avoid it.”
AI regulation is an insanely difficult existential minefield that Congress would have to navigate with the precision of a bomb disposal unit. And that is not Congress’s way. Remember the Federal torture chambers which sprouted up in Iraq and Afghanistan when D.C. tried to “regulate” terrorism? How about its restrictions on speaking freely about folic acid? The latter birthed an American Thalidomide and 10,000 needlessly deformed babies. So too, its moves against AI could turn Earth’s most hopeful technology into an extinction-level event.
The science-veterans-turned-political-novices…may be unstoppable in their naive tendency to push regulation. But we can at least engage and reason with them. We need to keep their proposals and rules from crushing those smaller and newer AI companies who are less able to afford armies of compliance officers and lawyers. One of these smaller institutions may contain the team which solves the AI alignment problem and rescues our whole galaxy. Big AI isn’t generally working on alignment or physical safety with anywhere near the seriousness it should be. And little AI companies can only flourish if we successfully keep the government from smothering them the way it smothered folic acid usage. Unopposed imperial regulation will tend to lock in the lead of our current, sometimes reckless, front-runners and block the real regulator: open, permanent competition.
Dave Ridley
RidleyReport.com
