Saving humanity from hostile superintelligence - these folks may deserve your support

Here are the people and institutions that seem focused on cracking the “alignment problem” without leaning heavily on government regulation. In other words, they’re trying to figure out how one would build an artificial superintelligence that’s genuinely friendly to humanity. This is a tentative list; I’ll probably add and remove entries as I find problems with some of the labs or discover new hopefuls. Meanwhile, I’m trying to draw positive attention to them via talk-radio calls and articles. These are NOT the folks likely to create an early superintelligence; they’re the ones trying to figure out how it might be done safely.

1) Nick Bostrom. More of a theorist and philosopher than a developer. I’ve been following him since 2019 and have donated to his former institute. But my interest has turned more toward the action-oriented problem-solvers below.

2) Astera Institute physicist Steven Byrnes? DoomDebates.com rates Byrnes as one of the humans most likely to prevent an AI doom scenario. Despite his 90% probability-of-doom prediction, he seems to have dedicated his life to finding a decisive fix. He’s apparently not focused on government regulation, the “solution” that is so likely to increase AI dangers (see below). This makes him a refreshing force in the Doom Debates pantheon.
https://astera.org/

Byrnes’ rather realistic approach – or at least his backup plan – seems to factor in his own probable inability to talk AI companies into behaving safely. Instead he appears to be designing an off-the-shelf repair that can be rushed into action after some hubristic tech bro unleashes a harmful intelligence.

3) Yoshua Bengio at LawZero.org? Bengio, one of the “three godfathers of modern AI,” believes you can possibly crack the alignment problem by building what you might call an “oracle rather than a genie.” The idea is that the oracle would have no agentic capability and no goals; it’s just a scientist who answers questions. From that point, in theory, it could answer the question humans apparently cannot: “How do we design an artificial superintelligence which is permanently friendly to humanity?”
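
For the programming-minded, here’s a minimal sketch of the oracle-versus-genie distinction. The Oracle class and its ask() method are my own hypothetical names, not anything from LawZero; the point is simply that the system answers questions and has no loop for choosing or executing actions.

```python
# Toy illustration of the "oracle, not genie" idea: the system can only return
# answers; it has no tools, no goals, and no channel for acting on the world.
# Class and method names here are hypothetical, not LawZero's actual design.

from dataclasses import dataclass


@dataclass(frozen=True)
class Answer:
    text: str
    confidence: float  # the oracle reports uncertainty rather than taking action


class Oracle:
    """Answers questions. Deliberately has no methods that touch the outside world."""

    def ask(self, question: str) -> Answer:
        # In a real system this would be a predictive model; here it's a stub.
        return Answer(text=f"[model's best estimate for: {question}]", confidence=0.6)


# An agent, by contrast, would wrap a model in an observe-decide-act loop.
# The oracle has no such loop, so any action must pass through a human.
if __name__ == "__main__":
    oracle = Oracle()
    print(oracle.ask("How do we design a superintelligence that stays friendly to humanity?"))
```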

4) Emmett Shear from Softmax? Shear (co-founder of Twitch) is aiming to grow friendly AI organically, from the bottom up. As I understand it, he runs simulations in which lots of fairly dumb AIs interact in a 2D game over millions of subjective years. He’s trying to get them to align and/or to discover patterns that reliably create alignment.

https://softmax.com/about
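
I don’t have access to Softmax’s actual simulations, so treat the following as a toy sketch of the general shape of that kind of experiment, with made-up parameters: simple agents wandering a 2D world, occasionally finding and sharing food, plus a crude imitation rule so that whatever behavior pays off tends to spread. Whether cooperative behavior becomes more common over long horizons is the sort of question a setup like this lets you ask.

```python
# Toy multi-agent simulation in the spirit of "many simple AIs in a 2D world."
# Purely illustrative: Softmax's real methods and metrics are far more
# sophisticated and are not reproduced here. All parameters are made up.

import random

GRID = 20        # side length of the 2D torus world
N_AGENTS = 40
STEPS = 2_000

# Each agent is a position, a food store, and a propensity to share with neighbors.
agents = [{"x": random.randrange(GRID), "y": random.randrange(GRID),
           "share": random.random(), "food": 5.0} for _ in range(N_AGENTS)]


def torus_dist(p, q):
    d = abs(p - q)
    return min(d, GRID - d)


def adjacent(a, b):
    return torus_dist(a["x"], b["x"]) <= 1 and torus_dist(a["y"], b["y"]) <= 1


for step in range(STEPS):
    for a in agents:
        # Random walk on the torus, occasional food find, constant metabolic cost.
        a["x"] = (a["x"] + random.choice((-1, 0, 1))) % GRID
        a["y"] = (a["y"] + random.choice((-1, 0, 1))) % GRID
        if random.random() < 0.1:
            a["food"] += 1.0
        a["food"] -= 0.05

    # Agents that meet may share food, weighted by their sharing propensity.
    for a in agents:
        for b in agents:
            if a is not b and adjacent(a, b) and a["food"] > 1.0 and random.random() < a["share"]:
                a["food"] -= 0.5
                b["food"] += 0.5

    # Crude selection: every 100 steps the hungriest agent imitates the
    # best-fed agent's sharing propensity, with a little mutation.
    if step % 100 == 99:
        worst = min(agents, key=lambda ag: ag["food"])
        best = max(agents, key=lambda ag: ag["food"])
        worst["share"] = min(1.0, max(0.0, best["share"] + random.gauss(0, 0.05)))

# A crude "alignment" readout: did sharing become more or less common?
print("average sharing propensity:",
      round(sum(a["share"] for a in agents) / N_AGENTS, 2))
```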