AI Virtue
Committing decisively to prevent AI Risk and catastrophic outcomes.
The race toward powerful AI, AGI, and even Superintelligence continues.
As AI becomes more powerful and more widely used, more and more human control is delegated to AI systems we cannot verify are safe. The real race is not one of markets, defense, or superpowers, but one of AI Safety versus AI Power. Those who uphold AI Virtue are on the Wall of Honor.
They are the AI technical professionals who choose to make safety a priority - deciding not to contribute to capabilities that outpace safety, whether directly or through safety work that serves to accelerate them.
Honoring AI Professionals Who Put AI Safety First.
Who Gets Nominated?
AI technical professionals who choose to make safety a priority - deciding not to contribute to increasing AI capabilities, whether directly or through safety work that serves to accelerate them. They often sacrifice profit, personal gain, career advancement, and community. They have committed decisively to preventing AI Risk and catastrophic outcomes.
Selection Criteria:
- Prominent AI researchers who abstain from frontier AI work despite their expertise.
- AI safety whistleblowers from frontier AI labs who have since eschewed similar organizations.
- Former employees of frontier AI labs who resigned over critical safety concerns, or were fired for raising them, and have eschewed similar organizations.
- Top AI researchers who work at dedicated AI safety organizations that acknowledge critical AI risk.
- Those who do not contribute to capabilities research, or to safety work that serves to accelerate frontier models.
- Those who do not seek to create Superintelligence, or to push the limits of AGI, in the near term.