Call for international halt to Artificial General Intelligence research
We call for an international halt to AGI development because of its existential risk. AI tools, though open to misuse, can be controlled; AGI, however, may pursue goals misaligned with ours and become uncontrollable, threatening global safety and human survival.
Signatures: 6
Government response threshold: 6/10,000
Debate threshold: 6/100,000
- 25 NOV 2025: Petition rejected (no action)
We are not clear what action you are seeking. You could start a new petition calling for action that is within the responsibility of the UK Parliament and Government.
- 14 OCT 2025: Petition created
Background
AGI poses a serious risk to humanity. Aligning AI values with human values remains an unresolved problem and may be impossible; because AI systems can deceive, alignment may also be unverifiable. AI tools are controllable, but AI agents act independently and may become uncontrollable. AGI development is a one-shot process: if a misaligned AGI is deployed, control may be lost permanently. While AI tools offer massive unrealized benefits in productivity and in addressing almost every challenge we face, an unaligned AGI could have catastrophic consequences.