With every new technology comes the potential for abuse. And while AI is clearly starting to deliver an awful lot of value, it's also creating new systemic vulnerabilities that governments now have to worry about and address. Self-driving cars can be hacked. Speech synthesis can make traditional ways of verifying someone's identity less reliable. AI can be used to build weapons systems that are less predictable.
As AI technology continues to develop and become more powerful, we'll have to worry more about safety and security. But competitive pressures risk encouraging companies and countries to focus on capabilities research rather than responsible AI development. Solving this problem will be a big challenge, and it will probably require new national AI policies, as well as international norms and standards that don't currently exist.
Helen Toner is Director of Strategy at the Center for Security and Emerging Technology (CSET), a US policy think tank that connects policymakers to experts on the security implications of new technologies like AI. Her work spans national security, technology policy, and international AI competition, and she's become an expert on AI in China in particular. Helen joined me for a special AI policy-themed episode of the podcast.