OpenAI CEO Sam Altman has sounded the alarm on the increasing integration of advanced artificial intelligence systems in everyday life, urging policymakers in the United States to introduce proactive measures. Altman highlighted that AI is no longer a purely theoretical concept and is now delivering tangible outcomes across numerous sectors, including the economy.
AI heightens cybersecurity threats and reveals new vulnerabilities
Modern AI models are now capable of performing various tasks independently, ranging from coding to complex research assignments. Altman explained that the next generation of AI technologies may empower scientists to achieve groundbreaking discoveries. In addition, tasks traditionally handled by teams may soon be accomplished by individuals working alone, thanks to sophisticated AI tools.
The impact of AI in the field of cybersecurity is already being felt. Charles Guillemet, Chief Technology Officer at hardware wallet manufacturer Ledger, noted that AI-powered tools are making it both cheaper and easier to identify and exploit software vulnerabilities. Where reverse engineering and vulnerability analysis once required months of labor-intensive work, well-crafted AI prompts can now produce comparable results in seconds.
Last year, losses from attacks on the cryptocurrency ecosystem surpassed $1.4 billion, and experts caution that this figure could climb even higher. As developers grow increasingly reliant on AI-generated code, there is concern that the trend could inadvertently introduce new, large-scale security risks.
Guillemet emphasized that, to counter emerging threats, the use of mathematically verified code should be prioritized. He also advocated for widespread adoption of secure hardware to keep private keys offline, and stressed the importance of designing systems with the expectation that failures can occur at any moment.
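The design principles Guillemet describes can be sketched in code. The following is a minimal, illustrative Python example (not a real Ledger API, and HMAC stands in here for a proper hardware-backed signature scheme): the private key lives inside a sealed signer object and never crosses its boundary, callers receive only signatures, and every operation validates its inputs so that failure is treated as an expected condition rather than silently trusted.

```python
import hashlib
import hmac


class SealedSigner:
    """Holds a secret key internally; only signatures ever leave."""

    def __init__(self, secret: bytes):
        # Fail fast on weak input rather than proceed in a degraded state.
        if len(secret) < 32:
            raise ValueError("secret too short; expected >= 32 bytes")
        self._secret = secret  # never exposed by any public method

    def sign(self, message: bytes) -> bytes:
        if not isinstance(message, bytes) or not message:
            raise ValueError("message must be non-empty bytes")
        # HMAC-SHA256 is a stand-in for a hardware signature scheme.
        return hmac.new(self._secret, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        # Constant-time comparison; a mismatch is a handled condition,
        # consistent with designing for failure at any moment.
        return hmac.compare_digest(self.sign(message), signature)


signer = SealedSigner(b"\x01" * 32)
sig = signer.sign(b"transfer 1 BTC")
assert signer.verify(b"transfer 1 BTC", sig)      # valid signature accepted
assert not signer.verify(b"transfer 2 BTC", sig)  # tampered message rejected
```

The point of the sketch is the boundary: application code can request signatures but has no path to read the key material, which is the software analogue of the hardware isolation Guillemet advocates.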
Call for coordinated action against AI-powered attacks
While Altman acknowledged AI’s potential to accelerate breakthroughs in fields like drug discovery and materials science, he also warned that the same technology poses a serious threat by enabling malicious biological research and highly effective cyberattacks. He underscored the need for rapid cooperation between governments, technology firms, and cybersecurity organizations in light of the possibility that such risks could materialize as soon as next year.
“We are not far from a world where open-source models capable of advanced biological applications are readily available,” Altman stated. He argued that strengthening societal resilience against terrorist organizations exploiting these tools is no longer just theory, but an urgent practical necessity.
Altman went on to cite the plausible scenario of a “globally impactful cyberattack” occurring within the year, highlighting the critical need for robust prevention efforts. He indicated that current policy proposals are intended to spark discussion regarding oversight of rapidly evolving, multi-capable AI systems. Moreover, Altman pointed out that AI technologies themselves could play a pivotal role in fortifying defenses against potential attacks.
Addressing the prospect of OpenAI becoming a state-owned entity, Altman argued that the best case for keeping the company private is ensuring the United States develops superintelligent AI aligned with its democratic values before rival nations. He dismissed the feasibility of such a transformative initiative being managed as a government project.
Altman’s comments are also closely tied to his own financial interests in the evolving AI sector. This connection informs his stance on accelerating regulatory reforms and recognizing the crucial role of the private sector in risk management. He also pointed out that, as AI adoption increases rapidly, managing energy costs could soon become a major concern for both the industry and policymakers.