Rethinking AI Safety in the Age of Agentic Systems

15 min
June 19, 2025
WEKA Stage

About

In a compelling session at SuperAI Singapore 2025, Luke Soon delves into the pressing issue of AI safety in a world increasingly dominated by agentic systems. As an expert deeply involved in AI development, Soon examines the fine line between utopian and dystopian futures created by the rapid advancement of AI, urging immediate action from governments and the tech industry alike.

Highlighting the transformative potential of AI systems, Soon illustrates how agentic technologies, designed to perform complex tasks autonomously, can efficiently augment the workforce. He cautions, however, that too little attention has been paid to the pitfalls of these systems, particularly in decision-making and resource allocation. The talk stresses the importance of scrutinizing AI decision-making processes, covering concepts such as agent autonomy and contextual memory, along with vulnerabilities like data poisoning and other malign influences.

Soon also touches on the broader societal implications of AI, asking how humanity might redefine its purpose if productivity, the primary driver of human endeavor for centuries, becomes largely automated. With human-level AI intelligence potentially arriving within a decade, he advocates a concentrated effort on AI safety across global policy landscapes.

The discussion not only highlights existing gaps in policy and thinking but also calls for proactive monitoring and responsible prompting of AI models to prevent catastrophic outcomes. A structured approach to AI safety is paramount as we navigate this complex evolution in technology.
