


Day 1: Keynote Day - Thursday 7th November

| Time  | Room | Session | Speakers |
|-------|------|---------|----------|
| 08:00 |      | Registration and free barista coffee cart | |
| 09:00 | 277  | WELCOME: Opening of the inaugural Australian AI Safety Forum | Helen Wilson, Deputy Secretary, Science and Technology, Commonwealth Department of Industry, Science, and Resources |
| 09:05 | 277  | WELCOME: Welcome to the venue, Sydney Knowledge Hub | Rupal Ismin, Sydney Knowledge Hub |
| 09:10 | 277  | WELCOME: Welcome to the AI Safety Forum | Liam Carroll, Gradient Institute / Timaeus |
| 09:30 | 277  | INTRODUCTION: State of AI | Tiberio Caetano, Gradient Institute |
| 10:00 | 277  | INTRODUCTION: State of Technical AI Safety | Daniel Murfet, University of Melbourne |
| 10:30 | 277  | INTRODUCTION: State of AI Governance | Kimberlee Weatherall, University of Sydney |
| 11:00 |      | Morning tea | |
| 11:30 | 277  | KEYNOTE: Red-Teaming for Generative AI: Silver Bullet or Security Theater? | Hoda Heidari, Carnegie Mellon University |
| 12:15 | 277  | KEYNOTE: Accelerating AI Safety Talent | Ryan Kidd, MATS Research |
| 13:00 |      | Lunch | |
| 14:00 | 277  | KEYNOTE: Frontier AI Safety Governance: Open Questions | Seth Lazar, Australian National University |
| 14:45 | 277  | KEYNOTE: ASI Safety via AIXI | Marcus Hutter, Australian National University |
| 15:30 |      | Afternoon tea | |
| 16:00 | 277  | Panel discussion | TBC |
| 17:00 |      | Networking and drinks | |
| 18:30 |      | END Day 1 | |

Day 2: Forum Day - Friday 8th November

| Time  | Room | Session | Convenors | Description |
|-------|------|---------|-----------|-------------|
| 08:30 |      | Doors open | | |
| 09:00 | 277  | Introduction to Day 2 | Liam Carroll, Gradient Institute / Timaeus | |
| 09:05 | 277  | WORKSHOP: State of the science: The Interim International Scientific Report on the Safety of Advanced AI | Daniel Murfet, University of Melbourne | The Interim International Scientific Report on the Safety of Advanced AI describes the capabilities, risks, and technical approaches to address risks of increasingly capable general-purpose AI systems. In this session, participants will explore the best current scientific understanding of AI safety, including critical challenges and emerging approaches for making progress. |
| 10:30 |      | Morning tea | | |
| 11:00 | 277  | WORKSHOP: International governance of AI safety: a role for Australia? | Johanna Weaver, Tech Policy Design Centre; Chelle Adamson, Dept. of Industry, Science & Resources | Many countries and institutions are enacting laws or multilateral agreements about how to develop and use AI. In this session, learn about the latest international processes, laws, and proposals, explore their relevance to the Australian context, and discuss how Australia might participate in international governance for AI safety. |
| 11:00 | 275  | WORKSHOP: Unpacking "Safe" and "Responsible" AI | Qinghua Lu, CSIRO; Alexander Saeri, MIT FutureTech | Responsible AI and AI Safety are often discussed together, but what is the relationship between these concepts and communities? This session gives an overview of recent progress in the science of Responsible AI, then opens a discussion on cross-pollination between the Responsible AI and AI Safety research and practice communities. |
| 11:00 | 273  | WORKSHOP: Perspectives on generalisation in the science of AI safety | Daniel Murfet, University of Melbourne; Marcus Hutter, Australian National University | Understanding how AI models learn and generalise is necessary for designing, building, and deploying AI safely. In this session, Daniel Murfet and Marcus Hutter explore mathematical frameworks that illuminate AI behaviour, with implications for technical approaches to AI safety. |
| 12:00 | 277  | WORKSHOP: New Governance Proposals for Frontier AI Safety | KEYNOTE: Atoosa Kasirzadeh, Carnegie Mellon University; Seth Lazar, Australian National University | Governing the most advanced 'frontier' AI systems presents unique challenges beyond general AI governance. This session will cover responsible scaling policies, compute governance, open-source development, governing autonomous agents, and pre-release testing. Participants will collectively examine the political and social dimensions of frontier AI governance and explore trade-offs in governance strategies for highly capable AI systems. |
| 12:00 | 275  | WORKSHOP: Emerging practice in technical AI safety | Soroush Pour, Harmony Intelligence; Ryan Kidd, MATS Research; Karl Berzins, FAR AI | This session explores concrete initiatives in technical AI safety: AI evaluations development, talent cultivation, and research acceleration. Join Soroush Pour from Harmony Intelligence, Ryan Kidd from MATS Research, and Karl Berzins from FAR AI as they share their experiences building concrete AI safety projects and programs, followed by open discussion. |
| 13:00 |      | Lunch | | |
| 14:00 | 277  | WORKSHOP: What could an Australian AI Safety Institute look like? | KEYNOTE: Nitarshan Rajkumar, University of Cambridge; Greg Sadler, Good Ancestors Policy | The UK, US, Japan, and others have established AI Safety Institutes to research and support action on risks from AI. In this session, participants will discuss what an Australian AISI could do, how it could advance AI safety in Australia and internationally, and how such an institute could operate. |
| 15:50 | 277  | Concluding remarks | Liam Carroll, Gradient Institute / Timaeus | |
| 16:00 |      | END Day 2 | | |