Thank you to everyone who attended the inaugural Australian AI Safety Forum!
Event recordings will be made available soon. Please check back for updates.
About the Forum
With the establishment of state-backed AI Safety Institutes in the UK, US, and other countries, and the release of the Interim International Scientific Report on the Safety of Advanced AI, the global focus on AI safety has recently intensified.
The Australian AI Safety Forum is a two-day interdisciplinary event in Sydney on 7-8 November 2024 that builds on this momentum, focusing on the subset of AI developments and AI policy addressed by the AI Safety Institutes and the Report. This forum, the first of its kind in Australia, aims to catalyse conversations on technical AI safety and governance while exploring Australia’s unique role in the global AI safety landscape.
The forum takes the Interim International Scientific Report on the Safety of Advanced AI as its scientific foundation, using its technical findings to frame and advance discussions on policy and governance within the Australian context.
By grounding discussions in the Report’s scientific findings and stimulating dialogue among researchers, policymakers, and industry leaders, the forum seeks to establish an Australian community working on the challenges of technical AI safety and governance, and it is intended to become a regular event. By bringing together diverse perspectives, the forum aims to inspire new collaborations and serve as a starting point for those in Australia interested in contributing to AI safety.
Program
The program for the 2024 Forum can be found here.
Key Details
Date: 7-8 November 2024.
- Day 1 (Thursday 7 November): Forum 9am-5pm. Networking drinks reception 5pm to approximately 7pm (details to be confirmed).
- Day 2 (Friday 8 November): Forum 9am-4pm.
Location: Sydney Knowledge Hub, The University of Sydney
Format: Two-day interdisciplinary conference mixing talks, panels and workshops (see the detailed program linked above).
- Day 1 (Thursday 7th Nov): Single-track program introducing AI safety from both technical and governance perspectives. The day will feature keynote speeches from international AI safety experts, interspersed with panel discussions.
- Day 2 (Friday 8 November): Mainly parallel workshop sessions in which participants will work together to address key questions in AI safety, with a focus on Australia’s potential contributions to the field.
Cost: Free
Speakers
We are delighted to announce the following distinguished keynote speakers for the event:
- Marcus Hutter, Australian National University
- Hoda Heidari, Carnegie Mellon University
- Atoosa Kasirzadeh, Carnegie Mellon University
- Ryan Kidd, ML Alignment & Theory Scholars (MATS)
- Seth Lazar, Australian National University
- Nitarshan Rajkumar, University of Cambridge
Participants
This forum is designed for a diverse group of participants, including:
- Researchers in mathematics, computer science, and the natural and social sciences
- Policymakers
- Legal experts and public policy professionals
- National security and cybersecurity specialists
- Industry professionals (e.g., finance, telecommunications)
- Students and early-career professionals interested in AI safety and governance
Forum Topics
A non-exhaustive list of topics for discussion includes:
- Foundations of AI safety: What is AI safety, what is “alignment”, and why does it matter?
- International AI safety landscape: Introduction to the International Scientific Report on the Safety of Advanced AI.
- Australia’s role: How can Australia contribute to global AI safety efforts? What are its unique strengths?
- Technical AI safety: What are the central technical research questions and how do they relate to policy? Are there blindspots in the current paradigm?
- Science of AI: What are the avenues to better understanding AI scientifically?
- AI governance: How does product regulation work for AI systems? What are the open problems in technical AI governance?
- Evaluations for AI: What are capabilities and how do we measure them?
- Safety engineering: What can AI safety learn from other fields of product safety engineering?
- Risk assessment: What is “high-risk AI” and how do we address it?
- Cross-sector communication: What communication channels should exist between technical researchers, policymakers, and industry? How can we talk to one another?
Organisers
This inaugural event is organised by the following committee:
- Tiberio Caetano, Gradient Institute
- Liam Carroll, Gradient Institute / Timaeus
- Daniel Murfet, The University of Melbourne
- Greg Sadler, Good Ancestors Policy
- Alexander Saeri, The University of Queensland / MIT FutureTech
- Kim Weatherall, The University of Sydney
- Geordie Williamson, The University of Sydney
Partners
This event is made possible thanks to the support of the following organisations:
- Sydney Knowledge Hub
- Digital Sciences Initiative at the University of Sydney
- Faculty of Arts & Social Sciences (FASS) at the University of Sydney
- Faculty of Engineering at the University of Sydney
- Open Philanthropy
- Gradient Institute
- Sydney Mathematical Research Institute
- Timaeus
Contact
If you have any questions, please contact Liam Carroll at liam.carroll@gradientinstitute.org.