Applications open on September 26th. Acceptance notifications will be sent from October 10th.
About the Forum
The Australian AI Safety Forum is a two-day interdisciplinary event scheduled for November 2024. This forum, the first of its kind in Australia, aims to discuss perspectives on technical AI safety and governance, and explore Australia’s unique role in the global AI safety landscape. The event will be anchored around the International Scientific Report on the Safety of Advanced AI, highlighting its key content and examining its implications for Australia.
The forum will encourage conversation and education among researchers, policymakers, and industry leaders. It aims to establish an Australian community working on problems of technical AI safety and governance, and is intended to become a regular event. By bringing together diverse perspectives, the forum hopes to inspire new collaborations and engage researchers who are interested in AI safety but have been unsure how to contribute.
Key Details
Date: 7th-8th November 2024.
- Day 1 (Thursday 7th Nov): Forum 9am-5pm. Networking drinks reception 5pm to ~7pm (details to be confirmed).
- Day 2 (Friday 8th Nov): Forum 9am-4pm.
Location: Sydney Knowledge Hub, The University of Sydney
Format: Two-day interdisciplinary conference featuring a mix of talks, panels, and workshops; a detailed program is to be confirmed.
- Day 1 (Thursday 7th Nov): Single-track program introducing AI safety from both technical and governance perspectives. The day will feature keynote speeches from international AI safety experts, interspersed with panel discussions.
- Day 2 (Friday 8th Nov): Mainly parallel workshop sessions where participants will collaborate on addressing key questions in AI safety, with a focus on Australia’s potential contributions to the field.
Cost: Free
Participants
Who is this for: This forum is designed for a diverse group of participants, including:
- Researchers in mathematics, computer science, the natural sciences, and the social sciences
- Policymakers
- Legal experts and public policy professionals
- National security and cybersecurity specialists
- Industry professionals (e.g., finance, telecommunications)
- Students and early-career professionals interested in AI safety and governance
Forum Topics
A non-exhaustive list of topics for discussion includes:
- Foundations of AI safety: What is AI safety, what is “alignment”, and why does it matter?
- International AI safety landscape: Introduction to the International Scientific Report on the Safety of Advanced AI.
- Australia’s role: How can Australia contribute to global AI safety efforts? What are its unique strengths?
- Technical AI safety: What are the central technical research questions and how do they relate to policy? Are there blindspots in the current paradigm?
- Science of AI: What are the avenues to better understanding AI scientifically?
- AI governance: How does product regulation work for AI systems? What are the open problems in technical AI governance?
- Evaluations for AI: What are capabilities and how do we measure them?
- Safety engineering: What can AI safety learn from other fields of product safety engineering?
- Risk assessment: What is “high-risk AI” and how do we address it?
- Cross-sector communication: What communication channels should exist between technical researchers, policymakers, and industry? How can we talk to one another?
Organisers
This inaugural event is organised by the following committee:
- Tiberio Caetano, Gradient Institute.
- Liam Carroll, Gradient Institute and Timaeus.
- Daniel Murfet, The University of Melbourne.
- Greg Sadler, Good Ancestors Policy.
- Kim Weatherall, The University of Sydney.
- Geordie Williamson, The University of Sydney.
Partners
This event is made possible thanks to the support of the following organisations:
- Digital Sciences Initiative at the University of Sydney
- Gradient Institute
- Sydney Knowledge Hub
- Sydney Mathematical Research Institute
- Timaeus
Contact
If you have any questions, please contact Liam Carroll at liam.carroll@gradientinstitute.org.