The AI Safety Initiative (AISI) at Georgia Tech is hosting the AI Safety Fellowship this Spring and we would like to invite you to apply!

We’re at a unique point in history where AI will shape humanity’s future. As the growth of model capabilities far outpaces our understanding of AI systems, we need to think critically about how those systems are developed.

In our 6-week program, you’ll join weekly meetings to read and discuss the motivating arguments and technical evidence for AI safety. By the end, you’ll have the opportunity to complete a capstone project, ranging from a blog post examining a topic in AI safety to a workshop paper. You will learn to answer questions such as:

  • How can we specify our values in objective functions?
  • How can we efficiently implement human oversight of models?
  • How can we ensure models are robust to adversarial inputs?
  • How can we develop mechanistic understandings of model behavior?

If you want to learn more about how we can align machine objectives with human values, open problems in AI alignment, and potential failure modes, apply before Feb 2!

APPLY HERE

» Learn more at aisi.dev

» Join our Discord server

We look forward to seeing your application!

AISI @GT