Safe-AI-training

We’re introducing a new training program: Safe AI — a practical course on building and operating AI systems with a safety mindset, from requirements to evidence and argumentation.

This training is designed for teams that develop, integrate, validate, or approve AI/LLM-based functions in regulated or risk-critical contexts.

What you will learn

  • Norms & standards landscape relevant to AI safety and assurance
  • Trends in LLM robustness and what actually matters for engineering practice
  • AI safety life-cycle: from concept and requirements to validation and change management
  • AI architecture from a safety perspective: system boundaries, safety mechanisms, monitoring, fallbacks
  • Safety analysis for AI-enabled systems: hazards, failure modes, misuse/abuse cases, and mitigation strategies
  • Safety argumentation: how to build credible assurance cases and evidence chains

Hands-on, not theory-only

Every module comes with:

  • Demonstrations (live or recorded)
  • Real-life examples from engineering projects
  • Case studies that walk through decisions, trade-offs, and evidence you’d need in practice

Who it’s for

  • Safety engineers, system architects, AI/ML engineers
  • Product owners and technical leads
  • Verification/validation engineers, quality and compliance specialists, and program managers

Format

  • Instructor-led training with structured materials and practical exercises
  • Suitable for company-internal sessions or open enrollment (depending on the offering)

Training flyer