Vol 5, No 1 (2020)

AI Safety, Robustness & Verification

ABSTRACT

Artificial Intelligence (AI) systems are rapidly becoming part of critical domains such as healthcare, finance, transportation, governance, and defense. While these systems offer significant advantages in automation and decision-making, concerns about their safety, reliability, robustness, and trustworthiness have grown in parallel. AI failures can have serious consequences, including biased decisions, system crashes, adversarial exploitation, and unintended behavior. This paper presents a comprehensive review of AI safety, robustness, and verification techniques aimed at ensuring reliable performance of AI systems under real-world conditions. It discusses challenges associated with adversarial attacks, distribution shift, model uncertainty, and interpretability. The paper then reviews verification and validation approaches, including formal methods, testing strategies, explainability tools, and runtime monitoring. Practical frameworks, evaluation metrics, and recent research trends are summarized to guide researchers and practitioners in designing dependable AI systems. Tables and figures are included to aid comparison of the surveyed techniques.

KEYWORDS: AI Safety, Robustness, Verification, Adversarial Attacks, Explainable AI, Formal Methods, Trustworthy AI, Model Validation
