Cybersecurity for Artificial Intelligence

We are on the cusp of a revolution in artificial intelligence (AI). Today, AI plays a significant role in daily life, and the impact of AI is sure to increase dramatically over the coming years. Perhaps surprisingly, the net effect of this AI revolution on cybersecurity is, at present, unclear, as both the “good guys” and the “bad guys” can employ such technology. If cybersecurity is to reap major benefits from AI, the technology itself must be better understood—black boxes are inherently the enemy of security.

Models used in AI are notoriously opaque, which creates numerous potential problems. From a cybersecurity perspective, one of the greatest of these is the threat of adversarial attacks. Explainable AI, for example, is therefore of fundamental importance in information security.

This book includes chapters that attempt to illuminate various aspects of the AI black boxes that have come to dominate cybersecurity. The topics of explainable AI and adversarial attacks—as well as the closely related issue of model robustness—are considered. Most of the chapters explore these and similar topics in the context of specific security threats. The security domains considered include such diverse areas as malware, biometrics, and side-channel attacks, among others. We have striven to make the material accessible to the widest possible audience of researchers and practitioners.

We are confident that this book will prove valuable to practitioners working in the field and to researchers in both academia and industry. The chapters include insights that should help to illuminate some of the darkest corners of popular AI models that are used in cybersecurity.