Exploring multi-task learning in the context of two masked AES implementations

Thomas Marquet, Elisabeth Oswald

Research output: Working paper/Preprint

Abstract

This paper investigates different ways of applying multi-task learning to two masked AES implementations (via the ASCAD-r and ASCAD-v2 databases). Building on multi-task learning, we propose novel architectures that significantly increase the consistency and performance of deep neural networks in a setting where the attacker cannot access the randomness of the countermeasures during profiling. We provide a wide range of experiments to understand the benefits of multi-task strategies over the current single-task state of the art. We show that multi-task learning significantly outperforms single-task models in all our experiments. Furthermore, these strategies achieve new milestones against protected implementations: we present the best attacks to date on ASCAD-r and ASCAD-v2, along with models that, for the first time, defeat all masks of the affine masking on ASCAD-v2.
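To illustrate the general idea of a multi-task profiling model, the sketch below builds a network with a shared feature extractor and several task-specific output heads, one per intermediate value (e.g., a mask and a masked S-box output). This is a minimal Keras/TensorFlow example assuming a generic setup; the trace length, layer sizes, head names, and the two chosen targets are illustrative assumptions and not the authors' exact architecture.

```python
# Minimal multi-task CNN sketch (assumptions: trace window of 700 samples,
# byte-valued intermediates, and two example heads "mask" and "masked_sbox";
# these choices are illustrative, not the paper's exact design).
from tensorflow.keras import layers, Model

TRACE_LEN = 700      # assumed length of the profiling trace window
NUM_CLASSES = 256    # byte-valued intermediate targets

def build_multitask_model():
    traces = layers.Input(shape=(TRACE_LEN, 1), name="traces")

    # Shared trunk: a common feature extractor reused by every task head.
    x = layers.Conv1D(32, 11, activation="selu", padding="same")(traces)
    x = layers.AveragePooling1D(2)(x)
    x = layers.Conv1D(64, 11, activation="selu", padding="same")(x)
    x = layers.AveragePooling1D(2)(x)
    x = layers.Flatten()(x)
    shared = layers.Dense(256, activation="selu")(x)

    # Task-specific heads: one classifier per secret share / intermediate.
    mask_head = layers.Dense(NUM_CLASSES, activation="softmax",
                             name="mask")(shared)
    masked_sbox_head = layers.Dense(NUM_CLASSES, activation="softmax",
                                    name="masked_sbox")(shared)

    model = Model(inputs=traces, outputs=[mask_head, masked_sbox_head])
    model.compile(
        optimizer="adam",
        loss={"mask": "categorical_crossentropy",
              "masked_sbox": "categorical_crossentropy"},
        metrics=["accuracy"],
    )
    return model
```

At attack time, the per-head probability distributions would be recombined according to the masking relation of the target implementation to score key hypotheses; how that recombination is done depends on the scheme (e.g., the affine masking of ASCAD-v2 involves both additive and multiplicative masks).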
Original language: English
Publisher: Cryptology ePrint Archive
Publication status: Published - 2 Jan 2023

Keywords

  • Side Channel Attacks
  • Masking
  • Deep Learning
  • Multi-task Learning
