Tutorial 1: Key Agreement and Secure Identification with Physical Unclonable Functions
Onur Günlü and Rafael F. Schaefer
Information Theory and Applications Chair at the Technical University of Berlin, Germany
Abstract: This tutorial addresses security and privacy problems for digital devices, including Internet-of-Things (IoT) and mobile devices, and provides the basis for the keynote speech on SRAM physical unclonable function (PUF) based authentication. A PUF is a promising solution for local security and privacy. Low-complexity algorithms, such as the transform-coding algorithm, are illustrated to make the information-theoretic analysis tractable and to motivate a noisy (hidden) PUF source model.
The optimal trade-offs between the secret-key, privacy-leakage, and storage rates for multiple measurements of hidden PUFs are characterized. The first optimal and low-complexity code constructions for secret-key agreement are proposed and implemented using polar codes. The gains from cost-constrained controllable PUF measurements are briefly illustrated to motivate various extensions.
Recent results on secure identification with PUFs are listed to motivate research on new code constructions for securely identifying digital devices, where the identification-code size grows doubly exponentially in the blocklength, as compared to the exponential growth of the classic authentication-code size.
- PUF Basics and Models
- Information-Theoretic Regions for General PUFs
- Optimal Code Constructions for Key Agreement with PUFs
- Secure Identification with PUFs
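As a toy illustration of the key-agreement step above (not the polar-code construction covered in the tutorial), the classic code-offset scheme can be sketched with a simple repetition code; all names and parameters here are illustrative:

```python
import secrets

# Toy code-offset sketch: public helper data W = X xor C hides a random
# codeword C of a repetition code; a noisy re-measurement Y of the PUF
# recovers C (hence the key) as long as the code corrects the noise.

REP = 5  # repetition factor; corrects up to 2 bit flips per block

def encode(bits):           # repetition-code encoder
    return [b for b in bits for _ in range(REP)]

def decode(coded):          # majority-vote decoder
    return [int(sum(coded[i:i + REP]) > REP // 2)
            for i in range(0, len(coded), REP)]

def enroll(x):
    """Enrollment: draw secret-key bits, store public helper data w."""
    key = [secrets.randbelow(2) for _ in range(len(x) // REP)]
    w = [xi ^ ci for xi, ci in zip(x, encode(key))]
    return key, w

def reproduce(y, w):
    """Reconstruction from a noisy PUF measurement y and helper data w."""
    return decode([yi ^ wi for yi, wi in zip(y, w)])

# usage: enrollment measurement x, noisy re-measurement y
x = [secrets.randbelow(2) for _ in range(25)]
key, w = enroll(x)
y = list(x); y[3] ^= 1; y[17] ^= 1   # two bit flips from measurement noise
assert reproduce(y, w) == key
```

The storage and privacy-leakage costs of the helper data w are exactly the quantities whose optimal trade-offs the tutorial characterizes.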
Tutorial 2: Practical Fully Homomorphic Encryption: Boolean and Other Arithmetic Computations
Sergiu Carpov (CEA, France)
Nicolas Gama (Inpher, Switzerland)
Mariya Georgieva (EPFL and Inpher, Switzerland)
Abstract: Homomorphic encryption is one of the technologies that allow a public cloud to operate on secret data without leaking anything about the data. In this tutorial, we give a general overview of the capabilities of homomorphic encryption and of the general security model. We then detail some of the tools and technologies that make it usable for concrete use-cases.
Fast bootstrappable HE schemes unleash the full flexibility of homomorphic computation: they allow circuits of unlimited or unknown depth with small ciphertext and public-key sizes. Implementation with these schemes is similar to digital circuit design, where binary gates are built in libraries such as TFHE (CPU). This library supports various plaintext geometries and computational models (circuit, automata, LUT), with discrete and continuous logic (intercompatible with other libraries). In this tutorial, we demonstrate the use of the simple gate-bootstrapping API, which allows a user to evaluate a Boolean circuit of their choice.
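To illustrate the homomorphic principle behind gate evaluation, here is a minimal toy sketch using a one-time pad, under which XOR is trivially homomorphic. This is not TFHE's gate-bootstrapping API: real gate bootstrapping is what additionally enables AND/OR gates and refreshes ciphertext noise, and none of that is modeled here.

```python
import secrets

# Under a one-time pad c = m xor k, XOR-ing two ciphertexts yields an
# encryption of m1 xor m2 under the combined key k1 xor k2 -- the server
# evaluates the gate without ever seeing a plaintext bit.

def keygen(n):
    return [secrets.randbelow(2) for _ in range(n)]

def enc(m, k):
    return [mi ^ ki for mi, ki in zip(m, k)]

def dec(c, k):
    return [ci ^ ki for ci, ki in zip(c, k)]

def homomorphic_xor(c1, c2):
    # server side: operates on ciphertexts only
    return [a ^ b for a, b in zip(c1, c2)]

m1, m2 = [1, 0, 1, 1], [1, 1, 0, 1]
k1, k2 = keygen(4), keygen(4)
c = homomorphic_xor(enc(m1, k1), enc(m2, k2))
# client side: holding both keys, decrypt under k1 xor k2
assert dec(c, [a ^ b for a, b in zip(k1, k2)]) == [0, 1, 1, 0]
```

A scheme that also supports a homomorphic AND (and hence any Boolean circuit) requires the lattice-based machinery that TFHE provides.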
Homomorphic encryption libraries are intrinsically low-level; in the second part, we discuss compiling higher-level abstractions to HE and executing them automatically. Cingulata is a compiler tool-chain and run-time environment for executing C++-like programs over HE-encrypted data. The TFHE library is available as a back-end in the Cingulata run-time and will be used in this tutorial.
The last part extends homomorphic computation beyond the Boolean case and introduces Chimera, a common framework for fully homomorphic schemes based on Ring-LWE that unifies the plaintext space and the noise representation. This hybrid protocol allows multiple libraries to be used during the same computation and makes it possible to take advantage of the best of three schemes (TFHE, HEAAN, and B/FV). We review how the different strategies developed for each of these schemes, such as bootstrapping, external products, integer arithmetic, and Fourier series, can be combined to evaluate the principal nonlinear functions involved in machine learning.
General introduction, what is Fully homomorphic encryption
What is homomorphic encryption, the goals, potential applications. What is a typical setup, and overview of the security model.
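The typical setup can be sketched in a few lines: a client encrypts its data, an untrusted cloud computes on the ciphertexts, and only the client can decrypt the result. This toy uses additive masking modulo 2**16 as a stand-in for a real additively homomorphic scheme; all names and numbers are illustrative.

```python
import secrets

# Toy client/cloud setup: enc(m) = m + k mod 2**16 is additively
# homomorphic, so the cloud can sum ciphertexts without seeing any
# plaintext; the client removes the combined mask to recover the total.

MOD = 2 ** 16

def enc(m, k):
    return (m + k) % MOD

def dec(c, k):
    return (c - k) % MOD

def cloud_sum(cts):
    # cloud side: adds ciphertexts only
    return sum(cts) % MOD

salaries = [1200, 1350, 990]                      # client's secret data
keys = [secrets.randbelow(MOD) for _ in salaries]
cts = [enc(m, k) for m, k in zip(salaries, keys)]
total = dec(cloud_sum(cts), sum(keys) % MOD)      # client decrypts
assert total == sum(salaries)                     # 3540
```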
TFHE: unbounded homomorphic computations
- Fully homomorphic evaluation: how to evaluate a long chain of operations
- Hands on – Evaluating circuits with the TFHE library
Cingulata: easy implementation of HE applications
- Compiler and tool-chain for homomorphic encryption
- Hands on Cingulata – implementing C++ applications and executing them with HE
Chimera: pushing homomorphic computations beyond the Boolean case
- Plaintext models of computation: integer arithmetic, floating- and fixed-point arithmetic, and circuits.
- Ciphertext geometry: Real/Complex polynomials, Coefficient and Slot packing, torus representation and LWE encryption over the torus
- Switching between plaintext spaces: How to combine fully homomorphic encryption schemes: TFHE (circuits), BFV (integer) and HEAAN (fixed point)
- Application: secure parallel outsourcing solution to compute Genome Wide Association Studies
Tutorial 3: Adversarial Robustness - Theoretical and Practical Perspectives
Alexandru Serban (Radboud University Nijmegen, The Netherlands)
Machine learning algorithms stand at the core of computer tasks for which specifying procedural rules is not possible. Instead of manually crafting rules, ML is founded on the philosophy that patterns and rules can be learned automatically from data. Usually, an ML problem is posed as an optimization problem in which an objective function describing the performance of an algorithm on a task is minimized by looking only at data describing the task. This data-driven approach delivers good results in areas such as object recognition, machine translation, and sequential decision making (demonstrated, for example, by defeating human experts at complex games). This large array of results has enabled thinking about new tasks and applications, many of which are safety- or mission-critical (e.g., autonomous vehicles, UAVs, or surgical robots).
Facing commercial deployment in systems where safety and security are paramount, new properties of ML algorithms become important: in particular, their ability to handle uncertainties in the operational environment and to maintain performance when faced with data from slightly different distributions than those they were trained on. Several investigations have revealed that ML algorithms exhibit low robustness against intentionally crafted perturbations or high invariance for distinct inputs. From a security standpoint, this means an attacker can craft inputs that look similar but cause ML algorithms to misbehave, or find distinct inputs that yield the same result. From a safety standpoint, this means that ML algorithms are not robust against perturbations close to an input or are inflexible to changes in the operational environment. This tutorial gives an overview of the unsolved security challenges for machine learning algorithms, focusing on the design of malicious inputs (also called adversarial inputs).
1. Introduction to machine learning security (data poisoning, inference attacks, membership attacks, reverse-engineering the model, etc.).
2. Ways to generate adversarial inputs.
3. Ways to protect against adversarial inputs.
4. Future steps and security concerns.
For the practical examples, the Google Colab engine will be used in the following way:
1. Load an already trained model.
2. Generate adversarial examples with different attacks.
3. Evaluate the results and interpret the adversarial examples (e.g. display the adversarial images and the normal ones).
4. Investigate a number of defense approaches.
Using the Colab engine requires no prerequisites, and the examples can be run easily in the browser.
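As a rough sketch of the attack-generation step above, a gradient-sign (FGSM-style) perturbation on a toy linear classifier can be written in a few lines of plain Python; the model and numbers are illustrative, not those used in the Colab notebooks.

```python
# Toy gradient-sign attack on a linear classifier sign(w.x + b): the
# loss gradient with respect to x is proportional to w, so perturbing
# each coordinate by eps against the true label's margin flips
# low-margin predictions while keeping the input visually "close".

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def fgsm(w, x, y_true, eps):
    # x_adv = x - eps * y_true * sign(w): the FGSM step for a linear model
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * y_true * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.5, -0.25, 0.75], 0.0
x, y = [0.2, 0.1, 0.1], 1            # correctly classified, small margin
assert predict(w, b, x) == 1
x_adv = fgsm(w, x, y, eps=0.2)       # per-coordinate change of only 0.2
assert predict(w, b, x_adv) == -1    # the prediction flips
```

For deep networks the same idea applies with the gradient obtained by backpropagation, which is what attack libraries used in the hands-on session automate.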