DIGITAL CISPA Summer School 2021
August 30 – September 3, 2021
Online
Free of charge
Coding techniques and related assumptions are at the core of several exciting recent developments in cryptography. In particular, code-based assumptions have attracted attention as they offer conjectured post-quantum security and allow for very efficient and versatile implementations.
During this summer school, leaders of the field will present and discuss new developments concerning constructions of advanced cryptographic primitives based on code-based assumptions, as well as novel cryptanalytic methods to study these assumptions and understand the security they offer.
“On the Cryptographic Hardness of Random Linear Codes”
In this talk, we will present some basic background on the cryptographic hardness of decoding random linear codes. We will highlight some of the features of this problem that are attractive from a cryptographic point of view, and, if time permits, explore the interesting relation between this assumption and the arithmetic cryptography framework presented by Avron, Brzuska, and the speaker.
“Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing”
We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2-1/\mathrm{poly}(n)$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst-case problem. We note that LPN with noise $1/2-1/\mathrm{poly}(n)$ already implies symmetric cryptography.
Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP), which takes as input a generating matrix for a binary linear code in $m$ dimensions and of rank $n$, together with a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx \log^2(n)/n$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/\mathrm{poly}(n)$.
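For readers who want a concrete picture of the objects involved, here is a minimal Python sketch (our own illustration, not part of the abstract) of sampling an $(n,m,w)$-NCP instance and LPN samples; the function names and parameter choices are hypothetical.

import numpy as np

def sample_ncp_instance(n, m, w, rng=None):
    """Return (G, t): a random n x m generating matrix over GF(2) (full rank with
    high probability for m >> n) and a target t at Hamming distance at most w
    from the code spanned by the rows of G."""
    rng = rng or np.random.default_rng()
    G = rng.integers(0, 2, size=(n, m))          # random generating matrix
    x = rng.integers(0, 2, size=n)               # hidden message
    e = np.zeros(m, dtype=int)
    e[rng.choice(m, size=w, replace=False)] = 1  # sparse error of weight exactly w
    t = (x @ G + e) % 2                          # target vector close to the code
    return G, t

def sample_lpn(secret, num_samples, noise_rate, rng=None):
    """LPN samples (a_i, <a_i, s> + e_i mod 2) with Bernoulli(noise_rate) noise;
    the reduction described above works with noise_rate = 1/2 - 1/poly(n)."""
    rng = rng or np.random.default_rng()
    n = len(secret)
    A = rng.integers(0, 2, size=(num_samples, n))
    e = (rng.random(num_samples) < noise_rate).astype(int)
    b = (A @ secret + e) % 2
    return A, b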
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $\mathrm{SearchBPP}^{SZK}$ (i.e., reducible to a problem that has a statistical zero-knowledge protocol), implying that it is unlikely to be $NP$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision-resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $BPP^{SZK}$).
Joint work with Vadim Lyubashevsky, Vinod Vaikuntanathan and Daniel Wichs.
“Algebraic cryptanalysis of the MinRank, RSD and RSL problems and applications in Post-Quantum Cryptography”
Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. This fundamental problem is called the MinRank problem, and it is ubiquitous in rank-metric (or even Hamming-metric) code-based cryptography as well as in multivariate cryptography. For structured instances arising in the former, security relies on a more specific problem, namely the Rank Syndrome Decoding (RSD) problem. There is also a generalization called the Rank Support Learning (RSL) problem, where the attacker has access to several syndromes corresponding to errors with the same support. These problems have various applications in code-based and multivariate cryptography (KEM and signature schemes), and a precise understanding of the complexity of solving them can help designers choose secure parameters.
In this talk, I will present the three problems and their relations to cryptographic schemes, their algebraic modeling, and recent improvements in the understanding of the complexity of solving these systems using algebraic techniques such as Gröbner basis computations.
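For reference, a common textbook formulation of the two central problems (our own summary; the talk's exact parameter conventions may differ) is the following. MinRank: given matrices $M_0, M_1, \dots, M_k \in \mathbb{F}_q^{r \times c}$ and a target rank $t$, find $(x_1, \dots, x_k) \in \mathbb{F}_q^{k}$ such that
\[ \operatorname{rank}\Big(M_0 + \sum_{i=1}^{k} x_i M_i\Big) \le t. \]
Rank Syndrome Decoding: given a parity-check matrix $H \in \mathbb{F}_{q^m}^{(n-k) \times n}$ and a syndrome $s \in \mathbb{F}_{q^m}^{n-k}$, find $e \in \mathbb{F}_{q^m}^{n}$ with $He^{\top} = s^{\top}$ and rank weight at most $t$, where the rank weight of $e$ is the rank of the $m \times n$ matrix over $\mathbb{F}_q$ obtained by expanding its coordinates over a basis of $\mathbb{F}_{q^m}/\mathbb{F}_q$.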
“Information Set Decoding - Algorithms and Practical Results”
We survey the algorithmic progress on information set decoding within the last decade, and discuss its implications for current NIST candidates like McEliece, BIKE and HQC. Moreover, we show how these techniques transfer to LWE with small max-norm keys, as used in NTRU-type schemes.
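To make the starting point of this line of work concrete, here is a minimal Python sketch of plain (Prange-style) information set decoding over GF(2). It is our own illustration of the basic idea, not an implementation from the talk, and all names are hypothetical.

import numpy as np

def prange_isd(H, s, w, max_iters=100000, rng=None):
    """Search for an error vector e of Hamming weight <= w with H e = s over GF(2)."""
    rng = rng or np.random.default_rng()
    r, n = H.shape                          # r = n - k redundant positions
    for _ in range(max_iters):
        perm = rng.permutation(n)           # guess: the error avoids the last n - r permuted positions
        A = (H[:, perm] % 2).astype(np.int8)
        b = (s % 2).astype(np.int8)
        ok = True
        for col in range(r):                # Gaussian elimination on the first r columns
            pivot = next((row for row in range(col, r) if A[row, col]), None)
            if pivot is None:               # selected columns not invertible; try a new permutation
                ok = False
                break
            A[[col, pivot]] = A[[pivot, col]]
            b[[col, pivot]] = b[[pivot, col]]
            for row in range(r):
                if row != col and A[row, col]:
                    A[row] ^= A[col]
                    b[row] ^= b[col]
        if ok and b.sum() <= w:             # candidate error supported on the r selected positions
            e = np.zeros(n, dtype=np.int8)
            e[perm[:r]] = b
            return e
    return None                             # nothing found within the iteration budget

# Example: parity checks of the [7,4] Hamming code, correcting a single error.
H = np.array([[1,0,1,0,1,0,1],
              [0,1,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=np.int8)
e_true = np.array([0,0,0,0,1,0,0], dtype=np.int8)
s = (H @ e_true) % 2
print(prange_isd(H, s, w=1))

Roughly speaking, the improved algorithms surveyed in the talk (Stern, Dumer, MMT, BJMM and their successors) refine the step where the error is assumed to avoid the information set entirely.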
“Low-Complexity Cryptography and Error-Correcting Codes”
The question of minimizing different complexity measures of cryptographic primitives is of both theoretical and practical interest. The talk will survey this line of work, emphasising connections with error-correcting codes and pointing out some remaining challenges.
“How to generate correlated pseudorandomness from (variants of) LPN - Part I” (joint abstract with Lisa Kohl's contribution)
“How to generate correlated pseudorandomness from (variants of) LPN - Part II”
Secure multiparty computation (MPC) often relies on sources of correlated randomness for better efficiency and simplicity. This is particularly useful for MPC with no honest majority, where input-independent correlated randomness enables a lightweight "non-cryptographic" online phase once the inputs are known. However, since the amount of correlated randomness typically scales with the circuit size of the function being computed, securely generating correlated randomness forms an efficiency bottleneck, involving a large amount of communication and storage.
In the first part of this talk, Geoffroy will introduce the concept of pseudorandom correlation generators (PCGs). PCGs allow two or more parties to securely generate long sources of useful correlated randomness via a local expansion of correlated short seeds and no interaction. PCGs enable MPC with silent preprocessing, where a small amount of interaction used for securely sampling the seeds is followed by silent local generation of correlated pseudorandomness. Further, Geoffroy will present PCGs for useful correlations, including oblivious transfer, based on variants of the learning parity with noise (LPN) assumption.
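As a hedged point of reference (our own paraphrase of the standard notion, not a verbatim definition from the talk): a PCG for a target correlation $\mathcal{C}$ is a pair of algorithms $(\mathsf{Gen}, \mathsf{Expand})$, where $\mathsf{Gen}(1^\lambda)$ outputs a pair of short seeds $(k_0, k_1)$ and each party locally computes a long output $R_\sigma = \mathsf{Expand}(\sigma, k_\sigma)$, such that (i) the joint outputs $(R_0, R_1)$ are computationally indistinguishable from a sample of $\mathcal{C}$, and (ii) given $k_\sigma$, the other output $R_{1-\sigma}$ still looks like a fresh sample from $\mathcal{C}$ conditioned on $R_\sigma$.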
In the second part of this talk, Lisa will show how to obtain efficient PCGs for oblivious linear evaluation correlations (and more) based on different flavors of the ring-LPN assumption. Further, Lisa will introduce the concept of pseudorandom correlation functions (PCFs) that offer the ability to securely generate virtually unbounded sources of correlated randomness using only local computation. Finally, Lisa will present efficient constructions of correlated pseudorandom functions for a broad class of correlations based on a variable-density variant of the LPN assumption.
Based on joint works with Elette Boyle, Niv Gilboa, Yuval Ishai, Srinivasan Raghuraman, Peter Rindal, and Peter Scholl.
“Hardness of decoding random linear codes and advanced cryptographic feasibility”
The hardness of decoding random linear codes has a long history in cryptography. Recently it has been used to help achieve advanced cryptographic objectives like indistinguishability obfuscation of general programs. In this talk, we will discuss how sparse error can be useful in surprising new ways.
“A brief survey on list decoding of classical block codes”
In this talk, we will be interested in codes for adversarial noise, where the channel can arbitrarily corrupt any subset of up to a certain number of symbols of the codeword. The goal will be to correct such errors and recover the original message/codeword efficiently. It is easy to see that, information-theoretically, we need to receive at least $k$ symbols correctly in order to recover the message, where $k$ is the length of the message (the size of the information set). Perhaps surprisingly, in a model called list decoding, recovery up to this information-theoretic limit becomes possible. In the talk, I will survey some recent progress on list decoding.
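To put the limit mentioned above in standard terms (a hedged summary of well-known results, not specific claims from the talk): for a code of rate $R = k/n$ and relative distance $\delta$, unique decoding is limited to an error fraction of
\[ \frac{\delta}{2} \lesssim \frac{1-R}{2}, \]
whereas over sufficiently large alphabets there exist codes of rate $R$ (e.g. folded Reed–Solomon codes, decodable in polynomial time) that are list decodable from an error fraction approaching $1 - R$, i.e. essentially up to the information-theoretic limit of receiving only about $Rn = k$ uncorrupted symbols.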
“An Algorithmic Reduction Theory for Binary Codes: LLL and more” (joint work with Thomas Debris-Alazard and Wessel van Woerden)
Lattice reduction is the task of finding a basis of short and somewhat orthogonal vectors of a given lattice. In 1982, Lenstra, Lenstra and Lovász proposed a polynomial-time algorithm for this task, with an application to factoring rational polynomials. Since then, the LLL algorithm has found countless applications in algorithmic number theory and in cryptanalysis.
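For context, a standard statement of what LLL computes (our own reminder of the classical lattice definition, not the binary-code analogue developed in the talk): a basis $b_1, \dots, b_n$ with Gram–Schmidt vectors $b_1^*, \dots, b_n^*$ and coefficients $\mu_{i,j} = \langle b_i, b_j^* \rangle / \|b_j^*\|^2$ is LLL-reduced with parameter $\delta \in (1/4, 1)$ if
\[ |\mu_{i,j}| \le \tfrac{1}{2} \ \text{ for all } j < i \quad \text{(size reduction)}, \qquad \delta\, \|b_{i-1}^*\|^2 \le \|b_i^*\|^2 + \mu_{i,i-1}^2 \|b_{i-1}^*\|^2 \quad \text{(Lovász condition)}. \]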
There are many analogies to be drawn between Euclidean lattices and linear codes over finite fields. In this work, we propose to extend the range of these analogies by considering the task of reducing the basis of a binary code. In fact, all it takes is to choose the adequate notion mimicking Euclidean orthogonality (namely orthopodality), after which all the required notions, arguments, and algorithms unfold before us in quasi-perfect analogy with lattices.
“Cryptanalysis of algebraic codes endowed with the Hamming metric”
Classical Goppa codes were the first family chosen by McEliece himself to instantiate his famous code-based encryption scheme, which led to the recent NIST proposal called Classic McEliece. Goppa codes benefit from an efficient decoding algorithm which corrects $t = \Theta(n/\log n)$ errors and never fails, permitting decryption with no failure.
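As a hedged refresher (the textbook version of the scheme, in simplified form; Classic McEliece itself uses an optimized variant), McEliece encryption works roughly as follows. KeyGen: choose a secret binary Goppa code of length $n$ and dimension $k$ with generator matrix $G$ and an efficient decoder correcting $t$ errors, a random invertible $S \in \mathbb{F}_2^{k \times k}$ and a permutation matrix $P \in \mathbb{F}_2^{n \times n}$; the public key is $G_{\mathrm{pub}} = SGP$ together with $t$. Encrypt: $c = m\,G_{\mathrm{pub}} + e$ for a uniformly random error $e$ of Hamming weight $t$. Decrypt: compute $cP^{-1} = (mS)G + eP^{-1}$, run the Goppa decoder to remove the weight-$t$ error and recover $mS$, then output $m = (mS)S^{-1}$.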
McEliece's proposal, however, requires significantly large key sizes to reach a good security level. In recent decades, several variants involving "more structured" algebraic codes have been proposed in order to address this key-size issue.
In this lecture, after briefly introducing the McEliece scheme and some famous families of algebraic codes, we will study various algebraic cryptanalysis techniques against these primitives in order to analyze which ones remain the most secure.
Application Deadline: August 19, 2021
Registration closed
Registration Deadline: August 30, 2021, at noon (CEST)
Questions? Contact our summer-school team at summer-school@cispa.de