Communications in Cryptology (IACR CiC)


13 results sorted by publication date
Franklin Harding, Jiayu Xu
Published 2024-10-07

Blind signature schemes enable a user to obtain a digital signature on a message from a signer without revealing the message itself. Among the most fundamental examples of such a scheme is blind Schnorr, but recent results show that it does not satisfy the standard notion of security against malicious users, One-More Unforgeability (OMUF), as it is vulnerable to the ROS attack. However, blind Schnorr does satisfy the weaker notion of sequential OMUF, in which only one signing session is open at a time, in the Algebraic Group Model (AGM) + Random Oracle Model (ROM), assuming the hardness of the Discrete Logarithm (DL) problem.
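
To make the protocol concrete, here is a minimal and deliberately insecure Python sketch of one blind Schnorr signing session, using a toy Schnorr group and SHA-256 as a stand-in for the random oracle; all parameters are illustrative assumptions and are not taken from the paper.

    import hashlib
    import secrets

    p, q, g = 2039, 1019, 4            # toy subgroup of prime order q in Z_p^* (insecure sizes)

    def H(R: int, m: bytes) -> int:    # challenge hash c' = H(R' || m) mod q
        return int.from_bytes(hashlib.sha256(str(R).encode() + m).digest(), "big") % q

    # Signer key pair: secret x, public X = g^x.
    x = secrets.randbelow(q)
    X = pow(g, x, p)

    # --- one signing session ---
    # Signer: commit.
    r = secrets.randbelow(q)
    R = pow(g, r, p)

    # User: blind the commitment and derive the blinded challenge.
    m = b"message to be signed blindly"
    alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
    R_prime = (R * pow(g, alpha, p) * pow(X, beta, p)) % p
    c_prime = H(R_prime, m)            # challenge that will be verified
    c = (c_prime + beta) % q           # blinded challenge sent to the signer

    # Signer: respond (sees only c, never m or R').
    s = (r + c * x) % q

    # User: unblind to obtain the final signature (R', s').
    s_prime = (s + alpha) % q

    # Verification: g^{s'} == R' * X^{H(R' || m)}.
    assert pow(g, s_prime, p) == (R_prime * pow(X, c_prime, p)) % p
    print("blind Schnorr signature verifies")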

This paper serves as a first step towards characterizing the security of blind Schnorr in the limited concurrency setting. Specifically, we show that blind Schnorr satisfies OMUF when at most two signing sessions can be concurrently open (in the AGM+ROM, assuming DL). Our argument suggests that it is plausible that blind Schnorr satisfies OMUF for up to polylogarithmically many concurrent signing sessions. Our security proof involves interesting techniques from linear algebra and combinatorics.

Ward Beullens, Pierre Briaud, Morten Øygarden
Published 2024-10-07

Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures.
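
As a point of reference for the problems just named, the following toy Python sketch sets up a miniature R-SDP instance, in which the error vector is restricted to a small multiplicative subgroup $E$ of $\mathbb{F}_q^*$, and solves it by brute force; all parameters are hypothetical, far below cryptographic size, and bear no relation to CROSS's actual parameters.

    import itertools
    import random

    q, n, k = 13, 6, 3                 # toy field size, code length, code dimension
    z = 3                              # element of F_q^*; restricted set E = <z> = {1, 3, 9}
    E = sorted({pow(z, i, q) for i in range(q - 1)})

    # Random parity-check matrix H ((n-k) x n) and restricted error e in E^n.
    H = [[random.randrange(q) for _ in range(n)] for _ in range(n - k)]
    e = [random.choice(E) for _ in range(n)]
    syndrome = [sum(H[i][j] * e[j] for j in range(n)) % q for i in range(n - k)]

    # Brute-force solver over E^n -- only feasible because the toy instance is tiny.
    for cand in itertools.product(E, repeat=n):
        if all(sum(H[i][j] * cand[j] for j in range(n)) % q == syndrome[i]
               for i in range(n - k)):
            print("solution:", cand, "planted:", tuple(e), "match:", list(cand) == e)
            break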

This work improves our understanding of the security of both problems. Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.

Balthazar Bauer, Pooya Farshim, Patrick Harasser, Markulf Kohlweiss
Published 2024-10-07

The generic-group model (GGM) and the algebraic-group model (AGM) have been exceptionally successful in proving the security of many classical and modern cryptosystems. These models, however, come with standard-model uninstantiability results, raising the question of whether the schemes analyzed under them can be based on firmer standard-model footing.

We formulate the uber-knowledge (UK) assumption, a standard-model assumption that naturally extends the uber-assumption family to knowledge-type problems. We justify the soundness of UK in both the bilinear GGM and the bilinear AGM. Along the way we extend these models to account for hashing into groups, an adversarial capability that is available in many concrete groups; in contrast to standard assumptions, hashing may affect the validity of knowledge assumptions. These results, in turn, enable a modular approach to security in the GGM and the AGM.

As example applications, we use the UK assumption to prove knowledge soundness of Groth's zero-knowledge SNARK (EUROCRYPT 2016) and of KZG polynomial commitments (ASIACRYPT 2010) in the standard model, where for the former we reuse the existing proof in the AGM without hashing.

Soichiro Kobayashi, Rei Ueno, Yosuke Todo, Naofumi Homma
Published 2024-10-07

This paper presents a new side-channel attack (SCA) on unrolled implementations of stream ciphers, with a particular focus on Trivium. Most conventional SCAs concentrate on leakage from the first few rounds, prior to sufficient diffusion of the secret key and initialization vector (IV). However, unrolled hardware implementations have recently become common and practical, as they achieve higher throughput and energy efficiency than round-based hardware. The applicability of conventional SCAs to such unrolled hardware is unclear, because leakage from the first rounds is hardly observable in unrolled hardware. In this paper, focusing on Trivium, we propose a novel SCA on unrolled stream cipher hardware that exploits leakage from rounds later than 80, whereas existing SCAs exploit intermediate values from rounds earlier than 80. We first analyze the algebraic equations representing the intermediate values of these rounds and present the recursive restricted linear decomposition (RRLD) strategy. This approach uses correlation power analysis (CPA) to estimate the intermediate values of later rounds. Furthermore, we present a chosen-IV strategy for a successful key recovery through linearization. We experimentally demonstrate that the proposed SCA achieves key recovery of a 288-round unrolled Trivium hardware implementation using 360,000 traces. Finally, we evaluate the performance of unrolled Trivium hardware implementations to clarify the trade-off between performance and SCA (in)security. The proposed SCA requires 34.5 M traces for key recovery on a 384-round unrolled Trivium implementation and is not applicable to 576-round unrolled hardware.
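
For readers unfamiliar with the cipher's internals, the following Python model of Trivium's state update (following the public specification: 80-bit key and IV, 288-bit state, $4 \cdot 288$ initialization rounds) makes explicit the per-round intermediate values $t_1, t_2, t_3$ that an unrolled implementation computes within a single clock cycle; it is a reference model only, not the attacked hardware.

    def trivium_keystream(key_bits, iv_bits, n_bits):
        # State s[0..287]; the specification is 1-based, so s[i-1] here is spec s_i.
        s = key_bits + [0] * 13 + iv_bits + [0] * 4 + [0] * 108 + [1, 1, 1]
        assert len(s) == 288
        out = []
        for rnd in range(4 * 288 + n_bits):
            t1 = s[65] ^ s[92]
            t2 = s[161] ^ s[176]
            t3 = s[242] ^ s[287]
            if rnd >= 4 * 288:                 # keystream bit (only after initialization)
                out.append(t1 ^ t2 ^ t3)
            t1 ^= (s[90] & s[91]) ^ s[170]
            t2 ^= (s[174] & s[175]) ^ s[263]
            t3 ^= (s[285] & s[286]) ^ s[68]
            # Shift the three registers and insert the feedback bits.
            s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
        return out

    # Example: all-zero key and IV, first 8 keystream bits.
    print(trivium_keystream([0] * 80, [0] * 80, 8))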

Dinal Kamel, François-Xavier Standaert, Olivier Bronchain
Published 2024-10-07

Raccoon is a lattice-based scheme submitted to the NIST 2022 call for additional post-quantum signatures. One of its main selling points is that its design is intrinsically easy to mask against side-channel attacks. So far, Raccoon's physical security guarantees were only stated in the abstract probing model. In this paper, we discuss how these probing security results translate into guarantees in more realistic leakage models. We also highlight that this translation differs from what is usually observed (e.g., in symmetric cryptography), due to the algebraic structure of Raccoon's operations. For this purpose, we perform an in-depth information-theoretic evaluation of Raccoon's most innovative part, namely the AddRepNoise function, which allows generating its arithmetic shares on the fly. Our results are twofold. First, we show that the resulting shares do not enforce a statistical security order (i.e., the need for the side-channel adversary to estimate higher-order moments of the leakage distribution), as usually expected when masking. Second, we observe that the first-order leakage on the (large) random coefficients manipulated by Raccoon cannot be efficiently turned into leakage on the (smaller) coefficients of its long-term secret. Concretely, our information-theoretic evaluations for relevant leakage functions also suggest that Raccoon's masked implementations can ensure high security with fewer shares than suggested by a conservative analysis in the probing model.
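
The following toy Python sketch illustrates the general idea of on-the-fly noise generation over arithmetic shares in the style of AddRepNoise: the total noise is accumulated as $d \cdot \mathrm{rep}$ small uniform contributions, one per share and repetition, with a sharing refresh in between. The parameters and the exact refresh procedure are illustrative assumptions, not Raccoon's specification.

    import secrets

    q, d, rep, u = 2**20, 4, 3, 2**6   # toy modulus, share count, repetitions, noise range

    def refresh(shares):
        # Re-randomize an arithmetic sharing without changing its sum mod q.
        r = [secrets.randbelow(q) for _ in range(len(shares) - 1)]
        mask = r + [(-sum(r)) % q]
        return [(s + m) % q for s, m in zip(shares, mask)]

    def add_rep_noise(shares):
        for _ in range(rep):
            # Each share independently absorbs a fresh small uniform error.
            shares = [(s + secrets.randbelow(u)) % q for s in shares]
            shares = refresh(shares)
        return shares

    secret = 12345
    shares = [secrets.randbelow(q) for _ in range(d - 1)]
    shares.append((secret - sum(shares)) % q)

    noisy = add_rep_noise(shares)
    noise = (sum(noisy) - secret) % q
    print("total accumulated noise:", noise, "(sum of", d * rep, "small uniform samples)")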

Jonathan Komada Eriksen, Antonin Leroux
Published 2024-10-07

This work introduces several algorithms related to the computation of orientations in endomorphism rings of supersingular elliptic curves. This problem is at the heart of several results regarding the security of oriented curves in isogeny-based cryptography. Under the Deuring correspondence, it can be expressed purely in terms of quaternions and boils down to representing integers by ternary quadratic forms. Our main contribution is to show that there exist efficient algorithms to solve this problem for quadratic orders of discriminant $n$ up to $O(p^{4/3})$. Our approach improves upon previous results by increasing this bound from $O(p)$ to $O(p^{4/3})$ and removing some heuristics. We introduce several variants of our new algorithm and provide a careful analysis of their asymptotic running time (without heuristics where possible). The best proven asymptotic complexity of one of our variants is $O(n^{3/4}/p)$ on average. The best heuristic variant has a complexity of $O(p^{1/3})$ for big enough $n$. We then introduce several results regarding the computation of ideals between oriented orders. The first application of this is a simplification of the known reduction from vectorization to computing the endomorphism ring, removing the assumption on the factorization of the discriminant. As a second application, we relate the problem of computing fixed-degree isogenies between supersingular curves to the problem of computing orientations in endomorphism rings, and we show that for a large range of degrees $d$, our new algorithms improve on the state of the art; in important special cases, the range of degrees $d$ for which there exists a polynomial-time algorithm is increased. In the most special case we consider, when both curves are oriented by a small-degree endomorphism, we show heuristically that our techniques allow the computation of isogenies of any degree, assuming they exist.
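
To illustrate the subproblem mentioned above, representing an integer by a ternary quadratic form, here is a brute-force Python sketch on a toy positive-definite form; the form, the target, and the search box are arbitrary illustrative choices, and the paper's algorithms solve structured instances of this task far more cleverly than exhaustive search.

    from itertools import product
    from math import isqrt

    def represent(Q, target):
        # Find (x, y, z) with x^T Q x == target by exhaustive search over a small
        # box (the box size is a heuristic that happens to suffice for this toy form).
        bound = isqrt(target) + 1
        for x, y, z in product(range(-bound, bound + 1), repeat=3):
            v = (Q[0][0]*x*x + Q[1][1]*y*y + Q[2][2]*z*z
                 + 2*Q[0][1]*x*y + 2*Q[0][2]*x*z + 2*Q[1][2]*y*z)
            if v == target:
                return x, y, z
        return None

    Q = [[1, 0, 0], [0, 2, 1], [0, 1, 3]]   # Gram matrix of a toy positive-definite form
    print(represent(Q, 101))                # e.g. (3, -2, 6): 9 + 8 + 108 - 24 = 101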

Gil Segev, Liat Shapira
Published 2024-07-08

In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability.

Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce. Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.
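
One natural way to read the second-moment assumptions mentioned above (stated informally here, with notation and constants chosen for illustration rather than taken from the paper) is that for every algorithm $A$ with running time $T_A$ attacking the discrete logarithm problem in a group of prime order $p$, \[ \Pr\bigl[ A(g, g^x) = x \bigr] \;\le\; \frac{\mathbb{E}\bigl[ T_A^2 \bigr]}{p}, \] which collapses to the familiar generic-group bound of roughly $T^2/p$ when the running time is deterministic; the second-moment co-Diffie-Hellman assumption used for BLS is analogous.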

Vincent Hwang
Published 2024-07-08

We survey various mathematical tools used in software implementations multiplying polynomials in \[ \frac{\mathbb{Z}_q[x]}{\left\langle {x^n - \alpha x - \beta} \right\rangle}. \] In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with the instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2.

There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as Cooley–Tukey FFT, Good–Thomas FFT, Bruun's FFT, Rader's FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including localization, Schönhage's FFT, Nussbaumer's FFT, and coefficient ring switching; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, truncation, incomplete transformation, striding, and Toeplitz matrix-vector product. For vectorization, we survey the relations between homomorphisms and vector arithmetic.
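
As a concrete companion to the modular-arithmetic part of the survey, the following Python sketch implements textbook (unsigned) Montgomery and Barrett multiplication for Kyber's modulus $q = 3329$ with a 16-bit radix; the constants follow standard textbook definitions and are assumptions for illustration, not taken from any particular reference implementation discussed in the paper.

    q = 3329                         # Kyber's prime modulus
    R = 2 ** 16                      # Montgomery radix (one 16-bit word)
    QPRIME = (-pow(q, -1, R)) % R    # -q^{-1} mod R

    def montgomery_reduce(a):
        # Given 0 <= a < q*R, return a * R^{-1} mod q, in [0, q).
        m = (a * QPRIME) % R
        t = (a + m * q) // R         # exact division: a + m*q is a multiple of R
        return t - q if t >= q else t

    def montgomery_mul(x, y):
        # x * y mod q for x, y in Montgomery form (i.e. x = a*R mod q).
        return montgomery_reduce(x * y)

    def barrett_reduce(a):
        # a mod q for 0 <= a < q^2, using one precomputed reciprocal.
        k = 32
        mu = (1 << k) // q           # floor(2^k / q)
        t = a - ((a * mu) >> k) * q
        return t - q if t >= q else t

    # Quick check: multiply 1234 * 5678 mod q both ways.
    to_mont = lambda a: (a * R) % q
    x, y = 1234, 5678
    assert montgomery_reduce(montgomery_mul(to_mont(x), to_mont(y))) == (x * y) % q
    assert barrett_reduce(x * y) == (x * y) % q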

We then go through several case studies: We compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.

Jules Maire, Damien Vergnaud
Published 2024-04-09

We present new secure multi-party computation protocols for linear algebra over a finite field, which improve the state-of-the-art in terms of security. We look at the case of unconditional security with perfect correctness, i.e., information-theoretic security without errors. We notably propose an expected constant-round protocol for solving systems of $m$ linear equations in $n$ variables over $\mathbb{F}_q$ with expected complexity $O(k n^{2.5} + k m)$ (where complexity is measured in terms of the number of secure multiplications required) with $k > m(m+n)+1$. The previous proposals were not error-free: known protocols can indeed fail and thus reveal information with probability $\Omega(\mathrm{poly}(m)/q)$. Our protocols are simple and rely on existing computer-algebra techniques, notably the Preparata–Sarwate algorithm, a simple but poorly known “baby-step giant-step” method for computing the characteristic polynomial of a matrix, and techniques due to Mulmuley for error-free linear algebra in positive characteristic.
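
As a plaintext (non-MPC) illustration of the baby-step giant-step characteristic-polynomial method credited above to Preparata and Sarwate, the following Python sketch computes the power sums $\mathrm{tr}(A^k)$ from $O(\sqrt{n})$ matrix products plus $n$ cheap trace computations, then recovers the coefficients with Newton's identities over $\mathbb{F}_p$ for a prime $p > n$; the field size and test matrix are toy choices.

    from math import isqrt

    p = 10007                                   # toy prime, p > n

    def mat_mul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
                for i in range(n)]

    def trace_of_product(A, B):                 # tr(A*B) with only n^2 multiplications
        n = len(A)
        return sum(A[i][j] * B[j][i] for i in range(n) for j in range(n)) % p

    def char_poly(A):
        n = len(A)
        m = isqrt(n) + 1
        I = [[int(i == j) for j in range(n)] for i in range(n)]
        baby = [I]                              # A^0, A^1, ..., A^m
        for _ in range(m):
            baby.append(mat_mul(baby[-1], A))
        giant = [I]                             # A^0, A^m, A^{2m}, ...
        for _ in range(m):
            giant.append(mat_mul(giant[-1], baby[m]))
        # Power sums p_k = tr(A^k), k = 1..n, each from one cheap trace of a product.
        psum = [0] * (n + 1)
        for k in range(1, n + 1):
            psum[k] = trace_of_product(giant[k // m], baby[k % m])
        # Newton's identities: e_k = (1/k) * sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i.
        e = [1] + [0] * n
        for k in range(1, n + 1):
            acc = sum((-1) ** (i - 1) * e[k - i] * psum[i] for i in range(1, k + 1))
            e[k] = acc * pow(k, -1, p) % p
        # det(xI - A) = sum_k (-1)^k e_k x^{n-k}; coefficients from x^n down to x^0.
        return [(-1) ** k * e[k] % p for k in range(n + 1)]

    A = [[2, 1, 0], [0, 3, 1], [1, 0, 4]]       # char poly x^3 - 9x^2 + 26x - 25
    print(char_poly(A))                         # [1, 9998, 26, 9982] mod 10007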

Charles Bouillaguet, Julia Sauvage
Published 2024-04-09

Biscuit is a recent multivariate signature scheme based on the MPC-in-the-Head paradigm. It has been submitted to the NIST competition for additional signature schemes. Signatures are derived from a zero-knowledge proof of knowledge of the solution of a structured polynomial system. This extra structure enables efficient proofs and compact signatures. This short note demonstrates that it also makes these polynomial systems easier to solve than random ones. As a consequence, the original parameters of Biscuit failed to meet the required security levels and had to be upgraded.

Décio Luiz Gazzoni Filho, Guilherme Brandão, Julio López
Published 2024-04-09

Efficient polynomial multiplication routines are critical to the performance of lattice-based post-quantum cryptography (PQC). As PQC standards only recently started to emerge, CPUs still lack specialized instructions to accelerate such routines. Meanwhile, deep learning has grown immeasurably in importance. Its workloads call for teraflops of processing power for linear algebra operations, mainly matrix multiplication. Computer architects have responded by introducing ISA extensions, coprocessors and special-purpose cores to accelerate such operations. In particular, Apple ships an undocumented matrix-multiplication coprocessor, AMX, in hundreds of millions of mobile phones, tablets and personal computers. Our work repurposes AMX to implement polynomial multiplication and applies it to the NTRU cryptosystem, setting new speed records on the Apple M1 and M3 systems-on-chip (SoCs): polynomial multiplication, key generation, encapsulation and decapsulation are sped up by $1.54$–$3.07\times$, $1.08$–$1.33\times$, $1.11$–$1.50\times$ and $1.20$–$1.98\times$, respectively, over the previous state-of-the-art.
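
The underlying observation, that multiplication by a fixed polynomial in $\mathbb{Z}_q[x]/\langle x^n - 1 \rangle$ is a matrix-vector product with a circulant matrix, can be illustrated in a few lines of Python with NumPy; the parameters below are toy values, not NTRU's, and the sketch says nothing about the paper's actual AMX scheduling.

    import numpy as np

    q, n = 2048, 8

    def circulant(a):
        # n x n matrix C with C @ b == a * b mod (x^n - 1), coefficients mod q.
        return np.array([[a[(i - j) % n] for j in range(n)] for i in range(n)]) % q

    rng = np.random.default_rng(1)
    a = rng.integers(0, q, n)
    b = rng.integers(0, q, n)

    # Matrix-vector product ...
    prod_matrix = circulant(a) @ b % q

    # ... agrees with schoolbook multiplication reduced modulo x^n - 1.
    prod_school = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            prod_school[(i + j) % n] = (prod_school[(i + j) % n] + a[i] * b[j]) % q

    assert np.array_equal(prod_matrix, prod_school)
    print(prod_school)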

Subhadeep Banik, Andrea Caforio, Serge Vaudenay
Published 2024-04-09

The LowMC family of block ciphers was proposed by Albrecht et al. at Eurocrypt 2015, specifically targeting adoption in FHE and MPC applications due to its low multiplicative complexity. The construction uses a 3-bit quadratic S-box as the sole non-linear transformation in the algorithm. In contrast, both the linear layer and round key generation are achieved through multiplications by full-rank matrices over GF(2). The cipher is instantiable using a diverse set of default configurations, some of which have partial non-linear layers, i.e., layers in which the S-boxes are not applied over the entire internal state of the cipher.
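
For reference, the following Python sketch spells out the 3-bit quadratic S-box from the original LowMC paper, $S(a,b,c) = (a \oplus bc,\ a \oplus b \oplus ac,\ a \oplus b \oplus c \oplus ab)$, together with a partial non-linear layer on a toy state; the state width and S-box count below are illustrative, not a default LowMC configuration.

    def sbox(a, b, c):
        # LowMC 3-bit quadratic S-box over GF(2).
        return a ^ (b & c), a ^ b ^ (a & c), a ^ b ^ c ^ (a & b)

    def partial_sbox_layer(state_bits, n_sboxes):
        # Apply S-boxes to the first 3*n_sboxes bits; the remaining bits pass through.
        out = list(state_bits)
        for i in range(n_sboxes):
            out[3*i:3*i+3] = sbox(*state_bits[3*i:3*i+3])
        return out

    # Toy 12-bit state with a partial layer of two S-boxes (bits 6..11 untouched).
    state = [1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0]
    print(partial_sbox_layer(state, 2))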

The significance of cryptanalysing LowMC was elevated by its inclusion in the NIST PQC digital signature scheme PICNIC, in which a successful key recovery using a single plaintext/ciphertext pair is akin to retrieving the secret signing key. The current state-of-the-art attack in this setting is due to Dinur at Eurocrypt 2021, in which a novel way of enumerating the roots of a Boolean system of equations is morphed into a key-recovery procedure that undercuts an ordinary exhaustive search in terms of time complexity for variants of the cipher with up to five rounds.

In this work, we demonstrate that this technique can efficiently be enriched with a specific linearization strategy that reduces the algebraic degree of the non-linear layer as put forward by Banik et al. at IACR ToSC 2020(4). This amalgamation yields new attacks on certain instances of LowMC up to seven rounds.

Emmanuela Orsini, Riccardo Zanotto
Published 2024-04-09

In this work we study algebraic and generic models for group actions, and extend them to the universal composability (UC) framework of Canetti (FOCS 2001). We revisit the constructions of Duman et al. (PKC 2023), integrating the type-safe model by Zhandry (Crypto 2022), adapted to the group-action setting, and formally define an algebraic action model (AAM). This model restricts the power of the adversary in a similar fashion to the algebraic group model (AGM). By imposing algebraic behaviour on the adversary and environment of the UC framework, we construct the UC-AAM. Finally, we instantiate UC-AAM with isogeny-based assumptions, in particular the CSIDH action with twists, obtaining the explicit isogeny model, UC-EI; we observe that, under certain assumptions, this model is "closer" to standard UC than the UC-AGM, even though there still exists an important separation. We demonstrate the utility of our definitions by proving UC-EI security for the passively secure oblivious transfer protocol described by Lai et al. (Eurocrypt 2021), hence providing the first concretely efficient two-message isogeny-based OT protocol in the random oracle model against malicious adversaries.