Communications in Cryptology IACR CiC


17 results sorted by publication date
Sebastian Kolby, Divya Ravi, Sophia Yakoubov
Published 2024-10-07 PDF

YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number N of available parties, even when the adversary’s corruption threshold is linear in N (e.g. just under N/2). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. We demonstrate three different approaches: the first two (which we call YaOSO and YOSO-GLS) use two and three rounds of communication, respectively. Our third approach (which we call YOSO-LHSS) uses O(d) rounds, where d is the multiplicative depth of the circuit being evaluated; however, it can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHSS. Though YOSO-LHSS requires more rounds than our first two approaches, it may be more practical, since the zero-knowledge proofs it employs are more efficient to instantiate. As a contribution of independent interest, we introduce a verifiable state propagation UC functionality, which allows parties to send private messages that are verifiably derived in the “correct” way (according to the protocol in question) to anonymous receivers. This is a natural functionality to build YOSO protocols on top of.

Qian Guo, Erik Mårtensson, Adrian Åström
Published 2024-10-07 PDF

The Module Learning With Errors (MLWE)-based Key Encapsulation Mechanism (KEM) Kyber is NIST's new standard scheme for post-quantum encryption. As a building block, Kyber uses a Chosen Plaintext Attack (CPA)-secure Public Key Encryption (PKE) scheme, referred to as Kyber.CPAPKE. In this paper we study the robustness of Kyber.CPAPKE against key mismatch attacks.

We demonstrate that Kyber's security levels can be compromised by an attacker with access to only a few mismatch queries to Kyber.CPAPKE, by striking a balance between the parallelization level and the cost of lattice reduction for post-processing. This highlights the imperative need to strictly prohibit key reuse in Kyber.CPAPKE.

We further propose an adaptive method that enhances the parallel mismatch attacks initially proposed by Shao et al. at AsiaCCS 2024, thereby significantly reducing query complexity. This method combines the adaptive attack with post-processing via lattice reduction to retrieve the final secret key entries. Our method reduces query complexity by 14.6% for Kyber512 and by 7.5% for Kyber768/Kyber1024.
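
At its core, a mismatch attack recovers the secret one coefficient at a time from oracle answers about crafted ciphertexts. A toy plaintext sketch of just the query/recovery logic (the oracle here is a hypothetical abstraction, not the crafted-ciphertext oracle of the actual attack on Kyber.CPAPKE):

```python
# Toy model of a key-mismatch attack: each oracle query reveals whether the
# hidden secret coefficient is >= a chosen threshold, so binary search
# recovers it. Real attacks encode such comparisons into crafted ciphertexts;
# this sketch only illustrates the query/recovery logic.
import random

ETA = 2  # secret coefficients lie in [-ETA, ETA] (Kyber768/1024 use eta1 = 2)

def make_oracle(secret_coeff):
    """Hypothetical mismatch oracle: answers 'is the coefficient >= t?'."""
    return lambda t: secret_coeff >= t

def recover_coefficient(oracle):
    lo, hi = -ETA, ETA
    queries = 0
    while lo < hi:  # binary search over the candidate range
        mid = (lo + hi + 1) // 2
        queries += 1
        if oracle(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo, queries

secret = random.randint(-ETA, ETA)
recovered, q = recover_coefficient(make_oracle(secret))
assert recovered == secret
print(f"recovered {recovered} in {q} oracle queries")
```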

Furthermore, this approach has the potential to improve multi-value Plaintext-Checking (PC) oracle-based side-channel attacks and fault-injection attacks against Kyber itself.

André Schrottenloher, Marc Stevens
Published 2024-10-07 PDF

In this paper we study search problems that arise very often in cryptanalysis: nested search problems, where each search layer has known degrees of freedom and/or constraints. A generic quantum solution for such problems consists of nesting Grover's quantum search algorithm or the quantum amplitude amplification (QAA) of Brassard et al., obtaining up to a square-root speedup over classical algorithms. However, the analysis of nested Grover or QAA is complex and introduces technicalities that previous works handled in a case-by-case manner. Moreover, straightforward nesting of $l$ layers multiplies the complexity by a constant factor $(\pi/2)^l$.

In this paper, we aim to remedy both these issues and introduce a generic framework and tools to transform a classical nested search into a quantum procedure. It improves the state of the art in three ways: 1) our framework results in quantum procedures that are significantly simpler to describe and analyze; 2) it reduces the overhead factor from $(\pi/2)^l$ to $\sqrt{l}$; 3) it is simpler to apply and optimize, without requiring manual quantum analysis. We give generic complexity formulas and show that, for concrete instances, numerical optimization enables further improvements, narrowing the gap to an exact quadratic speedup even more.
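
To see how much the improvement matters, one can compare the two overhead factors quoted in the abstract directly; a quick sketch:

```python
# Compare the overhead of naive nesting, (pi/2)^l, with the framework's
# sqrt(l) overhead, for l nested search layers (both factors taken from
# the abstract above).
import math

for layers in (1, 2, 3, 5, 8):
    naive = (math.pi / 2) ** layers
    framework = math.sqrt(layers)
    print(f"l = {layers}: (pi/2)^l = {naive:7.2f}   sqrt(l) = {framework:.2f}   "
          f"ratio = {naive / framework:.1f}x")
```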

We demonstrate our framework by giving a tighter analysis of quantum attacks on reduced-round AES.

Haetham Al Aswad, Emmanuel Thomé, Cécile Pierrot
Published 2024-10-07 PDF

The Number Field Sieve and its variants are the best algorithms to solve the discrete logarithm problem in finite fields (except for the weak small characteristic case). The Factory variant accelerates the computation when several prime fields are targeted. This article adapts the Factory variant to non-prime finite fields of medium and large characteristic. A precomputation, solely dependent on an approximate finite field size and an extension degree, makes it possible to efficiently compute discrete logarithms in a constant proportion of the finite fields of the given approximate size and extension degree. We combine this idea with two other variants of NFS, namely the tower and special variants. This combination improves the asymptotic complexity. We also observe that combining our approach with the MNFS variant would be an unnecessary complication, as all the potential gain of MNFS is subsumed by our Factory variant anyway. Furthermore, we demonstrate how Chebotarev's density theorem allows us to compute the density of finite fields whose discrete logarithms can be computed with a given precomputation. Finally, we provide experimental data in order to assess the practical reach of our approach.

Soichiro Kobayashi, Rei Ueno, Yosuke Todo, Naofumi Homma
Published 2024-10-07 PDF

This paper presents a new side-channel attack (SCA) on unrolled implementations of stream ciphers, with a particular focus on Trivium. Most conventional SCAs predominantly concentrate on leakage from the first few rounds, prior to the sufficient diffusion of the secret key and initial vector (IV). However, unrolled hardware implementations, which achieve higher throughput and energy efficiency than round-based hardware, have recently become common and practical. The applicability of conventional SCAs to such unrolled hardware is unclear, because leakage from the first rounds of unrolled hardware is hardly observable. In this paper, focusing on Trivium, we propose a novel SCA on unrolled stream cipher hardware that can exploit leakage from rounds later than 80, whereas existing SCAs exploited intermediate values from rounds earlier than 80. We first analyze the algebraic equations representing the intermediate values of these rounds and present the recursive restricted linear decomposition (RRLD) strategy. This approach uses correlation power analysis (CPA) to estimate the intermediate values of later rounds. Furthermore, we present a chosen-IV strategy for a successful key recovery through linearization. We experimentally demonstrate that the proposed SCA achieves key recovery of a 288-round unrolled Trivium hardware implementation using 360,000 traces. Finally, we evaluate the performance of unrolled Trivium hardware implementations to clarify the trade-off between performance and SCA (in)security. The proposed SCA requires 34.5 M traces for key recovery of a 384-round unrolled Trivium implementation and is not applicable to 576-round unrolled hardware.
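
Correlation power analysis, the workhorse of the proposed attack, ranks guesses for an intermediate value by the Pearson correlation between a leakage prediction and the measured traces. A generic, simulated-data CPA sketch (textbook CPA on a hypothetical XOR target with a Hamming-weight model, not the paper's RRLD strategy):

```python
# Generic correlation power analysis (CPA): for every guess of an internal
# value, correlate a Hamming-weight leakage prediction with the measured
# traces; the correct guess yields the highest |correlation|. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
N_TRACES, TRUE_BYTE = 2000, 0x5A

inputs = rng.integers(0, 256, size=N_TRACES)
hw = np.array([bin(x).count("1") for x in range(256)])
# Simulated leakage: HW of (input XOR secret byte) plus Gaussian noise.
traces = hw[inputs ^ TRUE_BYTE] + rng.normal(0, 2.0, size=N_TRACES)

scores = np.empty(256)
for guess in range(256):
    pred = hw[inputs ^ guess]                    # leakage prediction
    scores[guess] = abs(np.corrcoef(pred, traces)[0, 1])

best = int(np.argmax(scores))
print(f"best guess = {best:#04x} (true = {TRUE_BYTE:#04x})")
```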

Avishek Majumder, Sayantan Mukherjee
Published 2024-10-07 PDF

Broadcast Encryption (BE) allows a sender to send an encrypted message to multiple receivers. In a typical broadcast encryption scenario, the broadcaster decides the set of users who can decrypt a particular ciphertext (denoted as the privileged set). Gritti et al. (IJIS'16) introduced a new primitive called Broadcast Encryption with Dealership (BrED), where the dealer decides the privileged set. A BrED scheme allows a dealer to buy content from the broadcaster and sell it to users. It provides better flexibility in managing a large user base. To date, quite a few different constructions of BrED schemes have been proposed by the research community.

We find that all existing BrED schemes are insecure under the existing security definitions. We demonstrate a concrete attack on all the existing schemes within the purview of the existing security definition. We also find that the security definitions proposed in the state-of-the-art BrED schemes do not capture the real world. We argue that the existing definitions are inadequate and propose a new security definition that models the real world more closely. Finally, we propose a new BrED construction and prove it secure in our newly proposed security model.

Damien Vidal, Claire Delaplace, Sorina Ionica
Published 2024-10-07 PDF

The Crossbred algorithm is currently the state-of-the-art method for solving overdetermined multivariate polynomial systems over $\mathbb{F}_2$. Since its publication in 2017, several record-breaking implementations have been proposed and demonstrate the power of this hybrid approach. Despite these practical results, the complexity of the algorithm and the choice of optimal parameters for it remain difficult open questions. In this paper, we derive a bivariate generating series for the potentially admissible parameters of the Crossbred algorithm.

Benjamin Hansen Mortensen, Mathias Karsrud Nordal, Martin Strand
Published 2024-10-07 PDF

Vessels can be recognised by their navigation radar due to the characteristics of the emitted radar signal. This is particularly useful if one wants to build situational awareness without revealing one's own presence. Most countries maintain databases of radar fingerprints but will not readily share these due to national security regulations. Sharing of such information will generally require some form of information exchange agreement.

However, all parties in a coalition benefit from correct identification. We use secure multiparty computation to match a radar signal measurement against secret databases and output plausible matches with their likelihoods. We also provide a demonstrator using MP-SPDZ.
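
At its core, such a matching task can be expressed in MP-SPDZ's Python-based circuit language. A hypothetical, heavily simplified sketch (secret database, squared-distance scoring, revealing only the best index — not the authors' actual demonstrator):

```python
# simplified_match.mpc -- hypothetical MP-SPDZ sketch (compile with
# ./compile.py simplified_match): party 0 holds a secret fingerprint
# database, party 1 a measurement; reveal only the index of the closest
# database entry under squared Euclidean distance.
DB_SIZE, FEATURES = 4, 3

db = Matrix(DB_SIZE, FEATURES, sint)
db.input_from(0)                      # party 0: secret radar-fingerprint DB
query = Array(FEATURES, sint)
query.input_from(1)                   # party 1: secret measurement

best_dist = MemValue(sint(2 ** 40))   # large sentinel distance
best_idx = MemValue(sint(-1))

@for_range(DB_SIZE)
def _(i):
    diffs = [db[i][j] - query[j] for j in range(FEATURES)]
    d = sum(x * x for x in diffs)     # squared distance, computed on shares
    better = d < best_dist.read()     # secret comparison bit
    best_dist.write(better.if_else(d, best_dist.read()))
    best_idx.write(better.if_else(sint(i), best_idx.read()))

print_ln("closest entry: %s", best_idx.read().reveal())
```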

Ruixiao Li, Hayato Yamana
Published 2024-10-07 PDF

To address security issues in cloud computing, fully homomorphic encryption (FHE) enables a third party to evaluate functions on ciphertexts without leaking information to the cloud server. The remaining problems of FHE include high computational costs and a limited set of arithmetic operations, as only additions and multiplications can be evaluated. Arbitrary functions can instead be evaluated using a precomputed lookup table (LUT), which is one solution to these problems. Previous studies proposed LUT-enabled computation methods 1) with bit-wise FHE and 2) with word-wise FHE. The performance of LUT-enabled computation with bit-wise FHE drops quickly when evaluating BigNum functions because of its complexity of O(s·2^d·m), where m represents the number of inputs, and d and s represent the bit lengths of the inputs and outputs, respectively. LUT-enabled computation with word-wise FHE, which handles a set of bits with one operation, has therefore also been proposed; however, previous studies are limited to evaluating multivariate functions of at most two inputs and cannot speed up the evaluation when the domain size of the integers exceeds 2N, where N is the number of elements packed into a single ciphertext. In this study, we propose a non-interactive model, in which no decryption is required, to evaluate arbitrary multivariate functions using homomorphic table lookup with word-wise FHE. The proposed LUT-enabled computation method 1) decreases the complexity to O(2^d·m/l), where l is the element size of FHE packing; 2) extends the input and output domain sizes to evaluate multivariate functions of more than two inputs; and 3) adopts a multidimensional table to enable multithreading and reduce latency. The experimental results demonstrate that evaluating a 10-bit two-input function and a 5-bit three-input function takes approximately 90.5 and 105.5 s with 16 threads, respectively. Our proposed method achieves 3.2x and 23.1x speedups for evaluating two-bit and three-bit three-input functions compared with naive LUT-enabled computation with bit-wise FHE.
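
Independent of the FHE machinery, the core idea is to tabulate a small-domain function once and answer every subsequent evaluation with a lookup; under FHE, the index computation and lookup are carried out on ciphertexts. A plaintext mock with hypothetical parameters:

```python
# Plaintext mock of LUT-enabled evaluation: tabulate an arbitrary m-input,
# d-bit function once, then answer every evaluation by a single table lookup.
# Under FHE the index computation and lookup operate on ciphertexts instead.
from itertools import product

D_BITS, M_INPUTS = 3, 2                      # 3-bit inputs, two of them

def f(x, y):                                 # arbitrary function to tabulate
    return (x * y + x) % (1 << D_BITS)

# Precompute: one entry per point of the (2^d)^m domain.
table = {pt: f(*pt) for pt in product(range(1 << D_BITS), repeat=M_INPUTS)}

print(len(table))        # 64 entries = (2^3)^2
print(table[(5, 3)])     # evaluation is now one lookup: (5*3 + 5) % 8 = 4
```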

Rustem Takhanov
Published 2024-10-07 PDF

Almost pairwise independence (API) is a quantitative property of a class of functions that is desirable in many cryptographic applications. This property is satisfied by Learning With Errors (LWE) mappings and by special Substitution-Permutation Networks (SPNs). API block ciphers are known to be resilient to differential and linear cryptanalysis attacks. Recently, the security of protocols against neural-network-based attacks has become a major trend in cryptographic studies. It is therefore relevant to study the hardness of learning a target function from an API class of functions by gradient-based methods.

We propose a theoretical analysis based on the study of the variance of the gradient of a general machine learning objective with respect to a random choice of target function from a class. We prove an upper bound and verify that, indeed, such a variance is extremely small for API classes of functions. This implies the resilience of actual LWE-based primitives against deep learning attacks, and to some extent, the security of SPNs. The hardness of learning reveals itself in the form of the barren plateau phenomenon during the training process, or in other words, in a low information content of the gradient about the target function. Yet, we emphasize that our bounds hold for the case of a regular parameterization of a neural network and the gradient may become informative if a class is mildly pairwise independent and a parameterization is non-regular. We demonstrate our theory in experiments on the learnability of LWE mappings.
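
The central quantity, the variance of the gradient over a random choice of target function, can be estimated empirically for any fixed model. A toy numpy sketch with random Boolean targets and a linear model (a hypothetical setup for illustration, not the paper's API classes or neural networks):

```python
# Toy estimate of gradient variance over a random choice of target function:
# fix a linear model and inputs, sample random targets f, compute the
# gradient of the mean squared loss, and measure its variance across targets.
import numpy as np

rng = np.random.default_rng(1)
n, n_samples, n_targets = 8, 256, 200

X = rng.integers(0, 2, size=(n_samples, n)).astype(float)
w = rng.normal(0, 1, size=n)                      # fixed model parameters

grads = []
for _ in range(n_targets):
    f = rng.integers(0, 2, size=n_samples).astype(float)  # random target
    residual = X @ w - f
    grads.append(2 * X.T @ residual / n_samples)  # grad of mean squared loss
grads = np.array(grads)

print("mean per-coordinate gradient variance:", grads.var(axis=0).mean())
```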

Gaëtan Cassiers, Loïc Masure, Charles Momin, Thorben Moos, Amir Moradi, François-Xavier Standaert
Published 2024-07-08 PDF

Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure, tailored for different optimization goals, have been proposed. The common denominator among all of them is the implicit demand for robust, high-entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security trade-off of masked implementations. This is especially relevant in the case of hardware masking schemes, which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed, with properties suitable for masked hardware implementations. In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an unrolled fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes. According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle, at the same or lower cost than Trivium and Bivium B. Yet, these instances do not provide black-box security, as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers. As a result of our studies, we provide an evidence-based estimate for the cost of securely generating $n$ fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as $20n$ to $30n$ ASIC gate equivalents (GE) or $3n$ to $4n$ FPGA look-up tables (LUTs). Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
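
The linearity issue is easy to see concretely: every output bit of an LFSR is a fixed GF(2)-linear function of the seed, so XOR-ing output bits yields yet another linear function and can cancel the very masks it was meant to refresh. A small sketch with a hypothetical 8-bit Fibonacci LFSR (illustrative taps, not one of the paper's benchmarked instances):

```python
# Every LFSR output bit is a GF(2)-linear function of the seed: verify by
# superposition that out(s1 XOR s2) == out(s1) XOR out(s2), bit by bit.
# This linearity is what enables harmful randomness cancellations in masking.
import random

TAPS = (7, 5, 4, 3)   # illustrative feedback taps; primitivity is irrelevant
                      # to the linearity check below

def lfsr_bits(seed, n):
    state = [(seed >> i) & 1 for i in range(8)]
    out = []
    for _ in range(n):
        out.append(state[0])
        fb = 0
        for t in TAPS:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

s1, s2 = random.getrandbits(8), random.getrandbits(8)
a, b, c = lfsr_bits(s1, 32), lfsr_bits(s2, 32), lfsr_bits(s1 ^ s2, 32)
assert all(x ^ y == z for x, y, z in zip(a, b, c))  # superposition holds
print("LFSR output is linear in the seed; XORed output bits stay linear")
```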

Ji Luo
Published 2024-07-08 PDF

Traitor tracing schemes [Chor–Fiat–Naor, Crypto ’94] help content distributors fight against piracy and are defined with the content distributor as a trusted authority having access to the secret keys of all users. While the traditional model caters well to its original motivation, its centralized nature makes it unsuitable for many scenarios. For usage among mutually untrusted parties, a notion of *ad hoc* traitor tracing (naturally with the capability of broadcast and revocation) is proposed and studied in this work. Such a scheme allows users in the system to generate their own public/secret key pairs, without trusting any other entity. To encrypt, a list of public keys is used to identify the set of recipients, and decryption is possible with a secret key for any of the public keys in the list. In addition, there is a tracing algorithm that given a list of recipients’ public keys and a pirate decoder capable of decrypting ciphertexts encrypted to them, identifies at least one recipient whose secret key must have been used to construct the said decoder.

Two constructions are presented. The first is based on functional encryption for circuits (conceptually, obfuscation) and has constant-size ciphertext, yet its decryption time is linear in the number of recipients. The second is a generic transformation that reduces decryption time at the cost of increased ciphertext size. A matching lower bound on the trade-off between ciphertext size and decryption time is shown, indicating that the two constructions achieve all possible optimal trade-offs, i.e., they fully demonstrate the Pareto front of efficiency. The lower bound also applies to broadcast encryption (hence all mildly expressive attribute-based encryption schemes) and is of independent interest.

Charles Bouillaguet, Julia Sauvage
Published 2024-04-09 PDF

Biscuit is a recent multivariate signature scheme based on the MPC-in-the-Head paradigm. It has been submitted to the NIST competition for additional signature schemes. Signatures are derived from a zero-knowledge proof of knowledge of the solution of a structured polynomial system. This extra structure enables efficient proofs and compact signatures. This short note demonstrates that it also makes these polynomial systems easier to solve than random ones. As a consequence, the original parameters of Biscuit failed to meet the required security levels and had to be upgraded.

Subhadeep Banik, Andrea Caforio, Serge Vaudenay
Published 2024-04-09 PDF

The LowMC family of block ciphers was proposed by Albrecht et al. at Eurocrypt 2015, specifically targeting adoption in FHE and MPC applications due to its low multiplicative complexity. The construction uses a 3-bit quadratic S-box as the sole non-linear transformation in the algorithm. In contrast, both the linear layer and the round key generation are realized through multiplications by full-rank matrices over GF(2). The cipher is instantiable using a diverse set of default configurations, some of which have partial non-linear layers, i.e., layers in which the S-boxes are not applied over the entire internal state of the cipher.
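
The 3-bit S-box in question is small enough to state in full; the sketch below restates the quadratic map from the LowMC design (transcribed from memory, so treat it as illustrative). Its algebraic degree of 2 is exactly what the linearization strategy discussed below exploits.

```python
# The LowMC 3-bit S-box: S(a,b,c) = (a ^ b&c, a ^ b ^ a&c, a ^ b ^ c ^ a&b).
# Each output bit has algebraic degree 2 -- the only non-linearity in LowMC.
def lowmc_sbox(a, b, c):
    return (a ^ (b & c), a ^ b ^ (a & c), a ^ b ^ c ^ (a & b))

# Sanity check: the map is a permutation of the 8 possible 3-bit inputs.
outputs = {lowmc_sbox(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1)}
assert len(outputs) == 8
```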

The significance of cryptanalysing LowMC was elevated by its inclusion in the NIST PQC digital signature scheme PICNIC, in which a successful key recovery using a single plaintext/ciphertext pair is akin to retrieving the secret signing key. The current state-of-the-art attack in this setting is due to Dinur at Eurocrypt 2021, in which a novel way of enumerating the roots of a Boolean system of equations is morphed into a key-recovery procedure that undercuts ordinary exhaustive search in terms of time complexity for variants of the cipher with up to five rounds.

In this work, we demonstrate that this technique can efficiently be enriched with a specific linearization strategy, put forward by Banik et al. at IACR ToSC 2020(4), that reduces the algebraic degree of the non-linear layer. This amalgamation yields new attacks on certain instances of LowMC up to seven rounds.

Emmanuela Orsini, Riccardo Zanotto
Published 2024-04-09 PDF

In this work we study algebraic and generic models for group actions, and extend them to the universal composability (UC) framework of Canetti (FOCS 2001). We revisit the constructions of Duman et al. (PKC 2023), integrating the type-safe model by Zhandry (Crypto 2022), adapted to the group action setting, and formally define an algebraic action model (AAM). This model restricts the power of the adversary in a similar fashion to the algebraic group model (AGM). By imposing algebraic behaviour on the adversary and environment of the UC framework, we construct the UC-AAM. Finally, we instantiate UC-AAM with isogeny-based assumptions, in particular the CSIDH action with twists, obtaining the explicit isogeny model, UC-EI; we observe that, under certain assumptions, this model is "closer" to standard UC than the UC-AGM, even though there still exists an important separation. We demonstrate the utility of our definitions by proving UC-EI security for the passive-secure oblivious transfer protocol described by Lai et al. (Eurocrypt 2021), hence providing the first concretely efficient two-message isogeny-based OT protocol in the random oracle model against malicious adversaries.

Keewoo Lee
Published 2024-04-09 PDF

We revisit the question of what the definition of bit security should be, previously answered by Micciancio-Walter (Eurocrypt 2018) and Watanabe-Yasunaga (Asiacrypt 2021). Our new definition is simple, but (i) captures both search and decision primitives in a single framework, like Micciancio-Walter, and (ii) has a firm operational meaning, like Watanabe-Yasunaga. It also matches intuitive expectations and can be cleanly formulated in terms of the Hellinger distance. To support and justify the new definition, we prove several classic security reductions with respect to our notion of bit security. We also provide pathological examples that indicate the ill-definedness of bit security as defined in Micciancio-Walter and Watanabe-Yasunaga.
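
For reference, the standard Hellinger distance between two distributions $P$ and $Q$ over a countable support (the abstract does not spell out the exact formulation used, so this is the textbook definition):

$$
H(P, Q) \;=\; \frac{1}{\sqrt{2}} \left( \sum_{x} \bigl( \sqrt{P(x)} - \sqrt{Q(x)} \bigr)^{2} \right)^{1/2}, \qquad 0 \le H(P, Q) \le 1 .
$$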

Akira Takahashi, Greg Zaverucha
Published 2024-04-09 PDF

Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al., STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving that the encrypted data satisfies the relation, not on proving plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications to demonstrate the efficiency of our framework: first, we give a new approach and implementation for verifiably encrypting discrete logarithms in any prime-order group, more efficiently than was previously known; we then give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.