IACR Communications in Cryptology (CiC)


66 results sorted by publication date
Editors in chief
Call for papers: IACR Communications in Cryptology. Communications in Cryptology is a journal for original research papers which welcomes submissions on any topic in cryptology. This covers all research topics in cryptography and cryptanalysis, including but not limited to: foundational theory and mathematics; the design, proposal, and analysis of cryptographic primitives; a...
Editors in chief
Policy on publication ethics: Communications in Cryptology (CiC) is committed to ensuring ethics and quality in research. We therefore expect everyone involved in the journal to follow our principles and ethics (see below, as well as the related IACR documents). Duties for Authors. Confidentiality: You may not ask Editorial Board members for information about your submission befor...
Editors in chief
Policy on conflict of interest: A conflict of interest (CoI) is a situation in which a person is involved in multiple interests, one of which could affect the judgment of that individual. In the context of scientific reviewing on behalf of the IACR CiC, a CoI exists when particular relationships between reviewers and authors, or their respective institutions, may taint a reviewer’s de...
Editors in chief
Frequently asked questions: The International Association for Cryptologic Research (IACR) Communications in Cryptology (CiC) was approved by the IACR membership in the 2022 election and targets publications that advance the field, but with a broader range of contributions than the ones accepted by the IACR flagship or area conferences. What are the main principles of CiC? Low-cost open ...
Chunzhi Zhao, Junqi Zhang, Jinzheng Cao, Qingfeng Cheng, Fushan Wei
Published 2024-10-07 PDF

At PKC 2009, May and Ritzenhofen proposed the implicit factorization problem (IFP). They showed that it is easy to factor two $h$-bit RSA moduli $N_1=p_1q_1$, $N_2=p_2q_2$, where $q_1, q_2$ are both $\alpha h$-bit, when $p_1, p_2$ share $uh > 2\alpha h$ of their least significant bits (LSBs). Subsequent works mainly focused on extending the IFP to the cases where $p_1, p_2$ share some of the most significant bits (MSBs) or the middle bits (MBs). In this paper, we propose a novel generalized IFP where $p_1$ and $p_2$ share an arbitrary number of bit blocks, with each block having a consistent displacement in its position between $p_1$ and $p_2$, and we solve it successfully based on Coppersmith’s method. Specifically, we generate a new set of shift polynomials to construct the lattice and optimize the structure of the lattice by introducing a new variable $z=p_1$. We derive that we can factor the two moduli in polynomial time when $u > 2(n+1)\alpha(1-\alpha^{1/(n+1)})$, where $p_1, p_2$ share $n$ blocks. Further, no matter how many blocks are shared, we can theoretically factor the two moduli as long as $u > 2\alpha\ln(1/\alpha)$. In addition, we consider two other cases, where the positions of the shared blocks are arbitrary or where there are $k > 2$ known moduli, and we provide the corresponding solutions for both. Our work is verified by experiments.
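
As a quick numeric illustration of the two bounds above, the following sketch evaluates the finite-$n$ bound $2(n+1)\alpha(1-\alpha^{1/(n+1)})$ and its block-independent limit $2\alpha\ln(1/\alpha)$; the choice $\alpha = 0.25$ is an arbitrary assumption, and nothing here goes beyond the formulas quoted in the abstract.

```python
import math

# Numeric sanity check of the two bounds quoted above (alpha is arbitrary).
alpha = 0.25
for n in (1, 2, 5, 10, 100, 1000):
    bound = 2 * (n + 1) * alpha * (1 - alpha ** (1 / (n + 1)))
    print(f"n = {n:5d}: u > {bound:.6f}")
# The finite-n bound increases toward the limit 2*alpha*ln(1/alpha):
print(f"limit:     u > {2 * alpha * math.log(1 / alpha):.6f}")
```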

Sebastian Kolby, Divya Ravi, Sophia Yakoubov
Published 2024-10-07 PDF

YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number N of available parties, even when the adversary’s corruption threshold is linear in N (e.g. just under N/2). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. We demonstrate three different approaches: the first two (which we call YaOSO and YOSO-GLS) use two and three rounds of communication, respectively. Our third approach (which we call YOSO-LHSS) uses O(d) rounds, where d is the multiplicative depth of the circuit being evaluated; however, it can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHSS. Though YOSO-LHSS requires more rounds than our first two approaches, it may be more practical, since the zero-knowledge proofs it employs are more efficient to instantiate. As a contribution of independent interest, we introduce a verifiable state propagation UC functionality, which allows parties to send private messages which are verifiably derived in the “correct” way (according to the protocol in question) to anonymous receivers. This is a natural functionality to build YOSO protocols on top of.

Qian Guo, Erik Mårtensson, Adrian Åström
Published 2024-10-07 PDF

The Module Learning With Errors (MLWE)-based Key Encapsulation Mechanism (KEM) Kyber is NIST's new standard scheme for post-quantum encryption. As a building block, Kyber uses a Chosen Plaintext Attack (CPA)-secure Public Key Encryption (PKE) scheme, referred to as Kyber.CPAPKE. In this paper we study the robustness of Kyber.CPAPKE against key mismatch attacks.

We demonstrate that Kyber's security levels can be compromised with access to only a few mismatch queries to Kyber.CPAPKE, by striking a balance between the parallelization level and the cost of lattice reduction for post-processing. This highlights the imperative need to strictly prohibit key reuse in Kyber.CPAPKE.

We further propose an adaptive method to enhance parallel mismatch attacks, initially proposed by Shao et al. at AsiaCCS 2024, thereby significantly reducing query complexity. This method combines the adaptive attack with post-processing via lattice reduction to retrieve the final secret key entries. Our method proves its efficacy by reducing query complexity by 14.6 % for Kyber512 and 7.5 % for Kyber768/Kyber1024.

Furthermore, this approach has the potential to improve multi-value Plaintext-Checking (PC) oracle-based side-channel attacks and fault-injection attacks against Kyber itself.

Maria Corte-Real Santos, Jonathan Komada Eriksen, Michael Meyer, Francisco Rodríguez-Henríquez
Published 2024-10-07 PDF

Isogeny-based schemes often come with special requirements on the field of definition of the involved elliptic curves. For instance, the efficiency of SQIsign, a promising candidate in the NIST signature standardisation process, requires a large power of two and a large smooth integer $T$ to divide $p^2-1$ for its prime parameter $p$. We present two new methods that combine previous techniques for finding suitable primes: sieve-and-boost and XGCD-and-boost. We use these methods to find primes for the NIST submission of SQIsign. Furthermore, we show that our methods are flexible and can be adapted to find suitable parameters for other isogeny-based schemes such as AprèsSQI or POKE. For all three schemes, the parameters we present offer the best performance among all parameters proposed in the literature.

Carsten Baum, Jens Berlips, Walther Chen, Ivan B. Damgård, Kevin M. Esvelt, Leonard Foner, Dana Gretton, Martin Kysel, Ronald L. Rivest, Lawrence Roy, Francesca Sage-Ling, Adi Shamir, Vinod Vaikuntanathan, Lynn Van Hauwe, Theia Vogel, Benjamin Weinstein-Raun, Daniel Wichs, Stephen Wooster, Andrew C. Yao, Yu Yu
Published 2024-10-07 PDF

Oblivious Pseudorandom Functions (OPRFs) allow a client to evaluate a pseudorandom function (PRF) on her secret input based on a key that is held by a server. In the process, the client only learns the PRF output but not the key, while the server neither learns the input nor the output of the client. The arguably most popular OPRF is due to Naor, Pinkas and Reingold (Eurocrypt 2009). It is based on an Oblivious Exponentiation by the server, with passive security under the Decisional Diffie-Hellman assumption. In this work, we strengthen the security guarantees of the NPR OPRF by protecting it against active attacks by the server. We have implemented our solution and report on its performance. Our main result is a new batch OPRF protocol which is secure against maliciously corrupted servers, but is essentially as efficient as the semi-honest solution. More precisely, the computation (and communication) overhead is a multiplicative factor $o(1)$ as the batch size increases. The obvious solution using zero-knowledge proofs would have a constant factor overhead at best, which can be too expensive for certain deployments. Our protocol relies on a novel version of the DDH problem, which we call the Oblivious Exponentiation Problem (OEP), and we give evidence for its hardness in the Generic Group model. We also present a variant of our maliciously secure protocol that does not rely on the OEP but nevertheless only has overhead $o(1)$ over the known semi-honest protocol. Moreover, we show that our techniques can also be used to efficiently protect threshold blind BLS signing and threshold ElGamal decryption against malicious attackers.
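
For readers unfamiliar with the blinded-exponentiation pattern referred to above, here is a minimal sketch of the semi-honest flow; the modulus, subgroup, and hash-to-group map are insecure toy placeholders chosen for readability, not the paper's instantiation or its maliciously secure protocol.

```python
import hashlib
import math
import secrets

# Toy sketch of semi-honest blinded exponentiation: the client blinds H(x),
# the server exponentiates with its key k, and the client unblinds.
# CAUTION: all parameters below are insecure placeholders for illustration.
p = 2**255 - 19          # placeholder prime modulus, not a vetted OPRF group
q = (p - 1) // 2         # order of the quadratic-residue subgroup mod p

def hash_to_group(x: bytes) -> int:
    # Placeholder hash-to-group: squaring forces the result into the
    # order-q subgroup of quadratic residues modulo p.
    h = int.from_bytes(hashlib.sha512(x).digest(), "big") % p
    return pow(h, 2, p)

def rand_exponent() -> int:
    # Sample an exponent invertible modulo q so blinding can be undone.
    while True:
        r = secrets.randbelow(q - 2) + 2
        if math.gcd(r, q) == 1:
            return r

x = b"client secret input"
r = rand_exponent()
blinded = pow(hash_to_group(x), r, p)         # client -> server: H(x)^r

k = rand_exponent()                           # server's long-term PRF key
response = pow(blinded, k, p)                 # server -> client: H(x)^(r*k)

prf_output = pow(response, pow(r, -1, q), p)  # client unblinds to H(x)^k
assert prf_output == pow(hash_to_group(x), k, p)
```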

Alexander Bille, Elmar Tischhauser
Published 2024-10-07 PDF

Mixed-Integer Linear Programming (MILP) modeling has become an important tool for both the analysis and the design of symmetric cryptographic primitives. The bit-wise modeling of their nonlinear components, especially the S-boxes, is of particular interest since it allows more informative analysis compared to word-oriented models focusing on counting active S-boxes. At the same time, the size of these models, especially in terms of the number of required inequalities, tends to significantly influence and ultimately limit the applicability of this method to real-world ciphers, especially for larger numbers of rounds.

It is therefore of great cryptographic significance to study optimal linear inequality descriptions for Boolean functions. The pioneering works of Abdelkhalek et al. (FSE 2017), Boura and Coggia (FSE 2020) and Li and Sun (FSE 2023) provided various heuristic techniques for this computationally hard problem, decomposing it into two algorithmic steps, coined Problem 1 and Problem 2, with the latter being identical to the well-known NP-hard set cover problem, for which there are many heuristic and exact algorithms in the literature.

In this paper, we introduce a novel and efficient branch-and-bound algorithm for generating all minimal, non-redundant candidate inequalities that satisfy a given Boolean function, therefore solving Problem 1 in an optimal manner without relying on heuristics. We furthermore prove that our algorithm correctly computes optimal solutions. Using a number of dedicated optimizations, it provides significantly improved runtimes compared to previous approaches and allows the optimal modeling of the difference distribution tables (DDT) and linear approximation tables (LAT) of many practically used S-boxes. The source code for our algorithm is publicly available as a tool for researchers and practitioners in symmetric cryptography.
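
Since the abstract identifies Problem 2 with the set cover problem, a minimal greedy sketch may help fix ideas: given candidate inequalities (the output of Problem 1) and the impossible DDT/LAT points each one cuts, pick inequalities until every impossible point is covered. The toy instance and the greedy heuristic are illustrative assumptions, not the paper's algorithm (which targets Problem 1, and for which exact set-cover solvers may be used instead).

```python
# Greedy set-cover sketch for "Problem 2" as framed above: choose a small
# subset of candidate inequalities so that every impossible point is cut
# by at least one chosen inequality. Toy data, standard ln-approximation.
def greedy_set_cover(points, candidates):
    """points: set of impossible points; candidates: {ineq_id: points it cuts}."""
    uncovered, chosen = set(points), []
    while uncovered:
        # Pick the inequality cutting the most still-uncovered points.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("candidates do not cover all impossible points")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Hypothetical toy instance: 5 impossible points, 4 candidate inequalities.
points = {0, 1, 2, 3, 4}
candidates = {"A": {0, 1, 2}, "B": {2, 3}, "C": {3, 4}, "D": {1, 4}}
print(greedy_set_cover(points, candidates))  # e.g. ['A', 'C']
```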

Ritam Bhaumik, André Chailloux, Paul Frixons, Bart Mennink, María Naya-Plasencia
Published 2024-10-07 PDF

In order to maintain a similar security level in a post-quantum setting, many symmetric primitives would have to double their key and state sizes. So far, no generic way of doing this is known that would provide convincing quantum security guarantees. In this paper we propose a new generic construction, QuEME, that allows one to double the key and the state size of a block cipher in such a way that a decent level of quantum security is guaranteed. The QuEME design is inspired by the ECB-Mix-ECB (EME) construction, but is defined for a different choice of mixing function than what we have seen before, in order to withstand a new quantum superposition attack that we introduce as a side result: this attack exploits a periodic property found in collisions and breaks EME and a large class of its variants. We prove that QuEME achieves $n$-bit security in the classical setting, where $n$ is the block size of the underlying block cipher, and at least $(n/6)$-bit security in the quantum setting. We finally propose a concrete instantiation of this construction, called Double-AES, that is built with variants of the standardized AES-128 block cipher.

Seongtaek Chee, Kyung Chul Jeong, Tanja Lange, Nari Lee, Alex Pellegrini, Hansol Ryu
Published 2024-10-07 PDF

We analyze Layered ROLLO-I, a code-based cryptosystem published in IEEE Communications Letters and submitted to the Korean post-quantum cryptography competition. Four versions of Layered ROLLO-I have been proposed in the competition. We show that the first two versions do not provide the claimed security against rank decoding attacks and give reductions to small instances of the original ROLLO-I scheme, which was a candidate in the NIST competition and eliminated there due to rank decoding attacks. As a second contribution, we provide two efficient message recovery attacks, affecting every security level of the first three versions of Layered ROLLO-I and security levels 128 and 192 of the fourth version.

Sebastian Kolby, Elena Pagnin, Sophia Yakoubov
Published 2024-10-07 PDF

We study signatures well suited for sensitive applications (e.g. whistleblowing) where both the signer's anonymity and deniability are important. Two independent lines of work have tackled these two goals: ring signatures ensure the signer's anonymity (within a set of signers, called a ring), and — separately — multi designated verifier signatures ensure that all the intended recipients agree on whether a signature is valid, while maintaining the signer's deniability by preventing the intended recipients from convincing an outsider of the validity of the signature. In this paper, we introduce multi designated verifier ring signatures (MDVRS), which simultaneously offer both signer anonymity and deniability. This makes MDVRS uniquely suited for sensitive scenarios.

Following the blueprint of Damgård et al. (TCC '20) for multi designated verifier signatures, we introduce provably simulatable designated verifier ring signatures (PSDVRS) as an intermediate building block which we then compile into an MDVRS. We instantiate PSDVRS in a concretely efficient way from discrete-logarithm-based sigma protocols, encryption and commitments.

Liu Zhang, Zilong Wang, Baocang Wang
Published 2024-10-07 PDF

Our first objective is to enhance the capabilities of differential-neural distinguishers by applying more deep-learning techniques, focusing on handling more rounds and improving accuracy. Inspired by the Inception Block in GoogLeNet, we adopted a design that uses multiple parallel convolutional layers with varying kernel sizes before the residual block to capture multi-dimensional information. Additionally, we expanded the convolutional kernels in the residual blocks, enlarging the network's receptive field. In the case of Speck32/64, our efforts yield accuracy improvements in rounds 6, 7, and 8, enabling the successful training of a 9-round differential-neural distinguisher. As for Simon32/64, we developed a differential-neural distinguisher capable of effectively handling 12 rounds while achieving noteworthy accuracy enhancements in rounds 9, 10, and 11.
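
A minimal PyTorch sketch of the architectural idea described above (parallel convolutions with varying kernel sizes feeding a residual stage with enlarged kernels); all layer sizes and shapes are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    # Parallel 1D convolutions with different kernel sizes, concatenated,
    # in the spirit of the Inception-style front end described above.
    def __init__(self, in_ch: int, out_ch: int, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.bn = nn.BatchNorm1d(out_ch * len(kernel_sizes))

    def forward(self, x):
        return torch.relu(self.bn(torch.cat([b(x) for b in self.branches], dim=1)))

class ResidualBlock(nn.Module):
    # Residual stage with an enlarged kernel to widen the receptive field.
    def __init__(self, ch: int, kernel_size: int = 5):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, kernel_size, padding=kernel_size // 2)
        self.conv2 = nn.Conv1d(ch, ch, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        return torch.relu(x + self.conv2(torch.relu(self.conv1(x))))

# Illustrative shapes only: a batch of 64 ciphertext pairs for Speck32/64,
# viewed as 4 words of 16 bits each (channels x length = 4 x 16).
x = torch.randn(64, 4, 16)
front = MultiKernelBlock(in_ch=4, out_ch=8)
res = ResidualBlock(ch=32)
print(res(front(x)).shape)  # torch.Size([64, 32, 16])
```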

Additionally, we utilized neutral bits to ensure the required data distribution for launching a successful key recovery attack when using multiple-ciphertext pairs as input for the neural network. Meanwhile, we redefined the formula for time complexity based on the differences in prediction speeds of the distinguisher between a single-core CPU and a GPU. Combining these various advancements allows us to considerably reduce the time and data complexity of key recovery attacks on 13-round Speck32/64. Furthermore, we used knowledge distillation techniques to reduce the model size, accelerating the distinguisher's prediction speed and reducing the time complexity. In particular, we achieved a successful 14-round key recovery attack by exhaustively guessing a 1-round subkey. For Simon32/64, we accomplished a 17-round key recovery attack for the first time and reduced the time complexity of the 16-round key recovery attack.

Rebecca Hay, Elisabeth Oswald
Published 2024-10-07 PDF

New proposals for scalable key rank estimation methods have appeared recently, in particular the sampling-based approach MCRank. The idea is that one can consistently estimate the key rank by sampling only a small portion of the key space as a “proxy”, leading to both an accurate and scalable approach, at least in comparison with another approach based on histograms. We show that the (earlier) GEEA algorithm is in fact a sampling-based algorithm, and provide an in-depth comparison between GEEA (when adapted to produce rank estimates rather than guessing entropy estimates), GM bounds, MCRank and the currently most performant counting-based rank estimation as implemented in the Labynkyr library. We find that although MCRank does live up to the promised accuracy and scalability for probability-based distinguishers, it fails to handle cases with unusual distinguisher distributions.
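
The sampling idea summarized above can be illustrated with a toy Monte Carlo rank estimator: sample keys uniformly, count the fraction scoring at least as high as the correct key, and extrapolate to the full key space. This shows only the "proxy" idea; MCRank itself uses a more refined sampling strategy, and the scores below are synthetic.

```python
import random

# Toy sampling-based rank estimation: rank(correct key) is the number of
# keys scoring at least as high as the correct key; estimate it by uniform
# sampling. Tiny parameters so the exact rank can be checked exhaustively.
random.seed(1)
N_BYTES, N_VALS = 2, 256  # toy keyspace: two 8-bit subkeys

# Hypothetical per-byte scores (e.g. log-likelihoods from some attack).
scores = [[random.gauss(0, 1) for _ in range(N_VALS)] for _ in range(N_BYTES)]
correct = (17, 42)
correct_score = sum(scores[i][correct[i]] for i in range(N_BYTES))

samples, hits = 100_000, 0
for _ in range(samples):
    k = tuple(random.randrange(N_VALS) for _ in range(N_BYTES))
    if sum(scores[i][k[i]] for i in range(N_BYTES)) >= correct_score:
        hits += 1
estimate = hits / samples * N_VALS**N_BYTES

exact = sum(
    1
    for a in range(N_VALS)
    for b in range(N_VALS)
    if scores[0][a] + scores[1][b] >= correct_score
)
print(f"estimated rank ~ {estimate:.0f}, exact rank = {exact}")
```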

Furthermore, we put forward a novel proposal for a highly scalable key rank estimation method by introducing the notion of an “attacker budget”. Our proposal is based on the idea that, in particular for very long keys, the exact key rank is less important than the knowledge whether a key is within a certain bound. Thus our “budget approach” is based on efficiently checking if the result of an attack is such that the attacker's budget suffices for successful enumeration. Our budget approach scales linearly with the key size and thus enables security estimations even for post-quantum key lengths.

Franklin Harding, Jiayu Xu
Published 2024-10-07 PDF

Blind signature schemes enable a user to obtain a digital signature on a message from a signer without revealing the message itself. Among the most fundamental examples of such a scheme is blind Schnorr, but recent results show that it does not satisfy the standard notion of security against malicious users, One-More Unforgeability (OMUF), as it is vulnerable to the ROS attack. However, blind Schnorr does satisfy the weaker notion of sequential OMUF, in which only one signing session is open at a time, in the Algebraic Group Model (AGM) + Random Oracle Model (ROM), assuming the hardness of the Discrete Logarithm (DL) problem.

This paper serves as a first step towards characterizing the security of blind Schnorr in the limited concurrency setting. Specifically, we show that blind Schnorr satisfies OMUF when at most two signing sessions can be concurrently open (in the AGM+ROM, assuming DL). Our argument suggests that it is plausible that blind Schnorr satisfies OMUF for up to polylogarithmically many concurrent signing sessions. Our security proof involves interesting techniques from linear algebra and combinatorics.

Nima Mahdion, Elisabeth Oswald
Published 2024-10-07 PDF

Software implementations of cryptographic algorithms often use masking schemes as a countermeasure against side-channel attacks. A number of recent results clearly show the challenge of implementing masking schemes in such a way that (unforeseen) micro-architectural effects do not cause masking flaws which undermine the intended security goal of an implementation. So far, utilising a higher-order version of the non-specific (fixed-vs-random) input test of the Test Vector Leakage Assessment (TVLA) framework has been the best option to identify such flaws. The drawbacks of this method are its significant computation cost and its inability to pinpoint which interaction of masking shares leads to the flaw. In this paper we propose a novel version, the fixed-vs-random shares test, to tackle both drawbacks. We explain our method and show its application to three case studies, where each time it outperforms its conventional TVLA counterpart. The drawback of our method is that it requires control over the shares, which, we argue, is practically feasible in the context of in-house evaluation and testing for software implementations.
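
As background for the test above, here is a sketch of the Welch $t$-test that underlies TVLA-style non-specific tests; in the proposed fixed-vs-random shares variant, the grouping is done on the shares (which requires control over them) rather than on the unshared input. The trace counts, the injected leak, and the threshold 4.5 follow common practice and are assumptions of this sketch.

```python
import numpy as np

# Welch t-test at the core of TVLA-style tests: compare a "fixed" group of
# traces against a "random" group, point by point; |t| > 4.5 is the
# conventional leakage-detection threshold.
def welch_t(fixed: np.ndarray, rand: np.ndarray) -> np.ndarray:
    mf, mr = fixed.mean(axis=0), rand.mean(axis=0)
    vf, vr = fixed.var(axis=0, ddof=1), rand.var(axis=0, ddof=1)
    return (mf - mr) / np.sqrt(vf / len(fixed) + vr / len(rand))

rng = np.random.default_rng(0)
n_traces, n_samples = 5_000, 200
fixed = rng.normal(0.0, 1.0, (n_traces, n_samples))
rand = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed[:, 120] += 0.15  # hypothetical leaking sample point

t = welch_t(fixed, rand)
print("samples over threshold:", np.flatnonzero(np.abs(t) > 4.5))
```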

Lichao Wu, Sébastien Tiran, Guilherme Perin, Stjepan Picek
Published 2024-10-07 PDF

Side-channel Collision Attacks (SCCA) are a classical class of methods that exploit information dependencies leaked during cryptographic operations. Unlike collision attacks that seek instances where two different inputs to a cryptographic algorithm yield identical outputs, SCCAs specifically target the internal state, where identical outputs are more likely. Although SCCA does not rely on a pre-assumed leakage model, it explicitly operates on precise trace segments reflecting the target operation, which is challenging when the leakage measurements are noisy. Besides, its attack performance may vary dramatically, as it relies on selecting a reference byte (and its corresponding leakages) to “collide” with the other bytes. A poor selection can leave many bytes unrecoverable. These two facts make its real-world application problematic.

This paper addresses these challenges by introducing a novel plaintext-based SCCA. We leverage the bijective relationship between plaintext and secret data, using plaintext as labels to train profiling models to depict leakages from varying operations. By comparing the leakage representations produced by the profiling model instead of the leakage segmentation itself, all secret key differences can be revealed simultaneously without processing leakage traces. Furthermore, we propose a novel error correction scheme to rectify false predictions further. Experimental results show that our approach significantly surpasses the state-of-the-art SCCA in both attack performance and computational complexity (e.g., training time reduced from approximately three hours to five minutes). These findings underscore our method's effectiveness and practicality in real-world attack scenarios.

Aron van Baarsen, Marc Stevens
Published 2024-10-07 PDF

Private set intersection (PSI) is a cryptographic functionality for two parties to learn the intersection of their input sets, without leaking any other information. Circuit-PSI is a stronger PSI functionality where the parties learn only a secret-shared form of the desired intersection, thus without revealing the intersection directly. These secret shares can subsequently serve as input to a secure multiparty computation of any function on this intersection.

In this paper we consider several settings in which parties take part in multiple Circuit-PSI executions with the same input set, and aim to amortize communications and computations. To that end, we build up a new framework for Circuit-PSI around generalizations of oblivious (programmable) PRFs that are extended with offline setup phases. We present several efficient instantiations of this framework with new security proofs for this setting. As a side result, we obtain a slight improvement in communication and computation complexity over the state-of-the-art semi-honest Circuit-PSI protocol by Bienstock et al. (USENIX '23). Additionally, we present a novel Circuit-PSI protocol from a PRF with secret-shared outputs, which has linear communication and computation complexity in the parties' input set sizes, and is able to realize a stronger security notion. Lastly, we derive the potential amortizations over multiple protocol executions, and observe that each of the presented instantiations is favorable in at least one of the multiple-execution settings.

Balthazar Bauer, Pooya Farshim, Patrick Harasser, Markulf Kohlweiss
Published 2024-10-07 PDF

The generic-group model (GGM) and the algebraic-group model (AGM) have been exceptionally successful in proving the security of many classical and modern cryptosystems. These models, however, come with standard-model uninstantiability results, raising the question of whether the schemes analyzed under them can be based on firmer standard-model footing.

We formulate the uber-knowledge (UK) assumption, a standard-model assumption that naturally extends the uber-assumption family to knowledge-type problems. We justify the soundness of UK in both the bilinear GGM and the bilinear AGM. Along the way we extend these models to account for hashing into groups, an adversarial capability that is available in many concrete groups. In contrast to standard assumptions, hashing may affect the validity of knowledge assumptions. These results, in turn, enable a modular approach to security in the GGM and the AGM.

As example applications, we use the UK assumption to prove knowledge soundness of Groth's zero-knowledge SNARK (EUROCRYPT 2016) and of KZG polynomial commitments (ASIACRYPT 2010) in the standard model, where for the former we reuse the existing proof in the AGM without hashing.

Yi-Fu Lai
Published 2024-10-07 PDF

In this work, we introduce two post-quantum Verifiable Random Function (VRF) constructions based on abelian group actions and isogeny group actions with a twist. The former relies on the standard group action Decisional Diffie-Hellman (GA-DDH) assumption. VRFs serve as cryptographic tools allowing users to generate pseudorandom outputs along with publicly verifiable proofs. Moreover, the residual pseudorandomness of VRFs ensures the pseudorandomness of unrevealed inputs, even when multiple outputs and proofs are disclosed. Our work aims at addressing the growing demand for post-quantum VRFs, as existing constructions based on elliptic curve cryptography (ECC) or classical DDH-type assumptions are vulnerable to quantum threats.

Our two VRF constructions, rooted in number-theoretic pseudorandom functions, are both simple and secure in the random oracle model. We introduce a new proof system for the factorization of group actions and set elements, which serves as the proof for our VRFs. The first proposal is based on the standard GA-DDH problem, and for its security proof, we introduce the (group action) master Decisional Diffie-Hellman problem over group actions, proving its equivalence to the standard GA-DDH problem. In the second construction, we leverage quadratic twists to enhance efficiency, reducing the key size and the proof sizes and expanding the input size. The scheme is based on the square GA-DDH problem.

Moreover, we employ advanced techniques from the isogeny literature to optimize the proof size to 39 KB and 34 KB using CSIDH-512 without compromising VRF notions. The schemes feature fast evaluations but exhibit slower proof generation. To the best of our knowledge, these constructions represent the first two provably secure VRFs based on isogenies.

Haetham Al Aswad, Emmanuel Thomé, Cécile Pierrot
Published 2024-10-07 PDF

The Number Field Sieve and its variants are the best algorithms to solve the discrete logarithm problem in finite fields (except for the weak small-characteristic case). The Factory variant accelerates the computation when several prime fields are targeted. This article adapts the Factory variant to non-prime finite fields of medium and large characteristic. A precomputation, solely dependent on an approximate finite field size and an extension degree, makes it possible to efficiently compute discrete logarithms in a constant proportion of the finite fields of the given approximate size and extension degree. We combine this idea with two other variants of NFS, namely the tower and special variants. This combination improves the asymptotic complexity. We also notice that combining our approach with the MNFS variant would be an unnecessary complication, as all the potential gain of MNFS is subsumed by our Factory variant anyway. Furthermore, we demonstrate how Chebotarev's density theorem allows us to compute the density of finite fields that can be solved with a given precomputation. Finally, we provide experimental data in order to assess the practical reach of our approach.

Tsz Hon Yuen, Sherman S. M. Chow, Huangting Wu, Cong Zhang, Siu-Ming Yiu
Published 2024-10-07 PDF

Salient in many cryptosystems, the exponent-inversion technique began without randomization in the random oracle model (SCIS '03, PKC '04), evolved into the Boneh-Boyen short signature scheme (JoC '08) and exerted a wide influence. Seen as a notable case, Gentry's (EuroCrypt '06) identity-based encryption (IBE) applies exponent inversion on a randomized base in its identity-based trapdoors. Making use of the non-static q-strong Diffie-Hellman assumption, Boneh-Boyen signatures are shown to be unforgeable against q-chosen-message attacks, while a variant q-type decisional assumption is used to establish the security of Gentry-IBE. Challenges remain in proving their security under weaker static assumptions.

Supported by the dual form/system framework (Crypto '09, AsiaCrypt '12), we propose dual form exponent-inversion Boneh-Boyen signatures and Gentry-IBE, with security proven under the symmetric external Diffie-Hellman (SXDH) assumption. Starting from our signature scheme, we extend it into P-signatures (TCC '08), resulting in the first anonymous credential scheme from the SXDH assumption, serving as a competitive alternative to the static-assumption construction of Abe et al. (JoC '16). Moreover, from our Gentry-IBE variant, we propose an accountable-authority IBE scheme also from SXDH, surpassing the fully secure Sahai-Seyalioglu scheme (PKC '11) in efficiency and the generic Kiayias-Tang transform (ESORICS '15) in security. Collectively, we present a suite of results under static assumptions.

Sougata Mandal
Published 2024-10-07 PDF

At ASIACRYPT 2019, Andreeva et al. introduced a new symmetric-key primitive called the forkcipher, designed for lightweight applications handling short messages. A forkcipher is a keyed function with a public tweak, featuring fixed-length input and fixed-length (expanding) output. They also proposed a specific forkcipher, ForkSkinny, based on the tweakable block cipher SKINNY, and its security was evaluated through cryptanalysis. Since then, several efficient AEAD and MAC schemes based on forkciphers have been proposed, catering not only to short messages but also to various purposes such as leakage resilience and cloud security. While forkciphers have proven to be efficient building blocks for AEAD schemes, forkcipher design itself remains largely unexplored; in particular, provably secure forkcipher constructions are lacking.

In this work, we propose forkcipher designs for various tweak lengths, based on a block cipher as the underlying primitive. We provide proofs of security for these constructions, assuming the underlying block cipher behaves as an ideal block cipher. First, we present a forkcipher, $\widetilde{\textsf{F}}1$, for an $n$-bit tweak and prove its optimal ($n$-bit) security. Next, we propose another construction, $\widetilde{\textsf{F}}2$, for a $2n$-bit tweak, also proving its optimal ($n$-bit) security. Finally, we introduce a construction, $\widetilde{\textsf{F}}r$, for a general $rn$-bit tweak, achieving $n$-bit security.

Avishek Majumder, Sayantan Mukherjee
Published 2024-10-07 PDF

Broadcast Encryption (BE) allows a sender to send an encrypted message to multiple receivers. In a typical broadcast encryption scenario, the broadcaster decides the set of users who can decrypt a particular ciphertext (denoted as the privileged set). Gritti et al. (IJIS'16) introduced a new primitive called Broadcast Encryption with Dealership (BrED), where the dealer decides the privileged set. A BrED scheme allows a dealer to buy content from the broadcaster and sell it to users. It provides better flexibility in managing a large user base. To date, quite a few different constructions of BrED schemes have been proposed by the research community.

We find that all existing BrED schemes are insecure under the existing security definitions. We demonstrate a concrete attack on all the existing schemes under the existing security definition. We also find that the security definitions proposed in the state-of-the-art BrED schemes do not capture the real world. We argue about the inadequacy of the existing definitions and propose a new security definition that models the real world more closely. Finally, we propose a new BrED construction and prove it to be secure in our newly proposed security model.

Dinal Kamel, François-Xavier Standaert, Olivier Bronchain
Published 2024-10-07 PDF

Raccoon is a lattice-based scheme submitted to the NIST 2022 call for additional post-quantum signatures. One of its main selling points is that its design is intrinsically easy to mask against side-channel attacks. So far, Raccoon's physical security guarantees were only stated in the abstract probing model. In this paper, we discuss how these probing security results translate into guarantees in more realistic leakage models. We also highlight that this translation differs from what is usually observed (e.g., in symmetric cryptography), due to the algebraic structure of Raccoon's operations. For this purpose, we perform an in-depth information-theoretic evaluation of Raccoon's most innovative part, namely the AddRepNoise function, which allows generating its arithmetic shares on-the-fly. Our results are twofold. First, we show that the resulting shares do not enforce a statistical security order (i.e., the need for the side-channel adversary to estimate higher-order moments of the leakage distribution), as usually expected when masking. Second, we observe that the first-order leakage on the (large) random coefficients manipulated by Raccoon cannot be efficiently turned into leakage on the (smaller) coefficients of its long-term secret. Concretely, our information-theoretic evaluations for relevant leakage functions also suggest that Raccoon's masked implementations can ensure high security with fewer shares than suggested by a conservative analysis in the probing model.
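
For context, a minimal sketch of plain additive arithmetic masking modulo $q$, the share representation analyzed above; this is not Raccoon's AddRepNoise itself (which interleaves small-noise additions with share refreshes), and the modulus below is a placeholder, not Raccoon's exact parameter.

```python
import secrets

# Additive arithmetic masking mod q: split a secret coefficient into d
# shares that are individually uniform and sum to the secret modulo q.
q, d = 2**49 - 2**30 + 1, 4  # hypothetical modulus; d = number of shares

def mask(secret: int) -> list[int]:
    shares = [secrets.randbelow(q) for _ in range(d - 1)]
    shares.append((secret - sum(shares)) % q)  # last share fixes the sum
    return shares

def unmask(shares: list[int]) -> int:
    return sum(shares) % q

s = secrets.randbelow(q)
shares = mask(s)
assert unmask(shares) == s
print(shares)
```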

Diego F. Aranha, Georgios Fotiadis, Aurore Guillevic
Published 2024-10-07 PDF

For more than two decades, pairings have been a fundamental tool for designing elegant cryptosystems, varying from digital signature schemes to more complex privacy-preserving constructions. However, the advancement of quantum computing threatens to undermine public-key cryptography. Concretely, it is widely accepted that a future large-scale quantum computer would be capable of breaking any public-key cryptosystem used today, rendering today's public-key cryptography obsolete and mandating the transition to quantum-safe cryptographic solutions. This necessity is underlined by numerous recognized government bodies around the world, including NIST, which initiated the first open competition for standardizing post-quantum (PQ) cryptographic schemes, focusing primarily on digital signatures and key encapsulation/public-key encryption schemes. Despite the current efforts in standardizing PQ primitives, the landscape of complex, privacy-preserving cryptographic protocols, e.g., zkSNARKs/zkSTARKs, is at an early stage. Existing solutions suffer from various disadvantages in terms of efficiency and compactness and, in addition, they need to undergo the required scrutiny to gain the necessary trust in the academic and industrial domains. Therefore, it is believed that the migration to purely quantum-safe cryptography will require an intermediate step where current classically secure protocols and quantum-safe solutions co-exist. This is reinforced by the report on the Commercial National Security Algorithm Suite version 2.0, mandating the transition to quantum-safe cryptographic algorithms by 2033 and suggesting the incorporation of ECC at 192-bit security in the meantime. To this end, the present paper provides a comprehensive study of pairings at the 192-bit security level. We start with an exhaustive review of the literature to search for all possible recommendations of such pairing constructions, from which we extract the most promising candidates in terms of efficiency and security, with respect to the advanced Special TNFS attacks. Our analysis is focused not only on the pairing computation itself, but also on additional operations that are relevant in pairing-based applications, such as hashing to pairing groups, cofactor clearing and subgroup membership testing. We implement all functionalities of the most promising candidates within the RELIC cryptographic toolkit in order to identify the most efficient pairing implementation at the 192-bit security level, and we provide extensive experimental results.

Lichao Wu, Azade Rezaeezade, Amir Ali-pour, Guilherme Perin, Stjepan Picek
Published 2024-10-07 PDF

Profiling side-channel analysis has gained widespread acceptance in both academic and industrial realms due to its robust capacity to unveil protected secrets, even in the presence of countermeasures. To harness this capability, an adversary must access a clone of the target device to acquire profiling measurements, labeling them with leakage models. Finding an effective leakage model, especially for a protected dataset with a low signal-to-noise ratio or weak correlation between actual leakages and labels, often necessitates an intuitive engineering approach, since otherwise the attack will not perform well.

In this paper, we introduce a deep learning approach with a flexible leakage model, referred to as the multi-bit model. Instead of trying to learn a pre-determined representation of the target intermediate data, we utilize the concept of the stochastic model to decompose the label into bits. Then, the deep learning model is used to classify each bit independently. This versatile multi-bit model can adjust to existing leakage models like the Hamming weight and Most Significant Bit while also possessing the flexibility to adapt to complex leakage scenarios. To further improve the attack efficiency, we extend the multi-bit model to profile all 16 subkey bytes simultaneously, which requires negligible computational effort. The experimental results show that the proposed methods can efficiently break all key bytes across four considered datasets while the conventional leakage models fail. Our work signifies a significant step forward in deep learning-based side-channel attacks, showcasing a high degree of flexibility and efficiency with the proposed leakage model.
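
A small sketch of the label-decomposition step described above: each intermediate byte becomes eight binary targets, and (under a per-bit independence assumption, which is an assumption of this sketch) per-bit probabilities can be recombined into 256 class scores. The deep-learning model itself is omitted; any per-bit classifier could sit on top.

```python
import numpy as np

# Decompose byte-valued labels into 8 binary targets (one per bit), as in
# the multi-bit model described above, and recombine per-bit probabilities
# into per-class scores assuming bit independence.
def to_bit_labels(values: np.ndarray) -> np.ndarray:
    """values: intermediate bytes in [0, 255] -> array of shape (n, 8)."""
    return ((values[:, None] >> np.arange(8)) & 1).astype(np.uint8)

def from_bit_probs(bit_probs: np.ndarray) -> np.ndarray:
    """Recombine per-bit P(bit=1) of shape (n, 8) into (n, 256) log-scores."""
    classes = np.arange(256)
    bits = ((classes[:, None] >> np.arange(8)) & 1).astype(np.float64)
    logp = np.log(np.clip(bit_probs, 1e-12, 1 - 1e-12))
    log1m = np.log(np.clip(1 - bit_probs, 1e-12, 1 - 1e-12))
    return logp @ bits.T + log1m @ (1 - bits).T

print(to_bit_labels(np.array([0x00, 0x5A, 0xFF])))
print(from_bit_probs(np.full((1, 8), 0.9)).argmax())  # all bits likely 1 -> 255
```
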

Jinzheng Cao, Qingfeng Cheng, Jian Weng
Published 2024-10-07 PDF

The Learning with Errors (LWE) problem has become one of the most prominent candidates for post-quantum cryptography, offering promising potential to meet the challenge of quantum computing. From a theoretical perspective, optimizing algorithms to solve LWE is a vital task for the analysis of this cryptographic primitive. In this paper, we propose a fine-grained time/memory trade-off method to analyze c-sum BKW variants for LWE in both classical and quantum models, then offer new complexity bounds for multiple BKW variants determined by the modulus $q$, the dimension $k$, the error rate $\alpha$, and the stripe size $b$. Through our analysis, optimal parameters can be found efficiently for different settings, and the minimized complexities are lower than existing results. Furthermore, we enhance the performance of c-sum BKW in the quantum computing model by adopting the quantum Meet-in-the-Middle technique as the c-sum solver instead of the naive c-sum technique. Our complexity trade-off formula also applies to the quantum version of BKW, and optimizes the theoretical quantum time and memory costs, which are exponentially lower than those of existing quantum c-sum BKW variants.

Jonathan Komada Eriksen, Antonin Leroux
Published 2024-10-07 PDF

This work introduces several algorithms related to the computation of orientations in endomorphism rings of supersingular elliptic curves. This problem is at the heart of several results regarding the security of oriented curves in isogeny-based cryptography. Under the Deuring correspondence, it can be expressed purely in terms of quaternions and boils down to representing integers by ternary quadratic forms. Our main contribution is to show that there exist efficient algorithms to solve this problem for quadratic orders of discriminant $n$ up to $O(p^{4/3})$. Our approach improves upon previous results by increasing this bound from $O(p)$ to $O(p^{4/3})$ and removing some heuristics. We introduce several variants of our new algorithm and provide a careful analysis of their asymptotic running time (without heuristics when possible). The best proven asymptotic complexity of one of our variants is $O(n^{3/4}/p)$ on average. The best heuristic variant has a complexity of $O(p^{1/3})$ for big enough $n$. We then introduce several results regarding the computation of ideals between oriented orders. The first application of this is a simplification of the known reduction from vectorization to computing the endomorphism ring, removing the assumption on the factorization of the discriminant. As a second application, we relate the problem of computing fixed-degree isogenies between supersingular curves to the problem of computing orientations in endomorphism rings, and we show that for a large range of degrees $d$, our new algorithms improve on the state of the art; in important special cases, the range of degrees $d$ for which there exists a polynomial-time algorithm is increased. In the most special case we consider, when both curves are oriented by a small-degree endomorphism, we show heuristically that our techniques allow the computation of isogenies of any degree, assuming they exist.

Benoît Cogliati, Jérémy Jean, Thomas Peyrin, Yannick Seurin
Published 2024-07-08 PDF

We analyze the multi-user (mu) security of a family of nonce-based authenticated encryption (nAE) schemes based on a tweakable block cipher (TBC). The starting point of our work is an analysis of the mu security of the SCT-II mode which underlies the nAE scheme Deoxys-II, winner of the CAESAR competition for the defense-in-depth category. We extend this analysis in two directions, as we detail now.

First, we investigate the mu security of several TBC-based variants of the counter encryption mode (including CTRT, the encryption mode used within SCT-II) that differ by the way a nonce, a random value, and a counter are combined as tweak and plaintext inputs to the TBC to produce the keystream blocks that will mask the plaintext blocks. Then, we consider the authentication part of SCT-II and study the mu security of the nonce-based MAC Nonce-as-Tweak (NaT) built from a TBC and an almost universal (AU) hash function. We also observe that the standard construction of an AU hash function from a (T)BC can be proven secure under the assumption that the underlying TBC is unpredictable rather than pseudorandom, allowing much better conjectures on the concrete AU advantage. This allows us to derive the mu security of the family of nAE modes obtained by combining these encryption/MAC building blocks through the NSIV composition method.

Some of these modes require an underlying TBC with a larger tweak length than what is usually available for existing ones. We then show the practicality of our modes by instantiating them with two new TBC constructions, Deoxys-TBC-512 and Deoxys-TBC-640, which can be seen as natural extensions of the Deoxys-TBC family to larger tweak input sizes. Designing such TBCs with unusually large tweaks is prone to pitfalls: Indeed, we show that a large-tweak proposal for SKINNY published at EUROCRYPT 2020 presents an inherent construction flaw. We therefore provide a sound design strategy to construct large-tweak TBCs within the Superposition Tweakey (STK) framework, leading to new Deoxys-TBC and SKINNY variants. We provide software benchmarks indicating that while ensuring a very high security level, the performances of our proposals remain very competitive.

Nilanjan Datta, Avijit Dutta, Eik List, Sougata Mandal
Published 2024-07-08 PDF

There has been a notable surge of research on leakage-resilient authenticated encryption (AE) schemes, in the bounded as well as the unbounded leakage model. The latter has garnered significant attention due to its detailed and practical orientation. Designers have commonly utilized (tweakable) block ciphers, exemplified by the TEDT scheme, achieving $\mathcal{O}(n-\log(n^2))$-bit integrity under leakage and comparable AE security in the black-box setting. However, the privacy of TEDT was limited by $n/2$-bits under leakage; TEDT2 sought to overcome these limitations by achieving improved security with $\mathcal{O}(n-\log n)$-bit integrity and privacy under leakage.

This work introduces FEDT, an efficient leakage-resilient authenticated encryption (AE) scheme based on a forkcipher. Compared to the state-of-the-art schemes TEDT and TEDT2, which process messages at a rate of $1/2$ block per primitive call for encryption and one for authentication, FEDT doubles their rates at the price of a different primitive. FEDT employs a more parallelizable tree-based encryption compared to its predecessors while maintaining $\mathcal{O}(n-\log n)$-bit security for both privacy and integrity under leakage. FEDT prioritizes high throughput at the cost of increased latency. For settings where latency is important, we propose FEDT*, which combines the authentication part of FEDT with CTR-based encryption. FEDT* offers security equivalent to FEDT while increasing the encryption rate to $4/3$ and reducing the latency.

Thomas Attema, Aron van Baarsen, Stefan van den Berg, Pedro Capitão, Vincent Dunning, Lisa Kohl
Published 2024-07-08 PDF

Despite much progress, general-purpose secure multi-party computation (MPC) with active security may still be prohibitively expensive in settings with large input datasets. This particularly applies to the secure evaluation of graph algorithms, where each party holds a subset of a large graph. Recently, Araki et al. (ACM CCS '21) showed that dedicated solutions may provide significantly better efficiency if the input graph is sparse. In particular, they provide an efficient protocol for the secure evaluation of “message passing” algorithms, such as the PageRank algorithm. Their protocol's computation and communication complexity are both $\tilde{O}(M\cdot B)$ instead of the $O(M^2)$ complexity achieved by general-purpose MPC protocols, where $M$ denotes the number of nodes and $B$ the (average) number of incoming edges per node. On the downside, their approach achieves only a relatively weak security notion; $1$-out-of-$3$ malicious security with selective abort.

In this work, we show that PageRank can instead be captured efficiently as a restricted multiplication straight-line (RMS) program, and present a new actively secure MPC protocol tailored to handle RMS programs. In particular, we show that the local knowledge of the participants can be leveraged towards the first maliciously-secure protocol with communication complexity linear in $M$, independently of the sparsity of the graph. We present two variants of our protocol. In our communication-optimized protocol, going from semi-honest to malicious security only introduces a small communication overhead, but results in quadratic computation complexity $O(M^2)$. In our balanced protocol, we still achieve a linear communication complexity $O(M)$, although with worse constants, but a significantly better computational complexity scaling with $O(M\cdot B)$. Additionally, our protocols achieve security with identifiable abort and can tolerate up to $n-1$ corruptions.
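
For reference, here is a plaintext sketch of the PageRank message-passing computation that the protocols above evaluate securely, written in sparse form so one iteration costs $O(M \cdot B)$ as in the complexities quoted; this is the function being computed, not the MPC protocol itself.

```python
# Plaintext PageRank as sparse "message passing": each node sends
# rank/out-degree along its outgoing edges, then ranks are re-normalized
# with a damping factor. One iteration touches each edge once.
def pagerank(edges, n_nodes, damping=0.85, iters=20):
    """edges: list of (src, dst) pairs; returns a PageRank score per node."""
    out_deg = [0] * n_nodes
    for s, _ in edges:
        out_deg[s] += 1
    rank = [1.0 / n_nodes] * n_nodes
    for _ in range(iters):
        incoming = [0.0] * n_nodes
        for s, d in edges:  # message from s to d
            incoming[d] += rank[s] / out_deg[s]
        rank = [(1 - damping) / n_nodes + damping * m for m in incoming]
    return rank

print(pagerank([(0, 1), (1, 2), (2, 0), (2, 1)], n_nodes=3))
```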

Gaëtan Cassiers, Loïc Masure, Charles Momin, Thorben Moos, Amir Moradi, François-Xavier Standaert
Published 2024-07-08 PDF

Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed. The common denominator among all of them is the implicit demand for robust and high entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security tradeoff of masked implementations. This is especially relevant in case of hardware masking schemes which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed and with properties suitable for masked hardware implementations. In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an unrolled fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes. According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle for the same or lower cost as Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers. As a result of our studies, we provide an evidence-based estimate for the cost of securely generating $n$ fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as $20n$ to $30n$ ASIC gate equivalents (GE) or $3n$ to $4n$ FPGA look-up tables (LUTs), where $n$ is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
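
To make the unrolling point concrete, here is a sketch of a Fibonacci LFSR stepped several times per call so that many output bits are produced per "cycle"; the 16-bit maximal-length taps are a textbook example and an assumption of this sketch, and, as the abstract stresses, such linear outputs do not provide black-box security.

```python
# "Unrolled" Fibonacci LFSR: step the register n times per call to emit n
# bits at once, mirroring how hardware unrolling yields many bits per clock.
TAPS = (16, 14, 13, 11)  # feedback polynomial x^16 + x^14 + x^13 + x^11 + 1

def lfsr_unrolled(state: int, n: int) -> tuple[int, list[int]]:
    """Return (new 16-bit state, n output bits)."""
    out = []
    for _ in range(n):
        fb = 0
        for t in TAPS:
            fb ^= (state >> (t - 1)) & 1   # XOR the tapped bits
        out.append(state & 1)              # emit the low bit
        state = (state >> 1) | (fb << 15)  # shift in the feedback bit
    return state, out

state = 0xACE1  # arbitrary nonzero seed
state, bits = lfsr_unrolled(state, 8)  # 8 "fresh" bits in one call
print(bits)
```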

Estuardo Alpirez Bock, Chris Brzuska, Russell W. F. Lai
Published 2024-07-08 PDF

Watermarking schemes for pseudorandom functions (PRFs) allow an authority to embed an unforgeable and unremovable watermark into a PRF while preserving its functionality. In this work, we extend the work of Kim and Wu [Crypto'19], who gave a simple two-step construction of watermarking PRFs from a class of extractable PRFs satisfying several other properties: first construct a mark-embedding scheme, and then upgrade it to a message-embedding scheme.

While the message-embedding scheme of Kim and Wu is based on complex homomorphic evaluation techniques, we observe that much simpler constructions can be obtained, and from a wider range of assumptions, if we forgo the strong requirement of security against the watermarking authority. Concretely, we introduce a new notion called extractable PRGs (xPRGs), from which extractable PRFs (without security against authorities) suitable for the Kim-Wu transformations can be simply obtained via the Goldreich-Goldwasser-Micali (GGM) construction. We provide simple constructions of xPRGs from a wide range of assumptions, such as the hardness of computational Diffie-Hellman (CDH) in the random oracle model, as well as LWE and RSA in the standard model.
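
A minimal sketch of the GGM construction referenced above, with SHA-256 standing in for the length-doubling PRG purely for illustration; the paper's contribution concerns which PRGs (xPRGs) make the resulting PRF extractable, not this skeleton.

```python
import hashlib

# GGM: a PRF from a length-doubling PRG by walking a binary tree keyed at
# the root; each input bit selects the left or right PRG output half.
def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG stand-in: 32-byte seed -> two 32-byte outputs."""
    left = hashlib.sha256(b"\x00" + seed).digest()
    right = hashlib.sha256(b"\x01" + seed).digest()
    return left, right

def ggm_prf(key: bytes, input_bits: str) -> bytes:
    node = key
    for bit in input_bits:  # descend left on '0', right on '1'
        node = prg(node)[int(bit)]
    return node

key = hashlib.sha256(b"root key").digest()
print(ggm_prf(key, "0110").hex())
```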

Guilhèm Assael, Philippe Elbaz-Vincent
Published 2024-07-08 PDF

Several cryptographic schemes, including lattice-based cryptography and the SHA-2 family of hash functions, involve both integer arithmetic and Boolean logic. Each of these classes of operations, considered separately, can be efficiently implemented under the masking countermeasure when resistance against vertical attacks is required. However, protecting interleaved arithmetic and logic operations is much more expensive, requiring either additional masking conversions to switch between masking schemes, or implementing arithmetic functions as nonlinear operations over a Boolean masking. Both solutions can be achieved by providing masked arithmetic addition over Boolean shares, which is an operation with relatively long latency and usually high area utilization in hardware. A further complication arises when the arithmetic performed by the scheme is over a prime modulus, which is common in lattice-based cryptography. In this work, we propose a first-order masked implementation of arithmetic addition over Boolean shares occupying a very small area, while still having reasonable latency. Our proposal is specifically tuned for efficient addition and subtraction modulo an arbitrary integer, but it can also be configured at runtime for power-of-two arithmetic. To the best of our knowledge, we propose the first such construction whose security is formally proven in the glitch+transition-robust probing model.

Qinyi Li, Xavier Boyen
Published 2024-07-08 PDF

Public-key searchable encryption allows keyword-associated tokens to be used to test whether a ciphertext contains specific keywords. Due to the low entropy of keywords, the token holder can create ciphertexts from candidate keywords and test them using the token in hand to recover the keywords, an attack known as the inside keyword guessing attack (IKGA). Public-key authenticated encryption with keyword search (PAEKS) is a form of searchable encryption proposed to defend against such attacks: the sender's private key protects the ciphertexts from the IKGA. PAEKS schemes with reasonable security and practical efficiency remain elusive despite many proposals. This work provides a simple generic PAEKS scheme from non-interactive key exchange (NIKE) and symmetric-key equality-predicate encryption, with three new constructions for the latter, respectively from pseudorandom functions (PRFs), the decision bilinear Diffie-Hellman assumption, and the learning-with-errors assumption. Instantiating our generic scheme, we derive several PAEKS schemes from the most well-known assumptions, some of which achieve full cipher-keyword indistinguishability and full token indistinguishability in the standard model for the first time. Our instantiated schemes allow practical implementations and outperform the existing PAEKS schemes under the same assumptions.

Jianhua Wang, Tao Huang, Shuang Wu, Zilong Liu
Published 2024-07-08 PDF

In this paper, we explore the design of low-latency authenticated encryption schemes, particularly for memory encryption, with a focus on the temporal uniqueness property. To achieve this, we present the low-latency Pseudo-Random Function (PRF) called Twinkle, with an output of up to 1152 bits. Leveraging only one block of Twinkle, we developed Twinkle-AE, a specialized authenticated encryption scheme with six variants covering different cache line sizes and security requirements. We also propose Twinkle-PA, a pointer authentication algorithm, which takes a 64-bit pointer and a 64-bit context as input and outputs a tag of 1 to 32 bits.

We conducted thorough security evaluations of both the PRFs and these schemes, examining their robustness against various common attacks. The results of our cryptanalysis indicate that these designs successfully achieve their targeted security objectives.

Hardware implementations using the FreePDK45nm library show that Twinkle-AE achieves an encryption and authentication latency of 3.83 ns for a cache line. In comparison, the AES-CTR with WC-MAC scheme and Ascon-128a achieve latencies of 9.78 ns and 27.30 ns, respectively. Moreover, Twinkle-AE is also the most area-efficient option for the 1024-bit cache line. For the pointer authentication scheme Twinkle-PA, the latency is 2.04 ns, while QARMA-64-sigma0 has a latency of 5.57 ns.

Scott Griffy, Anna Lysyanskaya
Published 2024-07-08 PDF

To be useful and widely accepted, automated contact tracing schemes (also called exposure notification) need to solve two seemingly contradictory problems at the same time: they need to protect the anonymity of honest users while also preventing malicious users from creating false alarms. In this paper, we provide, for the first time, an exposure notification construction that guarantees the same levels of privacy and integrity as existing schemes but with a fully malicious database (notably similar to Auerbach et al. CT-RSA 2021) without special restrictions on the adversary. We construct a new definition so that we can formally prove our construction secure. Our definition ensures the following integrity guarantees: no malicious user can cause exposure warnings in two locations at the same time and that any uploaded exposure notifications must be recent and not previously uploaded. Our construction is efficient, requiring only a single message to be broadcast at contact time no matter how many recipients are nearby. To notify contacts of potential infection, an infected user uploads data with size linear in the number of notifications, similar to other schemes. Linear upload complexity is not trivial with our assumptions and guarantees (a naive scheme would be quadratic). This linear complexity is achieved with a new primitive: zero knowledge subset proofs over commitments which is used by our "no cloning" proof protocol. We also introduce another new primitive: set commitments on equivalence classes, which makes each step of our construction more efficient. Both of these new primitives are of independent interest.

Gil Segev, Liat Shapira
Published 2024-07-08 PDF

In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability.

Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce. Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.
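To make the shape of such an assumption concrete: schematically, and with notation simplified from the paper, second-moment hardness for a problem over a group of prime order $q$ bounds every algorithm $\mathcal{A}$ with running time $T_{\mathcal{A}}$ by \[ \Pr\left[\mathcal{A} \text{ solves the problem}\right] \;\leq\; \frac{\mathbb{E}\left[T_{\mathcal{A}}^2\right]}{q}, \] mirroring the generic-group intuition that time-$T$ algorithms compute discrete logarithms with probability $O(T^2/q)$. This is a simplified rendering for intuition only; the paper's formal statements fix the precise quantifiers and parameters.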

Ji Luo
Published 2024-07-08 PDF

Traitor tracing schemes [Chor–Fiat–Naor, Crypto ’94] help content distributors fight against piracy and are defined with the content distributor as a trusted authority having access to the secret keys of all users. While the traditional model caters well to its original motivation, its centralized nature makes it unsuitable for many scenarios. For usage among mutually untrusted parties, a notion of *ad hoc* traitor tracing (naturally with the capability of broadcast and revocation) is proposed and studied in this work. Such a scheme allows users in the system to generate their own public/secret key pairs, without trusting any other entity. To encrypt, a list of public keys is used to identify the set of recipients, and decryption is possible with a secret key for any of the public keys in the list. In addition, there is a tracing algorithm that given a list of recipients’ public keys and a pirate decoder capable of decrypting ciphertexts encrypted to them, identifies at least one recipient whose secret key must have been used to construct the said decoder.

Two constructions are presented. The first is based on functional encryption for circuits (conceptually, obfuscation) and has constant-size ciphertext, yet its decryption time is linear in the number of recipients. The second is a generic transformation that reduces decryption time at the cost of increased ciphertext size. A matching lower bound on the trade-off between ciphertext size and decryption time is shown, indicating that the two constructions achieve all possible optimal trade-offs, i.e., they fully demonstrate the Pareto front of efficiency. The lower bound also applies to broadcast encryption (hence all mildly expressive attribute-based encryption schemes) and is of independent interest.

Vincent Hwang
Published 2024-07-08 PDF

We survey various mathematical tools used in software implementations of polynomial multiplication in \[ \frac{\mathbb{Z}_q[x]}{\left\langle {x^n - \alpha x - \beta} \right\rangle}. \] In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with the instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2.

There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as Cooley–Tukey FFT, Good–Thomas FFT, Bruun's FFT, Rader's FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including localization, Schönhage's FFT, Nussbaumer's FFT, and coefficient ring switching; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, truncation, incomplete transformation, striding, and Toeplitz matrix-vector product. For vectorization, we survey the relations between homomorphisms and vector arithmetic.
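As a small, concrete illustration of the first emphasis, the sketch below implements Montgomery multiplication in pure Python for the Kyber prime $q = 3329$ with $R = 2^{16}$; the surveyed implementations realize the same arithmetic with tuned machine instructions.

```python
# A minimal sketch of Montgomery multiplication, one of the modular
# multiplication techniques the survey covers. Parameters below are the
# Kyber prime q = 3329 with R = 2^16; they are illustrative only.
Q = 3329
R = 2**16
QINV = pow(-Q, -1, R)            # -Q^{-1} mod R

def montgomery_reduce(t: int) -> int:
    """For 0 <= t < Q*R, return t * R^{-1} mod Q."""
    m = (t * QINV) % R           # chosen so that t + m*Q == 0 (mod R)
    u = (t + m * Q) >> 16        # exact shift: the low 16 bits cancel
    return u - Q if u >= Q else u

def montgomery_mul(a: int, b: int) -> int:
    """Multiply values in Montgomery form: (aR, bR) -> abR mod Q."""
    return montgomery_reduce(a * b)

# Round trip: enter the Montgomery domain, multiply, leave the domain.
a, b = 1234, 2345
aR, bR = a * R % Q, b * R % Q
assert montgomery_reduce(montgomery_mul(aR, bR)) == a * b % Q
```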

We then go through several case studies: We compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.

Décio Luiz Gazzoni Filho, Tomás S. R. Silva, Julio López
Published 2024-07-08 PDF

We present a solution to the open problem of designing a linear-time, unbiased, and timing-attack-resistant shuffling algorithm for fixed-weight sampling. Although it can be implemented without timing leakage of secret data on any architecture, we illustrate with ARMv7-M and ARMv8-A implementations; for the latter, we take advantage of architectural features such as NEON and conditional instructions, which are representative of features available on architectures targeting similar systems, such as Intel. Our proposed algorithm improves asymptotically upon the current approach based on constant-time sorting networks ($O(n)$ versus $O(n \log^2 n)$), and an implementation of the new algorithm applied to NTRU is also faster in practice, by a factor of up to $6.91\ (591\%)$ on ARMv8-A cores and $12.89\ (1189\%)$ on the Cortex-M4; it also requires fewer uniform random bits. This translates into performance improvements for NTRU encapsulation, compared to state-of-the-art implementations, of up to 50% on ARMv8-A cores and 72% on the Cortex-M4, and small improvements to key generation (up to 2.7% on ARMv8-A cores and 6.1% on the Cortex-M4), with negligible impact on code size and a slight improvement in RAM usage for the Cortex-M4.
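For intuition, the sketch below shows the functional goal (unbiased fixed-weight sampling via a Fisher-Yates shuffle) in pure Python; it is not the paper's constant-time algorithm, since it contains exactly the secret-dependent memory accesses the paper's design avoids.

```python
import secrets

# Functional sketch of fixed-weight sampling via an unbiased Fisher-Yates
# shuffle. This pure-Python version is NOT timing-attack resistant; the
# paper's contribution is a linear-time shuffle that avoids the
# secret-dependent branches and memory accesses present here.
def fixed_weight_vector(n: int, w: int) -> list[int]:
    """Uniformly random length-n binary vector of Hamming weight w."""
    v = [1] * w + [0] * (n - w)
    for i in range(n - 1, 0, -1):
        j = secrets.randbelow(i + 1)    # uniform in {0, ..., i}
        v[i], v[j] = v[j], v[i]
    return v

sample = fixed_weight_vector(509, 127)  # illustrative parameters
assert sum(sample) == 127
```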

Nouri Alnahawi, Johannes Müller, Jan Oupický, Alexander Wiesmaier
Published 2024-07-08 PDF

Transport Layer Security (TLS) is the backbone security protocol of the Internet. As this fundamental protocol is at risk from future quantum attackers, many proposals have been made to protect TLS against this threat by implementing post-quantum cryptography (PQC). The widespread interest in post-quantum TLS has given rise to a large number of solutions over the last decade. These proposals differ in many aspects, including the security properties they seek to protect, the efficiency and trustworthiness of their post-quantum building blocks, and the application scenarios they consider, to name a few.

Based on an extensive literature review, we classify existing solutions according to their general approaches, analyze their individual contributions, and present the results of our extensive performance experiments. Based on these insights, we identify the most reasonable candidates for post-quantum TLS, determine which research problems in this area have already been solved, and point out which are still open. Overall, our work provides a well-founded reference point for researching post-quantum TLS and preparing TLS in practice for the quantum age.

Ky Nguyen, David Pointcheval, Robert Schädlich
Published 2024-07-08 PDF

Decentralized Multi-Client Functional Encryption (DMCFE) extends basic functional encryption to multiple clients that do not trust each other. The clients independently encrypt the plaintext inputs that are given for evaluation to the function embedded in the functional decryption key, which is defined by multiple parameter inputs. They also retain control over these functions, since all of them have to contribute to the generation of the functional decryption keys. Tags can be used in the ciphertexts and the keys to specify which inputs can be combined together. Like any encryption scheme, DMCFE provides privacy of the plaintexts. But the functions associated with the functional decryption keys might be sensitive too (e.g., a model in machine learning). The function-hiding property has thus been introduced to additionally protect the function evaluated during the decryption process.

In this paper, we provide new proof techniques to analyze a new concrete construction of function-hiding DMCFE for inner products, with strong security guarantees: the adversary can adaptively query multiple challenge ciphertexts and multiple challenge keys, with unbounded repetitions of the same tags in the ciphertext-queries and a fixed polynomially-large number of repetitions of the same tags in the key-queries. Previous constructions were proven secure in the selective setting only.

Camille Mutschler, Laurent Imbert, Thomas Roche
Published 2024-07-08 PDF

We introduce InspectorGadget, an open-source Python-based software tool for assessing and comparing the complexity of masking gadgets. Given a limited set of characteristics of a hardware platform, our tool estimates the cost of a masking gadget in terms of cycle-count equivalent and memory footprint. InspectorGadget is highly flexible. It enables the user to define her own estimation functions, as well as to expand the set of gadgets and predefined microcontrollers. As a case study, we produce a fair comparison of several masked versions of the Kyber compression function from the literature, together with novel alternatives automatically generated by our tool. Our results confirm that an interesting middle ground exists between theoretical performance measures (asymptotic complexity or operation counts) and real implementation benchmarks (clock-cycle-accurate evaluations). InspectorGadget offers both simplicity and genericity while capturing the main performance-related parameters of a hardware platform.
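The snippet below sketches, hypothetically, the kind of estimate such a tool automates: counting a gadget's basic operations and mapping them to cycles via user-supplied platform characteristics. All names and numbers are illustrative and do not reflect InspectorGadget's actual API or data.

```python
# A hypothetical sketch of the kind of estimate InspectorGadget automates:
# count a gadget's basic operations, then map them to a cycle count using
# user-supplied platform characteristics. Names and numbers below are
# illustrative; they are not the tool's actual API or data.
from dataclasses import dataclass

@dataclass
class Platform:
    cycles_per_op: dict[str, int]   # e.g. {"xor": 1, "and": 1, "load": 2}
    word_bits: int

def estimate_cycles(op_counts: dict[str, int], platform: Platform) -> int:
    """Cycle-count equivalent of a gadget given its operation counts."""
    return sum(platform.cycles_per_op.get(op, 1) * n
               for op, n in op_counts.items())

# Toy example: an order-1 multiplication gadget's operation profile.
m4_like = Platform(cycles_per_op={"xor": 1, "and": 1, "load": 2, "store": 2},
                   word_bits=32)
print(estimate_cycles({"and": 4, "xor": 4, "load": 2, "store": 1}, m4_like))
```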

Sean Murphy, Rachel Player
Published 2024-07-08 PDF

This paper develops Central Limit arguments for analysing the noise in ciphertexts in two homomorphic encryption schemes that are based on Ring-LWE. The first main contribution of this paper is to present and evaluate an average-case noise analysis for the BGV scheme. Our approach relies on the recent work of Costache et al. (SAC 2023) that gives the approximation of a polynomial product as a multivariate Normal distribution. We show how this result can be applied in the BGV context and evaluate its efficacy. We find that this average-case approach models the noise growth in BGV implementations much more closely than prior approaches, although in some cases it can also underestimate the practical noise growth. Our second main contribution is to develop a Central Limit framework to analyse the noise growth in the homomorphic Ring-LWE cryptosystem of Lyubashevsky, Peikert and Regev (Eurocrypt 2013, full version). Our approach is very general: apart from finite variance, no assumption on the distribution of the noise is required (in particular, the noise need not be subgaussian). We show that our approach leads to tighter bounds for the probability of decryption failure than those of prior work.
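The following toy Monte Carlo experiment illustrates the Central Limit heuristic underlying such analyses: each coefficient of a product in $\mathbb{Z}[x]/(x^n+1)$ is a sum of $n$ coefficient products, hence approximately Normal with variance $n \cdot \mathrm{Var}(a_i) \cdot \mathrm{Var}(b_i)$. The parameters are illustrative and unrelated to the schemes' actual distributions.

```python
import random, statistics

# Toy numerical check of the Central Limit heuristic: the constant
# coefficient of a*b mod (x^n + 1) is a sum of n coefficient products,
# so it is approximately Normal with variance n * Var(a_i) * Var(b_i).
n, trials, bound = 256, 2000, 3

def negacyclic_coeff0(a, b):
    # Constant coefficient of a*b mod (x^n + 1): a_0*b_0 - sum a_i*b_{n-i}.
    return a[0] * b[0] - sum(a[i] * b[n - i] for i in range(1, n))

samples = []
for _ in range(trials):
    a = [random.randint(-bound, bound) for _ in range(n)]
    b = [random.randint(-bound, bound) for _ in range(n)]
    samples.append(negacyclic_coeff0(a, b))

var_coeff = statistics.variance(
    [random.randint(-bound, bound) for _ in range(10**5)])
print("empirical:", statistics.variance(samples))
print("predicted:", n * var_coeff * var_coeff)
```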

Marcel Tiepelt, Christian Martin, Nils Maeurer
Published 2024-04-09 PDF

Transitioning from classically to quantum secure key agreement protocols may require exchanging fundamental components, for example, replacing a Diffie-Hellman-like key exchange with a key encapsulation mechanism (KEM). Accordingly, the corresponding security proof can no longer rely on the Diffie-Hellman assumption, thus invalidating the security guarantees. As a consequence, the security properties have to be re-proven under a KEM-based security notion.

We initiate the study of the LDACS key agreement protocol (Edition 01.01.00 from 25.04.2023), which is soon to be standardized by the International Civil Aviation Organization. The protocol's cipher suite features Diffie-Hellman as well as a KEM-based key agreement protocol to provide post-quantum security. While the former results in an instantiation of an ISO key agreement inheriting all its security properties, the security achieved by the latter is ambiguous. We formalize the computational security using the systematic notions of de Saint Guilhem, Fischlin and Warinschi (CSF '20), and prove the exact security that the KEM-based variant achieves in this model: primarily entity authentication, key secrecy and key authentication. To further strengthen our "pen-and-paper" findings, we model the protocol and its security guarantees using Tamarin, providing an automated proof of security against a Dolev-Yao attacker.

Charles Bouillaguet, Julia Sauvage
Published 2024-04-09 PDF

Biscuit is a recent multivariate signature scheme based on the MPC-in-the-Head paradigm. It has been submitted to the NIST competition for additional signature schemes. Signatures are derived from a zero-knowledge proof of knowledge of the solution of a structured polynomial system. This extra structure enables efficient proofs and compact signatures. This short note demonstrates that it also makes these polynomial systems easier to solve than random ones. As a consequence, the original parameters of Biscuit failed to meet the required security levels and had to be upgraded.

Manuel Barbosa, Deirdre Connolly, João Diogo Duarte, Aaron Kaiser, Peter Schwabe, Karolin Varner, Bas Westerbaan
Published 2024-04-09 PDF

X-Wing is a hybrid key-encapsulation mechanism based on X25519 and ML-KEM-768. It is designed to be the sensible choice for most applications. The concrete choice of X25519 and ML-KEM-768 allows X-Wing to achieve improved efficiency compared to using a generic KEM combiner. In this paper, we introduce the X-Wing hybrid KEM construction and provide a proof of security. We show (1) that X-Wing is a classically IND-CCA secure KEM if the strong Diffie-Hellman assumption holds in the X25519 nominal group, and (2) that X-Wing is a post-quantum IND-CCA secure KEM if ML-KEM-768 is itself an IND-CCA secure KEM and SHA3-256 is secure when used as a pseudorandom function. The first result is proved in the ROM, whereas the second one holds in the standard model. Loosely speaking, this means X-Wing is secure if either X25519 or ML-KEM-768 is secure. We stress that these security guarantees and optimizations are only possible due to the concrete choices that were made, and they may not apply in the general case.
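The sketch below shows the general shape of such a concrete combiner: the final shared secret hashes both components' secrets together with the X25519 ciphertext and public key, so that either component's security can carry the combination. The label and exact input ordering are placeholders; the X-Wing specification fixes the real ones.

```python
import hashlib

# Schematic hybrid-KEM combiner in the spirit of X-Wing: SHA3-256 over
# the two shared secrets plus the X25519 ciphertext and public key. The
# label and input ordering here are placeholders, not the X-Wing spec's.
def combine(ss_mlkem: bytes, ss_x25519: bytes,
            ct_x25519: bytes, pk_x25519: bytes) -> bytes:
    label = b"hybrid-combiner-demo"        # placeholder domain separator
    h = hashlib.sha3_256()
    for part in (label, ss_mlkem, ss_x25519, ct_x25519, pk_x25519):
        h.update(part)
    return h.digest()
```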

Yehuda Lindell
Published 2024-04-09 PDF

In a multiparty signing protocol, also known as a threshold signature scheme, the private signing key is shared amongst a set of parties and only a quorum of those parties can generate a signature. Research on multiparty signing has been growing in popularity recently due to its application to cryptocurrencies. Most work has focused on reducing the number of rounds to two, and as a result such protocols: (a) are not fully simulatable in the sense of MPC real/ideal security definitions, and/or (b) are not secure under concurrent composition, and/or (c) utilize non-standard assumptions of different types in their proofs of security. In this paper, we describe a simple three-round multiparty protocol for Schnorr signatures that is secure for any number of corrupted parties, i.e., in the setting of a dishonest majority. The protocol is fully simulatable, secure under concurrent composition, and proven secure in the standard model or random-oracle model (depending on the instantiations of the commitment and zero-knowledge primitives). The protocol realizes an ideal Schnorr signing functionality with perfect security in the ideal commitment and zero-knowledge hybrid model (and thus the only assumptions needed are for realizing these functionalities).

In our presentation, we do not assume that all parties begin with the message to be signed, the identities of the participating parties and a unique common session identifier, since this is often not the case in practice. Rather, the parties achieve consensus on these parameters as the protocol progresses.
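For orientation, the snippet below sketches only the core signing equation with additively shared keys and nonces, which the three-round protocol wraps in commitments and zero-knowledge proofs (both omitted here); the group parameters are toy values, not secure choices.

```python
import hashlib, secrets

# Bare-bones Schnorr signing with additively shared keys and nonces: the
# core equation that the three-round protocol protects with commitments
# and zero-knowledge proofs (omitted here). Toy parameters only.
P, Q, G = 2039, 1019, 4          # P = 2Q + 1, G generates the order-Q subgroup

def H(R: int, pk: int, msg: bytes) -> int:
    data = str(R).encode() + str(pk).encode() + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

n = 3
x = [secrets.randbelow(Q) for _ in range(n)]       # additive key shares
pk = pow(G, sum(x) % Q, P)

k = [secrets.randbelow(Q) for _ in range(n)]       # additive nonce shares
R = 1
for ki in k:
    R = R * pow(G, ki, P) % P                      # R = G^(k_1 + ... + k_n)

e = H(R, pk, b"message")
s = sum((ki + e * xi) % Q for ki, xi in zip(k, x)) % Q   # aggregate s_i

# Standard Schnorr verification: G^s == R * pk^e (mod P).
assert pow(G, s, P) == R * pow(pk, e, P) % P
```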

Thomas Pornin
Published 2024-04-09 PDF

This paper describes a generic methodology for obtaining unified, and then complete formulas for a prime-order group abstraction homomorphic to a subgroup of an elliptic curve with even order. The method is applicable to any curve with even order, in finite fields of both even and odd characteristic; it is most efficient on curves with order equal to 2 modulo 4, dubbed "double-odd curves". In large characteristic fields, we obtain doubling formulas with cost as low as 1M + 5S, and the resulting group allows building schemes such as signatures that outperform existing fast solutions, e.g. Ed25519. In binary fields, the obtained formulas are not only complete but also faster than previously known incomplete formulas; we can sign and verify in as low as 18k and 27k cycles on x86 CPUs, respectively.

Shichang Wang, Meicheng Liu, Shiqi Hou, Dongdai Lin
Published 2024-04-09 PDF

At CHES 2017, Banik et al. proposed the lightweight block cipher GIFT, which comes in two versions, GIFT-64 and GIFT-128. Recently, many authenticated encryption schemes have adopted GIFT-128 as their underlying primitive, such as GIFT-COFB and HyENA. To provide a comprehensive understanding of the soundness of these designs, we evaluate their security against differential-linear cryptanalysis.

To this end, we develop automatic tools to search for differential-linear approximations of S-box-based ciphers. With the assistance of these tools, we find 13-round differential-linear approximations for GIFT-COFB and HyENA. Based on these distinguishers, we give 18-round key-recovery attacks on the message processing phase and the initialization phase of both ciphers. Moreover, we also evaluate the resistance of GIFT-64 and GIFT-128 against differential-linear cryptanalysis: we find 12-round and 17-round differential-linear approximations, respectively, which lead to 18-round and 19-round key-recovery attacks. We stress that our attacks do not threaten the security of these ciphers.

Décio Luiz Gazzoni Filho, Guilherme Brandão, Julio López
Published 2024-04-09 PDF

Efficient polynomial multiplication routines are critical to the performance of lattice-based post-quantum cryptography (PQC). As PQC standards only recently started to emerge, CPUs still lack specialized instructions to accelerate such routines. Meanwhile, deep learning has grown immeasurably in importance. Its workloads call for teraflops of processing power for linear algebra operations, mainly matrix multiplication. Computer architects have responded by introducing ISA extensions, coprocessors and special-purpose cores to accelerate such operations. In particular, Apple ships an undocumented matrix-multiplication coprocessor, AMX, in hundreds of millions of mobile phones, tablets and personal computers. Our work repurposes AMX to implement polynomial multiplication and applies it to the NTRU cryptosystem, setting new speed records on the Apple M1 and M3 systems-on-chip (SoCs): polynomial multiplication, key generation, encapsulation and decapsulation are sped up by $1.54$–$3.07\times$, $1.08$–$1.33\times$, $1.11$–$1.50\times$ and $1.20$–$1.98\times$, respectively, over the previous state-of-the-art.
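A pure-Python sketch of the underlying correspondence: multiplication in the NTRU ring $\mathbb{Z}_q[x]/(x^n-1)$ is a circulant (Toeplitz) matrix-vector product, exactly the workload shape a matrix engine like AMX accelerates. The code shows the mathematical mapping only, not the AMX implementation.

```python
# Multiplication in Z_q[x]/(x^n - 1) as a circulant matrix-vector product,
# the shape of computation that matrix-multiply hardware accelerates.
def circulant(a, q):
    n = len(a)
    # Row i, column j holds the coefficient multiplying b[j] in c[i].
    return [[a[(i - j) % n] % q for j in range(n)] for i in range(n)]

def ntru_mul(a, b, q):
    n = len(a)
    M = circulant(a, q)
    return [sum(M[i][j] * b[j] for j in range(n)) % q for i in range(n)]

# Tiny check against schoolbook multiplication reduced mod x^n - 1.
a, b, q = [1, 2, 3, 4], [5, 6, 7, 8], 17
c = [0] * 4
for i in range(4):
    for j in range(4):
        c[(i + j) % 4] = (c[(i + j) % 4] + a[i] * b[j]) % q
assert ntru_mul(a, b, q) == c
```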

Yi-Hsiu Chen, Yehuda Lindell
Published 2024-04-09 PDF

Verifiable secret sharing (VSS) protocols enable parties to share secrets while guaranteeing security (in particular, that all parties hold valid and consistent shares) even if the dealer or some of the participants are malicious. Most work on VSS focuses on the honest majority case, primarily since it enables one to guarantee output delivery (e.g., a corrupted recipient cannot prevent an honest dealer from sharing their value). Feldman's VSS is a well known and popular protocol for this task and relies on the discrete log hardness assumption. In this paper, we present a variant of Feldman's VSS for the dishonest majority setting and formally prove its security. Beyond the basic VSS protocol, we present a publicly-verifiable version, as well as show how to securely add participants to the sharing and how to refresh an existing sharing (all secure in the presence of a dishonest majority). We prove that our protocols are UC secure, for appropriately defined ideal functionalities.
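As background for the construction, the snippet below sketches the textbook (honest-majority) Feldman verification step that the paper builds on; the dishonest-majority variant, the publicly verifiable version, and the UC treatment differ from this toy rendering.

```python
import random

# Toy sketch of the classic Feldman VSS share-verification step. The
# paper adapts this idea to the dishonest-majority setting with UC
# proofs; the textbook version and tiny parameters below are for
# illustration only.
P, Q, G = 2039, 1019, 4          # P = 2Q + 1, G generates the order-Q subgroup

def deal(secret, t, n):
    """Dealer: threshold-t Shamir shares plus Feldman commitments g^{a_k}."""
    coeffs = [secret % Q] + [random.randrange(Q) for _ in range(t - 1)]
    shares = {i: sum(c * pow(i, k, Q) for k, c in enumerate(coeffs)) % Q
              for i in range(1, n + 1)}
    commitments = [pow(G, c, P) for c in coeffs]
    return shares, commitments

def verify(i, share_i, commitments):
    """Party i checks G^{share_i} == prod_k C_k^{i^k} (mod P)."""
    rhs = 1
    for k, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, k, Q), P) % P
    return pow(G, share_i, P) == rhs

shares, comms = deal(secret=123, t=3, n=5)
assert all(verify(i, s, comms) for i, s in shares.items())
assert not verify(1, (shares[1] + 1) % Q, comms)   # a corrupted share fails
```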

Pierrick Méaux, Jeongeun Park, Hilder V. L. Pereira
Published 2024-04-09 PDF

Fully Homomorphic Encryption (FHE) is a powerful tool for achieving non-interactive privacy-preserving protocols with optimal computation/communication complexity. However, its main disadvantage is that the actual communication cost (bandwidth) is high due to the large size of FHE ciphertexts. As a solution, a technique called transciphering (also known as Hybrid Homomorphic Encryption) was introduced to achieve almost optimal bandwidth for such protocols. However, all existing works require clients to fix a precision for the messages or a mathematical structure for the message space beforehand. This results in unwanted constraints on the plaintext size or on the underlying structure of FHE-based applications.

In this article, we introduce a new approach to transciphering which, for the first time, does not require a fixed message precision decided by the client. In more detail, a client uses any kind of FHE-friendly symmetric cipher over $\{0,1\}$ to send its input data encrypted bit by bit; the server can then choose a precision $p$ depending on the application and homomorphically transform the encrypted bits into FHE ciphertexts encrypting integers in $\mathbb{Z}_p$. To illustrate our new technique, we evaluate a transciphering using the FiLIP cipher and adapt the most practical homomorphic evaluation technique [CCS'22] to maintain practical latency. As a result, our proof-of-concept implementation for $p$ from $2^2$ to $2^8$ takes only from $13$ ms to $137$ ms.
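As a conceptual illustration of the server-side step, the snippet below shows the recombination map with plain integers standing in for FHE ciphertexts: the server picks a precision $p$ after receiving the bits and assembles them into an element of $\mathbb{Z}_p$. This depicts only the functional mapping, not the homomorphic evaluation itself.

```python
# Conceptual stand-in for the server-side recombination: plain bits play
# the role of FHE-encrypted bits, showing only the functional map. The
# server is free to choose the precision p after receiving the bits.
def recombine(bits: list[int], p: int) -> int:
    """Map bits b_0..b_{m-1} (LSB first) to sum(b_i * 2^i) mod p."""
    return sum(b << i for i, b in enumerate(bits)) % p

assert recombine([1, 0, 1, 1], p=2**8) == 13   # 1 + 4 + 8
```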

Jingwen Chen, Qun Liu, Yanhong Fan, Lixuan Wu, Boyun Li, Meiqin Wang
Published 2024-04-09 PDF

In recent years, quantum technology has developed rapidly. As security analyses for symmetric ciphers continue to emerge, many require an evaluation of the resources needed for the quantum circuit implementation of the encryption algorithm. In this regard, we propose the quantum circuit decision problem, which asks whether there exists a quantum circuit for a given permutation f using M ancilla qubits and no more than K quantum gates within circuit depth D. First, we investigate heuristic algorithms and classical SAT-based models from previous works, revealing their limitations in solving the problem. Hence, we propose an improved SAT-based model incorporating three metrics of quantum circuits. The model enables us to find the optimal quantum circuit of an arbitrary 3- or 4-bit S-box under a given optimization goal based on SAT solvers, and it confirms the optimality of circuits constructed by the tool LIGHTER-R. Then, by combining different criteria in the model, we find more compact quantum circuit implementations of S-boxes such as RECTANGLE and GIFT. For the GIFT S-box, our model provides the optimal quantum circuit, requiring only 8 gates with a depth of 31. Furthermore, our model can be generalized to linear layers and improves the previous SAT-based model proposed by Huang et al. at ASIACRYPT 2022 by adding criteria on the number of qubits and the circuit depth.

Subhadeep Banik, Andrea Caforio, Serge Vaudenay
Published 2024-04-09 PDF

The LowMC family of block ciphers was proposed by Albrecht et al. at Eurocrypt 2015, specifically targeting adoption in FHE and MPC applications due to its low multiplicative complexity. The construction uses a 3-bit quadratic S-box as the sole non-linear transformation in the algorithm. In contrast, both the linear layer and round key generation are achieved through multiplications by full-rank matrices over GF(2). The cipher is instantiable using a diverse set of default configurations, some of which have partial non-linear layers, i.e., layers in which the S-boxes are not applied over the entire internal state of the cipher.

The significance of cryptanalysing LowMC was elevated by its inclusion in the NIST PQC digital signature scheme PICNIC, in which a successful key recovery using a single plaintext/ciphertext pair is akin to retrieving the secret signing key. The current state-of-the-art attack in this setting is due to Dinur at Eurocrypt 2021, in which a novel way of enumerating the roots of a Boolean system of equations is morphed into a key-recovery procedure that undercuts an ordinary exhaustive search in terms of time complexity for variants of the cipher up to five rounds.

In this work, we demonstrate that this technique can efficiently be enriched with a specific linearization strategy that reduces the algebraic degree of the non-linear layer as put forward by Banik et al. at IACR ToSC 2020(4). This amalgamation yields new attacks on certain instances of LowMC up to seven rounds.

Keewoo Lee
Published 2024-04-09 PDF

We revisit the question of what the definition of bit security should be, previously answered by Micciancio-Walter (Eurocrypt 2018) and Watanabe-Yasunaga (Asiacrypt 2021). Our new definition is simple, but (i) captures both search and decision primitives in a single framework like Micciancio-Walter, and (ii) has a firm operational meaning like Watanabe-Yasunaga. It also matches intuitive expectations and can be well formulated in terms of the Hellinger distance. To support and justify the new definition, we prove several classic security reductions with respect to our notion of bit security. We also provide pathological examples that indicate the ill-definedness of bit security as defined in Micciancio-Walter and Watanabe-Yasunaga.

Loïc Demange, Mélissa Rossi
Published 2024-04-09 PDF

BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of NIST's standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and it provides the most competitive performance among the round-4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community, and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied, in a paper by Chen et al. at SAC 2015. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance.

In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit of bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai and Wagner (Crypto 2003) model.

More practically, we also provide an open-source masked C implementation of the key-generation, encapsulation and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than existing masked lattice-based algorithms, we show that masking at orders 1, 2, 3, 4 and 5 implies performance penalties of 5.8×, 14.2×, 24.4×, 38× and 55.6×, respectively, compared to order 0 (unmasked and unoptimized BIKE). This scaling is encouraging, and no Boolean-to-Arithmetic conversion has been used.

Daniel J. Bernstein
Published 2024-04-09 PDF

This paper reviews, from bottom to top, a polynomial-time algorithm to correct $t$ errors in classical binary Goppa codes defined by squarefree degree-$t$ polynomials. The proof is factored through a proof of a simple Reed–Solomon decoder, and the algorithm is simpler than Patterson's algorithm. All algorithm layers are expressed as Sage scripts backed by test scripts. All theorems are formally verified. The paper also covers the use of decoding inside the Classic McEliece cryptosystem, including reliable recognition of valid inputs.
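For readers who want the flavor of "simple Reed-Solomon decoding", the sketch below implements the classical Berlekamp-Welch method over a toy prime field. The paper's decoder, its Sage scripts, and its formally verified theorems differ; this is an independent illustration, not the paper's algorithm.

```python
# A compact Berlekamp-Welch Reed-Solomon decoder over a toy prime field.
# Find E (monic, degree t) and Q (degree < k + t) with Q(x_i) = y_i E(x_i)
# for all points; then the message polynomial is f = Q / E.
P = 97   # toy field F_97

def gauss_solve(M, v):
    """Solve M x = v over F_P; assumes M is invertible for this demo."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] % P)
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], -1, P)
        A[c] = [x * inv % P for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [(x - f * y) % P for x, y in zip(A[r], A[c])]
    return [A[i][n] for i in range(n)]

def poly_div_exact(num, den):
    """Exact polynomial division over F_P, coefficients low-to-high."""
    num, out = num[:], []
    while len(num) >= len(den):
        c = num[-1] * pow(den[-1], -1, P) % P
        out.append(c)
        for i in range(len(den)):
            num[-len(den) + i] = (num[-len(den) + i] - c * den[i]) % P
        num.pop()
    return out[::-1]

def rs_decode(xs, ys, k, t):
    """Recover f with deg f < k from k + 2t points, at most t in error."""
    M, v = [], []
    for x, y in zip(xs, ys):
        row = [pow(x, j, P) for j in range(k + t)]           # Q(x) part
        row += [(-y * pow(x, j, P)) % P for j in range(t)]   # E(x) part
        M.append(row)
        v.append(y * pow(x, t, P) % P)
    sol = gauss_solve(M, v)
    Qp, E = sol[:k + t], sol[k + t:] + [1]    # E is monic of degree t
    return poly_div_exact(Qp, E)              # f = Q / E

# Demo: f(x) = 5 + 2x + 7x^2 (k = 3), five points, one error (t = 1).
f = [5, 2, 7]
xs = [1, 2, 3, 4, 5]
ys = [sum(c * pow(x, i, P) for i, c in enumerate(f)) % P for x in xs]
ys[2] = (ys[2] + 40) % P                      # corrupt one evaluation
assert rs_decode(xs, ys, k=3, t=1) == f
```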

Marloes Venema, Leon Botros
Published 2024-04-09 PDF

Predicate encryption (PE) is a type of public-key encryption that captures many useful primitives such as attribute-based encryption (ABE). Although much progress has been made to generically achieve security against chosen-plaintext attacks (CPA) efficiently, in practice, we also require security against chosen-ciphertext attacks (CCA). Because achieving CCA-security on a case-by-case basis is a complicated task, several generic conversion methods have been proposed, which typically target different subclasses of PE such as ciphertext-policy ABE. As is common, such conversion methods may sacrifice some efficiency. Notably, for ciphertext-policy ABE, all proposed generic transformations incur a significant decryption overhead. Furthermore, depending on the setting in which PE is used, we may also want to require that messages are signed. To do this, predicate signature schemes can be used. However, such schemes provide a strong notion of privacy for the signer, which may be stronger than necessary for some practical settings at the cost of efficiency.

In this work, we propose the notion of predicate extension, which transforms the predicate used in a PE scheme to include one additional attribute, in both the keys and the ciphertexts. Using predicate extension, we can generically obtain CCA-security and signatures from a CPA-secure PE scheme. For the CCA-security transform, we observe that predicate extension implies a two-step approach to achieving CCA-security. This insight broadens the applicability of existing transforms for specific subclasses of PE to cover all PE. We also propose a new transform that incurs slightly less overhead than existing transforms. Furthermore, we show that predicate extension allows us to create a new type of signatures, which we call PE-based signatures. PE-based signatures are weaker than typical predicate signatures in the sense that they do not provide privacy for the signer. Nevertheless, such signatures may be more suitable for some practical settings owing to their efficiency or reduced interactivity. Lastly, to show that predicate extensions may facilitate a more efficient way to achieve CCA-security generically than existing methods, we propose a novel predicate-extension transformation for a large class of pairing-based PE, covered by the pair and predicate encodings frameworks. In particular, this yields the most efficient generic CCA-conversion for ciphertext-policy ABE.

Mustafa Khairallah
Published 2024-04-09 PDF

The size of the authentication tag represents a significant overhead for applications that are limited by bandwidth or memory. Hence, some authenticated encryption designs have a smaller tag than the required privacy level, which was also suggested by the NIST lightweight cryptography standardization project. In ToSC 2022, two papers raised questions about the IND-CCA security of AEAD schemes in this situation. These papers show that (a) online AE cannot provide IND-CCA security beyond the tag length, and (b) it is possible to have IND-CCA security beyond the tag length in a restricted Encode-then-Encipher framework. In this paper, we address some of the remaining gaps in this area. Our main result is to show that, for a fixed stretch, Pseudo-Random Injection security implies IND-CCA security as long as the minimum ciphertext size is at least as large as the required IND-CCA security level. We also show that this bound is tight and that any AEAD scheme that allows empty plaintexts with a fixed stretch cannot achieve IND-CCA security beyond the tag length. Next, we look at the weaker notion of MRAE security, and show that two-pass schemes that achieve MRAE security do not achieve IND-CCA security beyond the tag size. This includes SIV and rugged PRPs.

Akira Takahashi, Greg Zaverucha
Published 2024-04-09 PDF

Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.

Matteo Campanelli, Chaya Ganesh, Rosario Gennaro
Published 2024-04-09 PDF

We investigate proof systems where security holds against rational parties instead of malicious ones. Our starting point is the notion of rational arguments, a variant of rational proofs (Azar and Micali, STOC 2012) where security holds against rational adversaries that are also computationally bounded.

Rational arguments are an interesting primitive because they generally allow for very efficient protocols, and in particular sublinear verification (i.e. where the Verifier does not have to read the entire input). In this paper we aim at narrowing the gap between literature on rational schemes and real world applications. Our contribution is two-fold.

We provide the first construction of rational arguments for the class of polynomial computations that is practical (i.e., one that can be applied to real-world computations on reasonably common hardware) and that achieves logarithmic communication. Techniques-wise, we obtain this result through a compiler from information-theoretic protocols and rational proofs for polynomial evaluation. The latter could be of independent interest.

As a second contribution, we propose a new notion of extractability for rational arguments. Through this notion we can obtain arguments where knowledge of a witness is incentivized (rather than incentivizing mere soundness). We show how our aforementioned compiler can also be applied to obtain efficient extractable rational arguments for $\mathsf{NP}$.

Fabio Campos, Jorge Chávez-Saab, Jesús-Javier Chi-Domínguez, Michael Meyer, Krijn Reijnders, Francisco Rodríguez-Henríquez, Peter Schwabe, Thom Wiggers
Published 2024-04-09 PDF

In this work, we assess the real-world practicality of CSIDH, an isogeny-based non-interactive key exchange. We provide the first thorough assessment of the practicality of CSIDH in higher parameter sizes for conservative estimates of quantum security, and with protection against physical attacks.

This requires a three-fold analysis of CSIDH. First, we describe two approaches to efficient high-security CSIDH implementations, based on SQALE and CTIDH. Second, we optimize such high-security implementations, on a high level by improving several subroutines, and on a low level by improving the finite field arithmetic. Third, we benchmark the performance of high-security CSIDH. As a stand-alone primitive, our implementations outperform previous results by a factor of up to 2.53×.

As a real-world use case considering network protocols, we use CSIDH in TLS variants that allow early authentication through a NIKE. Although our instantiations of CSIDH have smaller communication requirements than post-quantum KEM and signature schemes, even our highly optimized implementations result in prohibitively large handshake latencies (tens of seconds), showing that CSIDH is only practical in niche cases.