IACR Communications in Cryptology (CiC)


36 results sorted by publication date
Editors in chief
Call for papers: IACR Communications in Cryptology is a journal for original research papers which welcomes submissions on any topic in cryptology. This covers all research topics in cryptography and cryptanalysis, including but not limited to: foundational theory and mathematics; the design, proposal, and analysis of cryptographic primitives; a...
Editors in chief
Policy on publication ethics Communications in Cryptology (CiC) is committed to ensuring ethics and quality in research. We therefore expect everyone involved in the journal to follow our principles (see below) and ethics. See the related IACR docs here and here. Duties for Authors Confidentiality You may not ask Editorial Board members for information about your submission befor...
Editors in chief
Policy on irregular submissions The Communications in Cryptology (CiC) Policy on Irregular Submissions is based on the IACR policy for irregular submissions. Scope: Simultaneous submission of substantially similar papers to multiple venues, resubmission of previously published work, and plagiarism of content, are a serious breach of authors’ ethical obligations to the CiC and the e...
Editors in chief
Policy on conflict of interest A conflict of interest (CoI) is a situation in which a person is involved in multiple interests, one of which could affect the judgment of that individual. In the context of scientific reviewing on behalf of the IACR CiC, a CoI exists when particular relationships between reviewers and authors, or their respective institutions, may taint a reviewer’s de...
Editors in chief
Policy on retractions The purpose of this retraction policy is to provide guidelines for the retraction of articles published in the IACR CiC. Retractions are undertaken in cases of serious errors or ethical concerns that significantly impact the validity and reliability of the published work. The IACR is committed to maintaining the highest standards of academic integrity and transpa...
Editors in chief
Frequently asked questions The International Association for Cryptologic Research (IACR) Communications in Cryptology (CiC) was approved by the Membership in the IACR 2022 election and targets publications that advance the field, but with a broader range of contributions than the ones accepted by the IACR flagship or area conferences. What are the main principles of CiC? Low-cost open ...
Benoît Cogliati, Jérémy Jean, Thomas Peyrin, Yannick Seurin
Published 2024-07-08

We analyze the multi-user (mu) security of a family of nonce-based authenticated encryption (nAE) schemes based on a tweakable block cipher (TBC). The starting point of our work is an analysis of the mu security of the SCT-II mode, which underlies the nAE scheme Deoxys-II, winner of the CAESAR competition for the defense-in-depth category. We extend this analysis in two directions, as we now detail.

First, we investigate the mu security of several TBC-based variants of the counter encryption mode (including CTRT, the encryption mode used within SCT-II) that differ by the way a nonce, a random value, and a counter are combined as tweak and plaintext inputs to the TBC to produce the keystream blocks that will mask the plaintext blocks. Then, we consider the authentication part of SCT-II and study the mu security of the nonce-based MAC Nonce-as-Tweak (NaT) built from a TBC and an almost universal (AU) hash function. We also observe that the standard construction of an AU hash function from a (T)BC can be proven secure under the assumption that the underlying TBC is unpredictable rather than pseudorandom, allowing much better conjectures on the concrete AU advantage. This allows us to derive the mu security of the family of nAE modes obtained by combining these encryption/MAC building blocks through the NSIV composition method.
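
For intuition, the Python sketch below shows one generic way a TBC-based counter mode can turn a nonce, a random value R, and a block counter into keystream blocks; it is an illustration of the general shape only, not CTRT or any specific variant analyzed in the paper, and `tbc_encrypt` is a hypothetical placeholder for a tweakable block cipher such as Deoxys-TBC.

    # Generic sketch: keystream block i = E_K(tweak = nonce || i, plaintext = R).
    # The modes discussed above differ precisely in how the nonce, R, and the
    # counter are routed to the tweak and plaintext inputs of the TBC.
    from typing import Callable

    def tbc_ctr_keystream(tbc_encrypt: Callable[[bytes, bytes, bytes], bytes],
                          key: bytes, nonce: bytes, rand_block: bytes,
                          n_blocks: int) -> bytes:
        stream = b""
        for i in range(n_blocks):
            tweak = nonce + i.to_bytes(4, "big")        # one possible tweak layout
            stream += tbc_encrypt(key, tweak, rand_block)
        return stream

    def ctr_encrypt(tbc_encrypt, key, nonce, rand_block, plaintext: bytes) -> bytes:
        n_blocks = -(-len(plaintext) // 16)             # ceil(len/16), 128-bit blocks
        keystream = tbc_ctr_keystream(tbc_encrypt, key, nonce, rand_block, n_blocks)
        return bytes(p ^ k for p, k in zip(plaintext, keystream))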

Some of these modes require an underlying TBC with a larger tweak length than what is usually available for existing ones. We then show the practicality of our modes by instantiating them with two new TBC constructions, Deoxys-TBC-512 and Deoxys-TBC-640, which can be seen as natural extensions of the Deoxys-TBC family to larger tweak input sizes. Designing such TBCs with unusually large tweaks is prone to pitfalls: Indeed, we show that a large-tweak proposal for SKINNY published at EUROCRYPT 2020 presents an inherent construction flaw. We therefore provide a sound design strategy to construct large-tweak TBCs within the Superposition Tweakey (STK) framework, leading to new Deoxys-TBC and SKINNY variants. We provide software benchmarks indicating that while ensuring a very high security level, the performances of our proposals remain very competitive.

Thomas Attema, Aron van Baarsen, Stefan van den Berg, Pedro Capitão, Vincent Dunning, Lisa Kohl
Published 2024-07-08

Despite much progress, general-purpose secure multi-party computation (MPC) with active security may still be prohibitively expensive in settings with large input datasets. This particularly applies to the secure evaluation of graph algorithms, where each party holds a subset of a large graph. Recently, Araki et al. (ACM CCS '21) showed that dedicated solutions may provide significantly better efficiency if the input graph is sparse. In particular, they provide an efficient protocol for the secure evaluation of “message passing” algorithms, such as the PageRank algorithm. Their protocol's computation and communication complexity are both $\tilde{O}(M\cdot B)$ instead of the $O(M^2)$ complexity achieved by general-purpose MPC protocols, where $M$ denotes the number of nodes and $B$ the (average) number of incoming edges per node. On the downside, their approach achieves only a relatively weak security notion; $1$-out-of-$3$ malicious security with selective abort.
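
For context, the cleartext message-passing computation being outsourced looks as follows; this short Python sketch is the plain PageRank iteration (with a simplified treatment of dangling nodes), not the secure protocol, and a single iteration costs $O(M\cdot B)$ operations.

    # Plain PageRank as message passing: each node sends score/out-degree along
    # its outgoing edges, so one iteration touches every edge once.
    def pagerank(out_edges, n_iter=20, damping=0.85):
        """out_edges: dict mapping each node to the list of its successors."""
        nodes = list(out_edges)
        m = len(nodes)
        score = {v: 1.0 / m for v in nodes}
        for _ in range(n_iter):
            incoming = {v: 0.0 for v in nodes}
            for u in nodes:                             # "message passing" step
                if out_edges[u]:
                    share = score[u] / len(out_edges[u])
                    for v in out_edges[u]:
                        incoming[v] += share
            score = {v: (1 - damping) / m + damping * incoming[v] for v in nodes}
        return score

    print(pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]}))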

In this work, we show that PageRank can instead be captured efficiently as a restricted multiplication straight-line (RMS) program, and present a new actively secure MPC protocol tailored to handle RMS programs. In particular, we show that the local knowledge of the participants can be leveraged towards the first maliciously-secure protocol with communication complexity linear in $M$, independently of the sparsity of the graph. We present two variants of our protocol. In our communication-optimized protocol, going from semi-honest to malicious security only introduces a small communication overhead, but results in quadratic computation complexity $O(M^2)$. In our balanced protocol, we still achieve a linear communication complexity $O(M)$, although with worse constants, but a significantly better computational complexity scaling with $O(M\cdot B)$. Additionally, our protocols achieve security with identifiable abort and can tolerate up to $n-1$ corruptions.

Gaëtan Cassiers, Loïc Masure, Charles Momin, Thorben Moos, Amir Moradi, François-Xavier Standaert
Published 2024-07-08

Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed. The common denominator among all of them is the implicit demand for robust and high entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security tradeoff of masked implementations. This is especially relevant in case of hardware masking schemes which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed and with properties suitable for masked hardware implementations. In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an unrolled fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes. According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle for the same or lower cost as Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers. As a result of our studies, we provide an evidence-based estimate for the cost of securely generating $n$ fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as $20n$ to $30n$ ASIC gate equivalents (GE) or $3n$ to $4n$ FPGA look-up tables (LUTs), where $n$ is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
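
To make the randomness-generation discussion concrete, the following Python sketch is a bit-level reference model of the Trivium keystream generator (80-bit key and IV, 1152 warm-up rounds); unrolling $k$ copies of this state update in hardware yields $k$ keystream bits per clock cycle. It is an illustrative model, not an implementation from the paper.

    # Bit-level Trivium reference model: 288-bit state split into registers of
    # 93, 84 and 111 bits; one output bit per clock after 4*288 warm-up rounds.
    def trivium_keystream(key_bits, iv_bits, n_bits):
        """key_bits, iv_bits: lists of 80 bits (0/1); returns n_bits keystream bits."""
        s = [0] * 288
        s[0:80] = key_bits                       # s1..s80   <- key
        s[93:173] = iv_bits                      # s94..s173 <- IV
        s[285] = s[286] = s[287] = 1             # s286..s288 <- 1

        out = []
        for clock in range(4 * 288 + n_bits):
            t1 = s[65] ^ s[92]
            t2 = s[161] ^ s[176]
            t3 = s[242] ^ s[287]
            if clock >= 4 * 288:                 # output only after warm-up
                out.append(t1 ^ t2 ^ t3)
            t1 ^= (s[90] & s[91]) ^ s[170]
            t2 ^= (s[174] & s[175]) ^ s[263]
            t3 ^= (s[285] & s[286]) ^ s[68]
            s[1:93] = s[0:92];     s[0] = t3     # 93-bit register
            s[94:177] = s[93:176]; s[93] = t1    # 84-bit register
            s[178:288] = s[177:287]; s[177] = t2 # 111-bit register
        return out

    ks = trivium_keystream([0] * 80, [0] * 80, 64)   # 64 bits for the all-zero key/IV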

Scott Griffy, Anna Lysyanskaya
Published 2024-07-08

To be useful and widely accepted, automated contact tracing schemes (also called exposure notification) need to solve two seemingly contradictory problems at the same time: they need to protect the anonymity of honest users while also preventing malicious users from creating false alarms. In this paper, we provide, for the first time, an exposure notification construction that guarantees the same levels of privacy and integrity as existing schemes but with a fully malicious database (notably similar to Auerbach et al., CT-RSA 2021) without special restrictions on the adversary. We construct a new definition so that we can formally prove our construction secure. Our definition ensures the following integrity guarantees: no malicious user can cause exposure warnings in two locations at the same time, and any uploaded exposure notifications must be recent and not previously uploaded. Our construction is efficient, requiring only a single message to be broadcast at contact time, no matter how many recipients are nearby. To notify contacts of potential infection, an infected user uploads data with size linear in the number of notifications, similar to other schemes. Linear upload complexity is not trivial with our assumptions and guarantees (a naive scheme would be quadratic). This linear complexity is achieved with a new primitive: zero-knowledge subset proofs over commitments, which is used by our "no cloning" proof protocol. We also introduce another new primitive: set commitments on equivalence classes, which makes each step of our construction more efficient. Both of these new primitives are of independent interest.

Gil Segev, Liat Shapira
Published 2024-07-08

In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability.
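
For comparison, the classical general forking lemma of Bellare and Neven (CCS '06) considers an algorithm $A$ that is run twice on the same input and coins, the second time with fresh random-oracle answers from the forking index onward, and bounds the forking success probability $\mathrm{frk}$ in terms of $A$'s single-run success probability $\mathrm{acc}$, the query bound $q$, and the answer-set size $h$:

\[
\mathrm{frk} \;\geq\; \mathrm{acc}\cdot\left(\frac{\mathrm{acc}}{q}-\frac{1}{h}\right).
\]

The classical analysis relates this probability to the worst-case or expected running time of the two runs; the lemma described above instead works with higher moments of the forking algorithm's running time.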

Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce. Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.

Anis Bkakria, Malika Izabachène
Published 2024-07-08

Pattern matching methods are essential in various applications where users must disclose highly sensitive information. Among these applications are genomic data analysis, financial records inspection, and intrusion detection processes, all of which necessitate robust privacy protection mechanisms. Balancing the imperative of protecting the confidentiality of analyzed data with the need for efficient pattern matching presents a significant challenge.

In this paper, we propose an efficient post-quantum secure construction that enables arbitrary pattern matching over encrypted data while ensuring the confidentiality of the data to be analyzed. In addition, we address scenarios where a malicious data sender, intending to send encrypted content for pattern detection analysis, has the ability to modify that content. We adapt the data fragmentation technique to handle such a malicious sender. Our construction makes use of a Homomorphic Encryption packing method well suited to fragmented streams and combines homomorphic operations in a leveled mode (i.e., without bootstrapping) to obtain a very efficient pattern matching detection process.

In contrast to the most efficient state-of-the-art scheme, our construction achieves a significant reduction in the time required for encryption, decryption, and pattern matching on encrypted data. Specifically, our approach decreases the time by factors of $1850$, $10^6$, and $245$, respectively, for matching a single pattern, and by factors of $115$, $10^5$, and $12$, respectively, for matching $2^{10}$ patterns.

Ji Luo
Published 2024-07-08

Traitor tracing schemes [Chor–Fiat–Naor, Crypto ’94] help content distributors fight against piracy and are defined with the content distributor as a trusted authority having access to the secret keys of all users. While the traditional model caters well to its original motivation, its centralized nature makes it unsuitable for many scenarios. For usage among mutually untrusted parties, a notion of *ad hoc* traitor tracing (naturally with the capability of broadcast and revocation) is proposed and studied in this work. Such a scheme allows users in the system to generate their own public/secret key pairs, without trusting any other entity. To encrypt, a list of public keys is used to identify the set of recipients, and decryption is possible with a secret key for any of the public keys in the list. In addition, there is a tracing algorithm that given a list of recipients’ public keys and a pirate decoder capable of decrypting ciphertexts encrypted to them, identifies at least one recipient whose secret key must have been used to construct the said decoder.

Two constructions are presented. The first is based on functional encryption for circuits (conceptually, obfuscation) and has constant-size ciphertext, yet its decryption time is linear in the number of recipients. The second is a generic transformation that reduces decryption time at the cost of increased ciphertext size. A matching lower bound on the trade-off between ciphertext size and decryption time is shown, indicating that the two constructions achieve all possible optimal trade-offs, i.e., they fully demonstrate the Pareto front of efficiency. The lower bound also applies to broadcast encryption (hence all mildly expressive attribute-based encryption schemes) and is of independent interest.

Vincent Hwang
Published 2024-07-08

We survey various mathematical tools used in software implementations that multiply polynomials in \[ \frac{\mathbb{Z}_q[x]}{\left\langle {x^n - \alpha x - \beta} \right\rangle}. \] In particular, we survey implementation works targeting polynomial multiplication in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with the instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2.

There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as Cooley–Tukey FFT, Good–Thomas FFT, Bruun's FFT, Rader's FFT, Karatsuba, and Toom–Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including localization, Schönhage's FFT, Nussbaumer's FFT, and coefficient ring switching; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, truncation, incomplete transformation, striding, and Toeplitz matrix-vector product. For vectorization, we survey the relations between homomorphisms and vector arithmetic.
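
As a concrete example of the modular arithmetic surveyed above, the following Python sketch models signed Montgomery reduction for the Kyber prime $q = 3329$ with $R = 2^{16}$; it is an illustrative reference model (names and bounds chosen here for exposition), not code taken from any of the surveyed implementations.

    # Signed Montgomery reduction: compute a * R^-1 mod q without divisions by q,
    # mirroring the 16/32-bit integer arithmetic used on the targeted platforms.
    Q = 3329                        # the Kyber prime
    R_BITS = 16
    R = 1 << R_BITS                 # R = 2^16
    QINV = pow(Q, -1, R)            # q^-1 mod R (= 62209)

    def montgomery_reduce(a: int) -> int:
        """Return r with r = a * R^-1 (mod Q) and |r| < Q, for |a| < Q * R // 2."""
        t = (a * QINV) % R          # t = a * q^-1 (mod R)
        if t >= R // 2:             # take the centered representative in [-R/2, R/2)
            t -= R
        return (a - t * Q) >> R_BITS    # exact division by R

    # One reduction turns a double-width product into a single reduced value.
    x, y = 1234, 2345
    r = montgomery_reduce(x * y)                  # r = x*y*R^-1 (mod Q)
    assert r % Q == (x * y * pow(R, -1, Q)) % Q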

We then go through several case studies: We compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.

Décio Luiz Gazzoni Filho, Tomás S. R. Silva, Julio López
Published 2024-07-08

We present a solution to the open problem of designing a linear-time, unbiased and timing attack-resistant shuffling algorithm for fixed-weight sampling. Although it can be implemented without timing leakages of secret data in any architecture, we illustrate with ARMv7-M and ARMv8-A implementations; for the latter, we take advantage of architectural features such as NEON and conditional instructions, which are representative of features available on architectures targeting similar systems, such as Intel. Our proposed algorithm improves asymptotically upon the current approach based on constant-time sorting networks ($O(n)$ versus $O(n \log^2 n)$), and an implementation of the new algorithm applied to NTRU is also faster in practice, by a factor of up to $6.91\ (591\%)$ on ARMv8-A cores and $12.89\ (1189\%)$ on the Cortex-M4; it also requires fewer uniform random bits. This translates into performance improvements for NTRU encapsulation, compared to state-of-the-art implementations, of up to 50% on ARMv8-A cores and 72% on the Cortex-M4, and small improvements to key generation (up to 2.7% on ARMv8-A cores and 6.1% on the Cortex-M4), with negligible impact on code size and a slight improvement in RAM usage for the Cortex-M4.
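
As background, the classical in-place Fisher-Yates shuffle is linear-time and unbiased, but its secret-dependent indices leak through timing and memory-access patterns; the Python sketch below shows that baseline (it is not the paper's constant-time algorithm).

    # Classical Fisher-Yates shuffle: O(n), unbiased, but the index j and the swap
    # addresses depend on secret randomness, which is what leaks on real hardware.
    import secrets

    def fisher_yates(arr):
        a = list(arr)
        for i in range(len(a) - 1, 0, -1):
            j = secrets.randbelow(i + 1)    # uniform in {0, ..., i}
            a[i], a[j] = a[j], a[i]         # secret-dependent swap
        return a

    # Fixed-weight sampling via shuffling: a length-n bit vector with w ones.
    n, w = 16, 5
    print(fisher_yates([1] * w + [0] * (n - w)))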

Nouri Alnahawi, Johannes Müller, Jan Oupický, Alexander Wiesmaier
Published 2024-07-08

Transport Layer Security (TLS) is the backbone security protocol of the Internet. As this fundamental protocol is at risk from future quantum attackers, many proposals have been made to protect TLS against this threat by implementing post-quantum cryptography (PQC). The widespread interest in post-quantum TLS has given rise to a large number of solutions over the last decade. These proposals differ in many aspects, including the security properties they seek to protect, the efficiency and trustworthiness of their post-quantum building blocks, and the application scenarios they consider, to name a few.

Based on an extensive literature review, we classify existing solutions according to their general approaches, analyze their individual contributions, and present the results of our extensive performance experiments. Based on these insights, we identify the most reasonable candidates for post-quantum TLS, which research problems in this area have already been solved, and which are still open. Overall, our work provides a well-founded reference point for researching post-quantum TLS and preparing TLS in practice for the quantum age.

Sean Murphy, Rachel Player
Published 2024-07-08

This paper develops Central Limit arguments for analysing the noise in ciphertexts in two homomorphic encryption schemes that are based on Ring-LWE. The first main contribution of this paper is to present and evaluate an average-case noise analysis for the BGV scheme. Our approach relies on the recent work of Costache et al.(SAC 2023) that gives the approximation of a polynomial product as a multivariate Normal distribution. We show how this result can be applied in the BGV context and evaluate its efficacy. We find this average-case approach can much more closely model the noise growth in BGV implementations than prior approaches, but in some cases it can also underestimate the practical noise growth. Our second main contribution is to develop a Central Limit framework to analyse the noise growth in the homomorphic Ring-LWE cryptosystem of Lyubashevsky, Peikert and Regev (Eurocrypt 2013, full version). Our approach is very general: apart from finite variance, no assumption on the distribution of the noise is required (in particular, the noise need not be subgaussian). We show that our approach leads to tighter bounds for the probability of decryption failure than those of prior work.

Keita Xagawa
Published 2024-04-09

One of the central questions in cryptology is how efficient generic constructions of cryptographic primitives can be. Gennaro, Gertner, Katz, and Trevisan [SIAM J. Comput., 2005] studied lower bounds on the number of invocations of a (trapdoor) one-way permutation needed to construct cryptographic schemes, e.g., pseudorandom number generators, digital signatures, and public-key and symmetric-key encryption.

Recently, quantum machines have been explored to _construct_ cryptographic primitives other than quantum key distribution. This paper studies the efficiency of _quantum_ black-box constructions of cryptographic primitives when the communications are _classical_. Following Gennaro et al., we give lower bounds on the number of invocations of an underlying quantumly-computable quantum one-way permutation when the _quantum_ construction of pseudorandom number generators and symmetric-key encryption is weakly black-box. Our results show that quantum black-box constructions of pseudorandom number generators and symmetric-key encryption do not improve the number of invocations of an underlying quantumly-computable quantum one-way permutation.

Jules Maire, Damien Vergnaud
Published 2024-04-09

We present new secure multi-party computation protocols for linear algebra over a finite field, which improve the state-of-the-art in terms of security. We look at the case of unconditional security with perfect correctness, i.e., information-theoretic security without errors. We notably propose an expected constant-round protocol for solving systems of $m$ linear equations in $n$ variables over $\mathbb{F}_q$ with expected complexity $O(k n^{2.5} + k m)$ (where complexity is measured in terms of the number of secure multiplications required) with $k > m(m+n)+1$. The previous proposals were not error-free: known protocols can indeed fail and thus reveal information with probability $\Omega(\mathrm{poly}(m)/q)$. Our protocols are simple and rely on existing computer-algebra techniques, notably the Preparata-Sarwate algorithm, a simple but poorly known “baby-step giant-step” method for computing the characteristic polynomial of a matrix, and techniques due to Mulmuley for error-free linear algebra in positive characteristic.

Charles Bouillaguet, Julia Sauvage
Published 2024-04-09

Biscuit is a recent multivariate signature scheme based on the MPC-in-the-Head paradigm. It has been submitted to the NIST competition for additional signature schemes. Signatures are derived from a zero-knowledge proof of knowledge of the solution of a structured polynomial system. This extra structure enables efficient proofs and compact signatures. This short note demonstrates that it also makes these polynomial systems easier to solve than random ones. As a consequence, the original parameters of Biscuit failed to meet the required security levels and had to be upgraded.

Manuel Barbosa, Deirdre Connolly, João Diogo Duarte, Aaron Kaiser, Peter Schwabe, Karolin Varner, Bas Westerbaan
Published 2024-04-09

X-Wing is a hybrid key-encapsulation mechanism based on X25519 and ML-KEM-768. It is designed to be the sensible choice for most applications. The concrete choice of X25519 and ML-KEM-768 allows X-Wing to achieve improved efficiency compared to using a generic KEM combiner. In this paper, we introduce the X-Wing hybrid KEM construction and provide a proof of security. We show (1) that X-Wing is a classically IND-CCA secure KEM if the strong Diffie-Hellman assumption holds in the X25519 nominal group, and (2) that X-Wing is a post-quantum IND-CCA secure KEM if ML-KEM-768 is itself an IND-CCA secure KEM and SHA3-256 is secure when used as a pseudorandom function. The first result is proved in the ROM, whereas the second one holds in the standard model. Loosely speaking, this means X-Wing is secure if either X25519 or ML-KEM-768 is secure. We stress that these security guarantees and optimizations are only possible due to the concrete choices that were made, and may not apply in the general case.
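
The sketch below shows the general shape of such a hybrid KEM in Python: encapsulate with both components and derive the final shared secret by hashing the two component secrets together with the X25519 ciphertext and public key using SHA3-256. The helpers `mlkem768_encaps`, `x25519` and `random_scalar` are hypothetical placeholders, and the exact input ordering and domain-separation label of the real X-Wing combiner are not reproduced here.

    # Schematic hybrid encapsulation: both components contribute, and the combiner
    # also binds the classical ciphertext and public key into the derived secret.
    import hashlib

    X25519_BASE = bytes([9]) + bytes(31)        # canonical X25519 base point encoding

    def hybrid_encaps(pk_mlkem, pk_x25519, mlkem768_encaps, x25519, random_scalar):
        """Toy combiner: returns (shared_secret, (ct_mlkem, ct_x25519))."""
        ss_m, ct_m = mlkem768_encaps(pk_mlkem)  # post-quantum component (ss, ct)
        esk = random_scalar()                   # ephemeral X25519 secret scalar
        ct_x = x25519(esk, X25519_BASE)         # ephemeral public key acts as ciphertext
        ss_x = x25519(esk, pk_x25519)           # classical ECDH shared secret
        ss = hashlib.sha3_256(b"hybrid-combiner-label"
                              + ss_m + ss_x + ct_x + pk_x25519).digest()
        return ss, (ct_m, ct_x)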

Yehuda Lindell
Published 2024-04-09

In a multiparty signing protocol, also known as a threshold signature scheme, the private signing key is shared amongst a set of parties and only a quorum of those parties can generate a signature. Research on multiparty signing has been growing in popularity recently due to its application to cryptocurrencies. Most work has focused on reducing the number of rounds to two, and as a result the protocols obtained: (a) are not fully simulatable in the sense of MPC real/ideal security definitions, and/or (b) are not secure under concurrent composition, and/or (c) utilize non-standard assumptions of different types in their proofs of security. In this paper, we describe a simple three-round multiparty protocol for Schnorr signatures that is secure for any number of corrupted parties, i.e., in the setting of a dishonest majority. The protocol is fully simulatable, secure under concurrent composition, and proven secure in the standard model or random-oracle model (depending on the instantiations of the commitment and zero-knowledge primitives). The protocol realizes an ideal Schnorr signing functionality with perfect security in the ideal commitment and zero-knowledge hybrid model (and thus the only assumptions needed are for realizing these functionalities).

In our presentation, we do not assume that all parties begin with the message to be signed, the identities of the participating parties and a unique common session identifier, since this is often not the case in practice. Rather, the parties achieve consensus on these parameters as the protocol progresses.
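
For reference, the sketch below shows ordinary single-signer Schnorr signing and verification in Python over a toy group (the parameters are tiny and insecure, chosen only for illustration); a threshold protocol such as the one above secret-shares the key x and lets a quorum jointly produce the same kind of (R, s) pair without ever reconstructing x.

    # Single-signer Schnorr over a toy group: p = 23, q = 11 divides p - 1, and
    # g = 2 has order q modulo p. Do not use such parameters in practice.
    import hashlib, secrets

    p, q, g = 23, 11, 2

    def H(*parts) -> int:
        h = hashlib.sha3_256()
        for part in parts:
            h.update(str(part).encode())
        return int.from_bytes(h.digest(), "big") % q

    def keygen():
        x = secrets.randbelow(q - 1) + 1     # secret key
        return x, pow(g, x, p)               # (sk, pk)

    def sign(x, msg):
        k = secrets.randbelow(q - 1) + 1     # per-signature nonce
        R = pow(g, k, p)
        e = H(R, msg)
        return R, (k + e * x) % q            # signature (R, s)

    def verify(y, msg, sig):
        R, s = sig
        e = H(R, msg)
        return pow(g, s, p) == (R * pow(y, e, p)) % p   # g^s == R * y^e

    x, y = keygen()
    assert verify(y, "hello", sign(x, "hello"))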

Thomas Pornin
Published 2024-04-09

This paper describes a generic methodology for obtaining unified, and then complete formulas for a prime-order group abstraction homomorphic to a subgroup of an elliptic curve with even order. The method is applicable to any curve with even order, in finite fields of both even and odd characteristic; it is most efficient on curves with order equal to 2 modulo 4, dubbed "double-odd curves". In large characteristic fields, we obtain doubling formulas with cost as low as 1M + 5S, and the resulting group allows building schemes such as signatures that outperform existing fast solutions, e.g. Ed25519. In binary fields, the obtained formulas are not only complete but also faster than previously known incomplete formulas; we can sign and verify in as low as 18k and 27k cycles on x86 CPUs, respectively.

Loïs Huguenin-Dumittan, Serge Vaudenay
Published 2024-04-09

Proving whether it is possible to build IND-CCA public-key encryption (PKE) from IND-CPA PKE in a black-box manner is a major open problem in theoretical cryptography. In a significant breakthrough, Gertner, Malkin and Myers showed in 2007 that shielding black-box reductions from IND-CCA to IND-CPA do not exist in the standard model. Shielding means that the decryption algorithm of the IND-CCA scheme does not call the encryption algorithm of the underlying IND-CPA scheme. In other words, it implies that every tentative construction of IND-CCA from IND-CPA must have a re-encryption step when decrypting.

This result was only proven with respect to classical algorithms. In this work we show that it stands in a post-quantum setting. That is, we prove that there is no post-quantum shielding black-box construction of IND-CCA PKE from IND-CPA PKE. In the type of reductions we consider, i.e. post-quantum ones, the constructions are still classical in the sense that the schemes must be computable on classical computers, but the adversaries and the reduction algorithm can be quantum. This suggests that considering quantum notions, which are stronger than their classical counterparts, and allowing for quantum reductions does not make building IND-CCA public-key encryption easier.

Gabrielle De Micheli, Nadia Heninger
Published 2024-04-09

Side-channel attacks targeting cryptography may leak only partial or indirect information about the secret keys. There are a variety of techniques in the literature for recovering secret keys from partial information. In this work, we survey several of the main families of partial key recovery algorithms for RSA, (EC)DSA, and (elliptic curve) Diffie-Hellman, the classical public-key cryptosystems in common use today. We categorize the known techniques by the structure of the information that is learned by the attacker, and give simplified examples for each technique to illustrate the underlying ideas.

Shichang Wang, Meicheng Liu, Shiqi Hou, Dongdai Lin
Published 2024-04-09

At CHES 2017, Banik et al. proposed the lightweight block cipher GIFT, which comes in two versions, GIFT-64 and GIFT-128. Recently, many authenticated encryption schemes have adopted GIFT-128 as their underlying primitive, such as GIFT-COFB and HyENA. To provide a more comprehensive picture of the soundness of these designs, we evaluate their security against differential-linear cryptanalysis.

To this end, we develop automatic tools to search for differential-linear approximations of S-box-based ciphers. With the assistance of these tools, we find 13-round differential-linear approximations for GIFT-COFB and HyENA. Based on these distinguishers, we give 18-round key-recovery attacks on the message processing phase and initialization phase of both ciphers. Moreover, we also evaluate the resistance of GIFT-64/128 against differential-linear cryptanalysis. We find 12-round and 17-round differential-linear approximations for GIFT-64 and GIFT-128, respectively, which lead to 18-round and 19-round key-recovery attacks, respectively. We stress that our attacks do not threaten the security of these ciphers.

Décio Luiz Gazzoni Filho, Guilherme Brandão, Julio López
Published 2024-04-09

Efficient polynomial multiplication routines are critical to the performance of lattice-based post-quantum cryptography (PQC). As PQC standards only recently started to emerge, CPUs still lack specialized instructions to accelerate such routines. Meanwhile, deep learning has grown immeasurably in importance. Its workloads call for teraflops-level of processing power for linear algebra operations, mainly matrix multiplication. Computer architects have responded by introducing ISA extensions, coprocessors and special-purpose cores to accelerate such operations. In particular, Apple ships an undocumented matrix-multiplication coprocessor, AMX, in hundreds of millions of mobile phones, tablets and personal computers. Our work repurposes AMX to implement polynomial multiplication and applies it to the NTRU cryptosystem, setting new speed records on the Apple M1 and M3 systems-on-chip (SoCs): polynomial multiplication, key generation, encapsulation and decapsulation are sped up by $1.54$–$3.07\times$, $1.08$–$1.33\times$, $1.11$–$1.50\times$ and $1.20$–$1.98\times$, respectively, over the previous state-of-the-art.
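
The underlying observation can be illustrated in a few lines of Python (a conceptual sketch, not the paper's AMX kernels): multiplication in an NTRU-style ring $\mathbb{Z}_q[x]/(x^n-1)$ is a cyclic convolution, i.e. a circulant matrix-vector product, which is exactly the workload a matrix-multiplication engine accelerates.

    # Polynomial multiplication mod (x^n - 1, q) as a circulant matrix times a vector.
    import numpy as np

    def cyclic_polymul(a, b, q):
        """Multiply coefficient lists a, b (degree < n) in Z_q[x]/(x^n - 1)."""
        n = len(a)
        # Entry (i, j) of the circulant matrix is a[(i - j) mod n].
        C = np.array([[a[(i - j) % n] for j in range(n)] for i in range(n)],
                     dtype=np.int64)
        return (C @ np.array(b, dtype=np.int64)) % q

    print(cyclic_polymul([1, 2, 3, 4], [5, 6, 7, 8], q=17))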

Yi-Hsiu Chen, Yehuda Lindell
Published 2024-04-09

Verifiable secret sharing (VSS) protocols enable parties to share secrets while guaranteeing security (in particular, that all parties hold valid and consistent shares) even if the dealer or some of the participants are malicious. Most work on VSS focuses on the honest majority case, primarily since it enables one to guarantee output delivery (e.g., a corrupted recipient cannot prevent an honest dealer from sharing their value). Feldman's VSS is a well known and popular protocol for this task and relies on the discrete log hardness assumption. In this paper, we present a variant of Feldman's VSS for the dishonest majority setting and formally prove its security. Beyond the basic VSS protocol, we present a publicly-verifiable version, as well as show how to securely add participants to the sharing and how to refresh an existing sharing (all secure in the presence of a dishonest majority). We prove that our protocols are UC secure, for appropriately defined ideal functionalities.
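
For background, the share-verification equation at the heart of classic Feldman VSS is sketched below in Python with deliberately tiny, insecure toy parameters; the dealer publishes commitments to the coefficients of the sharing polynomial, and each party checks its share against them. This is the textbook honest-dealer check, not the paper's dishonest-majority protocol.

    # Feldman VSS core: shares are evaluations of a random polynomial, and the
    # public commitments C_j = g^{a_j} let party i check g^{share_i} = prod C_j^{i^j}.
    import secrets

    p, q, g = 23, 11, 2            # toy group: g has prime order q modulo p

    def deal(secret, t, n):
        """Share `secret` with a random degree-t polynomial among parties 1..n."""
        coeffs = [secret % q] + [secrets.randbelow(q) for _ in range(t)]
        commitments = [pow(g, a, p) for a in coeffs]
        shares = {i: sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q
                  for i in range(1, n + 1)}
        return shares, commitments

    def verify_share(i, share, commitments):
        rhs = 1
        for j, C in enumerate(commitments):
            rhs = (rhs * pow(C, pow(i, j, q), p)) % p
        return pow(g, share, p) == rhs

    shares, comms = deal(secret=7, t=2, n=5)
    assert all(verify_share(i, s, comms) for i, s in shares.items())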

Subhadeep Banik, Andrea Caforio, Serge Vaudenay
Published 2024-04-09

The LowMC family of block ciphers was proposed by Albrecht et al. at Eurocrypt 2015, specifically targeting adoption in FHE and MPC applications due to its low multiplicative complexity. The construction uses a 3-bit quadratic S-box as the sole non-linear transformation in the algorithm, while both the linear layer and round key generation are achieved through multiplications by full-rank matrices over GF(2). The cipher is instantiable using a diverse set of default configurations, some of which have partial non-linear layers, i.e., layers in which the S-boxes are not applied over the entire internal state of the cipher.
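
For illustration, a 3-bit quadratic S-box of the form used in LowMC maps $(a,b,c)$ to $(a \oplus bc,\, a \oplus b \oplus ac,\, a \oplus b \oplus c \oplus ab)$; the short Python check below (not code from the paper) verifies that this map is a permutation of the eight 3-bit inputs and uses only XOR and AND gates, i.e. has algebraic degree 2.

    # A 3-bit quadratic S-box of the LowMC type: only XOR and AND, degree 2.
    def sbox(a, b, c):
        return (a ^ (b & c), a ^ b ^ (a & c), a ^ b ^ c ^ (a & b))

    # Sanity check: the map is a permutation of {0,1}^3.
    assert len({sbox(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)}) == 8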

The significance of cryptanalysing LowMC was elevated by its inclusion in the NIST PQC digital signature scheme PICNIC, in which a successful key recovery using a single plaintext/ciphertext pair is akin to retrieving the secret signing key. The current state-of-the-art attack in this setting is due to Dinur at Eurocrypt 2021, in which a novel way of enumerating the roots of a Boolean system of equations is turned into a key-recovery procedure that undercuts ordinary exhaustive search in terms of time complexity for variants of the cipher with up to five rounds.

In this work, we demonstrate that this technique can efficiently be enriched with a specific linearization strategy that reduces the algebraic degree of the non-linear layer as put forward by Banik et al. at IACR ToSC 2020(4). This amalgamation yields new attacks on certain instances of LowMC up to seven rounds.

Daniel J. Bernstein
Published 2024-04-09

This paper reviews, from bottom to top, a polynomial-time algorithm to correct $t$ errors in classical binary Goppa codes defined by squarefree degree-$t$ polynomials. The proof is factored through a proof of a simple Reed–Solomon decoder, and the algorithm is simpler than Patterson's algorithm. All algorithm layers are expressed as Sage scripts backed by test scripts. All theorems are formally verified. The paper also covers the use of decoding inside the Classic McEliece cryptosystem, including reliable recognition of valid inputs.
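
For reference, the codes in question are standardly defined as follows: a binary Goppa code with distinct support elements $\alpha_1,\dots,\alpha_n \in \mathbb{F}_{2^m}$ and a squarefree Goppa polynomial $g \in \mathbb{F}_{2^m}[x]$ of degree $t$ (with $g(\alpha_i) \neq 0$) consists of the words $c \in \mathbb{F}_2^n$ satisfying

\[
\sum_{i=1}^{n} \frac{c_i}{x-\alpha_i} \equiv 0 \pmod{g(x)},
\]

and such a code can correct up to $t$ errors, which is the decoding task addressed above.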

Marloes Venema, Leon Botros
Published 2024-04-09

Predicate encryption (PE) is a type of public-key encryption that captures many useful primitives such as attribute-based encryption (ABE). Although much progress has been made to generically achieve security against chosen-plaintext attacks (CPA) efficiently, in practice, we also require security against chosen-ciphertext attacks (CCA). Because achieving CCA-security on a case-by-case basis is a complicated task, several generic conversion methods have been proposed, which typically target different subclasses of PE such as ciphertext-policy ABE. As is common, such conversion methods may sacrifice some efficiency. Notably, for ciphertext-policy ABE, all proposed generic transformations incur a significant decryption overhead. Furthermore, depending on the setting in which PE is used, we may also want to require that messages are signed. To do this, predicate signature schemes can be used. However, such schemes provide a strong notion of privacy for the signer, which may be stronger than necessary for some practical settings at the cost of efficiency.

In this work, we propose the notion of predicate extension, which transforms the predicate used in a PE scheme to include one additional attribute, in both the keys and the ciphertexts. Using predicate extension, we can generically obtain CCA-security and signatures from a CPA-secure PE scheme. For the CCA-security transform, we observe that predicate extension implies a two-step approach to achieving CCA-security. This insight broadens the applicability of existing transforms for specific subclasses of PE to cover all PE. We also propose a new transform that incurs slightly less overhead than existing transforms. Furthermore, we show that predicate extension allows us to create a new type of signatures, which we call PE-based signatures. PE-based signatures are weaker than typical predicate signatures in the sense that they do not provide privacy for the signer. Nevertheless, such signatures may be more suitable for some practical settings owing to their efficiency or reduced interactivity. Lastly, to show that predicate extensions may facilitate a more efficient way to achieve CCA-security generically than existing methods, we propose a novel predicate-extension transformation for a large class of pairing-based PE, covered by the pair and predicate encodings frameworks. In particular, this yields the most efficient generic CCA-conversion for ciphertext-policy ABE.

Mustafa Khairallah
Published 2024-04-09

The size of the authentication tag represents a significant overhead for applications that are limited by bandwidth or memory. Hence, some authenticated encryption designs have a smaller tag than the required privacy level, which was also suggested by the NIST lightweight cryptography standardization project. In the ToSC 2022, two papers have raised questions about the IND-CCA security of AEAD schemes in this situation. These papers show that (a) online AE cannot provide IND-CCA security beyond the tag length, and (b) it is possible to have IND-CCA security beyond the tag length in a restricted Encode-then-Encipher framework. In this paper, we address some of the remaining gaps in this area. Our main result is to show that, for a fixed stretch, Pseudo-Random Injection security implies IND-CCA security as long as the minimum ciphertext size is at least as large as the required IND-CCA security level. We also show that this bound is tight and that any AEAD scheme that allows empty plaintexts with a fixed stretch cannot achieve IND-CCA security beyond the tag length. Next, we look at the weaker notion of MRAE security, and show that two-pass schemes that achieve MRAE security do not achieve IND-CCA security beyond the tag size. This includes SIV and rugged PRPs.

Akira Takahashi, Greg Zaverucha
Published 2024-04-09

Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.

Shahla Atapoor, Karim Baghery, Hilder V. L. Pereira, Jannik Spiessens
Published 2024-04-09

Fully Homomorphic Encryption (FHE) is a prevalent cryptographic primitive that allows for computation on encrypted data. In various cryptographic protocols, this enables outsourcing computation to a third party while retaining the privacy of the inputs to the computation. However, these schemes make an honest-but-curious assumption about the adversary. Previous work has tried to remove this assumption by combining FHE with Verifiable Computation (VC). Recent work has increased the flexibility of this approach by introducing integrity checks for homomorphic computations over rings. However, efficient FHE for circuits of large multiplicative depth also requires non-ring computations called maintenance operations, i.e. modswitching and keyswitching, which cannot be efficiently verified by existing constructions. We propose the first efficiently verifiable FHE scheme that allows for arbitrary depth homomorphic circuits by utilizing the double-CRT representation in which FHE schemes are typically computed, and using lattice-based SNARKs to prove components of this computation separately, including the maintenance operations. Therefore, our construction can theoretically handle bootstrapping operations. We also present the first implementation of a verifiable computation on encrypted data for a computation that contains multiple ciphertext-ciphertext multiplications. Concretely, we verify the homomorphic computation of an approximate neural network containing three layers and >100 ciphertexts in less than 1 second while maintaining reasonable prover costs.

Matteo Campanelli, Chaya Ganesh, Rosario Gennaro
Published 2024-04-09

We investigate proof systems where security holds against rational parties instead of malicious ones. Our starting point is the notion of rational arguments, a variant of rational proofs (Azar and Micali, STOC 2012) where security holds against rational adversaries that are also computationally bounded.

Rational arguments are an interesting primitive because they generally allow for very efficient protocols, and in particular sublinear verification (i.e. where the Verifier does not have to read the entire input). In this paper we aim at narrowing the gap between literature on rational schemes and real world applications. Our contribution is two-fold.

We provide the first construction of rational arguments for the class of polynomial computations that is practical (i.e., it can be applied to real-world computations on reasonably common hardware) and with logarithmic communication. Techniques-wise, we obtain this result through a compiler from information-theoretic protocols and rational proofs for polynomial evaluation. The latter could be of independent interest.

As a second contribution, we propose a new notion of extractability for rational arguments. Through this notion we can obtain arguments where knowledge of a witness is incentivized (rather than incentivizing mere soundness). We show how our aforementioned compiler can also be applied to obtain efficient extractable rational arguments for $\mathsf{NP}$.

Samuel Bouaziz–Ermann, Alex B. Grilo, Damien Vergnaud, Quoc-Huy Vu
Published 2024-04-09

There has been a recent interest in proposing quantum protocols whose security relies on weaker computational assumptions than their classical counterparts. Importantly to our work, it has been recently shown that public-key encryption (PKE) from one-way functions (OWF) is possible if we consider quantum public keys. Notice that we do not expect classical PKE from OWF given the impossibility results of Impagliazzo and Rudich (STOC'89).

However, the distribution of quantum public keys is a challenging task. Therefore, the main question that motivates our work is whether quantum PKE from OWF is possible if we have classical public keys. Such protocols are impossible if ciphertexts are also classical, given the impossibility result of Austrin et al. (CRYPTO'22) on quantum-enhanced key agreement (KA) with classical communication.

In this paper, we focus on a black-box separation for PKE with classical public keys and quantum ciphertexts from OWF under the polynomial compatibility conjecture, first introduced in Austrin et al. More precisely, we show the separation when the decryption algorithm of the PKE does not query the OWF. We prove our result by extending the techniques of Austrin et al., and we show an attack on KA in an extended classical communication model where the last message in the protocol can be a quantum state.