Communications in Cryptology IACR CiC


27 results sorted by publication date
Sebastian Kolby, Divya Ravi, Sophia Yakoubov
Published 2024-10-07

YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number N of available parties, even when the adversary’s corruption threshold is linear in N (e.g. just under N/2). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. We demonstrate three different approaches: the first two (which we call YaOSO and YOSO-GLS) use two and three rounds of communication, respectively. Our third approach (which we call YOSO-LHSS) uses O(d) rounds, where d is the multiplicative depth of the circuit being evaluated; however, it can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHSS. Though YOSO-LHSS requires more rounds than our first two approaches, it may be more practical, since the zero knowledge proofs it employs are more efficient to instantiate. As a contribution of independent interest, we introduce a verifiable state propagation UC functionality, which allows parties to send private messages that are verifiably derived in the “correct” way (according to the protocol in question) to anonymous receivers. This is a natural functionality to build YOSO protocols on top of.

Douglas Wikström
Published 2024-10-07

We generalize and abstract the problem of extracting a witness from a prover of a special sound protocol into a combinatorial problem induced by a sequence of matroids and a predicate, and present a parametrized algorithm for solving this problem.

The parametrization provides a tight tradeoff between the running time and the extraction error of the algorithm, which allows optimizing the parameters to minimize either the soundness error for interactive proofs or the extraction time for proofs of knowledge.

In contrast to previous work, we bound the distribution of the running time and not only its expectation. The resulting tail bounds give a tighter analysis when applied recursively, as well as a concentrated running time.

Carsten Baum, Jens Berlips, Walther Chen, Ivan B. Damgård, Kevin M. Esvelt, Leonard Foner, Dana Gretton, Martin Kysel, Ronald L. Rivest, Lawrence Roy, Francesca Sage-Ling, Adi Shamir, Vinod Vaikuntanathan, Lynn Van Hauwe, Theia Vogel, Benjamin Weinstein-Raun, Daniel Wichs, Stephen Wooster, Andrew C. Yao, Yu Yu
Published 2024-10-07

Oblivious Pseudorandom Functions (OPRFs) allow a client to evaluate a pseudorandom function (PRF) on her secret input based on a key that is held by a server. In the process, the client only learns the PRF output but not the key, while the server neither learns the input nor the output of the client. Arguably the most popular OPRF is due to Naor, Pinkas and Reingold (Eurocrypt 2009). It is based on an Oblivious Exponentiation by the server, with passive security under the Decisional Diffie-Hellman assumption. In this work, we strengthen the security guarantees of the NPR OPRF by protecting it against active attacks of the server. We have implemented our solution and report on the performance. Our main result is a new batch OPRF protocol which is secure against maliciously corrupted servers, but is essentially as efficient as the semi-honest solution. More precisely, the computation (and communication) overhead is a multiplicative factor $o(1)$ as the batch size increases. The obvious solution using zero-knowledge proofs would have a constant factor overhead at best, which can be too expensive for certain deployments. Our protocol relies on a novel version of the DDH problem, which we call the Oblivious Exponentiation Problem (OEP), and we give evidence for its hardness in the Generic Group model. We also present a variant of our maliciously secure protocol that does not rely on the OEP but nevertheless only has overhead $o(1)$ over the known semi-honest protocol. Moreover, we show that our techniques can also be used to efficiently protect threshold blind BLS signing and threshold ElGamal decryption against malicious attackers.
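
As background, the semi-honest blinded-exponentiation idea that this work hardens can be sketched in a few lines: the client blinds a hash of its input, the server exponentiates with its key, and the client unblinds and hashes the result. The sketch below is only a toy illustration of that idea, with insecure parameters and none of the batching or malicious-security machinery of the paper.

```python
# Toy sketch of the semi-honest blinded-exponentiation OPRF: PRF_k(x) = H2(x, H1(x)^k).
# Parameters are illustrative only; a real deployment uses a standard prime-order group.
import hashlib, secrets

p = 467            # toy safe prime, p = 2q + 1 (not secure, for illustration only)
q = (p - 1) // 2   # prime order of the quadratic-residue subgroup

def hash_to_group(x: bytes) -> int:
    # Map into the order-q subgroup by squaring a hashed value mod p.
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)

def client_blind(x: bytes):
    r = secrets.randbelow(q - 1) + 1              # blinding exponent in [1, q-1]
    return r, pow(hash_to_group(x), r, p)         # a = H1(x)^r

def server_eval(k: int, a: int) -> int:
    return pow(a, k, p)                           # b = a^k; the server never sees x

def client_finalize(x: bytes, r: int, b: int) -> bytes:
    y = pow(b, pow(r, -1, q), p)                  # unblind: y = H1(x)^k
    return hashlib.sha256(x + y.to_bytes(2, "big")).digest()   # PRF output H2(x, y)

k = secrets.randbelow(q - 1) + 1                  # the server's OPRF key
r, a = client_blind(b"input")
output = client_finalize(b"input", r, server_eval(k, a))
```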

Franklin Harding, Jiayu Xu
Published 2024-10-07

Blind signature schemes enable a user to obtain a digital signature on a message from a signer without revealing the message itself. Among the most fundamental examples of such a scheme is blind Schnorr, but recent results show that it does not satisfy the standard notion of security against malicious users, One-More Unforgeability (OMUF), as it is vulnerable to the ROS attack. However, blind Schnorr does satisfy the weaker notion of sequential OMUF, in which only one signing session is open at a time, in the Algebraic Group Model (AGM) + Random Oracle Model (ROM), assuming the hardness of the Discrete Logarithm (DL) problem.

This paper serves as a first step towards characterizing the security of blind Schnorr in the limited concurrency setting. Specifically, we show that blind Schnorr satisfies OMUF when at most two signing sessions can be concurrently open (in the AGM+ROM, assuming DL). Our argument suggests that it is plausible that blind Schnorr satisfies OMUF for up to polylogarithmically many concurrent signing sessions. Our security proof involves interesting techniques from linear algebra and combinatorics.
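
For reference, the scheme under analysis is the standard blind Schnorr signing flow over a group of prime order $q$ with generator $g$ and public key $X = g^{x}$ (this is the textbook description, not a new protocol):

$$
\begin{aligned}
\text{Signer:}\quad & r \leftarrow \mathbb{Z}_q, \; R = g^{r} \\
\text{User:}\quad & \alpha, \beta \leftarrow \mathbb{Z}_q, \; R' = R\, g^{\alpha} X^{\beta}, \; c' = H(R', m), \; c = c' + \beta \\
\text{Signer:}\quad & s = r + c\, x \\
\text{User:}\quad & s' = s + \alpha, \; \text{output } (R', s'), \text{ which verifies via } g^{s'} = R'\, X^{c'}.
\end{aligned}
$$

The ROS attack, and hence the restriction to limited concurrency, arises from a malicious user correlating the challenges $c$ across many simultaneously open sessions.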

Ida Bruhns, Sebastian Berndt, Jonas Sander, Thomas Eisenbarth
Published 2024-10-07

Machine learning applications gain more and more access to highly sensitive information while simultaneously requiring more and more computation resources. Hence, the need to outsource these computationally expensive tasks while still ensuring security and confidentiality of the data is pressing. In their seminal work, Tramer and Boneh presented the Slalom protocol for privacy-preserving inference by splitting the computation into a data-independent preprocessing phase and a very efficient online phase. In this work, we present a new method to significantly speed up the preprocessing phase by introducing the Carnival protocol. Carnival leverages the pseudo-randomness of the Subset sum problem to also enable efficient outsourcing during the preprocessing phase. In addition to a security proof, we also include an empirical study analyzing the landscape of the uniformity of the output of the Subset sum function for smaller parameters. Our findings show that Carnival is a great candidate for real-world implementations.
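
As a pointer to what the empirical study measures, the subset-sum function simply adds the weights selected by an input bit string and reduces modulo a power of two; the sketch below uses illustrative sizes and freshly sampled weights, not the Carnival parameters.

```python
# Toy illustration of the subset-sum function f(x) = sum_{i : x_i = 1} a_i mod 2^m.
import secrets

n, m = 64, 32                                            # input bits, output bits (illustrative)
weights = [secrets.randbelow(2**m) for _ in range(n)]    # fixed public weights a_1..a_n

def subset_sum(x_bits):
    """Sum the weights selected by the bit string, reduced mod 2^m."""
    return sum(a for a, b in zip(weights, x_bits) if b) % 2**m

x = [secrets.randbits(1) for _ in range(n)]              # random subset selector
print(subset_sum(x))
```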

Aron van Baarsen, Marc Stevens
Published 2024-10-07

Private set intersection (PSI) is a cryptographic functionality for two parties to learn the intersection of their input sets, without leaking any other information. Circuit-PSI is a stronger PSI functionality where the parties learn only a secret-shared form of the desired intersection, thus without revealing the intersection directly. These secret shares can subsequently serve as input to a secure multiparty computation of any function on this intersection.

In this paper we consider several settings in which parties take part in multiple Circuit-PSI executions with the same input set, and aim to amortize communications and computations. To that end, we build up a new framework for Circuit-PSI around generalizations of oblivious (programmable) PRFs that are extended with offline setup phases. We present several efficient instantiations of this framework with new security proofs for this setting. As a side result, we obtain a slight improvement in communication and computation complexity over the state-of-the-art semi-honest Circuit-PSI protocol by Bienstock et al. (USENIX '23). Additionally, we present a novel Circuit-PSI protocol from a PRF with secret-shared outputs, which has linear communication and computation complexity in the parties' input set sizes, and is able to realize a stronger security notion. Lastly, we derive the potential amortizations over multiple protocol executions, and observe that each of the presented instantiations is favorable in at least one of the multiple-execution settings.

Balthazar Bauer, Pooya Farshim, Patrick Harasser, Markulf Kohlweiss
Published 2024-10-07

The generic-group model (GGM) and the algebraic-group model (AGM) have been exceptionally successful in proving the security of many classical and modern cryptosystems. These models, however, come with standard-model uninstantiability results, raising the question of whether the schemes analyzed under them can be based on firmer standard-model footing.

We formulate the uber-knowledge (UK) assumption, a standard-model assumption that naturally extends the uber-assumption family to knowledge-type problems. We justify the soundness of UK in both the bilinear GGM and the bilinear AGM. Along the way, we extend these models to account for hashing into groups, an adversarial capability that is available in many concrete groups. In contrast to standard assumptions, hashing may affect the validity of knowledge assumptions. These results, in turn, enable a modular approach to security in the GGM and the AGM.

As example applications, we use the UK assumption to prove knowledge soundness of Groth's zero-knowledge SNARK (EUROCRYPT 2016) and of KZG polynomial commitments (ASIACRYPT 2010) in the standard model, where for the former we reuse the existing proof in the AGM without hashing.
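
As a reminder of the second application (under the usual simplifications, omitting the degree bound and the second source group), a KZG commitment to a polynomial $f$, an opening at a point $z$, and the pairing-based verification look as follows:

$$ C = g^{f(\tau)}, \qquad \pi_z = g^{\frac{f(\tau)-f(z)}{\tau - z}}, \qquad e\!\left(C \cdot g^{-f(z)},\; g\right) \stackrel{?}{=} e\!\left(\pi_z,\; g^{\tau} \cdot g^{-z}\right), $$

where the powers $g^{\tau^i}$ come from the structured reference string; the UK assumption is what underlies the argument that a prover producing valid openings must "know" $f$.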

Yi-Fu Lai
Published 2024-10-07

In this work, we introduce two post-quantum Verifiable Random Function (VRF) constructions based on abelian group actions and isogeny group actions with a twist. The former relies on the standard group action Decisional Diffie-Hellman (GA-DDH) assumption. VRFs serve as cryptographic tools allowing users to generate pseudorandom outputs along with publicly verifiable proofs. Moreover, the residual pseudorandomness of VRFs ensures the pseudorandomness of unrevealed inputs, even when multiple outputs and proofs are disclosed. Our work aims at addressing the growing demand for post-quantum VRFs, as existing constructions based on elliptic curve cryptography (ECC) or classical DDH-type assumptions are vulnerable to quantum threats.

Our two VRF constructions, rooted in number-theoretic pseudorandom functions, are both simple and secure in the random oracle model. We introduce a new proof system for the factorization of group actions and set elements, serving as the proofs for our VRFs. The first proposal is based on the standard GA-DDH problem, and for its security proof, we introduce the (group action) master Decisional Diffie-Hellman problem over group actions, proving its equivalence to the standard GA-DDH problem. In the second construction, we leverage quadratic twists to enhance efficiency, reducing the key and proof sizes while expanding the input size. The scheme is based on the square GA-DDH problem.

Moreover, we employ advanced techniques from the isogeny literature to optimize the proof size to 39KB and 34KB using CSIDH-512 without compromising VRF notions. The schemes feature fast evaluations but exhibit slower proof generation. To the best of our knowledge, these constructions represent the first two provably secure VRFs based on isogenies.

Douglas Wikström
Published 2024-10-07

We generalize the optimal knowledge extractor for constant-round special sound protocols presented by Wikström (2018) to a knowledge extractor for the corresponding non-interactive Fiat-Shamir proofs in the random oracle model and give an exact analysis of the extraction error and running time.

Relative to the interactive case, the extraction error and the running time are both asymptotically increased by a multiplicative factor equal to the number of oracle queries made by the prover.

Through carefully chosen notation, novel concepts, and a technical lemma, we effectively recast the extraction problem of the notoriously complex non-interactive case to the interactive case. Thus, our approach may be of independent interest.

Sougata Mandal
Published 2024-10-07

In ASIACRYPT 2019, Andreeva et al. introduced a new symmetric key primitive called the forkcipher, designed for lightweight applications handling short messages. A forkcipher is a keyed function with a public tweak, featuring fixed-length input and fixed-length (expanding) output. They also proposed a specific forkcipher, ForkSkinny, based on the tweakable block cipher SKINNY, and its security was evaluated through cryptanalysis. Since then, several efficient AEAD and MAC schemes based on forkciphers have been proposed, catering not only to short messages but also to various purposes such as leakage resilience and cloud security. While forkciphers have proven to be efficient solutions for designing AEAD schemes, the area of forkcipher design itself remains largely unexplored; in particular, provably secure forkcipher constructions are lacking.

In this work, we propose forkcipher designs for various tweak lengths, based on a block cipher as the underlying primitive. We provide proofs of security for these constructions, assuming the underlying block cipher behaves as an ideal block cipher. First, we present a forkcipher, $\widetilde{\textsf{F}}1$, for an $n$-bit tweak and prove its optimal ($n$-bit) security. Next, we propose another construction, $\widetilde{\textsf{F}}2$, for a $2n$-bit tweak, also proving its optimal ($n$-bit) security. Finally, we introduce a construction, $\widetilde{\textsf{F}}r$, for a general $rn$-bit tweak, achieving $n$-bit security.
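
For context, the forkcipher syntax (following Andreeva et al., ASIACRYPT 2019) for key length $k$, tweak length $t$ and block length $n$ can be summarized as follows, where the last argument is a branch selector; this is only a reminder of the interface, not one of the constructions above:

$$ \widetilde{\textsf{F}} : \{0,1\}^{k} \times \{0,1\}^{t} \times \{0,1\}^{n} \times \{0,1,\mathtt{b}\} \longrightarrow \{0,1\}^{n} \cup \{0,1\}^{2n}, $$

with selector $0$ or $1$ returning a single $n$-bit output branch and selector $\mathtt{b}$ returning both branches.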

Anne Broadbent, Martti Karvonen, Sébastien Lord
Published 2024-10-07

The famous no-cloning principle has been shown recently to enable a number of uncloneable cryptographic primitives, including the copy-protection of certain functionalities. Here we address for the first time unkeyed quantum uncloneability, via the study of a complexity-theoretic tool that enables a computation, but that is natively unkeyed: quantum advice. Remarkably, this is an application of the no-cloning principle in a context where the quantum states of interest are not chosen by a random process. We establish unconditional constructions for promise problems admitting uncloneable quantum advice and, assuming the feasibility of quantum copy-protecting certain functions, for languages with uncloneable advice. Along the way, we note that state complexity classes, introduced by Rosenthal and Yuen (ITCS 2022) — which concern the computational difficulty of synthesizing sequences of quantum states — can be naturally generalized to obtain state cloning complexity classes. We make initial observations on these classes, notably obtaining a result analogous to the existence of undecidable problems.

Our proof technique defines and constructs ingenerable sequences of finite bit strings, essentially meaning that they cannot be generated by any uniform circuit family with non-negligible probability. We then prove a generic result showing that the difficulty of accomplishing a computational task on uniformly random inputs implies its difficulty on any fixed, ingenerable sequence. We use this result to derandomize quantum cryptographic games that relate to cloning, and then incorporate a result of Kundu and Tan (arXiv 2022) to obtain uncloneable advice. Applying this two-step process to a monogamy-of-entanglement game yields a promise problem with uncloneable advice, and applying it to the quantum copy-protection of pseudorandom functions with super-logarithmic output lengths yields a language with uncloneable advice.

Thomas Attema, Aron van Baarsen, Stefan van den Berg, Pedro Capitão, Vincent Dunning, Lisa Kohl
Published 2024-07-08

Despite much progress, general-purpose secure multi-party computation (MPC) with active security may still be prohibitively expensive in settings with large input datasets. This particularly applies to the secure evaluation of graph algorithms, where each party holds a subset of a large graph. Recently, Araki et al. (ACM CCS '21) showed that dedicated solutions may provide significantly better efficiency if the input graph is sparse. In particular, they provide an efficient protocol for the secure evaluation of “message passing” algorithms, such as the PageRank algorithm. Their protocol's computation and communication complexity are both $\tilde{O}(M\cdot B)$ instead of the $O(M^2)$ complexity achieved by general-purpose MPC protocols, where $M$ denotes the number of nodes and $B$ the (average) number of incoming edges per node. On the downside, their approach achieves only a relatively weak security notion: $1$-out-of-$3$ malicious security with selective abort.

In this work, we show that PageRank can instead be captured efficiently as a restricted multiplication straight-line (RMS) program, and present a new actively secure MPC protocol tailored to handle RMS programs. In particular, we show that the local knowledge of the participants can be leveraged towards the first maliciously-secure protocol with communication complexity linear in $M$, independently of the sparsity of the graph. We present two variants of our protocol. In our communication-optimized protocol, going from semi-honest to malicious security only introduces a small communication overhead, but results in quadratic computation complexity $O(M^2)$. In our balanced protocol, we still achieve a linear communication complexity $O(M)$, although with worse constants, but a significantly better computational complexity scaling with $O(M\cdot B)$. Additionally, our protocols achieve security with identifiable abort and can tolerate up to $n-1$ corruptions.
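
For orientation, the plaintext computation being protected is ordinary message-passing PageRank, in which every node forwards its current score divided by its out-degree to its neighbours; one (insecure, non-MPC) iteration looks as follows, with the damping factor and toy graph chosen purely for illustration.

```python
# One plaintext PageRank iteration in message-passing form (no MPC involved):
# each node v receives pr[u] / outdeg(u) from every in-neighbour u.
def pagerank_step(pr, out_edges, damping=0.85):
    n = len(pr)
    incoming = [0.0] * n
    for u, neighbors in enumerate(out_edges):
        share = pr[u] / len(neighbors) if neighbors else 0.0
        for v in neighbors:
            incoming[v] += share                 # the "message" from u to v
    return [(1 - damping) / n + damping * s for s in incoming]

out_edges = [[1, 2], [2], [0]]                   # toy 3-node graph
pr = [1 / 3] * 3
for _ in range(10):
    pr = pagerank_step(pr, out_edges)
print(pr)
```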

Scott Griffy, Anna Lysyanskaya
Published 2024-07-08

To be useful and widely accepted, automated contact tracing schemes (also called exposure notification) need to solve two seemingly contradictory problems at the same time: they need to protect the anonymity of honest users while also preventing malicious users from creating false alarms. In this paper, we provide, for the first time, an exposure notification construction that guarantees the same levels of privacy and integrity as existing schemes but with a fully malicious database (notably similar to Auerbach et al. CT-RSA 2021) without special restrictions on the adversary. We construct a new definition so that we can formally prove our construction secure. Our definition ensures the following integrity guarantees: no malicious user can cause exposure warnings in two locations at the same time, and any uploaded exposure notification must be recent and not previously uploaded. Our construction is efficient, requiring only a single message to be broadcast at contact time no matter how many recipients are nearby. To notify contacts of potential infection, an infected user uploads data with size linear in the number of notifications, similar to other schemes. Linear upload complexity is not trivial with our assumptions and guarantees (a naive scheme would be quadratic). This linear complexity is achieved with a new primitive: zero knowledge subset proofs over commitments, which are used by our "no cloning" proof protocol. We also introduce another new primitive: set commitments on equivalence classes, which makes each step of our construction more efficient. Both of these new primitives are of independent interest.

Yi-Hsiu Chen, Yehuda Lindell
Published 2024-07-08

Fischlin's transform (CRYPTO 2005) is an alternative to the Fiat-Shamir transform that enables straight-line extraction when proving knowledge. In this work we focus on the problem of using the Fischlin transform to construct UC-secure zero-knowledge from Sigma protocols, since UC security – that guarantees security under general concurrent composition – requires straight-line (non-rewinding) simulators. We provide a slightly simplified transform that is much easier to understand, and present algorithmic and implementation optimizations that significantly improve the running time. It appears that the main obstacles to the use of Fischlin in practice are its computational cost and implementation complexity (with multiple parameters that need to be chosen). We provide clear guidelines and a simple methodology for choosing parameters, and show that with our optimizations the running time is far lower than expected. For just one example, on a 2023 MacBook, the cost of proving the knowledge of discrete log with Fischlin is only 0.41ms (on a single core). This is 15 times slower than plain Fiat-Shamir on the same machine, which is a significant multiple but objectively not significant in many applications. We also extend the transform so that it can be applied to batch proofs, and show how this can be much more efficient than individually proving each statement. We hope that this paper will both encourage and help practitioners implement the Fischlin transform where relevant.
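
To make the cost model concrete, the sketch below applies a heavily simplified Fischlin-style transform to the Schnorr proof of knowledge of a discrete logarithm: for each repetition the prover grinds over small challenges until a short hash evaluates to zero, which is exactly the work that dominates proving time. The toy group, the parameter choices and the omission of the minimum-hash fallback are all simplifications; this is not the optimized variant from the paper.

```python
# Simplified Fischlin-style straight-line-extractable proof of knowledge of w with h = g^w.
# Toy parameters; the fallback case (no zero hash found) is omitted for brevity.
import hashlib, secrets

p = 467; q = 233; g = 4          # toy group: g generates the order-q subgroup mod p
R, B, T = 8, 6, 12               # repetitions, hash bits that must be zero, challenge bits

def H(*args) -> int:
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (1 << B)

def prove(w, h):
    commits = [secrets.randbelow(q) for _ in range(R)]
    A = [pow(g, r, p) for r in commits]
    proof = []
    for i, r in enumerate(commits):
        for e in range(1 << T):                  # grind challenges until the hash is zero
            z = (r + e * w) % q
            if H(h, tuple(A), i, e, z) == 0:
                proof.append((A[i], e, z))
                break
        else:
            raise RuntimeError("no zero hash found; a full implementation handles this case")
    return proof

def verify(h, proof):
    A = tuple(a for a, _, _ in proof)
    return all(pow(g, z, p) == (a * pow(h, e, p)) % p and H(h, A, i, e, z) == 0
               for i, (a, e, z) in enumerate(proof))

w = secrets.randbelow(q); h = pow(g, w, p)
assert verify(h, prove(w, h))
```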

Ky Nguyen, David Pointcheval, Robert Schädlich
Published 2024-07-08

Decentralized Multi-Client Functional Encryption (DMCFE) extends basic functional encryption to multiple clients that do not trust each other. Each client can independently encrypt its plaintext inputs, which are then evaluated under the function embedded in the functional decryption key, itself defined by multiple parameter inputs. The clients keep control over these functions, since all of them must contribute to the generation of the functional decryption keys. Tags can be used in the ciphertexts and the keys to specify which inputs can be combined together. Like any encryption scheme, DMCFE provides privacy of the plaintexts. But the functions associated with the functional decryption keys might be sensitive too (e.g. a model in machine learning). The function-hiding property has thus been introduced to additionally protect the function evaluated during the decryption process.

In this paper, we provide new proof techniques to analyze a new concrete construction of function-hiding DMCFE for inner products, with strong security guarantees: the adversary can adaptively query multiple challenge ciphertexts and multiple challenge keys, with unbounded repetitions of the same tags in the ciphertext-queries and a fixed polynomially-large number of repetitions of the same tags in the key-queries. Previous constructions were proven secure in the selective setting only.
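
Concretely, for the inner-product functionality considered here, each client $i$ encrypts its input $x_i$ under a common tag, and a functional decryption key generated for parameters $y_1, \dots, y_n$ reveals only

$$ \langle x, y \rangle = \sum_{i=1}^{n} x_i\, y_i, $$

while the function-hiding property additionally keeps the coefficients $y_i$ hidden from the decryptor.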

Marcel Tiepelt, Christian Martin, Nils Maeurer
Published 2024-04-09

Transitioning from classically secure to quantum-secure key agreement protocols may require exchanging fundamental components, for example replacing a Diffie-Hellman-like key exchange with a key encapsulation mechanism (KEM). Accordingly, the corresponding security proof can no longer rely on the Diffie-Hellman assumption, thus invalidating the security guarantees. As a consequence, the security properties have to be re-proven under a KEM-based security notion.

We initiate the study of the LDACS key agreement protocol (Edition 01.01.00 from 25.04.2023), which is soon to be standardized by the International Civil Aviation Organization. The protocol's cipher suite features Diffie-Hellman as well as a KEM-based key agreement protocol to provide post-quantum security. While the former results in an instantiation of an ISO key agreement inheriting all security properties, the security achieved by the latter is ambiguous. We formalize the computational security using the systematic notions of de Saint Guilhem, Fischlin and Warinshi (CSF '20), and prove the exact security that the KEM-based variant achieves in this model: primarily entity authentication, key secrecy and key authentication. To further strengthen our “pen-and-paper” findings, we model the protocol and its security guarantees using Tamarin, providing an automated proof of the security against a Dolev-Yao attacker.

Charles Bouillaguet, Julia Sauvage
Published 2024-04-09

Biscuit is a recent multivariate signature scheme based on the MPC-in-the-Head paradigm. It has been submitted to the NIST competition for additional signature schemes. Signatures are derived from a zero-knowledge proof of knowledge of the solution of a structured polynomial system. This extra structure enables efficient proofs and compact signatures. This short note demonstrates that it also makes these polynomial systems easier to solve than random ones. As a consequence, the original parameters of Biscuit failed to meet the required security levels and had to be upgraded.

Manuel Barbosa, Deirdre Connolly, João Diogo Duarte, Aaron Kaiser, Peter Schwabe, Karolin Varner, Bas Westerbaan
Published 2024-04-09

X-Wing is a hybrid key-encapsulation mechanism based on X25519 and ML-KEM-768. It is designed to be the sensible choice for most applications. The concrete choice of X25519 and ML-KEM-768 allows X-Wing to achieve improved efficiency compared to using a generic KEM combiner. In this paper, we introduce the X-Wing hybrid KEM construction and provide a proof of security. We show (1) that X-Wing is a classically IND-CCA secure KEM if the strong Diffie-Hellman assumption holds in the X25519 nominal group, and (2) that X-Wing is a post-quantum IND-CCA secure KEM if ML-KEM-768 is itself an IND-CCA secure KEM and SHA3-256 is secure when used as a pseudorandom function. The first result is proved in the ROM, whereas the second one holds in the standard model. Loosely speaking, this means X-Wing is secure if either X25519 or ML-KEM-768 is secure. We stress that these security guarantees and optimizations are only possible due to the concrete choices that were made, and they may not apply in the general case.
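
Schematically, and up to the exact ordering, label and encoding fixed in the specification, the hybrid shared secret is obtained by hashing both components' secrets together with the X25519 ciphertext and public key:

$$ \mathit{ss} = \mathrm{SHA3\text{-}256}\big(\mathsf{label} \,\|\, \mathit{ss}_{\mathrm{ML\text{-}KEM}} \,\|\, \mathit{ss}_{\mathrm{X25519}} \,\|\, \mathit{ct}_{\mathrm{X25519}} \,\|\, \mathit{pk}_{\mathrm{X25519}}\big). $$

Notably, the large ML-KEM ciphertext is not fed into the hash, which is where the efficiency gain over a generic combiner comes from and why the concrete properties of ML-KEM-768 matter for the proof.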

Yehuda Lindell
Published 2024-04-09

In a multiparty signing protocol, also known as a threshold signature scheme, the private signing key is shared amongst a set of parties and only a quorum of those parties can generate a signature. Research on multiparty signing has been growing in popularity recently due to its application to cryptocurrencies. Most work has focused on reducing the number of rounds to two, and as a result such protocols (a) are not fully simulatable in the sense of MPC real/ideal security definitions, and/or (b) are not secure under concurrent composition, and/or (c) utilize non-standard assumptions of different types in their proofs of security. In this paper, we describe a simple three-round multiparty protocol for Schnorr signatures that is secure for any number of corrupted parties; i.e., in the setting of a dishonest majority. The protocol is fully simulatable, secure under concurrent composition, and proven secure in the standard model or random-oracle model (depending on the instantiations of the commitment and zero-knowledge primitives). The protocol realizes an ideal Schnorr signing functionality with perfect security in the ideal commitment and zero-knowledge hybrid model (and thus the only assumptions needed are for realizing these functionalities).

In our presentation, we do not assume that all parties begin with the message to be signed, the identities of the participating parties and a unique common session identifier, since this is often not the case in practice. Rather, the parties achieve consensus on these parameters as the protocol progresses.
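
For intuition, the combining step underlying multiparty Schnorr over an additive sharing $x = \sum_i x_i$ of the signing key (with the commitment and zero-knowledge rounds of the protocol omitted here) is the standard one:

$$ R = \prod_i R_i = \prod_i g^{r_i}, \qquad e = H(R, m), \qquad s = \sum_i s_i = \sum_i \left(r_i + e\, x_i\right), $$

so that $(R, s)$ verifies as an ordinary Schnorr signature under $X = g^{x}$ via $g^{s} = R \cdot X^{e}$.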

Pierrick Méaux, Jeongeun Park, Hilder V. L. Pereira
Published 2024-04-09

Fully Homomorphic Encryption (FHE) is a powerful tool to achieve non-interactive privacy preserving protocols with optimal computation/communication complexity. However, the main disadvantage is that the actual communication cost (bandwidth) is high due to the large size of FHE ciphertexts. As a solution, a technique called transciphering (also known as Hybrid Homomorphic Encryption) was introduced to achieve almost optimal bandwidth for such protocols. However, all existing works require clients to fix a precision for the messages or a mathematical structure for the message space beforehand. This results in unwanted constraints on the plaintext size or the underlying structure of FHE-based applications.

In this article, we introduce, for the first time, a transciphering approach that does not require a fixed message precision decided by the client. In more detail, a client uses any kind of FHE-friendly symmetric cipher over $\{0,1\}$ to send its input data encrypted bit-by-bit; the server can then choose a precision $p$ depending on the application and homomorphically transform the encrypted bits into FHE ciphertexts encrypting integers in $\mathbb{Z}_p$. To illustrate our new technique, we evaluate a transciphering using the FiLIP cipher and adapt the most practical homomorphic evaluation technique [CCS'22] to keep the latency practical. As a result, our proof-of-concept implementation for $p$ from $2^2$ to $2^8$ takes only from $13$ ms to $137$ ms.
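
At a high level (glossing over the concrete homomorphic operations used in the paper), once the server holds FHE encryptions of the individual bits $b_0, \dots, b_{k'-1}$ of a client value, choosing a precision $p = 2^{k'}$ amounts to homomorphically recombining those bits into a single ciphertext encrypting

$$ m = \sum_{i=0}^{k'-1} 2^{i}\, b_i \in \mathbb{Z}_p. $$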

Benjamin E. Diamond, Jim Posen
Published 2024-04-09

A fundamental result dating to Ligero (Des. Codes Cryptogr. '23) establishes that each fixed linear block code exhibits proximity gaps with respect to the collection of affine subspaces, in the sense that each given subspace either resides entirely close to the code, or else contains only a small portion which resides close to the code. In particular, any given subspace's failure to reside entirely close to the code is necessarily witnessed, with high probability, by a uniformly randomly sampled element of that subspace. We investigate a variant of this phenomenon in which the witness is not sampled uniformly from the subspace, but rather from a much smaller subset of it. We show that a logarithmic number of random field elements (in the dimension of the subspace) suffice to effect an analogous proximity test, with moreover only a logarithmic (multiplicative) loss in the possible prevalence of false witnesses. We discuss applications to recent noninteractive proofs based on linear codes, including Brakedown (CRYPTO '23).

Loïc Demange, Mélissa Rossi
Published 2024-04-09

BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of NIST's standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied in a paper by Chen et al. (SAC 2015). At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance.

In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit of the bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, discuss our different implementation choices, and propose a full proof of masking in the Ishai, Sahai, and Wagner (Crypto 2003) model.

More practically, we also provide an open-source C masked implementation of the key-generation, encapsulation and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than that of existing masked lattice-based algorithms, we show that masking at order 1, 2, 3, 4 and 5 implies a performance penalty of x5.8, x14.2, x24.4, x38 and x55.6 compared to order 0 (unmasked and unoptimized BIKE). This scaling is encouraging, and no Boolean-to-Arithmetic conversion has been used.
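
For readers unfamiliar with the security model, the sketch below shows the classic Ishai–Sahai–Wagner multiplication (AND) gadget on Boolean sharings at an arbitrary order; it illustrates the kind of gadget being composed and proven secure, not one of the BIKE-specific gadgets from the paper.

```python
# Classic ISW AND gadget: xor(isw_and(a, b)) == xor(a) & xor(b) for Boolean sharings a, b.
import secrets

def share(bit, n):
    """Split a bit into n XOR shares."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    last = bit
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

def isw_and(a, b):
    """ISW multiplication: returns XOR shares of unshare(a) & unshare(b)."""
    n = len(a)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)
            r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
    c = []
    for i in range(n):
        ci = a[i] & b[i]
        for j in range(n):
            if j != i:
                ci ^= r[i][j]
        c.append(ci)
    return c

order = 3                                        # number of shares = order + 1
a = share(1, order + 1); b = share(1, order + 1)
assert unshare(isw_and(a, b)) == 1
```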

Daniel J. Bernstein
Published 2024-04-09

This paper reviews, from bottom to top, a polynomial-time algorithm to correct $t$ errors in classical binary Goppa codes defined by squarefree degree-$t$ polynomials. The proof is factored through a proof of a simple Reed–Solomon decoder, and the algorithm is simpler than Patterson's algorithm. All algorithm layers are expressed as Sage scripts backed by test scripts. All theorems are formally verified. The paper also covers the use of decoding inside the Classic McEliece cryptosystem, including reliable recognition of valid inputs.

Akira Takahashi, Greg Zaverucha
Published 2024-04-09

Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.

Matteo Campanelli, Chaya Ganesh, Rosario Gennaro
Published 2024-04-09

We investigate proof systems where security holds against rational parties instead of malicious ones. Our starting point is the notion of rational arguments, a variant of rational proofs (Azar and Micali, STOC 2012) where security holds against rational adversaries that are also computationally bounded.

Rational arguments are an interesting primitive because they generally allow for very efficient protocols, and in particular sublinear verification (i.e. where the Verifier does not have to read the entire input). In this paper we aim at narrowing the gap between the literature on rational schemes and real-world applications. Our contribution is two-fold.

We provide the first construction of rational arguments for the class of polynomial computations that is practical (i.e., it can be applied to real-world computations on reasonably common hardware) and with logarithmic communication. Techniques-wise, we obtain this result through a compiler from information-theoretic protocols and rational proofs for polynomial evaluation. The latter could be of independent interest.

As a second contribution, we propose a new notion of extractability for rational arguments. Through this notion we can obtain arguments where knowledge of a witness is incentivized (rather than incentivizing mere soundness). We show how our aforementioned compiler can also be applied to obtain efficient extractable rational arguments for $\mathsf{NP}$.

Dan Boneh, Benedikt Bünz, Ben Fisch
Published 2024-04-09

A verifiable delay function (VDF) is an important tool used for adding delay in decentralized applications. This paper surveys and compares two beautiful verifiable delay functions, one due to Pietrzak and the other due to Wesolowski. In addition, we provide a new computational proof of security for one of them, present an attack on an incorrect implementation of the other, and compare the complexity assumptions needed for both schemes.
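
For concreteness, both schemes evaluate $y = x^{2^{T}}$ by $T$ sequential squarings in a group of unknown order. In Wesolowski's protocol the verifier samples a random prime $\ell$, the prover returns $\pi = x^{\lfloor 2^{T}/\ell \rfloor}$, and verification is the single check

$$ \pi^{\ell} \cdot x^{(2^{T} \bmod \ell)} \stackrel{?}{=} y, $$

whereas Pietrzak's protocol proves the same statement recursively by repeatedly halving the exponent, trading a longer proof for cheaper proof generation.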