Communications in Cryptology IACR CiC


28 results sorted by publication date
Chunzhi Zhao, Junqi Zhang, Jinzheng Cao, Qingfeng Cheng, Fushan Wei
Published 2024-10-07

At PKC 2009, May and Ritzenhofen proposed the implicit factorization problem (IFP). They showed that it is easy to factor two h-bit RSA moduli N1=p1q1, N2=p2q2, where q1, q2 are both αh-bit, and p1, p2 share uh > 2αh of the least significant bits (LSBs). Subsequent works mainly focused on extending the IFP to the cases where p1, p2 share some of the most significant bits (MSBs) or the middle bits (MBs). In this paper, we propose a novel generalized IFP in which p1 and p2 share an arbitrary number of bit blocks, with each block having a consistent displacement in its position between p1 and p2, and we solve it successfully based on Coppersmith's method. Specifically, we generate a new set of shift polynomials to construct the lattice and optimize the structure of the lattice by introducing a new variable z=p1. We derive that we can factor the two moduli in polynomial time when u > 2(n+1)α(1−α^(1/(n+1))), where p1, p2 share n blocks. Further, no matter how many blocks are shared, we can theoretically factor the two moduli as long as u > 2α·ln(1/α). In addition, we consider two other cases, where the positions of the shared blocks are arbitrary or where there are k > 2 known moduli, and we provide the corresponding solutions for both. Our work is verified by experiments.
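A quick numeric sanity check of these bounds (a minimal sketch assuming the stated formulas; α = 0.25 and the block counts n are example values):

```python
import math

def shared_bits_bound(alpha: float, n: int) -> float:
    """Fraction u of bits p1, p2 must share across n blocks:
    the paper's condition u > 2(n+1)*alpha*(1 - alpha^(1/(n+1)))."""
    return 2 * (n + 1) * alpha * (1 - alpha ** (1 / (n + 1)))

alpha = 0.25  # q1, q2 are alpha*h bits each
for n in (1, 2, 4, 8, 16):
    print(f"n={n:2d}: need u > {shared_bits_bound(alpha, n):.4f}")

# As n grows, the bound approaches the stated limit 2*alpha*ln(1/alpha).
print(f"limit: u > {2 * alpha * math.log(1 / alpha):.4f}")
```

For n = 1 this reproduces the original LSB condition u > 2α (here 0.5), and the bound grows toward 2α·ln(1/α) ≈ 0.693 as more blocks are shared.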

Sebastian Kolby, Divya Ravi, Sophia Yakoubov
Published 2024-10-07

YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number N of available parties, even when the adversary’s corruption threshold is linear in N (e.g. just under N/2). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. We demonstrate three different approaches: the first two (which we call YaOSO and YOSO-GLS) use two and three rounds of communication, respectively. Our third approach (which we call YOSO-LHSS) uses O(d) rounds, where d is the multiplicative depth of the circuit being evaluated; however, it can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHSS. Though YOSO-LHSS requires more rounds than our first two approaches, it may be more practical, since the zero knowledge proofs it employs are more efficient to instantiate. As a contribution of independent interest, we introduce a verifiable state propagation UC functionality, which allows parties to send private messages which are verifiably derived in the “correct” way (according to the protocol in question) to anonymous receivers. This is a natural functionality to build YOSO protocols on top of.

Ritam Bhaumik, André Chailloux, Paul Frixons, Bart Mennink, María Naya-Plasencia
Published 2024-10-07

In order to maintain a similar security level in a post-quantum setting, many symmetric primitives would have to double their keys and increase their state sizes. So far, no generic way of doing this is known that would provide convincing quantum security guarantees. In this paper we propose a new generic construction, QuEME, that allows one to double the key and the state size of a block cipher in such a way that a decent level of quantum security is guaranteed. The QuEME design is inspired by the ECB-Mix-ECB (EME) construction, but is defined for a different choice of mixing function than what we have seen before, in order to withstand a new quantum superposition attack that we introduce as a side result: this attack exploits a periodic property of collisions and breaks EME and a large class of its variants. We prove that QuEME achieves n-bit security in the classical setting, where n is the block size of the underlying block cipher, and at least (n/6)-bit security in the quantum setting. We finally propose a concrete instantiation of this construction, called Double-AES, that is built with variants of the standardized AES-128 block cipher.
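For intuition, here is a minimal sketch of the generic ECB-Mix-ECB shape that QuEME follows: one parallel block-cipher layer, a mixing layer, and a second block-cipher layer. The toy Feistel cipher and the swap-and-XOR mix below are illustrative placeholders, not the paper's primitives or its mixing function.

```python
import hashlib

def prf(key: bytes, x: bytes) -> int:
    """Round function for the toy cipher (not cryptographically vetted)."""
    return int.from_bytes(hashlib.sha256(key + x).digest()[:4], "big")

def toy_encrypt(key: bytes, block: int, rounds: int = 8) -> int:
    """Toy 64-bit block cipher: a keyed Feistel network."""
    l, r = block >> 32, block & 0xFFFFFFFF
    for i in range(rounds):
        l, r = r, l ^ prf(key, bytes([i]) + r.to_bytes(4, "big"))
    return (l << 32) | r

def eme_like_encrypt(k1, k2, k3, k4, left: int, right: int):
    """ECB layer -> mix -> ECB layer on a doubled (128-bit) state.
    The invertible mix (x, y) -> (x ^ y, x) is a placeholder, not QuEME's."""
    x, y = toy_encrypt(k1, left), toy_encrypt(k2, right)
    u, v = x ^ y, x
    return toy_encrypt(k3, u), toy_encrypt(k4, v)

c = eme_like_encrypt(b"k1", b"k2", b"k3", b"k4",
                     0x0123456789ABCDEF, 0xFEDCBA9876543210)
print([hex(w) for w in c])
```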

Sebastian Kolby, Elena Pagnin, Sophia Yakoubov
Published 2024-10-07

We study signatures well suited for sensitive applications (e.g. whistleblowing) where both the signer's anonymity and deniability are important. Two independent lines of work have tackled these two goals: ring signatures ensure the signer's anonymity (within a set of signers, called a ring), and — separately — multi designated verifier signatures ensure that all the intended recipients agree on whether a signature is valid, while maintaining the signer's deniability by preventing the intended recipients from convincing an outsider of the validity of the signature. In this paper, we introduce multi designated verifier ring signatures (MDVRS), which simultaneously offer both signer anonymity and deniability. This makes MDVRS uniquely suited for sensitive scenarios.

Following the blueprint of Damgård et al. (TCC '20) for multi designated verifier signatures, we introduce provably simulatable designated verifier ring signatures (PSDVRS) as an intermediate building block which we then compile into an MDVRS. We instantiate PSDVRS in a concretely efficient way from discrete-logarithm-based sigma protocols, encryption and commitments.

André Schrottenloher, Marc Stevens
Published 2024-10-07

In this paper we study search problems that arise very often in cryptanalysis: nested search problems, where each search layer has known degrees of freedom and/or constraints. A generic quantum solution for such problems consists of nesting Grover's quantum search algorithm or amplitude amplification (QAA) by Brassard et al., obtaining up to a square-root speedup over classical algorithms. However, the analysis of nested Grover or QAA is complex and introduces technicalities that previous works handle in a case-by-case manner. Moreover, straightforward nesting of $l$ layers multiplies the complexity by a constant factor $(\pi/2)^l$.

In this paper, we aim to remedy both these issues and introduce a generic framework and tools to transform a classical nested search into a quantum procedure. It improves the state of the art in three ways: 1) our framework results in quantum procedures that are significantly simpler to describe and analyze; 2) it reduces the overhead factor from $(\pi/2)^l$ to $\sqrt{l}$; 3) it is simpler to apply and optimize, without needing manual quantum analysis. We give generic complexity formulas and show that, for concrete instances, numerical optimizations yield further improvements, narrowing the gap to an exact quadratic speedup.
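The size of the overhead reduction is easy to see numerically (a small sketch comparing the two factors for a few nesting depths l):

```python
import math

# Multiplicative overhead of nesting l quantum search layers:
# straightforward nesting vs. the framework's claimed factor.
for l in (1, 2, 3, 5, 8):
    naive = (math.pi / 2) ** l
    framework = math.sqrt(l)
    print(f"l={l}: (pi/2)^l = {naive:6.2f}   sqrt(l) = {framework:4.2f}")
```

Already at l = 5 the straightforward factor exceeds 9.5, while the framework's factor stays below 2.3.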

We demonstrate our framework by giving a tighter analysis of quantum attacks on reduced-round AES.

Nima Mahdion, Elisabeth Oswald
Published 2024-10-07

Software implementations of cryptographic algorithms often use masking schemes as a countermeasure against side-channel attacks. A number of recent results clearly show the challenge of implementing masking schemes in such a way that (unforeseen) micro-architectural effects do not cause masking flaws that undermine the intended security goal of an implementation. So far, utilising a higher-order version of the non-specific (fixed-vs-random) input test of the Test Vector Leakage Assessment (TVLA) framework has been the best option to identify such flaws. The drawbacks of this method are its significant computational cost and its inability to pinpoint which interaction of masking shares leads to the flaw. In this paper we propose a novel version, the fixed-vs-random shares test, to tackle both drawbacks. We explain our method and show its application to three case studies, where each time it outperforms its conventional TVLA counterpart. The drawback of our method is that it requires control over the shares, which, we argue, is practically feasible in the context of in-house evaluation and testing for software implementations.

Lichao Wu, Sébastien Tiran, Guilherme Perin, Stjepan Picek
Published 2024-10-07

Side-channel Collision Attacks (SCCAs) are a classical class of attacks that exploit information dependencies leaked during cryptographic operations. Unlike collision attacks that seek instances where two different inputs to a cryptographic algorithm yield identical outputs, SCCAs specifically target the internal state, where identical outputs are more likely. Although an SCCA does not rely on a pre-assumed leakage model, it explicitly operates on precise trace segments reflecting the target operation, which is challenging when the leakage measurements are noisy. Moreover, its attack performance may vary dramatically, as it relies on selecting a reference byte (and its corresponding leakages) to “collide” other bytes; a poor selection would leave many bytes unrecoverable. These two facts make its real-world application problematic.

This paper addresses these challenges by introducing a novel plaintext-based SCCA. We leverage the bijective relationship between plaintext and secret data, using plaintexts as labels to train profiling models that depict the leakages of different operations. By comparing the leakage representations produced by the profiling model, instead of the leakage segments themselves, all secret key differences can be revealed simultaneously without processing the leakage traces. Furthermore, we propose a novel error-correction scheme that rectifies false predictions. Experimental results show that our approach significantly surpasses the state-of-the-art SCCA in both attack performance and computational complexity (e.g., training time reduced from approximately three hours to five minutes). These findings underscore our method's effectiveness and practicality in real-world attack scenarios.

Ruize Wang, Kalle Ngo, Joel Gärtner, Elena Dubrova
Published 2024-10-07

Most previous attacks on Dilithium exploit side-channel information leaked during the computation of the polynomial multiplication cs1, where s1 is a small-norm secret and c is a verifier's challenge. In this paper, we present a new attack utilizing leakage during secret key unpacking in the signing algorithm. The unpacking is also used in other post-quantum cryptographic algorithms, including Kyber, because the inputs and outputs of their API functions are byte arrays. Exploiting leakage during unpacking is more challenging than exploiting leakage during the computation of cs1, since c varies for each signing while the unpacked secret key remains constant. Therefore, post-processing is required in the latter case to recover the full secret key. We present two variants of post-processing. In the first one, half of the coefficients of the secret s1 and the error s2 are recovered by profiled deep-learning-assisted power analysis, and the rest are derived by solving linear equations based on t = As1 + s2, where A and t are parts of the public key. This variant assumes knowledge of the least significant bits of t, denoted t0. The second variant uses lattice reduction to derive s1 without knowledge of t0, but it needs a larger portion of s1 to be recovered by power analysis. We evaluate both variants on an ARM Cortex-M4 implementation of Dilithium-2. The experiments show that the attack assuming knowledge of t0 can recover s1 with non-negligible probability from a single trace captured from a device other than the profiling device.
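To see the role of the linear relation, here is a toy sketch (plain integer matrices with made-up dimensions, not Dilithium's polynomial rings or parameters) of how the error s2 follows from t = As1 + s2 once s1 is known:

```python
import random

q, n, m = 8380417, 4, 4   # Dilithium's modulus, but toy matrix dimensions
random.seed(1)
A  = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s1 = [random.randrange(-2, 3) for _ in range(n)]   # small-norm secret
s2 = [random.randrange(-2, 3) for _ in range(m)]   # small-norm error
t  = [(sum(A[i][j] * s1[j] for j in range(n)) + s2[i]) % q for i in range(m)]

# If side-channel analysis recovers s1 (and t, including t0, is known),
# the error follows linearly: s2 = t - A*s1 (mod q), lifted to small values.
rec = [(t[i] - sum(A[i][j] * s1[j] for j in range(n))) % q for i in range(m)]
rec = [v - q if v > q // 2 else v for v in rec]
assert rec == s2
```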

Aron van Baarsen, Marc Stevens
Published 2024-10-07

Private set intersection (PSI) is a cryptographic functionality for two parties to learn the intersection of their input sets, without leaking any other information. Circuit-PSI is a stronger PSI functionality where the parties learn only a secret-shared form of the desired intersection, thus without revealing the intersection directly. These secret shares can subsequently serve as input to a secure multiparty computation of any function on this intersection.

In this paper we consider several settings in which parties take part in multiple Circuit-PSI executions with the same input set, and aim to amortize communications and computations. To that end, we build up a new framework for Circuit-PSI around generalizations of oblivious (programmable) PRFs that are extended with offline setup phases. We present several efficient instantiations of this framework with new security proofs for this setting. As a side result, we obtain a slight improvement in communication and computation complexity over the state-of-the-art semi-honest Circuit-PSI protocol by Bienstock et al. (USENIX '23). Additionally, we present a novel Circuit-PSI protocol from a PRF with secret-shared outputs, which has linear communication and computation complexity in the parties' input set sizes, and is able to realize a stronger security notion. Lastly, we derive the potential amortizations over multiple protocol executions, and observe that each of the presented instantiations is favorable in at least one of the multiple-execution settings.
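As background, a minimal sketch of a Diffie-Hellman-based oblivious PRF, the kind of primitive these frameworks build on, used here for plain (non-circuit) PSI. The group parameters are toy values, and Circuit-PSI would additionally secret-share the comparison step, which this sketch omits.

```python
import hashlib, random

# Toy DH-based OPRF: F_k(x) = H(x)^k in the order-q subgroup of Z_p*.
# p = 2q + 1 is a tiny safe prime, for illustration only.
p, q = 1019, 509

def H(x: bytes) -> int:
    """Hash into the order-q subgroup (squaring lands among the QRs)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(max(h, 2), 2, p)

def oprf_eval(k: int, x: bytes) -> int:
    """One oblivious evaluation: the client learns F_k(x), nothing else."""
    r = random.randrange(1, q)
    a = pow(H(x), r, p)               # client blinds H(x)
    b = pow(a, k, p)                  # server raises the blinded point to k
    return pow(b, pow(r, -1, q), p)   # client unblinds: H(x)^k

k = random.randrange(1, q)            # server's PRF key
client_set = [b"alice", b"bob", b"carol"]
server_set = [b"bob", b"dave", b"carol"]
server_prfs = {pow(H(y), k, p) for y in server_set}
print([x for x in client_set if oprf_eval(k, x) in server_prfs])
# -> [b'bob', b'carol']
```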

Balthazar Bauer, Pooya Farshim, Patrick Harasser, Markulf Kohlweiss
Published 2024-10-07

The generic-group model (GGM) and the algebraic-group model (AGM) have been exceptionally successful in proving the security of many classical and modern cryptosystems. These models, however, come with standard-model uninstantiability results, raising the question of whether the schemes analyzed under them can be based on firmer standard-model footing.

We formulate the uber-knowledge (UK) assumption, a standard-model assumption that naturally extends the uber-assumption family to knowledge-type problems. We justify the soundness of UK in both the bilinear GGM and the bilinear AGM. Along the way we extend these models to account for hashing into groups, an adversarial capability that is available in many concrete groups. In contrast to standard assumptions, hashing may affect the validity of knowledge assumptions. These results, in turn, enable a modular approach to security in the GGM and the AGM.

As example applications, we use the UK assumption to prove knowledge soundness of Groth's zero-knowledge SNARK (EUROCRYPT 2016) and of KZG polynomial commitments (ASIACRYPT 2010) in the standard model, where for the former we reuse the existing proof in the AGM without hashing.

Samuel Jaques
Published 2024-10-07

The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but require exponential memory. Memory access costs much more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to route data during lattice sieving, we modify existing algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM-model cost of $2^{0.2925d+o(d)}$ to sieve a $d$-dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach $2^{0.3113d+o(d)}$ in 2 spatial dimensions, where “spatial dimensions” are the dimensions of the physical geometry in which the computer exists.

Since this result implies an increased cost in 2 spatial dimensions, we make several assumptions about the constant terms of memory access and lattice attacks so that we can give bit security estimates for Kyber and Dilithium. These estimates support but do not increase the security categories claimed in the Kyber and Dilithium specifications, at least with respect to known attacks.
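For a rough feel of the gap between the two exponents (a sketch that drops the o(d) terms and all constants):

```python
# Asymptotic sieve-cost exponents from the paper, for a few dimensions d.
for d in (400, 600, 800):
    bits_3d = 0.2925 * d   # 3+ spatial dimensions (matches the RAM-model exponent)
    bits_2d = 0.3113 * d   # 2 spatial dimensions
    print(f"d={d}: ~2^{bits_3d:.0f} (3D) vs ~2^{bits_2d:.0f} (2D), "
          f"gap {bits_2d - bits_3d:.0f} bits")
```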

Avishek Majumder, Sayantan Mukherjee
Published 2024-10-07

Broadcast Encryption (BE) allows a sender to send an encrypted message to multiple receivers. In a typical broadcast encryption scenario, the broadcaster decides the set of users who can decrypt a particular ciphertext (denoted as the privileged set). Gritti et al. (IJIS'16) introduced a new primitive called Broadcast Encryption with Dealership (BrED), where the dealer decides the privileged set. A BrED scheme allows a dealer to buy content from the broadcaster and sell it to users. It provides better flexibility in managing a large user base. To date, quite a few different constructions of BrED schemes have been proposed by the research community.

We find that all existing BrED schemes are insecure under the existing security definitions, and we demonstrate a concrete attack on each of them within the purview of those definitions. We also find that the security definitions proposed in the state-of-the-art BrED schemes do not capture the real world. We argue that the existing definitions are inadequate and propose a new security definition that models the real world more closely. Finally, we propose a new BrED construction and prove it secure in our newly proposed security model.

Diego F. Aranha, Georgios Fotiadis, Aurore Guillevic
Published 2024-10-07

For more than two decades, pairings have been a fundamental tool for designing elegant cryptosystems, varying from digital signature schemes to more complex privacy-preserving constructions. However, the advancement of quantum computing threatens to undermine public-key cryptography. Concretely, it is widely accepted that a future large-scale quantum computer would be capable of breaking any public-key cryptosystem used today, rendering today's public-key cryptography obsolete and mandating the transition to quantum-safe cryptographic solutions. This necessity is recognized by numerous government bodies around the world, including NIST, which initiated the first open competition for standardizing post-quantum (PQ) cryptographic schemes, focusing primarily on digital signatures and key encapsulation/public-key encryption schemes. Despite the current efforts in standardizing PQ primitives, the landscape of complex, privacy-preserving cryptographic protocols, e.g., zkSNARKs/zkSTARKs, is at an early stage. Existing solutions suffer from various disadvantages in terms of efficiency and compactness and, in addition, need to undergo the required scrutiny to gain the necessary trust in the academic and industrial domains. Therefore, it is believed that the migration to purely quantum-safe cryptography will require an intermediate step in which current classically secure protocols and quantum-safe solutions co-exist. This is reflected in the report on the Commercial National Security Algorithm Suite version 2.0, which mandates a transition to quantum-safe cryptographic algorithms by 2033 and suggests incorporating ECC at the 192-bit security level in the meantime. To this end, the present paper provides a comprehensive study of pairings at the 192-bit security level. We start with an exhaustive review of the literature to collect all recommendations of such pairing constructions, from which we extract the most promising candidates in terms of efficiency and security with respect to the advanced Special TNFS attacks. Our analysis focuses not only on the pairing computation itself, but also on additional operations that are relevant in pairing-based applications, such as hashing to pairing groups, cofactor clearing and subgroup membership testing. We implement all functionalities of the most promising candidates within the RELIC cryptographic toolkit in order to identify the most efficient pairing implementation at the 192-bit security level, and we provide extensive experimental results.

Anne Broadbent, Martti Karvonen, Sébastien Lord
Published 2024-10-07

The famous no-cloning principle has been shown recently to enable a number of uncloneable cryptographic primitives, including the copy-protection of certain functionalities. Here we address for the first time unkeyed quantum uncloneability, via the study of a complexity-theoretic tool that enables a computation but is natively unkeyed: quantum advice. Remarkably, this is an application of the no-cloning principle in a context where the quantum states of interest are not chosen by a random process. We establish unconditional constructions for promise problems admitting uncloneable quantum advice and, assuming the feasibility of quantum copy-protecting certain functions, for languages with uncloneable advice. Along the way, we note that state complexity classes, introduced by Rosenthal and Yuen (ITCS 2022), which concern the computational difficulty of synthesizing sequences of quantum states, can be naturally generalized to obtain state cloning complexity classes. We make initial observations on these classes, notably obtaining a result analogous to the existence of undecidable problems.

Our proof technique defines and constructs ingenerable sequences of finite bit strings, essentially meaning that they cannot be generated by any uniform circuit family with non-negligible probability. We then prove a generic result showing that the difficulty of accomplishing a computational task on uniformly random inputs implies its difficulty on any fixed, ingenerable sequence. We use this result to derandomize quantum cryptographic games that relate to cloning, and then incorporate a result of Kundu and Tan (arXiv 2022) to obtain uncloneable advice. Applying this two-step process to a monogamy-of-entanglement game yields a promise problem with uncloneable advice, and applying it to the quantum copy-protection of pseudorandom functions with super-logarithmic output lengths yields a language with uncloneable advice.

Ruixiao Li, Hayato Yamana
Published 2024-10-07

To address security issues in cloud computing, fully homomorphic encryption (FHE) enables a third party to evaluate functions using ciphertexts that do not leak information to the cloud server. The remaining problems of FHE include high computational costs and limited arithmetic operations, supporting only additions and multiplications. Arbitrary functions can be evaluated using a precomputed lookup table (LUT), which is one solution to these problems. Previous studies proposed LUT-enabled computation methods 1) with bit-wise FHE and 2) with word-wise FHE. The performance of LUT-enabled computation with bit-wise FHE drops quickly when evaluating BigNum functions because its complexity is O(s·2^d·m), where m represents the number of inputs and d and s represent the bit lengths of the inputs and outputs, respectively. Thus, LUT-enabled computation with word-wise FHE, which handles a set of bits with one operation, has also been proposed; however, previous studies are limited to evaluating multivariate functions of at most two inputs and cannot speed up the evaluation when the domain size of the integer exceeds 2N, where N is the number of elements packed into a single ciphertext. In this study, we propose a non-interactive model, in which no decryption is required, to evaluate arbitrary multivariate functions using homomorphic table lookup with word-wise FHE. The proposed LUT-enabled computation method 1) decreases the complexity to O(2^d·m/l), where l is the element size of FHE packing; 2) extends the input and output domain sizes to evaluate multivariate functions with more than two inputs; and 3) adopts a multidimensional table to enable multithreading and reduce latency. The experimental results demonstrate that evaluating a 10-bit two-input function and a 5-bit three-input function take approximately 90.5 and 105.5 seconds with 16 threads, respectively. Our proposed method achieves 3.2x and 23.1x speedups for evaluating two-bit and three-bit 3-input functions compared with naive LUT-enabled computation with bit-wise FHE.
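For scale, a sketch of the table sizes behind the quoted experiments: a lookup table for an m-input function with d-bit inputs has 2^(d·m) entries, and packing l table elements per ciphertext divides the number of ciphertexts to touch (the packing size below is an assumed example value, not the paper's parameter):

```python
l = 4096  # assumed number of elements packed into one ciphertext
for d, m in ((10, 2), (5, 3)):  # the 10-bit two-input and 5-bit three-input cases
    entries = 2 ** (d * m)
    print(f"d={d}, m={m}: {entries} LUT entries ~ {entries // l} packed ciphertexts")
```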

Nilanjan Datta, Avijit Dutta, Eik List, Sougata Mandal
Published 2024-07-08

There has been a notable surge of research on leakage-resilient authenticated encryption (AE) schemes, in the bounded as well as the unbounded leakage model. The latter has garnered significant attention due to its detailed and practical orientation. Designers have commonly utilized (tweakable) block ciphers, exemplified by the TEDT scheme, achieving $\mathcal{O}(n-\log(n^2))$-bit integrity under leakage and comparable AE security in the black-box setting. However, the privacy of TEDT was limited by $n/2$-bits under leakage; TEDT2 sought to overcome these limitations by achieving improved security with $\mathcal{O}(n-\log n)$-bit integrity and privacy under leakage.

This work introduces FEDT, an efficient leakage-resilient authenticated encryption (AE) scheme based on a forkcipher. Compared to the state-of-the-art schemes TEDT and TEDT2, which process messages at a rate of $1/2$ block per primitive call for encryption and one for authentication, FEDT doubles their rates at the price of a different primitive. FEDT employs a more parallelizable tree-based encryption compared to its predecessors while maintaining $\mathcal{O}(n-\log n)$-bit security for both privacy and integrity under leakage. FEDT prioritizes high throughput at the cost of increased latency. For settings where latency is important, we propose FEDT*, which combines the authentication part of FEDT with CTR-based encryption. FEDT* offers security equivalent to FEDT while increasing the encryption rate to $4/3$ and reducing the latency.

Thomas Attema, Aron van Baarsen, Stefan van den Berg, Pedro Capitão, Vincent Dunning, Lisa Kohl
Published 2024-07-08

Despite much progress, general-purpose secure multi-party computation (MPC) with active security may still be prohibitively expensive in settings with large input datasets. This particularly applies to the secure evaluation of graph algorithms, where each party holds a subset of a large graph. Recently, Araki et al. (ACM CCS '21) showed that dedicated solutions may provide significantly better efficiency if the input graph is sparse. In particular, they provide an efficient protocol for the secure evaluation of “message passing” algorithms, such as the PageRank algorithm. Their protocol's computation and communication complexity are both $\tilde{O}(M\cdot B)$ instead of the $O(M^2)$ complexity achieved by general-purpose MPC protocols, where $M$ denotes the number of nodes and $B$ the (average) number of incoming edges per node. On the downside, their approach achieves only a relatively weak security notion: $1$-out-of-$3$ malicious security with selective abort.

In this work, we show that PageRank can instead be captured efficiently as a restricted multiplication straight-line (RMS) program, and present a new actively secure MPC protocol tailored to handle RMS programs. In particular, we show that the local knowledge of the participants can be leveraged towards the first maliciously-secure protocol with communication complexity linear in $M$, independently of the sparsity of the graph. We present two variants of our protocol. In our communication-optimized protocol, going from semi-honest to malicious security only introduces a small communication overhead, but results in quadratic computation complexity $O(M^2)$. In our balanced protocol, we still achieve a linear communication complexity $O(M)$, although with worse constants, but a significantly better computational complexity scaling with $O(M\cdot B)$. Additionally, our protocols achieve security with identifiable abort and can tolerate up to $n-1$ corruptions.
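For reference, a plaintext sketch of the message-passing PageRank computation that such protocols evaluate securely; each iteration performs O(M·B) work, with one message per edge:

```python
# Toy directed graph as (src, dst) edges; every node has out-degree >= 1.
edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]
M, damping, iters = 4, 0.85, 30

out_deg = [0] * M
for s, _ in edges:
    out_deg[s] += 1

rank = [1.0 / M] * M
for _ in range(iters):
    incoming = [0.0] * M
    for s, t in edges:                    # each node "sends" rank/out_deg
        incoming[t] += rank[s] / out_deg[s]
    rank = [(1 - damping) / M + damping * r for r in incoming]
print([round(r, 3) for r in rank])
```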

Gaëtan Cassiers, Loïc Masure, Charles Momin, Thorben Moos, Amir Moradi, François-Xavier Standaert
Published 2024-07-08

Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed. The common denominator among all of them is the implicit demand for robust, high-entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security tradeoff of masked implementations. This is especially relevant in the case of hardware masking schemes, which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed, with properties suitable for masked hardware implementations. In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an unrolled fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes. According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle and at the same or lower cost than Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers. As a result of our studies, we provide an evidence-based estimate for the cost of securely generating $n$ fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as $20n$ to $30n$ ASIC gate equivalents (GE) or $3n$ to $4n$ FPGA look-up tables (LUTs), where $n$ is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
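To illustrate the LFSR caveat, a sketch of an unrolled 32-bit Fibonacci LFSR emitting several bits per clock. The tap positions (32, 22, 2, 1) are a standard maximal-length example; every emitted bit is an XOR (hence linear) function of the initial state, which is the property behind the harmful randomness cancellations demonstrated in the paper:

```python
def lfsr_unrolled(state: int, n_bits: int):
    """One 'clock' of an unrolled 32-bit Fibonacci LFSR emitting n_bits at once."""
    out = 0
    for _ in range(n_bits):
        fb = ((state >> 31) ^ (state >> 21) ^ (state >> 1) ^ state) & 1
        out = (out << 1) | (state & 1)        # output the bit shifting out
        state = (state >> 1) | (fb << 31)     # shift in the linear feedback
    return out, state

bits, st = lfsr_unrolled(0xACE1ACE1, 8)
print(f"{bits:08b}", hex(st))
```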

Gil Segev, Liat Shapira
Published 2024-07-08

In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability.

Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce. Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.

Anis Bkakria, Malika Izabachène
Published 2024-07-08

Pattern matching methods are essential in various applications where users must disclose highly sensitive information. Among these applications are genomic data analysis, financial records inspection, and intrusion detection processes, all of which necessitate robust privacy protection mechanisms. Balancing the imperative of protecting the confidentiality of analyzed data with the need for efficient pattern matching presents a significant challenge.

In this paper, we propose an efficient post-quantum secure construction that enables arbitrary pattern matching over encrypted data while ensuring the confidentiality of the data to be analyzed. In addition, we address scenarios where a malicious data sender, who is supposed to send encrypted content for pattern-detection analysis, has the ability to modify the encrypted content. We adapt the data fragmentation technique to handle such a malicious sender. Our construction makes use of a homomorphic encryption packing method well suited to fragmented streams and combines homomorphic operations in a leveled mode (i.e. without bootstrapping) to obtain a very efficient pattern matching detection process.

In contrast to the most efficient state-of-the-art scheme, our construction achieves a significant reduction in the time required for encryption, decryption, and pattern matching on encrypted data. Specifically, our approach decreases the time by factors of $1850$, $10^6$, and $245$, respectively, for matching a single pattern, and by factors of $115$, $10^5$, and $12$, respectively, for matching $2^{10}$ patterns.

Yi-Hsiu Chen, Yehuda Lindell
Published 2024-07-08

Fischlin's transform (CRYPTO 2005) is an alternative to the Fiat-Shamir transform that enables straight-line extraction when proving knowledge. In this work we focus on the problem of using the Fischlin transform to construct UC-secure zero-knowledge from Sigma protocols, since UC security (which guarantees security under general concurrent composition) requires straight-line (non-rewinding) simulators. We provide a slightly simplified transform that is much easier to understand, and present algorithmic and implementation optimizations that significantly improve the running time. It appears that the main obstacles to the use of Fischlin's transform in practice are its computational cost and implementation complexity (with multiple parameters that need to be chosen). We provide clear guidelines and a simple methodology for choosing parameters, and show that with our optimizations the running time is far lower than expected. For just one example, on a 2023 MacBook, the cost of proving knowledge of a discrete log with Fischlin is only 0.41 ms (on a single core). This is 15 times slower than plain Fiat-Shamir on the same machine: a significant multiple, yet objectively insignificant in many applications. We also extend the transform so that it can be applied to batch proofs, and show how this can be much more efficient than individually proving each statement. We hope that this paper will both encourage and help practitioners implement the Fischlin transform where relevant.
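A minimal sketch of the Fischlin-style grinding loop for discrete log over a toy group (simplified relative to the actual transform and to the paper's optimizations; group size, row count, and hash-difficulty parameters are illustrative only). Each row searches for a Schnorr challenge whose transcript hashes to zero, so the verifier sends nothing and extraction needs no rewinding:

```python
import hashlib, random

p, q, g = 1019, 509, 4            # toy group: g generates the order-q subgroup
x = random.randrange(q)           # witness (discrete log)
X = pow(g, x, p)                  # statement X = g^x

B, ROWS = 6, 8                    # leading-zero bits per row, number of rows (toy)

def row_hash(i: int, a: int, e: int, z: int) -> int:
    data = f"{X}|{i}|{a}|{e}|{z}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - B)

proof = []
for i in range(ROWS):
    r = random.randrange(q)
    a = pow(g, r, p)              # Schnorr commitment
    e = 0
    while True:                   # grind challenges until the hash is all zeros
        z = (r + e * x) % q       # Schnorr response for challenge e
        if row_hash(i, a, e, z) == 0:
            break
        e += 1
    proof.append((a, e, z))

# Verification: each row must be an accepting Schnorr transcript with a zero hash.
ok = all(pow(g, z, p) == a * pow(X, e, p) % p and row_hash(i, a, e, z) == 0
         for i, (a, e, z) in enumerate(proof))
print("proof verifies:", ok)
```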

Kemal Bicakci, Kemal Ulker, Yusuf Uzunay, Halis Taha Şahin, Muhammed Said Gündoğan
Published 2024-07-08

The adversary model of white-box cryptography includes an extreme case where the adversary, sitting at the endpoint, has full access to a cryptographic scheme. Motivated by the fact that most existing white-box implementations focus on symmetric encryption, we present implementations for hash-based signatures in which security against white-box attackers (who have read-only access to data of size bounded by a space-hardness parameter M) depends on the availability of a white-box secure cipher (in addition to a general one-way function). We also introduce parameters and key-generation complexity results for white-box secure instantiations of the stateless hash-based signature scheme SPHINCS+, one of the NIST selections for quantum-resistant digital signature algorithms, and of its older version SPHINCS. We further present a hash-tree-based solution for one-time passwords that is secure in a white-box attacker context. We implement the proposed solutions and share our performance results.
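As background on hash-based one-time passwords, a sketch of the classic Lamport hash-chain scheme; the paper's construction is tree-based and hardened against white-box attackers, which this sketch does not capture:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

N = 1000
seed = b"user-secret-seed"
chain = [seed]
for _ in range(N):
    chain.append(h(chain[-1]))    # chain[i] = H^i(seed)
server_state = chain[N]           # the server stores only the chain tip

def login(otp: bytes) -> bool:
    """Accept if hashing the presented OTP gives the stored tip, then roll back."""
    global server_state
    if h(otp) == server_state:
        server_state = otp        # next expected OTP is one step earlier
        return True
    return False

print(login(chain[N - 1]), login(chain[N - 2]), login(chain[N - 2]))
# -> True True False (each preimage works exactly once)
```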

Jules Maire, Damien Vergnaud
Published 2024-04-09

We present new secure multi-party computation protocols for linear algebra over a finite field, which improve the state of the art in terms of security. We look at the case of unconditional security with perfect correctness, i.e., information-theoretic security without errors. We notably propose an expected constant-round protocol for solving systems of $m$ linear equations in $n$ variables over $\mathbb{F}_q$ with expected complexity $O(kn^{2.5} + km)$ (where complexity is measured in terms of the number of secure multiplications required) with $k > m(m+n)+1$. The previous proposals were not error-free: known protocols can indeed fail and thus reveal information with probability $\Omega(\mathrm{poly}(m)/q)$. Our protocols are simple and rely on existing computer-algebra techniques, notably the Preparata-Sarwate algorithm, a simple but poorly known “baby-step giant-step” method for computing the characteristic polynomial of a matrix, and techniques due to Mulmuley for error-free linear algebra in positive characteristic.
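A plain-arithmetic sketch of the baby-step giant-step characteristic-polynomial idea: obtain the power sums tr(A^k) from about √n matrix products plus O(n²) trace-of-product evaluations, then recover the coefficients with Newton's identities. Exact rationals stand in for the field arithmetic here, so this ignores the positive-characteristic issues that the paper handles via Mulmuley's techniques:

```python
from fractions import Fraction
from math import isqrt

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_of_product(X, Y):
    """tr(X*Y) as sum X[i][j]*Y[j][i]: O(n^2), no full product needed."""
    n = len(X)
    return sum(X[i][j] * Y[j][i] for i in range(n) for j in range(n))

def char_poly(A):
    """Coefficients of det(xI - A), highest degree first."""
    n = len(A)
    r = isqrt(n) + 1
    baby = [[[Fraction(i == j) for j in range(n)] for i in range(n)]]  # A^0
    for _ in range(r):
        baby.append(mat_mul(baby[-1], A))          # baby steps A^1 .. A^r
    giant = [baby[0]]
    for _ in range(r):
        giant.append(mat_mul(giant[-1], baby[r]))  # giant steps A^r, A^2r, ...
    p = [trace_of_product(giant[k // r], baby[k % r]) for k in range(1, n + 1)]
    e = [Fraction(1)]   # Newton's identities: k*e_k = sum (-1)^(i-1) e_(k-i) p_i
    for k in range(1, n + 1):
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i - 1]
                     for i in range(1, k + 1)) / k)
    return [(-1) ** k * e[k] for k in range(n + 1)]

A = [[Fraction(v) for v in row] for row in [[2, 1], [1, 3]]]
print([int(c) for c in char_poly(A)])   # [1, -5, 5] -> x^2 - 5x + 5
```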

Akira Takahashi, Greg Zaverucha
Published 2024-04-09

Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.

Matteo Campanelli, Chaya Ganesh, Rosario Gennaro
Published 2024-04-09

We investigate proof systems where security holds against rational parties instead of malicious ones. Our starting point is the notion of rational arguments, a variant of rational proofs (Azar and Micali, STOC 2012) where security holds against rational adversaries that are also computationally bounded.

Rational arguments are an interesting primitive because they generally allow for very efficient protocols, and in particular sublinear verification (i.e. where the Verifier does not have to read the entire input). In this paper we aim at narrowing the gap between the literature on rational schemes and real-world applications. Our contribution is two-fold.

We provide the first construction of rational arguments for the class of polynomial computations that is practical (i.e., it can be applied to real-world computations on reasonably common hardware) and has logarithmic communication. On the technical side, we obtain this result through a compiler from information-theoretic protocols and rational proofs for polynomial evaluation. The latter may be of independent interest.

As a second contribution, we propose a new notion of extractability for rational arguments. Through this notion we can obtain arguments where knowledge of a witness is incentivized (rather than incentivizing mere soundness). We show how our aforementioned compiler can also be applied to obtain efficient extractable rational arguments for $\mathsf{NP}$.

Samuel Bouaziz–Ermann, Alex B. Grilo, Damien Vergnaud, Quoc-Huy Vu
Published 2024-04-09

There has been recent interest in proposing quantum protocols whose security relies on weaker computational assumptions than their classical counterparts. Importantly for our work, it has recently been shown that public-key encryption (PKE) from one-way functions (OWF) is possible if we consider quantum public keys. Notice that we do not expect classical PKE from OWF given the impossibility results of Impagliazzo and Rudich (STOC '89).

However, the distribution of quantum public keys is a challenging task. Therefore, the main question that motivates our work is whether quantum PKE from OWF is possible with classical public keys. Such protocols are impossible if the ciphertexts are also classical, given the impossibility result of Austrin et al. (CRYPTO '22) for quantum-enhanced key agreement (KA) with classical communication.

In this paper, we focus on a black-box separation for PKE with classical public keys and quantum ciphertexts from OWF under the polynomial compatibility conjecture, first introduced by Austrin et al. More precisely, we show the separation when the decryption algorithm of the PKE does not query the OWF. We prove our result by extending the techniques of Austrin et al., and we show an attack on KA in an extended classical communication model where the last message in the protocol can be a quantum state.