Communications in Cryptology IACR CiC

Volume 2, Issue 2

Published 2025-07-07

Baloo: Algebraic Lookup Arguments

Arantxa Zapico, Ariel Gabizon, Dmitry Khovratovich, Mary Maller, Carla Ràfols

Research paper

We present Baloo, a protocol for lookup tables where the prover work is linear in the number of lookups and independent of the table size. Baloo builds on previous lookup arguments and the framework for SNARKs of Ràfols and Zapico (CRYPTO 2021). Our protocol supports commit-and-prove expansions: the prover selects the subtable containing the elements used in the lookup, which is unknown to the verifier, commits to it, and later proves its relation to the committed elements. This feature makes Baloo especially suitable for proving input-output relations on hash functions, and in particular for instantiating the Ethereum Virtual Machine (EVM).

On the Security of Group Ring Learning with Errors

Andrew Mendelsohn, Charles Grover, Cong Ling

Research paper

We propose a dimension-reducing transformation on Group Ring Learning with Errors (GRLWE) samples. We exhibit an efficiently computable isomorphism which takes samples defined over the group rings used in the construction of GRLWE to twice as many samples defined over matrix rings, in half the dimension. This is done by composing two maps: the first is a transformation showing that the group rings used are orders in central simple algebras, and the second takes the obtained central simple algebra to a matrix ring. When combined with lattice reduction on the resulting matrix samples, this gives an attack on the GRLWE problem. We extend this attack to other groups proposed for cryptographic use by the creators of GRLWE, and display some numerical results quantifying the effects of the transformation, using the Lattice Estimator. We then give a family of groups, namely the generalized quaternion groups, from which GRLWE-style group rings immune to our attack can be constructed. Finally, we discuss the merits and vulnerabilities of a number of different forms of structured LWE.

Revisiting Module Lattice-based Homomorphic Encryption and Application to Secure-MPC

Anisha Mukherjee, Sujoy Sinha Roy

Research paper

Homomorphic encryption (HE) schemes have gained significant popularity in modern privacy-preserving applications across various domains. While research on HE constructions based on learning with errors (LWE) and ring-LWE has received major attention from both cryptographers and software-hardware designers alike, their module-LWE-based counterpart has remained comparatively under-explored in the literature. A recent work provides a module-LWE-based instantiation (MLWE-HE) of the Cheon-Kim-Kim-Song (CKKS) scheme and showcases several of its advantages such as parameter flexibility and improved parallelism. However, a primary limitation of this construction is the quadratic growth in the size of the relinearization keys. Our contribution is two-pronged: first, we present a new relinearization key-generation technique that addresses the issue of quadratic key size expansion by reducing it to linear growth. Second, we extend the application of MLWE-HE in a multi-group homomorphic encryption (MGHE) framework, thereby generalizing the favorable properties of the single-keyed HE to a multi-keyed setting as well as investigating additional flexibility attributes of the MGHE framework.

Shared OT and Its Applications

Lucas Piske, Jeroen van de Graaf, Anderson C. A. Nascimento, Ni Trieu

Research paper

We present unconditionally perfectly secure protocols in the semi-honest setting for several functionalities: (1) private elementwise equality; (2) private bitwise integer comparison; and (3) bit-decomposition. These protocols are built upon a new concept called Shared Oblivious Transfer (Shared OT). Shared OT extends the one-out-of-N String OT by replacing strings with integers modulo M and allowing additive secret-sharing of all inputs and outputs. These extensions can be implemented by simple local computations without incurring additional OT invocations. We believe our Shared OT may be of independent interest.
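
To make the functionality concrete, here is a minimal Python sketch of the ideal behaviour suggested by this description: two parties hold additive shares modulo M of the N messages and of the choice index, and receive fresh additive shares of the selected message. All names and parameter choices are illustrative and not taken from the paper.

    import random

    M = 2**16  # modulus for the shared integers (illustrative choice)
    N = 4      # one-out-of-N

    def share(x, mod):
        """Split x into two additive shares modulo mod."""
        r = random.randrange(mod)
        return r, (x - r) % mod

    def shared_ot_ideal(msg_shares_0, msg_shares_1, idx_share_0, idx_share_1):
        """Ideal Shared-OT functionality (trusted-third-party view):
        reconstruct the inputs, select the chosen message, and return
        fresh additive shares of it."""
        msgs = [(a + b) % M for a, b in zip(msg_shares_0, msg_shares_1)]
        idx = (idx_share_0 + idx_share_1) % N
        return share(msgs[idx], M)

    # toy run: messages [10, 20, 30, 40], choice index 2 -> output shares of 30
    msgs = [10, 20, 30, 40]
    m0, m1 = zip(*(share(m, M) for m in msgs))
    i0, i1 = share(2, N)
    o0, o1 = shared_ot_ideal(list(m0), list(m1), i0, i1)
    assert (o0 + o1) % M == 30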

Our protocols achieve the best round, communication, and computational complexities among protocols secure in a similar setting. Moreover, all of our protocols involve either 2 or 3 rounds.

Incompressible Encryption Beyond CPA/CCA Security

Venkata Koppula, Abhinav Kumar, Mahesh Sreekumar Rajasree, Harihar Swaminathan

Research paper

An incompressible encryption scheme offers protection against adversaries who possess the entire secret key but can store only a portion of the ciphertext. In recent years, there has been growing interest in developing such primitives in both public-key and secret-key settings, as well as in the multi-user scenario.

In this work, we extend the concept of incompressible encryption to incorporate anonymity and key-dependent message security. We introduce the following schemes:

  1. The first key-dependent message incompressible SKE scheme secure against unbounded adversaries.
  2. The first anonymous incompressible SKE scheme secure against unbounded encryption queries.

Furthermore, we present the public key versions of these schemes.

Highly Scalable Searchable Symmetric Encryption for Boolean Queries from NTRU Lattice Trapdoors

Debadrita Talapatra, Sikhar Patranabis, Debdeep Mukhopadhyay

Research paper

Searchable symmetric encryption (SSE) enables query execution directly over symmetrically encrypted databases. To support realistic query executions over encrypted document collections, one needs SSE schemes capable of supporting both conjunctive and disjunctive keyword queries. Unfortunately, existing solutions are either practically inefficient (incurring large storage overheads and/or high query processing latency) or quantum-unsafe.

In this paper, we present the first practically efficient SSE scheme with fast conjunctive and disjunctive keyword searches, compact storage, and security based on the (plausible) quantum-hardness of well-studied lattice-based assumptions. We present NTRU-OQXT, a highly compact NTRU lattice-based conjunctive SSE scheme that outperforms all existing conjunctive SSE schemes in terms of search latency. We then present an extension of NTRU-OQXT that additionally supports disjunctive queries, which we call NTRU-TWINSSE. Technically, both schemes rely on a novel oblivious search protocol based on highly optimized fast Fourier trapdoor sampling algorithms over NTRU lattices. While such techniques have been used to design other cryptographic primitives (such as digital signatures), they have not been applied before in the context of SSE.

We present prototype implementations of both schemes, and experimentally validate their practical performance over a large real-world dataset. Our experiments demonstrate that NTRU-OQXT achieves $2\times$ faster conjunctive keyword searches as compared to all other conjunctive SSE schemes (including the best quantum-unsafe conjunctive SSE schemes), and substantially outperforms many of these schemes in terms of storage requirements. These efficiency benefits also translate to NTRU-TWINSSE, which is practically competitive with the best quantum-unsafe SSE schemes capable of supporting both conjunctive and disjunctive queries.

POBA: Privacy-Preserving Operator-Side Bookkeeping and Analytics

Dennis Faut, Valerie Fetzer, Jörn Müller-Quade, Markus Raiber, Andy Rupp

Research paper

Many user-centric applications face a common privacy problem: the need to collect, store, and analyze sensitive user data. Examples include check-in/check-out based payment systems for public transportation, charging/discharging electric vehicle batteries in smart grids, coalition loyalty programs, behavior-based car insurance, and more. We propose and evaluate a generic solution to this problem. More specifically, we provide a formal framework integrating privacy-preserving data collection, storage, and analysis, which can be used for many different application scenarios, present an instantiation, and perform an experimental evaluation of its practicality.

We consider a setting where multiple operators (e.g., different mobility providers, different car manufacturers and insurance companies), who do not fully trust each other, intend to maintain and analyze data produced by the union of their user sets. The data is collected in an anonymous (with respect to all operators) but authenticated way and stored in so-called user logbooks. In order for the operators to be able to perform analyses at any time without requiring user interaction, the logbooks are kept on the operators' side. Consequently, this potentially sensitive data must be protected from unauthorized access. To achieve this, we combine several selected cryptographic techniques, such as threshold signatures and oblivious RAM. The latter ensures that user anonymity is protected even against memory access pattern attacks.

To the best of our knowledge, we provide and evaluate the first generic framework that combines data collection, operator-side data storage, and data analysis in a privacy-preserving manner, while providing a formal security model, a UC-secure protocol, and a full implementation. With three operators, our implementation can handle over two million new logbook entries per day.

Accurate and Composable Noise Estimates for CKKS with Application to Exact HE Computation

Jean-Philippe Bossuat, Anamaria Costache, Christian Mouchet, Lea Nürnberger, Juan Ramón Troncoso-Pastoriza

Research paper

All RLWE-based FHE schemes are inherently noisy. The CKKS scheme (Cheon, Kim, Kim, Song, Asiacrypt 2017) considers the noise as a part of the message, yielding approximate computations but also considerable performance gains. Since the noise grows with each homomorphic operation and incurs a precision loss, it is paramount for users to be able to estimate the noise level throughout a given circuit in order to choose parameters appropriately and control the precision loss in the message. In this work, we develop a noise model that allows for tight estimates of the precision loss, and propose a tool prototype for computing these estimates on any given circuit. Our noise model relies on a novel definition, the component-wise noise, which makes the average-case noise estimates tighter and more composable. As a result, our model and tool can derive accurate estimates for complex circuits such as bootstrapping. We experimentally demonstrate the tightness of our noise estimates by showing that our theoretical estimates never deviate by more than 0.01 bits from experimental estimates, even for large circuits, and hold with high probability. Furthermore, we demonstrate how to apply our techniques to obtain an exact version of the CKKS scheme in which the decryption removes all the noise (with high probability). Such a scheme has many applications, as it allows one to take advantage of the efficiency of CKKS while preserving an exact message space, hence further strengthening CKKS against IND-CPA-D attacks.

Don't Use It Twice: Reloaded! On the Lattice Isomorphism Group Action

Alessandro Budroni, Jesús-Javier Chi-Domínguez, Ermes Franch

Research paper

Group actions have emerged as a powerful framework in post-quantum cryptography, serving as the foundation for various cryptographic primitives. The Lattice Isomorphism Problem (LIP) has recently gained attention as a promising hardness assumption for designing quantum-resistant protocols. Its formulation as a group action has opened the door to new cryptographic applications, including a commitment scheme and a linkable ring signature.

In this work, we analyze the security properties of the LIP group action and present new findings. Specifically, we demonstrate that it fails to satisfy the weak unpredictability and weak pseudorandomness properties when the adversary has access to as few as three and two instances with the same secret, respectively. This significantly improves upon prior analysis by Budroni et al. (PQCrypto 2024).

As a direct consequence of our findings, we reveal a vulnerability in the linkable ring signature scheme proposed by Khuc et al. (SPACE 2024), demonstrating that the hardness assumption underlying the linkable anonymity property does not hold.

Circular Insecure Encryption: from Long Cycles to Short Cycles

Zehou Wu

Research paper

A length-$n$ encryption cycle consists of a sequence of $n$ keys, each encrypting the next, forming a cycle; an encryption scheme is $n$-circular secure if a length-$n$ encryption cycle is computationally indistinguishable from encryptions of zeros. An interesting problem is whether CPA security implies circular security, and this has been shown to be false. Using standard cryptographic assumptions and LWE, it was shown that within the class of CPA-secure encryption schemes, for any $n$, there exists an $n$-circular insecure encryption scheme. Furthermore, there exists a particular encryption scheme that is $\ell$-circular insecure for all $\ell$. Following these results, it is natural to ask whether circular insecurity for a particular length implies circular insecurity for different lengths, and for multiple lengths. We answer this question affirmatively in this paper. We constructively prove that a CPA-secure encryption scheme that is insecure in the presence of encryption cycles of length $n+1$ implies the existence of such a scheme for encryption cycles of any length less than $n+1$. The constructed $(\le n)$-circular insecure scheme may have the same message space as the $(n+1)$-circular insecure encryption scheme, and our results apply to both public-key and symmetric-key settings.
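
For concreteness, the length-$n$ encryption cycle and the $n$-circular security requirement described above can be written as follows (a standard formulation; the paper's exact notation may differ):

  $\mathsf{cyc}_n \;=\; \big(\mathsf{Enc}_{k_1}(k_2),\ \mathsf{Enc}_{k_2}(k_3),\ \dots,\ \mathsf{Enc}_{k_{n-1}}(k_n),\ \mathsf{Enc}_{k_n}(k_1)\big)$

  $n$-circular security: $\mathsf{cyc}_n \;\approx_c\; \big(\mathsf{Enc}_{k_1}(0),\ \dots,\ \mathsf{Enc}_{k_n}(0)\big)$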

On TRP-RF Switch in the Quantum Query Model

Ashwin Jha

Research paper

The tweakable random permutation (TRP) to random function (RF) switch in the quantum query model (Hosoyamada and Iwata, IACR ASIACRYPT 2019) is tightened. This immediately improves the security bounds for TNT and LRWQ against quantum chosen-plaintext attacks. We further demonstrate the utility of this tightened switch by establishing birthday-bound security for two additional TRP-based modes, including the cascade function.

Round-Optimal Authenticated Key Exchange with Full Forward Privacy

Koki Matsui, Shoma Kanzaki, Wakaha Ogata, Keitaro Hashimoto

Research paper

Privacy-preserving authenticated key exchange (PPAKE) is a cryptographic protocol that enables two users to exchange a session key while protecting the users' privacy (i.e., hiding the user's identity) against a machine-in-the-middle adversary. To hide user identities, PPAKE messages are broadcast to the network, increasing communication complexity. In ASIACRYPT 2022, Lyu et al. introduced the concept of robustness to reduce communication complexity. Roughly, a robust PPAKE allows a receiver to decide whether it is the intended user by processing the first message with its long-term secret key. As a result, only the intended user replies to the first message, and thus the number of messages in the network is reduced. However, if a user's secret key is leaked, an adversary can also use it to determine whether a past first message was intended for that user; thus, the PPAKE scheme of Lyu et al. does not have full forward privacy. Lyu et al. left open the problem of constructing a PPAKE scheme with both robustness and full forward privacy. In this work, we solve this problem by introducing a new framework called key-updatable PPAKE (kuPPAKE). In kuPPAKE schemes, the long-term secret key is updated so that the updated key does not work for past messages. Therefore, robustness no longer conflicts with full forward privacy. We propose a generic construction of a 2-round kuPPAKE and show a concrete scheme in the standard model from DH-style assumptions over bilinear groups.

On Round-Optimal Computational VSS

Karim Baghery, Navid Ghaedi Bardeh, Shahram Khazaei, Mahdi Rahimi

Research paper

In ASIACRYPT 2011, Backes, Kate, and Patra (BKP) introduced two computationally secure round-optimal (2-round) Verifiable Secret Sharing (VSS) schemes in the honest-majority setting, one based on non-homomorphic commitments and the other on homomorphic ones. Their scheme based on non-homomorphic commitments has $O(n^2)$ computational complexity and necessitates $O(n^2\lambda)$ public and private communication for the dealer, where $n$ denotes the number of parties and $\lambda$ is the security parameter. They showed that these costs are $n$ times higher compared to their round-optimal VSS scheme employing homomorphic commitments and posed a research question regarding the inevitability of this gap. In this paper, we fill this gap by introducing a new variant of the recently proposed unified framework $\mathbf{\Pi}$ by Baghery at PKC 2025, designed to enable the construction of more efficient round-optimal VSS schemes in the honest-majority setting. Compared to the original framework, our variant reduces the required rounds by one while maintaining compatibility with any commitments and achieving comparable efficiency. Leveraging this new general construction, we develop several round-optimal VSS schemes that surpass state-of-the-art alternatives. Particularly noteworthy is the new round-optimal VSS scheme based on non-homomorphic commitments, which improves the BKP scheme by a factor of $n$ across all efficiency metrics. Compared to their schemes based on homomorphic commitments, our schemes demonstrate significantly expedited verification and reconstruction. Implementation results further validate the practicality of these new VSS schemes. For example, for $(n, t)=(256, 127)$, where $t$ represents the threshold, compared to the hash-based BKP VSS scheme, our proposed scheme showcases speed-ups exceeding $120,000\times$ (and $50\times$) for the dealer (and parties, respectively), while also requiring $365\times$ (and $512\times$) less communication.
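
As background only (the schemes above add verifiability on top of such sharing via commitments), a minimal non-verifiable Shamir-style sharing sketch in Python, with toy parameters, looks as follows; a degree-$t$ polynomial is used, so any $t+1$ shares reconstruct the secret.

    import random

    P = 2**61 - 1  # a prime field for the toy example (not a recommended parameter)

    def share(secret, n, t):
        """Shamir (t+1)-out-of-n sharing: a degree-t polynomial with
        constant term `secret`, evaluated at points 1..n."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at 0 from any t+1 shares."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(1234, n=256, t=127)
    assert reconstruct(shares[:128]) == 1234  # any t+1 = 128 shares suffice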

Diagonally dominant matrices for cryptography

Andrea Lesavourey, Kazuhide Fukushima, Thomas Plantard, Arnaud Sipasseuth

Research paper

Diagonally dominant lattices have already been used in cryptography, notably in the GGH and DRS schemes. This paper further studies the possibility of using diagonally dominant matrices in the context of lattice-based cryptography. To this end, we study geometrical and algorithmic properties of lattices generated by such matrices. We prove novel bounds for the first minimum and the covering radius with respect to the max norm. Using these new results, we propose DRE (Diagonal Reduction Encryption) as an application example: a decryption-failure-free encryption scheme using diagonally dominant matrices, and provide an experimental implementation to demonstrate its suitability as a research direction. The trapdoor uses neither floating-point arithmetic nor polynomial rings, and yet is less than 10 times slower than other optimised unstructured lattice-based standardisation candidates. This work could apply to cryptosystems based on the Lattice Isomorphism Problem as well. As a bonus, we also propose solutions to patch the DRS signature scheme, in particular using parameters leading to the use of sparse matrices.
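
For reference, a square matrix is (row-wise) diagonally dominant when each diagonal entry is at least the sum of the absolute values of the other entries in its row (strict dominance replaces "at least" with "greater than"); a short Python check, independent of the paper's specific constructions:

    def is_diagonally_dominant(rows):
        """Row-wise diagonal dominance: |M[i][i]| >= sum_{j != i} |M[i][j]| for all i."""
        return all(
            abs(row[i]) >= sum(abs(v) for j, v in enumerate(row) if j != i)
            for i, row in enumerate(rows)
        )

    assert is_diagonally_dominant([[7, 2, -3], [1, 5, 2], [0, -1, 4]])
    assert not is_diagonally_dominant([[1, 2], [3, 1]])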

Construction of Maiorana-McFarland type cryptographically significant Boolean functions with good implementation properties

Deng Tang, Anupam Chattopadhyay, Manmatha Roy, Bimal Mandal, Subhamoy Maitra

Research paper

We present a new construction of cryptographically significant Boolean functions defined over a large number of variables, with an emphasis on efficient circuit realizability. Our method is based on a variant of the well-known Maiorana-McFarland (MM) construction, adapted to enable circuit structures with fewer than $6n$ gates, where $n$ is the number of input bits. We evaluate the circuit efficiency in terms of the total number of logic gates (for example AND, OR, NOT, and XOR, each with a maximum fan-in of two) required to implement a given function. While prior studies have explored cryptographic parameters of such functions in theory, they often overlooked circuit-level efficiency, especially in high-dimensional settings. In this work, we construct a class of balanced functions with high nonlinearity, low absolute autocorrelation, and high algebraic degree, yet realizable using a small number of logic gates. Towards application, this work provides additional design directions for cryptographic primitives in domains such as fault-resistant cryptography and homomorphic encryption, where both security and circuit efficiency at scale are critical. Further investigations are required towards actual hardware implementation of our proposed functions as well as to exploit them in concrete cipher designs.
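
For context, the classical Maiorana-McFarland template that the proposed variant adapts is $f(x, y) = \langle x, \pi(y) \rangle \oplus g(y)$ for $x, y \in \mathbb{F}_2^{n/2}$, with $\pi$ a permutation and $g$ arbitrary; a small Python illustration of this classical template (not the paper's specific variant, whose choices of $\pi$ and $g$ target balancedness and small gate counts):

    def popcount_parity(v):
        return bin(v).count("1") % 2

    def mm_function(pi, g):
        """Classical Maiorana-McFarland: f(x, y) = <x, pi(y)> xor g(y),
        with x and y encoded as integers in [0, 2^half)."""
        def f(x, y):
            return popcount_parity(x & pi[y]) ^ g[y]
        return f

    half = 3                        # yields a Boolean function on 2*half = 6 variables
    pi = [5, 2, 7, 0, 3, 6, 1, 4]   # a permutation of {0,...,7} (toy choice)
    g  = [0, 1, 0, 0, 1, 1, 0, 1]   # arbitrary Boolean function of y (toy choice)
    f = mm_function(pi, g)

    truth_table = [f(x, y) for y in range(2**half) for x in range(2**half)]
    # With pi a permutation, the classical template gives a bent function:
    # maximally nonlinear but not balanced (here the weight is 2^5 - 2^2 = 28).
    assert sum(truth_table) == 28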

Round-Efficient Adaptively Secure Threshold Signatures with Rewinding

Yanbo Chen

Research paper

A threshold signature scheme allows distributing a signing key to $n$ users, such that any $t$ of them can jointly sign, but any $t-1$ cannot. It is desirable to prove adaptive security of threshold signature schemes, which considers adversaries that can adaptively corrupt honest users even after interacting with them. For a class of signatures that relies on security proofs with rewinding, such as Schnorr signatures, proving adaptive security entails significant challenges. This work proposes two threshold signature schemes that are provably adaptively secure with rewinding proofs. Our proofs are solely in the random oracle model (ROM), without relying on the algebraic group model (AGM).

  1. We give a 3-round scheme based on the algebraic one-more discrete logarithm (AOMDL) assumption. The scheme outputs a standard Schnorr signature.
  2. We give a 2-round scheme based on the DL assumption. Signatures output by the scheme contain one more scalar than a Schnorr signature.

We follow the recent work by Katsumata, Reichle, and Takemure (Crypto 2024) that proposed the first threshold signature scheme with a rewinding proof of full adaptive security. Their scheme is a 5-round threshold Schnorr scheme based on the DL assumption. Our results significantly improve the round complexity.

The protocol by Katsumata, Reichle, and Takemure can be viewed as applying a masking technique to Sparkle, a threshold Schnorr signature scheme by Crites, Komlo, and Maller (Crypto 2023). This work shows wider applications of the masking technique. Our first scheme is obtained by masking FROST, a threshold Schnorr protocol by Komlo and Goldberg (SAC 2020). The second scheme is obtained by masking a threshold version of HBMS, a multi-signature scheme by Bellare and Dai (Asiacrypt 2021). Katsumata, Reichle, and Takemure masked Sparkle at the cost of 2 additional rounds. Our main insight is that this cost varies across schemes, especially depending on how to simulate signing in the security proofs. The cost is 1 extra round for our first scheme, and is 0 for our second scheme.
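
Since the first scheme outputs a standard Schnorr signature, verification is the usual Schnorr check; a minimal single-signer Python sketch over a toy group (parameters are illustrative and insecure, and none of the threshold machinery is shown):

    import hashlib, random

    # Toy Schnorr group: q divides p - 1 and g has order q (tiny, insecure parameters)
    p, q, g = 607, 101, 64

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def keygen():
        x = random.randrange(1, q)          # secret key
        return x, pow(g, x, p)              # (sk, pk)

    def sign(x, msg):
        k = random.randrange(1, q)          # nonce
        R = pow(g, k, p)                    # commitment
        c = H(R, msg)                       # challenge
        z = (k + c * x) % q                 # response
        return R, z

    def verify(X, msg, sig):
        R, z = sig
        c = H(R, msg)
        return pow(g, z, p) == (R * pow(X, c, p)) % p

    sk, pk = keygen()
    assert verify(pk, "hello", sign(sk, "hello"))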

Modular Reduction in CKKS

Jaehyung Kim, Taeyeong Noh

Research paper

The Cheon–Kim–Kim–Song (CKKS) scheme is renowned for its efficiency in encrypted computing over real numbers. However, it lacks an important functionality that most exact schemes have: an efficient modular reduction. This stems from a fundamental difference in encoding structure: the CKKS scheme encodes messages in the least significant bits, while the other schemes encode in the most significant bits (or in an equivalent manner). As a result, CKKS enjoys efficient rescaling but inherently lacks the ability to perform modular reduction.

Instead of homomorphically approximating the modular reduction function, we suggest using the modular reduction inherent to $\mathbb{Z}_q[X]/(X^N+1)$. We construct a novel homomorphic modular reduction algorithm using the discrete bootstrapping from Bae et al. [Asiacrypt'24] and a new discretization algorithm based on modulus switching. One of the key advantages of our modular reduction is that its computational complexity grows sublinearly ($O(\log k)$) as we increase the input range $[0,k)$, which is asymptotically better than the state of the art requiring $\geq O(k)$.

We validated our algorithms with concrete experiments. Notably, our modulo 1 function for the input range $[0, 2^{20})$ takes only 44.9 seconds with 13.3 bits of (mean) precision on a single-threaded CPU. Recall that modular reduction over such a large range was almost infeasible in previous works, as they would need to evaluate a polynomial of degree $> 2^{20}$ (or the equivalent). As an application of our method, we compared a bit decomposition based on our framework with the state-of-the-art method from Drucker et al. [J.Cryptol'24]. Our method is $7.1 \times$ faster while reducing the failure probability by more than two orders of magnitude.

A Holistic Framework for Impossible Boomerang Attacks

Yincen Chen, Qinggan Fu, Ning Zhao, Jiahao Zhao, Ling Song, Qianqian Yang

Research paper

In 2011, Lu introduced the impossible boomerang attack at DCC. This powerful cryptanalysis technique combines the strengths of the impossible differential and boomerang attacks, inheriting the advantages of both. In this paper, we propose a holistic framework comprising two generic and effective algorithms and a MILP-based model to systematically search for the optimal impossible boomerang attack. The first algorithm incorporates any key guessing strategy, while the second integrates the meet-in-the-middle (MITM) attack into the key recovery process. The MILP-based model combines the generic key recovery algorithms and supports an arbitrary location of the contradiction. Our highly flexible framework treats the distinguisher and the extended part as a whole, returning the optimal attack parameters and complexity. When applying our framework to Deoxys-BC-256, Deoxys-BC-384, Joltik-BC-128, Joltik-BC-192, and SKINNYe v2, we achieve several significant improvements. We achieve the first 11-round impossible boomerang attacks on Deoxys-BC-256 and Joltik-BC-128. For SKINNYe v2, we achieve the first 33-round impossible boomerang attack; using the MITM approach in the key recovery further reduces the time complexity significantly. Additionally, for 14-round Deoxys-BC-384 and Joltik-BC-192, the time complexity of the impossible boomerang attack is reduced by factors exceeding $2^{27}$ and $2^{12}$, respectively.

Binding Security of Implicitly-Rejecting KEMs and Application to BIKE and HQC

Juliane Krämer, Patrick Struck, Maximiliane Weishäupl

Research paper

In this work, we continue the analysis of the binding properties of implicitly-rejecting key-encapsulation mechanisms (KEMs) obtained via the Fujisaki-Okamoto (FO) transform. These binding properties, in earlier literature known under the term robustness, thwart attacks that can arise when using KEMs in complex protocols. Recently, Cremers et al. (CCS'24) introduced a framework for binding notions, encompassing previously existing but also new ones. While implicitly-rejecting FO-KEMs have been analyzed with respect to multiple of these notions, there are still several gaps. We complete the picture by providing positive and negative results for the remaining notions. Further, we show how to apply our results to the code-based KEMs BIKE and HQC, which were round-4 candidates in NIST's PQC standardization process. Through this, we close a second gap as our results complete the analysis of the binding notions for the NIST round-4 KEMs. Finally, we give a modified version of the FO transform that achieves all binding notions.
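
For orientation, "implicit rejection" refers to FO-style decapsulation that never reports failure: if re-encrypting the decrypted message does not reproduce the ciphertext, a pseudorandom key derived from a secret seed and the ciphertext is returned instead. A schematic Python sketch with a toy stand-in for the underlying PKE (none of this reflects BIKE or HQC internals):

    import hashlib, hmac, os

    def H(*parts):
        h = hashlib.sha256()
        for part in parts:
            h.update(part)
        return h.digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # --- toy deterministic "PKE" (a stand-in, not a real code-based scheme) ---
    def toy_keygen():
        k = os.urandom(32)
        return k, k                                     # (sk, pk) identical in the toy

    def toy_encrypt(pk, m, coins):
        return coins + xor(m, H(b"pad", pk, coins))     # c = coins || m xor pad

    def toy_decrypt(sk, c):
        return xor(c[32:], H(b"pad", sk, c[:32]))

    # --- FO with implicit rejection (schematic) ---
    def encaps(pk):
        m = os.urandom(32)
        c = toy_encrypt(pk, m, coins=H(b"coins", m))    # derandomized encryption
        return c, H(b"key", m, c)

    def decaps(sk, pk, c, s):
        m = toy_decrypt(sk, c)
        if toy_encrypt(pk, m, coins=H(b"coins", m)) == c:
            return H(b"key", m, c)                      # accept
        return hmac.new(s, c, hashlib.sha256).digest()  # implicit rejection

    sk, pk = toy_keygen()
    s = os.urandom(32)
    c, K = encaps(pk)
    assert decaps(sk, pk, c, s) == K
    tampered = c[:-1] + bytes([c[-1] ^ 1])
    assert decaps(sk, pk, tampered, s) != K             # rejected, but no error is revealed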

HAWK: Having Automorphisms Weakens Key

Daniël M. H. van Gent, Ludo N. Pulles

Research paper

The search rank-2 module Lattice Isomorphism Problem (smLIP), over a cyclotomic ring of degree a power of two, can be reduced to an instance of the Lattice Isomorphism Problem (LIP) of at most half the rank if an adversary knows a nontrivial automorphism of the underlying integer lattice. Knowledge of such a nontrivial automorphism speeds up the key recovery attack on HAWK at least quadratically, which would halve the number of security bits.

Luo et al. (ASIACRYPT 2024) recently found an automorphism that breaks omSVP, the initial underlying hardness assumption of HAWK. The team of HAWK amended the definition of omSVP to include this so-called symplectic automorphism in their submission to the second round of NIST's standardization of additional signatures. This work provides confidence in the soundness of this updated definition, assuming smLIP is hard, since there are plausibly no more trivial automorphisms that allow winning the omSVP game easily.

Although this work does not affect the security of HAWK, it opens up a new attack avenue involving the automorphism group that may be theoretically interesting on its own.

Public Traceability in Threshold Decryption

Sébastien Canard, Nathan Papon, Duong Hieu Phan

Research paper

Tracing techniques have been used to identify users who have leaked their decryption keys in a secure multi-receiver encryption system. Very recently, in the field of distributed cryptography, where trust is distributed, Boneh et al. extended traitor tracing to the framework of threshold decryption, where a single user does not hold the whole decryption secret but needs to collaborate with others. However, the tracing capability in their schemes based on collusion-secure codes is still centralized: only the authority holding the secret tracing key can perform tracing. We continue in the direction of not relying on a single entity and propose decentralizing tracing in this context, so that the tracing procedure does not need to rely on any secret key and can be done by anyone. Technically, as binary collusion-secure codes only support secret tracing, we switch to robust $q$-ary IPP codes, which support public tracing. This requires us to generalize the bipartite threshold KEM for two users in Boneh et al.'s paper to a $q$-partite KEM for $q$ users. In terms of security, their static one-sided security notion in the binary case is not appropriate, which requires us to define an adaptive one-sided security notion for $q$-partite KEMs, compatible with $q$-ary IPP codes. Finally, we generalize the Boneh et al. construction to achieve this security notion and obtain public traceability for threshold decryption without degrading efficiency.

Revisiting Discrete Logarithm Reductions

Maiara F. Bollauf, Roberto Parisella, Janno Siim

Research paper

A reduction showing that the hardness of the discrete logarithm (DL) assumption implies the hardness of the computational Diffie-Hellman (CDH) assumption, given a suitable auxiliary input as advice, was first presented by den Boer (Crypto 1988). We consider groups of prime order $p$ where $p-1$ is somewhat smooth (say, every prime $q$ dividing $p-1$ is less than $2^{100}$). Several practically relevant groups satisfy this condition.

  1. We present a concretely efficient version of den Boer's reduction for such groups. In particular, among practically relevant groups, we obtain the most efficient and tightest reduction in the literature for BLS12-381, showing that DL = CDH.
  2. By generalizing den Boer's reduction, we show that in these groups the $n$-Power DL ($n$-PDL) assumption implies the $n$-Diffie-Hellman Exponent ($n$-DHE) assumption, where $n$ is polynomial in the security parameter.

On the negative side, we show that there is no generic reduction demonstrating that $n$-PDL implies the $n$-Generalized Diffie-Hellman Exponent ($n$-GDHE) assumption. This is in stark contrast with the algebraic group model, where this implication holds.
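
For reference, in a cyclic group $G = \langle g \rangle$ of prime order $p$, these assumptions are commonly stated as follows (standard formulations; the paper's exact variants may differ):

  DL: given $(g, g^x)$, compute $x$.
  CDH: given $(g, g^x, g^y)$, compute $g^{xy}$.
  $n$-PDL: given $(g, g^x, g^{x^2}, \dots, g^{x^n})$, compute $x$.
  $n$-DHE: given the same tuple of powers, compute $g^{x^{n+1}}$.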

Cracking the Mask: SASCA Against Local-Masked NTT for CRYSTALS-Kyber

Rafael Carrera Rodriguez, Florent Bruguier, Emanuele Valea, Pascal Benoit

Research paper

Soft-Analytical Side-Channel Attacks (SASCAs) on lattice-based cryptography implementations have become a prominent attack vector in recent years, especially against the Number-Theoretic Transform (NTT). To address this issue, local masking with twiddle factors has been proposed as a countermeasure to protect the NTT against such attacks. In this paper, we propose an adaptation of SASCA to local-masked NTT implementations by modifying the factor graph representation to include the masking nodes. We evaluate the success rate of the attack with respect to the noise level of simulated traces and the number of masks $u$ per layer. We show that the attack is very successful for lower values of $u$, even outperforming the attack on the unmasked case. As $u$ increases, security gradually improves, but at a significant performance overhead. Thus, we question the practicality of this countermeasure compared to other countermeasures analyzed in the state of the art, such as shuffling.

Turning Hash-Based Signatures into Distributed Signatures and Threshold Signatures: Delegate Your Signing Capability, and Distribute it Among Trustees

John Kelsey, Nathalie Lang, Stefan Lucks

Research paper

We introduce techniques to transform existing stateful hash-based signature (HBS) schemes, such as LMS or XMSS, into efficient threshold and distributed signature schemes. Our approach requires a trusted dealer for setup, and uses a large (typically up to a few GiB) common reference value (CRV) for each new public key. The dealer generates the keypair and distributes shares of the signing key to the trustees, while creating the CRV. Signing involves an untrusted aggregator communicating point-to-point with a set of trustees. Only the aggregator needs access to the CRV; the trustees need only a PRF key and enough space to remember which one-time keys they have helped to sign with so far. Signing requires two round trips between the aggregator and each participating trustee, and only a little more computation from the trustees and aggregator than is done when signing with the underlying HBS scheme. We reduce the security of our scheme to that of the underlying HBS scheme, assuming the availability of a secure PRF. A dishonest aggregator or a tampered CRV can prevent valid signatures from being constructed, but does not allow forgeries. Our techniques offer a powerful practical defense against accidental reuse of a one-time key in stateful HBS schemes, by requiring multiple trustees to fail in the same way in order for key reuse to occur.

Lattice-based Multi-key Homomorphic Signatures Forward-unforgeable against Signing Key Leakage

Ye Xu, Takashi Nishide

Research paper

Homomorphic signature (HS) schemes enable an untrusted server to run some computation over data signed under the same key and derive a short signature for authenticating the computation result. Fiore et al. (Asiacrypt'16) introduced novel lattice-based multi-key homomorphic signatures (MKHS) to support the evaluation of signatures under multiple/different keys, where anyone can verify the resultant signature using the corresponding public verification keys. However, a limitation of their scheme is that even if only one signing key is leaked, a malicious server can forge a signature on a fake computation result involving the inputs of uncorrupted signers. To address this issue, we propose a new scheme built upon the work of Fiore et al., aiming to achieve a stronger security guarantee against signing key leakage, which we call forward unforgeability. Our MKHS scheme is based on the short integer solution (SIS) problem, as in Fiore et al., and remains forward-unforgeable even if an adversary obtains all the signing keys. Furthermore, we propose a variant that introduces a helper entity to amortize the overhead of signature verification.

Optimizing Key Recovery in Classic McEliece: Advanced Error Correction for Noisy Side-Channel Measurements

Nicolas Vallet, Pierre-Louis Cayrel, Brice Colombier, Vlad-Florin Drăgoi, Vincent Grosso

Research paper

Classic McEliece was one of the code-based Key Encapsulation Mechanism finalists in the NIST post-quantum cryptography standardization process. Several key-recovery side-channel attacks on the decapsulation algorithm have already been published. However, none of them discusses the feasibility and/or efficiency of the attack in the case of noisy side-channel acquisitions. In this paper, we address this issue by proposing two improvements on the recent key-recovery attack published by Drăgoi et al. First, we introduce an error correction algorithm for the lists of Hamming weights obtained by side-channel measurements, based on the assumption, validated experimentally, that the error on a recovered Hamming weight is bounded by $\pm1$. We then offer a comparison between two decoding efficiency metrics: the theoretical minimal error correction capability and an empirical average correction probability. We show that the minimal error correction capability, widely used for linear codes, is not suitable for the (non-linear) code formed by the lists of Hamming weights. Conversely, experimental results show that out of 1 million random erroneous lists of $2t=128$ Hamming weights, only 2 could not be corrected by the proposed algorithm. This shows that the probability of successfully decoding a list of erroneous Hamming weights is very high, regardless of the error weight. In addition to this algorithm, we describe how the secret Goppa polynomial $g$, recovered during the first step of the attack, can be exploited to reduce both the time and space complexity of recovering the secret permuted support $\mathcal{L}$.

Improved Constant-Sized Polynomial Commitment Schemes Without Trusted Setup

Shihui Fu

Research paper

Argument systems are a fundamental ingredient in many cryptographic constructions. The best-performing argument systems to date largely rely on a trusted setup, which is undesirable in trust-minimized applications. While transparent argument systems avoid this trust assumption, they have historically been inefficient, typically exhibiting polylogarithmic proof sizes compared to their trusted counterparts. Arun et al. (PKC 2023) constructed the first transparent constant-sized polynomial commitment scheme (PCS), leading to transparent constant-sized arguments. However, the evaluation proof still comprises 66 group elements in a group of unknown order (GUO), rendering it rather impractical. In this work, we address this challenge by presenting a set of novel batching and aggregation techniques tailored for proofs of knowledge of ranges in GUOs. These techniques may also be of independent interest and are readily applicable to enhance and shorten other existing schemes in GUOs. Consequently, by applying these techniques, we immediately achieve an improved PCS with an evaluation proof consisting of only 10 group elements, an 85% reduction. To our knowledge, this represents the shortest PCS in the transparent setting. Compiling known information-theoretic proof systems with our improved PCS thus yields highly compact transparent argument systems when instantiated in a class group, which is more practical than prior constant-sized schemes.

Sequential Indifferentiability of STH and EDM

Nilanjan Datta, Avijit Dutta, Sougata Mandal, Hrithik Nandi

Research paper

The notion of indifferentiability was proposed by Maurer et al. to bound the advantage in distinguishing a construction built on a public primitive from a public random function. In Indocrypt'10, Mandal et al. showed that the sum of two independent permutations is indifferentiable from a public random function up to $2^{2n/3}$ queries. Later, in ACNS'15, Mennink and Preneel identified an analytical flaw in Mandal et al.'s result and revised the security bound to $2^{2n/3}/n$. In Eurocrypt'18, Bhattacharya and Nandi improved the indifferentiability bound to $2^n$ queries, but their analysis was again identified as incorrect by Gunsing et al. In this paper, we study the indifferentiability of a few other PRF constructions, namely the STH and EDM constructions. We show that neither STH nor STH2 is indifferentiable, which leads us to propose a generalized version called gSTH. We show that gSTH achieves a tight $l$-bit security bound, where $l$ denotes the bit-length of the constants used in the construction. While we show that EDM achieves a tight $n/2$-bit indifferentiability bound with respect to our proposed simulator, single-keyed EDM is not indifferentiable from a public random function. We note that all proofs and attacks are in the sequential indifferentiability model.
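
As a small illustration of the basic object involved, the sum (XOR) of two independent permutations is an $n$-bit-to-$n$-bit function that is generally not a permutation; a toy Python check on 4-bit permutations (unrelated to the paper's specific STH/EDM analyses):

    import random

    n = 4
    domain = list(range(2**n))

    pi1 = domain[:]; random.shuffle(pi1)      # two independent random permutations
    pi2 = domain[:]; random.shuffle(pi2)

    sop = [pi1[x] ^ pi2[x] for x in domain]   # sum (XOR) of permutations

    print("distinct outputs:", len(set(sop)), "of", len(domain))
    # Typically fewer than 16 distinct outputs: the sum behaves like a function
    # rather than a permutation, which is what the indifferentiability results
    # above make precise up to the stated query bounds.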

Simpler and Faster Pairings from the Montgomery Ladder

Giacomo Pope, Krijn Reijnders, Damien Robert, Alessandro Sferlazza, Benjamin Smith

Research paper

We show that Montgomery ladders compute pairings as a by-product, and explain how a small adjustment to the ladder results in simple and efficient algorithms for the Weil and Tate pairing on elliptic curves using cubical arithmetic. We demonstrate the efficiency of the resulting cubical pairings in several applications from isogeny-based cryptography. Cubical pairings are simpler and more performant than pairings computed using Miller's algorithm: we get a speed-up of over 40 per cent for use-cases in SQIsign, and a speed-up of about 7 per cent for use-cases in CSIDH. While these results arise from a deep connection to biextensions and cubical arithmetic, in this article we keep things as concrete (and digestible) as possible. We provide a concise and complete introduction to cubical arithmetic as an appendix.
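
For reference, the textbook x-only Montgomery ladder, here instantiated on Curve25519 purely for illustration, is the algorithm whose by-products the paper exploits; the cubical adjustment itself is not shown.

    # x-only Montgomery ladder on Curve25519 (p = 2^255 - 19, A = 486662).
    # A generic textbook ladder with standard xDBL/xADD formulas, shown only to
    # fix notation; it is not the paper's cubical-arithmetic variant.
    p = 2**255 - 19
    A24 = (486662 + 2) // 4                  # (A + 2)/4, used in doubling

    def xdbl(X, Z):
        q, r = (X + Z) % p, (X - Z) % p
        q, r = q * q % p, r * r % p
        s = (q - r) % p                      # equals 4*X*Z
        return q * r % p, s * (r + A24 * s) % p

    def xadd(XP, ZP, XQ, ZQ, xD):
        """Differential addition: x(P+Q) from x(P), x(Q) and affine x(P-Q) = xD."""
        u = (XP - ZP) * (XQ + ZQ) % p
        v = (XP + ZP) * (XQ - ZQ) % p
        return (u + v) ** 2 % p, xD * (u - v) ** 2 % p

    def ladder(k, x):
        """Return the affine x-coordinate of [k]P, where P has x-coordinate x."""
        R0, R1 = (1, 0), (x, 1)              # R0 = infinity, R1 = P; invariant R1 - R0 = P
        for bit in bin(k)[2:]:
            if bit == '0':
                R1 = xadd(*R0, *R1, x); R0 = xdbl(*R0)
            else:
                R0 = xadd(*R0, *R1, x); R1 = xdbl(*R1)
        X, Z = R0
        return X * pow(Z, p - 2, p) % p

    # scalar multiplication commutes, so composing two ladders matches one ladder
    k1, k2 = 123456789, 987654321
    assert ladder(k2, ladder(k1, 9)) == ladder(k1 * k2, 9)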