IACR Communications in Cryptology (CiC)

Volume 1, Issue 4

Published 2025-01-13
Lattice-based Multi-Authority/Client Attribute-based Encryption for Circuits

Valerio Cini, Russell W. F. Lai, Ivy K. Y. Woo

Research paper

Multi-authority/input attribute-based encryption (MA-/MI-ABE) are multi-party extensions of ABE which enable flavours of decentralised cryptographic access control. This work aims to advance research on multi-party ABE and their lattice-based constructions in several directions:

- We introduce the notion of multi-client (MC-)ABE. This can be seen as an augmentation of MI-ABE with the addition of a ciphertext identity (CID) in the syntax, or a specialisation of multi-client functional encryption (MC-FE) to the ABE setting.

- We adapt the 2-input (2I-)ABE of Agrawal et al. (CRYPTO'22), which is heuristically secure yet without a security proof, into a 2-client (2C-)ABE, and prove it satisfies a variant of very-selective security under the learning with errors (LWE) assumption.

- We extend Wee's ciphertext-policy (CP-)ABE (EUROCRYPT'22) to the MA setting, yielding an MA-ABE. Furthermore, combining techniques in Boneh et al.'s key-policy ABE (EUROCRYPT'14) and our MA-ABE, we construct an MC-ABE. We prove that they satisfy variants of very-selective security under the evasive LWE, tensor LWE, and LWE assumptions.

All our constructions support policies expressed as arbitrary polynomial-size circuits, feature distributed key generation (for MA) and encryption (for 2C/MC), and are proven secure in the random oracle model. Although our constructions only achieve limited security against corrupt authorities/clients, the fully distributed key generation/encryption feature makes them nevertheless non-trivial and meaningful.

Prior to this work, existing MA-ABEs only supported up to NC1 policies regardless of their security against corrupt authorities; existing MI-ABEs only supported up to constant-many encryptors/clients and did not achieve any security against corrupt encryptors/clients; and MC-ABEs only existed in the form of MC-FEs for linear and quadratic functions.

Simulation-Secure Threshold PKE from LWE with Polynomial Modulus

Daniele Micciancio, Adam Suhl

Research paper

In LWE-based cryptosystems, using a small (polynomially large) ciphertext modulus improves both efficiency and security. In threshold encryption, one often needs simulation security: the ability to simulate decryption shares without the secret key. Existing lattice-based threshold encryption schemes provide one or the other but not both. Simulation security has seemed to require superpolynomial flooding noise, and the schemes with polynomial modulus use Rényi-divergence-based analyses that are sufficient for game-based but not simulation security.

In this work, we give the first construction of simulation-secure lattice-based threshold PKE with polynomially large modulus. The construction itself is relatively standard, but we use an improved analysis, proving that when the ciphertext noise and flooding noise are both Gaussian, simulation is possible even with very small flooding noise. Our modulus is small not just asymptotically but also concretely: this technique gives parameters roughly comparable to those of highly optimized non-threshold schemes like FrodoKEM. As part of our proof, we show that LWE remains hard in the presence of some types of leakage; these results and techniques may also be useful in other contexts where noise flooding is used.

Haven++: Batched and Packed Dual-Threshold Asynchronous Complete Secret Sharing with Applications

Nicolas Alhaddad, Mayank Varia, Ziling Yang

Research paper

Asynchronous complete secret sharing (ACSS) is a foundational primitive in the design of distributed algorithms and cryptosystems that require confidentiality. ACSS permits a dealer to distribute a secret to a collection of N servers so that everyone holds shares of a polynomial containing the dealer's secret.

This work contributes a new ACSS protocol, called Haven++, that uses packing and batching to make asymptotic and concrete advances in the design and application of ACSS for large secrets. Haven++ allows the dealer to pack multiple secrets in a single sharing phase, and to reconstruct either one or all of them later. For even larger secrets, we contribute a batching technique to amortize the cost of proof generation and verification across multiple invocations of our protocol.

The result is an asymptotic improvement in the worst-case amortized communication and computation complexity, both for ACSS itself and for its application to asynchronous distributed key generation (ADKG). Our ADKG based on Haven++ achieves, for the first time, an optimal worst-case amortized communication complexity of $\kappa N$ without a trusted setup. To show the practicality of Haven++, we implement it and find that it outperforms the work of Yurek et al. (NDSS 2022) by more than an order of magnitude when there are malicious, faulty parties.

Erebor and Durian: Full Anonymous Ring Signatures from Quaternions and Isogenies

Giacomo Borin, Yi-Fu Lai, Antonin Leroux

Research paper

We construct two efficient post-quantum ring signatures with anonymity against full key exposure from isogenies, addressing the limitations of existing isogeny-based ring signatures.

First, we present an efficient concrete distinguisher for the SQIsign simulator that, given the signing key, works with a single transcript. This shows that turning SQIsign into an efficient fully anonymous ring signature requires some new ideas.

Second, we propose a variant of SQIsign (Asiacrypt'20) that is resistant to the distinguisher attack with only a 1.4× increase in size, and we turn it into a ring signature, which we refer to as Erebor. This variant introduces a new zero-knowledge assumption that ensures full anonymity. The efficiency of Erebor remains comparable to that of SQIsign, with only a proportional increase due to the ring size. This results in a signature size of 0.71 KB for 4 users and 1.41 KB for 8 users, making it the most compact post-quantum ring signature for up to 29 users.

Third, we revisit the GPS signature scheme (Asiacrypt'17), developing efficient subroutines that make the scheme more efficient and significantly reduce the resulting signature size. By integrating our scheme with the paradigm by Beullens, Katsumata, and Pintore (Asiacrypt'20), we achieve an efficient logarithmic ring signature, which we call Durian, resulting in a signature size of 9.87 KB for a ring of size 1024.

A New Paradigm for Server-Aided MPC

Alessandra Scafuro, Tanner Verber

Research paper

The server-aided model for multiparty computation (MPC) was introduced to capture a real-world scenario where clients wish to off-load the heavy computation of MPC protocols to dedicated servers. A rich body of work has studied various trade-offs between security guarantees (e.g., semi-honest vs malicious), trust assumptions (e.g., the threshold on corrupted servers), and efficiency.

However, all existing works make the assumption that all clients must agree on employing the same servers, and accept the same corruption threshold. In this paper, we challenge this assumption and introduce a new paradigm for server-aided MPC, where each client can choose their own set of servers and their own threshold of corrupted servers. In this new model, the privacy of each client is guaranteed as long as their own threshold is satisfied, regardless of the other servers/clients. We call this paradigm per-party private server-aided MPC to highlight both a security and an efficiency guarantee: (1) per-party privacy, which means that each party gets their own privacy guarantees that depend on their own choice of the servers; (2) per-party complexity, which means that each party only needs to communicate with their chosen servers. Our primary contribution is a new theoretical framework for server-aided MPC. We provide two protocols to show feasibility, but leave it as future work to investigate protocols that focus on concrete efficiency.

HELP: Everlasting Privacy through Server-Aided Randomness

Yevgeniy Dodis, Jiaxin Guan, Peter Hall, Alison Lin

Research paper

Everlasting (EL) privacy offers an attractive solution to the Store-Now-Decrypt-Later (SNDL) problem, where future increases in the attacker's capability could break systems which are believed to be secure today. Instead of requiring full information-theoretic security, everlasting privacy allows computationally-secure transmissions of ephemeral secrets, which are only "effective" for a limited period of time, after which their compromise is provably useless for the SNDL attacker.

In this work we revisit the everlasting privacy model of Dodis and Yeo (ITC'21), which we call Hypervisor EverLasting Privacy (HELP). HELP is a novel architecture for generating shared randomness using a network of semi-trusted servers (or "hypervisors"), trading the need to store/distribute large shared secrets for the assumptions that it is hard to: (a) simultaneously compromise too many publicly accessible ad-hoc servers; and (b) break a computationally-secure encryption scheme very quickly. While Dodis and Yeo presented good HELP solutions in the asymptotic sense, their solutions were concretely expensive and used heavy tools (like large finite fields or gigantic Toeplitz matrices).

We abstract and generalize the HELP architecture to allow for more efficient instantiations, and construct several concretely efficient HELP solutions. Our solutions use elementary cryptographic operations, such as hashing and message authentication. We also prove a very strong composition theorem showing that our EL architecture can use any message transmission method which is computationally-secure in the Universal Composability (UC) framework. This is the first positive composition result for everlasting privacy, which was otherwise known to suffer from many "non-composition" results (Müller-Quade and Unruh; J of Cryptology'10).

Fault-tolerant Verifiable Dynamic SSE with Forward and Backward Privacy

Bibhas Chandra Das, Nilanjan Datta, Avishek Majumder, Subhabrata Samajder

Research paper

Dynamic Searchable Symmetric Encryption (DSSE) allows users to securely outsource their data to cloud servers while enabling efficient searches and updates. The verifiability property of a DSSE construction ensures that users do not accept incorrect search results from a malicious server, while the fault-tolerance property guarantees that the construction functions correctly even with faulty queries from the client (e.g., adding a keyword to a document multiple times, deleting a keyword from a document that was never added). There have been very few studies on fault-tolerant verifiable DSSE schemes that achieve forward privacy, and none of the existing constructions achieve backward privacy. In this paper, we aim to design an efficient fault-tolerant verifiable DSSE scheme that provides both forward and backward privacy. First, we propose a basic fault-tolerant verifiable DSSE scheme, dubbed $\textsf{FVS1}$, which achieves forward privacy and stronger backward privacy with the update pattern (BPUP). However, the communication complexity for the search operation of this scheme is $O(u)$, where $u$ is the total number of updates for the search keyword. To address this issue, we propose an efficient variant of the previous DSSE scheme, called $\textsf{FVS2}$, which achieves the same functionality with an optimized communication complexity of $O(m+u')$ for search queries. Here $m$ is the size of the result set and $u'$ is the number of update operations made on the queried keyword since the previous search on that keyword. This improvement comes at the cost of some additional information leakage, but it ensures that the construction achieves backward privacy with the link pattern (BPLP).

Proximity Gaps in Interleaved Codes

Benjamin E. Diamond, Angus Gruen

Research paper

A linear error-correcting code exhibits proximity gaps if each affine line of words either consists entirely of words which are close to the code or else contains almost no such words. In this short note, we prove that for each linear code which exhibits proximity gaps within the unique decoding radius, that code's interleaved code also does. Combining our result with a recent argument of Angeris, Evans and Roh ('24), we extend those authors' sharpening of the tensor-based proximity gap of Diamond and Posen (Commun. Cryptol. '24) up to the unique decoding radius, at least in the Reed–Solomon setting.
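
For readers less familiar with the terminology, a generic way to state the proximity-gap property for a linear code $C \subseteq \mathbb{F}_q^n$ is the following (an illustrative formulation with generic parameters $\delta$ and $\varepsilon$, not the exact parameters of this paper):

```latex
% Generic proximity-gap statement for affine lines (illustrative; parameters are generic).
% \Delta(\cdot, C) denotes relative Hamming distance to the code C.
For all $u_0, u_1 \in \mathbb{F}_q^n$, either
\[
  \Pr_{r \leftarrow \mathbb{F}_q}\bigl[\Delta(u_0 + r\,u_1,\ C) \le \delta\bigr] \le \varepsilon,
\]
or every word on the line is close, i.e.\ $\Delta(u_0 + r\,u_1,\ C) \le \delta$ for all $r \in \mathbb{F}_q$.
```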

Masked Computation of the Floor Function and Its Application to the FALCON Signature

Pierre-Augustin Berthet, Justine Paillet, Cédric Tavernier, Lilian Bossuet, Brice Colombier

Research paper

FALCON is a signature scheme selected for standardisation by the National Institute of Standards and Technology (NIST) as part of its Post-Quantum Cryptography (PQC) process. However, it remains a challenge to define efficient countermeasures against side-channel attacks (SCA) for this algorithm. FALCON is a lattice-based signature that relies on rational numbers, which is unusual in the cryptography field. Although recent work proposed a solution to mask the addition and the multiplication, some roadblocks remain, most notably how to protect the floor function. In this work, we propose to complete the first existing attempts at hardening FALCON against SCA. We give mathematical proofs of our methods as well as formal security proofs in the probing model by ensuring Multiple Input Multiple Output Strong Non-Interference (MIMO-SNI) security. We provide performance figures, measured on a laptop, for our gadgets as well as for a complete masked FALCON. We observe significant overhead in doing so and discuss the deployability of our method in a real-world context.

More Efficient Lattice-Based Electronic Voting from NTRU

Patrick Hough, Caroline Sandsbråten, Tjerand Silde

Research paper

In recent years, there has been much focus on developing core cryptographic primitives based on lattice assumptions, driven by the NIST call for post-quantum key encapsulation and digital signature algorithms. However, more work must be conducted on efficient privacy-preserving protocols based on quantum-safe assumptions. Electronic voting is one such privacy-preserving protocol whose adoption is increasing across the democratic world. E-voting offers a fast and convenient alternative to postal voting whilst further ensuring cryptographic privacy of votes and offering full verifiability of the process. Owing to the sensitivity of voting and its infrastructure challenges, it is crucial to ensure that security against quantum computers is baked into e-voting solutions. We present an e-voting scheme from quantum-safe assumptions based on the hardness of the RLWE and NTRU lattice problems, providing concrete parameters and an efficient implementation. Our design achieves a factor $5.3 \times$ reduction in ciphertext size, $2.5 \times$ reduction in total communication cost, and $2 \times$ reduction in total computation time compared to the state-of-the-art lattice-based voting scheme by Aranha et al. (ACM CCS 2023). We argue that the efficiency of this scheme makes it suitable for real-world elections. Our scheme makes use of non-ternary NTRU secrets to achieve optimal parameters. In order to assess the security of our design, we extend the ternary-NTRU work of Ducas and van Woerden (ASIACRYPT 2021) by determining the concrete fatigue point (for general secrets) of NTRU to be $q = 0.0058 \cdot \sigma^2 \cdot d^{2.484}$ (above which parameters become overstretched) for modulus $q$, ring dimension $d$, and secrets drawn from a Gaussian of parameter $\sigma$. We consider this relation to be of independent interest and demonstrate its significance by improving the efficiency of the (partially) blind signature scheme by del Pino and Katsumata (CRYPTO 2022).
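
As a quick illustration of how the fatigue-point relation quoted above can be used, the sketch below simply evaluates $q = 0.0058 \cdot \sigma^2 \cdot d^{2.484}$ for a few example values of $d$ and $\sigma$; the example values are ours and are not parameters from the paper.

```python
# Sketch: evaluating the NTRU fatigue-point relation q = 0.0058 * sigma^2 * d^2.484
# stated in the abstract. The (d, sigma) pairs below are illustrative only.

def ntru_fatigue_point(d: int, sigma: float) -> float:
    """Modulus above which NTRU parameters become overstretched, per the stated relation."""
    return 0.0058 * sigma**2 * d**2.484

for d, sigma in [(512, 3.2), (1024, 3.2), (2048, 3.2)]:  # hypothetical example values
    print(f"d = {d:4d}, sigma = {sigma}: fatigue point q ~ {ntru_fatigue_point(d, sigma):.3e}")
```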

Scaling Lattice Sieves across Multiple Machines

Martin R. Albrecht, Joe Rowell

Research paper

Lattice sieves are algorithms for finding short vectors in lattices. We present an implementation of two such sieves, known as "BGJ1" and "BDGL" in the literature, that scales across multiple servers (with varying success). This class of algorithms requires exponential memory, which had called into question their ability to scale across sieving nodes. We discuss our architecture and optimisations and report experimental evidence of the efficiency of our approach.

Folding Schemes with Privacy Preserving Selective Verification

Joan Boyar, Simon Erfurth

Research paper

Folding schemes are an exciting new primitive, transforming the task of performing multiple zero-knowledge proofs of knowledge for a relation into performing just one zero-knowledge proof for the same relation together with a number of cheap inclusion proofs. Recently, folding schemes have been used to amortize the cost associated with proving different statements to multiple distinct verifiers, which has various applications. We observe that for these uses, leaking information about the statements folded together can be problematic, yet this happens with previous constructions. Towards resolving this issue, we give a natural definition of privacy-preserving folding schemes and of the security they should offer. To construct privacy-preserving folding schemes, we first define statement hiders, a primitive which might be of independent interest. In a nutshell, a statement hider hides an instance of a relation as a new instance in the same relation. The new instance is in the relation if and only if the initial instance is. With this building block, we can utilize existing folding schemes to construct a privacy-preserving folding scheme, by first hiding each of the statements. Folding schemes allow verifying that a statement was folded into another statement, while statement hiders allow verifying that a statement was hidden as another statement.
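
To make the statement-hider idea concrete, here is a toy analogue (our own illustration, not the paper's construction) for the quadratic-residuosity relation: multiplying an instance by a random square produces a fresh-looking instance that is in the language exactly when the original is, and the witness transforms accordingly.

```python
# Toy analogue of a "statement hider" for R = {(x, w) : x = w^2 mod N} (illustrative only).
import random
from math import gcd

random.seed(7)
p, q = 1009, 1013          # toy primes; real instantiations would use a large modulus
N = p * q

def sample_unit() -> int:
    while True:
        v = random.randrange(2, N)
        if gcd(v, N) == 1:
            return v

def hide(x: int):
    """Hide instance x as x * s^2 mod N for a random unit s."""
    s = sample_unit()
    return (x * pow(s, 2, N)) % N, s

def is_qr(x: int) -> bool:
    """Membership test via the factorisation (used only to check the toy example)."""
    return pow(x, (p - 1) // 2, p) == 1 and pow(x, (q - 1) // 2, q) == 1

w = sample_unit()
x = pow(w, 2, N)                            # instance with witness w
x_hidden, s = hide(x)
assert is_qr(x) == is_qr(x_hidden)          # membership is preserved
assert pow((w * s) % N, 2, N) == x_hidden   # witness transforms as w -> w*s
print("hidden instance preserves membership; witness maps to w*s mod N")
```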

Authenticity in the Presence of Leakage using a Forkcipher

Francesco Berti, François-Xavier Standaert, Itamar Levi

Research paper

Robust message authentication codes (MACs) and authenticated encryption (AE) schemes that provide authenticity in the presence of side-channel leakage are essential primitives. These constructions often rely on primitives designed for strong leakage protection, among them strongly unpredictable (tweakable) block ciphers. This paper extends the strong-unpredictability security definition to the versatile new forkcipher primitive. We show how to construct secure and efficient MACs and AEs that guarantee authenticity in the presence of leakage. We present a leakage-resistant MAC, ForkMAC, and two leakage-resistant AE schemes, ForkDTE1 and ForkDTE2, which use forkciphers instead of the traditional secure (tweakable) block ciphers employed in prior art. We prove and analyze their security in the presence of leakage based on a strongly unpredictable forkcipher. A comparison with the state-of-the-art in terms of both security and efficiency is included in the paper. Key advantages of the proposed constructions are the minimal assumptions they require (unpredictability in the presence of leakage) and their efficiency: the tag generation of ForkMAC is the most efficient among leakage-resilient MAC proposals such as the block-cipher-based HBC, and ForkDTE1 and ForkDTE2 have more efficient encryption than any other scheme achieving integrity with leakage (while also providing misuse resistance).

A Key-Recovery Attack on a Leaky SeaSign Variant

Shai Levin

Research paper

We present a key-recovery attack on a variant of the SeaSign signature scheme presented by [Kim24], which attempts to avoid rejection sampling by presampling vectors $f$ such that $f-e$ lies within an acceptable bound, where $e$ is the secret key. We show that this choice biases these vectors such that, from a small number of signatures, the secret key can either be completely recovered or its keyspace substantially reduced. In particular, given 20 signatures, with parameter set II of their paper, the attack reduces the private key to 128 possibilities.
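
The following toy simulation (our own simplified model, using hypothetical parameters rather than those of [Kim24] or of the actual attack) illustrates why such presampling is dangerous: each revealed coordinate $f_i$ falls in an interval centred on the secret coordinate $e_i$, so intersecting these intervals over a few signatures rapidly shrinks the keyspace.

```python
# Toy model of the bias (illustrative only). Each revealed coordinate f_i is uniform in
# [e_i - B, e_i + B], so the interval [max(f_i) - B, min(f_i) + B] must contain e_i
# and shrinks as more signatures are observed.
import random

random.seed(1)
n, B = 16, 10                                   # hypothetical dimension and bound
e = [random.randint(-1, 1) for _ in range(n)]   # toy secret with coordinates in {-1, 0, 1}

def keyspace_after(num_sigs: int) -> int:
    samples = [[e_i + random.randint(-B, B) for e_i in e] for _ in range(num_sigs)]
    total = 1
    for i in range(n):
        col = [f[i] for f in samples]
        lo, hi = max(max(col) - B, -1), min(min(col) + B, 1)  # interval containing e_i
        total *= hi - lo + 1
    return total

print(f"full keyspace: {3**n} candidates")
for m in (5, 20, 60):
    print(f"after {m:2d} signatures: {keyspace_after(m)} candidates remain")
```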

On Loopy Belief Propagation for SASCAs: An Analysis and Empirical Study of the Inference Problem

Rishub Nagpal, Gaëtan Cassiers, Robert Primas, Christian Knoll, Franz Pernkopf, Stefan Mangard

Research paper

Profiled power analysis is one of the most powerful forms of passive side-channel attacks. Over the last two decades, many works have analyzed their impact on cryptographic implementations as well as corresponding countermeasure techniques. To date, the most advanced variants of profiled power analysis are based on Soft-analytical Side-Channel Attacks (SASCA). After the initial profiling phase, a SASCA adversary creates a probabilistic graphical model, called a factor graph, of the target implementation and encodes the results of the previous step as prior information. Then, an inference algorithm such as loopy Belief Propagation (BP) can be used to recover the distribution of a target variable in the graph, i.e., sensitive data/keys.

Designers of cryptographic implementations aim to reduce information leakage as much as possible and assess how much leakage can be allowed without compromising security requirements. Despite the existence of many works on profiled power analysis, it is still notoriously difficult to state under which conditions a cryptographic implementation provides sufficient protection against a profiling attacker with certain capabilities. In particular, it is unknown when a BP-based attack is optimal or whether tuning some heuristics in that algorithm may significantly strengthen the attack.

This knowledge gap led us to investigate the effectiveness of BP for SASCAs by studying the failure modes of BP in the context of SASCA, and systematically analyzing the behavior of BP on practically-relevant factor graphs. We use exact inference to gauge the quality of the approximation provided by BP. Through this assessment, we show that there exists a significant disparity between BP and exact inference in terms of guessing entropy when performing SASCAs on several classes of factor graphs. We further review and analyze various BP improvement heuristics from the literature.

Learning with Errors from Nonassociative Algebras

Andrew Mendelsohn, Cong Ling

Research paper

We construct a provably-secure structured variant of Learning with Errors (LWE) using nonassociative cyclic division algebras, assuming the hardness of worst-case structured lattice problems, for which we are able to give a full search-to-decision reduction, improving upon the construction of Grover et al. named `Cyclic Learning with Errors' (CLWE). We are thus able to create structured LWE over cyclic algebras without any restriction on the size of secret spaces, which was required for CLWE as a result of its restricted security proof. We reduce the shortest independent vectors problem in ideal lattices, obtained from ideals in orders of such algebras, to the decision variant of LWE defined for nonassociative CDAs. We believe this variant has greater security and greater freedom with parameter choices than CLWE, and greater asymptotic efficiency of multiplication than module LWE. Our reduction requires new results in the ideal theory of such nonassociative algebras, which may be of independent interest. We then adapt an LPR-like PKE scheme to hold for nonassociative spaces, and discuss the efficiency and security of our construction, showing that it is immune to certain subfield attacks. Finally, we give example parameters to construct algebras for cryptographic use.

MAYO Key Recovery by Fixing Vinegar Seeds

Sönke Jendral, Elena Dubrova

Research paper

As the industry prepares for the transition to post-quantum secure public key cryptographic algorithms, vulnerability analysis of their implementations is gaining importance. A theoretically secure cryptographic algorithm should also be able to withstand the challenges of physical attacks in real-world environments. MAYO is a candidate in the ongoing second round of the NIST post-quantum standardization process for selecting additional digital signature schemes. This paper demonstrates three first-order single-execution fault injection attacks on the official MAYO implementation on the ARM Cortex-M4. By using voltage glitching to disrupt the computation of the vinegar seed during the signature generation, we enable the recovery of the secret key directly from the faulty signatures. Our experimental results show that the success rates of the fault attacks in a single execution are 36%, 82%, and 99%, respectively. They emphasize the importance of developing countermeasures against fault attacks prior to the widespread deployment of post-quantum algorithms like MAYO.

On Quantum Simulation-Soundness

Behzad Abdolmaleki, Céline Chevalier, Ehsan Ebrahimi, Giulio Malavolta, Quoc-Huy Vu

Research paper

Non-interactive zero-knowledge (NIZK) proof systems are a cornerstone of modern cryptography, but their security has received little attention in the quantum setting. Motivated by improving our understanding of this fundamental primitive against quantum adversaries, we propose a new definition of security against quantum adversaries. Specifically, we define the notion of quantum simulation soundness (SS-NIZK), which allows the adversary to access the simulator in superposition.

We show a separation between post-quantum and quantum security of SS-NIZK, and prove that Sahai's construction for SS-NIZK (in the CRS model) can be made quantumly simulation-sound. As an immediate application of our new notion, we prove the security of the Naor-Yung paradigm in the quantum setting, with respect to a strong quantum IND-CCA security notion. This provides the quantum analogue of the classical dual-key approach to proving the security of encryption schemes. Along the way, we introduce a new notion of quantum-query advantage functions, which may be used as a general framework to show classical/quantum separations for other cryptographic primitives, and which may be of independent interest.

Scalable Nonlinear Sequence Generation using Composite Mersenne Product Registers

David Gordon, Arman Allahverdi, Simon Abrelat, Anna Hemingway, Adil Farooq, Isabella Smith, Nitya Arora, Allen Ian Chang, Yongyu Qiang, Vincent John Mooney III

Research paper

We introduce a novel composition method that combines linear feedback registers into larger nonlinear structures and generalizes earlier methods such as cascade connections. We prove a Chaining Period Theorem which provides the cycle structure of these register constructions. We then use this Chaining Period Theorem and a new construction we call a Product Register (PR) to introduce a flexible and scalable register family with desirable properties, which we term Composite Mersenne Product Registers (CMPRs). We provide an algorithm to estimate the linear complexity of a chosen CMPR and investigate the statistical properties and security of a CMPR-based pseudorandom generator. Finally, we propose a family of CMPR-based stream ciphers and provide comparisons with the TRIVIUM stream cipher in terms of hardware area and security.

Technology-Dependent Synthesis and Optimization of Circuits for Small S-boxes

Zihao Wei, Siwei Sun, Fengmei Liu, Lei Hu, Zhiyu Zhang

Research paper

Boolean formula minimization is a notoriously hard problem. Circuit minimization, typically studied in the context of a much broader subject known as synthesis and optimization of circuits, introduces another layer of complexity, since ultimately those technology-independent representations (e.g., Boolean formulas and truth tables) have to be transformed into a netlist of cells of the target technology library. To manage those complexities, the industrial community typically separates the synthesis process into two steps: technology-independent optimization and technology mapping. In each step, this approach only tries to find a locally optimal solution and relies heavily on heuristics rather than a systematic search. However, for small S-boxes, a more systematic exploration of the design space is possible. Aiming at the global optimum, we propose a method which can synthesize a truth table for a small S-box directly into a netlist of the cells of a given technology library. Compared with existing technology-dependent synthesis tools like LIGHTER and PEIGEN, our method produces improved results for many S-boxes with respect to circuit area. In particular, by applying our method to the GF(2^4)-inverter involved in the tower field implementation of the AES S-box, we obtain the lightest currently known implementation of the AES S-box. The search framework can be tweaked to take circuit delay into account. As a result, we find implementations for certain S-boxes with both latency and area improved.

Cryptography is Rocket Science: Analysis of BPSec

Benjamin Dowling, Britta Hale, Xisen Tian, Bhagya Wimalasiri

Research paper

Space networking has become an increasing area of development with the advent of commercial satellite networks such as those hosted by Starlink and Kuiper, and increased satellite and space presence by governments around the world. Yet, historically such network designs have not been made public, leading to limited formal cryptographic analysis of the security offered by them. One of the few public protocols used in space networking is the Bundle Protocol, which is secured by Bundle Protocol Security (BPSec), an Internet Engineering Task Force (IETF) standard. We undertake a first analysis of BPSec under its default security context, building a model of the secure channel security goals stated in the IETF standard, and note issues therein with message loss detection. We prove BPSec secure, and also provide a stronger construction, one that supports the Bundle Protocol's functionality goals while also ensuring destination awareness of missing message components.

On the Privacy of Sublinear-Communication Jaccard Index Estimation via Min-hash

Mingyu Liang, Seung Geol Choi, Dana Dachman-Soled, Linsheng Liu, Arkady Yerukhimovich

Research paper

The min-hash sketch is a well-known technique for low-communication approximation of the Jaccard index between two input sets. Moreover, there is a folklore belief that min-hash sketch-based protocols protect the privacy of the inputs. In this paper, we consider variants of private min-hash sketch-based protocols and investigate this folklore to quantify the privacy of the min-hash sketch.

We begin our investigation by presenting a highly-efficient two-party protocol for estimating the Jaccard index while ensuring differential privacy. This protocol adds Laplacian noise to the min-hash sketch counts to provide privacy protection.
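
The sketch below illustrates the general flavour of such a mechanism (our own minimal toy, not the paper's protocol; in particular the sensitivity bound used to calibrate the noise is a placeholder, whereas its correct calibration is part of the paper's contribution):

```python
# Minimal toy of Laplace-noised min-hash Jaccard estimation (illustrative only; not the
# paper's protocol, and the sensitivity bound SENS is a placeholder assumption).
import hashlib
import random

def minhash_sketch(items, k: int):
    """k min-hash values from k salted SHA-256 hashes (a generic min-hash construction)."""
    def h(salt, x):
        return int.from_bytes(hashlib.sha256(f"{salt}|{x}".encode()).digest(), "big")
    return [min(h(i, x) for x in items) for i in range(k)]

SENS = 1  # placeholder sensitivity bound for this toy; the real calibration is the delicate part

def noisy_jaccard_estimate(sk_a, sk_b, epsilon: float) -> float:
    k = len(sk_a)
    matches = sum(a == b for a, b in zip(sk_a, sk_b))
    # Laplace(scale = SENS/epsilon) noise, sampled as a difference of two exponentials.
    noise = random.expovariate(epsilon / SENS) - random.expovariate(epsilon / SENS)
    return max(0.0, min(1.0, (matches + noise) / k))

A, B = set(range(0, 400)), set(range(200, 600))
est = noisy_jaccard_estimate(minhash_sketch(A, 128), minhash_sketch(B, 128), epsilon=1.0)
print(f"true Jaccard = {len(A & B) / len(A | B):.3f}, noisy estimate ~ {est:.3f}")
```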

Then, we aim to understand what privacy, if any, is guaranteed if the results of the min-hash are released without any additional noise, such as in the case of historical data. We begin our investigation by considering the privacy of min-hash in a centralized setting where the hash functions are chosen by the min-hash functionality and are unknown to the participants. We show that in this case the min-hash output satisfies the standard definition of differential privacy (DP) without any additional noise.

We next consider a more practical distributed setting, where the hash function must be shared among all parties and is typically public.

Unfortunately, we show that in this public hash function setting, the min-hash output is no longer DP. We therefore consider the notion of distributional differential privacy (DDP) introduced by Bassily et al. (FOCS 2013). We show that if the honest party's set has sufficiently high min-entropy, the min-hash output achieves DDP without requiring noise.

Our findings provide guidance on how to use the min-hash sketch for private Jaccard index estimation and clarify the extent to which min-hash protocols protect input privacy, refining the common belief in their privacy guarantees.

Building a BBB Pseudorandom Permutation using Lai-Massey Networks

Ritam Bhaumik, Mohammad Amin Raeisi

Research paper

In spite of being a popular technique for designing block ciphers, Lai-Massey networks have received considerably less attention from a security analysis point of view than Feistel networks and Substitution-Permutation networks. In this paper we study the beyond-birthday-bound (BBB) security of Lai-Massey networks with independent random round functions against chosen-plaintext adversaries. Concretely, we show that five rounds are necessary and sufficient to achieve BBB security.
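
For readers who want to recall the structure being analysed, the toy sketch below implements a generic Lai-Massey network over XOR with the standard half-block orthomorphism and hash-based stand-ins for independent random round functions; it illustrates the round structure only and is not the construction or proof from the paper.

```python
# Toy Lai-Massey network over 32-bit halves with XOR and the standard orthomorphism
# sigma(a || b) = b || (a XOR b). Round functions are modelled as independent "random"
# functions via salted SHA-256. Structure only; not the paper's analysis.
import hashlib

MASK16 = 0xFFFF

def F(round_index: int, x: int) -> int:
    """Independent pseudo-random round function on 32-bit values (toy model)."""
    digest = hashlib.sha256(f"round{round_index}|{x}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def sigma(x: int) -> int:
    """Orthomorphism on 32 bits: (a, b) -> (b, a ^ b) with 16-bit halves."""
    a, b = x >> 16, x & MASK16
    return (b << 16) | (a ^ b)

def lai_massey(left: int, right: int, rounds: int = 5):
    for i in range(rounds):
        t = F(i, left ^ right)               # round function applied to the "difference"
        left, right = sigma(left ^ t), right ^ t
    return left, right

print([hex(v) for v in lai_massey(0x01234567, 0x89ABCDEF)])
```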

Zero-Knowledge Proofs of Quantumness

Duong Hieu Phan, Weiqiang Wen, Xingyu Yan, Jinwei Zheng

Research paper

With the rapid development of quantum computers, proofs of quantumness have recently become an interesting and intriguing research direction. However, in all current schemes for proofs of quantumness, quantum provers almost invariably face the risk of being maliciously exploited by classical verifiers. In fact, through malicious strategies in interaction with quantum provers, classical verifiers could solve some instances of hard problems that arise from the specific scheme in use. In other words, malicious verifiers can break some schemes (which the quantum provers are not aware of) through interaction with quantum provers. All this is due to the lack of a formalization that would prevent malicious verifiers from extracting useful information in proofs of quantumness.

To address this issue, we formalize zero-knowledge proofs of quantumness. Intuitively, the zero-knowledge property necessitates that the information gained by the classical verifier from interactions with the quantum prover should not surpass what can be simulated using a simulated classical prover interacting with the same verifier. As a result, the new zero-knowledge notion can prevent any malicious verifier from exploiting quantum advantage. Interestingly, we find that the classical zero-knowledge proof is sufficient to compile some existing proofs of quantumness schemes into zero-knowledge proofs of quantumness schemes.

For technical reasons, it appears to be more general to require the zero-knowledge proof on the verifier side instead of the prover side. Intuitively, this helps restrict the verifier's behavior from malicious to honest-but-curious. As a result, each party plays not only its role in the proof of quantumness but also the dual role in the classical zero-knowledge proof.

Specifically, the two principal proof-of-quantumness schemes, Shor's factoring-based scheme and the learning-with-errors-based scheme of [Brakerski et al., FOCS 2018], can be transformed into zero-knowledge proofs of quantumness by requiring an extractable non-interactive zero-knowledge argument on the verifier side. Notably, zero-knowledge proofs of quantumness can be viewed as an enhanced security notion for proofs of quantumness. To prevent malicious verifiers from exploiting the quantum device's capabilities or knowledge, it is advisable to transition existing proofs of quantumness schemes to this framework whenever feasible.

Ultra Low-Latency Block Cipher uLBC

Guoxiao Liu, Qingyuan Yu, Liyuan Tang, Shihe Ma, Congming Wei, Keting Jia, Lingyue Qin, Xiaoyang Dong, Yantian Shen

Research paper

In recent years, there has been a growing interest in low-latency ciphers. Since the first low-latency block cipher PRINCE was proposed at ASIACRYPT 2012, many low-latency primitives have sprung up, such as Midori, MANTIS, QARMA and SPEEDY. Some ciphers, like SPEEDY and Orthros, introduce bit permutations to achieve reduced delay. However, this approach poses a challenge in evaluating the resistance against some cryptanalytic techniques, especially differential and linear attacks; for example, SPEEDY-7-192 was fully broken by Boura et al. using a differential attack. In this paper, we propose a novel low-latency block cipher which guarantees security against differential and linear attacks. Revisiting the permutation technique used in Orthros, we investigate the selection of nibble permutations and propose a method for selecting them systematically rather than relying on random search. Our new nibble permutation method ensures the existence of impossible differential and differential trails only for up to 8 rounds, while the nibble permutations for both branches of Orthros may lead to a 9-round impossible differential trail. Furthermore, we introduce a new approach for constructing low-latency coordinate functions for 4-bit S-boxes, which involves a more precise delay computation compared to traditional methods based solely on circuit depth. The new low-latency primitive we propose, uLBC, is a family of 128-bit block ciphers with three key-length versions: 128-bit and 256-bit keys, as well as a 384-bit tweakey version with variable-length key, named uLBC-128, uLBC-256 and uLBC-384t according to the key length. Our analysis shows that uLBC-128 exhibits lower latency and area requirements compared to ciphers such as QARMA9-128 and Midori128. In terms of performance, uLBC-128 has excellent area-time (AT) performance, the best except for SPEEDY-6, and even the best in UMC 55nm in our experiments.

Security Guidelines for Implementing Homomorphic Encryption

Jean-Philippe Bossuat, Rosario Cammarota, Ilaria Chillotti, Benjamin R. Curtis, Wei Dai, Huijing Gong, Erin Hales, Duhyeong Kim, Bryan Kumara, Changmin Lee, Xianhui Lu, Carsten Maple, Alberto Pedrouzo-Ulloa, Rachel Player, Yuriy Polyakov, Luis Antonio Ruiz Lopez, Yongsoo Song, Donggeon Yhee

SoK paper

Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows performing arbitrary operations on encrypted data. Since the conception of the idea in [RAD78], it has been considered a holy grail of cryptography. After the first construction in 2009 [Gen09], it has evolved to become a practical primitive with strong security guarantees. Most modern constructions are based on well-known lattice problems such as Learning With Errors (LWE). Besides its academic appeal, in recent years FHE has also attracted significant attention from industry, thanks to its applicability to a considerable number of real-world use-cases. An upcoming standardization effort by ISO/IEC aims to support the wider adoption of these techniques. However, one of the main challenges that standards bodies, developers, and end users usually encounter is establishing parameters. This is particularly hard in the case of FHE because the parameters are not only related to the security level of the system, but also to the type of operations that the system is able to handle. In this paper we provide examples of parameter sets for LWE targeting particular security levels, that can be used in the context of FHE constructions. We also give examples of complete FHE parameter sets, including the parameters relevant for correctness and performance, alongside those relevant for security. As an additional contribution, we survey the parameter selection support offered in open-source FHE libraries.

XorSHAP: Privacy-Preserving Explainable AI for Decision Tree Models

Dimitar Jetchev, Marius Vuille

Research paper

Explainable AI (XAI) refers to the development of AI systems and machine learning models in a way that humans can understand, interpret and trust the predictions, decisions and outputs of these models. A common approach to explainability is feature importance, that is, determining which input features of the model have the most significant impact on the model prediction. Two major techniques for computing feature importance are LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). While very generic, these methods are computationally expensive even when the data is not encrypted. Applying them in the privacy-preserving setting when part or all of the input data is private is therefore a major computational challenge. In this paper, we present XorSHAP - the first practical data-oblivious algorithm for computing SHAP values for decision tree ensemble models. The algorithm is applicable in various privacy-preserving settings such as SMPC, FHE and differential privacy. Our algorithm has complexity $O(T \widetilde{M} D 2^D)$, where $T$ is the number of decision trees in the ensemble, $D$ is the depth of the decision trees and $\widetilde{M}$ is the maximum of the number of features $M$ and $2^D$ (the number of leaf nodes of a tree), and scales to real-world datasets. We implement the algorithm in the semi-honest Secure Multiparty Computation (SMPC) setting with full threshold using Inpher's Manticore framework. Our implementation simultaneously computes the SHAP values for 100 samples for an ensemble of $T = 60$ trees of depth $D = 4$ and $M = 100$ features in just 7.5 minutes, meaning that the SHAP values for a single prediction are computed in just 4.5 seconds for the same decision tree ensemble model. Additionally, it is parallelization-friendly, thus, enabling future work on massive hardware acceleration with GPUs.
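
As a back-of-envelope check of how the stated complexity behaves at the reported benchmark sizes (our own arithmetic, using only the figures quoted above):

```python
# Evaluating the dominant term of the stated O(T * M~ * D * 2^D) complexity at the
# benchmark quoted above: T = 60 trees, depth D = 4, M = 100 features.
T, D, M = 60, 4, 100
M_tilde = max(M, 2**D)          # M~ is the maximum of M and the number of leaves 2^D
print(f"M~ = {M_tilde}, T * M~ * D * 2^D = {T * M_tilde * D * 2**D:,}")  # 384,000
```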

Perfectly Secure Fluid MPC with Abort and Linear Communication Complexity

Alexander Bienstock, Daniel Escudero, Antigoni Polychroniadou

Research paper

The Fluid multiparty computation (MPC) model, introduced in (Choudhuri et al. CRYPTO 2021), addresses dynamic scenarios where participants can join or leave computations between rounds. Communication complexity initially stood at $\Omega(n^2)$ elements per gate, where $n$ is the number of parties in a committee online at a time. This held for both statistical security (honest majority) and computational security (dishonest majority) in (Choudhuri et al. CRYPTO'21) and (Rachuri and Scholl, CRYPTO'22), respectively. The work of (Bienstock et al. CRYPTO'23) improved communication to $O(n)$ elements per gate. However, it's important to note that the perfectly secure setting with one-third corruptions per committee has only recently been addressed in the work of (David et al. CRYPTO'23). Notably, their contribution marked a significant advancement in the Fluid MPC literature by introducing guaranteed output delivery. However, this achievement comes at the cost of prohibitively expensive communication, which scales to $\Omega(n^9)$ elements per gate.

In this work, we study the realm of perfectly secure Fluid MPC under one-third active corruptions. Our primary focus lies in proposing efficient protocols that embrace the concept of security with abort. Towards this, we design a protocol for perfectly secure Fluid MPC that requires only linear communication of $O(n)$ elements per gate, matching the communication of the non-Fluid setting. Our results show that, as in the case of computational and statistical security, perfect security with abort for Fluid MPC comes “for free” (asymptotically linear in $n$) with respect to traditional non-Fluid MPC, marking a substantial leap forward in large scale dynamic computations, such as Fluid MPC.

Round-Optimal Compiler for Semi-Honest to Malicious Oblivious Transfer via CIH

Varun Madathil, Alessandra Scafuro, Tanner Verber

Research paper

A central question in the theory of cryptography is whether we can build protocols that achieve stronger security guarantees, e.g., security against malicious adversaries, by combining building blocks that achieve much weaker security guarantees, e.g., security only against semi-honest adversaries; and with the minimal number of rounds. An additional focus is whether these building blocks can be used only as a black-box. Since Oblivious Transfer (OT) is the necessary and sufficient building block to securely realize any two-party (and multi-party) functionality, theoreticians often focus on proving whether maliciously secure OT can be built from a weaker notion of OT.

There is a rich body of literature that provides (black-box) compilers that build malicious OT from OTs that achieve weaker security such as semi-malicious OT and defensibly secure OT, within the minimal number of rounds. However, no round-optimal compiler exists that builds malicious OT from the weakest notion of semi-honest OT, in the plain model.

Correlation intractable hash (CIH) functions are special hash functions whose properties allow instantiating the celebrated Fiat-Shamir transform, and hence reduce the round complexity of public-coin proof systems.

In this work, we devise the first round-optimal compiler from semi-honest OT to malicious OT, by a novel application of CIH for collapsing rounds in the plain model. We provide the following contributions. First, we provide a new CIH-based round-collapsing construction for general cut-and-choose. This gadget can be used generally to prove the correctness of the evaluation of a function. Then, we use our gadget to build the first round-optimal compiler from semi-honest OT to malicious OT.

Our compiler uses the semi-honest OT protocol and the other building blocks in a black-box manner. However, for technical reasons, the underlying CIH construction requires an upper bound on the circuit size of the semi-honest OT protocol used. The need for this upper bound makes our protocol not fully black-box; hence it is incomparable with existing, fully black-box compilers.

A Note on the Minimality of One-Way Functions in Post-Quantum Cryptography

Sam Buxbaum, Mohammad Mahmoody

Research paper

In classical cryptography, one-way functions (OWFs) play a central role as the minimal primitive that (almost) all primitives imply. The situation is more complicated in quantum cryptography, in which honest parties and adversaries can use quantum computation and communication, and it is known that analogues of OWFs in the quantum setting might not be minimal.

In this work we ask whether OWFs are minimal for the intermediate setting of post-quantum cryptography, in which the protocols are classical while they shall resist quantum adversaries. We show that for a wide range of natural settings, if a primitive Q implies OWFs, then so does its (uniformly or non-uniformly secure) post-quantum analogue. In particular, we show that if a primitive Q implies any other primitive P that has a 2-message security game (e.g., OWFs) through a black-box classical security reduction R, then one can always (efficiently) turn any polynomial-size quantum adversary breaking P into a polynomial-size quantum adversary breaking Q. Note that this result holds even if the implementation of P using that of Q is arbitrarily non-black-box.

We also prove extensions of this result for the case when the reduction R expects its oracle adversary to be deterministic, whenever either of the following conditions holds: (1) the adversary needs to win the security game of Q only with non-negligible probability (e.g., Q is collision-resistant hashing), or (2) either of P and Q has a “falsifiable” security game (this is the case when P is OWFs). Our work leaves open answering our main question when Q implies OWFs through a non-black-box security reduction, or when P uses a more complicated security game than a two-message one.

Publicly-Detectable Watermarking for Language Models

Jaiden Fairoze, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Mingyuan Wang

Research paper

We present a publicly-detectable watermarking scheme for LMs: the detection algorithm contains no secret information, and it is executable by anyone. We embed a publicly-verifiable cryptographic signature into LM output using rejection sampling and prove that this produces unforgeable and distortion-free (i.e., undetectable without access to the public key) text output. We make use of error-correction to overcome periods of low entropy, a barrier for all prior watermarking schemes. We implement our scheme and find that our formal claims are met in practice.
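
The toy sketch below conveys only the bare rejection-sampling idea (our simplification with a stand-in vocabulary and arbitrary message bits; the actual scheme embeds a real cryptographic signature and additionally handles error correction, low-entropy text, and distortion-freeness):

```python
# Toy illustration of embedding a bit string into generated text via rejection sampling
# (our simplification; not the paper's scheme).
import hashlib
import random

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel"]

def token_bit(context: str, token: str) -> int:
    return hashlib.sha256(f"{context}|{token}".encode()).digest()[0] & 1

def embed(bits, rng):
    """Pick one token per bit, rejection-sampling until the token's hash bit matches."""
    context, out = "", []
    for b in bits:
        while True:
            tok = rng.choice(VOCAB)           # stand-in for sampling from a language model
            if token_bit(context, tok) == b:  # accept only if the bit matches
                break
        out.append(tok)
        context += tok + " "
    return out

def detect(tokens):
    """Anyone can recompute the embedded bits from the text alone (publicly detectable)."""
    context, bits = "", []
    for tok in tokens:
        bits.append(token_bit(context, tok))
        context += tok + " "
    return bits

message_bits = [1, 0, 1, 1, 0, 0, 1, 0]       # stand-in for signature bits
text = embed(message_bits, random.Random(42))
assert detect(text) == message_bits
print(" ".join(text))
```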

On the Key-Commitment Properties of Forkcipher-based AEADs

Mostafizar Rahman, Samir Kundu, Takanori Isobe

Research paper

Forkcipher-based AEADs have emerged as lightweight and efficient cryptographic modes, making them suitable for resource-constrained environments such as IoT devices and distributed decryption through MPC. These schemes, including prominent examples like Eevee (Jolteon, Espeon, and Umbreon), PAEF, RPAEF, and SAEF, leverage the properties of forkciphers to achieve enhanced performance. However, their security in terms of key commitment, a critical property for certain applications such as secure cloud services, as highlighted by Albertini et al. (USENIX 2022), has not been comprehensively analyzed until now.

In this work, we analyze the key-commitment properties of forkcipher-based AEADs. We found that some of the forkcipher-based AEAD schemes lack key-commitment properties, primarily due to the distinctive manner in which they process associated data and plaintext. For two different keys and the same nonce, an adversary can identify associated data and plaintext blocks that produce identical ciphertext-tags with a complexity of $O(1)$. Our findings apply to various forkcipher-based AEADs, including Eevee, PAEF, and SAEF, and naturally extend to less strict frameworks, such as CMT-1 and CMT-4.

These findings highlight a significant limitation in the robustness of forkcipher-based AEADs. While these modes are attractive for their lightweight design and efficiency, their deployment should be restricted in scenarios where explicit robustness or key-commitment security is required.

Circuit Privacy for FHEW/TFHE-Style Fully Homomorphic Encryption in Practice

Kamil Kluczniak

Research paper

A fully homomorphic encryption (FHE) scheme allows a client to encrypt and delegate its data to a server that performs computation on the encrypted data that the client can then decrypt. While FHE gives confidentiality to clients' data, it does not protect the server's input and computation. Nevertheless, FHE schemes are still helpful in building delegation protocols that reduce communication complexity, as the ciphertext's size is independent of the size of the computation performed on them.

We can further extend FHE by a property called circuit privacy, which guarantees that the result of computing on ciphertexts reveals no information on the computed function and the inputs of the server. Thereby, circuit-private FHE gives rise to round-optimal and communication-efficient secure two-party computation protocols. Unfortunately, despite significant efforts and much work put into the efficiency and practical implementations of FHE schemes, very little has been done to provide useful and practical FHE supporting circuit privacy. In this work, we address this gap and design the first randomized bootstrapping algorithm whose single invocation sanitizes a ciphertext and, consequently, serves as a tool to provide circuit privacy. We give an extensive analysis, propose parameters, and provide a C++ implementation of our scheme. Our bootstrapping can sanitize a ciphertext to achieve circuit privacy at an 80-bit statistical security level in between 1.3 and 0.9 seconds, depending on which Gaussian sampling algorithm is used and whether the parameter set targets a fast Fourier or a number-theoretic-transform-based implementation. In addition, we can perform non-sanitized bootstrapping in around 0.27 or 0.14 seconds. Crucially, we do not need to increase the parameters to perform computation before or after sanitization takes place. For comparison's sake, we revisit the Ducas-Stehlé washing machine method. In particular, we give a tight analysis, estimate efficiency, review old parameters, and provide new ones.

Foundations of Data Availability Sampling

Mathias Hall-Andersen, Mark Simkin, Benedikt Wagner

Research paper

Towards building more scalable blockchains, an approach known as data availability sampling (DAS) has emerged over the past few years. Even large blockchains like Ethereum are planning to eventually deploy DAS to improve their scalability. In a nutshell, DAS allows the participants of a network to ensure the full availability of some data without any one participant downloading it entirely. Despite the significant practical interest that DAS has received, there are currently no formal definitions for this primitive, no security notions, and no security proofs for any candidate constructions. For a cryptographic primitive that may end up being widely deployed in large real-world systems, this is a rather unsatisfactory state of affairs.

In this work, we initiate a cryptographic study of data availability sampling. To this end, we define data availability sampling precisely as a clean cryptographic primitive. Then, we show how data availability sampling relates to erasure codes. We do so by defining a new type of commitment schemes which naturally generalizes vector commitments and polynomial commitments. Using our framework, we analyze existing constructions and prove them secure. In addition, we give new constructions which are based on weaker assumptions, computationally more efficient, and do not rely on a trusted setup, at the cost of slightly larger communication complexity. Finally, we evaluate the trade-offs of the different constructions.

An efficient combination of quantum error correction and authentication

Yfke Dulek, Garazi Muguruza, Florian Speelman

Research paper

When sending quantum information over a channel, we want to ensure that the message remains intact. Quantum error correction and quantum authentication both aim to protect (quantum) information, but approach this task from two very different directions: error-correcting codes protect against probabilistic channel noise and are meant to be very robust against small errors, while authentication codes prevent adversarial attacks and are designed to be very sensitive against any error, including small ones.

In practice, when sending an authenticated state over a noisy channel, one would have to wrap it in an error-correcting code to counterbalance the sensitivity of the underlying authentication scheme. We study the question of whether this can be done more efficiently by combining the two functionalities in a single code. To illustrate the potential of such a combination, we design the threshold code, a modification of the trap authentication code which preserves that code's authentication properties, but which is naturally robust against depolarizing channel noise. We show that the threshold code needs polylogarithmically fewer qubits to achieve the same level of security and robustness, compared to the naive composition of the trap code with any concatenated CSS code. We believe our analysis opens the door to combining more general error-correction and authentication codes, which could improve the practicality of the resulting scheme.