Isogeny-based schemes often come with special requirements on the field of definition of the involved elliptic curves. For instance, the efficiency of SQIsign, a promising candidate in the NIST signature standardisation process, requires a large power of two and a large smooth integer $T$ to divide $p^2-1$ for its prime parameter $p$. We present two new methods that combine previous techniques for finding suitable primes: sieve-and-boost and XGCD-and-boost. We use these methods to find primes for the NIST submission of SQIsign. Furthermore, we show that our methods are flexible and can be adapted to find suitable parameters for other isogeny-based schemes such as AprèsSQI or POKE. For all three schemes, the parameters we present offer the best performance among all parameters proposed in the literature.
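To make the stated requirement concrete, the sketch below brute-forces primes of the form $p = x \cdot 2^f - 1$ (so that $2^f$ divides $p+1$ and hence $p^2-1$) and keeps those for which the cofactor $(p^2-1)/2^f$ is entirely $B$-smooth, a stricter toy criterion than needed, since it suffices that a large smooth $T$ divides $p^2-1$. This is only a naive illustration of the search criterion with illustrative values of $f$ and $B$; it is not the paper's sieve-and-boost or XGCD-and-boost method.

```python
# Naive illustration of the SQIsign parameter requirement: find primes p
# such that 2^f divides p + 1 (hence p^2 - 1) and the remaining cofactor
# of p^2 - 1 is B-smooth. Brute force only conveys the search criterion;
# the paper's sieve-and-boost / XGCD-and-boost methods are far more
# efficient. Toy sizes: f, B, and the search range are illustrative.
from sympy import isprime

def smooth_part(n, bound):
    """Strip all prime factors <= bound from n; return the rough remainder."""
    for q in range(2, bound + 1):
        while n % q == 0:
            n //= q
    return n

def find_candidates(f=10, bound=200, trials=20000):
    found = []
    for x in range(1, trials, 2):        # p = x * 2^f - 1 with x odd
        p = (x << f) - 1
        if not isprime(p):
            continue
        # p^2 - 1 = (p - 1)(p + 1), and 2^f | p + 1 by construction.
        if smooth_part((p * p - 1) >> f, bound) == 1:
            found.append(p)
    return found

print(find_candidates()[:5])
```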
We study signatures well suited for sensitive applications (e.g. whistleblowing) where both the signer's anonymity and deniability are important. Two independent lines of work have tackled these two goals: ring signatures ensure the signer's anonymity (within a set of signers, called a ring), and — separately — multi designated verifier signatures ensure that all the intended recipients agree on whether a signature is valid, while maintaining the signer's deniability by preventing the intended recipients from convincing an outsider of the validity of the signature. In this paper, we introduce multi designated verifier ring signatures (MDVRS), which simultaneously offer both signer anonymity and deniability. This makes MDVRS uniquely suited for sensitive scenarios.
Following the blueprint of Damgård et al. (TCC '20) for multi designated verifier signatures, we introduce provably simulatable designated verifier ring signatures (PSDVRS) as an intermediate building block, which we then compile into an MDVRS. We instantiate PSDVRS in a concretely efficient way from discrete-logarithm-based sigma protocols, encryption, and commitments.
Blind signature schemes enable a user to obtain a digital signature on a message from a signer without revealing the message itself. Among the most fundamental examples of such a scheme is blind Schnorr, but recent results show that it does not satisfy the standard notion of security against malicious users, One-More Unforgeability (OMUF), as it is vulnerable to the ROS attack. However, blind Schnorr does satisfy the weaker notion of sequential OMUF, in which only one signing session is open at a time, in the Algebraic Group Model (AGM) + Random Oracle Model (ROM), assuming the hardness of the Discrete Logarithm (DL) problem.
This paper serves as a first step towards characterizing the security of blind Schnorr in the limited concurrency setting. Specifically, we show that blind Schnorr satisfies OMUF when at most two signing sessions can be concurrently open (in the AGM+ROM, assuming DL). Our argument suggests that it is plausible that blind Schnorr satisfies OMUF for up to polylogarithmically many concurrent signing sessions. Our security proof involves interesting techniques from linear algebra and combinatorics.
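For readers unfamiliar with the scheme, here is a minimal single-session blind Schnorr signing flow over a toy Schnorr group, showing how the user's blinding values $\alpha, \beta$ hide the message and the final signature from the signer. The parameters are tiny and insecure, and a single session does not exhibit the concurrent structure that the ROS attack exploits.

```python
# Toy single-session blind Schnorr over a small Schnorr group (p, q, g),
# illustrating the blinding that hides the message from the signer.
# Parameters are tiny and insecure; the hash is truncated SHA-256.
import hashlib, secrets

p, q = 2267, 103            # q | p - 1 (2267 = 22*103 + 1), toy sizes
g = pow(2, (p - 1) // q, p) # generator of the order-q subgroup

def H(Rp, m):
    data = f"{Rp}|{m}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

sk = secrets.randbelow(q)   # signer's secret key
pk = pow(g, sk, p)

# Signer, round 1: fresh nonce commitment.
r = secrets.randbelow(q)
R = pow(g, r, p)

# User: blind the commitment and derive the blinded challenge.
alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
Rp = (R * pow(g, alpha, p) * pow(pk, beta, p)) % p
c_prime = H(Rp, "message hidden from the signer")
c = (c_prime + beta) % q    # only c is sent to the signer

# Signer, round 2: respond to the blinded challenge.
s = (r + c * sk) % q

# User: unblind; (Rp, s_prime) is a plain Schnorr signature.
s_prime = (s + alpha) % q
assert pow(g, s_prime, p) == (Rp * pow(pk, c_prime, p)) % p
print("valid blind Schnorr signature (toy parameters)")
```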
Most previous attacks on Dilithium exploit side-channel information leaked during the computation of the polynomial multiplication $c\mathbf{s}_1$, where $\mathbf{s}_1$ is a small-norm secret and $c$ is the challenge. In this paper, we present a new attack utilizing leakage during secret key unpacking in the signing algorithm. The unpacking is also used in other post-quantum cryptographic algorithms, including Kyber, because the inputs and outputs of their API functions are byte arrays. Exploiting leakage during unpacking is more challenging than exploiting leakage during the computation of $c\mathbf{s}_1$: $c$ varies for each signing, while the unpacked secret key remains constant, so post-processing is required in the unpacking case to recover the full secret key. We present two variants of post-processing. In the first one, half of the coefficients of the secret $\mathbf{s}_1$ and the error $\mathbf{s}_2$ are recovered by profiled deep-learning-assisted power analysis, and the rest are derived by solving linear equations based on $\mathbf{t} = \mathbf{A}\mathbf{s}_1 + \mathbf{s}_2$, where $\mathbf{A}$ and $\mathbf{t}$ are parts of the public key. This variant assumes knowledge of $\mathbf{t}_0$, the least significant bits of $\mathbf{t}$. The second variant uses lattice reduction to derive $\mathbf{s}_1$ without knowledge of $\mathbf{t}_0$, but it needs a larger portion of $\mathbf{s}_1$ to be recovered by power analysis. We evaluate both variants on an ARM Cortex-M4 implementation of Dilithium-2. The experiments show that the attack assuming knowledge of $\mathbf{t}_0$ can recover $\mathbf{s}_1$ from a single trace, captured from a device different from the profiling one, with non-negligible probability.
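A toy version of the first post-processing variant: given $\mathbf{A}$, $\mathbf{t}$, and side-channel knowledge of some coefficients, the remaining ones follow from linear algebra modulo $q$. The sketch below works over plain length-4 vectors in $\mathbb{Z}_q$ for clarity, whereas Dilithium actually works over a module of polynomials (with $k = l = 4$, $n = 256$ for Dilithium-2); the dimensions and the split of known/unknown coefficients are illustrative.

```python
# Toy version of the linear-equation post-processing: from t = A*s1 + s2
# over Z_q and knowledge of *some* coefficients of s1 and s2, the rest
# follow by Gaussian elimination mod q. Plain vectors, illustrative sizes.
import secrets

q = 8380417                      # the Dilithium modulus
k = l = 4

def rand_small():
    return secrets.randbelow(5) - 2          # coefficients in [-2, 2]

A  = [[secrets.randbelow(q) for _ in range(l)] for _ in range(k)]
s1 = [rand_small() for _ in range(l)]
s2 = [rand_small() for _ in range(k)]
t  = [(sum(A[i][j] * s1[j] for j in range(l)) + s2[i]) % q for i in range(k)]

# Suppose power analysis revealed s1[0], s1[1] and s2[0], s2[1];
# the unknowns are x = (s1[2], s1[3], s2[2], s2[3]).
known_s1, known_s2 = s1[:2], s2[:2]

# Build M*x = b mod q from the four equations t[i] = <A[i], s1> + s2[i].
M, b = [], []
for i in range(k):
    row = [A[i][2], A[i][3], 1 if i == 2 else 0, 1 if i == 3 else 0]
    rhs = t[i] - sum(A[i][j] * known_s1[j] for j in range(2))
    rhs -= known_s2[i] if i < 2 else 0
    M.append([v % q for v in row]); b.append(rhs % q)

def solve_mod(M, b, q):
    """Gauss-Jordan elimination mod prime q (random A gives full rank
    with overwhelming probability, so a pivot is always found)."""
    n = len(M)
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % q)
        M[col], M[piv], b[col], b[piv] = M[piv], M[col], b[piv], b[col]
        inv = pow(M[col][col], -1, q)
        M[col] = [v * inv % q for v in M[col]]; b[col] = b[col] * inv % q
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[col][c]) % q for c in range(n)]
                b[r] = (b[r] - f * b[col]) % q
    return b

x = solve_mod(M, b, q)
recovered = [v if v <= q // 2 else v - q for v in x]   # lift to signed range
assert recovered == [s1[2], s1[3], s2[2], s2[3]]
print("recovered remaining secret coefficients:", recovered)
```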
Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures.
This work improves our understanding of the security of both problems. Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.
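To illustrate what a collision attack on a restricted syndrome decoding problem means, the toy below solves a small instance by meet-in-the-middle: enumerate partial syndromes of the left half of $e$, then collide them against the right half. The field, the restricted set $E$, and all sizes are illustrative; the attack in the paper is a far more refined collision search under realistic memory restrictions.

```python
# Toy meet-in-the-middle ("collision") attack on restricted syndrome
# decoding: find e in E^n with H e = s, where the entries of e are
# restricted to a small subgroup E of F_q^*. Illustrative sizes only.
import itertools, secrets

q, n = 13, 8
E = [1, 3, 9]                       # subgroup generated by 3 in F_13^*
m = n // 2                          # split positions into two halves

H = [[secrets.randbelow(q) for _ in range(n)] for _ in range(n - 4)]
e_true = [secrets.choice(E) for _ in range(n)]
s = tuple(sum(H[i][j] * e_true[j] for j in range(n)) % q
          for i in range(len(H)))

def partial_syndrome(cols, e_half):
    return tuple(sum(H[i][j] * v for j, v in zip(cols, e_half)) % q
                 for i in range(len(H)))

# Phase 1: table of all left-half partial syndromes.
table = {}
for e_left in itertools.product(E, repeat=m):
    table.setdefault(partial_syndrome(range(m), e_left), []).append(e_left)

# Phase 2: for each right half, look up the matching left halves.
for e_right in itertools.product(E, repeat=n - m):
    target = tuple((s[i] - v) % q for i, v in
                   enumerate(partial_syndrome(range(m, n), e_right)))
    for e_left in table.get(target, []):
        e = list(e_left) + list(e_right)
        assert all(sum(H[i][j] * e[j] for j in range(n)) % q == s[i]
                   for i in range(len(H)))
        print("solution found:", e)
```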
The exponent-inversion technique, salient in many cryptosystems, began without randomization in the random oracle model (SCIS '03, PKC '04), evolved into the Boneh-Boyen short signature scheme (JoC '08), and has exerted wide influence. A notable case is Gentry's identity-based encryption (IBE) scheme (EuroCrypt '06), which applies exponent inversion to a randomized base in its identity-based trapdoors. Under the non-static $q$-strong Diffie-Hellman assumption, Boneh-Boyen signatures are unforgeable against $q$-chosen-message attacks, while a variant $q$-type decisional assumption establishes the security of Gentry-IBE. Challenges remain in proving their security under weaker static assumptions.
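To fix ideas, here is the exponent-inversion shape of the two schemes, in their basic forms and in our own notation: the weak Boneh-Boyen signature under secret key $x$ and public key $y = g^x$, and Gentry's identity keys under master secret $\alpha$.

```latex
% Weak Boneh-Boyen: sign m under secret key x, public key y = g^x.
\sigma \;=\; g^{1/(x+m)},
\qquad
e\bigl(\sigma,\, y \cdot g^{m}\bigr) \;=\; e(g,g).

% Gentry-IBE: the key for identity ID applies exponent inversion to a
% *randomized* base h g^{-r}, under master secret alpha.
d_{\mathsf{ID}} \;=\; \Bigl(r,\; \bigl(h\,g^{-r}\bigr)^{1/(\alpha - \mathsf{ID})}\Bigr),
\qquad r \leftarrow \mathbb{Z}_p.
```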
Supported by the dual form/system framework (Crypto '09, AsiaCrypt '12), we propose dual-form exponent-inversion Boneh-Boyen signatures and Gentry-IBE, with security proven under the symmetric external Diffie-Hellman (SXDH) assumption. We then extend our signature scheme into P-signatures (TCC '08), resulting in the first anonymous credential scheme from the SXDH assumption and serving as a competitive alternative to the static-assumption construction of Abe et al. (JoC '16). Moreover, from our Gentry-IBE variant, we propose an accountable-authority IBE scheme, also from SXDH, surpassing the fully secure Sahai-Seyalioglu scheme (PKC '11) in efficiency and the generic Kiayias-Tang transform (ESORICS '15) in security. Collectively, we present a suite of results under static assumptions.
Raccoon is a lattice-based scheme submitted to the NIST 2022 call for additional post-quantum signatures. One of its main selling points is that its design is intrinsically easy to mask against side-channel attacks. So far, Raccoon's physical security guarantees were only stated in the abstract probing model. In this paper, we discuss how these probing security results translate into guarantees in more realistic leakage models. We also highlight that this translation differs from what is usually observed (e.g., in symmetric cryptography), due to the algebraic structure of Raccoon's operations. For this purpose, we perform an in-depth information theoretic evaluation of Raccoon's most innovative part, namely the AddRepNoise function which allows generating its arithmetic shares on-the-fly. Our results are twofold. First, we show that the resulting shares do not enforce a statistical security order (i.e., the need for the side-channel adversary to estimate higher-order moments of the leakage distribution), as usually expected when masking. Second, we observe that the first-order leakage on the (large) random coefficients manipulated by Raccoon cannot be efficiently turned into leakage on the (smaller) coefficients of its long-term secret. Concretely, our information theoretic evaluations for relevant leakage functions also suggest that Raccoon's masked implementations can ensure high security with fewer shares than suggested by a conservative analysis in the probing model.
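The sketch below conveys the idea of AddRepNoise as described above: over several repetitions, an independent small noise value is added to every arithmetic share, with a mask refresh in between, so the overall noise (a sum of many small uniforms) is never assembled in unmasked form. The modulus, the share/repetition counts, the noise bound, and the refresh gadget are all illustrative stand-ins, not Raccoon's specification.

```python
# Schematic of on-the-fly masked noise generation in the spirit of
# AddRepNoise: each of REP repetitions adds independent small noise to
# every arithmetic share, with a mask refresh in between. All parameters
# and the refresh gadget are illustrative, not Raccoon's specification.
import secrets

Q = 2**49 - 2**18 + 1       # placeholder modulus (illustrative only)
D, REP, U = 4, 2, 2**6      # shares, repetitions, small-noise bound

def refresh(shares):
    """Re-randomize an arithmetic sharing without changing its sum."""
    out = list(shares)
    for i in range(len(out)):
        r = secrets.randbelow(Q)
        out[i] = (out[i] + r) % Q
        out[(i + 1) % len(out)] = (out[(i + 1) % len(out)] - r) % Q
    return out

def add_rep_noise(d=D, rep=REP):
    shares = [0] * d                               # sharing of zero
    for _ in range(rep):
        for i in range(d):
            small = secrets.randbelow(2 * U) - U   # uniform in [-U, U)
            shares[i] = (shares[i] + small) % Q
        shares = refresh(shares)
    return shares

shares = add_rep_noise()
noise = sum(shares) % Q
centered = noise if noise <= Q // 2 else noise - Q
print("masked noise value:", centered)   # a sum of D*REP small uniforms
```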
For more than two decades, pairings have been a fundamental tool for designing elegant cryptosystems, ranging from digital signature schemes to more complex privacy-preserving constructions. However, the advancement of quantum computing threatens to undermine public-key cryptography: it is widely accepted that a future large-scale quantum computer would be capable of breaking any public-key cryptosystem used today, rendering current public-key cryptography obsolete and mandating the transition to quantum-safe solutions. This necessity is recognized by numerous government bodies around the world, including NIST, which initiated the first open competition for standardizing post-quantum (PQ) cryptographic schemes, focusing primarily on digital signatures and key encapsulation/public-key encryption schemes.

Despite the current efforts in standardizing PQ primitives, the landscape of complex, privacy-preserving cryptographic protocols, e.g., zkSNARKs/zkSTARKs, is at an early stage. Existing solutions suffer from various disadvantages in terms of efficiency and compactness and, in addition, need to undergo the required scrutiny to gain the necessary trust in the academic and industrial domains. It is therefore believed that the migration to purely quantum-safe cryptography will require an intermediate step in which classically secure protocols and quantum-safe solutions co-exist. This is reinforced by the report on the Commercial National Security Algorithm Suite version 2.0, which mandates the transition to quantum-safe cryptographic algorithms by 2033 and suggests incorporating ECC at the 192-bit security level in the meantime.

To this end, the present paper provides a comprehensive study of pairings at the 192-bit security level. We start with an exhaustive literature review to collect all recommended pairing constructions at this level, from which we extract the most promising candidates in terms of efficiency and security with respect to the advanced Special TNFS attacks. Our analysis focuses not only on the pairing computation itself, but also on additional operations that are relevant in pairing-based applications, such as hashing to pairing groups, cofactor clearing, and subgroup membership testing. We implement all functionalities of the most promising candidates within the RELIC cryptographic toolkit in order to identify the most efficient pairing implementation at the 192-bit security level, and we provide extensive experimental results.
In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability.
Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce. Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.
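Informally, and in our own paraphrase with normalization constants omitted, second-moment hardness of the discrete logarithm problem in a group $\mathbb{G} = \langle g \rangle$ of prime order $p$ asks the following.

```latex
% Second-moment hardness of DL in G = <g> of prime order p (informal
% paraphrase; constants omitted): for every algorithm A with running
% time T_A,
\Pr_{x \leftarrow \mathbb{Z}_p}\!\bigl[\,\mathcal{A}(g, g^{x}) = x\,\bigr]
\;\le\; \frac{\mathbb{E}\bigl[T_{\mathcal{A}}^{2}\bigr]}{p}.
```

This matches the roughly $T^2/p$ success probability of the best generic (birthday-type) algorithms, which is why such assumptions are plausible precisely in groups where no better-than-generic algorithms are known.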
The adversary model of white-box cryptography includes an extreme case where the adversary, sitting at the endpoint, has full access to a cryptographic scheme. Motivated by the fact that most existing white-box implementations focus on symmetric encryption, we present implementations for hash-based signatures such that security against white-box attackers (who have read-only access to data of size bounded by a space-hardness parameter $M$) depends on the availability of a white-box secure cipher, in addition to a general one-way function. We also introduce parameters and key-generation complexity results for white-box secure instantiations of the stateless hash-based signature scheme SPHINCS+, one of the NIST selections for quantum-resistant digital signature algorithms, and of its older version SPHINCS. We further present a hash-tree-based solution for one-time passwords secure in a white-box attacker context. We implement the proposed solutions and report our performance results.
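As background for the one-time-password contribution, the classical hash-chain construction (Lamport / S/KEY style) is sketched below: the server stores only the chain tip $H^N(\text{seed})$ and verifies each login with a single hash. This is the textbook construction only; the paper's solution is hash-tree-based and hardened against white-box attackers, which this sketch does not capture.

```python
# Classical hash-chain one-time passwords (Lamport / S/KEY style), shown
# as background only: the i-th login presents H^(N-i)(seed), and the
# server checks one hash and rolls its stored tip back along the chain.
import hashlib, secrets

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

N = 1000                                  # number of one-time passwords
seed = secrets.token_bytes(32)

# Client precomputes the chain; the server stores only the tip H^N(seed).
chain = [seed]
for _ in range(N):
    chain.append(H(chain[-1]))
server_state = chain[N]

def login(i: int) -> bool:
    global server_state
    otp = chain[N - i]                    # i-th one-time password
    if H(otp) != server_state:
        return False
    server_state = otp                    # accepted: move the tip back
    return True

assert all(login(i) for i in range(1, 11))
print("first 10 one-time passwords accepted")
```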
One of the central questions in cryptology is how efficient generic constructions of cryptographic primitives can be. Gennaro, Gertner, Katz, and Trevisan [SIAM J. Comput., 2005] studied lower bounds on the number of invocations of a (trapdoor) one-way permutation needed to construct cryptographic schemes, e.g., pseudorandom number generators, digital signatures, and public-key and symmetric-key encryption.
Recently, quantum machines have been explored to _construct_ cryptographic primitives other than quantum key distribution. This paper studies the efficiency of _quantum_ black-box constructions of cryptographic primitives when the communications are _classical_. Following Gennaro et al., we give lower bounds on the number of invocations of an underlying quantumly-computable quantum one-way permutation when the _quantum_ construction of a pseudorandom number generator or a symmetric-key encryption scheme is weakly black-box. Our results show that quantum black-box constructions of pseudorandom number generators and symmetric-key encryption do not reduce the number of invocations of the underlying quantumly-computable quantum one-way permutation.
Biscuit is a recent multivariate signature scheme based on the MPC-in-the-Head paradigm. It has been submitted to the NIST competition for additional signature schemes. Signatures are derived from a zero-knowledge proof of knowledge of the solution of a structured polynomial system. This extra structure enables efficient proofs and compact signatures. This short note demonstrates that it also makes these polynomial systems easier to solve than random ones. As a consequence, the original parameters of Biscuit failed to meet the required security levels and had to be upgraded.
In a multiparty signing protocol, also known as a threshold signature scheme, the private signing key is shared amongst a set of parties, and only a quorum of those parties can generate a signature. Research on multiparty signing has been growing in popularity recently due to its application to cryptocurrencies. Most work has focused on reducing the number of rounds to two; as a result, such protocols (a) are not fully simulatable in the sense of MPC real/ideal security definitions, and/or (b) are not secure under concurrent composition, and/or (c) utilize non-standard assumptions of different types in their proofs of security. In this paper, we describe a simple three-round multiparty protocol for Schnorr signatures that is secure for any number of corrupted parties, i.e., in the setting of a dishonest majority. The protocol is fully simulatable, secure under concurrent composition, and proven secure in the standard model or random-oracle model (depending on the instantiations of the commitment and zero-knowledge primitives). The protocol realizes an ideal Schnorr signing functionality with perfect security in the ideal commitment and zero-knowledge hybrid model (and thus the only assumptions needed are those required for realizing these functionalities).
In our presentation, we do not assume that all parties begin with the message to be signed, the identities of the participating parties and a unique common session identifier, since this is often not the case in practice. Rather, the parties achieve consensus on these parameters as the protocol progresses.
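The commit-reveal round structure of three-round Schnorr signing can be sketched as follows, with additive key shares and a hash-based commitment. This toy shows only why three rounds suffice for aggregating a standard Schnorr signature; it omits the zero-knowledge proofs, the in-protocol consensus on message/parties/session identifier, and the abort handling that the actual protocol includes. The group parameters are tiny and insecure.

```python
# Toy commit-reveal three-round Schnorr signing with additive key shares.
# Round structure only; the real protocol adds ZK proofs, consensus on
# the signing parameters, and full simulatability.
import hashlib, secrets

p, q = 2267, 103            # q | p - 1, toy Schnorr group
g = pow(2, (p - 1) // q, p)
N_PARTIES = 3

def Hc(*args):              # hash for commitments and the challenge
    data = "|".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Additive sharing of the signing key: x = sum(x_i) mod q.
x_shares = [secrets.randbelow(q) for _ in range(N_PARTIES)]
pk = pow(g, sum(x_shares) % q, p)
msg = "transaction payload"

# Round 1: each party commits to its nonce share R_i.
r_shares = [secrets.randbelow(q) for _ in range(N_PARTIES)]
R_shares = [pow(g, r, p) for r in r_shares]
salts = [secrets.token_hex(16) for _ in range(N_PARTIES)]
commits = [Hc("com", R, s) for R, s in zip(R_shares, salts)]

# Round 2: commitments are opened and checked, then R is aggregated.
assert all(Hc("com", R, s) == c for R, s, c in zip(R_shares, salts, commits))
R = 1
for Ri in R_shares:
    R = R * Ri % p
c = Hc("sig", R, pk, msg) % q

# Round 3: each party releases its response share; anyone aggregates.
s_shares = [(r + c * x) % q for r, x in zip(r_shares, x_shares)]
s = sum(s_shares) % q

assert pow(g, s, p) == R * pow(pk, c, p) % p
print("aggregated Schnorr signature verifies")
```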
This paper describes a generic methodology for obtaining unified, and then complete, formulas for a prime-order group abstraction homomorphic to a subgroup of an elliptic curve with even order. The method is applicable to any curve with even order, over finite fields of both even and odd characteristic; it is most efficient on curves with order equal to 2 modulo 4, dubbed "double-odd curves". In large-characteristic fields, we obtain doubling formulas with cost as low as 1M + 5S, and the resulting group allows building schemes, such as signatures, that outperform existing fast solutions, e.g., Ed25519. In binary fields, the obtained formulas are not only complete but also faster than the previously known incomplete formulas; we can sign and verify in as few as 18k and 27k cycles, respectively, on x86 CPUs.
The LowMC family of block ciphers was proposed by Albrecht et al. at Eurocrypt 2015, specifically targeting adoption in FHE and MPC applications due to its low multiplicative complexity. The construction uses a 3-bit quadratic S-box as the sole non-linear transformation in the algorithm. In contrast, both the linear layer and the round key generation are realized through multiplications by full-rank matrices over GF(2). The cipher is instantiable with a diverse set of default configurations, some of which have partial non-linear layers, i.e., in which the S-boxes are not applied over the entire internal state of the cipher.
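The 3-bit S-box in question is the quadratic map from the LowMC paper; the snippet below spells it out and checks that it is a permutation of $\{0,1\}^3$.

```python
# The LowMC 3-bit S-box: each output bit has algebraic degree 2 over
# GF(2), and the map is a permutation of {0,1}^3.
def lowmc_sbox(a: int, b: int, c: int) -> tuple:
    return (a ^ (b & c),
            a ^ b ^ (a & c),
            a ^ b ^ c ^ (a & b))

# Sanity check: the S-box is a bijection on the 8 possible 3-bit inputs.
images = {lowmc_sbox(a, b, c)
          for a in (0, 1) for b in (0, 1) for c in (0, 1)}
assert len(images) == 8
print("LowMC S-box is a permutation of {0,1}^3")
```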
The significance of cryptanalysing LowMC was elevated by its inclusion in the NIST PQC digital signature scheme PICNIC, in which a successful key recovery using a single plaintext/ciphertext pair is akin to retrieving the secret signing key. The current state-of-the-art attack in this setting is due to Dinur at Eurocrypt 2021, in which a novel way of enumerating the roots of a Boolean system of equations is morphed into a key-recovery procedure that undercuts ordinary exhaustive search in terms of time complexity for variants of the cipher with up to five rounds.
In this work, we demonstrate that this technique can efficiently be enriched with a specific linearization strategy that reduces the algebraic degree of the non-linear layer as put forward by Banik et al. at IACR ToSC 2020(4). This amalgamation yields new attacks on certain instances of LowMC up to seven rounds.
Predicate encryption (PE) is a type of public-key encryption that captures many useful primitives such as attribute-based encryption (ABE). Although much progress has been made to generically achieve security against chosen-plaintext attacks (CPA) efficiently, in practice, we also require security against chosen-ciphertext attacks (CCA). Because achieving CCA-security on a case-by-case basis is a complicated task, several generic conversion methods have been proposed, which typically target different subclasses of PE such as ciphertext-policy ABE. As is common, such conversion methods may sacrifice some efficiency. Notably, for ciphertext-policy ABE, all proposed generic transformations incur a significant decryption overhead. Furthermore, depending on the setting in which PE is used, we may also want to require that messages are signed. To do this, predicate signature schemes can be used. However, such schemes provide a strong notion of privacy for the signer, which may be stronger than necessary for some practical settings at the cost of efficiency.
In this work, we propose the notion of predicate extension, which transforms the predicate used in a PE scheme to include one additional attribute, in both the keys and the ciphertexts. Using predicate extension, we can generically obtain CCA-security and signatures from a CPA-secure PE scheme. For the CCA-security transform, we observe that predicate extension implies a two-step approach to achieving CCA-security. This insight broadens the applicability of existing transforms for specific subclasses of PE to cover all PE. We also propose a new transform that incurs slightly less overhead than existing transforms. Furthermore, we show that predicate extension allows us to create a new type of signatures, which we call PE-based signatures. PE-based signatures are weaker than typical predicate signatures in the sense that they do not provide privacy for the signer. Nevertheless, such signatures may be more suitable for some practical settings owing to their efficiency or reduced interactivity. Lastly, to show that predicate extensions may facilitate a more efficient way to achieve CCA-security generically than existing methods, we propose a novel predicate-extension transformation for a large class of pairing-based PE, covered by the pair and predicate encodings frameworks. In particular, this yields the most efficient generic CCA-conversion for ciphertext-policy ABE.
Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations. It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations. In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme. Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.
In this work, we assess the real-world practicality of CSIDH, an isogeny-based non-interactive key exchange (NIKE). We provide the first thorough assessment of the practicality of CSIDH at higher parameter sizes, for conservative estimates of quantum security and with protection against physical attacks.
This requires a three-fold analysis of CSIDH. First, we describe two approaches to efficient high-security CSIDH implementations, based on SQALE and CTIDH. Second, we optimize these high-security implementations, at a high level by improving several subroutines and at a low level by improving the finite field arithmetic. Third, we benchmark the performance of high-security CSIDH. As a stand-alone primitive, our implementations outperform previous results by a factor of up to 2.53×.
As a real-world use case involving network protocols, we use CSIDH in TLS variants that allow early authentication through a NIKE. Although our instantiations of CSIDH have smaller communication requirements than post-quantum KEM and signature schemes, even our highly-optimized implementations result in prohibitively large handshake latency (tens of seconds), showing that CSIDH is practical only in niche cases.