Communications in Cryptology (IACR CiC)


2 results sorted by publication date
Carsten Baum, Jens Berlips, Walther Chen, Ivan B. Damgård, Kevin M. Esvelt, Leonard Foner, Dana Gretton, Martin Kysel, Ronald L. Rivest, Lawrence Roy, Francesca Sage-Ling, Adi Shamir, Vinod Vaikuntanathan, Lynn Van Hauwe, Theia Vogel, Benjamin Weinstein-Raun, Daniel Wichs, Stephen Wooster, Andrew C. Yao, Yu Yu
Published 2024-10-07

Oblivious Pseudorandom Functions (OPRFs) allow a client to evaluate a pseudorandom function (PRF) on her secret input under a key held by a server. In the process, the client learns only the PRF output and not the key, while the server learns neither the client's input nor her output. Arguably the most popular OPRF is due to Naor, Pinkas and Reingold (Eurocrypt 2009). It is based on an oblivious exponentiation performed by the server and achieves passive security under the Decisional Diffie-Hellman (DDH) assumption. In this work, we strengthen the security guarantees of the NPR OPRF by protecting it against active attacks by the server. Our main result is a new batch OPRF protocol that is secure against maliciously corrupted servers yet is essentially as efficient as the semi-honest solution: the computation (and communication) overhead is a multiplicative factor of $o(1)$ as the batch size increases. The obvious solution using zero-knowledge proofs would have at best a constant-factor overhead, which can be too expensive for certain deployments. Our protocol relies on a novel version of the DDH problem, which we call the Oblivious Exponentiation Problem (OEP), and we give evidence for its hardness in the Generic Group Model. We also present a variant of our maliciously secure protocol that does not rely on the OEP but nevertheless incurs only $o(1)$ overhead over the known semi-honest protocol. Moreover, we show that our techniques can also be used to efficiently protect threshold blind BLS signing and threshold ElGamal decryption against malicious attackers. We have implemented our solution and report on its performance.
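
The blinded-exponentiation core on which such DDH-based OPRFs are built can be sketched in a few lines. The following is a minimal, hedged illustration of a single semi-honest evaluation, assuming a toy Schnorr group and a simplified hash-to-group step (the parameters and the helper `hash_to_group` are illustrative assumptions); it is not the maliciously secure batch protocol described above.

```python
# Minimal sketch of one semi-honest blinded-exponentiation OPRF evaluation.
# Toy parameters and a simplified hash-to-group; NOT secure and NOT the
# maliciously secure batch protocol described in the abstract.
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4  # far too small for real use

def hash_to_group(x: bytes) -> int:
    # Simplified stand-in for a proper hash-to-group map.
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q
    return pow(g, e, p)

# Server holds the PRF key k; PRF(k, x) = H(x)^k.
k = secrets.randbelow(q - 1) + 1

# Client: blind H(x) with a random exponent r.
x = b"client input"
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(x), r, p)      # sent to the server

# Server: oblivious exponentiation on the blinded element.
response = pow(blinded, k, p)              # sent back to the client

# Client: unblind with r^{-1} mod q (Python 3.8+) to recover H(x)^k.
output = pow(response, pow(r, -1, q), p)
assert output == pow(hash_to_group(x), k, p)
print("OPRF output:", output)
```

Nothing in this sketch detects an inconsistent response from an actively corrupted server; the protocol summarized above adds that protection across a batch at only $o(1)$ multiplicative overhead.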

Rustem Takhanov
Published 2024-10-07

Almost pairwise independence (API) is a quantitative property of a class of functions that is desirable in many cryptographic applications. It is satisfied by Learning With Errors (LWE) mappings and by special Substitution-Permutation Networks (SPNs). API block ciphers are known to be resilient to differential and linear cryptanalysis. Recently, the security of protocols against neural-network-based attacks has become a major trend in cryptographic research, so it is relevant to study the hardness of learning a target function from an API class by gradient-based methods.
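
As a hedged illustration of what (almost) pairwise independence means for an LWE-style class, the toy script below samples random secrets for the noiseless linear mapping $f_s(a) = \langle a, s \rangle \bmod q$ and checks that its values at two fixed inputs are close to jointly uniform over $\mathbb{Z}_q^2$; the parameters and the omission of noise are simplifying assumptions, not the setting analyzed in the paper.

```python
# Toy check of pairwise independence for the noiseless linear class
# f_s(a) = <a, s> mod q over random secrets s (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
n, q, trials = 16, 17, 200_000

# Two fixed evaluation points (random, hence almost surely linearly
# independent mod q).
a1 = rng.integers(0, q, size=n)
a2 = rng.integers(0, q, size=n)

S = rng.integers(0, q, size=(trials, n))   # random secrets, one per row
v1 = (S @ a1) % q
v2 = (S @ a2) % q

counts = np.zeros((q, q), dtype=np.int64)
np.add.at(counts, (v1, v2), 1)

empirical = counts / trials
tv_distance = 0.5 * np.abs(empirical - 1.0 / q**2).sum()
print(f"empirical TV distance from uniform over Z_q^2: {tv_distance:.4f}")
# For linearly independent inputs this class is pairwise independent, so the
# distance shrinks toward the sampling-noise level as `trials` grows.
```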

We propose a theoretical analysis based on the study of the variance of the gradient of a general machine learning objective with respect to a random choice of target function from a class. We prove an upper bound and verify that this variance is indeed extremely small for API classes of functions. This implies the resilience of actual LWE-based primitives against deep learning attacks and, to some extent, the security of SPNs. The hardness of learning reveals itself as the barren plateau phenomenon during training, or in other words, as a low information content of the gradient about the target function. We emphasize, however, that our bounds hold for a regular parameterization of a neural network; the gradient may become informative if the class is only mildly pairwise independent and the parameterization is non-regular. We demonstrate our theory in experiments on the learnability of LWE mappings.
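
To make the variance-of-the-gradient viewpoint concrete, the sketch below probes the gradient of a squared loss at a fixed random-feature model for many random draws of the target function, comparing LWE-style targets with a class the model can represent exactly; the model, parameters, and comparison class are assumptions chosen for illustration and are not the paper's bound or experimental setup.

```python
# Illustrative estimate of the gradient's variance over a random choice of
# target function: LWE-style targets vs. targets the model can represent.
# Toy model and parameters; not the paper's bound or experimental setup.
import numpy as np

rng = np.random.default_rng(1)
n, m, q, d = 32, 2048, 97, 64   # LWE dimension, samples, modulus, features

# Fixed inputs, a fixed random-feature map, and a fixed probe point w0.
A = rng.integers(0, q, size=(m, n)).astype(float)
W = rng.normal(size=(n, d)) / np.sqrt(n)
b = rng.uniform(0, 2 * np.pi, size=d)
Phi = np.cos(A @ W + b)
w0 = rng.normal(scale=0.1, size=d)

def grad_at_w0(y):
    """Gradient of the mean-squared loss of the linear model Phi @ w at w0."""
    return (2.0 / m) * Phi.T @ (Phi @ w0 - y)

def lwe_targets():
    s = rng.integers(0, q, size=n)
    e = rng.integers(-2, 3, size=m)          # small noise
    return ((A @ s + e) % q) / q             # labels scaled to [0, 1)

def representable_targets():
    # Targets the model's features detect exactly (its own random features,
    # rescaled to roughly the same range as the LWE labels).
    j = int(rng.integers(0, d))
    return 0.5 + 0.5 * Phi[:, j]

def mean_grad_variance(target_fn, trials=300):
    grads = np.stack([grad_at_w0(target_fn()) for _ in range(trials)])
    return grads.var(axis=0).mean()

print("gradient variance, LWE-style targets   :", mean_grad_variance(lwe_targets))
print("gradient variance, representable targets:", mean_grad_variance(representable_targets))
# A markedly smaller variance for the LWE-style class is the barren-plateau
# signature: the gradient concentrates around a target-independent value.
```

A markedly smaller printed variance for the LWE-style targets is the concentration effect the abstract refers to: over an almost pairwise independent class, the gradient carries little information about which target was drawn.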