Bit Security as Cost to Demonstrate Advantage

Abstract. We revisit the question of what the definition of bit security should be, previously answered by Micciancio-Walter (Eurocrypt 2018) and Watanabe-Yasunaga (Asiacrypt 2021). Our new definition is simple, but (i) captures both search and decision primitives in a single framework like Micciancio-Walter, and (ii) has a firm operational meaning like Watanabe-Yasunaga. It also matches intuitive expectations and can be well-formulated in terms of the Hellinger distance. To support and justify the new definition, we prove several classic security reductions with respect to our bit security. We also provide pathological examples that indicate the ill-definedness of bit security as defined in Micciancio-Walter and Watanabe-Yasunaga.


Introduction
Bit security (a.k.a. security level) is a central concept in cryptography, which bridges the asymptotic and concrete regimes. Bit security summarizes the complex security properties of a concrete instantiation of a cryptographic scheme in a single number, serving as a simple measure of the level of security. Whereas the asymptotic approach does not provide any guidance on concrete parameter selection, bit security helps us choose an appropriate set of parameters to guarantee a certain level of security when deploying cryptographic schemes. When we say a scheme has λ-bit security, we roughly expect that it costs more than 2^λ resources to break the scheme or that the scheme is as secure as its idealized version with a λ-bit secret key. However, despite its importance, we still do not have a well-accepted formal definition of bit security.

Conventional Definition
The most common definition of bit security is min log(T/ε). Here, the minimum is taken over all possible adversaries A, T is the cost (e.g., runtime) of A, and ε is the advantage of A. The definition captures trade-offs between cost and advantage for an idealized primitive with a λ-bit secret key. Two trivial extreme attacks are (i) brute-force search with T = 2^λ and ε = 1 and (ii) guessing at random with T = 1 and ε = 1/2^λ.
Another intuition behind the conventional definition is the following. When A (with cost T and advantage ε) is given, we can run A for N ≈ 1/ε times to obtain an amplified adversary with cost N·T ≈ T/ε and advantage 1 − (1 − ε)^N ≈ N·ε ≈ 1. That is, when such an adversary A is given, we can break the scheme with a cost of roughly T/ε. However, the above intuitions work only for (certain) search primitives (e.g., one-way functions). In particular, brute-force search or probability amplification is not available for decision primitives (e.g., pseudorandom generators). Moreover, we quantify the advantage differently for decision and search primitives, namely ε = |P − 1/2| for decision primitives and ε = P for search primitives, where P is the success probability. Thus, using the same definition of bit security min log(T/ε) for both types of primitives already sounds problematic. Nevertheless, this is the widely used definition in the literature. Rather unsurprisingly, this conventional definition has led to several paradoxical situations, such as the following.
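The amplification intuition above can be checked numerically. The following sketch (with illustrative parameters, not taken from the paper) confirms that repeating an adversary N ≈ 1/ε times yields total cost T/ε and near-constant success probability:

```python
# Numeric sketch of the amplification intuition behind min log(T/eps):
# running an adversary of cost T and advantage eps for N = 1/eps rounds
# yields total cost about T/eps and near-constant success probability.
import math

def amplify(T, eps):
    """Total cost and success probability after N = ceil(1/eps) repetitions."""
    N = math.ceil(1 / eps)
    total_cost = N * T                    # about T / eps
    success = 1 - (1 - eps) ** N          # about 1 - 1/e for small eps
    return total_cost, success

cost, p = amplify(T=2**10, eps=2**-20)
assert math.log2(cost) == 30.0            # log(T/eps) = 10 + 20 bits
assert 0.63 < p < 0.64                    # close to 1 - 1/e
```

Note that the amplified success probability converges to 1 − 1/e ≈ 0.63 rather than exactly 1; the heuristic N·ε ≈ 1 in the text is a union-bound approximation.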

Peculiar Case of Linear Test against PRG
It is folklore, dating back at least to [AGHP90], that there is a non-uniform attack (linear tests) against pseudorandom generators (PRGs) with λ-bit seeds, which achieves advantage Ω(2^(−λ/2)) in time O(λ). Thus, according to the conventional definition of bit security, a PRG with a λ-bit seed can guarantee not much more than λ/2-bit security. This contradicts our expectation that λ-bit security of a PRG reflects the security of the ideal PRG with a λ-bit seed.

Peculiar Case of Distribution Approximation
When constructing cryptographic schemes (especially in lattice-based cryptography [Reg05, Pei16]), we often make use of certain distributions (e.g., discrete Gaussians). That is, sampling from a particular distribution is often an essential part of executing cryptographic schemes, and their security proofs assume an ideal situation where we can sample the distributions exactly. In actual implementations, however, we can only sample from an approximate distribution due to limited resources.
The question is how these approximations affect the security of schemes. In terms of the statistical distance (a.k.a. total variation distance), the standard measure in cryptography, it is an easy fact that λ-bit precision is sufficient to maintain λ-bit security. While this already sounds quite natural, ambitious researchers have proved that λ/2-bit closeness is enough with respect to other nice divergences (e.g., Rényi [PDG14, BLL+15], max-log [MW17]), yielding much better parameters for practical use.
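The gap between λ-bit and λ/2-bit precision can be illustrated numerically. The following sketch (illustrative numbers, not from the paper) shows that rounding a sampling probability p to k bits gives a total variation error of about 2^(−k), while the squared Hellinger distance is roughly quadratically smaller when p is bounded away from 0 and 1, which is why half the precision can suffice under Hellinger-type measures:

```python
# Illustrative sketch: k-bit rounding of a probability p gives
# d_TV(p, p~) <= 2^-(k+1), while d_H^2(p, p~) is roughly quadratically
# smaller -- the mechanism behind lambda/2-bit precision sufficing.
import math

def d_h2(p, q):
    """Squared Hellinger distance between Bernoulli(p) and Bernoulli(q)."""
    return 1 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

k = 16
p = 0.3
p_round = round(p * 2 ** k) / 2 ** k      # p truncated to k bits of precision

tv = abs(p - p_round)                     # TV distance between the Bernoullis
h2 = d_h2(p, p_round)

assert tv <= 2 ** -(k + 1)
assert h2 <= tv ** 2                      # quadratically smaller here
```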
However, all the mentioned results apply only to search primitives, and a corresponding result for decision primitives has eluded researchers. The paradox is that it is generally believed that the security of encryption schemes (a decision primitive) is more robust against approximation errors than that of signature schemes (a search primitive).

Ad Hoc Definition
Observing the peculiar cases, it seems right to square the advantage in the definition of bit security for decision primitives, i.e., min log(T/ε^2). We remark that this ad hoc definition was considered in classic works [GL89, HILL99] and is often used in the community without a satisfactory understanding. That is, the questions remain: What is the source of this quadratic gap between decision and search primitives? What is the right definition of bit security?
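A quick sanity check (with illustrative parameters, not from the paper) shows how the ad hoc definition repairs the linear-test paradox: the linear test has cost T = O(λ) and advantage ε = 2^(−λ/2), so min log(T/ε) gives about λ/2 bits while min log(T/ε^2) restores about λ bits:

```python
# Conventional vs. ad hoc bit security of the linear test against a PRG
# with lambda-bit seed: cost T = lambda, advantage eps = 2^(-lambda/2).
import math

lam = 128
T, eps = lam, 2 ** (-lam / 2)

conventional = math.log2(T / eps)         # log(T/eps)   = lambda/2 + log(lambda)
ad_hoc = math.log2(T / eps ** 2)          # log(T/eps^2) = lambda   + log(lambda)

assert abs(conventional - (lam / 2 + 7)) < 1e-6   # log2(128) = 7
assert abs(ad_hoc - (lam + 7)) < 1e-6
```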

Previous Approaches
Micciancio-Walter [MW18] explicitly pointed out these situations for the first time and provided a general formal definition of bit security; their definition resolves the above peculiar cases and captures both search and decision primitives in a single framework. The approach of Micciancio-Walter was to consider a general cryptographic game in which an adversary has to guess an n-bit string and to define a general advantage of an adversary that captures both search (for large n) and decision (for n = 1) games, building on concepts from information theory. However, in the definition, they introduce a hypothetical random variable that lacks intuitive meaning, without a satisfactory explanation. (Refer to Section 5.3 for details.) Watanabe-Yasunaga [WY21] pointed out this weakness of Micciancio-Walter as a lack of operational meaning and provided another definition as the cost of winning certain games with high probability, which also resolves the peculiar cases mentioned above and has an operational meaning by nature. However, they defined the games qualitatively differently for search and decision primitives, losing the generality that Micciancio-Walter sought. (Refer to Section 5.4 for details.)

Our Contributions
Our main result is a new definition of bit security (Def. 11). Our definition is so simple that we can put it in plain language: We define bit security as the cost to demonstrate the advantage of adversaries. That is, we measure the total work done by an adversary to allow an observer to distinguish it from a dummy adversary by observing wins and losses over repeated games. Our simple definition (i) captures both search and decision primitives in a single framework like Micciancio-Walter [MW18] and (ii) has a firm operational meaning like Watanabe-Yasunaga [WY21]. Indeed, our definition also resolves the peculiar cases introduced above, matching the intuitive expectations (Remark 10 and Thm. 2). Moreover, our bit security can be well-formulated in terms of the Hellinger distance, supporting the practical usability of our definition (Thm. 1 and Def. 13).
Besides, to support and justify our new definition of bit security, we:

• prove several security reductions with respect to our definition. Our proofs are arguably simpler and more intuitive than the previous proofs of [MW18, WY21]. Namely, we prove:

  – a theorem on distribution approximation in cryptographic schemes. Our theorem states that, with respect to the Hellinger distance, λ/2-bit precision is sufficient to maintain λ-bit security for any security game. This resolves the peculiar case introduced above. Our proof is much shorter than the previous proofs, leveraging nice properties of the Hellinger distance. (Section 4.1)

  – the hybrid argument. Our proof is essentially the same as the conventional proof, whereas [MW18, WY21] had to develop new techniques. This is due to the structural similarity between the conventional bit security and ours. In particular, like the conventional definition, our definition of bit security depends only on the success probability of adversaries and is independent of other information. (Section 4.2)

  – natural decision-to-search reductions. This includes the reductions from PRG to OWF, from DDH to CDH, and from IND-CPA to OW-CPA. Our reductions are tight, like the conventional proofs. (Section 4.3)

• point out several weaknesses of the previous definitions by [MW18, WY21].

  – Their definitions only cover security games with specific structures. In particular, their definitions do not even cover the EUF-CMA game for digital signature schemes. In contrast, our definition of security games is as inclusive as possible: Our framework captures all security games covered by the definition of falsifiable assumptions of Gentry-Wichs [GW11]. (Section 5.1)

  – According to their definitions, search primitives always have finitely many bits of security. This circumstance is arguably counter-intuitive considering the presence of unconditionally secure search primitives, e.g., information-theoretic MACs. In contrast, according to our definition, cryptographic schemes with information-theoretic security always satisfy ∞-bit security. (Section 5.5)

  – Under their definitions, there are pathological examples where two security games, G and G′, are essentially the same in common sense, but G is defined as a decision game and G′ is defined as a search game. Moreover, the bit security of G and G′ differs under their definitions. (Sections 5.2 and 5.5)

Overview
The main body of this paper consists of three parts: Definitions (Section 3), Theorems (Section 4), and Comparisons (Section 5). The essence of this paper is Section 3, where our bit security is defined. The other sections support and justify the new definition.

In Section 3, we first formally define security games (Def. 3) to clarify the scope of our new definition of bit security. Then, bit security is formally defined as the cost to demonstrate advantage (Def. 11), leveraging a meta-game which we call the advantage observation game (Def. 10). We also show that our bit security can be tightly estimated in terms of the Hellinger distance (Thm. 1). We make several remarks on this new definition, including how it relates to the conventional bit security (Remark 9) and how it embraces the quadratic gap between decision and search primitives (Remark 10).

In Section 4, we prove several security reductions concerning our bit security: a distribution approximation theorem (Section 4.1), the hybrid argument (Section 4.2), and decision-to-search reductions (Section 4.3). In Section 5, we review the previous definitions of bit security proposed by Micciancio-Walter [MW18] and Watanabe-Yasunaga [WY21]. Then, we compare their definitions with our new definition. This section also points out several weaknesses of the previous definitions.

Notations and Terminologies
We denote the logarithm to the base 2 by log(·) and the one to the base e by ln(·). We use standard arithmetic over the extended non-negative real numbers [0, ∞], i.e., a/0 = ∞ for a ∈ (0, ∞). The minimum of the empty set is defined to be ∞. We denote the total variation distance and the Hellinger distance by d_TV(·, ·) and d_H(·, ·), respectively (Section 2.2). For notational convenience, we often identify the Bernoulli distribution B(p) with the probability p itself. For example, we use d_TV(p, q) and d_H(p, q) in place of d_TV(B(p), B(q)) and d_H(B(p), B(q)). We do not strictly distinguish the terms hardness and security and often use them interchangeably. For an algorithm A, we denote its cost by T_A. We denote the complement of a relation R by R̄.

Statistical Distances
In this section, we recall the definitions and a few properties of two statistical distances: the total variation distance and the Hellinger distance. For proofs and detailed discussions, please refer to, e.g., [PW].

Definition 1 (Total Variation Distance). For two discrete distributions D_0 and D_1 on the same domain X, we denote and define their total variation distance (a.k.a. the statistical distance) as follows.

  d_TV(D_0, D_1) := (1/2) · Σ_{x∈X} |D_0(x) − D_1(x)|

Definition 2 (Hellinger Distance). For two discrete distributions D_0 and D_1 on the same domain X, we denote and define their Hellinger distance as follows.

  d_H(D_0, D_1) := ( (1/2) · Σ_{x∈X} ( √(D_0(x)) − √(D_1(x)) )² )^(1/2)

Equivalently, d_H²(D_0, D_1) = 1 − Σ_{x∈X} √(D_0(x) · D_1(x)).

Proposition 1 (Properties of Hellinger Distance). Let D_0, D_1, D_2 be discrete distributions on the same domain X.

(a) Triangle Inequality: The Hellinger distance is a metric. In particular, the following inequality holds.

  d_H(D_0, D_2) ≤ d_H(D_0, D_1) + d_H(D_1, D_2)

(b) Data-Processing Inequality: The squared Hellinger distance is an f-divergence. In particular, the following inequality holds for any function g : X → Y.

  d_H(g(D_0), g(D_1)) ≤ d_H(D_0, D_1)

(c) Strong Decomposition Property on Product Distributions: For any positive integer N, the following equality holds.

  1 − d_H²(D_0^⊗N, D_1^⊗N) = ( 1 − d_H²(D_0, D_1) )^N

(d) Relation with Total Variation Distance: The following inequalities hold.

  d_H²(D_0, D_1) ≤ d_TV(D_0, D_1) ≤ √2 · d_H(D_0, D_1)
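The product decomposition (c) and the comparison with total variation (d) can be verified numerically for Bernoulli distributions, identifying B(p) with p as in this paper's notational convention. A hedged sketch (illustrative values, not from the paper):

```python
# Numeric check of Prop. 1 (c) and (d) for Bernoulli distributions B(p), B(q).
import math
from itertools import product

def d_tv(p, q):
    """Total variation distance between B(p) and B(q)."""
    return abs(p - q)

def d_h2(p, q):
    """Squared Hellinger distance between B(p) and B(q)."""
    return 1 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

def fidelity_product(p, q, N):
    """1 - d_H^2 of the N-fold product distributions, summed explicitly
    over all 2^N outcomes in {0,1}^N."""
    total = 0.0
    for bits in product([0, 1], repeat=N):
        pr0 = math.prod(p if b else 1 - p for b in bits)
        pr1 = math.prod(q if b else 1 - q for b in bits)
        total += math.sqrt(pr0 * pr1)
    return total

p, q, N = 0.5, 0.6, 5

# (c) strong decomposition: 1 - d_H^2(D0^N, D1^N) = (1 - d_H^2(D0, D1))^N
assert abs(fidelity_product(p, q, N) - (1 - d_h2(p, q)) ** N) < 1e-12

# (d) d_H^2 <= d_TV <= sqrt(2) * d_H
assert d_h2(p, q) <= d_tv(p, q) <= math.sqrt(2) * math.sqrt(d_h2(p, q))
```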

General Security Game
We first formally define the security game to clarify the scope of our new definition of bit security. Our framework is abstract enough to capture every game-based security definition in the cryptography literature. The definition has already implicitly appeared in the definition of falsifiable assumptions of Gentry-Wichs [GW11] (see also [Nao03]).

Definition 3 (Security Game). A security game G = (X, L) consists of an interactive challenger X and a decidable winning condition L ⊂ {0, 1}*. The game is played by an adversary A interacting with X. During the game, if view_X ∈ L, then X outputs a special symbol win and we say A wins G. Here, view_X ∈ {0, 1}* is the view of the game from the perspective of X, i.e., the transcript and the randomness used by X.

Remark 1 (Comparison). Our definition of security games is as inclusive as possible. We do not put any restrictions on the structure of the games. Our framework captures all security games covered by the definition of falsifiable assumptions of Gentry-Wichs [GW11] and thus the complexity assumptions of Goldwasser-Kalai [GK16]. On the other hand, the previous frameworks of [MW18] and [WY21] only capture certain types of games, as they impose specific structures on the games. In particular, they do not include the very basic EUF-CMA game for signature schemes (Example 5). For a detailed discussion, refer to Section 5.1.

Examples
We give examples of security games within our framework. We also define specific classes of games for later discussions. Readers may skip the examples and come back when needed.
Definition 4 (Decision Game). A decision game is a security game G = (X, L) which has the following structure on X and L:

1. (Challenge) At the beginning of the game, the challenger X chooses a uniform random challenge bit b ∈ {0, 1}.

2. (Query) The adversary A is allowed to send certain queries to X. Whenever X receives a legitimate query, it sends a corresponding response to A.

3. (Answer) The game ends when A sends its answer b′ ∈ {0, 1} to X.

4. (Winning Condition) The adversary A wins the game if b′ = b.
Example 1 (Pseudorandomness). For a pseudorandom generator (PRG) f : {0,1}^ℓ → {0,1}^m, we define its pseudorandomness by a decision game where the only allowed query for an adversary is to send a special symbol sample to the challenger. Whenever the challenger receives sample, it responds with y = f(x) for a uniform random x ∈ {0,1}^ℓ when b = 0 and with a uniform random y ∈ {0,1}^m when b = 1.

Example 2 (IND-CPA). For a public-key encryption scheme (Gen, Enc, Dec), we define its IND-CPA security by the following decision game: Before the first query, the challenger runs Gen and sends the public key to the adversary. The only allowed query for an adversary is to send a special symbol LR to the challenger together with messages m_0 and m_1. Whenever the challenger receives (LR, m_0, m_1), it responds with Enc(m_b).

Example 3 (DDH). For a cyclic group G and its generator g, we define the decisional Diffie-Hellman (DDH) game on (G, g) as the following decision game: The only allowed query for an adversary is to send a special symbol sample to the challenger. Whenever the challenger receives sample, it responds with (g^x, g^y, g^z), where z = xy with uniform random x, y when b = 0, and x, y, z are all uniform random when b = 1.

Definition 5 (Distribution Distinguishing Game). The distribution distinguishing game for distributions D_0 and D_1 is a decision game where the only allowed query for an adversary A is to send a special symbol sample to X. Whenever X receives sample, it draws a sample from the distribution D_b and sends the result to A.

The following examples belong to a class of so-called search games, although we do not precisely define search games in this paper. (See Remark 3.)

Example 4 (One-wayness). For a one-way function (OWF) f : {0,1}^ℓ → {0,1}^m, we define its one-wayness by the following security game: At the beginning of the game, the challenger chooses a uniform random x ∈ {0,1}^ℓ and sends y = f(x) to the adversary. The adversary sends an answer x′ ∈ {0,1}^ℓ and wins the game if y = f(x′).

Example 5 (EUF-CMA). For a digital signature scheme (Gen, Sign, Verify), we define its EUF-CMA security by the following game: At the beginning of the game, the challenger runs Gen and sends the public key pk to the adversary. The only allowed query for an adversary is to send a special symbol Sign to the challenger together with a message m. When the challenger receives (Sign, m), it responds with a signature of m. The adversary sends a pair (m′, σ′) and wins the game if Verify_pk(m′, σ′) is true and m′ was never queried by the adversary.

Example 6 (CDH). For a cyclic group G and its generator g, we define the computational Diffie-Hellman (CDH) game on (G, g) as follows: At the beginning of the game, the challenger sends (g^x, g^y) to the adversary, where x, y are uniform random. The adversary sends an answer g^z ∈ G and wins the game if g^z = g^(xy).

Example 7 (OW-CPA). For a public-key encryption scheme (Gen, Enc, Dec), we define its OW-CPA security by the following game: At the beginning of the game, the challenger runs Gen and chooses a message m at random. Then, it sends Enc(m) and the public key to the adversary. The adversary sends an answer m′ and wins the game if m′ = m.

Baseline Probability
Next, we define the baseline probability of a game, which plays an important role in refining the definition of (conventional) advantage and in describing our new definition of bit security. The baseline probability of a game is the maximal success probability of dummy adversaries, who do not learn anything while playing the game with the challenger.
Definition 6 (Success Probability). Let G be a security game. We denote and define the success probability of an adversary A against G as the following.

  P^G_A := Pr[A wins G]
Definition 7 (Dummy Adversary). An adversary A against a security game G is called dummy if the messages sent from A to the challenger do not depend on any previous messages that A has received. That is, the outputs of a dummy adversary are independent of the randomness of the challenger.
Remark 2. We note that the concept of dummy adversaries is not intended to capture every trivial attack. For an extreme example, consider the following game: At the beginning of the game, the challenger chooses a uniform random s ∈ {0,1}^128 and sends s itself to the adversary. The adversary sends an answer s′ ∈ {0,1}^128 and wins the game if s = s′. Then, an adversary has a trivial strategy of just forwarding the received message. However, such a trivial adversary is not dummy according to our definition. Looking ahead, a security game will have few bits of security if there are (trivial) attacks performing much better than dummy adversaries. (See Remark 5.)

Definition 8 (Baseline Probability). We denote and define the baseline probability of a security game G as the following.

  P^G_∅ := sup { P^G_A : A is a dummy adversary against G }
We call a dummy adversary A against G a baseline adversary if P^G_A = P^G_∅. We also define the baseline probability for cases where the available resources are limited.

Definition 9 (Bounded Baseline Probability). An adversary A is called T-bounded if T_A ≤ T holds. We denote and define the T-bounded baseline probability of G as the following.

  P^G_∅[T] := sup { P^G_A : A is a T-bounded dummy adversary against G }

We call a T-bounded dummy adversary A against G a T-baseline adversary if P^G_A = P^G_∅[T].

Examples
To describe how our definitions apply to security games, we compute some baseline probabilities. They will be referred to in later discussions.
Example 8 (Decision Game). In a decision game (Def. 4), an adversary wins the game if it correctly guesses the challenge bit b ∈ {0, 1} chosen by the challenger during the game. However, by definition, dummy adversaries do not learn anything during the game. Thus, a dummy adversary cannot do better than a random guess on {0, 1}, and the baseline probability P^G_∅[T] of a decision game G is 1/2 for any T.

Example 9 (One-wayness). In the one-wayness game for f : {0,1}^ℓ → {0,1}^m (Example 4), an adversary wins the game if it correctly outputs x′ such that f(x′) = y, where y is chosen by the challenger during the game. Thus, a dummy adversary cannot do much better than a random guess on {0,1}^ℓ under reasonable assumptions. For example, if f maps at most k inputs to an output (i.e., is at most k-to-one), the baseline probability of the game is not greater than k/2^ℓ.

Example 10 (EUF-CMA). In the EUF-CMA game for a signature scheme (Example 5) with signature space S, an adversary wins the game only if it correctly outputs a pair of a message and a signature (m′, σ′) such that Verify_pk(m′, σ′) is true, where pk is chosen by the challenger during the game. Thus, a dummy adversary cannot do much better than a random guess on S for σ′ under reasonable assumptions. For example, if there are at most k valid signatures for a pair of a public key and a message, the baseline probability of the EUF-CMA game is not greater than k/|S|.

Example 11 (CDH). In the CDH game on (G, g) (Example 6), an adversary wins the game if it correctly outputs the element g^(xy), where x, y are chosen by the challenger during the game. Thus, the baseline probability of the CDH game is 1/|G|.

Example 12 (OW-CPA). In the OW-CPA game for a public-key encryption scheme (Example 7) with message space M, an adversary wins the game if it correctly outputs a message m′ such that m′ = m, where m is chosen by the challenger during the game. Thus, the baseline probability of the OW-CPA game is 1/|M|.
Remark 3 (Search Game). Unlike decision games (Def. 4), we do not precisely define search games in this work. The previous definitions of search games in [MW18, WY21] impose specific structures on security games, leading to several problematic situations. (For detailed discussions, refer to Section 5.2.) Meanwhile, Examples 9-12 demonstrate that search primitives are expected to have negligible baseline probabilities. We take this extremely small baseline probability as a fuzzy characterization of search games. Looking ahead, our new definition of bit security depends only on the baseline probability and is independent of any other features of games. Thus, characterizing search games by an extremely small baseline probability suffices to see how our definition applies to search games.
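The baseline probability of the one-wayness game (Example 9) can be computed exactly for small parameters. A toy illustration (the function f below is hypothetical, not from the paper): a dummy adversary outputs a fixed x′ independently of y = f(x), so it wins with probability |f^(−1)(f(x′))|/2^ℓ, and the best dummy achieves k/2^ℓ where k is the largest fiber size of f:

```python
# Exact baseline probability of the one-wayness game for a toy function f.
from collections import Counter

l = 8
f = lambda x: (x * x + 1) % (2 ** l)          # hypothetical toy function

fibers = Counter(f(x) for x in range(2 ** l)) # preimage counts per output
k = max(fibers.values())                      # largest fiber size

# Best dummy adversary: pick the x' whose image has the largest fiber.
baseline = max(fibers[f(xp)] / 2 ** l for xp in range(2 ** l))
assert baseline == k / 2 ** l
```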

Conventional Advantage
As said, under our framework (Def. 8), we can refine and unify the conventional definitions of advantage, where |P^G_A − 1/2| is used for decision games and P^G_A is used for search games. We define the (conventional) advantage of an adversary as the (rectified) difference between the success probability of the adversary and the baseline probability.
Example 13 (Conventional Advantage). We denote and define the (conventional) advantage of A against G as the following.

  adv^G(A) := max { P^G_A − P^G_∅[T_A], 0 }

For a decision game, where the baseline probability is 1/2 (Example 8), the advantage is max{P^G_A − 1/2, 0} according to our definition. Thus, our definition matches the conventional definition for decision games. For a search game, note that the baseline probability is expected to be extremely small (Remark 3). Then, we can easily see that our definition approximately matches the conventional definition P^G_A for search games (when P^G_∅[T_A] is sufficiently small compared with P^G_A to be approximated as zero).
Remark 4 (Reformulation of Conventional Advantage). We note that our definition of the conventional advantage can be reformulated in terms of the total variation distance (a.k.a. statistical distance) between two Bernoulli distributions as the following. (Refer to Section 2 for notations.) This reformulation will later be used to see how our new definitions of advantage and bit security relate to the conventional definitions (Remark 9).

  adv^G(A) = d_TV(P^G_A, P^G_∅[T_A])  whenever P^G_A ≥ P^G_∅[T_A]

Information-Theoretic Security
Under our framework of baseline probability, we can obtain a natural and simple characterization of information-theoretic security.
Example 14 (Information-Theoretic Security). By definition, information-theoretic security guarantees that adversaries gain no information at all during the security game. In terms of our framework, this corresponds to the situation where all adversaries are essentially dummies. That is, we can characterize (or even define) information-theoretic security for a game G as the condition that P^G_A ≤ P^G_∅[T_A], and thus adv^G(A) = 0, holds for any adversary A (Example 13). This natural and simple characterization of information-theoretic security cannot be obtained if we adopt P^G_A as the definition of advantage for search primitives: under such a definition, search primitives always yield negligible but positive advantages for some adversaries.

Bit Security as Cost to Demonstrate Advantage
In this section, we propose a new definition of bit security. Intuitively, we define bit security as the cost to demonstrate that an adversary A indeed has an advantage over dummy adversaries in game G, i.e., that P^G_A > P^G_∅[T_A] holds,⁹ when A is given in a black-box manner. In other words, it can also be described as the cost to enable a third party to empirically experience or observe the advantage of an adversary.

For a formal presentation, we first introduce a meta-game, which we call the advantage observation game. The advantage observation game for adversary A against game G is a decision game where an adversary B tries to distinguish A from dummy adversaries. If B wins the game, then we may say it observed the advantage of A during the game.⁸

⁸ The intuition of defining advantage as the absolute value |P^G_A − 1/2| for decision games is that we can always transform an adversary with success probability P^G_A into one with 1 − P^G_A by switching the output. However, this transformation does not apply to search games. We consider this issue a roadblock to a unified definition and thus use a rectified version rather than an absolute value. This issue and treatment were previously considered by Bernstein-Hülsing when defining Decisional Second-Preimage Resistance (DSPR) in [BH19]. DSPR is defined by an unbalanced decision game whose baseline probability is not 1/2.

⁹ This condition captures the idea that there are two sources of advantage, namely success probability and cost. Note that the conventional definition of advantage (P^G_A for search primitives and |P^G_A − 1/2| for decision primitives) only considers the success probability.
Definition 10 (Advantage Observation Game). Let A be an adversary against a security game G = (X, L). The advantage observation game for A against G is denoted as Ĝ_A and defined as follows (see also Figure 1):

1. (Setting) Let A_0 be A and A_1 be a T_A-baseline adversary against G (Def. 9).

2. (Challenge) At the beginning of the game, the challenger chooses a uniform random challenge bit b ∈ {0, 1}.

3. (Query) The adversary B is allowed to query the challenger. For each query, the challenger runs a fresh instance of the game G between X and A_b and reports to B whether A_b won that instance.

4. (Answer) The game ends when B sends its answer b′ ∈ {0, 1} to the challenger.

5. (Winning Condition) The adversary B wins the game if b′ = b and P^G_A > P^G_∅[T_A].

Remark 5. We note that the advantage observation game is defined with respect to a dummy adversary, which is not intended to capture every trivial attack. (See Remark 2.)

Remark 6. We note that an adversary can never win the advantage observation game Ĝ_A when P^G_A ≤ P^G_∅[T_A]. This additional restriction captures the fact that there is no advantage to observe in such a case. (See also Footnote 8.)

We now define bit security in terms of the advantage observation game. As said, we define bit security as the cost to demonstrate the advantage of the adversary A against the game G, i.e., the total cost of invoking A multiple times to allow adversary B to win the advantage observation game Ĝ_A with high probability. To elaborate, we measure the total cost as the cost T_A of A multiplied by the query complexity N_B of B against the game Ĝ_A.
Definition 11 (Bit Security as Cost to Demonstrate Advantage). For any security game G, we denote and define its (demonstration) bit security (with respect to an error probability 0 < δ < 1/2) as the following, where the minimum is taken over all adversaries A against G and all adversaries B that win the advantage observation game Ĝ_A with probability at least 1 − δ.

  BS^δ_Dem(G) := min_{A, B} log(T_A · N_B)
Our unified definition of bit security is independent of the structure of security games (e.g., decision or search), unlike Watanabe-Yasunaga [WY21]. At the same time, our definition enjoys a clear and firm operational meaning from the advantage observation game, unlike Micciancio-Walter [MW18]. Nonetheless, we will see how our definition still resolves the hiccups of the conventional definition of bit security (Remark 10, Thm. 2). For a more detailed discussion, refer to Sections 5.3 and 5.4.
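The advantage observation game can be simulated directly. The following toy Monte Carlo sketch (with hypothetical success probabilities, not from the paper) has B repeatedly let A_b play G, observe only win/lose outcomes, and guess b by thresholding the empirical win rate between A's success probability and the baseline:

```python
# Toy simulation of the advantage observation game: B distinguishes
# A (win rate p_A) from a baseline adversary (win rate p_base) from N
# observed win/lose outcomes.
import random

random.seed(1)
p_A, p_base, N = 0.60, 0.50, 200      # hypothetical P^G_A, baseline, #queries

def observer_guess(b):
    """B's guess after N observed games against A_b (A_0 = A, A_1 = baseline)."""
    p = p_A if b == 0 else p_base
    wins = sum(random.random() < p for _ in range(N))
    return 0 if wins / N > (p_A + p_base) / 2 else 1

correct = 0
for _ in range(1000):
    b = random.randrange(2)
    correct += observer_guess(b) == b
assert correct / 1000 > 0.85          # B wins with high probability
```

With a gap of 0.1 between the two win rates, N on the order of a few hundred queries already lets B succeed with high probability, in line with the quadratic cost 1/ε² discussed below.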

Estimation
Here, we examine demonstration bit security (Def. 11) in a quantitative manner. The goal is to find a more computable characterization of our definition. We first review a folklore fact in learning theory on the sample complexity of distinguishing discrete distributions D_0 and D_1.

Proposition 2 (Sample Complexity of Distinguishing). For 0 < δ < 1/2, the number N of i.i.d. samples required to distinguish D_0 from D_1 with error probability at most δ satisfies N = Θ( ln(1/(2δ)) / d_H²(D_0, D_1) ).

Proof. Refer to Appendix A.
We now give an estimation of demonstration bit security (Def. 11) leveraging Prop. 2. The following theorem suggests that we can well-estimate the security in terms of the Hellinger distance. (Refer to Section 2 for notations.)

Theorem 1 (Estimation of Bit Security). For 0 < δ ≤ (2 − √3)/4, we have the following estimation of the demonstration bit security, up to a small additive error α satisfying 0 ≤ α ≤ 1 + log ln(1/(2δ)).

  BS^δ_Dem(G) = min_A log( T_A / d_H²(P^G_A, P^G_∅[T_A]) ) + α

Proof. Let B* be an adversary against the advantage observation game Ĝ_{A*} which has the minimal query complexity N_{B*} among adversaries with success probability at least 1 − δ. The theorem is easy to prove after dividing into two cases.

Case 1: Suppose d_H²(P^G_{A*}, P^G_∅[T_{A*}]) ≤ 1/2. By Prop. 2, we have the following bounds.

Remark 8 (Tightness). We note that the estimation of Thm. 1 is tight in the sense that the additive error is bounded by a double-logarithm of 1/δ. In particular, when δ = 2^(−128), the additive error is smaller than 7.5.
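The concrete figure in Remark 8 is a short arithmetic check. For δ = 2^(−128), the additive error bound 1 + log ln(1/(2δ)) evaluates to just under 7.5 bits:

```python
# Arithmetic check of Remark 8: the additive error bound of Thm. 1
# for delta = 2^-128 is below 7.5 bits.
import math

delta = 2 ** -128
alpha_max = 1 + math.log2(math.log(1 / (2 * delta)))  # 1 + log ln(1/(2*delta))
assert 7.4 < alpha_max < 7.5
```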

Bit Security in terms of Hellinger Distance
Although our definition of bit security based on the demonstration of advantage (Def. 11) provides a nice operational meaning, there remain issues with its practical usability. In particular, the definition is parameterized by the statistical significance δ, which might hinder its adoption. In this respect, we redefine bit security, dropping the dependency on the choice of δ and directly exploiting the Hellinger distance.
Definition 12 (Hellinger-Advantage). For any security game G and adversary A against G, we denote and define the Hellinger-advantage of A against G as the following.

  adv^G_{H²}(A) := d_H²(P^G_A, P^G_∅[T_A]) if P^G_A ≥ P^G_∅[T_A], and 0 otherwise

Definition 13 (Hellinger-Bit Security). For any security game G, we denote and define its Hellinger-bit security as the following.

  BS_{H²}(G) := min_A log( T_A / adv^G_{H²}(A) )
Regarding Thm. 1, our Hellinger-based definition can be understood as a normalized version of Def. 11 deleting the log(1/δ) term. Thus, Hellinger-bit security (BS_{H²}) is a more conservative measure than Def. 11 (BS^δ_Dem) when δ is reasonably small.

Remark 9 (Comparison with the Conventional Definition). Our Hellinger-based definitions share the same structure as the conventional definitions of advantage and bit security (Remark 4). That is, our definition differs from the conventional definition only in the choice of how to measure the difference between the success probability of an adversary and the baseline probability. The conventional definition utilizes the total variation distance (a.k.a. statistical distance), whereas our definition utilizes the square of the Hellinger distance. While our new definition with an operational meaning (Section 3.3) resolves the hiccups of the conventional definition (Remark 10, Thm. 2), this similarity in structure does not seriously harm existing proof outlines and techniques (Section 4).
Next, as examples, we compute the Hellinger-advantage of several security games. In particular, we apply our definition to decision and search primitives to demonstrate how it embraces the quadratic gap (See Introduction).
Example 15 (Decision Primitives). Let G be a decision game (Def. 4). Then, we have P^G_∅[T] = 1/2. In addition, let A be an adversary against G with conventional advantage ε > 0 (Example 13); that is, we have P^G_A = 1/2 + ε. Then, we can compute the Hellinger-advantage of A against G as the following. The last equality is obtained from a Taylor approximation.

adv^G_{H²}(A) = d_H(1/2 + ε, 1/2)² = 1 − √((1/2 + ε)/2) − √((1/2 − ε)/2) ≈ ε²/2
Example 16 (Search Primitives). Let G be a search game; that is, we may assume P^G_∅[T] to be 0 (Remark 3). In addition, let A be an adversary against G with conventional advantage ε > 0 (Example 13); that is, we have P^G_A = ε. Then, we can compute the Hellinger-advantage of A against G as the following. The last equality is obtained from a Taylor approximation.

adv^G_{H²}(A) = d_H(ε, 0)² = 1 − √(1 − ε) ≈ ε/2
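The Taylor approximations in Examples 15 and 16 are easy to check numerically. The sketch below (assuming the Bernoulli reading of success and baseline probabilities) confirms that the Hellinger-advantage is ≈ ε²/2 in the decision case and ≈ ε/2 in the search case for small ε:

```python
import math

def hellinger_sq(p: float, q: float) -> float:
    # Squared Hellinger distance between Bernoulli(p) and Bernoulli(q).
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

eps = 1e-3
# Example 15 (decision): success 1/2 + eps against baseline 1/2.
adv_dec = hellinger_sq(0.5 + eps, 0.5)
# Example 16 (search): success eps against baseline 0.
adv_search = hellinger_sq(eps, 0.0)

assert abs(adv_dec / (eps**2 / 2) - 1) < 0.01   # ~ eps^2 / 2
assert abs(adv_search / (eps / 2) - 1) < 0.01   # ~ eps / 2
```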
Remark 10 (Decision/Search Primitives). Through Examples 15 and 16, we checked how our Hellinger-advantage embraces the quadratic gap of conventional bit security between decision and search primitives. That is, according to our definition, the bit security of a decision (resp. search) primitive is roughly min log(T/ε²) (resp. min log(T/ε)), where ε is the conventional advantage (Example 13). As the quadratic gap is the central question in this line of work, the definitions of the previous works [MW18, WY21] also capture this gap. However, (i) unlike Watanabe-Yasunaga [WY21], we capture the gap in a single unified framework, and (ii) unlike Micciancio-Walter [MW18], our definition has a clear and firm operational meaning. For a more detailed discussion, refer to Sections 5.3 and 5.4.
Example 17 (Information-Theoretic Security). Let G be a security game with information-theoretic security. Then, for any adversary A against G, we have adv^G_{H²}(A) = 0, and thus BS_{H²}(G) = ∞. This is exactly what one might expect from definitions of advantage and bit security. However, according to the previous definitions [MW18, WY21], some unconditionally secure primitives do not enjoy infinite bit security. For a more detailed discussion, refer to Section 5.5.

Theorems
In this section, we state and prove several security reductions with respect to Hellinger-bit security (Def. 13). We show how our new definition (i) resolves the peculiar situation of the conventional definition regarding distribution approximations (Section 4.1) and (ii) still provides the classic results proven under the conventional definition (Sections 4.2 and 4.3). In addition, throughout this section, we demonstrate that Hellinger-bit security is not too difficult to use compared to the conventional definition and often admits simpler and more intuitive proofs than the previous definitions of [MW18, WY21]. These observations support the suitability of our new definition.

Distribution Approximation
We first prove that we can replace distributions in λ-bit secure games with a λ/2-bit close distribution (with respect to the Hellinger distance) while preserving security. Our result holds regardless of the structure of the game (e.g., decision/search). This resolves the peculiar situation of the conventional definition, where the theorem was proved only for search primitives (See Introduction).
The proof outline is very intuitive, as we use a standard trick with the triangle inequality; the other parts follow easily from nice properties of the Hellinger distance. We emphasize that we prove the theorem as a whole and do not divide into decision and search cases.
Theorem 2 (Distribution Approximation). Let G_0 = (X_0, L) and G_1 = (X_1, L) be identical security games except that challenger X_0 uses distribution D_0 at points where X_1 uses D_1. Assume that the challengers sample from the distributions at most c²·T times when playing with adversaries of cost T.¹¹ If G_1 is λ-bit secure and d_H(D_0, D_1) ≤ 2^{-λ/2}, then G_0 is (λ − 2 log(2c + 1))-bit secure, with respect to Hellinger-bit security.
Proof. We first note that any adversary A with cost T satisfies the following inequalities. The first inequality is from the data-processing inequality (Prop. 1(b)) together with the fact that A can learn information from at most c²·T samples.

d_H(P^{G_0}_A, P^{G_1}_A) ≤ d_H(D_0^{c²T}, D_1^{c²T}) ≤ c√T · d_H(D_0, D_1)

Thus, we have the following inequality.

d_H(P^{G_0}_A, P^{G_1}_A) ≤ c√T · 2^{-λ/2}

Consider an adversary A against G_0 with a positive Hellinger-advantage, and let ∅_b denote the T_A-baseline adversary (Def. 8) against G_b for b = 0, 1. Then, we have the following inequalities. The first inequality is from the definition of the baseline adversary, and the second is from the standard triangle inequality (Prop. 1(a)). For the third inequality, the first and third terms are bounded by the above note, and the second term is bounded by the fact that G_1 is λ-bit secure.

adv^{G_0}_{H²}(A) ≤ d_H(P^{G_0}_A, P^{G_0}_{∅_0})²
 ≤ (d_H(P^{G_0}_A, P^{G_1}_A) + d_H(P^{G_1}_A, P^{G_1}_{∅_1}) + d_H(P^{G_1}_{∅_1}, P^{G_0}_{∅_0}))²
 ≤ (c√(T_A)·2^{-λ/2} + √(T_A·2^{-λ}) + c√(T_A)·2^{-λ/2})² = (2c + 1)²·T_A·2^{-λ}
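A standard property underlying the sampling bound in the proof above is the subadditivity of the squared Hellinger distance over product distributions, d_H(P^n, Q^n)² ≤ n·d_H(P, Q)², which follows from the exact identity 1 − d_H(P^n, Q^n)² = (1 − d_H(P, Q)²)^n. A numerical sanity check for products of Bernoulli distributions (an illustrative sketch, not part of the proof):

```python
import math

def hellinger_sq(p: float, q: float) -> float:
    # Squared Hellinger distance between Bernoulli(p) and Bernoulli(q).
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

def hellinger_sq_product(p: float, q: float, n: int) -> float:
    # Exact identity for product distributions: the Hellinger affinity
    # multiplies, so 1 - d_H(P^n, Q^n)^2 = (1 - d_H(P, Q)^2)^n.
    return 1.0 - (1.0 - hellinger_sq(p, q)) ** n

h2 = hellinger_sq(0.5, 0.51)
for n in (2, 10, 100, 1000):
    # Subadditivity: n samples stretch the distance by at most sqrt(n).
    assert hellinger_sq_product(0.5, 0.51, n) <= n * h2
```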
Remark 11 (Comparison). The previous works [MW18, WY21] also prove similar theorems. However, their multi-page proofs consist of a handful of computations, even after borrowing some results from [MW17].¹² In particular, they had to prove the theorem by dividing into the decision and search cases.
On the other hand, our proof is much simpler and uses only standard techniques; in particular, it fits on a single page. Moreover, our proof is unified in the sense that we do not handle search and decision primitives separately. We believe this indicates that Hellinger-bit security is a more suitable definition than those of the previous works.

Hybrid Argument
We prove the hybrid argument with respect to our Hellinger-bit security. Our proof is essentially no different from the proof for conventional bit security. The only difference is that we lose roughly 2 log n bits of security, whereas we lose log n bits in the conventional setting. This gap seems natural, as the Hellinger-advantage is roughly the square of the conventional advantage for decision games (Remark 10). The gap also appears in the previous works [MW18, WY21].
Theorem 3 (Hybrid Argument). Let n be a positive integer. For 0 ≤ i, j ≤ n, let G_{i,j} be a decision game (Def. 4), where the challenger acts as an algorithm X_i or X_j according to its secret bit.

Proof. Let A be an adversary against G_{0,n} with success probability 1/2 + ε, where 0 < ε ≤ 1/2; i.e., the conventional advantage (Example 13) of A is ε. This implies the existence of an interactive algorithm Ā with bounded cost. Then, by the triangle inequality, for some 0 ≤ i* < n, the corresponding pair of adjacent hybrids must be far apart, which implies the existence of an adversary A* against G_{i*,i*+1} with conventional advantage greater than ε/n and comparable cost.

Remark 12 (Comparison). Our proof outline is identical to the conventional proof. This is due to the structural similarity of our definition of bit security to the conventional one (Remark 9). On the other hand, the proof of Micciancio-Walter [MW18] needs to take care of aborts, which play a major role in their definition (Def. 17). The proof of Watanabe-Yasunaga [WY21] is intuitive and natural but requires a different approach, as their definition of bit security (Def. 19) also depends on distributional information other than the success probability.
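The quantitative core of the hybrid argument is the triangle inequality for the Hellinger distance: if the end distributions are far apart, some adjacent pair of hybrids must be at least 1/n as far, so the squared distance (our advantage notion) drops by at most a factor n². A sketch over a chain of Bernoulli hybrids (the chain values are illustrative):

```python
import math

def d_h(p: float, q: float) -> float:
    # Hellinger distance between Bernoulli(p) and Bernoulli(q).
    return math.sqrt(1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q))))

n = 8
# A chain of hybrid output distributions p_0, ..., p_n.
hybrids = [0.5 + 0.2 * i / n for i in range(n + 1)]

total = d_h(hybrids[0], hybrids[-1])
steps = [d_h(hybrids[i], hybrids[i + 1]) for i in range(n)]

assert total <= sum(steps) + 1e-12         # triangle inequality
assert max(steps) >= total / n             # some adjacent pair is total/n apart
assert max(steps) ** 2 >= total ** 2 / n ** 2  # advantage loses at most n^2
```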

Decision-to-Search Reductions
In the literature, there are several pairs of decision and search games that allow natural decision-to-search security reductions. These reductions are proved along the same outline: to decide whether a distribution is structured or not, the reduction algorithm calls the given adversary, which solves the search problem related to the structure. If the adversary succeeds, the reduction algorithm answers that the distribution is structured, since if the distribution were random, the adversary would have failed with high probability. This approach gives a tight reduction, in the sense that it preserves the conventional advantage up to a constant factor.
We show that this framework still works under our new definition of bit security. However, to achieve tight reductions, extra work must be done. Unlike the conventional proof, where the search adversary is called only once, we have to call the adversary several times to amplify the success probability. Otherwise, at worst, we may lose half of the bit security along the reduction: the conventional reduction preserves the conventional advantage ε, but Hellinger-bit security is roughly min log(T/ε²) for decision primitives and min log(T/ε) for search primitives (Remark 10). We first prove that a PRG (Example 1) is a OWF (Example 4), again leveraging the Hellinger distance and Prop. 2.
Theorem 4 (Pseudorandomness to One-wayness). Let f : {0,1}^ℓ → {0,1}^m be a PRG. If f is λ-bit secure, then f is also (λ − α)-bit secure as a OWF, with respect to Hellinger-bit security. Here, α = 8 + log(1 + T_f), where T_f denotes the cost of evaluating f.

Proof. Let A be an adversary against the one-wayness game G on f, with success probability ε > 0. We construct an adversary A′ against the pseudorandomness game G′ as follows: whenever a query is made, A′ runs A on the sample y and checks whether the output x′ satisfies y = f(x′). The adversary records Yes if the condition is satisfied and No if not. Note that Pr[Yes | b = 0] = ε and Pr[Yes | b = 1] ≤ (2^ℓ/2^m)·ε ≤ (1/2)·ε. Now the goal is to distinguish the two cases b = 0 and b = 1 using this Yes/No distribution. By Prop. 2, to distinguish the two cases with probability at least 1 − δ, it suffices to count the number of Yes's among N = ln(1/(2δ))/d_H(ε, ε/2)² samples. For such an adversary A′, we have the following inequalities.
Meanwhile, we can bound the second term as follows.
In total, if we choose δ ≈ 0.1, we have the following bound.
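To see that the amplification step costs only a constant number of security bits, one can evaluate the sample complexity N = ln(1/(2δ))/d_H(ε, ε/2)² from the proof. The sketch below checks that N·ε stays below a small constant as ε shrinks (we use the bound 2^8 as an illustrative ceiling, consistent with the constant term in α; the exact constant is not claimed here):

```python
import math

def d_h_sq(p: float, q: float) -> float:
    # Squared Hellinger distance between Bernoulli(p) and Bernoulli(q).
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

def num_samples(eps: float, delta: float = 0.1) -> float:
    # N = ln(1/(2*delta)) / d_H(eps, eps/2)^2, as in the proof of Thm. 4.
    return math.log(1.0 / (2.0 * delta)) / d_h_sq(eps, eps / 2.0)

for eps in (0.5, 0.1, 0.01, 1e-4, 1e-6):
    # The amplification overhead is O(1/eps), so the reduction loses
    # only a constant number of bits on top of log(1/eps).
    assert num_samples(eps) * eps < 2**8
```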
The proofs for DDH to CDH (Examples 3 and 6) and IND-CPA to OW-CPA (Examples 2 and 7) are similar to that of PRG to OWF.
Theorem 5 (DDH to CDH). If DDH on (G, g) is λ-bit hard, then CDH on (G, g) is also (λ − α)-bit hard, with respect to Hellinger-bit hardness. Here, α = 4 + log(1 + T_eq), where T_eq denotes the cost of checking whether two group elements are equal.
Proof. Let A be an adversary against the CDH game G, with a positive advantage. We construct an adversary A′ against the DDH game G′ as follows: whenever a query is made, A′ runs A on (g^x, g^y) from the sample (g^x, g^y, g^z) and checks whether g^z = A(g^x, g^y). The adversary records Yes if the condition is satisfied and No if not. Now the goal is to distinguish the two cases b = 0 and b = 1 using this Yes/No distribution. By Prop. 2, to distinguish the two cases with probability at least 1 − δ, it suffices to count the number of Yes's among N = ln(1/(2δ))/d_H(Pr[Yes | b = 0], Pr[Yes | b = 1])² samples. For such an adversary A′, we have the following bound.
Theorem 6 (IND-CPA to OW-CPA). Consider a public-key encryption scheme with message space M. If it is λ-bit secure against the IND-CPA game, then it is also (λ − α)-bit secure against the OW-CPA game, with respect to Hellinger-bit security. Here, α = 4 + log(1 + T_M), where T_M denotes the cost of sampling a random message and checking whether two messages are equal.
Proof. Let A be an adversary against the OW-CPA game G, with a positive success probability ε. We construct an adversary A′ against the IND-CPA game G′ as follows: the adversary A′ always makes a query with randomly chosen messages m_0 and m_1. Whenever a query is made, A′ runs A on the sample c and checks whether the output m′ satisfies m′ = m_0. The adversary records Yes if the condition is satisfied and No if not. Note that Pr[Yes | b = 0] = ε and Pr[Yes | b = 1] = (1 − ε)/(|M| − 1). Now the goal is to distinguish the two cases b = 0 and b = 1 using this Yes/No distribution. By Prop. 2, to distinguish the two cases with probability at least 1 − δ, it suffices to count the number of Yes's among N = ln(1/(2δ))/d_H(ε, (1 − ε)/(|M| − 1))² samples. For such an adversary A′, we have the following bound.
Meanwhile, we can bound the second term.

In total, if we choose δ ≈ 0.1, we have the following bound.
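As in the PRG case, the sample count N from the proof of Thm. 6 is small in practice: for a large message space, the gap between Pr[Yes | b = 0] = ε and Pr[Yes | b = 1] = (1 − ε)/(|M| − 1) is easy to detect. A quick computation (illustrative values only):

```python
import math

def d_h_sq(p: float, q: float) -> float:
    # Squared Hellinger distance between Bernoulli(p) and Bernoulli(q).
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

def num_samples(eps: float, msg_space: float, delta: float = 0.1) -> float:
    # N = ln(1/(2*delta)) / d_H(eps, (1-eps)/(|M|-1))^2, as in Thm. 6.
    q = (1.0 - eps) / (msg_space - 1.0)
    return math.log(1.0 / (2.0 * delta)) / d_h_sq(eps, q)

# For a 128-bit message space, even a 1% inversion probability is
# separated from the b = 1 case with a few hundred samples.
n = num_samples(0.01, 2.0**128)
assert 100 < n < 1000
```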

Comparisons
In this section, we compare our new definitions with those in the previous works of Micciancio-Walter [MW18] and Watanabe-Yasunaga [WY21]. We indicate several weaknesses of the previous works and strengths of ours to suggest that Hellinger-bit security is the more suitable definition.

Definition of Security Game
We first review the definitions of security games in the previous works.
Definition 14 ([MW18, Def. 5]). An n-bit security game is played by an adversary A interacting with a challenger. At the beginning of the game, the challenger chooses a secret x ∈ {0,1}^n, represented by the random variable X, from some distribution D_X. At the end of the game, A outputs some value, which is represented by the random variable A. The adversary A wins the game if it outputs a such that (x, a) ∈ R, where R is some relation. A may output a special symbol ⊥ such that (x, ⊥) ∉ R for all x.
Definition 15 ([WY21, Inner Game¹³]). An n-bit security game G, consisting of an algorithm X, a relation R, and an oracle O, is played by an adversary A given oracle access to O. At the beginning of the game, a secret u ∈ {0,1}^n is chosen uniformly at random, and the challenge x is computed as X(u). Given x, the adversary A wins the game if it outputs a value a such that (u, x, a) ∈ R.
The previous definitions require security games to begin with the challenger choosing a secret, and they characterize winning conditions as relations between this secret and the final answer of the adversary (plus a challenge in [WY21]). We regard this as an unnatural and restrictive formulation, since we often consider security games where the winning condition is affected by the queries of the adversary during the game. This includes the EUF-CMA game for signature schemes (Example 5). Thus, the previous definitions fail to capture even the very basic EUF-CMA game.
On the other hand, our definition (Def. 3) captures essentially all security games: we do not put any restriction on the structure of the games, and we characterize winning conditions in terms of the view of the challenger. In particular, our framework captures all security games covered by the definition of falsifiable assumptions of Gentry-Wichs [GW11], and thus the complexity assumptions of Goldwasser-Kalai [GK16].

Definition of Search Game
In the previous works [MW18, WY21], decision and search games are distinguished by the length of the secret chosen by the challenger: decision games are defined as 1-bit games, and search games as n-bit games with large n. We claim that this characterization is problematic. In particular, we construct the following pathological examples.
Example 18 (Pathological Examples). Let G be a decision game, i.e., a 1-bit security game, with respect to the definition of [MW18] or [WY21]. We can naturally extend G into a redundant n-bit security game G′ with arbitrarily large n as follows: the first bit of the n-bit secret is chosen following the secret distribution of G, and the remaining bits are chosen uniformly at random. The game G′ proceeds exactly as G, regarding the first bit of the secret as the secret bit of G. The winning condition of G′ is also defined the same as in G, reading only the first bit of the secret.
In the example, G and G′ are essentially the same game. However, G is a decision game while G′ is a search game, according to the definitions of [MW18, WY21]. Thus, this characterization does not capture the core nature of search games in a suitable way. This is especially problematic when the definition of bit security depends on whether the game is decision or search, as in [WY21]. In fact, in Section 5.5, we show that G and G′ in general have different bit security under the definitions of [MW18] and [WY21].
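The invariance enjoyed by our definition can be made concrete: an adversary's success probability and the baseline probability are unchanged by padding the secret, so its Hellinger-advantage is identical in G and G′. A minimal sketch (assuming the Bernoulli reading of Def. 12; the numeric values are illustrative):

```python
import math

def hellinger_advantage(success: float, baseline: float) -> float:
    # Squared Hellinger distance between Bernoulli distributions (Def. 12).
    return 1.0 - (math.sqrt(success * baseline)
                  + math.sqrt((1.0 - success) * (1.0 - baseline)))

eps = 0.05
# G: the 1-bit decision game; A wins with probability 1/2 + eps.
adv_G = hellinger_advantage(0.5 + eps, 0.5)

# G' of Example 18: the winning condition reads only the first secret bit,
# so A's success and baseline probabilities, and hence its
# Hellinger-advantage, are unchanged by the n-1 redundant secret bits.
adv_G_prime = hellinger_advantage(0.5 + eps, 0.5)

assert adv_G == adv_G_prime
```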
On the other hand, we do not precisely define search games (Remark 3); we rather take an extremely small baseline probability as a fuzzy characterization of them. Recall that our definition of bit security depends only on the baseline probability and is independent of any other structure of the game. Thus, such a characterization suffices to identify the quadratic gap of bit security between decision and search games (Remark 10); elsewhere, we do not have to distinguish search games from decision games.

Bit Security of Micciancio-Walter
We review the definition of Micciancio-Walter [MW18] and compare it with our definition.

BS_MW(G) = min_A log (T_A / adv^G_MW(A))

Our definition and the definition of [MW18] are both unified, in the sense that bit security is defined in a single framework regardless of game type (e.g., decision/search). The difference between the two works lies in naturalness and interpretability. Micciancio-Walter introduce a hypothetical random variable Y in Def. 16, which lacks intuitive meaning absent further explanation; in particular, it remains unclear why Y should be defined as it is in Case 3 of Def. 16. Moreover, there may be controversy over why aborts must be allowed, why they must not be regarded as failures, and why they must affect bit security in such a specific way. On the other hand, our definition is based on the simple and natural concept of demonstrating advantage (Def. 10), which has a firm operational meaning by nature.

Bit Security of Watanabe-Yasunaga
We review the definition of Watanabe-Yasunaga [WY21] and compare it with our definition.
• When G is a decision game (i.e., a 1-bit game), the outer game of G with respect to A is played by an outer adversary B, who wins the game if it outputs u ∈ {0,1} given oracle access to A(X(u)). See Figure 2a.
• When G is a search game (i.e., an n-bit game with n > 1), the outer game of G with respect to A is played by an outer adversary B, who invokes G several times and wins if A won any of the invoked games. In other words, B has oracle access to A(X(u)), where u is uniformly chosen from {0,1}^n at the beginning of each query. See Figure 2b.
The outer game of G with respect to A is denoted Ĝ^WY_A.
Definition 19 ([WY21, Def. 1]). For any security game G (regarding Def. 15), we denote and define its WY-bit security (with respect to error probability 0 < δ < 1/2) as the following, where N_B denotes the query complexity of B.

Our definition and that of [WY21] both define bit security as the cost to win certain meta-games, yielding firm operational meanings. The difference between the two works lies in generality. Although [WY21] defines bit security for both search and decision primitives as the cost of winning meta-games, the designated games for search and decision primitives differ qualitatively. On the other hand, our definition is unified, in the sense that bit security is defined in terms of a single meta-game, the advantage observation game (Def. 10), regardless of game type (e.g., decision/search).

Pathological Examples
It is easy to observe that bit security is finite for n-bit security games with n > 1 under the definitions of [MW18, WY21]. This contradicts our expectation that unconditionally secure primitives (e.g., an information-theoretic MAC) should have ∞-bit security. On the other hand, our definition meets this expectation by its nature (Example 17). Now consider a 1-bit game G, where an adversary wins if it correctly guesses the secret bit but is not allowed to make any queries. Indeed, the game G is unconditionally secure and achieves ∞-bit security under the definitions of [MW18, WY21]. However, the game G′ (Example 18), which is essentially the same game but with a redundantly large secret, has finite (in fact, very small) bit security under their definitions.
That is, games that are essentially the same do not have the same bit security under the previous definitions. We note that the game G is an extreme case considered for ease of presentation; this discrepancy happens in general for the games constructed in Example 18. On the other hand, our definition is not affected by such examples, since our bit security depends only on the success and baseline probabilities. These pathological examples suggest that the previous definitions of [MW18] and [WY21] are ill-defined.
Yes if the condition is satisfied and No if not. Note that Pr[Yes | b = 0] = P^G_A and Pr[Yes | b = 1] = 1/|G| = P^G_∅[T_A] (Example 11).
Definition 16 ([MW18, Def. 7]). For any security game G (regarding Def. 14) and adversary A against G, we denote and define the MW-advantage of A against G as the following.

adv^G_MW(A) = I(X; Y) / H(X)

Here, I(·;·) is the mutual information, H(·) is the Shannon entropy, and Y(X, A) is the random variable with marginal distributions Y_{x,a} = {Y | X = x, A = a} defined as follows.
1. Y_{x,⊥} = ⊥, for all x.
2. Y_{x,a} = x, for all (x, a) ∈ R.
3. Y_{x,a} = {x′ ← D_X | x′ ≠ x}, for all (x, a) ∉ R.

Definition 17 ([MW18, Def. 8]). For any security game G (regarding Def. 14), we denote and define its MW-bit security as the following.

Figure 2: Outer Games of [WY21]

The following proposition bounds the sample complexity of distinguishing two distributions in terms of the Hellinger distance (Section 2.2). The statement and proof are more or less verbatim from [BY02, Can17], with extra care on the constants behind the asymptotic expressions.

Proposition 2 (Sample Complexity Bounds). Let N_δ(D_0, D_1) denote the sample complexity of distinguishing discrete distributions D_0 and D_1 with probability at least 1 − δ; that is, N_δ(D_0, D_1) is the minimum query complexity over adversaries against the distribution-distinguishing game on D_0 and D_1 (Def. 5) with success probability at least 1 − δ. For 0 < δ < 1/2, we have the following bounds. The upper bound always holds, and the lower bound holds if d_H
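The upper bound of Prop. 2 can be checked empirically. The sketch below distinguishes two Bernoulli distributions using the stated number of samples and a simple mid-point threshold test (not necessarily the optimal distinguisher of the proposition; the parameter values are illustrative):

```python
import math
import random

def d_h_sq(p: float, q: float) -> float:
    # Squared Hellinger distance between Bernoulli(p) and Bernoulli(q).
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

def sample_bound(p0: float, p1: float, delta: float) -> int:
    # Upper bound of Prop. 2: N = ln(1/(2*delta)) / d_H(D0, D1)^2 samples.
    return math.ceil(math.log(1.0 / (2.0 * delta)) / d_h_sq(p0, p1))

def guess(p: float, n: int, threshold: float, rng: random.Random) -> bool:
    # Count Yes's among n samples; guess "p1" if the mean exceeds threshold.
    return sum(rng.random() < p for _ in range(n)) / n > threshold

p0, p1, delta = 0.5, 0.6, 0.1
n = sample_bound(p0, p1, delta)
rng = random.Random(0)
trials = 1000
correct = sum(guess(p1, n, 0.55, rng) for _ in range(trials))
correct += sum(not guess(p0, n, 0.55, rng) for _ in range(trials))
rate = correct / (2 * trials)
assert rate >= 1.0 - delta  # N samples indeed suffice in this instance
```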