ZKP and MPC 2

Lec02 Zero-Knowledge Proof under Composition

2.1 Review of Interactive Proof

  1. Interactive Proof

    1. Framework

      Prover \(P\) , Verifier \(V\) . Both \(P\) and \(V\) know \(x\) .

      \(P\) wants to convince \(V\) that \(x\in L\) through interaction .

      \(P,V\) exchange messages \(m_1,m_2,\cdots,m_k\) one by one , then \(V\) accepts or rejects , deciding whether \(x\in L\) .

      \(\braket{P,V}(x)=y\in \{0,1\}\) , indicating whether \(x\in L\) .

      transcript : \(\tau=\{m_1,m_2,\cdots,m_k\}\) .

    2. Formal Definition

      \((P,V)\) is an interactive proof for \(L\) if

      • \(V\) is an efficient algorithm ( poly-time )
      • Completeness : \(\forall x\in L,\Pr\{\braket{P,V}(x)=1\}=1\)
      • (Almost) Soundness : \(\forall x\notin L,\forall P^*,\Pr\{\braket{P^*,V}(x)=1\}<\epsilon\)
  2. Error Toleration

    1. Security parameter : \(\kappa\)

    2. For Efficiency : \(|x|=poly(\kappa)\)

    3. For Security

      • Def [Negligible Function] \(\epsilon:\mathbb N\to \mathbb R^*\) is negligible if for every positive polynomial \(p\) , \(\exists c\) , s.t. \(\forall k\ge c,\epsilon(k)<\frac{1}{p(k)}\)

      • Def [Statistically Indistinguishable] : two distributions \(f,g\) are statistically indistinguishable if there exists a negligible function \(\epsilon\) such that : \[ SD(f(x),g(x))=\frac{1}{2}\sum_{s}\left|\Pr\{f(x)=s\}-\Pr\{g(x)=s\}\right|<\epsilon \]

      • Def [Computationally Indistinguishable] : two distributions \(f,g\) are computationally indistinguishable if for every distinguisher \(D\) in P.P.T. , there exists a negligible function \(\epsilon\) such that : \[ \left|\Pr\left\{D\big(f(x)\big)=1\right\}-\Pr\left\{D\big(g(x)\big)=1\right\}\right|<\epsilon \]
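      For small finite distributions , the statistical distance above can be computed exactly . A minimal Python sketch ( the function name and the toy distributions are illustrative , not part of the notes ) :

```python
from fractions import Fraction

def statistical_distance(f, g):
    """SD(f, g) = 1/2 * sum_s |Pr[f = s] - Pr[g = s]| over the union of supports."""
    support = set(f) | set(g)
    return Fraction(1, 2) * sum(
        abs(f.get(s, Fraction(0)) - g.get(s, Fraction(0))) for s in support
    )

# Two toy distributions over {0, 1} :
f = {0: Fraction(1, 2), 1: Fraction(1, 2)}
g = {0: Fraction(1, 4), 1: Fraction(3, 4)}
print(statistical_distance(f, g))  # -> 1/4
```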

  3. Zero Knowledge

    1. Intuition : no gain of knowledge

      Anything that can be computed locally is not a gain of knowledge .

    2. Def [View] : \[ View_V^P(x)=(x,r,\tau) \] where \(r\) is \(V\)'s random tape and \(\tau\) is the transcript .

    3. Def [dishonest-verifier zero-knowledge] : \((P,V)\) is an IP for \(L\) . \((P,V)\) achieves perfect / statistical / computational dishonest-verifier zero-knowledge if :

      \(\forall V^*\) in P.P.T. , \(\exists \ \) expected poly-time randomized algorithm \(M^*\) , s.t. \[ \forall x\in L,M^*(x)\sim View_{V^*}^P(x) \] where \(\sim\) denotes identically distributed / statistically indistinguishable / computationally indistinguishable , respectively .

    4. Alternative definition of zero-knowledge :

      \(\forall V^*\) in P.P.T. , \(\exists\ \) expected poly-time randomized algorithm \(M^*\) , s.t. \[ \forall x\in L,\braket{P,V^*}(x)\sim M^*(x) \]

      Proof of Equivalence

      Original \(\to\) Alternative : The output of \(V^*\) only depends on its view . If \(M^*\) can simulate the view of \(V^*\) , then it can run \(V^*\) with the view , and get the output of \(V^*\) .

      Alternative \(\to\) Original : The proposition holds for all \(V^*\) ( including verifiers that violate the protocol ) . In particular , take a \(V^*\) that simply outputs its entire view ; the corresponding \(M^*\) must still exist , and such an \(M^*\) is exactly a simulator for the view of \(V^*\) .

2.2 Interactive Proof for Graph Isomorphism

  1. Protocol

    1. \(P\) : samples \(\pi_r\leftarrow S_n\) , computes \(\tilde G:=\pi_r(G_0)\) , sends \(\tilde G\) to \(V\) .

    2. \(V\) : samples \(b\leftarrow \{0,1\}\) , sends \(b\) to \(P\) .

    3. \(P\) : finds \(\pi_b\) , s.t. \(\tilde G=\pi_b(G_b)\) , sends \(\pi_b\) to \(V\) .

    4. \(V\) : outputs \(\begin{cases}1&\tilde G=\pi_b(G_b)\\0&\text{otherwise}\end{cases}\)

    Note : If Prover knows \(\pi\) that \(\pi(G_0)=G_1\) , then \(\pi_b\) can be constructed : \[ \pi_b=\begin{cases}\pi_r&b=0\\\pi_r\circ\pi^{-1}&b=1\end{cases} \]
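    The round above , together with the construction of \(\pi_b\) in the note , can be sketched in Python . This is a toy model : graphs are encoded as frozensets of 2-element frozensets , and all names are illustrative :

```python
import random

# A permutation is a tuple pi with pi[i] = image of vertex i .

def apply_perm(pi, graph):
    """Apply vertex permutation pi to every edge of the graph."""
    return frozenset(frozenset({pi[u], pi[v]}) for u, v in map(tuple, graph))

def invert(pi):
    inv = [0] * len(pi)
    for i, j in enumerate(pi):
        inv[j] = i
    return tuple(inv)

def compose(sigma, tau):
    """(sigma . tau)(i) = sigma(tau(i))"""
    return tuple(sigma[t] for t in tau)

def one_round(G0, G1, pi, n, rng):
    """One round of the GI protocol ; pi is the prover's witness , pi(G0) = G1."""
    pi_r = tuple(rng.sample(range(n), n))                # Prover : random relabeling
    G_tilde = apply_perm(pi_r, G0)                       # Prover sends G_tilde
    b = rng.randrange(2)                                 # Verifier : challenge bit
    pi_b = pi_r if b == 0 else compose(pi_r, invert(pi)) # as in the note above
    return apply_perm(pi_b, G1 if b else G0) == G_tilde  # Verifier's check

# Toy instance : G1 = pi(G0) for pi = (1, 2, 0) .
rng = random.Random(0)
G0 = frozenset({frozenset({0, 1}), frozenset({1, 2})})
pi = (1, 2, 0)
G1 = apply_perm(pi, G0)
print(all(one_round(G0, G1, pi, 3, rng) for _ in range(100)))  # completeness : True
```

    When \(b=1\) the reply \(\pi_r\circ\pi^{-1}\) first maps \(G_1\) back to \(G_0\) and then to \(\tilde G\) , matching the case analysis in the note .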

  2. Completeness : By construction of \(\pi_b\) above , \(\Pr\{V\text{ accepts}\}=1\) .

  3. Soundness : \(\forall \tilde G\) , if \(G_0\not\sim G_1\) , then either \(\tilde G\not\sim G_0\) or \(\tilde G\not\sim G_1\) .

    If \(\tilde G\not\sim G_{b^*}\) , then \(\Pr\{V\text{ rejects}\}\ge \Pr\{b=b^*\}=\frac{1}{2}\)

    Note : we cannot let the Verifier always choose \(b=1\) , since a malicious Prover can violate the protocol : it may send \(\tilde G:=\pi_r(G_1)\) , which need not be isomorphic to \(G_0\) , and still answer the challenge .

    Repeat \(k\) times , \(\Pr\{V\text{ rejects}\}\ge 1-\frac{1}{2^k}\) .

  4. Honest-Verifier Zero-Knowledge : both \(P\) and \(V\) follow the protocol .

    In \(View_V^P\) , the additional info is \(\tilde G,\pi_b\) , which are both randomly picked s.t. \(\pi_b(G_b)=\tilde G\) .

    \(M\) : samples \(b\gets\{0,1\}\) , samples \(\pi_b\gets S_n\) , computes \(\tilde G=\pi_b(G_b)\) .
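    For a toy instance one can verify perfect honest-verifier zero-knowledge exhaustively : enumerating all of the honest prover's randomness \((\pi_r,b)\) and all of \(M\)'s randomness \((b,\pi_b)\) yields identical transcript distributions . A sketch under the same toy graph encoding as before ( all names are illustrative ) :

```python
from collections import Counter
from itertools import permutations

def apply_perm(pi, graph):
    return frozenset(frozenset({pi[u], pi[v]}) for u, v in map(tuple, graph))

def invert(pi):
    inv = [0] * len(pi)
    for i, j in enumerate(pi):
        inv[j] = i
    return tuple(inv)

def compose(sigma, tau):
    return tuple(sigma[t] for t in tau)

def real_transcripts(G0, G1, pi, n):
    """Honest transcripts , enumerated uniformly over (pi_r, b) ; uses the witness pi."""
    out = Counter()
    for pi_r in permutations(range(n)):
        G_tilde = apply_perm(pi_r, G0)
        for b in (0, 1):
            pi_b = pi_r if b == 0 else compose(pi_r, invert(pi))
            out[(G_tilde, b, pi_b)] += 1
    return out

def simulated_transcripts(G0, G1, n):
    """M's outputs , enumerated uniformly over (b, pi_b) ; no witness needed."""
    out = Counter()
    for b in (0, 1):
        for pi_b in permutations(range(n)):
            out[(apply_perm(pi_b, G1 if b else G0), b, pi_b)] += 1
    return out

G0 = frozenset({frozenset({0, 1}), frozenset({1, 2})})
pi = (1, 2, 0)
G1 = apply_perm(pi, G0)
print(real_transcripts(G0, G1, pi, 3) == simulated_transcripts(G0, G1, 3))  # True
```

    The equality holds because , for \(b=1\) , \(\pi_b=\pi_r\circ\pi^{-1}\) ranges bijectively over \(S_n\) as \(\pi_r\) does .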

  5. Dishonest-Verifier Zero-Knowledge : \(V^*\) may not follow the protocol : \(b\) may depend on \(\tilde G\) .

    \(M^*\) : performs as \(P\) ( but without knowing \(\pi\) )

    1. Strategy :

      1. \(M^*\) samples \(b^*\gets \{0,1\}\) , samples \(\pi_{b^*}\gets S_n\) , computes \(\tilde G=\pi_{b^*}(G_{b^*})\)

      2. \(M^*\) runs with \(V^*\) and obtains \(b\) .

      3. If \(b^*=b\) , reply with \(\pi_{b^*}\) and output the view .

        If \(b^*\neq b\) , go back to (1).

    2. Prove \(M^*(x)\equiv View_{V^*}^P(x)\) :

      Proof Strategy : Hybrid Argument : show that every two adjacent hybrids \(Hyb\) are identically distributed ( or indistinguishable ) .

      1. \(Hyb_0\) : \(P\) runs with \(V^*\) , output \(View_{V^*}^P(x)\)

      2. \(Hyb_1\) : \(\tilde P\) runs with \(V^*\) , receives \(b\) . Then \(\tilde P\) samples \(b^*\gets \{0,1\}\)

        If \(b^*=b\) , \(\tilde P\) continues with \(V^*\) . Otherwise , \(\tilde P\) reruns with \(V^*\) .

        • \(\mathbb E\{\text{repetitions}\}=2\)
        • \(View_{V^*}^P(x)\equiv View_{V^*}^{\tilde P}(x)\)

        Proof : \(\forall v\) , compute \(\Pr\{View_{V^*}^{\tilde P}(x)=v\}\)

        Define \(v_i\) : view of \(V^*\) in the \(i\)-th iteration

        If \(\tilde P\) finishes at exactly the \(l\)-th iteration , then \(v_l\equiv View_{V^*}^{P}(x)\) \[ \begin{aligned} \Pr\{View_{V^*}^{\tilde P}(x)=v\}&=\sum_{l=1}^{+\infty} \Pr\{v_l=v\land b_l^*=b_l\land \forall i<l,b_i^*\neq b_i\}\\ &=\sum_{l=1}^{+\infty}\Pr\{v_l=v\}\Pr\{b_l^*=b_l\land \forall i<l,b_i^*\neq b_i\}\\ &=\Pr\{View_{V^*}^P(x)=v\}\sum_{l=1}^{+\infty}\Pr\{b_l^*=b_l\land \forall i<l,b_i^*\neq b_i\}\\ &=\Pr\{View_{V^*}^P(x)=v\} \end{aligned} \]
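        The last equality uses \(\Pr\{b_l^*=b_l\land \forall i<l,b_i^*\neq b_i\}=2^{-l}\) ( each fresh \(b_i^*\) matches independently with probability \(\frac12\) ) , so the probabilities form a geometric series summing to 1 . A quick exact check of the partial sums :

```python
from fractions import Fraction

# Pr{ b_l^* = b_l and forall i < l : b_i^* != b_i } = (1/2)^l ; partial sums of the series .
partial = Fraction(0)
for l in range(1, 41):
    partial += Fraction(1, 2) ** l
print(1 - partial)  # (1/2)^40 : the leftover mass after 40 iterations
```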

      3. \(Hyb_2\) : \(\tilde P\) samples \(b^*\gets \{0,1\}\) . Then \(\tilde P\) runs with \(V^*\) , receives \(b\) .

        If \(b^*=b\) , \(\tilde P\) continues with \(V^*\) . Otherwise , \(\tilde P\) reruns with \(V^*\) .

        • \(\mathbb E\{\text{repetitions}\}=2\)
        • \(Hyb_1\equiv Hyb_2\) ( Since we only swap two independent operations )
      4. \(Hyb_3\) : \(\tilde P\) samples \(b^*\gets\{0,1\}\) . Then \(\tilde P\) samples \(\pi_{b^*}\) and computes \(\tilde G=\pi_{b^*}(G_{b^*})\) . Then \(\tilde P\) runs with \(V^*\) , receives \(b\) .

      If \(b^*=b\) , \(\tilde P\) continues with \(V^*\) . Otherwise , \(\tilde P\) reruns with \(V^*\) .

      • \(\mathbb E\{\text{repetitions}\}=2\)
      • \(Hyb_2\equiv Hyb_3\) ( Since the distribution of \((\pi_{b^*},\tilde G)\) is the same )
      • \(Hyb_3\) is exactly \(M^*\)
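      \(Hyb_3\) , i.e. the simulator \(M^*\) itself , can be sketched as a rewinding loop . Here \(V^*\) is modelled as an arbitrary function from \(\tilde G\) to a challenge bit ; the particular v_star below is only an illustrative adversary , and all names are assumptions of this sketch :

```python
import random

def apply_perm(pi, graph):
    return frozenset(frozenset({pi[u], pi[v]}) for u, v in map(tuple, graph))

def simulate(G0, G1, n, v_star, rng):
    """Rewinding simulator M* : guess b* , rerun V* until its challenge matches."""
    tries = 0
    while True:
        tries += 1
        b_star = rng.randrange(2)                        # guess the challenge
        pi_b = tuple(rng.sample(range(n), n))
        G_tilde = apply_perm(pi_b, G1 if b_star else G0)
        b = v_star(G_tilde)                              # V*'s (adversarial) challenge
        if b == b_star:
            return (G_tilde, b, pi_b), tries             # a complete accepting view

# An illustrative dishonest V* whose challenge depends on G_tilde .
v_star = lambda G: sum(sum(e) for e in G) % 2

rng = random.Random(1)
G0 = frozenset({frozenset({0, 1}), frozenset({1, 2})})
G1 = apply_perm((1, 2, 0), G0)
runs = [simulate(G0, G1, 3, v_star, rng) for _ in range(500)]
print(sum(t for _, t in runs) / 500)  # around 2
```

      Since \(G_0\cong G_1\) , the distribution of \(\tilde G\) does not depend on \(b^*\) , so each attempt succeeds with probability exactly \(\frac12\) and the expected number of attempts is 2 .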
  6. Zero-knowledge under Repetition (for soundness)

    1. Def [parallel composition] :

      1. \(P\) : samples \(\pi_r^i\leftarrow S_n\) and computes \(\tilde {G^i}:=\pi_r^i(G_0)\) for each \(i\in [\kappa]\) , sends \(\{\tilde {G^i}\mid i\in [\kappa]\}\) to \(V\) .

      2. \(V\) : samples \(b\leftarrow \{0,1\}^\kappa\) , sends \(b\) to \(P\) .

      3. \(P\) : finds \(\pi_b^i\) , s.t. \(\tilde {G^i}=\pi_{b}^i(G_{b_i})\) , sends \(\{\pi_b^i\mid i\in [\kappa]\}\) to \(V\) .

      4. \(V\) : outputs \(\begin{cases}1&\forall i\in [\kappa],\tilde {G^i}=\pi_b^i(G_{b_i})\\0&\text{otherwise}\end{cases}\)
    2. Brute force simulator :

      randomly sample \(b^*\gets\{0,1\}^\kappa\) ; if the received \(b\neq b^*\) , rerun .

      Good : \(M^*(x)\equiv View_{V^*}^P(x)\)

      Bad : \(\mathbb E\{\text{repetitions}\}=2^{\kappa}\)
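      The blow-up is easy to see numerically : a fresh uniform guess \(b^*\in\{0,1\}^\kappa\) matches a \(\kappa\)-bit challenge with probability \(2^{-\kappa}\) , so the number of reruns is geometric with mean \(2^\kappa\) . A quick illustrative simulation ( the uniform b stands in for \(V^*\)'s challenge ) :

```python
import random

def avg_tries(kappa, trials, rng):
    """Average number of reruns until a fresh uniform b* matches a kappa-bit challenge."""
    total = 0
    for _ in range(trials):
        while True:
            total += 1
            if rng.getrandbits(kappa) == rng.getrandbits(kappa):
                break
    return total / trials

rng = random.Random(0)
print(avg_tries(1, 2000, rng))  # close to 2^1 = 2
print(avg_tries(4, 2000, rng))  # close to 2^4 = 16
```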

    For most ZKPs , we do not know how to prove dishonest-verifier zero-knowledge under parallel composition .

2.3 Sequential Composition

  1. Def [sequential composition] : \((P,V)\) is an IP for \(L\) . Let \((P_q,V_q)\) be \((P,V)\) repeated sequentially \(q\) times .

  2. Brute force simulator for Graph Isomorphism :

    For \(i=1,2,\cdots,\kappa\) :

    1. \(M^*\) samples \(b^*\gets\{0,1\}\) , samples \(\pi_{b^*}\gets S_n\) , computes \(\tilde G=\pi_{b^*}(G_{b^*})\)

    2. \(M^*\) runs with \(V^*\) and receives \(b\)

    3. If \(b=b^*\) , \(M^*\) continues working as \(P\)

      If \(b\neq b^*\) , \(M^*\) restarts from \(i=1\)

    Bad : \(\mathbb E\{\text{repetitions}\}=\mathcal O(2^{\kappa})\)

  3. Idea : can we avoid restarting from \(i=1\) ?

    Store the state of \(V^*\) : \((\text{input},\text{memory},\text{output},\text{randomness})\)

    We can include the randomness of \(V^*\) in the stored state , since it is provided by \(M^*\) : the random tape of \(V^*\) can be fixed in advance .

    Simulator \(M^*\) :

    For \(i=1,2,\cdots,\kappa\) :

    1. \(M^*\) saves the state \(st_{i-1}\) of \(V^*\)

    2. \(M^*\) samples \(b_i^*\gets\{0,1\}\) , samples \(\pi_{b_i^*}\gets S_n\) , computes \(\tilde G_i=\pi_{b_i^*}(G_{b_i^*})\)

    3. \(M^*\) runs with \(V^*\) and receives \(b_i\)

    4. If \(b_i=b_i^*\) , \(M^*\) continues working as \(P\)

      If \(b_i\neq b_i^*\) , \(M^*\) reloads \(st_{i-1}\) for \(V^*\) and goes back to (2).
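    The state-saving simulator above can be sketched as follows . The VStar class and its save / load interface are illustrative stand-ins for snapshotting \((\text{input},\text{memory},\text{output},\text{randomness})\) :

```python
import random

def apply_perm(pi, graph):
    return frozenset(frozenset({pi[u], pi[v]}) for u, v in map(tuple, graph))

class VStar:
    """An illustrative stateful dishonest verifier ; its challenge depends on all graphs seen."""
    def __init__(self):
        self.seen = []
    def challenge(self, G_tilde):
        self.seen.append(G_tilde)
        return (len(self.seen) + sum(sum(e) for e in G_tilde)) % 2
    def save(self):
        return list(self.seen)      # snapshot of the state st_{i-1}
    def load(self, st):
        self.seen = list(st)        # reload st_{i-1}

def simulate_seq(G0, G1, n, kappa, v_star, rng):
    """Rewind only the current iteration : expected 2 tries per round , ~2*kappa in total."""
    views, tries = [], 0
    for _ in range(kappa):
        st = v_star.save()
        while True:
            tries += 1
            b_star = rng.randrange(2)
            pi_b = tuple(rng.sample(range(n), n))
            G_tilde = apply_perm(pi_b, G1 if b_star else G0)
            if v_star.challenge(G_tilde) == b_star:
                views.append((G_tilde, b_star, pi_b))
                break
            v_star.load(st)         # failed guess : rewind this round only
    return views, tries

rng = random.Random(0)
G0 = frozenset({frozenset({0, 1}), frozenset({1, 2})})
G1 = apply_perm((1, 2, 0), G0)
views, tries = simulate_seq(G0, G1, 3, 20, VStar(), rng)
print(len(views), tries)  # 20 rounds ; tries is typically around 2 * kappa
```

    Rewinding only to \(st_{i-1}\) keeps the expected total work at \(2\kappa\) instead of the \(2^\kappa\) cost of restarting from \(i=1\) .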

  4. Def [Interactive Proof with Auxiliary Input] :

    \((P,V)\) is an interactive zk-proof with auxiliary input for \(L\) if

    • \(V\) is an efficient algorithm ( poly-time )

    • Completeness : \(\forall x\in L,\forall y,z\in \{0,1\}^*,\Pr\{\braket{P(y),V(z)}(x)=1\}=1\)

    • (Almost) Soundness : \(\forall x\notin L,\forall y,z\in \{0,1\}^*,\forall P^*,\Pr\{\braket{P^*(y),V(z)}(x)=1\}<\epsilon\)

    • Zero-Knowledge : \(\forall V^*\) in P.P.T. , \(\exists\) expected poly-time algorithm \(M^*\) , \(\forall x\in L,\forall y,z\in \{0,1\}^*\) , \[ M^*(x,z)\sim View_{V^*(z)}^{P(y)}(x) \]

  5. Lemma [Composition Lemma] :

    \((P,V)\) : zero-knowledge IP for \(L\) , \(q\) is a polynomial of \(\kappa\)

    \((P_q,V_q)\) : An IP runs \((P,V)\) \(q\) times

    Claim : \((P_q,V_q)\) is still zero-knowledge .

    Proof Intuition

    In the \(i\)-th iteration , \(V_q\) behaves like a verifier \(V_i^*(x,st_{i-1})\) with auxiliary input \(st_{i-1}\) ; construct the corresponding simulator \(M_i^*(x,st_{i-1})\) .

    Prove \(M_i^*(x,st_{i-1})\sim View_{V_i^*(st_{i-1})}^P(x)\)