## Talks at CCD

We frequently host talks at CCD. At the moment, all our talks are virtual, but we are eagerly awaiting the time when speakers can visit us and give talks in person. If you are interested in giving a talk, please reach out to us.
The first half of this presentation will give a high-level view of existing lattice-based schemes and the concrete attacks against them. By “concrete”, I mean that these attacks can use additional information such as timing or power measurements. I will present several generic tools and will then focus on examples of attacks that exploit lattice-related features. The second half of the presentation will introduce different algorithmic protection techniques and will highlight the issues that are inherent to lattice structures. An important part of countermeasure design is often dedicated to proving the validity of such countermeasures. I will outline some proof techniques (without going into the “boring” details). Some of these techniques were introduced specifically for lattice-based schemes, but they could have wider applications.
Witness encryption, first introduced by Garg, Gentry, Sahai and Waters (GGSW13), is an encryption scheme in which messages are encrypted with respect to instances of an NP relation, such that in order to decrypt one needs to know a valid witness for the instance associated with the ciphertext. Despite significant efforts in the past decade to construct WE from standard assumptions, to the best of our knowledge all existing WE candidates either rely directly on iO or use techniques that also seem to imply iO. In this work we propose a new hardness assumption about lattice trapdoors and show a witness encryption candidate that is secure under it. Intuitively, the assumption says that “the best an attacker can do with a trapdoor sample is to use it semi-honestly”, i.e., that LWE with respect to a public matrix A, given as auxiliary information a trapdoor sample K = A^{-1}(B), is as hard as LWE with respect to the public matrix A|B with no auxiliary information. A similar assumption was recently suggested by Wee at Eurocrypt 2022, where broadcast encryption is derived from it.
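To make the objects in the abstract concrete, here is a toy sketch of the LWE samples the assumption talks about. All parameters and names are illustrative (and far too small to be secure); it only shows the shape of the data, not a real trapdoor sampler. The assumption then says that such samples with respect to A stay pseudorandom even when the attacker is additionally given a (short) K with A·K = B.

```python
import random

q = 3329          # toy modulus, chosen only for illustration
n, m = 4, 8       # toy dimensions

def lwe_sample(A, s, noise=1):
    """Return b = A^T s + e mod q for a small error vector e."""
    cols = len(A[0])
    b = []
    for j in range(cols):
        val = sum(A[i][j] * s[i] for i in range(len(s))) % q
        e = random.randint(-noise, noise)   # small noise, heart of LWE hardness
        b.append((val + e) % q)
    return b

random.seed(0)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
s = [random.randrange(q) for _ in range(n)]
b = lwe_sample(A, s)
# The assumption: (A, b) is pseudorandom even given K = A^{-1}(B),
# just as (A|B, samples) is pseudorandom with no auxiliary information.
```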
Payment channels (PC) are a promising solution to the scalability issue of cryptocurrencies, allowing users to perform the bulk of their transactions off-chain without needing to post everything on the blockchain. Many PC proposals, however, suffer from a severe limitation: both parties need to constantly monitor the blockchain to ensure that the other party did not post an *outdated* transaction. If this event happens, the honest party needs to react promptly and engage in a *punishment* procedure. This means that prolonged absence periods (e.g., a power outage) may be exploited by malicious users. As a mitigation, the community has introduced *watchtowers*, third parties that monitor the blockchain on behalf of off-line users. Unfortunately, watchtowers are either trusted, which is critical from a security perspective, or they have to lock a certain amount of coins, called collateral, for each monitored PC in order to be held accountable, which is financially infeasible for a large network. We present *Sleepy Channels*, the first bi-directional PC protocol without watchtowers (or any other third party) that supports an unbounded number of payments and does not require parties to be persistently online. The key idea is to confine the period in which PC updates can be validated on-chain to a short, pre-determined time window, which is when the PC parties have to be online. This behavior is incentivized by letting the parties lock a collateral in the PC, which can be adjusted depending on their mutual trust and which they get back much sooner if they are online during this time window. Our protocol is compatible with any blockchain that is capable of verifying digital signatures (e.g., Bitcoin), as shown by our proof of concept. Moreover, our experimental results show that Sleepy Channels impose a communication and computation overhead similar to state-of-the-art PC protocols while removing watchtowers' collateral and fees for the monitoring service.
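The key idea of the pre-determined claim window can be sketched in a few lines. This is a heavily simplified model (all names, fields, and block numbers below are hypothetical, not from the actual Sleepy Channels protocol): a unilateral close is only accepted on-chain inside the window, so a party only needs to come online for that window rather than monitor continuously.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    balance_a: int
    balance_b: int
    collateral: int      # locked by the parties; returned if they show up on time
    window_start: int    # earliest block at which a unilateral close is valid
    window_end: int      # latest such block

def can_settle(ch: Channel, current_block: int) -> bool:
    """A unilateral close is only valid inside the pre-determined window,
    so honest parties need only be online during that window."""
    return ch.window_start <= current_block <= ch.window_end

ch = Channel(balance_a=5, balance_b=3, collateral=2,
             window_start=100, window_end=110)
```

In the real protocol this window is enforced by the blockchain's own script/timelock rules rather than by trusted code.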
Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t. Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. Nevertheless, we show a non-blackbox transformation of any moderately shrinking t-MCRH, for t in {3,4}, into an (infinitely often secure) CRH. This transformation is non-constructive - we can prove the existence of a CRH but cannot explicitly point out a construction. Our result partially extends to larger values of t. In particular, we show that for suitable values of t>t’, we can transform a t-MCRH into a t’-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes. Based on joint work with Ron Rothblum.
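The abundance of multi-collisions in a shrinking hash is just the pigeonhole principle, which a toy example makes vivid. The hash below is a made-up 8-bit-to-4-bit function (purely illustrative, not cryptographic): because the domain is 16 times larger than the range, t-way collisions for small t are guaranteed to exist and are easy to find by brute force at this scale.

```python
from collections import defaultdict

def toy_hash(x: int) -> int:
    # a made-up 8-bit -> 4-bit "hash"; illustrative only, not cryptographic
    return (x * 131 + 7) % 16

def find_t_way_collision(t: int):
    """Brute-force a t-way collision: t distinct inputs with equal hash."""
    buckets = defaultdict(list)
    for x in range(256):                 # domain is 16x the range, so by
        h = toy_hash(x)                  # pigeonhole some bucket must fill up
        buckets[h].append(x)
        if len(buckets[h]) == t:
            return buckets[h]
    return None

triple = find_t_way_collision(3)
```

For a real t-MCRH the point is that such collisions exist in abundance yet are computationally hard to find.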
Consider the following two fundamental open problems in complexity theory: (a) Does a hard-on-average language in NP imply the existence of one-way functions? (b) Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problems)? Our main result is that the answer to at least one of these questions is yes. Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems, namely, distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words, “it is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true.” This result follows from a more general study of interactive puzzles, a generalization of average-case hardness in NP, and in particular from a novel round-collapse theorem for computationally sound protocols, analogous to Babai and Moran's celebrated round-collapse theorem for information-theoretically sound protocols. Joint work with Rafael Pass.
Zero-knowledge protocols enable the truth of a mathematical statement to be certified by a verifier without revealing any other information. Such protocols are a cornerstone of modern cryptography and have recently become more and more practical. However, a major bottleneck in deployment is the efficiency of the prover and, in particular, the space-efficiency of the protocol. For every NP relation that can be verified in time T and space S, we construct a public-coin zero-knowledge argument in which the prover runs in time roughly T and space roughly S. This is joint work with Alexander Block (Purdue University), Justin Holmgren (NTT Research), Alon Rosen (Bocconi University) and Ron Rothblum (Technion).
We show several connections between data structure problems and cryptography against preprocessing attacks. Our results span data structure upper bounds, cryptographic applications, and data structure lower bounds, as summarized next. First, we apply Fiat-Naor inversion, a technique with cryptographic origins, to obtain a data structure upper bound. In particular, our technique yields a suite of algorithms with space S and (online) time T for a preprocessing version of the N-input 3SUM problem where S^3·T = O(N^6). This disproves a strong conjecture (Goldstein et al., WADS 2017) that there is no data structure that solves this problem for S = N^{2−δ} and T = N^{1−δ} for any constant δ > 0. Secondly, we show equivalence between lower bounds for a broad class of (static) data structure problems and one-way functions in the random oracle model that resist a very strong form of preprocessing attack. Concretely, given a random function F: N → N (accessed as an oracle) we show how to compile it into a function G^F: N^2 → N^2 which resists S-bit preprocessing attacks that run in query time T where ST = O(N^{2−ε}) (assuming a corresponding data structure lower bound on 3SUM). In contrast, a classical result of Hellman tells us that F itself can be more easily inverted, say with S = T = N^{2/3}. Thirdly, we give non-adaptive lower bounds for 3SUM which match the best known lower bounds for static data structure problems. Moreover, we show that our lower bound generalizes to a range of geometric problems, such as three points on a line, polygon containment, and others. This is joint work with Alexander Golovnev, Thibaut Horel, Sunoo Park and Vinod Vaikuntanathan.
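The flavor of 3SUM with preprocessing can be seen in a tiny sketch at one extreme of the space/time tradeoff (all function names are made up for illustration): precompute all pairwise sums with one witness each (space about N^2), then answer online in about N lookups. The full data structure results above interpolate far more cleverly between space and time.

```python
def preprocess(nums):
    """Space ~ N^2: index every pairwise sum with one witness pair of indices.
    (A complete solution would keep more witnesses; kept minimal for the sketch.)"""
    pair_sums = {}
    for i, a in enumerate(nums):
        for j, b in enumerate(nums):
            if i < j:
                pair_sums.setdefault(a + b, (i, j))
    return pair_sums

def query_zero_triple(nums, pair_sums):
    """Online time ~ N: find distinct indices with a + b + c = 0, if stored."""
    for k, c in enumerate(nums):
        if -c in pair_sums:
            i, j = pair_sums[-c]
            if k not in (i, j):
                return nums[i], nums[j], c
    return None

nums = [-5, 1, 4, 10, 2]
triple = query_zero_triple(nums, preprocess(nums))
```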
In this talk, I will begin by discussing a recent result, ERGO, for defending against Sybil attacks in dynamic permissionless systems in the presence of a computationally bounded adversary. This technique guarantees a majority of honest identities (IDs) at all times, at a computational cost to the protocol that is comparable to that of the attacker. Next, I will talk about the concept of resource burning: the verifiable consumption of resources. The resource does not necessarily need to be computational power; it can also be communication capacity, computer memory, or human effort, subject to some constraints. Using this insight, I will conclude with a generalization of our Sybil defense techniques to a variety of systems with churn, in the presence of a resource-bounded adversary.
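The most familiar instance of resource burning is a computational hash puzzle (proof of work). The sketch below is a generic illustration of that idea, not the talk's specific construction; the challenge string and difficulty are made-up parameters. Finding a nonce costs about 2^difficulty hash evaluations, while verification costs one.

```python
import hashlib

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    """Burn ~2^difficulty hash evaluations to find a valid nonce."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce          # digest has `difficulty` leading zero bits
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash evaluation."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

nonce = solve_puzzle(b"epoch-42", 8)   # tiny difficulty so this runs instantly
```

As the abstract notes, the burned resource need not be computation; the same "costly to produce, cheap to verify" pattern can be instantiated with bandwidth, memory, or human effort.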
Byzantine Fault Tolerant (BFT) protocols in the synchronous setting have often been considered impractical due to the strong synchrony assumption. On the flip side, synchronous protocols can tolerate up to one-half Byzantine faults. In this talk, I will explain my journey towards improving synchronous protocols, both in theory and in practice.
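The one-half bound is worth contrasting with the classical one-third bound for partially synchronous or asynchronous BFT. A minimal sketch of these standard thresholds (textbook bounds, not specific to the protocols in this talk):

```python
def max_faults_sync(n: int) -> int:
    """Synchronous BFT tolerates f Byzantine replicas whenever f < n/2."""
    return (n - 1) // 2

def max_faults_partial_sync(n: int) -> int:
    """Partially synchronous / asynchronous BFT requires n >= 3f + 1, i.e. f < n/3."""
    return (n - 1) // 3
```

So with n = 7 replicas, a synchronous protocol can tolerate 3 Byzantine faults where a partially synchronous one tolerates only 2, which is the motivation for making synchronous protocols practical.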
We construct a succinct non-interactive publicly-verifiable delegation scheme for any logspace uniform circuit under the sub-exponential Learning With Errors (LWE) assumption. For a circuit C : {0, 1}^N -> {0, 1} of size S and depth D, the prover runs in time poly(S) and the communication complexity scales with the depth D rather than the size S. This is based on joint work with Yael Tauman Kalai, Dakshita Khurana, and Rachel Zhang.
A non-interactive zero-knowledge (NIZK) proof allows a prover to convince a verifier about the truth of a statement by sending a single message, without revealing any other information. NIZKs are fundamental cryptographic primitives that have many applications, such as CCA-secure cryptosystems, signature schemes, blockchains, and more.
We provide the first constructions of non-interactive zero-knowledge and Zap arguments for NP based on the sub-exponential hardness of Decisional Diffie-Hellman against polynomial time adversaries (without use of groups with pairings). Central to our results, and of independent interest, is a new notion of interactive trapdoor hashing protocols. Joint work with Abhishek Jain.
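For readers new to NIZKs, a classical toy example (unrelated to the DDH-based construction of this work) is a Schnorr proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic, which models a hash function as a random oracle. The parameters below are illustratively tiny and completely insecure.

```python
import hashlib, random

# Toy Schnorr NIZK via Fiat-Shamir. p = 2q + 1 with p, q prime;
# g = 4 generates the order-q subgroup. Illustrative parameters only.
p, q, g = 2039, 1019, 4

def prove(x):
    """Statement: h = g^x mod p. Witness: x. One message, no interaction."""
    h = pow(g, x, p)
    r = random.randrange(q)
    a = pow(g, r, p)                       # commitment
    c = int.from_bytes(hashlib.sha256(f"{g},{h},{a}".encode()).digest(),
                       "big") % q          # challenge derived from a hash
    z = (r + c * x) % q                    # response
    return h, (a, z)

def verify(h, proof):
    a, z = proof
    c = int.from_bytes(hashlib.sha256(f"{g},{h},{a}".encode()).digest(),
                       "big") % q
    # Check g^z == a * h^c mod p; reveals nothing about x beyond its existence.
    return pow(g, z, p) == (a * pow(h, c, p)) % p

h, pi = prove(123)
```

Fiat-Shamir relies on the random-oracle model; the point of results like the one above is to obtain NIZKs and Zaps from concrete assumptions such as sub-exponential DDH instead.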
Since the introduction of Bitcoin, the first widespread application driven by blockchains, interest in the design of blockchain-based applications has increased tremendously. At the core of these blockchain applications are consensus protocols that aim at securely replicating a client request among all replicas, even if some replicas are Byzantine faulty. Unfortunately, modern consensus protocols either yield low throughput or face design limitations. In this work, we present the design of three consensus protocols that facilitate efficient consensus among the replicas. Our protocols help to scale consensus through the principles of phase-reduction, parallelization, and geo-scale clustering, while ensuring no compromise in fault tolerance. Further, we believe that the focus on consensus protocols is only one side of the story. Specifically, we present the design of a well-crafted permissioned blockchain fabric (ResilientDB) that can help even a slow consensus protocol outperform a faster protocol. Our results indicate that it is easy to scale BFT protocols to hundreds of replicas and achieve throughputs of the order of 350K txns/s.