
Historical Talk on Lattice-Based Cryptography

Date

15 July, 2015


Transcript by

Bryan Bishop

https://twitter.com/kanzure/status/772649346454040577

Lattice cryptography - from computational complexity to fully homomorphic encryption

Introduction 1

First of all, thanks for coming back from lunch; it's always an achievement. I want to say that this is a special talk: within the workshop here at Simons we have a historical crypto papers seminar series, where we try to revisit historical papers and see what led to them and how they impacted things later. One of the talks we asked for was a look into lattice crypto, and it was a perfect opportunity to ask Daniele to give this talk, so I am happy to introduce him. Just to introduce Daniele a little bit: he did his PhD at MIT and is currently a professor at UCSD. He is an expert in many things, crypto protocols and lattices specifically; he touches everything, and many of the basic primitives that we saw in the talks over the morning sessions were introduced by him. He has results on the NP-hardness of lattice problems, worst-case to average-case connections, and various algorithms for lattices. He also introduced the rigorous analysis of lattice-based crypto constructions, expanding on the work that had appeared with NTRU. He has also contributed to our knowledge by writing books and lecture notes, which took a lot of effort and were a great contribution to the community. He also recently introduced a logic, an algebra for reasoning about multi-party computation, something really forward looking and perhaps before its time; hopefully the time will come. He has served in many leadership roles, journals, program chairs and so on. He has won the Machtey award, the NSF CAREER award, fellowships, and, um, I have two things to say which are a bit less formal. First of all, he's really cool: in his young days he rode a motorbike. Oh, that is still true. Still young, that is true, sorry for that. The second thing I wanted to say is a story that I found amusing, which Daniele himself told me, about how you get into certain areas of research sometimes. At MIT, they have this exam early in your career where you have to read a paper and do some original research related to that paper. Shafi, in her wisdom, gave him a paper on lattices, and that's how this whole thing started. A great advisor with a great student who really did fantastic things. So please help me welcome Daniele. Thank you.

Introduction 2

Thank you for the introduction, and thank you and Daniel for inviting me to give this historical talk on lattices. Wow, it's quite something; it says quite a bit about how much this area has accomplished. I started working on lattices in the early days, when lattices started to be used to build cryptographic functions, and I have pretty much worked in that area since then. So that's great, we got to the point where we can give a historical talk about the subject. It also says a little bit about my age; that's the price, I guess. I do still ride a motorcycle, a different one though.

Solving lattices

As you know, we've been talking about lattices and more specifically lattice cryptography. You saw lattices a number of times during the bootcamp and earlier this morning, and there will not be much that is very technical in this talk. So, this is a lattice: a set of points in space that are arranged regularly. One relevant feature of lattices is that these are points in real, Euclidean space. So lattices are mathematical objects that mix together some continuous and some discrete properties: they are described by real numbers, but they form a discrete set of points, in the topological sense that the points are far apart from each other. And we'll see how these properties come up in the study of lattice cryptography.

https://www.youtube.com/watch?v=4KVJbEOqB_Q&t=5m40s

Any historical talk about lattices, at least in the context of computer science, necessarily has to start with a mention of the Lenstra-Lenstra-Lovász algorithm (LLL algorithm) from the 1982 paper "Factoring polynomials with rational coefficients". It was an algorithmic breakthrough: it gave an efficient approximate solution to lattice problems. It has an exponential approximation factor, but it is very good in practice. The killer app was cryptanalysis.

The paper is mostly known for this approximation algorithm, which can be used to efficiently find approximate solutions to lattice problems. The approximation factor grows exponentially with the dimension of the lattice, but the algorithm works pretty well in practice and it has been very useful in cryptanalysis. This is not what I will be talking about, though.

What I will be talking about is what happened afterwards, when Ajtai in 1996 published "Generating hard instances of lattice problems", which marked the beginning of the modern use of lattices in the design of cryptographic functions. It was really a turning point for lattice cryptography. Up to that point, lattices had been used as a destructive tool, a tool to break and attack cryptography: cryptosystems that were related to lattices, and also cryptosystems that had nothing to do with lattices; there are attacks against RSA based on lattice reduction. With this 1996 paper, we started to see a different use of lattices, in the construction of cryptographic functions. I will talk a little bit about that.

https://www.youtube.com/watch?v=4KVJbEOqB_Q&t=7m36s

Let me start with a few quotes from this paper of Ajtai. So, this slide doesn't have any pictures, just like the paper. And uh (laughter), which is good and bad, okay. You know, a picture is worth 1000 words, but some pictures can be worth 1001 words; I'll go back to this point later in the talk. This paper, I think, is a really nice and really good read. If any of you didn't read it, because there's so much to read and you want to only read recent work, perhaps you should go take a look. It's not as scary as it looks.

In the paper, besides the technical results, he argued about the importance of the problem. Cryptography involves the generation of instances of problems in NP, typically together with a solution, and you want to generate instances that are thought to be difficult. Now, based on our knowledge of complexity theory, we don't know how to prove that something is difficult, that solving it requires quite a bit of time, if you stay within NP. So in order to find such problems, one usually resorts to one of two possible methods. One is to tap into computational complexity and the theory of NP-hard problems. NP-hard and NP-complete problems are as hard as any other problem in the same class: if any problem in NP is hard, then that problem is hard, so you're fine. Alternatively, you can pick a question that has been studied for a long time, such as integer factorization. We believe that factoring is hard, at least classically, and there's evidence of hardness there. However, in both cases this is hardness in the worst case: it only means that there's no algorithm that solves the problem for every possible instance. And the impact on cryptographic applications is that there's no guidance about how to create hard instances of the problem. It has long been realized that one possible solution to this issue is to find a distribution of random instances of the problem, and then prove that if you can solve these random instances with some positive probability, then you can solve the famous hard mathematical problem in the worst case. This gives you a way to pick random instances: you pick them according to that distribution, and you know that solving them is as hard as solving the famous problem in the worst case. And that's exactly what Ajtai's paper did.

These are important results beyond lattices. He used lattices as a tool; he could have used any other problem. Lattices simply worked, they did the job. Now, there is a question that comes up pretty often, asked many times in my talks about lattice cryptography. The question is typically dismissed by the speaker: there's an answer, but I won't answer it now because I want to talk about my work and the question isn't about my work... The question is usually: don't we already have something like that in cryptography? For example, the discrete logarithm problem is random self-reducible. In the discrete log problem (DLOG problem), more specifically, let's consider this version: you are given a prime number p, which defines a multiplicative group ZpStar, a generator g of the entire group or of some prime order subgroup of ZpStar, and an element h of the group generated by g. The question is to recover i, the exponent such that h = g^i. This problem is random self-reducible in the sense that if you can solve it with some positive, even very small, non-negligible probability when g and h are chosen at random, then you can solve the problem also in the worst case. In the previous slide I described how you can use this kind of relation to find hard instances; the two problems don't have to be the same. In the case of the discrete log they happen to be the same problem, which you can solve either on average or in the worst case, and that's why it's called random self-reducibility.

Random self-reducibility

https://www.youtube.com/watch?v=4KVJbEOqB_Q&t=13m

Random self-reducibility (RSR) is pretty simple and you've probably seen it before or already figured it out yourself. If you are given an arbitrary worst-case instance of the problem, specified by g and h, what you do is raise these group elements to random powers: g' = g^a and h' = h^(a*b) for random a, b from ZqStar (where q is the order of g). Notice that g' and h' are almost uniformly distributed; they describe a random instance of the problem, even if you fix g and h. If you can solve this random instance with some probability, then you find the solution to it, which is the discrete logarithm of h' with respect to g', namely i*b. So you can find i, the solution to the original problem, just by dividing this number by b. So what does this mean?

It means we know how to pick g and h in the discrete log problem. We know there's a right way to pick g and h to make the problem hard: choose them at random. However, this does not give any guidance about how to choose the group G.
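To make the random self-reduction concrete, here is a minimal C sketch under toy assumptions of my own choosing (p = 23, a subgroup of prime order q = 11 generated by g = 2, and a brute-force search standing in for the hypothetical average-case solver); it only illustrates the randomization step and the final division by b, nothing from the talk itself.

/* Toy sketch of the DLOG random self-reduction described above.
 * Parameters and the brute-force "oracle" are illustrative assumptions,
 * far too small to mean anything cryptographically.
 */
#include <stdio.h>
#include <stdlib.h>

static unsigned long modexp(unsigned long base, unsigned long exp, unsigned long mod) {
    unsigned long r = 1 % mod;
    base %= mod;
    while (exp) {
        if (exp & 1) r = (r * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return r;
}

/* Stand-in for an average-case DLOG solver: brute force over the exponent. */
static unsigned long dlog_oracle(unsigned long g, unsigned long h,
                                 unsigned long p, unsigned long q) {
    for (unsigned long x = 0; x < q; x++)
        if (modexp(g, x, p) == h) return x;
    return 0; /* not reached for valid inputs */
}

int main(void) {
    const unsigned long p = 23, q = 11, g = 2;  /* g has order q modulo p */
    const unsigned long i = 7;                  /* secret worst-case exponent */
    unsigned long h = modexp(g, i, p);          /* worst-case instance (g, h = g^i) */

    /* Randomize: g' = g^a, h' = h^(a*b) for random a, b in ZqStar. */
    unsigned long a = 1 + (unsigned long)rand() % (q - 1);
    unsigned long b = 1 + (unsigned long)rand() % (q - 1);
    unsigned long gp = modexp(g, a, p);
    unsigned long hp = modexp(h, (a * b) % q, p);

    /* The solver sees an (almost) uniform instance and returns i*b mod q. */
    unsigned long ib = dlog_oracle(gp, hp, p, q);

    /* Divide by b modulo q; since q is prime, b^(q-2) is the inverse of b. */
    unsigned long recovered = (ib * modexp(b, q - 2, q)) % q;
    printf("secret i = %lu, recovered i = %lu\n", i, recovered);
    return 0;
}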

Discrete log problem versus lattices: The lattice assumption

I'll talk about lattices at length, but before we get to lattices, let me talk about this discrete log problem. I think the contrast with lattices illustrates some important points about lattice cryptography. Some of these points are quite relevant to some more recent developments that use lattices.

So what is the difference between these two kinds of hardness? In the lattice case, the assumption is that there's no algorithm that solves lattice problems efficiently, or rather, that the complexity of solving lattice problems on n-dimensional lattices grows superpolynomially (or exponentially) in n. We don't really have a single distribution of instances; we have a set of distributions, one for every value of n. You might think this is not much different from the discrete log: there you also have a set of distributions, one for every group, and the conjecture is that the complexity of solving the problem grows superpolynomially in the bit size of the modulus or the group size. But these two problems (the lattice problem and the discrete log problem) are not quite the same. The reason is that for every value of n, which you can think of as a security parameter, there are exponentially many primes p. When you make a conjecture of this form, the prime p is typically chosen at random: you set the security parameter n, and then you choose the modulus as a random prime of that length. So the assumption is still an average-case assumption: we're assuming that when the prime p is chosen at random, the problem is hard to solve. If you phrase this in the language of complexity theory: if you were to assume that when you pick p at random you get a hard p with some positive probability like epsilon, that's what is called a weak one-way function. You can make a stronger assumption and ask that when you choose p at random you get a hard p with probability exponentially close to 1, and that would be a strong one-way function. But when you make an assumption of the usual form, you're making a super-strong assumption: you're saying that if you pick p at random, it will be hard with probability 1, which is even stronger than what you assume in the average-case setting. The issue is that we don't know how to relate the complexity of the discrete log problem for different groups and different primes p. So random self-reducibility does not give any guidance about the choice of the group.

https://www.youtube.com/watch?v=4KVJbEOqB_Q&t=17m38s

You could try to fix this problem by just fixing a prime. Let's say that for every value of n, instead of picking a random prime of length n, you choose the first prime of that length: we know there is always a prime between 2^n and 2^(n+1), so you can define p_n to be the smallest one. Let's not worry about how to find it. Now you get something closer to the situation you had with lattices. But there is a part of the definition of this problem which is kind of arbitrary: why pick the first one? Maybe the first one is easy. There's no particular reason to believe the first one is going to be easy, but it's clear that this is kind of a cheat.

In the case of lattices, there is more, if you consider the sequence of problems of increasing dimension. You can reduce a lattice problem in dimension n to a lattice problem in a larger dimension m (where m > n). The way you do it is just by adding some extra coordinates and putting some big numbers there. So, in a technical sense, solving lattice problems in a larger dimension is at least as hard as solving lattice problems in lower dimensions. You can think of this sequence of distributions over larger and larger dimensions as a single problem, of which you consider subsets of growing size. So yes, we have a single problem which is assumed to be hard in the worst case, and there is basically no randomness or arbitrariness in the assumption. No such reduction is known for the discrete log problem. Even if we take this very specific example, is there any way to reduce the discrete log with modulus p_n to the discrete log with a prime modulus of larger bit size?
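To make this dimension-increasing reduction concrete, here is one simple way to write it down (my notation, a sketch rather than the exact construction from the talk). Pad a basis B of the n-dimensional lattice with extra coordinates carrying a huge constant M, larger than any length relevant to the instance:

\[
B' = \begin{pmatrix} B & 0 \\ 0 & M \cdot I_{m-n} \end{pmatrix}, \qquad M \text{ huge}.
\]

Any vector of the new lattice that uses the extra coordinates has length at least M, so short (or close) lattice vectors in dimension m must be supported on the first n coordinates, and a solver for dimension m immediately solves the original dimension-n instance.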

And one last issue is that for the discrete log problem, the hardness seems to depend on the representation of your numbers. The multiplicative group of the integers modulo p is isomorphic to the additive group of the integers modulo p - 1, but the discrete log problem in that additive group is trivial. So the fact that the problem is hard sometimes depends on how the problem is represented. Of course you can always make a problem easier by choosing some weird representation that implicitly contains the solution, so it's not really a formal argument. But here this is an obvious, natural representation of the elements of the group. I'm not exactly sure what it says about the problem, but for lattices, I don't know of any reasonable representation that makes them easy.

I talked about the discrete log modulo a prime, but you can also talk about other groups, for example the discrete log modulo the product of two primes. You can take two random primes of the same length. Which of these two families should you use? A prime modulus is what you typically use, and for many applications it's sensible. But consider the following question. Assume that somehow, for these two distributions, discrete log modulo a prime or modulo the product of two primes, one of them is polynomial time solvable and the other one is hard. Which one would you pick? Which group family would you pick? Why should you pick one of the two distributions? Personal preference aside, yes. So the question is: consider the discrete log problem modulo a prime, or modulo the product of two primes, an RSA group. We don't know; there's no strong mathematical evidence that one class of groups is better than the other, other than the best known attacks. So maybe one of them is easy. Let's say that somehow one of them is easy, polynomial time solvable, and one of them is hard. I tell you that, okay? For random p and q. Which one would you go with? I'm making the point that random self-reducibility does not provide any guidance about the choice of the group, and I'm trying to trick you into giving the wrong answer (laughter). So whatever answer you had in mind, use the other one, okay? But this might be part of the trick. Okay, anybody want to take a guess? They are both easy? I guess it could be the case that one of them is hard and one of them is easy... Well, if you didn't see it, the reason is the Chinese remainder theorem. You know that Z modulo pq is isomorphic to the product Zp times Zq, for two different primes. This essentially tells you that Zp is a subset of Zpq; you can embed Zp inside Zpq. This gives a reduction from the discrete log in Zp to the discrete log in Zpq. So that's a reduction: if you can solve the problem modulo pq then you can also solve it modulo p. Technically, if I give you an instance modulo p to be solved, you pick another random prime of the same size, and then you turn this problem into one modulo the product. So if one is polynomial time solvable and the other one is not, then it must be the problem modulo pq that is hard and the problem modulo p that is easy; there is no other possibility. It's not clear how to give a reduction in the other direction, unless you know the factorization of the modulus. If you know how to factor the modulus, then you can break the discrete log into the two components. Yes, I expected this question. If you can take discrete logs modulo pq, then you can factor pq. If you can take discrete logs modulo a prime, then you can factor p, so... I didn't look into this deeply, so you might be able to find a way to prove a difference between the two problems. But yes, if you can take discrete logs for the easy problem then you can factor p. Factoring big primes is one of the big challenges, as the famous quote goes; I forgot who to attribute that to, I don't remember. Let's move on. That's the end of it.
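To spell out the reduction hinted at here (my notation, and glossing over the fact that the embedded element is not a generator of the whole group): by the Chinese remainder theorem, \(\mathbb{Z}_{pq} \cong \mathbb{Z}_p \times \mathbb{Z}_q\), so given a discrete log instance (g, h) modulo p and an auxiliary prime q, set

\[
g' = \mathrm{CRT}(g, 1), \qquad h' = \mathrm{CRT}(h, 1) \in \mathbb{Z}_{pq}^{*}.
\]

Then \((g')^x \equiv h' \pmod{pq}\) if and only if \(g^x \equiv h \pmod{p}\), so any exponent returned by a discrete log solver modulo pq immediately gives the discrete log modulo p.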

https://www.youtube.com/watch?v=4KVJbEOqB_Q&t=25m10s

Q: ... find out.. over primes.. but over.. is as hard as..

A: Could be. I'm not entirely sure. Chinese remainder theorem? Another example, using RSA numbers: consider factoring the product of two primes versus factoring the product of three primes. If one of them is polytime solvable and the other one is not, then guess the hard problem. Well, factoring three primes, because otherwise you get a reduction. If both problems are hard, and you look into the exact complexity of solving them and the best known algorithms to attack them, then you would be better off with a prime modulus. So let's move on.

Q: Do we have any reason to not believe that .. all primes of.. big primes.. hard for all of them? p - 1 small..

A: .. All the known.. in the sense that.. small probability. If you pick a random one, you are still pretty safe. Discrete log is a good problem. I am not arguing that you should not use the discrete log. All I am saying is that random self-reducibility does not provide any guidance about the choice of the group for the discrete log problem. That's the only statement that I am making. And that's the remarkable property of Ajtai's result.

Q: What is the moral of this question? You were saying in discrete log, okay, ... lattice problems, you have only one problem? But here you have...

A: These are different problems. They are not the same problem. So... if you take each one of them, we don't know how to show that the problem gets harder and harder as you take a bigger modulus.

Q: ...

A: I know. Oh yes, let's use the discrete log problem modulo a product of n primes, where n is the security parameter, and all these primes are the same size, so that you could give this reduction. If you do that, sure, the problems reduce one to the other. But I'm not sure you get something harder and harder. This reduction does not really address the question that I posed in the previous slide.

Q: ... in lattices... you know larger the... the harder it is.

A: And yes, in discrete log it is not clear. Okay, now I know why people refused to answer this question. And feel free to ask questions. Okay so now maybe on to lattices.

Q: Well maybe can you tell us about lattices?

Lattices

Ah yes, it looks like I am not talking about lattices, but I am. So let's consider another classic problem, corresponding to the other method to find hard problems. Take an NP-hard problem, a famous one: the subset sum problem. In subset sum, you are given a sequence of numbers and a target value b, and you want to find a subset of these numbers that adds up to the value b. Now you can map this problem to a lattice problem; subset sum problems and knapsack problems are essentially the same as lattice problems. The way you map it to a lattice is by replacing each number by a vector, and the target by a vector too, in such a way that finding a subset of the numbers that adds up to b is equivalent to finding a lattice vector close to some target vector t. The reduction is pretty simple: you extend each of these numbers into a vector with extra coordinates, and then you extend the target too, putting 1 1 1 in the extra coordinates. It's not hard to see that there's a lattice point in this lattice within distance square root of n of the target if and only if there is a solution to the subset sum problem. The proof is all there; you can take a picture or do it as an exercise. This gives a reduction from subset sum to lattices. But in a sense we have a reduction also in the other direction. The knapsack lattice is, in a sense, the typical lattice. Solving problems on lattices and solving subset sum are closely related problems.
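As a sketch of one standard way to write this reduction down (the exact constants on the slide may differ): given numbers \(a_1, \dots, a_n\), target b, and a large scaling constant \(M > \sqrt{n}\), take the lattice basis and target

\[
B = \begin{pmatrix} M a_1 & M a_2 & \cdots & M a_n \\ & & 2 I_n & \end{pmatrix} \in \mathbb{Z}^{(n+1)\times n},
\qquad
t = \begin{pmatrix} M b \\ 1 \\ \vdots \\ 1 \end{pmatrix}.
\]

For an integer vector z, the squared distance \(\|Bz - t\|^2 = M^2\big(\sum_j a_j z_j - b\big)^2 + \sum_j (2z_j - 1)^2\) is at most n exactly when z is a 0/1 vector with \(\sum_j a_j z_j = b\), so the lattice has a point within distance \(\sqrt{n}\) of t if and only if the subset sum instance has a solution.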

What about knapsack cryptography? It was already there before Ajtai's result. It was first proposed at about the same time as RSA, by Merkle and Hellman in 1978. They proposed a construction that builds subset sum instances and then uses them for public key encryption, with the conjecture that the cryptosystem is hard to break because the subset sum problem is NP-hard: if you try to break it using a general subset sum algorithm, you will fail unless P equals NP. The system got broken because the way they were disguising structured knapsack instances as random ones was something that could be recognized. It was broken by Shamir 1984 (if you're watching right now, happy birthday) and Lagarias-Odlyzko 1985. Shamir showed that it is easy to expose the structure of the Merkle-Hellman knapsack. Lagarias-Odlyzko gave a general lattice attack, which can be thought of as a reduction from subset sum to lattice problems: using LLL as an oracle to solve lattice problems, they completely broke the cryptosystem and solved knapsack problems.

People kept suggesting other methods to avoid the attack, but they all got broken. Ajtai's result came at the point when people were finally accepting the idea that using subset sum, knapsack, or equivalent lattice problems for cryptography was just a bad idea: no matter how hard we try, they just get broken. So in this sense, Ajtai's paper was a turning point for the area of cryptography based on subset sum and lattices.

Ajtai's one-way function

Ajtai's one-way function is a very simple function: a matrix-vector multiplication. The key to the function is a random matrix A with entries chosen modulo some small value q. The input is a short vector x, typically a 0/1 vector, and the output is just the vector A times x (Ax mod q). Ajtai proved that for an appropriate choice of the parameters, specifically when m > n log q, which corresponds to making the domain of the function bigger than the range, so you get a non-injective, compressing function, solving the inversion problem, inverting this function on average with some positive probability, is at least as hard as solving some lattice problems in the worst case. You can also see that this is a knapsack problem: you take the knapsack over the group of n-dimensional vectors modulo q. So you can think of it as a reduction from the worst-case complexity of lattice problems to a random instance of a subset sum problem. So that's great. It's a great technical result.
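Here is a minimal C sketch of the function itself, with toy parameters of my own choosing (n = 4, m = 20, q = 17, so that m > n log q; real parameters are of course much larger).

/* Sketch of Ajtai's function f_A(x) = A x mod q with illustrative toy
 * parameters (far too small to be secure). The key A is uniformly random
 * modulo q and the input x is a short 0/1 vector.
 */
#include <stdio.h>
#include <stdlib.h>

#define N 4
#define M 20
#define Q 17

int main(void) {
    int A[N][M], x[M], y[N];

    for (int i = 0; i < N; i++)              /* random public key A */
        for (int j = 0; j < M; j++)
            A[i][j] = rand() % Q;
    for (int j = 0; j < M; j++)              /* short (0/1) input vector */
        x[j] = rand() % 2;

    for (int i = 0; i < N; i++) {            /* y = A x mod q */
        y[i] = 0;
        for (int j = 0; j < M; j++)
            y[i] = (y[i] + A[i][j] * x[j]) % Q;
        printf("%d ", y[i]);
    }
    printf("\n");          /* recovering a short preimage x from y and A is
                              the average-case problem the reduction talks about */
    return 0;
}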

There's something else that I think was a very important contribution of the paper, besides proving the result, which is introducing this class of lattices. It's a natural class of lattices, and not only is it natural, we can prove something about it. It's still the class of lattices used today, for example for LWE (learning with errors), used over and over this morning, which can also be described as a standard lattice problem, bounded distance decoding, over the same class of lattices put forward by Ajtai. Specifically, the lattice corresponding to the function is given by the kernel of the map: the set of integer vectors x such that Ax = 0 mod q. Multiplying a vector x by the matrix A is what in coding-theoretic terms is called computing the syndrome, and inverting the function is a nearest vector problem with respect to this lattice. This is a lattice with some kind of structure, what is called a q-ary lattice, and this is the class of lattices that was put forward as a good class to be used in cryptography. These lattices involve only arithmetic modulo q.

This allows us to use arithmetic modulo q in any application of these lattices. So you can see the contrast between the worst-case and the average-case problem. In the worst case, the assumption is that the lattice problem is hard. This is not directly useful in cryptography, first because it is a worst-case hardness assumption, and second because these are real lattices, and dealing with real numbers requires bounds on the magnitude of the numbers, the precision of the numbers, and so on. That is not that convenient, both in practical implementations and even in theory. Using the real numbers is not too popular; the only real numbers in computer science are zero and one.

Ajtai's random lattices use integer coordinates and bounded precision. This number q is a small number; it's not hard to factor, and it doesn't have to be prime. The factorization of q is largely irrelevant to the hardness of the problem. This is called the small integer solution (SIS) problem. It's a way to represent the problem, over discrete groups, without even mentioning lattices at all. And connecting this to subset sum is easy, and it was remarked in Ajtai's paper: you can use a different modulus for every row of your matrix, and then combine them using the Chinese remainder theorem, and this gives you a subset sum problem modulo the product of all these moduli.

Q: What is the ... guidance for choosing q? 1000?

A: Size?

Q: What is the.. there is a lot of.. in the ... so just the discrete log?

A: You want to use a small number. The security parameter is n. For all practical purposes, in the most typical applications, there's no reason to use more than the machine word size; 16-bit numbers seem to be enough.

Q: ...

A: Are you asking about how big of a q, or how to choose a q?

Q: In discrete log, we say the smallest prime... but it's kind of ad hoc.

A: I think Rusty might have an answer.

A: It doesn't matter. You can pick q to be a power of 2, and it's equivalent to any other q.

Q: It doesn't have to be prime?

A: No, you can give reductions.

A: For random instances of subset sum, it actually, the modulus doesn't matter at all.

A: And most likely also in the worst case. The carries take at most log n bits, so you can guess the carries. Solving subset sum modulo something of a given size and solving it over the integers are essentially the same problem; subset sum with any modulus reduces to subset sum over the integers, in both directions. If you want to give a formal proof, you might have to work on it a bit, but you can see why the modulus does not matter.

Q: ... sub-exponential in recent papers.. now it becomes same as discrete log.

A: q only needs to be large enough. "Large enough" might depend on the application. It needs to be large enough and that's all.

A: It can be polynomial. Ajtai's proof had a large polynomial: for any polytime attack, there is a fixed polynomial q for which the problem is hard. Then there was work on lowering this exponent, and the best result is that you can prove hardness for q as small as n, or even, in some settings, square root of n; you can make it a very small polynomial. With square root of n you get in trouble with the errors, but yes, you can prove hardness for small values of q. Bringing q down to 2 is an open problem; we don't know how to make it as small as that. But a fixed polynomial, as in Ajtai, is fine. And the factorization of q is provably irrelevant, not merely irrelevant because we don't know how to exploit it in an attack.

This is on the average-case side of the problem. On the worst-case side, for any such q, if you can solve the problem for that q, then you can solve the problem for arbitrary lattices, where there is no modulus at all. So you can see the difference between these two sides.

https://www.youtube.com/watch?v=4KVJbEOqB_Q&t=43m27s

Shamir 1984 and Lagarias-Odlyzko 1985 could be described as reductions from knapsack to lattice problems. This can be formalized as a reduction from "bounded distance decoding" (BDD) in knapsack lattices to the unique shortest vector problem (uSVP), as in Lyubashevsky-Micciancio 2009.

Odlyzko 1990 "The rise and fall of the knapsack cryptosystems"

Nguyen-Stern 1997, 1998, 1999, 2000, 2001, 2005

This was how lattices were used before Ajtai's work: as a method to attack knapsack cryptosystems. There was a paper by Odlyzko whose title, I think, says it all: "The rise and fall of knapsack cryptosystems". That seemed to be enough to stop people from using knapsacks for cryptography, but people kept proposing knapsack cryptosystems and kept getting beaten up by LLL. Nguyen and Stern kept using lattice reduction to break other knapsack cryptosystems, year after year.

Ajtai in 1996 could have titled his paper "The fall and rise of knapsack cryptography". It's a worst-case to average-case reduction. At the time, you could almost treat LLL as an exact oracle, because the biggest experiments you could run were in the range where LLL behaves very well. In the first part of the 90s I wasn't doing much... no, not even that; I had a driver's license, though. But at that point we did not know whether the shortest vector problem is polytime solvable. In 1998, I proved that it is also NP-hard; before that, the conjecture that maybe LLL-type algorithms solve the shortest vector problem in polynomial time was not unreasonable. Still, if there is some hardness there at all, then knapsack is the kind of construction you want to use, because you don't have to guess or bet on one or another distribution. So, "The fall and rise of knapsack cryptography" (laughter). Now, how come the papers in this list continue after Ajtai's result? Were people still suggesting knapsack cryptosystems; why were they proposing bad cryptosystems after Ajtai found how to do it correctly? Well, some of these works were attacking Ajtai's own proposals: "A converse to the Ajtai-Dwork security proof" and "Cryptanalysis of the Ajtai-Dwork cryptosystem". The attack was a reduction in the other direction, and for the conference they chose the more aggressive title (the one with "cryptanalysis" in it). And that's how crypto goes: they are gladiators; 400 people go to Crypto, and they want to see people killing each other just for the sake of the show. But anyway, the punchline is that these attacks, these reductions, can always be looked at in both ways. When you give a reduction between two problems, you can think of it as saying the knapsack problem is easy, because it's no harder than the other one. But you could also think of it as showing that the knapsack problem is hard; you can give reductions in both directions. Either they fail at the same time, or they stand together. This was kind of clear at that point.

Still, Ajtai's function had some shortcomings. One is that it required m > n log q, which corresponds to choosing a non-injective, in fact surjective, function. That is enough to get one-way functions, commitments, and some forms of digital signatures. Getting public key encryption was a big challenge; it was not clear how to get it out of Ajtai's function. There were some proposals. There were two other important papers at about the same time as, or shortly after, Ajtai's work. One is the Ajtai-Dwork cryptosystem: a theoretical proposal with a worst-case to average-case reduction, but starting from a different lattice problem, a special shortest vector problem called the unique shortest vector problem.

Q: ...

A: No, only one direction. He showed that the unique shortest vector problem is no harder than the general one.

There was a paper much later, more than a decade later, showing that uSVP is equivalent to estimating the length of the shortest vector in an arbitrary lattice: the two problems are equivalent, up to polynomial factors, so finding the shortest vector when it is unique is as hard as the general estimation problem. But that was not clear at all back then.

Anyway, the other paper is the NTRU cryptosystem, which came from the other tradition: its security was based on mathematical intuition, relations to other mathematical problems, and the best known attacks. It was really simple and efficient, much better than Ajtai's function, which had a key size quadratic in the security parameter. NTRU had linear, or almost linear, storage and computation time requirements. So it was very efficient, but it didn't have the same kind of theoretical support as Ajtai's function, the support that reassures us that there's a good way to pick random instances.

The state of lattice cryptography at the turn of the millennium, I think, was well described by an event called the Cryptography and Lattices Conference (CaLC 2001). Organizers: Coppersmith, Hoffstein, Odlyzko, A. Lenstra, Nguyen, Silverman. Other participants: Koy, Schnorr, Trolin, Blomer, May, Howgrave-Graham, Wang, Zhu, Kumar, Eisenbrand, Rote, Semaev, El Mahassni, Shparlinski, van Hoeij, Micciancio, Steel.

I don't know exactly who was there. If you have the list of people attending the conference, you could provide a better list; I came up with this one by looking at the organizers and the papers presented at the conference, so it might include people who were not there, because maybe only one author went to present the paper. You will also notice that the overlap with today's lattice cryptography community is not that big; there are many names there that you don't see that often in lattice cryptography anymore. More than that, if you look at the content of the conference, there were about 15 papers: 6 were algorithms papers, 5.5 were about cryptanalysis, 2 I couldn't put in a clear category, and only one was an application to cryptography. The half paper was an invited survey talk, "The two faces of lattices in cryptology", that described the ways lattices could be used both to make and to break cryptography. There was a clear sense that lattices could be used both ways. But there was only one paper at the conference that was trying to do lattice cryptography. And it wasn't a really good paper (laughter). And I wrote it. (laughter)

In the early 2000s, lattice cryptography was still a small research community. These days, we get more than 100 papers on lattice cryptography every year. If you just count the papers each year that cite Ajtai's paper, which is a reasonable proxy for whether a paper is about lattice cryptography, there's a bunch; you will really be amazed. How can people even read this many papers? Even if you wanted to read all of them, there's no way. There's really no comparison; the community grew enormously. Back then there was a pretty big gap between, on one side, the computational complexity community, where lattices were a way to connect average-case complexity with worst-case complexity, something very interesting regardless of practical applicability (and the applicability was also very limited, because we only knew a few ways to build one-way functions at the time), and, on the other side, a community dealing with concrete encryption schemes and signatures along the lines of NTRU, and with algorithms to break or attack the security of cryptosystems. Part of the lattice work of that kind was using lattices to break cryptography, and I think they were also somehow showing this was important, but we didn't know how to use it constructively in practice, and they wanted to avoid devastating attacks.

One of the open problems was to improve the efficiency of Ajtai's function... it's a good encryption scheme, and in fact today we have a better understanding of how it relates to provable lattice cryptography, and why the scheme got broken while the encryption turned out to be a good one. So efficiency was one of the big open problems. The other one was to prove the security of the injective version of SIS, which seemed to be the key to more powerful applications like public key encryption. This also relates to algorithmic attacks on knapsacks, where you typically distinguish two settings, low density and high density knapsacks. High density knapsacks correspond to hash functions; low density knapsacks correspond to public key encryption, and this is where lattice attacks were most effective. Getting security for injective versions of Ajtai's function was another big challenge. Something else that was beyond anybody's dreams, and if anybody in the room can correct me please do: at the time pairing-based cryptography was all the rage; there was a small group of people working on lattices, but the cool people were playing with pairing-based cryptography. The idea that lattice cryptography could surpass it in efficiency and popularity would have been a joke at the time.

Q: ...

A: Today? No, I'm going to get to today in the last 30 minutes.

As we get closer and closer to today, I will say less and less; some of these things were said earlier this morning. I'm a little bit behind on time, less than two thirds of the way through, but there were some questions, so I feel justified in taking more time.

Efficiency of Ajtai's function

Let's start with efficiency, one of the two open problems. This came in a paper of mine, on generalized compact knapsacks and efficient one-way functions, which considered a structured version of Ajtai's function. The inefficiency in Ajtai's function is that the key is a 2-dimensional object, so it has an n-squared storage requirement. Instead, you can build a matrix that is structured: think of it as a matrix of blocks where, within each block, every row is a rotation of the previous one. So you can implicitly represent each square block with just a vector, and you get a linear storage requirement. Using FFT and polynomial arithmetic algorithms, you can even compute the function in quasilinear time. So you get a fast function, by and large comparable in speed to NTRU. Not only that, what this paper showed is that you can still get a scaled-down version of Ajtai's result. I showed that if you can break this problem, a random instance with a random structured matrix, then you can solve lattice problems in the worst case over lattices with a similar restriction, namely cyclic lattices, lattices invariant under rotation of the coordinates. The assumption is stronger than Ajtai's, but it is still a worst-case assumption over a restricted, yet natural, class of lattices. If we believe that lattice problems are hard to solve in the worst case over cyclic lattices, then we know how to build hard random instances.
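To illustrate the storage and computation savings, here is a sketch with toy values I picked (not the parameters from the paper): one block of the structured key is a single vector a, and multiplying the implicit circulant block by x is a cyclic convolution modulo q; an FFT would bring the quadratic loop below down to roughly n log n operations.

#include <stdio.h>

#define N 8
#define Q 257

/* y = Rot(a) * x mod q. The implicit matrix has entries a[(i-j) mod N], so
 * each row is the previous row rotated by one position and only the vector a
 * needs to be stored; the product is the cyclic convolution of a and x, i.e.
 * polynomial multiplication modulo x^N - 1.
 */
void circulant_mul(const int a[N], const int x[N], int y[N]) {
    for (int i = 0; i < N; i++) {
        long s = 0;
        for (int j = 0; j < N; j++)
            s += (long)a[(i - j + N) % N] * x[j];
        y[i] = (int)(s % Q);
    }
}

int main(void) {
    int a[N] = {3, 1, 4, 1, 5, 9, 2, 6};   /* one key block, stored as a vector */
    int x[N] = {1, 0, 1, 1, 0, 0, 1, 0};   /* short input block */
    int y[N];
    circulant_mul(a, x, y);
    for (int i = 0; i < N; i++) printf("%d ", y[i]);
    printf("\n");
    return 0;
}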

Q: .. worst case.. reduction.. does the dimension of the lattice get much bigger?

A: By a log n factor. It can even be a constant factor. If the input is a 0,1 vector, then you need about log n blocks to make the function injective... if the input entries are at most q^epsilon, then you get only a 1/epsilon blow-up in size, so you can make it constant if you want.

There are a few differences. For general lattices, if you increase the dimension, you get a harder problem, because you can add coordinates. It's not clear how to reduce hard problems on cyclic lattices in dimension n to cyclic lattices in dimension n+1. So what does this say? It says that the proof tells us how to choose random instances for every fixed dimension n, but it does not tell us how to choose the security parameter.

Q: ...

A: No, in Ajtai's case, you can reduce lattices in dimension n to lattices in dimension n+1. There's an approximation factor that grows with dimension.

Q: .. what you do is... you don't...

A: We don't know, in an absolute sense, how big it should be, but at least we know the bigger the better. It could be the case that lattice problems on cyclic lattices are polytime solvable for some dimensions and exponentially hard for others... that would be surprising to me, but it's not something we can rule out at the moment. It doesn't provide guidance, that's what I mean. For every n we do have hardness, but not an argument that gives us a sequence of problems getting harder and harder. You could define the problem in dimension n to be solving cyclic lattice problems in all dimensions up to n, and if you consider such a sequence of problems, it clearly gets harder and harder, and you could reduce to it of course, but you would also lose the efficiency advantages, so there is little point in doing that.

Also, in cyclic lattices, we could also do one-way functions....

Q: I wouldn't be surprised if it was easier, because wouldn't it have a lot to do with the factorization of the polynomial x^n - 1?

A: Yes, it will come up soon, in the context of finding collisions. But when it comes to inversions, the factorizations don't play a big role.

For Ajtai's function, we can show that it's a one-way function, that it's collision resistant, and there are some related applications like commitments. For this structured version, I could prove one-wayness, but I wasn't able to prove the stronger properties. This was in 2002. For collision resistance, for some reason, the proof was breaking down.

Q: ... right..

A: It could. It could.

Q: ...

A: ... proof works for every n. There is the worst case side and then there's the average case side, so fine maybe I should take back what I said. Yeah, it could.

But anyway, moving on, an important point, not quite understood at the time, is that this proof was not just a scaled-down version of Ajtai's result. I think that's the impression people had. The paper was rejected from a conference and I figured, fine. But after submitting to a journal I got a one-line rejection, and that's when I knew, hey, wait, something is up... The reviewer said: this paper is using a new assumption; what would happen if the assumption gets broken? That would ruin the reputation of the journal (laughter).

Efficient collision-resistant hashing from worst-case assumptions on cyclic lattices, Peikert-Rosen 2006

Generalized compact knapsacks are collision resistant, Lyubashevsky-Micciancio 2006

But if you think about the kind of assumptions used in the past few years, we get multilinear maps and obfuscation..

Q: ... don't send to a journal.

A: Yeah, I was trying to support journals by sending in my best work there.

As I mentioned, this was representative of the general sentiment within the lattice cryptography community: oh, cyclic things are not that interesting, and this one is not practical in any way, you can't use it. But it was the starting point for a very nice connection between algebraic number theory and lattice cryptography. It took four years before someone else came up with a result along those lines, and then those two papers essentially proved the same result. I remember when I saw the paper by Peikert-Rosen, I didn't know whether to be happy or mad: it's the same result, and somehow a conference accepted that one, while ours, which was just around the corner, had originally been rejected for unclear reasons. But still, more people were working on lattices; at the time my coauthor was my student, so this showed independent interest in cyclic lattices. I am not alone, there are two more people working on cyclic lattices. That was interesting. Let me tell you what the papers did.

They showed that the one-way function was not collision resistant. Then they suggested two different ways to fix the function and to make it collision resistant. This is where the factorization of the polynomial x^n - 1 is important... one paper worked in the ..., while our paper worked modulo a power of 2, so they were both interesting solutions.

Let me give you another small test. So, this one is an instance of my one-way function. Each block is a circulant block; the rows are rotations. How you compute this function is you multiply it by a vector with entries 0, 1 and -1: you take the sum of the selected columns and you get the output, so you only have to do single-digit arithmetic. You want to find a collision, which means two subsets that have the same sum: one subset multiplied by 1, the other subset multiplied by -1, and you want to get zero. Can you find a collision for this function? You have 10 seconds. (laughter)

Q: Yes.

A: Prove it.

It's easy to find. There was a hash function presented at the rump session at Crypto of that same year; the compression function was my one-way function, and they suggested using it for compression, so you got a hash function. I remember playing with it during the break between one session and the next, and submitting another rump session talk presenting the collision we had worked out during lunch. The reason we could do it so easily is this. First, multiply everything by the all-ones vector. Since these blocks are circulant, you get a constant vector when you multiply by the all-ones vector. 6 + 3 is 9, and you find your collision. This attack on the function is related to the factorization of the polynomial x^n - 1. Working with these matrices is like working with polynomials modulo x^n - 1, so that when you get to position n you go back to position 1, and this polynomial factors into a linear factor and something else. The 1-dimensional problem corresponds to projecting onto this linear factor. So that's where the factorization comes up.
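Here is a tiny C sketch of this observation, with made-up numbers (not the ones on the slide): a circulant block maps the all-ones vector to a constant vector equal to the sum of one of its rows, so finding a collision for a key made of m such blocks reduces to finding signs in {-1, 0, 1} for the m block sums that cancel, a trivial search.

#include <stdio.h>

#define N 4
#define M 3

int main(void) {
    /* one representative vector per circulant block (hypothetical key) */
    int a[M][N] = { {2, 1, 0, 3}, {1, 1, 2, 5}, {4, 0, 7, 4} };
    int s[M];

    for (int k = 0; k < M; k++) {   /* s_k = sum of entries of a_k; the block
                                       times the all-ones vector is (s_k,...,s_k) */
        s[k] = 0;
        for (int j = 0; j < N; j++) s[k] += a[k][j];
    }
    /* Any nonzero signs (e0,e1,e2) with e0*s0 + e1*s1 + e2*s2 = 0 give a
     * collision: use e_k * (1,...,1) as the k-th input block on each side. */
    for (int e0 = -1; e0 <= 1; e0++)
        for (int e1 = -1; e1 <= 1; e1++)
            for (int e2 = -1; e2 <= 1; e2++)
                if ((e0 || e1 || e2) && e0*s[0] + e1*s[1] + e2*s[2] == 0)
                    printf("collision from block signs (%d, %d, %d)\n", e0, e1, e2);
    return 0;
}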

Let's modify this a little bit. I'm changing the sign of some of these integers. These could be called anti-cyclic matrices: when an entry rotates around the boundary, it gets negated. This is a trick to avoid the previous attack. Can you find an attack now? You could write a little program and find a collision just by brute force, but coming up with a structured attack is much harder. I don't know of any for this.

The version of the security proof that I gave shows that this problem is as hard as the worst-case complexity of lattice problems on anti-cyclic lattices. So there's a worst-case hardness proof showing that you can't find collisions like this. And this is related to the polynomial x^n + 1, which corresponds to negating the entries as they wrap around, and which is irreducible over the integers when n is a power of 2. There's a lot of work that has been done since then, getting more and more algebra into lattice cryptography, but I don't have time to talk about that.

Applications other than one-way functions

I want to say a few more words about getting more applications beyond one-way functions. It's interesting to think about SIS/LWE in terms of the closest vector problem (CVP), in geometric terms as a lattice problem. Multiplying x by A is like computing the syndrome of x; the operation of computing the function can be equivalently described as taking your error vector and reducing it modulo the lattice: think of it as cutting space into parallelepipeds, one per lattice point, and stacking all of them on top of one another, reducing the error modulo the lattice. Is this function hard to invert, and for what values of the parameters? When the error is less than half the minimum distance of the lattice, the function is injective. If you choose a larger error, at some point the regions start intersecting and the function is not injective. If you make the error big enough, the function becomes surjective, and you don't need to make it much bigger to make it not only surjective but almost uniform. Here, in the surjective case, different regions still have different shades.

When you make the error bigger still, different regions still have slightly different shapes, but the difference is much, much smaller, so you get something almost regular: each point has almost the same number of preimages, and the function maps to an almost uniform distribution. So you can ask: is this a one-way function, and in what range of parameters? Ajtai showed that when the function is surjective and uniform, it is hard to invert. The question, which is the key to public key encryption schemes based on lattices, is whether the function is hard to invert even in the injective range. This is the question that was answered, implicitly, in Regev 2004, "On lattices, learning with errors, random linear codes and cryptography".

Regev 2004: I say implicitly because it might not have been clear to him that the problem he was solving was proving the hardness of the injective version of Ajtai's function; he was proposing a new formulation called learning with errors (LWE). Just using lattice duality, it's easy to show they are different formulations of the same problem. And the way he did it was surprising: he proved that the function is hard to invert in the injective setting using quantum computation. That's why it had seemed so out of reach before; still today we don't quite know how to relate these two problems using classical tools. Technically, he used a quantum transform to move from one lattice to its dual. Besides that, perhaps less surprisingly, he showed that if this function is one-way, then it is also pseudorandom, in the sense that the output of the function is indistinguishable from a uniformly random value. I say less surprising because, if you formulate it in the knapsack version, there were previous results showing that if subset sum is one-way then it is pseudorandom; we already knew that using that formulation, though perhaps it was less clear using the other formulation. And finally, he showed it could be used for public key encryption, based on the worst-case assumption of the quantum hardness of lattice problems, so he could potentially sell that and make a lot of money out of it.

Learning with errors (LWE)

So you already saw LWE this morning many times, so I will just skip the definition. Syntactically, it looks different from the previous SIS function because of the addition of the error here, but you can rephrase it by taking the dual of A, the orthogonal complement of A modulo q, and using that as the key; then you get a function that is syntactically similar to Ajtai's, just for different parameters.
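For completeness, here is a minimal sketch of generating LWE samples (A, b = As + e mod q), with toy parameters of my own choosing (n = 4, m = 8, q = 97, errors in {-1, 0, 1}); real instantiations use much larger dimensions and a proper error distribution such as a discrete Gaussian.

#include <stdio.h>
#include <stdlib.h>

#define N 4
#define M 8
#define Q 97

int main(void) {
    int A[M][N], s[N], b[M];

    for (int j = 0; j < N; j++) s[j] = rand() % Q;          /* secret vector s */

    for (int i = 0; i < M; i++) {
        int e = (rand() % 3) - 1;                           /* small error in {-1,0,1} */
        long acc = 0;
        for (int j = 0; j < N; j++) {
            A[i][j] = rand() % Q;                           /* uniform row a_i of A */
            acc += (long)A[i][j] * s[j];
        }
        b[i] = (int)(((acc + e) % Q + Q) % Q);              /* b_i = <a_i, s> + e_i mod q */
        printf("b[%d] = %d\n", i, b[i]);
    }
    /* LWE asks to recover s (or to distinguish (A, b) from uniform)
       given such samples. */
    return 0;
}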

LWE has had a huge impact in cryptography.

Starting around 2008, we see more and more complex applications: lossy trapdoor functions, CCA-secure encryption, hierarchical IBE, all the way to fully homomorphic encryption. Something I have heard many times since then is: you know what is really great about this LWE problem? He introduced it to lattice cryptography in a way that lets you work without even knowing what a lattice is. Each time I hear that, I almost feel like crying. No, I mean, sure, he introduced a new problem. But honestly, framing it as a new problem was almost a negative contribution, because it confused everybody: we couldn't see that, oh yes, that's the injective version of Ajtai's function. At the same time, what he did was to prove that the function is one-way, and that was a big open problem; relating it to another syntactic formulation was a big achievement. Anyway, looking at it from the lattice perspective: there are hundreds of papers on lattice cryptography each year, I think in part because, yes, you can do lattice cryptography without even knowing what a lattice is. But I think you can do that equally well using the SIS formulation, and the same could be applied to the LWE formulation. Most of the... you can use Ajtai's function, but for... proofs only by... so, we don't know that it would just be a ... for those functions.... Yes?

Q: ...

A: So... it depends on which problem you start from. There have been classical reductions that show hardness for this function. The difference is that Ajtai proved that the problem is as hard as finding short linearly independent vectors in the worst case. The classical reductions only show that it is as hard as computing the length of the shortest vector, without finding it. So it's still pretty interesting... This was achieved first by Peikert..., proving this sort of classical hardness for this problem, starting from the shortest vector problem... essentially a dequantized version of that proof, but not entirely: the part where quantum was used is a part we didn't know how to make classical. But it reinforces our confidence in the problem. These kinds of proofs do not carry over to the algebraic setting. It's an area that is not completely settled.

There is one more paper that I should mention from 2 years ago, which I am blanking on. There are four people in the room that know what I am talking about.

LWE is just as inefficient as Ajtai's function: very useful, but only theoretically efficient. It was the opposite of what we had before.

SWIFFT: A modest proposal for FFT hashing (Lyubashevsky + M + Peikert + Rosen 2008).

void SwiFFT(const ZNw key[M], const ZNw keysum, const BitsN input[M], ZNw out) {
    /* Types (ZNw, BitsN), constants (M, N, W, P) and the helpers FFT, qReduce
       and modP are defined elsewhere in the SWIFFT reference code. */
    int i, j;
    ZNw fft;
    /* Start from the precomputed sum of the key vectors. */
    for (j = 0; j < N/W; j++) out[j] = keysum[j];
    for (i = 0; i < M; i++) {
        /* Transform the i-th input block to the FFT domain. */
        FFT(input[i], fft);
        /* Multiply pointwise by the (shifted) key and accumulate, with lazy reduction. */
        for (j = 0; j < N/W; j++)
            out[j] += qReduce(qReduce(fft[j]) * (key[i][j] - ((P-1)/2)));
    }
    /* Final reduction modulo P. */
    for (j = 0; j < N/W; j++) out[j] = modP(out[j]);
}

This is a lattice crypto function, but it is just a hash function; at the time, getting something practically useful was still a big challenge. But in the case of hash functions, in 2008, we managed to find a truly practical implementation of these functions, something that can go about as fast as you want: a hash function as fast as SHA-2, using SSE/AVX vector instructions, arithmetic mod q = 257, and w = 42. You can make cyclic lattices really efficient. I wrote a small program to find the best parameters, but little did I know that the values would be so small.
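As a quick sanity check on these constants (assuming, as I read the SWIFFT parameters, that w = 42 is used as the 2n-th root of unity mod 257 for the FFT, with n = 64), a few lines of C confirm that 42 has multiplicative order 128 modulo 257:

#include <stdio.h>

int main(void) {
    const int q = 257, w = 42;
    int pow64 = 1, pow128 = 1;
    for (int i = 0; i < 64; i++)  pow64  = (pow64  * w) % q;
    for (int i = 0; i < 128; i++) pow128 = (pow128 * w) % q;
    printf("42^64  mod 257 = %d (should be 256, i.e. -1)\n", pow64);
    printf("42^128 mod 257 = %d (should be 1)\n", pow128);
    return 0;
}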

If you try to put these foundational papers into a table, we have these injective and surjective functions ...

General lattices (weaker assumption, inefficient):
- Surjective (weaker assumptions, fewer applications): Ajtai 1996, SIS one-way function, complexity O-tilde(n^2)
- Injective (stronger assumptions, more applications): Regev 2004, LWE encryption and more, complexity O-tilde(n^2)

Ring lattices (stronger assumption, efficient):
- Surjective (weaker assumptions, fewer applications): M 2002, RingSIS one-way function, complexity O-tilde(n)
- Injective (stronger assumptions, more applications): LPR 2010, RingLWE encryption and more, complexity O-tilde(n)

Many other papers: MR 2004, PR 2006, LM 2006, SSTX 2009.

... So this is the landscape of SIS/LWE functions: we have the less efficient and more efficient versions, depending on whether you use structured lattices or not, and the surjective versus injective versions. Ajtai founded the area with the surjective function based on general lattices, and Regev 2004 got the more powerful injective version, but still inefficient. The cyclic lattice work got efficiency, but only for the surjective version, with weaker assumptions and fewer applications. Later, RingLWE in LPR 2010 sort of fixed that, basically combining these two things, so you can do much more. And it was pseudorandom, and it brought even more algebraic number theory into the game. There are several other related papers that are technically equally important: MR 2004, PR 2006, LM 2006, SSTX 2009. The SSTX 2009 paper technically predates the RingLWE results: that function was shown to be one-way, but not pseudorandom, so it's technically interesting and nice, but proving pseudorandomness is what was needed for the other applications.

Fully homomorphic encryption

Let me close with FHE, fully homomorphic encryption, which I mentioned in the title. I'm not going to say a lot about it; it's already history. Gentry 2009, you heard about it earlier this morning. Craig Gentry in 2009 did something really amazing. Even more than solving the problem, I think what he did was to convince people that the problem could be solved. It's similar to taking the knapsack problem that everyone was throwing away and saying, no, here's something you can do with it; suddenly everyone in cryptography starts thinking, oh yes, it can be solved. And now, after some time, I think we can finally set up fully homomorphic encryption. He used lattices, but perhaps for the wrong reasons: the reason why he was using lattices was very different from Ajtai's. He was using a lot of ad hoc assumptions, so it's not as clear how confident we can be in them. On top of that, there were some efficiency issues. Since then there have been other papers on better ways to do fully homomorphic encryption, based on problems that are essentially LWE.

... on Ajtai's lattices. So why did I say perhaps for the wrong reasons? Gentry used ring lattices, these algebraic lattices, and in his paper they seemed essential to getting fully homomorphic encryption: you want lattices that support addition and multiplication in order to get a full set of boolean gates. But it turns out this is not really necessary; it's useful for efficiency, but it's a different class of lattices. What was important, and is still present in today's schemes, is that lattice cryptography has a very simple decryption operation, which is quite unique to lattice cryptography: decryption is essentially a scalar product. You saw this this morning... and if you look at FHE today, it's quite different.

In fact, it's quite different from the first candidate proposal: encryption and decryption today are essentially Regev's LWE encryption or variants of it. Security is based on standard assumptions, like LWE or worst-case hardness assumptions on lattices. There is no need for ring lattices, except for efficiency, and it's much faster and better understood than the first candidates.

Ajtai's function, sure, is not as powerful as LWE; we cannot do fully homomorphic encryption using just that. But its impact on lattice cryptography is something we can still see today: the class of lattices it put forward, the attention it brought from cryptographers to this area, and the theory behind the proofs, which gave us both faster functions and a better understanding of what we are doing.

I only mentioned a small fraction of all the works I should have mentioned, because of timing constraints. And there's still plenty of work to do on many fronts: getting a better understanding of algebraic lattices, and getting better implementations, which is not just about coding but about mathematics. What makes things efficient to implement? The choice of parameters, for example parameters that support number theoretic transforms and so on. There is great potential in how we do the computation, because we're working with vectors; I think sooner or later we will have vector instructions on processors to support lattice cryptography. Saying that in 1996 would have been... well, we use addition on computers, that's what's important on computers, but we will go beyond that. For all the recent work on obfuscation and multilinear maps and so on, basing those constructions on standard lattice problems is really the key to moving forward, for efficiency and to really do something the way real cryptographers would want (laughter): we have confidence that we know what we're doing when we use lattices. Also, getting concrete security estimates for lattice cryptography is something that I think requires more attention. Most of the algorithmic work, before and after Ajtai's work, was done in the context of breaking things. When you break things, you're in a different kind of setting than when you're trying to estimate parameters: if you want to break a scheme, you attack a challenge, and then, fine, it's dead. To estimate costs, you need a careful statistical study of the performance of the algorithms. So revisiting many of the algorithmic works on lattices with that goal might be an interesting direction.

That concludes my talk and thank you for your attention.

https://www.youtube.com/watch?v=4KVJbEOqB_Q&t=1h34m37s
