
Contents
3.8. Becker, Joux, May, and Meurer Algorithm
Creation date: 05.05.2015
Author(s): Irene MARQUEZ-CORBELLA, Nicolas SENDRIER, Matthieu FINIASZ
Presentation
Practical information
Rights reserved to the publisher and the authors. Unless stated otherwise, these course resources are distributed under a Creative Commons licence. The user must credit the author's name, may use the work except in a commercial context, and may not modify the original work.
Resource description
Summary
Now, in session 8, we will present yet another evolution of information set decoding. Before presenting this improvement, we will first improve the Birthday Decoding algorithm, into what I call Further Improved Birthday Decoding. I will consider the two following lists. The difference between those two lists and those we had before is the +ε that you can find in the weight of the errors e1 and e2. Those lists depend on an extra parameter ε. What is the meaning of that parameter? The idea is the following: if you add two words of weight w/2 and length n, you expect to obtain a sum of weight w − w²/(2n); that is, you expect those two words to have w²/(4n) non-zero positions in common. Now, if we shift things a little bit and choose ε such that ε = (w/2 + ε)²/n, then two words of weight w/2 + ε and length n are expected to have ε non-zero positions in common and a sum of weight w, which is the target weight of our syndrome decoding problem. In addition to the above property, with a non-zero value of ε there are (w choose w/2) × (n−w choose ε) different ways to write a word e of weight w as a sum of two error patterns e1 and e2 of weight w/2 + ε; that is, the number of representations increases. All of this together allows us to state the following claim: if we choose 2^r and ε as described here, then any solution e to our problem is represented in the intersection of the two sets above with probability 1/2. If we do that and implement Birthday Decoding with those two lists, the workfactor simplifies, if I may say, into the following formula, which is true up to a polynomial factor. I am not going to explain now why it is a polynomial factor rather than a constant factor.
Improving Birthday Decoding has an impact on the May, Meurer and Thomae algorithm we have just seen. Instead of the formula we have here, in green, which is true up to a constant factor, by using the Further Improved Birthday Decoding we obtain this formula, which is better for any value of the parameters and which is true up to a polynomial factor.
This idea and this improvement are the embryo of the next improvement of information set decoding, due to Becker, Joux, May and Meurer. The idea is to let ε, that is, the increase in the weight of the half error patterns, grow beyond its optimal value, much beyond w²/(4n). If we do that, the workfactor of Birthday Decoding becomes this formula, with L equal to the list size and 2^r the number of representations, which is given by the formula here. We may also write the formula in the following form by introducing a value which is the probability described on the slide.
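To make the choice of ε concrete, here is a small Python sketch; it is not part of the lecture, and the parameters n = 1024, w = 64 and the helper names are purely illustrative. It computes the smallest ε solving ε = (w/2 + ε)²/n, checks that two random words of weight w/2 + ε then have expected sum weight w, and counts the (w choose w/2) × (n−w choose ε) representations of a fixed weight-w word.

from math import comb

def smallest_eps(n, w):
    # Smallest positive solution of eps = (w/2 + eps)^2 / n, i.e. of
    # eps^2 + (w - n)*eps + w^2/4 = 0. With this eps, two random words
    # of weight w/2 + eps and length n are expected to share eps
    # non-zero positions, so their sum has expected weight w.
    disc = (n - w) ** 2 - w ** 2
    return ((n - w) - disc ** 0.5) / 2

def expected_sum_weight(n, p):
    # Expected weight of the sum of two random words of weight p and length n.
    return 2 * p - 2 * p * p / n

def representations(n, w, eps):
    # Number of ways to write a fixed word e of weight w as e1 + e2 with
    # |e1| = |e2| = w/2 + eps: pick half of the support of e for e1, then
    # eps positions outside the support of e where e1 and e2 overlap.
    return comb(w, w // 2) * comb(n - w, eps)

n, w = 1024, 64           # toy values, not the parameters from the slides
eps = smallest_eps(n, w)  # close to w^2/(4n) when w is much smaller than n
p = w // 2 + round(eps)
print("eps =", round(eps, 2))
print("expected weight of e1 + e2 =", round(expected_sum_weight(n, p), 1), "target w =", w)
print("number of representations is about 2 **", representations(n, w, round(eps)).bit_length() - 1)

For these toy values ε is close to 1 and the expected sum weight is about 64, as claimed; letting ε grow further, as Becker, Joux, May and Meurer do, increases the number of representations at the cost of larger lists.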
"Domaine(s)" et indice(s) Dewey
- Numerical analysis (518)
- Information theory (003.54)
- data in computer systems (005.7)
- cryptography (652.8)
- Mathematics (510)
Domain(s)
- Numerical analysis
- Applied numerical analysis, numerical computation, numerical mathematics
- Programming: algorithmics, languages, object-oriented design, programs
- Computer science
- Oral and written expression
- Cryptography
- Generalities, philosophy, theory of mathematics
- Generalities
- Scientific tools, methods and techniques
- Didactics of mathematics
- History of mathematics
- Mathematics and physics
Appendix document(s)
- This resource is part of
Technical record
- LOMv1.0
- LOMFRv1.0