Overview of asymmetric EMV encryption algorithms
Most of the asymmetric encryption algorithms used in practice are based on the complexity of solving one of the following mathematical problems:
the problem of factoring a large number: multiplying two large numbers is a polynomial problem with respect to the size of the factors, while the inverse problem, decomposing a number into its prime factors, is extremely difficult. The RSA algorithm is based on the complexity of solving the factorization problem;
the problem of finding a discrete logarithm: from the point of view of computational complexity, exponentiation in a finite field is easy to perform, but solving the inverse problem, finding a discrete logarithm, requires what is essentially an exhaustive search over the elements of the field. The DSA, EGSA and Diffie-Hellman algorithms are based on the complexity of solving the discrete logarithm problem;
the problem of decoding certain error-correcting codes: it is easy to produce a code word (a matrix multiplication), but computationally difficult to recover the original word from it. This approach is rarely used in practice (the best-known example is the McEliece cryptosystem, which uses Goppa codes).
The first and most famous digital signature system in the world was the RSA system, whose mathematical scheme was developed in 1977 at the Massachusetts Institute of Technology (USA) and patented in the USA in 1982. It is named after the first letters of its authors' surnames: R. Rivest, A. Shamir, L. Adleman.
In the most general terms, the RSA system consists of the following (before reading further, it is recommended to consult Appendix A, which presents, in an accessible form and with sufficient completeness, the mathematical foundations needed to understand asymmetric encryption algorithms). Let p and q be two distinct large primes, and denote n = pq. Then the Euler function of n is φ(n) = (p - 1)(q - 1).
Choose a large integer d that is coprime with (p - 1)(q - 1), and define e as the solution of ed ≡ 1 (mod φ(n)). The pair (e, n) is published as the public key, while d is kept secret; a message M is encrypted as C = M^e (mod n) and decrypted as M = C^d (mod n). To find a prime number, the Solovay-Strassen test is used, which is based on the quadratic residue theorem set out in Appendix A. To test an odd number m for primality, a number a < m is selected at random such that (a, m) = 1. If this equality does not hold, it immediately follows that m is composite, and the primality check for this m is complete.
Next, in accordance with the quadratic residue theorem, the number a^((m-1)/2) (mod m) is calculated and checked against the Jacobi symbol (a/m); for a prime m the two must coincide. This requires O((log m)^3) elementary operations.
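To make the procedure concrete, below is a minimal Python sketch of the Solovay-Strassen test (function and variable names are our own; the rounds parameter corresponds to the repetition of the check discussed next):

    import math
    import random

    def jacobi(a, n):
        # Jacobi symbol (a/n); n must be an odd positive integer
        a %= n
        t = 1
        while a != 0:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    t = -t
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                t = -t
            a %= n
        return t if n == 1 else 0

    def solovay_strassen(m, rounds=20):
        # Each round picks a random a and checks Euler's criterion:
        # a^((m-1)/2) = (a/m) (mod m); a prime m passes every round
        if m < 3 or m % 2 == 0:
            return m == 2
        for _ in range(rounds):
            a = random.randrange(2, m)
            if math.gcd(a, m) != 1:
                return False              # m is certainly composite
            if pow(a, (m - 1) // 2, m) != jacobi(a, m) % m:
                return False              # criterion failed: composite
        return True                       # prime with probability > 1 - 2^(-rounds)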
After the required primality condition for m has been verified with one randomly selected number a, another random number is selected and the condition is checked again; in total the check is repeated l times. The probability P that the number m is nevertheless composite after all the checks satisfies the inequality P < 2^(-l). This follows from the result, proved in Appendix A, that if m is a composite number, then fewer than half of the numbers coprime with m satisfy the condition of the quadratic residue theorem. In practice the value of l is chosen to make P small. Since, by Chebyshev's theorem on the distribution of primes, the density of primes in the neighborhood of m is (log m)^(-1), about O(log m) candidates m will have to be tried, each requiring O((log m)^3) operations for its primality check. It follows that the total complexity of the prime generation procedure is O((log m)^4) elementary operations. Taking into account that the number sought is m ≈ √n, and that two primes p and q must be selected, the total complexity of RSA key generation is O((log n)^4) elementary operations.

There are various ways to speed up the computation of the power C^d (mod n) in the RSA algorithm. Widely used is the Chinese Remainder Theorem (CRT), proved earlier in Appendix A. Let C_p and C_q be the remainders of C modulo p and q, and let d_p and d_q be the remainders of d modulo p - 1 and q - 1, respectively. In other words, the following equalities hold: C_p = C (mod p); C_q = C (mod q); d_p = d (mod (p - 1)); d_q = d (mod (q - 1)). Then, obviously, the congruences C^d ≡ C_p^d_p (mod p) and C^d ≡ C_q^d_q (mod q) hold. From these congruences, using the Chinese remainder theorem, we obtain M = C_p^d_p + p·((p^(-1)·(C_q^d_q - C_p^d_p)) (mod q)), where p^(-1) satisfies the relation p^(-1)·p ≡ 1 (mod q). Note that the number p^(-1) is calculated once and then used for all values of C. Calculating each right-hand side of the congruences above requires O((log √n)^3) = (1/8)·O((log n)^3) operations. Thus, using the Chinese remainder theorem, computing the power C^d (mod n) requires about four times fewer operations than direct exponentiation. At the same time, using CRT for the RSA algorithm in microprocessor cards without appropriate chip protection gives fraudsters more opportunities to disclose the numbers p and q with which the calculations are performed.

There are two approaches to breaking the RSA algorithm. The first is an attempt to decompose the number n into prime factors (the factorization problem). The oldest and slowest method, trial division with the sieve of Eratosthenes, guarantees a solution in O(√n) checks. Asymptotically, the fastest known factorization algorithms require a running time of order O(exp(a·√(ln n · ln ln n))), where a = 1 + ε for an arbitrarily small ε. The other approach to breaking RSA is to find the e-th root modulo n of the encrypted number. Today it is not known whether this problem reduces to the factorization problem, nor is any method of breaking RSA based on it known.
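As an illustration of the CRT speed-up described above, here is a Python sketch (deliberately tiny textbook primes; real moduli are 1024 bits or more, and pow(e, -1, m) requires Python 3.8+):

    # Toy RSA key (illustrative only)
    p, q = 61, 53
    n = p * q                            # 3233
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))    # 2753

    C = pow(65, e, n)                    # ciphertext of M = 65

    # CRT decryption (Garner's formula)
    dp, dq = d % (p - 1), d % (q - 1)
    Cp, Cq = C % p, C % q
    Mp = pow(Cp, dp, p)                  # C^d mod p
    Mq = pow(Cq, dq, q)                  # C^d mod q
    p_inv = pow(p, -1, q)                # computed once, reused for every C
    M = Mp + p * ((p_inv * (Mq - Mp)) % q)

    assert M == pow(C, d, n) == 65       # matches direct exponentiation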
In recent years, specialists in computational number theory have been able to factor integers of 150 decimal digits in a fairly short time (on the order of several months) using very sophisticated methods and powerful computing systems (the fastest supercomputers, such as the CRAY-3, or distributed networks of several hundred VAX stations). For today's level of computing it is considered that RSA cryptosystems with a modulus shorter than 512 bits are unreliable; a 784-bit modulus can be broken by government services; a 1024-bit modulus is sufficiently reliable for the next 10-15 years; and a 2048-bit modulus is reliable for the coming decades.

It should be noted that the value of φ(n) is classified as secret data, since knowledge of φ(n) allows the factorization of n to be solved. Indeed, there is the system of equations p + q = n - φ(n) + 1; p - q = √((p + q)^2 - 4n), which has a unique solution with respect to p and q.

There is another weakness of the RSA scheme. If two participants A and B of a cryptosystem use the RSA algorithm with a common modulus n, then each of these participants can learn the secrets of the other. Indeed, participant B knows e_B, d_B and e_A (the public exponents are known to all participants of the cryptosystem). Let us show how B can calculate d_A. Denote g_0 = e_B·d_B - 1, h_0 = (g_0, e_A), g_1 = g_0/h_0. Obviously, the equality (g_1, e_A) = 1 holds. Using Euclid's algorithm, we find integers a and b such that a·g_1 + b·e_A = 1. We now prove that φ(n) | g_1. Indeed, by definition e_B·d_B - 1 = kφ(n) for some integer k. Hence g_1 = (e_B·d_B - 1)/h_0 = kφ(n)/h_0. Since by definition (e_A, φ(n)) = 1 and h_0 = (g_0, e_A), we get (h_0, φ(n)) = 1. The latter means that h_0 | k and therefore φ(n) | g_1.
Therefore, from the equality a·g_1 + b·e_A = 1 it follows that b·e_A ≡ 1 (mod φ(n)). Thus, the number b taken modulo φ(n) is equal to d_A. Finding d_A requires O((log n)^3) operations.
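A minimal Python sketch of this common-modulus attack (toy numbers; the names g0, h0, g1 follow the derivation above):

    from math import gcd

    p, q = 61, 53                 # known only to the key generator
    n = p * q
    phi = (p - 1) * (q - 1)       # shown here only to check the result
    eA, eB = 17, 7
    dB = pow(eB, -1, phi)         # B's own private exponent

    # B computes dA knowing only eA, eB and dB:
    g0 = eB * dB - 1              # a multiple of phi(n)
    h0 = gcd(g0, eA)
    g1 = g0 // h0                 # a multiple of phi(n) coprime to eA
    dA = pow(eA, -1, g1)          # the number b from a*g1 + b*eA = 1

    assert (eA * dA) % phi == 1   # dA works as A's private exponent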
It can also be proved that efficient factorization algorithms exist when the value of the private (decryption) exponent is known.
The RSA algorithm is the de facto international standard for electronic digital signatures (EDS). Practical use of the RSA system has the following disadvantages:
the RSA method is protected by US patent No. 4,405,829, and its use in commercial products requires a license agreement with the patent holder, RSA Data Security Corporation (USA);
to ensure high cryptographic strength of the signature, sufficiently long integers of 1024 bits or more must be used, which requires considerable computational effort (in the case of microprocessor cards, separate cryptographic coprocessors are used, and the cost in card memory is also noticeable);
when generating keys in the RSA system, a large number of conditions must be checked; failure to meet any of them makes it possible to falsify the signature;
the RSA method makes it possible to obtain signatures for documents constructed from existing documents, without knowing the secret keys, given a set of documents at hand and the digital signatures for them (see the sketch below).
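The last weakness stems from the multiplicativity of "textbook" RSA (signing without hashing or padding): the signature of a product of messages is the product of their signatures. A toy illustration in Python:

    p, q = 61, 53
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))

    m1, m2 = 12, 5
    s1 = pow(m1, d, n)            # legitimate signature of m1
    s2 = pow(m2, d, n)            # legitimate signature of m2

    s_forged = (s1 * s2) % n      # signature of m1*m2, forged without d
    assert s_forged == pow((m1 * m2) % n, d, n)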
Another common asymmetric encryption algorithm, EGSA, was developed in 1984 by the American T. ElGamal. This algorithm is not protected by a patent, which was one of the arguments used in August 1991 by the US National Institute of Standards and Technology when justifying to the US Congressional commission the choice of the EGSA algorithm as the basis for the national standard DSA (Digital Signature Algorithm).
The EGSA signature algorithm works as follows. Consider the field GF(p), where p is a sufficiently large prime number. Let G(q) be a subgroup of the multiplicative group of the field GF(p) of prime order q, and let g be a generator of G(q). Recall that G(q) is a cyclic group (as a subgroup of a cyclic group, or simply because the order of the subgroup is prime).
The secret key for creating a signature is an integer x, 0 < x < q. The public key for signature verification is the residue y = g^x (mod p).
To create a signature for a message m, the following steps are performed:
The hash value h(m) of the signed message is calculated.
A random integer k is generated such that 0 < k < q.
The residue r = g^k (mod p) is calculated.
The number s is found as the solution of the equation h(m) = xr + sk (mod q).
This solution always exists because (k, q) = 1. The signature for the message m is the pair (r, s).
To verify the signature, follow these steps:
The inequality 0 < r < p is checked. If it does not hold, the signature is invalid.
The congruence g^h(m) ≡ y^r·r^s (mod p) is checked. If this congruence holds, the signature is authentic; otherwise, it is not.
The necessity and sufficiency of the last congruence for confirming the signature is evident. Indeed, y^r·r^s = g^(xr + sk) = g^(h(m) + cq) for some integer c. Since g generates a subgroup of order q, g^q ≡ 1 (mod p), and therefore g^h(m) ≡ y^r·r^s (mod p) holds.
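The complete scheme can be summarized in a short Python sketch (a toy subgroup p = 23, q = 11, g = 4 and a trivial stand-in for the hash function; all names are illustrative):

    import random

    p, q, g = 23, 11, 4           # q divides p - 1; g generates the order-q subgroup

    def h(m):                     # crude stand-in for a real hash function
        return m % q

    x = random.randrange(1, q)    # secret key
    y = pow(g, x, p)              # public key y = g^x (mod p)

    def sign(m):
        while True:
            k = random.randrange(1, q)   # (k, q) = 1 holds since q is prime
            r = pow(g, k, p)
            # solve h(m) = x*r + s*k (mod q) for s
            s = ((h(m) - x * r) * pow(k, -1, q)) % q
            if s != 0:
                return r, s

    def verify(m, r, s):
        if not (0 < r < p):
            return False
        return pow(g, h(m), p) == (pow(y, r, p) * pow(r, s, p)) % p

    r, s = sign(7)
    assert verify(7, r, s)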
Compared to the RSA method, the EGSA algorithm has a number of advantages:
first, at a given strength level of the digital signature algorithm, the integers that have to be computed are shorter, which accordingly reduces the complexity of the calculations and the amount of memory used;
second, when generating keys it is sufficient to find a large prime p such that the number p - 1 has a large prime divisor q;
third, the signing procedure of this method does not allow (as RSA does) digital signatures for new messages to be computed without knowledge of the secret key.
However, for all the advantages listed above, the EGSA algorithm has certain disadvantages. In particular, at the same strength level, the signature is longer and its verification takes more time than in RSA. In addition, reusing the same random number k during the validity period of the private key x leads to the disclosure of the latter: it is enough to solve a system of two linear equations (see the sketch below).
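Continuing the toy EGSA sketch above: subtracting the equations h(m1) = x·r + s1·k and h(m2) = x·r + s2·k (mod q), written for two signatures made with the same k, eliminates x and yields k, after which x follows (this assumes s1 ≠ s2 and (r, q) = 1):

    def recover_key(m1, r, s1, m2, s2):
        # k = (h(m1) - h(m2)) / (s1 - s2) (mod q)
        k = ((h(m1) - h(m2)) * pow(s1 - s2, -1, q)) % q
        # x = (h(m1) - s1*k) / r (mod q)
        return ((h(m1) - s1 * k) * pow(r, -1, q)) % q

    k = 6                         # the same k reused for two messages
    r = pow(g, k, p)
    s1 = ((h(3) - x * r) * pow(k, -1, q)) % q
    s2 = ((h(8) - x * r) * pow(k, -1, q)) % q
    assert recover_key(3, r, s1, 8, s2) == x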
C. Schnorr's digital signature scheme resembles the EGSA algorithm. Consider the field GF(p), where p is a sufficiently large prime number. Let G(q) be a subgroup of the multiplicative group of the field GF(p) of prime order q, and let g be a generator of G(q).
The secret key for creating a signature is an integer x, 0 < x < q. The public key for signature verification is the residue y = g^x (mod p).
To create a signature for a message m, the following steps are performed:
A random integer k is generated such that 0 < k < q.
The residue r = g^k (mod p) is calculated.
The value e = h(m || r) is calculated, where h(x) is the hash function and m || r is the concatenation of m and r.
The number s, 0 < s < q, is calculated such that s ≡ k - xe (mod q). The signature for the message m is the pair (e, s).
To verify the signature, follow these steps:
The residue r_1 = g^s·y^e (mod p) is calculated.
The value e_1 = h(m || r_1) is calculated.
The equality e = e_1 is checked. If it holds, the signature is authentic; otherwise, it is not.
The necessity and sufficiency of the last equality for signature verification is evident. Indeed, by construction r_1 ≡ g^(s + xe) (mod p). Since, as follows from the Schnorr scheme, k = s + xe + Aq for some integer A, and g^q ≡ 1 (mod p), we get r_1 ≡ g^k (mod p) = r, from which the signature verification condition follows.
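A Python sketch of the Schnorr scheme under the same toy parameters (the function h(m, r) crudely models the hash of the concatenation m || r; all names are illustrative):

    import random

    p, q, g = 23, 11, 4           # toy subgroup, as in the EGSA sketch

    def h(m, r):                  # stand-in for h(m || r)
        return (m * 31 + r) % q

    x = random.randrange(1, q)    # secret key
    y = pow(g, x, p)              # public key y = g^x (mod p)

    def sign(m):
        k = random.randrange(1, q)
        r = pow(g, k, p)
        e = h(m, r)
        s = (k - x * e) % q       # s = k - x*e (mod q)
        return e, s

    def verify(m, e, s):
        r1 = (pow(g, s, p) * pow(y, e, p)) % p   # equals g^k for a valid signature
        return e == h(m, r1)

    e, s = sign(9)
    assert verify(9, e, s)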
As in the ElGamal scheme, reusing the same random number k during the validity period of the private key x leads to the disclosure of the latter.
The cryptographic strength of the ElGamal and Schnorr algorithms is determined by the complexity of solving the discrete logarithm problem in a subgroup of prime order q of the multiplicative group of the field GF(p). According to cryptographers' forecasts, even without the use of supercomputers, practical forgery (determining the secret key from the public one) can be expected to become possible no later than 2006 for p = 512 bits, q = 256 bits, and 2017 for p = 1024 bits, q = 256 bits.
Given that the main characteristics of supercomputers are about 10 years ahead of commercially available computing equipment, when supercomputers are used it is impossible to guarantee that the 1024-bit version cannot be forged after 2007.
The review would not be complete without mentioning the first asymmetric encryption algorithm, the Diffie-Hellman algorithm. In fact, the history of public-key cryptographic systems began with a revolutionary article by the two mathematicians whose names appear in the name of the algorithm. The Diffie-Hellman algorithm is used to solve the problem of exchanging secret keys in systems with many users.
The essence of the algorithm is as follows. All operations are performed in some finite field GF(q) with a primitive element g. Consider two participants of the cryptosystem, A and B. Each of them generates a secret number for itself (a and b, respectively). In addition, each participant calculates the value g^a or g^b in the field GF(q), respectively. These calculated exponentials are the public keys, known to all participants in the cryptosystem.
Then, if participant A wants to establish a secure connection with B, it generates a session key for some symmetric encryption algorithm as follows: A takes the public key of party B, g^b, and raises it to the power of its secret key a. The result is the key g^ab. Obviously, the same key can be obtained by party B from its own secret key b and the public key of A, g^a.
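The exchange takes only a few lines of Python (toy field GF(23) with primitive element 5; real systems use fields of 2048 bits and more):

    import random

    q_field, g = 23, 5            # public parameters; 5 is a primitive root mod 23

    a = random.randrange(2, q_field - 1)   # A's secret
    b = random.randrange(2, q_field - 1)   # B's secret

    A_pub = pow(g, a, q_field)    # published by A: g^a
    B_pub = pow(g, b, q_field)    # published by B: g^b

    key_A = pow(B_pub, a, q_field)   # A computes (g^b)^a
    key_B = pow(A_pub, b, q_field)   # B computes (g^a)^b
    assert key_A == key_B            # shared session key g^(ab)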
Breaking the Diffie-Hellman algorithm reduces to the problem of computing a discrete logarithm in a finite field, for which no efficient algorithm is known. The cryptographic strength of the algorithm depends significantly on the choice of the order of the field, its primitive element, and the size of the secret exponents.
In the last decade, considerable attention has been paid to ECC algorithms (Elliptic Curve Cryptography), based on the use of constructions called elliptic curves. These algorithms rest on the fact that for the equation xA = B with respect to x, where A and B are known points of an elliptic curve E, no solution algorithm is known other than trying all possible values of x. Moreover, because of the complexity of elliptic curve constructions, even such simple approaches as exhaustive search are difficult to assess computationally. For this reason, until recently experts considered the reliability estimates of ECC digital signature systems to be significantly less well-founded than the corresponding estimates for the factorization and discrete logarithm problems. Only in recent years has analysts' confidence in elliptic curve constructions increased significantly.
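The equation xA = B and the brute-force character of its solution can be illustrated with a toy curve in Python (y^2 = x^3 + 2x + 2 over GF(17), a standard teaching example; affine coordinates, no optimizations):

    # Toy curve y^2 = x^3 + 2x + 2 over GF(17)
    P_MOD, A_COEF = 17, 2
    O = None                      # point at infinity

    def add(P, Q):
        # Affine point addition on the curve
        if P is O: return Q
        if Q is O: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % P_MOD == 0:
            return O
        if P == Q:
            lam = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P_MOD) % P_MOD
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
        x3 = (lam * lam - x1 - x2) % P_MOD
        return x3, (lam * (x1 - x3) - y1) % P_MOD

    def mul(k, P):
        # Double-and-add: computes the point kP
        R = O
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    A = (5, 1)                    # a point of order 19 on the curve
    B = mul(13, A)                # public point for the secret x = 13

    # Recovering x from B = xA: generically, nothing better than enumeration
    x = next(k for k in range(1, 19) if mul(k, A) == B)
    assert x == 13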
It is estimated that, to match the reliability level of encryption algorithms based on the discrete logarithm problem with a key length of 512 bits, it suffices to use an elliptic curve whose points are described by pairs of integers of 160 bits each. This reduces the total length of the secret key record from 512 to 320 bits, which in turn reduces the complexity of the calculations (and therefore the time) by a factor of up to 2.4. On the other hand, with ECC the complexity of verifying an EDS is 36-480 times greater than when using RSA.
Thus, elliptic algorithms are of particular interest for applications related to microprocessor cards, where it is necessary to "unload" the card processor during signing operations and to use a smaller amount of memory for key storage.
Russia has adopted the following standards: GOST R 34.10-2001 "Processes for forming and verifying electronic digital signatures" and GOST R 34.11-94 "Information technology. Cryptographic protection of information. Hashing function". The cryptographic strength of the Russian electronic digital signature standard is based on the complexity of solving the discrete logarithm problem on the set of points of an elliptic curve defined over a finite prime field GF(p).
Let’s conclude our review of asymmetric encryption algorithms with a list of the most frequently used algorithms for generating message digests (hash codes).
MD2, MD4 and MD5 are hash algorithms developed by Ron Rivest. Each of them generates a 128-bit hash code.
The MD2 algorithm is the slowest, and MD4 is the fastest. The MD5 algorithm can be considered a modification of MD4 in which speed was sacrificed for greater reliability.
SHA-1 (Secure Hash Algorithm) is a message digest algorithm that generates a 160-bit hash code. The SHA-1 algorithm is approved by the US government (as part of the Capstone project). It is more reliable than the MDx algorithms because it generates a longer hash code, which reduces the likelihood that two different input sequences will map to the same hash value.
In the Russian standard GOST R 34.11-94, the digest length is defined as 256 bits.
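The digest lengths mentioned above are easy to check in Python with the standard hashlib module (GOST R 34.11-94 is not part of the standard library, so only MD5 and SHA-1 are shown):

    import hashlib

    msg = b"message digest"
    for name in ("md5", "sha1"):
        digest = hashlib.new(name, msg).digest()
        print(name, len(digest) * 8, "bits")   # md5: 128 bits, sha1: 160 bits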