A Tour of NTL: NTL Implementation and Portability
NTL is designed to be portable, fast, and relatively easy to use and extend.
To make NTL portable, no assembly code is used (well, almost none, see below). This is highly desirable, as architectures are constantly changing and evolving, and maintaining assembly code is quite costly. By avoiding assembly code, NTL should remain usable, with virtually no maintenance, for many years.
NTL makes very conservative requirements of the C/C++ compiler.
The configuration flag NTL_CLEAN_INT is currently off by default.
When this flag is off, NTL makes another requirement of its platform; namely, that arithmetic operations on the type long do not overflow, but simply "wrap around" modulo the word size. This behavior is not guaranteed by the C++ standard, and yet it is essentially universally implemented. In fact, most compilers will go out of their way to ensure this behavior, since it is a very reasonable behavior, and since many programs implicitly rely on this behavior.
Making this "wrap around" assumption can lead to slightly more efficient code on some platforms. It seems fairly unlikely that one would ever have to turn the NTL_CLEAN_INT flag on, but it is a good idea to make this possible, and at the very least to identify and isolate the code that relies on this assumption.
Actually, with NTL_CLEAN_INT off, it is also assumed that right shifts of signed integers are consistent, in the sense that if it is sometimes an arithmetic shift, then it is always an arithmetic shift (the installation scripts check if right shift appears to be arithmetic, and if so, this assumption is made elsewhere).
It is hard to imagine a platform existing today (or in the foreseeable future) where these assumptions are not met. However, as of version 5.4 of NTL, all of the most performance-critical code now works almost as well with NTL_CLEAN_INT set as without. The differences are not very significant (maybe 10%). Therefore, there is hardly any reason not to set this flag. Also, note that the only code affected by this flag is the traditional long integer package (which is not involved if you use GMP as the primary long integer package) and the single-precision modular multiplication routines defined in ZZ.h.
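To make the flavor of this concrete, here is a minimal sketch (not NTL's actual code; the name, bounds, and constants are illustrative) of the classical floating point trick for single-precision modular multiplication, whose correction step relies on "wrap around" semantics:

   // Sketch only: assumes 0 <= a, b < n, with n small enough (say, below
   // 2^50 on a 64-bit machine) that the floating point estimate of the
   // quotient a*b/n is off by at most 1.
   long mul_mod_sketch(long a, long b, long n)
   {
      long q = (long) (((double) a) * (((double) b) / ((double) n)));

      // a*b and q*n may overflow a signed long; with "wrap around"
      // semantics the subtraction still yields the true remainder,
      // because that remainder is small enough to fit in a long.
      long r = a*b - q*n;

      if (r >= n)
         r -= n;      // q was one too small
      else if (r < 0)
         r += n;      // q was one too large

      return r;
   }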
The configuration flag NTL_CLEAN_PTR is currently off by default.
When this flag is off, NTL makes another requirement of its platform; namely, that the address space is "flat", and in particular, that one can test if an object pointed to by a pointer p is located in an array of objects v[0..n-1] by testing if p >= v and p < v + n. The C++ standard does not guarantee that such a test will work; the only way to perform this test in a standard-conforming way is to iteratively test if p == v, p == v+1, etc.
This assumption of a "flat" address space is essentially universally valid, and making it leads to somewhat more efficient code. For this reason, the NTL_CLEAN_PTR flag is off by default, but one can always turn it on, and in fact, the overall performance penalty should be negligible for most applications.
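Concretely, the two tests look like this (the function names are made up for this illustration):

   // Fast test: relies on a "flat" address space; relational comparison
   // of pointers into different objects is not defined by the standard.
   template <class T>
   bool points_into_flat(const T* p, const T* v, long n)
   {
      return p >= v && p < v + n;
   }

   // Standard-conforming test: uses only equality comparisons, which are
   // always well defined, at the cost of a linear scan.
   template <class T>
   bool points_into_clean(const T* p, const T* v, long n)
   {
      for (long i = 0; i < n; i++)
         if (p == v + i) return true;
      return false;
   }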
NTL uses floating point arithmetic in a number of places, including a number of exact computations, where one might not expect to see floating point. Relying on floating point may seem prone to errors, but with the guarantees provided by the IEEE standard, one can prove the correctness of the NTL code that uses floating point.
Briefly, the IEEE floating point standard says that basic arithmetic operations on doubles should work as if the operation were performed with infinite precision, and then rounded to p bits, where p is the precision (typically, p = 53).
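A small standalone demonstration (not NTL code) of this rounding rule, with p = 53:

   #include <cstdio>

   int main()
   {
      double x = 9007199254740992.0;   // 2^53, exactly representable

      // 2^53 + 1 is not representable with p = 53; the sum is computed
      // exactly and then rounded back to 2^53.
      std::printf("%.0f\n", x + 1.0);           // prints 9007199254740992

      // both operands and the exact result are representable,
      // so this computation is exact
      std::printf("%.0f\n", (x - 1.0) + 1.0);   // prints 9007199254740992

      return 0;
   }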
Throughout most of NTL, correctness in fact follows from assumptions that are weaker than strict adherence to the IEEE standard.
It is also generally assumed that the compiler does not do too much "regrouping" of arithmetic expressions involving floating point. Most compilers respect the implied grouping of floating point computations, and NTL goes out of its way to make its intentions clear: instead of x = (a + b) + c, if the grouping is truly important, this is written as t = a + b; x = t + c. Current standards do not allow, and most implementations will not perform, any regrouping of this into, e.g., x = a + (b + c), since floating point addition and subtraction are not associative.
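The failure of associativity is easy to demonstrate with a standalone example (not NTL code):

   #include <cstdio>

   int main()
   {
      double a = 1e30, b = -1e30, c = 1.0;

      std::printf("%g\n", (a + b) + c);   // prints 1: a and b cancel exactly
      std::printf("%g\n", a + (b + c));   // prints 0: c is absorbed into b

      return 0;
   }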
Unfortunately, some compilers do not respect this grouping unless you tell them. With Intel's C compiler icc, for example, you should compile NTL with the flag -fp-model source to enforce strict adherence to floating point standards. Also, you should be wary of compiling with an optimization level higher than the default -O2 -- this may break some floating point assumptions (and maybe some other assumptions as well).
One big problem with the IEEE standard is that it allows intermediate quantities to be computed in a higher precision than the standard double precision. This "looseness" in the standard is a substantial impediment to creating portable software. Most platforms today implement the "strict" IEEE standard, with no excess precision. One notable exception -- the 800 pound gorilla, so to speak -- is the Intel x86.
NTL goes out of its way to ensure that its code is correct with both "strict" and "loose" IEEE floating point. This is achieved in a portable fashion throughout NTL, except for the quad_float module, where some desperate hacks, including assembly code, may be used to try to work around problems created by "loose" IEEE floating point [more details]. But note that even if the quad_float package does not work correctly because of these problems, the only other routines that are affected are the LLL_QP routines in the LLL module -- the rest of NTL should work fine.
Mostly, NTL does not require that the IEEE floating point special quantities "infinity" and "not a number" are implemented correctly. This is certainly the case for core code where floating point arithmetic is used for exact (but fast) computations, as the numbers involved never get too big (or small). However, the behavior of certain explicit floating point computations (e.g., the xdouble and quad_float classes, and the floating point versions of LLL) will be much more predictable and reliable if "infinity" and "not a number" are implemented correctly.
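As a quick standalone sanity check (again, not NTL code): on a platform implementing these special quantities correctly, overflow quietly produces "infinity", and "not a number" compares unequal even to itself:

   #include <cmath>
   #include <cstdio>
   #include <limits>

   int main()
   {
      double big = std::numeric_limits<double>::max();

      double inf = big * 2.0;   // overflow yields "infinity"
      double nan = inf - inf;   // an undefined difference yields "not a number"

      // prints "1 1": inf is recognized as infinite, and a NaN
      // compares unequal even to itself
      std::printf("%d %d\n", (int) std::isinf(inf), (int) (nan != nan));

      return 0;
   }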
There are three basic strategies for implementing long integer arithmetic.
The default strategy is implemented in the traditional long integer arithmetic package. This package is derived from the LIP package originally developed by A. K. Lenstra, although it has evolved quite a bit within NTL. This package uses no assembly code and is very portable.
The second strategy is to use the Gnu Multi-Precision Package (GMP) as a supplemental long integer arithmetic package. In this strategy, the representation of long integers is identical to that in the traditional long integer package. This representation is incompatible with the GMP representation, and on-the-fly conversions are done between the two representations (only when this is sensible). This strategy typically yields better performance, but requires that GMP is installed on your platform.
The third strategy is to use GMP as the primary long integer arithmetic package. In this strategy, the representation of long integers is in a form compatible with GMP. This strategy typically yields the best performance, but requires that GMP is installed on your platform, and also introduces some minor backward incompatibilities in the programming interface.
See the separate documentation on the use of GMP with NTL for more details.
NTL makes fairly consistent use of asymptotically fast algorithms.
Long integer multiplication is implemented using the classical algorithm, crossing over to Karatsuba for very big numbers. Long integer division is currently implemented using only the classical algorithm -- unless you use NTL with GMP (version 3 or later) as either a supplemental or primary long integer package, in which case division employs an asymptotically fast algorithm that costs about twice as much as multiplication for very large numbers.
Polynomial multiplication and division are carried out using a combination of the classical algorithm, Karatsuba, the FFT using small primes, and the FFT using the Schönhage-Strassen approach. The choice of algorithm depends on the coefficient domain.
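To give the flavor of one ingredient, here is a minimal sketch of the Karatsuba idea for polynomial multiplication with long coefficients (illustrative only, not NTL's implementation; coefficient overflow is ignored and the crossover point is arbitrary):

   #include <algorithm>
   #include <cstddef>
   #include <vector>

   typedef std::vector<long> Poly;   // c[i] is the coefficient of x^i

   // classical O(n^2) product, used at the bottom of the recursion
   static Poly classical_mul(const Poly& a, const Poly& b)
   {
      if (a.empty() || b.empty()) return Poly();
      Poly c(a.size() + b.size() - 1, 0);
      for (std::size_t i = 0; i < a.size(); i++)
         for (std::size_t j = 0; j < b.size(); j++)
            c[i+j] += a[i] * b[j];
      return c;
   }

   // acc += sign * p * x^shift
   static void accumulate(Poly& acc, const Poly& p, std::size_t shift, long sign)
   {
      if (acc.size() < p.size() + shift) acc.resize(p.size() + shift, 0);
      for (std::size_t i = 0; i < p.size(); i++)
         acc[i + shift] += sign * p[i];
   }

   static Poly poly_add(const Poly& a, const Poly& b)
   {
      Poly c(std::max(a.size(), b.size()), 0);
      for (std::size_t i = 0; i < a.size(); i++) c[i] += a[i];
      for (std::size_t i = 0; i < b.size(); i++) c[i] += b[i];
      return c;
   }

   // Karatsuba: write a = a0 + x^k*a1 and b = b0 + x^k*b1; then
   //   a*b = a0*b0 + x^k*((a0+a1)*(b0+b1) - a0*b0 - a1*b1) + x^(2k)*a1*b1,
   // trading four half-size products for three.
   Poly karatsuba_mul(const Poly& a, const Poly& b)
   {
      std::size_t n = std::max(a.size(), b.size());
      if (n <= 16) return classical_mul(a, b);   // crossover is a tuning knob

      std::size_t k = n/2;
      Poly a0(a.begin(), a.begin() + std::min(k, a.size()));
      Poly a1(a.begin() + std::min(k, a.size()), a.end());
      Poly b0(b.begin(), b.begin() + std::min(k, b.size()));
      Poly b1(b.begin() + std::min(k, b.size()), b.end());

      Poly p0 = karatsuba_mul(a0, b0);
      Poly p2 = karatsuba_mul(a1, b1);
      Poly p1 = karatsuba_mul(poly_add(a0, a1), poly_add(b0, b1));

      Poly c = p0;
      accumulate(c, p1, k,   +1);
      accumulate(c, p0, k,   -1);
      accumulate(c, p2, k,   -1);
      accumulate(c, p2, 2*k, +1);
      return c;
   }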
Many algorithms employed throughout NTL are inventions of the author (Victor Shoup) and his colleagues Joachim von zur Gathen and Erich Kaltofen, as well as John Abbott and Paul Zimmermann.
NTL is not a "perfect" library. Here are some limitations of NTL that a "perfect" library would not have: