Subject: Re: Big Numbers
From: Erik Naggum <email@example.com>
Date: Thu, 18 Oct 2001 02:12:39 GMT
Newsgroups: comp.lang.lisp
Message-ID: <firstname.lastname@example.org>

* Barry Margolin
| Any competent programmer who can't deal with the issues of converting at
| the appropriate interfaces doesn't deserve the title "competent
| programmer".

  Ignoring for purposes of the discussion the general absence of competence
  among programmers, the most common error in statically typed languages
  that have hardware-representation-oriented, disjoint integer types is to
  specify too narrow a type, including the increasingly annoying case of
  the widest hardware type not being wide enough, as witness the need to
  deal with 64-bit file sizes on traditional 32-bit operating systems, not
  to mention the many 16-bit limitations that continue to annoy the Unix
  world, or the 8-bit limitation on character representation...

  There are many ways to specify integer ranges in various languages that
  make programmers choose between wasting space and getting wrong results
  _quietly_ after overflows, instead of ignoring space issues and getting
  the right results always.  Some languages elect to raise exceptions upon
  overflow, but what are you going to do about it?  (We have no concept of
  "infinity" for integers.)

| I can't imagine this being the deciding factor in choosing a language.

  I can imagine it.  It does not take more than two languages that differ
  only in their bignum support, and considering the proliferation of both
  languages and implementations-called-languages, this situation will come
  up if it has not already.

| It's a nice feature to have, but how many applications *really* need it?

  That depends entirely on how big your integer is.

| Do you think that C's lack of built-in bignums made a significant
| difference (i.e. more than a percent or two) in the difficulty of
| implementing Mathematica?
  I have no idea, but considering the work required to get fast bignums,
  leaving it to people who know how to do it seems like a good idea.

| I think Lisp is a far better language for implementing this type of
| application because of its better support for complex webs of data
| structures, *not* because of bignums; that's just the cherry on top.

  You could say the same about the support for complex numbers, too.
  There are a _lot_ of these cherries on top.  Take away too many, and you
  no longer have a cherry-topped language.  I mean, even Scheme got all of
  this number stuff right.  And if Scheme has it, it cannot be removed
  without making the language less than minimal, now can it?

///
-- 
  Norway is now run by a priest from the fundamentalist Christian People's
  Party, the fifth largest party representing one eighth of the electorate.
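The "wrong results _quietly_ after overflows" failure mode discussed above can be sketched in a few lines of Python; the 32-bit masking and the 3 GiB file size below are illustrative assumptions, not taken from the post, and Python stands in here for a language with native bignums:

```python
# Sketch: silent 32-bit wraparound (as in C's fixed-width integers)
# versus arbitrary-precision integers, which simply give the right answer.

def to_i32(n):
    """Interpret n as a signed 32-bit integer, wrapping quietly like C."""
    n &= 0xFFFFFFFF                      # keep the low 32 bits
    return n - 2**32 if n >= 2**31 else n

size = 3 * 1024**3                       # a 3 GiB file size
print(to_i32(size))                      # -1073741824: quietly wrong

# With bignums, magnitude is a non-issue; no type to outgrow:
print(2**64 + 1)                         # 18446744073709551617
```

The point of the sketch is that the narrow-type version produces a plausible-looking but wrong number with no error signaled, which is precisely why "just pick a wide enough type" keeps failing.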