From: Erik Naggum
Subject: Re: Waving the lambda flag again (was: Constants and DEFCONSTANT)
Date: 1999/04/08
Message-ID: <3132556485345213@naggum.no>
X-Deja-AN: 463985423
References: <7dr23c$2re$1@shell5.ba.best.com> <3131961221139730@naggum.no> <4niubflwhf.fsf@rtp.ericsson.se> <41UM2.19154$134.197089@tor-nn1.netcom.ca> <3132042770550791@naggum.no> <3132050414299555@naggum.no> <3132487825915707@naggum.no> <4npv5guv18.fsf@rtp.ericsson.se> <3132514507164896@naggum.no>
Mail-Copies-To: never
Organization: Naggum Software; +47 8800 8879; http://www.naggum.no
Newsgroups: comp.lang.lisp

* pvaneynd@mail.inthan.be (Peter Van Eynde)
| If you were not the author of this text, Erik, I would assume that the
| author was lazy and didn't mean byte == piece of 8 bits, but now I'm
| doubting :-).

  well, a "byte" is not "8 bits of a machine word at an 8-bit boundary",
  but "8 bits of a machine word at an 8-bit boundary" _is_ a "byte".  that
  IBM usurped a perfectly general concept and used it for byte-addressable
  machines with 8-bit bytes, only, is an historic accident that Common
  Lisp does not accept.

  on the PDP-10, on which I for all practical purposes grew up, byte
  operations were very powerful instructions that worked on bytes of
  user-specified sizes.  LoaD Byte and DePosit Byte have survived into
  Common Lisp as LDB and DPB.  Increment Byte Pointer, ILDB, and IDPB have
  not -- they were used to read successive bytes out of a sequence of
  machine words, and that is properly handled by better primitives, such
  as STRING operations, today.  the byte pointer itself did not survive:
  it had a memory address, too, but the byte specification did -- see the
  functions BYTE, BYTE-SIZE, and BYTE-POSITION.

| Anyway, I can't see the problem:
| (log 60 2) ~ 5.9 bits
| (log 24 2) ~ 4.6 bits

  um, how can I put this?  INTEGER-LENGTH returns an integer, not a
  floating-point number.
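  (a minimal REPL sketch of the byte-specifier functions and of the
  LOG/INTEGER-LENGTH distinction; these are all standard Common Lisp
  operators, and the commented values are what a conforming
  implementation returns:)

```lisp
;; a byte specifier names a field of any width at any position --
;; no 8-bit assumption anywhere.
(let ((spec (byte 6 11)))          ; 6-bit field starting at bit 11
  (list (byte-size spec)           ; => 6
        (byte-position spec)))     ; => (6 11)

;; LDB extracts a field, DPB deposits one, on integers of any size.
(ldb (byte 4 4) #xABCD)            ; => 12, i.e. the #xC nibble
(dpb 5 (byte 4 4) 0)               ; => 80, i.e. #x50

;; INTEGER-LENGTH is exact; CEILING of LOG happens to agree for
;; positive integers that are not powers of two, and disagrees there.
(integer-length 60)                ; => 6
(ceiling (log 60 2))               ; => 6
(integer-length 64)                ; => 7, but (ceiling (log 64 2)) => 6
```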
  LOG is the wrong function to use when measuring the required size of an
  integer, although CEILING of LOG yields the same value as
  INTEGER-LENGTH for positive integers that are not powers of two.

| and (+ 6 6 5) = 17 bits -> the first fixnum, used in the table to convert
| between seconds-since-midnight -> (hour, minute, second).

  yup, this is the idea.  (it turned out not to be worth the hassle to
  use only 16 bits by partitioning the day into two 12-hour halves.  if I
  were cramped for space, I'd revisit that space optimization.)

| (log 31 2) ~ 4.9 bits
| (log 12 2) ~ 3.6 bits
| (log 400 2) ~ 8.6 bits
|
| (+ 5 4 9) -> 18 bits -> the second fixnum,
| in the table days-since-newyear -> (day, month, year).
| But I would do
| (days-since-newyear, year) -> (day, month), not?

  I use days-since-beginning-of-leap-day-period.  as I explained earlier,
  the current leap day period started at 1600-03-01 and ends 2000-02-29.
  the next leap day period starts 2000-03-01.  since we're going to face
  that day pretty soon, I chose that day as day 0.

| And because all the fields are lower than 8 bits we can use pieces of 8
| bit to encode them (could be more efficient on some CPU's, mainly
| Alpha's I think).

  hm.  I haven't actually investigated the possibility of using a
  (vector (unsigned-byte 8)) for the representation.  I'll try that.  (I
  guess I'm still not quite used to 8-bit-addressable memory, and from
  what I read in the specifications of modern CPU's, the designers work
  really hard to avoid forcing people to stop believing in that old myth,
  too.  think about it: if we scuttled the stupid 8-bit legacy, chose 32
  bits as the smallest addressable unit, and reinstated byte pointers,
  we'd have four times more addressable memory, and we wouldn't need
  64-bit processors for another, um, 18 months!)

#:Erik
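  (the packed-fixnum idea in the exchange above can be sketched with DPB
  and LDB; the field layout here -- second in bits 0-5, minute in bits
  6-11, hour in bits 12-16 -- is an assumed layout for illustration, not
  necessarily the one in the actual code:)

```lisp
;; assumed 17-bit layout: second in bits 0-5, minute in bits 6-11,
;; hour in bits 12-16 -- comfortably within fixnum range.
(defun pack-hms (hour minute second)
  (dpb hour (byte 5 12)
       (dpb minute (byte 6 6)
            (dpb second (byte 6 0) 0))))

(defun unpack-hms (packed)
  (values (ldb (byte 5 12) packed)    ; hour
          (ldb (byte 6 6) packed)     ; minute
          (ldb (byte 6 0) packed)))   ; second

(pack-hms 23 59 59)                      ; => 98043
(multiple-value-list (unpack-hms 98043)) ; => (23 59 59)
```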