From: Erik Naggum
Newsgroups: comp.lang.lisp
Subject: Re: READ-DELIMITED-FORM
Date: 05 Sep 2002 12:43:22 +0000
Organization: Naggum Software, Oslo, Norway
Message-ID: <3240218602163684@naggum.no>
References: <3240164830618395@naggum.no> <3240206449214856@naggum.no>
 <3240210721844260@naggum.no> <3240212171035953@naggum.no>
User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

* Tim Bradshaw
| Can you explain why?

  Because the reader algorithm is defined in terms of tokens that are
  examined before they are turned into integers, floating-point numbers,
  or symbols.  The tokens ., .., and ... must all be interpreted (or cause
  errors) before they are turned into symbols, and if you expect to be
  able to look at them after `read´ has already returned, the original
  information is lost and you will have insurmountable problems
  reconstructing the original characters that made up the token, just as
  you cannot recover the case information from a token that has turned
  into an integer or a symbol.  The hard-wired nature of ) likewise has to
  be determined before it is processed as a terminating macro character.

  The usual way to implement the tokenization phase of the reader is to
  work with a special buffer-related substring or mirrored buffer that
  characters are copied into, and then to use special knowledge of this
  buffer in the token interpretation phase.

  The way I implement tokenizers and scanners is with an offset from the
  current stream head that lets me peek multiple characters into the
  stream.  When the terminating condition has been found, I know how many
  characters to copy, if needed, and I am relatively well informed about
  what I have just scanned.  When the token has been completed, I let the
  stream head jump forward to the point where I want the next call to
  start.  This may be several characters short of how far I scanned
  ahead, naturally.  I invented this technique to parse SGML, which would
  otherwise have required multiple-character read-ahead or some buffer on
  the side and much overhead.

-- 
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.
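
  The information loss described in the first paragraph is easy to see
  with the standard readtable; the forms below rely only on READ's normal
  treatment of case and of a trailing decimal point:

    (read-from-string "foo")   ; => FOO -- the lowercase spelling is gone
    (read-from-string "10.")   ; => 10  -- the trailing dot is gone
    ;; (read-from-string "...") signals an error: a token consisting only
    ;; of dots must be dealt with before it could ever become a symbol.

  A minimal sketch of the offset-from-the-stream-head scanning might look
  like the code below.  The SCAN-STREAM structure and the functions
  PEEK-AT and SCAN-TOKEN are invented for illustration; they stand in for
  a real stream with a movable head and are not Naggum's actual
  implementation.

    (defstruct (scan-stream (:constructor make-scan-stream (chars)))
      (chars "" :type simple-string)
      (head  0  :type fixnum))

    (defun peek-at (scanner offset)
      "Character OFFSET positions past the stream head, or NIL at end."
      (let ((i (+ (scan-stream-head scanner) offset)))
        (when (< i (length (scan-stream-chars scanner)))
          (char (scan-stream-chars scanner) i))))

    (defun scan-token (scanner)
      "Scan one space-delimited token by peeking ahead; move the head only
    once the whole token has been seen."
      ;; Skip leading spaces by moving the head itself.
      (loop while (eql (peek-at scanner 0) #\Space)
            do (incf (scan-stream-head scanner)))
      ;; Peek ahead until the terminating condition (here: space or end).
      (let ((offset 0))
        (loop for c = (peek-at scanner offset)
              while (and c (not (eql c #\Space)))
              do (incf offset))
        (when (plusp offset)
          (let* ((start (scan-stream-head scanner))
                 (token (subseq (scan-stream-chars scanner)
                                start (+ start offset))))
            ;; The scanned characters were still available for inspection
            ;; up to this point; only now does the head jump forward.
            (incf (scan-stream-head scanner) offset)
            token))))

    (let ((s (make-scan-stream "foo ... 42")))
      (list (scan-token s) (scan-token s) (scan-token s)))
    ;; => ("foo" "..." "42")

  Here the terminating condition is just a space; a reader-style scanner
  would consult the syntax types of the characters instead, but the shape
  is the same: peek with a growing offset, interpret the token while its
  characters are still within reach, then move the head forward once.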