Subject: Re: CMUCL18d on Alpha?
From: Erik Naggum <>
Date: Fri, 19 Apr 2002 17:11:49 GMT
Newsgroups: comp.lang.lisp
Message-ID: <>

* "Pierre R. Mai" <>
| Actually, since you've got millions of floats, I think that they
| should be read directly into a specialized vector, otherwise the list
| and boxing overhead is going to be huge, and might be the reason you
| are running into the 18b heap limit.  On the x86 using a fairly
| current CMUCL (post 18c, but pre 18d), I can read 2 million random
| single floats (3 per line, making a 20MB file) in around 16-18MB of
| heap space, doubles should be double that amount (not tested), with:
| (defun read-floats-slow (stream)
|   (do ((result (make-array 1000 :element-type 'single-float :adjustable t
| 			   :fill-pointer 0))
|        (float (read stream nil stream) (read stream nil stream)))
|       ((eq stream float)
|        result)
|     (vector-push-extend float result)))
| Reading takes 80s for the file on an AMD K6-2/550.

  If you start off with a vector that is pretty close in size to what you
  expect, you will also avoid all the overhead of copying the vector you
  have just extended several times.  Depending on the implementation, you
  may either extend the vector a million times (Allegro CL, with a default
  extension of 20) or about 15-20 times (CMUCL doubles the vector's size),
  or something entirely different; neither CLISP nor LWL provides reasonably
  readable disassembly to understand what they are doing.  In any case, if
  you have any means of waiting until you know how many floats to allocate
  space for, you will be better off.  One way to do this is actually to
  allocate, fill, and collect some fairly large "chunks" of the total, like
  1024-element vectors.  At the end of the read phase, you know how much
  space you need, and you can write a specialized accessor that uses the
  1024-element vectors directly in a two-level vector (note the simple
  10-bit shift instead of an expensive division), or you can copy them all
  into one vector of known size.  This _should_ have a significant effect
  on the time it takes to read all the data, but my guess is that it will
  all pale in comparison to what you will do with all this stuff in memory.
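
  To illustrate the preallocation point, here is a variant of the quoted
  reader that merely starts near the expected size; the default size and
  the function name are my own invention, not anything from a library:

```lisp
;; Variant of the quoted reader that starts near the expected size, so
;; VECTOR-PUSH-EXTEND seldom (ideally never) has to grow and copy.
;; ESTIMATED-SIZE is only the caller's guess; 2 million matches the
;; test case discussed above.
(defun read-floats-preallocated (stream &optional (estimated-size 2000000))
  (do ((result (make-array estimated-size
                           :element-type 'single-float
                           :adjustable t
                           :fill-pointer 0))
       (float (read stream nil stream) (read stream nil stream)))
      ((eq stream float) result)
    (vector-push-extend float result)))
```

  If the guess is right, the only cost over a fixed-size vector is the
  fill-pointer bookkeeping; if it is low, you degrade gracefully into the
  extend-and-copy behavior described above.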
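
  The chunked approach might be sketched like this; all names here are
  illustrative, and the two-level accessor shows the shift-and-mask
  indexing in place of a division:

```lisp
;; Collect single-floats into 1024-element chunks, then index the whole
;; collection through a two-level scheme: a 10-bit right shift selects
;; the chunk, a mask selects the slot within it.
(defconstant +chunk-bits+ 10)
(defconstant +chunk-size+ (ash 1 +chunk-bits+))   ; 1024

(defun read-floats-chunked (stream)
  "Read single-floats from STREAM into 1024-element chunks.
Returns a simple vector of chunks and the total element count."
  (let ((chunks '())
        (chunk (make-array +chunk-size+ :element-type 'single-float))
        (index 0)
        (total 0))
    (loop for float = (read stream nil stream)
          until (eq float stream)
          do (when (= index +chunk-size+)   ; current chunk full
               (push chunk chunks)
               (setf chunk (make-array +chunk-size+
                                       :element-type 'single-float)
                     index 0))
             (setf (aref chunk index) (float float 1.0f0))
             (incf index)
             (incf total))
    (push chunk chunks)                     ; last, possibly partial chunk
    (values (coerce (nreverse chunks) 'simple-vector) total)))

(defun chunked-ref (chunks i)
  "Two-level accessor: shift to find the chunk, mask to find the slot."
  (aref (svref chunks (ash i (- +chunk-bits+)))
        (logand i (1- +chunk-size+))))
```

  Once the total is known, you could just as well copy the chunks into a
  single (make-array total :element-type 'single-float) and use plain
  AREF, trading one copy pass for simpler access.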

  In a fight against something, the fight has value, victory has none.
  In a fight for something, the fight is a loss, victory merely relief.

  Post with compassion: