From: Erik Naggum
Subject: Re: Heap- vs. stack-based fn call frames, was: how to validate input?
Date: 2000/04/26
Message-ID: <3165733245874405@naggum.no>
References: <87wvlv75lf.fsf@inka.de> <3165639388940721@naggum.no> <3165662049043666@naggum.no> <3905c241$0$226@nntp1.ba.best.com> <3165723214520205@naggum.no>
Mail-Copies-To: never
Organization: Naggum Software; vox: +47 8800 8879; fax: +47 8800 8601; http://www.naggum.no
User-Agent: Gnus/5.0803 (Gnus v5.8.3) Emacs/20.5
Newsgroups: comp.lang.lisp

* Flemming Gram Christensen
| How do you mean? The UltraSparc II and Athlon's and the like
| has prediction instructions already.

  I believe these instructions are called "prefetch", which is different
  from prediction.  "prediction" usually applies to branch prediction, but
  I'm talking about a similar automatic heap cache line prefetch or
  priming (when the memory is known to be written first) when a function
  call is coming up in the instruction stream.

  these instructions are fun to watch do weird things with one's ideas
  about cache line costs, but using them requires their actual presence,
  and some computation, for an effect you get for free from reusing the
  same memory the way a stack does.  most stacks are amazingly shallow,
  and the cache hit rates for stacks (including temporaries in call
  frames) are typically above 99.9%.  a heap allocation scheme walks over
  fresh memory while allocating and scatters the allocated call frames
  unless the heap is optimized like a stack, so to get hit rates that
  good, you have to do a huge amount of more or less explicit cache line
  prefetching, issuing lots of prefetch instructions that add to the
  computational load without necessarily having any effect (see the
  sketches below).

#:Erik
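
  A minimal sketch of what the explicit prefetching would look like: the
  toy allocator below bump-allocates a call frame out of fresh heap memory
  and issues one prefetch per cache line the frame covers.  It assumes
  GCC/Clang's __builtin_prefetch and a 64-byte cache line; the allocator,
  the constants, and the names are illustrative assumptions, not code from
  the post or from any particular Lisp implementation.

    #include <stddef.h>

    #define CACHE_LINE 64                 /* assumed line size */

    static char heap[1 << 20];            /* toy bump-allocated heap */
    static size_t heap_top;               /* no overflow check; just a sketch */

    /* Allocate a call frame from fresh heap memory and prime the cache
       lines it covers for writing.  This per-call loop is the extra work
       a stack never pays, because a stack keeps rewriting the same few,
       already hot, cache lines. */
    void *alloc_frame(size_t size)
    {
        char *frame = heap + heap_top;
        heap_top += size;

        for (size_t off = 0; off < size; off += CACHE_LINE)
            __builtin_prefetch(frame + off, 1, 3);   /* write, high locality */

        return frame;
    }

  The point is not that the prefetching helps; it is that it costs
  instructions and bookkeeping on every call, which is exactly the
  computation the post says you pay for an effect a stack gets for free.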
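
  And a toy illustration of the hit-rate claim itself: the stack-style
  call below reuses the same FRAME bytes on every invocation, so after the
  first call its cache lines stay hot, while the heap-style call walks
  into fresh memory every time.  The sizes and loop counts are assumptions
  for illustration; a real measurement would have to keep the compiler
  from eliminating the dead stores and would read cache miss counters
  rather than just run the loops.

    #include <stdlib.h>
    #include <string.h>

    enum { FRAME = 256, CALLS = 100000 };

    static char *heap_frames;

    static void stack_call(int i)
    {
        char frame[FRAME];                    /* same addresses every call */
        memset(frame, i, sizeof frame);
    }

    static void heap_call(int i)
    {
        char *frame = heap_frames + (size_t)i * FRAME;   /* fresh memory */
        memset(frame, i, FRAME);
    }

    int main(void)
    {
        heap_frames = malloc((size_t)CALLS * FRAME);
        if (!heap_frames)
            return 1;

        for (int i = 0; i < CALLS; i++)
            stack_call(i);                    /* one small, hot working set */
        for (int i = 0; i < CALLS; i++)
            heap_call(i);                     /* ~25 MB of distinct cache lines */

        free(heap_frames);
        return 0;
    }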