From: Erik Naggum
Newsgroups: comp.lang.lisp
Subject: Re: data hygiene [Re: Why is Scheme not a Lisp?]
Organization: Naggum Software, Oslo, Norway
Date: Fri, 22 Mar 2002 07:12:25 GMT

* Matthias Blume
| Suppose you can "spawn" as many sub-threads as you like at any point,
| but following a pointer has a fixed cost.

  Are you trying to make me pretend that spawning a sub-thread is free
  and following a pointer has a cost?  This is the part I simply
  reject.

| In the plist case, you need at least 2n time steps to traverse n
| key-value pairs because the fetch operations (mostly CDRs) are all
| data-dependent on their respective predecessors.  In the alist case,
| one thread could "run down" the spine of the alist in roughly n
| steps, spawning a new thread at every CAR.  The subthreads then
| independently check to see if their respective key is the one that
| we are looking for.  This latter work can effectively overlap with
| the guy running down the spine, so the total time can be less than
| in the plist case.

  I see that several people have been working this out on a mythical
  computer, not extant hardware.  I am frankly completely unimpressed
  with the arguments that have been made so far.  One does not
  optimize theoretical code for theoretical computers.

| On real hardware, _if_ the key comparison is cheap (e.g., EQ), then
| each of those subthreads will live only very shortly, so the amount
| of effective parallelism that needs to be supported by the hardware
| is bounded (and rather small).

  Well, at least they have to live long enough to grab the cons cell
  pointed to by the car and the contents of the car of that cell.
  Now, it is all well and good to have a CPU that can pipeline
  anything you want, but we are actually trying to talk to the same
  _memory_, right?  And that is not the memory in either the L1 or L2
  cache, right?  How many requests can the _memory_ pipeline from data
  that are going to be very close?  I mean, my CPU has 32-byte cache
  lines, meaning it will usually not do any less than 32-byte fetches,
  but the crucial thing is to prefetch _both_ the car and the cdr of
  all cells visited in order to obtain maximal memory performance.  To
  make this work, one would have to do massive analysis of the
  addresses that are actually fetched.  This is actually possible as
  long as one has to fetch from real memory instead of the L1 or L2
  caches, or even an L3 cache.
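  To make the fetch patterns concrete, the two accessors under
  discussion could be as simple as the following sketch.  These are
  illustrative reconstructions assuming EQ-comparable keys, not
  necessarily the plist-get and alist-get definitions posted earlier
  in this thread.

  (defun plist-get (plist key)
    ;; Two fetches per key/value pair, each CDR depending on the
    ;; previous one: a chain of 2n loads that cannot overlap.
    (loop for (k v) on plist by #'cddr
          when (eq k key) return v))

  (defun alist-get (alist key)
    ;; One CDR per pair down the spine; the CAR fetches hang off the
    ;; spine independently of one another, which is where the supposed
    ;; parallelism would come from.
    (loop for cell in alist
          when (eq (car cell) key) return (cdr cell)))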
  In order to make this a testable condition, there is no point in
  scanning some short list.  The list has to be at least 1 million
  entries long for the test conditions to ensure that all data cache
  lines are flushed (a rough harness along these lines is sketched at
  the end of this article).  To make that work, there would be some
  savings in cdr'ing ahead to fill half the L1 cache, so that the cons
  cells, which I assume will occupy 2 adjacent words, will already be
  in the L1 cache when the car traversal reaches them.  However, in
  total, there will be exactly as many memory references with a plist
  as with an alist, because they use exactly as many cons cells.  I
  maintain that on, e.g., the IA32, which I have spent some time
  learning how to optimize the living daylights out of, you will never
  see any difference in performance between an alist and a plist.
  There is no magic in this, just the willingness not to "suppose" one
  can do something that no processor actually does.

| Obviously, as Joe's tests demonstrate, getting any real advantage
| out of this on a real system is hard.  It would require tight
| coding, hardware that can actually do something like the above, at
| least in some sense, and would probably still not matter for
| relatively short lists.

  None of the compiled code I have seen has used prefetch
  instructions, and the pipeline on, e.g., the IA32, is not long
  enough to see these actual memory fetches coming.  Therefore,
  current tests exhibit nothing of the kind we would like to prove in
  this exercise.  In order to measure the impact of any savings, the
  best approach is probably to have a forerunner that visits both cars
  and cdrs in an alist and the cdrs in a plist.  It should probably be
  running at _least_ 8 cons cells ahead of their use, but something
  like 100 will probably make more sense, in order to give the memory
  time to fill up the cache lines.  This also indicates that there are
  very significant savings to be had from starting the storage of any
  object on a cache line boundary.

  But suppose your list is short enough that all its cons cells can
  fit in the L1 cache.  The best optimization approach is simply to
  call length on it to move all the cons cells into the L1 cache, and
  then to make sure that they stay there between every time you need
  to scan the list.  If you manage to expire the cache lines between
  calls to either assoc or getf, there is no point in this exercise at
  all, since the next time the list is needed, you will have to go out
  and prefetch it anyway, and since both assoc and getf must return
  the first occurrence, the likelihood of prefetching memory that you
  will never use is pretty high.

| By the way, this is all pretty silly anyway.  If I had to implement
| a dictionary data structure where the number of keys is large enough
| so that the above would matter, then I would *definitely* use
| something else that has better theoretical (and practical)
| complexity.

  Yup.  I have been trying to argue that assoc is more expensive
  because it is way more general and may do a _lot_ more work than
  getf, which could be as simple as my plist-get.  To get a user call
  to assoc to be as tight as my alist-get requires a lot of effort in
  both compiler and user code.  I think, however, that I have
  demonstrated exhaustively that there is no performance difference
  between an alist and a plist, and that performance should never
  enter the picture when deciding which to use.  For instance,
  performance is likely to be good for get-properties and for
  destructuring with keyword arguments, neither of which has a
  parallel for alists.
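  The large-list test described above might be set up roughly as
  follows.  The million-entry size is from the article; the fixnum
  keys, the variable names, and the absence of a prefetching
  forerunner are simplifying assumptions of this sketch.

  (defparameter *n* 1000000)

  ;; Million-entry lists, so the working set cannot sit in any cache.
  (defparameter *alist*
    (loop for i below *n* collect (cons i i)))

  (defparameter *plist*
    (loop for i below *n* nconc (list i i)))

  (defun bench ()
    ;; Look up a key near the end so both scans walk the whole list.
    ;; Fixnum keys under assoc's default EQL test and getf's EQ test
    ;; are assumed to behave, which holds on typical implementations.
    (time (assoc (1- *n*) *alist*))
    (time (getf *plist* (1- *n*))))

  Both lookups traverse exactly as many cons cells, so on the argument
  above the two timings should come out indistinguishable.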
///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.