Jason <email@example.com> wrote:
| Frank Buss wrote:
| > loop solution could look like this (but it is not very fast):
| > (defun occurences (list)
| >   (loop for i in (remove-duplicates list)
| >         collect (cons i (count i list)) into result
| >         finally (return (sort result #'> :key #'cdr))))
| This is so very terse! I'm amazed.
Well, it may be terse, but Frank was being kind when he said
"not very fast". In fact, for large input lists (of length N,
say) with many kinds of items (M, say), it can be dog slow!!
It's at *least* as bad as N * M, since it scans the *entire*
input list once for each kind of element. And it might be worse
than that, depending on the characteristics of your implementation's
REMOVE-DUPLICATES [but that's probably hash-based, and thus fast].
Note that the hash-table solutions you were given [such as Geoff's]
will take time closer to N + M, that is, N for populating the
hash table and M for dumping it.
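For reference, a hash-table version along those lines might look
like this (my own sketch of the idea, not necessarily Geoff's exact
code; the function name is made up):

```lisp
;; The N + M approach: one pass over the input to count, one pass
;; over the M distinct entries to collect, then the final sort.
(defun occurrences/hash (list &key (test #'eql))
  (let ((counts (make-hash-table :test test))
        (result '()))
    (dolist (item list)                    ; N: bump a counter per item
      (incf (gethash item counts 0)))
    (maphash (lambda (item count)          ; M: one entry per distinct item
               (push (cons item count) result))
             counts)
    (sort result #'> :key #'cdr)))         ; M log M for the sort
```

E.g., (occurrences/hash '(a b a c b a)) returns ((A . 3) (B . 2) (C . 1)).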
For N small enough to fit on a screen, both will run blazingly
fast -- you'd never be able to tell the difference -- but for
N = 10000 and M = 1000, the former is ~900 *times* slower.
Of course, the SORT occurs in both, and probably adds a term on
the order of M * log(M). If you *really* wanted to speed this up for
large inputs, you'd figure out how to count and sort simultaneously,
e.g., maybe use some kind of clever balanced tree to insert into,
or a pair of them, one indexed on the keys and one on the counts,
and rebalance both as needed.
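One concrete way to shave off the M * log(M) term, simpler than the
tree idea (again, just a sketch of mine): every count is an integer
between 1 and N, so a bucket sort keyed on the counts brings the whole
job down to roughly N + M:

```lisp
;; Count with a hash table as before, then bucket the distinct items
;; by their counts and walk the buckets from highest count to lowest.
;; Total work is O(N + M) -- no comparison sort at all.
(defun occurrences/bucket (list)
  (let ((counts (make-hash-table))
        (n 0))
    (dolist (item list)                    ; N: count items (and N itself)
      (incf n)
      (incf (gethash item counts 0)))
    (let ((buckets (make-array (1+ n) :initial-element '())))
      (maphash (lambda (item count)        ; M: drop each item in its bucket
                 (push item (aref buckets count)))
               counts)
      (loop for count from n downto 1      ; N: sweep buckets, high to low
            nconc (loop for item in (aref buckets count)
                        collect (cons item count))))))
```

For the toy input above, (occurrences/bucket '(a b a c b a)) also
yields ((A . 3) (B . 2) (C . 1)); the win only shows up once M gets
large enough for the sort to matter.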
Or something... ;-} ;-}
Rob Warnock <firstname.lastname@example.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607