Subject: Re: Common Lisp vs Scheme
From: (Rob Warnock)
Date: 14 Jul 2002 03:16:35 GMT
Newsgroups: comp.lang.scheme
Message-ID: <agqqej$8f15v$>
NOTA BENE: Anyone seriously interested in knowing what Common Lisp *is*
(as opposed to opinions, including mine) should read (or download) the
"Common Lisp HyperSpec" (CLHS), available from multiple locations, one of
which is <URL:>.

	The Common Lisp HyperSpec is not the ANSI Common Lisp standard,
	but is based on that standard (with permission from ANSI and X3).

["Based on" is somewhat of an understatement -- it was machine-converted
directly from the LaTeX that was submitted to ANSI.]

Dotted-decimal citations below all refer to the CLHS.

Feuer <> wrote:
| Kaz Kylheku wrote:
| > Some things that are in Lisp, but not in Scheme:
| > - expressions can yield multiple values
| Incorrect.  R5RS includes multiple values.

Yes, I'm sure Kaz knows that. What he probably should have said
(but didn't because it's *so* much longer) is:

- Expressions can yield multiple values. If the receiving context
  is expecting fewer values than provided, excess values are silently
  discarded. Likewise, if there are too few values provided, receiving
  contexts silently provide NIL for the missing values. This allows
  the convenience of defining functions that always provide optional
  data in additional values without worrying about providing 1-, 2-,
  3...-valued versions for each of them. E.g., the built-in CL functions
  FLOOR, ROUND, and TRUNCATE *always* provide the remainder as a second
  value, which is harmlessly discarded in single-valued contexts such
  as (+ 34 (floor xyz 12)).
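
  To make that concrete (standard CL behavior, shown as forms with
  their results in comments):

	(floor 17 5)				; => 3, 2
	(+ 100 (floor 17 5))			; => 103 (second value silently dropped)
	(multiple-value-bind (q r s) (floor 17 5)
	  (list q r s))				; => (3 2 NIL) (missing value filled with NIL)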

| Whether this is a good idea is an entirely different question...

Yes, they are. ;-}  That is, there are two questions here:

- Is even having multiple values a good thing? [Scheme & CL say "Yes".]

- If multiple values are provided, should provider/consumer
  arity matching be "strict" (as in Scheme) or "loose" (as in CL)?

| > - separate function and value bindings for symbols
| True.  The Schemers see this as a significant advantage to Scheme,
| the Lispers see it as an advantage to Lisp.

And they will *never* agree!! Advocating the non-default position
in either newsgroup is an almost-certain way of starting a flame war
(or at the very least, instantly being labelled a troll).

| > - explicit FUNCTION operator for creating closures, with shorthand
| >   notations provided by #' and LAMBDA macro.
| Scheme instead has a simple, consistent way to create closures.
| If you really need anything else, you can write macros.

This is a non-issue. CL's FUNCTION arises as a necessary consequence
of having separate function and value namespaces. [See previous point
and next point.]
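
A small sketch of why FUNCTION (#') is needed at all, given the two
namespaces (nothing here beyond standard CL; DOUBLE is a made-up name):

	(defun double (n) (* 2 n))
	(let ((double 10))		; a *value* binding; the function is untouched
	  (funcall #'double double))	; => 20 -- #' selects the function binding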

| > - simpler list manipulation: extracting elements from empty list
| >   yields NIL rather than errors, reducing the cases that have to
| >   be handled due to not having to test for empty lists.
| And making it much harder to track down bugs in list-processing code.

The CL community prefers succinct code and convenient defaults to
enforced type purity. This is an explicit difference in the goals/choices
of the two communities -- a "religious" choice, if you will, that will
*NEVER* be changed by disputing it. [A less-emotional term might be
"political party". <URL>
is well worth reading in this regard.]
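
To make the "convenient defaults" concrete (a standard-CL sketch):

	(car '())		; => NIL, not an error
	(cdr '())		; => NIL
	(second '(1))		; => NIL -- no need to check the length first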

Note that language features cannot [usefully] be debated in isolation.
Multiple features interact to provide different local minima, "sweet
spots" if you will, in language-design space.

In this case, CL's choices of a unified false/NIL, relaxed matching of
values & contexts, and NIL as a silent default for many "impossible"
results produces an economy of expression, a perspicuity of code, that
the CL community believes is more valuable than "excessive" error-checking.

The Scheme community (at least as expressed in its standards), has
adopted a *different* choice of interacting features [distinct #f & (),
strict values matching, and explicit errors for (some) "impossible"
operations]. Not necessarily "better" or "worse", mind, but definitely
different, and in its own way *also* a local minimum in feature space.

| > - package system
| What is a package system?

A kind of module system, but only for names:

	11.1.1 Introduction to Packages
	A package establishes a mapping from names to symbols. At any
	given time, one package is current. The current package is the
	one that is the value of *package*. When using the Lisp reader,
	it is possible to refer to symbols in packages other than the
	current one through the use of package prefixes in the printed
	representation of the symbol.

Small example:

	> *package*
	#<PACKAGE COMMON-LISP-USER>
	> (defvar gorp 13)
	GORP
	> (make-package "FOO")
	#<PACKAGE FOO>
	> (in-package "FOO")
	#<PACKAGE FOO>
	> (defvar gorp 45)
	GORP
	> gorp
	45
	> (in-package "COMMON-LISP-USER")
	#<PACKAGE COMMON-LISP-USER>
	> gorp
	13
	> foo::gorp
	45
	> foo:gorp

	*** - READ: #<PACKAGE FOO> has no external symbol with name "GORP"
	1. Break> 

That is, there is no *enforced* hiding, in that one may always refer
to *any* symbol in another package by using the "<pkg>::" prefix,
but the normally-used single-":" prefix may only be used with symbols
that were explicitly "exported" from the package, which provides a
measure of protection against inadvertent misuse.

Packages can USE other packages, which means that when a symbol
is being interned and is not found in the current package, it is
automatically searched for in the packages in the current package's
PACKAGE-USE-LIST (which, for programs at compile time, almost
*always* includes the COMMON-LISP package).
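
A sketch of USE plus exporting (MYLIB, GREET, and HELPER are made-up
names for illustration):

	(defpackage "MYLIB"
	  (:use "COMMON-LISP")
	  (:export "GREET"))		; GREET is MYLIB's only external symbol
	(in-package "MYLIB")
	(defun greet () "hello")
	(defun helper () "hidden")	; internal
	(in-package "COMMON-LISP-USER")
	(mylib:greet)			; => "hello" -- ":" works, GREET is exported
	(mylib::helper)			; => "hidden" -- "::" reaches internals anyway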

There is a special package KEYWORD, whose prefix normally prints as
the empty string (that is, 'KEYWORD:FOO => :FOO):

	Interning a Symbol in the KEYWORD Package
	...when a symbol is interned in the KEYWORD package, it is
	automatically made to be an external symbol and is automatically
	made to be a constant variable with itself as a value.

	The KEYWORD Package
	This makes it notationally convenient to use keywords when
	communicating between programs in different packages. For example,
	the mechanism for passing keyword parameters in a call uses
	keywords[1] to name the corresponding arguments; see Section
	3.4.1 (Ordinary Lambda Lists).
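
Concretely (standard behavior; SCALE is a made-up function):

	:foo				; => :FOO -- keywords evaluate to themselves
	(eq :foo ':foo)			; => T -- the same symbol from any package
	(defun scale (x &key (by 2)) (* x by))
	(scale 10 :by 5)		; => 50 -- the keyword names the argument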

A further example of interacting features: The separate function/value
namespaces, combined with the fact that users are forbidden[1] to redefine
functions in the COMMON-LISP package (though they may override them in
their *own* packages), means that unintended variable capture in macros
is much less of a problem for CL programmers, so that a more-hygienic
macro system than DEFMACRO (which can always use GENSYM for hygiene)
is not seen as necessary by the CL community.

[1] CLHS " Constraints on the COMMON-LISP Package for
    Conforming Programs"
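
The GENSYM idiom mentioned above, sketched (SWAP is a made-up macro
name; CL already provides ROTATEF for this particular job):

	(defmacro swap (a b)
	  (let ((tmp (gensym)))		; fresh, uninterned symbol each expansion
	    `(let ((,tmp ,a))		; so ,TMP can never capture a user variable
	       (setf ,a ,b)
	       (setf ,b ,tmp))))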

| > - symbol property lists
| Another one of those things that the Lispers love and the
| Schemers were glad to leave behind.

See above notes on "interacting features".

| > - dynamically scoped variables, a.k.a. special variables
| The only real language I know of that does these well is
| Glasgow Haskell...

MzScheme's "fluid-let" and "make-parameter/parameterize" are
also worth a look.
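
For comparison, the CL side in miniature (*INDENT* and SHOW are
made-up names):

	(defvar *indent* 0)		; DEFVAR proclaims *INDENT* special
	(defun show () *indent*)
	(let ((*indent* 4))		; dynamic rebinding, visible to callees
	  (show))			; => 4
	(show)				; => 0 -- the outer binding is restored on exit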

| > - PROGV
| Wazzat?

[Note: Many CL block forms are named PROGx, e.g., PROG/PROG1/PROG2/PROGN.
PROGN is the CL equivalent of Scheme's "begin".]

PROGV is an admittedly-arcane form, seldom used (except very occasionally,
when it is deemed *essential*!), for creating & binding special variables
at run time -- or, I should say, whose *names* may be determined at run time:

	5.3 The Data and Control Flow Dictionary
	Special Operator PROGV
	Among other things, PROGV is useful when writing interpreters
	for languages embedded in Lisp; it provides a handle on the
	mechanism for binding dynamic variables.

Contrived grotesque example:

	> (defvar x 3)
	> (defvar y 4)
	> (defvar z 5)
	> (defun foo (n)
	    (list (nth n '(x y z)))) 
	> (defun bar ()
	    (list x y z))
	> (bar)
	(3 4 5)
	> (progv (foo 1) '(67)
	    (bar))
	(3 67 5)

| > - notation for defining macros with lambda list destructuring
| What's that mean?

"Destructuring" refers to the taking-apart of argument-list actuals
into pieces to be bound to lambda (or macro) formals. "Lambda list
destructuring" reflects that CL lambda lists allow this to be done in a
more flexible, data-aware way than simple one-to-one matching, in
particular, the handling of CL lambda-list keywords such as &OPTIONAL,
&KEY, and &REST. But in addition to these, which apply to all
functions & lambdas, macros may use a more powerful form of
template-based destructuring:

	Destructuring by Lambda Lists
	Anywhere in a macro lambda list where a parameter name can
	appear, and where ordinary lambda list syntax (as described in
	Section 3.4.1 (Ordinary Lambda Lists)) does not otherwise allow
	a list, a destructuring lambda list can appear in place of the
	parameter name. When this is done, then the argument that would
	match the parameter is treated as a (possibly dotted) list, to
	be used as an argument list for satisfying the parameters in the
	embedded lambda list. This is known as destructuring.

	Destructuring is the process of decomposing a compound object
	into its component parts, using an abbreviated, declarative
	syntax, rather than writing it out by hand using the primitive
	component-accessing functions. Each component part is bound to a
	variable.

	A destructuring operation requires an object to be decomposed, a
	pattern that specifies what components are to be extracted, and
	the names of the variables whose values are to be the components.

A simple (if contrived) example:

	> (defmacro with-foo ((x (y z)) w &rest q)
	    `'(x ,x y ,y z ,z w ,w q ,q))
	> (with-foo ((1 2) ((3 4 5) (6 7))) 8 9 10)
	(X (1 2) Y (3 4 5) Z (6 7) W 8 Q (9 10))

or in the style of some of the binding macros:

	> (with-foo ((a 43)
		     ((b c d) (12 34 67)))
	      (func e f)
	    (some code)
	    (goes here))
	(X (A 43)
	 Y (B C D)
	 Z (12 34 67)
	 W (FUNC E F)
	 Q ((SOME CODE) (GOES HERE)))

As you can see, this gives the user a limited but built-in
template-matching capability in their macros.
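
The same template matching is also available outside of macros, via
DESTRUCTURING-BIND:

	(destructuring-bind (a (b &rest c)) '(1 (2 3 4))
	  (list a b c))			; => (1 2 (3 4))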

| > - hash tables
| How _many_ hash tables?  And how well do they behave in _my_ application?

How many do you want?

	18.2 The Hash Tables Dictionary
	make-hash-table &key test size rehash-size rehash-threshold
	  => hash-table
	test---a designator for one of the functions eq, eql, equal,
	or equalp. The default is eql.

The SIZE, REHASH-THRESHOLD, & REHASH-SIZE keywords are hints (only)
to the implementation about the initial size, desired occupancy before
growing the table, and amount to grow. Using these, you can do a *lot*
of tuning, e.g.:

	(make-hash-table :size 100 :rehash-size 1.5 :rehash-threshold 0.7)

says to start with 100 slots, grow the table each time 70% of the
current slots are used, and when you grow it, add 50% to the size.
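
Basic use, which also ties back to the multiple-values point above,
since GETHASH returns both the value and a found-p flag (*H* is a
made-up variable name):

	(defvar *h* (make-hash-table :test #'equal))	; EQUAL: string keys compare by contents
	(setf (gethash "alpha" *h*) 1)
	(gethash "alpha" *h*)		; => 1, T
	(gethash "beta" *h* :none)	; => :NONE, NIL -- default value plus "not found"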

Good enough for you?


Rob Warnock, 30-3-510		<>
SGI Network Engineering		<>
1600 Amphitheatre Pkwy.		Phone: 650-933-1673
Mountain View, CA  94043	PP-ASEL-IA

[Note: and aren't for humans ]