allegro-cl archives 1999-7-6
From: Steve Haflich Subject: Re: Bug in macrolet? Date: 1999-7-6 23:21
From: Jeff Dalton <jeff at aiai.ed.ac.uk>
Subject: Re: Bug in macrolet?
To: Steve Haflich <smh at franz.com>, Antonio Leitao <aml at gia.ist.utl.pt>
Cc: <allegro-cl at cs.berkeley.edu>
Steve Haflich <smh at franz.com> wrote:
I would certainly *expect* an error when both running interpreted
and when the code is compiled.
This "expect" concept isn't appropriate when analyzing what a standard
says. It is appropriate only when drafting such a standard, and when
considering whether you like the standard. I certainly agree that the
interpreter behavior violates a CL macro programmer's notion about the
"whenness" of variables, and it would probably be better in this case
if some sort of error were signalled (because the code isn't portable
and is probably massively confused), but I'll argue with some examples
below that it is very hard to get a herd of CL programmers to agree on
the details.
So ... here is a relevant passage (from the "Special Operator FLET,
LABELS, MACROLET" entry in the Hyperspec):
The macro-expansion functions defined by macrolet are defined in
the lexical environment in which the macrolet form appears.
Declarations and macrolet and symbol-macrolet definitions affect
the local macro definitions in a macrolet, but the consequences are
undefined if the local macro definitions reference any local
variable or function bindings that are visible in that lexical
environment.
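To make the quoted passage concrete, here is a minimal sketch (hypothetical code, not from the thread) of the kind of macrolet whose consequences the passage leaves undefined:

```lisp
;; Hypothetical illustration: the expander for LOCAL-MAC references the
;; lexical variable X bound by the surrounding LET.  Per the passage
;; above, the consequences of this reference are undefined.
(defun demo ()
  (let ((x 42))
    (macrolet ((local-mac ()
                 ;; X names a variable binding visible in the lexical
                 ;; environment where the MACROLET form appears, but an
                 ;; expander runs at macroexpansion time, when no such
                 ;; value need exist.
                 x))
      (local-mac))))
```

An interpreter that simply evaluates expanders in the full lexical environment may return 42 from (demo), while a compiler, which expands the macro before X is ever bound, will typically signal an error; both outcomes are permitted precisely because the behavior is undefined.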
The remaining question is how Allegro manages to get the behaviour it
does, since it is (I'd say) rather unexpected. And why?
Neil explains it correctly below.
From: Neil Goldman <goldman at isi.edu>
It doesn't seem all that hard to come up with this behavior. The ANS seems
to say that the body of a lexical macro (macrolet) definitely has access to
PART of the lexical environment in which it is defined (declarations and
other lexical macros) but NEED NOT have access to lexical variables (which
of course would make no sense). If the interpreter simply makes the ENTIRE
lexical environment available, you would get the behavior being described.
What is surprising about the ANS is that it does not require an error to be
signalled when a lexical variable is referenced. I suspect (but with no
explicit evidence) that they simply intended to allow (but not require) an
implementation to let a reference to a variable X be interpreted as a
reference to a global definition of X (e.g., a defconstant) in this
context, even if a lexical binding of X intervened, not that they intended
to allow an implementation to actually make a lexical binding of X visible
in interpreted code. But that is pure speculation on my part.
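The speculated reading can be sketched as follows (hypothetical names; this illustrates one possible interpretation discussed above, not sanctioned behavior):

```lisp
;; Hypothetical sketch of the speculated reading: X has a global
;; definition, so a reference to X inside an expander could be resolved
;; to that global definition.
(defconstant x 'global-value)

(macrolet ((m () (list 'quote x)))
  (m))
;; With no intervening lexical binding this is unproblematic: the
;; expander evaluates X to GLOBAL-VALUE.  The speculation is that the
;; committee meant to permit this same global resolution even when a
;; (let ((x ...)) ...) intervened between the DEFCONSTANT and the
;; MACROLET, not to permit what Allegro's interpreter does, namely
;; making the intervening lexical binding itself visible to the expander.
```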