Subject: Re: Lambda functions
From: Erik Naggum <>
Date: Sun, 23 Sep 2001 02:47:31 GMT
Newsgroups: comp.lang.lisp
Message-ID: <>

* Kaz Kylheku
> what is going on behind the scenes is that lambda is actually
> a macro, which expands (lambda (x) (+ x x)) to (function
> (lambda (x) (+ x x)))

* Anette Stegmann
> The second lambda must be different from the first one, because 
> otherwise there would be an infinite macro recursion. Therefore 
> it is confusing to use the same name for both.

  function is a special form.  It does not evaluate its argument.  (This is
  _really_ basic stuff, and it behooves you to consult reference material to
  get simple answers like this instead of wasting everybody else's time
  explaining things like this to you.)
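  A minimal sketch of the point (the name DOUBLE is mine, purely for
  illustration):

```lisp
;; FUNCTION is a special form: its argument is not evaluated, it is read
;; as the name of a function (a symbol or a lambda expression) and the
;; corresponding function object is returned.  #'FOO is reader shorthand
;; for (FUNCTION FOO).
(defun double (x) (+ x x))          ; illustrative name

(funcall (function double) 2)       ; => 4
(funcall #'double 2)                ; => 4, same thing
(funcall #'(lambda (x) (+ x x)) 2)  ; => 4; the inner LAMBDA is not
                                    ; evaluated as a form, so there is
                                    ; no macro recursion
```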

> What if I do not want to have a closure?  (which I understand to be
> something like the lambda with x replaced by a generated unique symbol).

  Huh?  A closure captures the bindings of the free variables in the lambda
  expression.  Various forms of renaming are employed in the lambda
  calculus, but that is a way to describe argument passing formally, not
  something we actually do in the code.
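  A short sketch (MAKE-COUNTER is an illustrative name, not from the
  thread):

```lisp
;; A closure captures the binding of its free variables, not a renamed copy.
(defun make-counter ()
  (let ((count 0))               ; COUNT is free in the inner LAMBDA
    (lambda () (incf count))))   ; the closure shares that one binding

(defvar *c* (make-counter))
(funcall *c*)   ; => 1
(funcall *c*)   ; => 2 -- the same binding each time; no renaming involved
```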

> Well, IIRC in the "old LISPs" there were no closures.

  Yes, because there were no lexical variables.
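  To see the difference, a sketch with an assumed special variable *DYN*
  (all names here are illustrative):

```lisp
(defvar *dyn* 1)                  ; special: dynamically scoped

(defun dyn-reader ()
  (lambda () *dyn*))              ; nothing is captured; *DYN* is looked
                                  ; up at call time

(defun lex-adder (n)              ; N is lexical, so the LAMBDA closes over it
  (lambda (x) (+ x n)))

(funcall (dyn-reader))            ; => 1
(let ((*dyn* 42))
  (funcall (dyn-reader)))         ; => 42: the dynamic binding in effect
                                  ;    at call time wins
(funcall (lex-adder 10) 5)        ; => 15: N survives after LEX-ADDER returns
```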

> I still find it conceptually simple to see things this way:

  It would be massively _more_ conceptually simple for you if you could
  become willing to wrap your head around the way these things are actually
  done in the language under study.  Nobody cares what you find simple or
  not, really, but the language is there and does what it does, and you
  should care to see its simplicity instead of trying to impose on it the
  one that you bring with you.  If you cannot hack the language as it is,
  some other language may suit you better if you really insist on giving
  _your_ conceptual simplicity higher precedence than exploiting and
  employing the vast resources made available by people within the existing
  framework.  This has much to do with the personal attitude problem I
  label "ignorant arrogance", and it occurs with people at every skill
  level, even people who _supposedly_ know the language well, but still
  insist that if/when/unless are braindamaged because they break with their
  pre-existing view of the world, and that view is of _course_ much more
  important than everybody else's, at all costs.  Understanding how
  something came to be and why other people think it is great is part of
  that humility that comes with sufficient intelligence to recognize that
  and when others are way smarter than you are.  Failure to recognize this
  causes you to behave stupidly and that is tremendously annoying because
  those who fail to recognize it are also unlikely to grasp when to let go
  of their preconceptions.

> A name is naming an expression and on evaluation is replaced by its value.
>   ((lambda (x) (+ x x)) 2)
> when I use (setq f (lambda (x) (+ x x))) then I am naming the lambda list
> by the name of "f" and so in "(f 2)" the name f then should be replaced
> by its value "(lambda (x) (+ x x))" giving "((lambda (x) (+ x x)) 2)"
> which can then be evaluated.

  You may find this conceptually simple, but real Lisps decided long ago
  that the human language tendency to have verbs and nouns draw from the
  same lexicon, but mean different things according to context actually
  works tremendously well.  Lisp was developed in the English language
  community.  Algol and several other languages that fight against this
  tendency in human languages were developed in non-English communities.
  If you do not like the ability to spell a verb and a noun the same way,
  take it up with English or German, not with languages that evolved with
  designers and users speaking the respective languages.

> It is because CL has multiple values for a name depending on the
> context. I am not sure if I like this, because it is conceptually simple
> to have a name be just a name for /one/ value.

  Please accept that some of us, native and almost-native English speakers,
  cannot fathom how other languages can be useful when they have to invent
  new words just because a word of a different type is spelled the same
  way.  E.g., my native language is Norwegian, essentially close to German,
  which has the annoying tendency that the noun and verb are completely
  different, but I have been a bilingual American-English speaker since
  about age 6, and to me the ability to name verbs and nouns the same is a
  conceptual simplicity that far outweighs the _artificial_ simplicity of
  associating only one value or meaning with a word.  Matter of fact, I
  appreciate the fact that most languages afford numerous interpretations
  of each word, and especially enjoy the literal and the figurative meaning
  that we can trace back to the classic languages.

> This is as if, for example and comparison, in a conventional language
> every name could have a number and a string value, so:
>   a = 2
>   a = "hello"
>   printnum a
>   printstr a
>   printnum a
> would print "2hello2".  The first impression I would have is that this
> makes things more complicated than necessary.

  I find it utterly amazing that people who object to the function/data
  separation have to invent some annoyingly stupid example with two kinds
  of data instead of actually trying to get the very simple idea that
  functions are _used_ very differently from data so often that it makes
  sense to separate the two from each other.  The only cost is that we read
  the first position in a form as an operator and everything else as an
  operand, and then when we need to talk about the operator, we use an
  adjective like function, and when we need to _call_ some data value that
  is a function, we use funcall the exact same way apply is used to call
  both functions (with the above adjective) and data.  Please accept that
  some of us find this fantastically natural and think in terms that make
  conflating nouns and verbs tremendously confusing.  Scheme does this, and
  its users are _hysterical_ about issues whose emotionalism we real Lispers
  simply do not understand: hygienic macros, for instance, guard
  against clobbering the functional value of variables that some programmer
  is likely to use for data.  That is a stupid solution to a non-existing
  problem as seen from real Lisps: They made the obvious mistake of getting
  rid of the verb/noun distinction, and now they have to pay for it.  Duh!
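
  In concrete terms, assuming the poster's F (DEFPARAMETER here stands in
  for his SETQ):

```lisp
(defparameter f (lambda (x) (+ x x)))  ; F's *value* is a function object

;; (f 2) would look up F in the function namespace, where nothing is
;; defined, so we call the value explicitly:
(funcall f 2)         ; => 4
(apply f '(2))        ; => 4; APPLY takes the arguments as a list
(apply #'+ 1 '(2 3))  ; => 6; APPLY works the same way on a named function
```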

> I actually dislike it that this apparent self-application is not
> happening, because here the "notation lies" in my eyes.  I dislike it
> that it is possible that the same name is naming two totally different
> things in this expression (x x).

  Get over it.  Or use Scheme.  In any case, your dislike is _completely_
  irrelevant to real Lisps.  _You_ deal with it, or not.  The language
  stays the same.  If you are irritated by it, go away and spend your life
  on something less irritating.  Personally, I could not care less what
  some Scheme freak is irritated by, but I find it amazing that so many
  Scheme freaks waltz over here to comp.lang.lisp and spout the arrogance
  about what they dislike as if they expect to be considered anything but a
  real pest in a forum that has explicitly accepted, embraced, and enjoys
  what you "dislike".  Do you do this in real life, too, by the way?  Do
  you waltz into political party meetings and express your _dislike_ for
  core elements of their program?  Do you walk into stores and express your
  _dislike_ for the way they choose to display their products?  If so, may
  I suggest Augustine's prayer?

        God grant me serenity to accept the things I cannot change,
                      courage to change the things I can,
                   and wisdom to know the difference.

> When, as I assume, after (defvar x 3) the second x in (x x) is
> effectively replaced by 3, how would I then write a self-application,
> i.e., an application of a function to itself?

  I think you need to get hold of a reasonably up-to-date Lisp text.  You ask
  a bunch of questions that assume so much that one would have to return to
  first principles to re-teach you how to see the problems you have come to
  think of as obvious.  I wonder why this happens to people, incidentally.
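  For the record, a sketch (the function definition of X and the name
  SELF-APPLY are mine, for illustration):

```lisp
;; In a Lisp-2 the function and variable namespaces are separate, so the
;; same symbol can name both, and no "replacement" of one by the other
;; ever takes place:
(defvar x 3)
(defun x (n) (* n 10))

(x x)                 ; => 30: operator X is the function, operand X the value

;; Genuine self-application passes the function object itself as data:
(defun self-apply (f) (funcall f f))
(self-apply (lambda (g) (if (functionp g) :got-myself nil)))  ; => :GOT-MYSELF
```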

  I am studying SQL at the moment, and I have the "pleasure" of playing
  with a rather interesting set of implementations that have a rather
  interesting concept of adherence to the database theory that I studied
  several years ago, not to mention conformance to the standard.  However,
  if I am to be good at SQL, there is simply no way around accepting what
  is out there _until_ I am good enough at this to know what to demand and
  how to achieve what I want.  Like, MySQL is sufficient for a lot of small
  applications, but lacking transactions (i.e., commit and rollback), you
  invite a veritable disaster if you base real applications on it.  But
  more importantly, qua language, SQL has to be understood on its own
  terms, not the terms of some _other_ language.  Oracle has PL/SQL and is
  flirting heavily with Java these days, and people use Perl to interface
  with several other databases.  I have found the need to learn Perl well,
  too, and fortunately, Perl's huge suck factor is gradually decreasing
  since it is now possible to refer to variables sanely.  (Note, however,
  that Perl also makes a big issue about separating verbs and nouns.  Note
  also that the SGML family of languages has lots of namespaces: elements,
  attributes, entities, unique ids, etc, and that people find this simple
  and intuitive to deal with, because the human brain is obviously wired
  for dealing with context.  At least English-speaking human brains.)

  Why did that stupid George W. Bush turn to Christian fundamentalism to
  fight Islamic fundamentalism?  Why use terms like "crusade", which only
  invokes fear of a repetition of that disgraceful period of Christianity
  with its _sustained_ terrorist attacks on Islam?  He is _such_ an idiot.