Subject: Re: Could CDR-coding be on the way back?
From: Erik Naggum <erik@naggum.net>
Date: 12 Dec 2000 21:01:17 +0000
Newsgroups: comp.lang.lisp,comp.arch
Message-ID: <3185643677174540@naggum.net>

* Paul Wallich
| I expect that doesn't make it impossible; instead, if anything, it
| makes it likely that Usenet-N will be some kind of horrific distributed
| database that absolutely requires a flat address space to work...

  I expect that only some headers will be passed around very shortly,
  with really clever cache propagation protocols that make clients
  retrieve an article by message-id on demand, ultimately from the
  originating server, cutting traffic and disk requirements down to what
  is actually used, and killing half the stupid spam problem, too.  News
  traffic will follow the readers, not always flood the whole Net.  And
  since we're all fully connected (in N months, anyway), the off-line
  generation of news readers and the desire to keep a full copy of
  today's cache/catch of USENET locally will simply go away.  Nobody in
  their right mind has a desire to keep a "local copy" of the whole
  World Wide Web just to read an infinitesimally small fragment of it,
  yet that is what people do with USENET because the way it has been
  done up to now is hard to get away from, even though it really is no
  worse than letting some large ISP do it the right way.  The bonus of
  such a
  design is that most originating servers would be able to hold on to an
  article _much_ longer than the receiving servers of today can, and
  that, too, would relieve most people of the need to keep local copies
  of everything.
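
  To make the on-demand part concrete, here is a minimal sketch (in
  Common Lisp, with a hypothetical FETCH-FROM-ORIGIN standing in for
  whatever transport such clients would actually speak) of the client
  side: a cache keyed on message-id, consulted first, with a fetch from
  the originating server only on a miss.

  ;; Minimal sketch, not a protocol proposal: a client-side article
  ;; cache keyed on message-id, filled on demand from the originating
  ;; server.  FETCH-FROM-ORIGIN is a hypothetical stand-in for the
  ;; actual transport (NNTP's ARTICLE <message-id>, or a successor).

  (defvar *article-cache* (make-hash-table :test #'equal)
    "Articles already retrieved, keyed by message-id string.")

  (defun fetch-from-origin (message-id origin-host)
    "Hypothetical transport stub; a real client would contact ORIGIN-HOST."
    (declare (ignore origin-host))
    (error "No transport defined; cannot fetch ~A." message-id))

  (defun get-article (message-id origin-host)
    "Return the article for MESSAGE-ID, from the cache on a hit,
  otherwise from the originating server, caching the result."
    (or (gethash message-id *article-cache*)
        (setf (gethash message-id *article-cache*)
              (fetch-from-origin message-id origin-host))))

  The interesting design question is where such caches would sit -- in
  the reader, at the ISP, or both -- and how the header-only feed names
  the originating server; the sketch above only shows the lookup
  discipline.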

#:Erik
-- 
  "When you are having a bad day and it seems like everybody is trying
   to piss you off, remember that it takes 42 muscles to produce a
   frown, but only 4 muscles to work the trigger of a good sniper rifle."
								-- Unknown