From ...
Path: archiver1.google.com!news2.google.com!newsfeed2.dallas1.level3.net!news.level3.com!news-out.visi.com!petbe.visi.com!easynet-quince!easynet.net!feed.news.tiscali.de!uio.no!nntp.uio.no!not-for-mail
From: Erik Naggum
Newsgroups: comp.lang.lisp
Subject: Re: O'reilly subjugated to the Lisp juggenaut (well, almost ;-)
Date: 18 Jan 2004 03:17:14 +0000
Organization: Naggum Software, Oslo, Norway
Lines: 114
Message-ID: <3283384634718441KL2065E@naggum.no>
References: <866764be.0401071538.51364b65@posting.google.com> <3283054407117609KL2065E@naggum.no> <2hbrp6ibno.fsf@vserver.cs.uit.no>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Trace: readme.uio.no 1074395836 12306 129.240.65.210 (18 Jan 2004 03:17:16 GMT)
X-Complaints-To: abuse@uio.no
NNTP-Posting-Date: Sun, 18 Jan 2004 03:17:16 +0000 (UTC)
Mail-Copies-To: never
User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3
Xref: archiver1.google.com comp.lang.lisp:10214

* Frode Vatvedt Fjeld
| Could you expand on this observation?  I mean, how were they too
| successful, and how did this cause their deaths?

Being too successful is one of life's biggest risks, but a risk that has received relatively little attention, primarily because the too successful simply die off after a very brief period, leaving few, if any, traces of their existence.

What makes natural selection work is that regardless of which random factor constitutes the crucial advantage that allows breeding, a large number of incidental factors are inherited from both parents at random but without perfect fidelity, so even after a long chain of successful breeding of the advantageous factor, all sorts of incidental factors show variation.  This means that when the conditions that made the advantageous factor advantageous change, there will be a large number of previously non-winning individuals who are suddenly better adapted than the previously winning individuals.  Over time, conditions always change, so various factors are selected for, and over sufficient time, a large number of advantageous factors are present in the population.

If, however, one factor is too successful, it will continue to be the winning factor regardless of the variation in the other factors over the range of variation in the conditions, and therefore will stifle the development of other advantageous factors until the conditions change sufficiently that it no longer is the winning factor.  At this point, the whole population is ill prepared for the change, and may well perish entirely if the winning factor accidentally becomes the matching factor for a disease or a predator.

For human optimization of winning factors, we have another problem: the more we optimize a particular solution for a particular condition, the more costly it will be to acquire the same optimized match for a changed condition, for we will not tolerate that somebody else just happens to be better at it while we perish.  Therefore, as conditions change and competition drives us to optimize, people will voluntarily become too successful in the sense that they resist change and work to maintain the advantage by presenting the necessary adaptation as a cost that they cannot afford.

The Lisp Machines were heavily optimized for their particular (if not peculiar) conditions and were effectively much more dependent on those conditions than less optimized solutions, which could replace parts of the system without incurring large development costs to regain the advantages.
The tight coupling between software and hardware became a problem when cheaper and faster hardware arrived, hardware that would have required massive development effort to retain the advantages of the proprietary hardware, which was, after all, developed under intense pressure to make the software run fast enough.

Software developers know better than most people how destructive intense optimization pressure can be to the core design, and how steeply the cost of each further increase in performance rises.  We still run software that was designed several decades ago, and although the optimization criteria of modern Intel processors are vastly different from those of the early processors, we find that most optimizers of Intel code still optimize for processors from sometime in the early to mid-1990s.

Optimization is generally detrimental to future success, but it is the only way to accomplish present success in competition with others who are equally interested in short-term results.  In fact, when just one of the competitors becomes interested in short-term results and hopes to profit sufficiently to offset the risk to future profits, it takes more guts than most people can muster to stick with marathon runners as others rush to support and profit from sprinters.  It doesn't take a genius to figure out that optimizing for short-term profit will be the death of long-term profitability, but people have made short-term decisions for decades now and they still wonder why the future is less bright and much less certain.

In the Lisp Machine case, being too successful meant that they failed to adapt in time when the external conditions changed.  Nothing in the success of the Lisp Machines indicated that they were on the wrong track, quite the contrary, until they were eclipsed by much cheaper hardware that took advantage of a few of their incidental features and dropped the crucial features because of the cost.  Depending too much on their relatively few winning factors and focusing too much on their development made it harder for other factors to evolve properly at the same time, and when these other factors were suddenly advantageous in the market, the previous winning factors became liabilities.

Put another way, a company that produces one excellent product has a much, much smaller chance of winning in the long run than one that has a lot of crappy products that each manages to have a minor advantage over its competitors.  When the crapware producer par excellence keeps whining about "innovation", they really mean that their advantage over their competition is materially insignificant and that the only way they can maintain an advantage at all is by competing with themselves, i.e., the previous version of each product.  Over time, however, this process necessarily produces high-quality products in a large number of areas, but only as long as their /competitors/ are better than they are at every single one of them some of the time.

When they actually win over their competitors, as a permanent condition, they, too, will be too successful and will keep doing what made them successful, which, by the very nature of life, is not what will make them successful in the future, for /which/ of many incidental factors turned out to be the winning factor under some conditions is not only unpredictable, but entirely random.  All you know is that /some/ of your incidental factors /may/ turn out to be advantageous, but once you have found one of them, it is time to nourish all the /other/ incidental factors, for that is what your present and future competition is doing.
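As a rough illustration of the selection argument, consider a toy simulation, in Common Lisp since we are on comp.lang.lisp.  It is only a sketch; the population model, the names, and every parameter are invented for the example and do not describe any real system.  Individuals carry one crucial and one incidental factor, both inherited with imperfect fidelity; a population that only ever sees stable conditions lets the incidental factor drift, while a population under shifting conditions keeps both factors viable.

  ;;; Toy sketch of selection on a crucial vs. incidental factor.
  ;;; All names and numbers are illustrative assumptions, nothing more.

  (defstruct individual
    (crucial 0.0 :type single-float)
    (incidental 0.0 :type single-float))

  (defun mutate (trait)
    "Copy a trait with imperfect fidelity: a small random change."
    (+ trait (- (random 0.2) 0.1)))

  (defun breed (parent)
    "Offspring inherit both factors, each with imperfect fidelity."
    (make-individual
     :crucial (mutate (individual-crucial parent))
     :incidental (mutate (individual-incidental parent))))

  (defun fitness (individual condition)
    "Under :STABLE conditions only the crucial factor wins;
  after a change (:CHANGED) only the incidental factor matters."
    (ecase condition
      (:stable (individual-crucial individual))
      (:changed (individual-incidental individual))))

  (defun next-generation (population condition)
    "Keep the fitter half of the population and breed it back to full size."
    (let* ((ranked (sort (copy-list population) #'>
                         :key (lambda (ind) (fitness ind condition))))
           (winners (subseq ranked 0 (floor (length ranked) 2))))
      (append winners (mapcar #'breed winners))))

  (defun evolve (generations schedule &key (size 100))
    "SCHEDULE maps a generation number to a condition keyword."
    (let ((population (loop repeat size collect (make-individual))))
      (dotimes (g generations population)
        (setf population (next-generation population (funcall schedule g))))))

  ;; A "too successful" population never experiences a change in conditions:
  ;;   (evolve 200 (constantly :stable))
  ;; A population under shifting conditions develops both factors:
  ;;   (evolve 200 (lambda (g) (if (evenp (floor g 20)) :stable :changed)))
  ;; Comparing the mean INCIDENTAL value of the two results after the
  ;; conditions switch to :CHANGED shows which population is prepared for it.

The numbers themselves are uninteresting; the point is that selection on a single winning factor does nothing to develop the others, so the population that never had to cope with change is the one that cannot cope with it when it arrives.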
The old adage that if you find something that works, you should do more of it, is sound for an individual in a non-competitive environment, but it is extremely dangerous in a competitive environment, where you won only because you did something that the previous winner did /not/ do.  So, if you keep doing what made you successful, you will be too successful in a very short time, and then you just vanish when a competitor gains ground, like the Lisp Machines or like Digital Research.

--
Erik Naggum | Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.