I’m really bored of Lisp Machine Romantics at this point: they should give it up, though I doubt they ever will.
History
Symbolics went bankrupt in early 1993. Various vestiges of the company endured, as these things do, for decades afterwards. But 1993 was when Lisp machines died.
Death was not unexpected: by the time I started using mainstream Lisps, in 1989[1], everyone knew that specialized hardware for Lisp was a doomed idea. The common story is that the advent of RISC machines killed it, but in fact machines like the Sun 3/260 in its ‘AI’ configuration had[2] already driven nails into the coffin. In 1987 I read a report [PDF link] showing the Lisp performance of an early RISC machine, running Kyoto Common Lisp (not a famously fast implementation of CL), outperforming a Symbolics on the Gabriel benchmarks.
1993 was 32 years ago. The Symbolics 3600, perhaps the first Lisp machine to sell in any numbers, had been introduced ten years earlier, in 1983. The people who used Lisp machines, other than as historical artifacts[3], are old today.
Lisp machines were widely available and offered the best performance for Lisp for a period of about five years, which ended about forty years ago. They were probably never competitive in terms of performance for the money.
It is now long past time to let them go.
But still the Romantics – some of them old enough to remember Lisp machines – repeat their myths.
‘It was the amazing development environment’
Well, no, it wasn’t that.
The development environments offered by both families of Lisp machine were really good, at least by the standards of the 1980s. I mean, they were really wonderful. Some of the things they were good at still matter today, and some don’t. For example, in the 1980s and early 1990s Lisp images were often much larger than available memory, and the machines were also, by modern standards, extremely slow. So a good Lisp development environment did a lot of work to hide this slowness, and in general to ensure that you rarely had to restart anything, since restarting could take a significant fraction of an hour, if not longer. None of that matters today, because machines are so fast and Lisp images are comparatively small.
But that wasn’t the only way they were good: they were lovely things to use in many ways. Despite what people may believe, though, none of this depended on the hardware: there is no reason why an equally good development environment could not be built on stock hardware. Maybe (probably) that wasn’t true in 1990; it is certainly true today.
So if really good Lisp development environments don’t exist today, that has nothing to do with Lisp machines not existing. In fact, as someone who used Lisp machines, I find the LispWorks development environment at least as comfortable and productive as they were. But, oh no, the full-fat version isn’t free, and neither version is open source. Nor, let me remind you, were the Lisp machines.
‘They were much faster than anything else’
No, they were not. Please stop repeating this.
‘The hardware was user-microcodeable, you see.’
Please stop telling me things about machines I used: believe it or not, I know those things.
Many machines before about 1990 were user-microcodeable. This meant that, in principle, the user of the machine could implement their own instruction set. I’m sure there are cases where people did do this, and a very small number of cases where doing so wasn’t just a waste of time.
But in almost all cases the only people who wrote microcode were the people who built the machine. And they wrote microcode because it’s the easiest way to implement a very complicated instruction set, especially when you can’t use large numbers of transistors. For example, if you’re going to provide an ‘add’ instruction that will add numbers of any types, trapping out to more general code for the hard cases, then by far the easiest way to do this is to write microcode, not to build hardware. And that’s what Lisp machines did.
Of course, a compiler could produce equivalent code for hardware without that instruction. But with special instructions the compiler’s job becomes much easier, and the compiled code becomes smaller. A small, quick compiler and small compiled code were very important on slow machines with small memories. Of course, a compiler that was not made of wet string could use type information to avoid the whole dispatch dance where it wasn’t needed: but compilers made of wet string were what was available.
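To make the dispatch point concrete, here is a sketch in Common Lisp (the function names are mine, and exactly what code gets emitted varies by implementation): without type information the compiled code must dispatch at run time on the types of its arguments, which is exactly what a microcoded ‘add’ instruction did in hardware; with declarations, a modern stock-hardware compiler can open-code a single machine add.

```lisp
;; Illustrative sketch, not from any particular Lisp machine manual.

(defun generic-add (a b)
  ;; A and B could be fixnums, bignums, ratios, floats, complexes...
  ;; so the compiled code must dispatch on their types at run time.
  (+ a b))

(defun fast-add (a b)
  ;; With declarations, a compiler such as SBCL can skip the dispatch
  ;; and emit a plain machine add (assuming the result is a fixnum too).
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ a b)))
```

So `(generic-add 1/2 0.5)` happily returns `1.0`, while `fast-add` only promises to work on fixnums but compiles to almost nothing.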
‘User-microcodeable’ almost never meant that users of the machines actually wrote microcode.
At the time, the tradeoff the Lisp machines made may even have been justifiable. In general, CISC machines were probably a good compromise given the expense of memory and the rudimentary compilers of the day: I remember being horrified by the size of the code compiled for RISC machines. But I was horrified because I wasn’t thinking about it properly: Moore’s law was in full effect around 1990 and, among other things, meant that the amount of memory you could afford was growing exponentially over time. The RISC people understood this.
‘They were Lisp all the way down’
Finally, perhaps, a good point. They were, and you could dig around in everything and change things on the fly, and it was great. Sometimes you could even repair, later, the damage you had done by doing so. I remember that playing with sound on the 3645 was only really possible because you could get low-level access to the disk from Lisp, as the disk could provide data only just fast enough to stream the sound.
On the other hand, they had no isolation and thus no security at all: people didn’t care about this in 1985, but if I were using a Lisp-based machine today I would certainly be unhappy if my web browser could modify my device drivers on the fly, or poke and peek at network buffers. A machine that was completely Lisp today would need to ensure that such things could not happen.
So it might be Lisp all the way down, but you would not have quite the ability to redefine its innards that you had on a Lisp machine. Maybe it would still be worth it.
Needless to say, I’m not interested in spending much of my time tinkering with something like an SSL implementation: those things already exist, and I’d rather do something new and cool, something Lisp is particularly suited to, than reinvent wheels. Well, maybe that’s just me.
Machines that were Lisp all the way down could be really interesting, although if they were safe they might not look much like the Lisp machines of the 1980s. But that doesn’t mean they would need special hardware for Lisp: they wouldn’t. If you want something like this, no hardware is stopping you: there is no need to mourn endlessly over the lost age of Lisp machines, because you can start building one right now. Shut up and code.
And now we come to the really strange arguments: arguments that we need special hardware for Lisp either for reasons that are simply wrong, or because of things that never were.
‘It’s very hard to write good Lisp compilers for stock hardware’
This mantra is becoming old.
The most important point is this: today we have good stock-hardware Lisp compilers. For example, today’s CL compilers are not far off Clang/LLVM for floating-point code; I have tested SBCL and LispWorks. It would be interesting to know how many times more work has gone into LLVM to buy that small advantage: I can’t imagine a world in which these two CL compilers would not be at least comparable to LLVM if equal effort were spent on them[4].
These compilers are so much better than the wet-cardboard-and-string compilers the Lisp machines had that it’s not even funny.
There is also a large amount of work going on around the compilation of other dynamically-typed, interactive languages that aim at high performance. That means on-the-fly compilation and recompilation of code, where both the compilation and the resulting code must be quick. An example is Julia. Lisp compiler writers can reuse any of that work if they need or want to (I don’t know whether they do, or should).
Ah, but then it turns out that’s not what a ‘good compiler’ means after all. It turns out that ‘good’ means ‘the compiler is fast’.
Well, all of these compilers are very fast: the computational resources available have scaled up far faster than the demands of even a pretty hairy compiler on the problems we want to solve (which is why Julia can afford to use LLVM on the fly). Compilation is also not an Amdahl bottleneck, because it can happen on the node that needs the compiled code.
Compilers are so fast that a widely used CL implementation exists where EVAL uses the compiler, unless you tell it not to.
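The implementation alluded to is, I believe, SBCL: its EVAL goes through the compiler by default (this is controlled by `sb-ext:*evaluator-mode*`, whose default is `:compile`). A quick check, assuming SBCL:

```lisp
;; Under SBCL's default evaluator mode, EVAL uses the compiler, so
;; evaluating a lambda expression yields a compiled function.
(compiled-function-p (eval '(lambda (x) (1+ x))))  ; => T under SBCL's defaults
```

Other implementations may legitimately return NIL here, since the standard does not require EVAL to compile anything.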
Compilation options are also a thing: you can tell compilers to be quick, fussy, sloppy, safe, to produce fast code, and so on. Some languages even allow this to be done in a standardized (but extensible) way at the language level, so you can say ‘make this inner loop really quick, and I’ve checked all the bounds, so don’t bother checking them.’
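Common Lisp itself is one of those languages: the standard `optimize` declaration expresses exactly this bargain. A minimal sketch (the function is my own illustration, not from any particular codebase):

```lisp
(defun sum-doubles (v)
  ;; 'Make this inner loop really quick, and I've checked the bounds,
  ;; so don't bother with them': speed 3, safety 0.
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length v) s)
      (setf s (+ s (aref v i))))))
```

With those declarations, SBCL or LispWorks can typically compile the loop down to unboxed double-float arithmetic with no bounds checks; delete the declarations and you get safe, slower, generic code from the very same compiler.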
At this point the tradeoff between a fast Lisp compiler and a really good Lisp compiler is hypothetical.
‘They had amazing keyboards’
Well, if you don’t mind the awkward layout: yes, they did[5]. And keyboards have absolutely nothing to do with Lisp.
And so it goes on.
bored now
There is a well-known syndrome among photographers and musicians called GAS: Gear Acquisition Syndrome. Sufferers[6] chase an endless stream of purchases of gear – cameras, guitars, FX pedals, the long-expired last batch of a famous printing paper – in the strange hope that the next camera, the next pedal, that paper, will make them Don McCullin, Jimmy Page or Chris Killip. Because, of course, Don McCullin and Chris Killip took the pictures they took simply because they had the right cameras: it had nothing to do with talent, practice or courage, no.
GAS is a lie we tell ourselves to avoid the awkward reality that what we really need to do is practise, a lot, and that even if we did we might still not be very talented.
Lisp Machine Romanticism is the same thing: a wall we build ourselves so that, somehow unable to climb over it or knock it down, we never have to face the fact that the only thing stopping us is ourselves.
There is no purpose in arguing with Lisp Machine romantics because they will never admit that the person building endless obstacles in their way is the same person they see in the mirror every morning. They are too busy building walls.
As a footnote: I went to a talk by an HPC person in the early 1990s (so: after the end of the Cold War[7], when the HPC money had gone) where he said that HPC people needed to aim for machines based on what looked like large commercial systems, because no one would fund dedicated HPC designs any more. At the time that meant large cache-coherent SMP systems. Those have hit their limits and are effectively gone now: the bank I worked for had dozens of fully-populated large SMP systems in 2007, and perhaps still has one or two it can’t get rid of because of legacy applications. So HPC people now run huge shared-nothing farms of commodity processors with very fat interconnects, and are thinking about or already using GPUs. Of course, this is the same thing that happened to Lisp systems: perhaps, in the HPC world, there are romantics who mourn the lost glory of the Cray-3. Well, if I were talking to people interested in the possibilities of hardware today, I would be pointing out that in a few years there are going to be huge farms of GPUs going very cheap, if you can buy the power, and that people might look at whether they can be used for something more interesting than the giant neural networks they were designed for. I don’t know if they can.