>>All right. So Peter Seibel describes himself
as a “writer turned programmer turned writer.” I first became aware of him when I worked
for Sun and he worked for WebLogic. And he sent me a very convincing piece of
e-mail asking me to put chained exceptions into Java.
It convinced me, and so I did. So if you are a Java programmer who has benefited
from chained exceptions, you have Peter to thank.
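For anyone who hasn't used them: exception chaining lets a high-level exception carry the low-level exception that caused it, so the original stack trace isn't lost when you wrap and rethrow. A minimal sketch of the mechanism (the class and message names here are hypothetical, not from the talk):

```java
import java.io.IOException;

public class ChainedExceptionDemo {

    // A hypothetical application-level exception with a chaining
    // constructor; Throwable has supported this since Java 1.4.
    public static class ConfigException extends Exception {
        public ConfigException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    public static void readConfig() throws ConfigException {
        try {
            // Stand-in for a real low-level I/O failure.
            throw new IOException("disk read failed");
        } catch (IOException e) {
            // Chain the low-level cause into the high-level exception
            // instead of swallowing it.
            throw new ConfigException("could not load config", e);
        }
    }

    public static void main(String[] args) {
        try {
            readConfig();
        } catch (ConfigException e) {
            // getCause() recovers the original low-level exception.
            System.out.println(e.getMessage() + " <- " + e.getCause().getMessage());
        }
    }
}
```

Running this prints both messages, and `e.printStackTrace()` would show the "Caused by:" section that the feature added.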
Let’s see. After that, he worked for a start-up that built a Java-based transactional messaging
system. And then he took a couple of years and wrote
this great Common Lisp book called what? Practical Common Lisp. It won a Jolt Award.
And then, he wrote this book called, “Coders at Work.”
I could tell you all about it, but that would leave him with nothing to say.
So instead, why don’t we give a warm welcome to Peter Seibel.>>[Clapping] Peter Seibel: So this book, as some of you
have seen — a show of hands. Who’s actually read it already?
Anyone? Josh. Couple. Okay. So you all have that in store for
you, hopefully. So I’ll just give you a brief overview of
what the book is. It’s 15 Q and A interviews with — I like
to say — “Notable computer programmers.” I did not make any attempt to say, “These
are the 15 best programmers working on the planet.”
I tried to make a book that was balanced in various ways so it wouldn’t be all one kind
of programmer. I did also have a sort of bias — for perhaps
obvious reasons — of selecting programmers who I thought would be interesting to talk
to. So the totally withdrawn, mad genius who only
grunts would not be a good candidate for an interview.
And so, those people — despite their prodigious coding skills — were left out.
But I think I got a good selection of people. Actually, the story of how I got this set
of people is a little bit interesting. Years ago, when I started working on this
book, I put up a website, asked for suggestions of names, and built some whizzy UI stuff to let
people sort the names in various ways. And I ended up with about 284 suggestions
of names of programmers who someone thought would be worth interviewing.
And then, people sorted them in various ways. And I think, actually, Peter Norvig — your,
what is he now? your V.P. of research, or whatever he is
— was at the top of the list. He might have lost at the very last minute.
John Carmack pulled ahead of him. John Carmack would have been a great guy to
have in this book, but he never replied to any of my e-mails or physical letters.
But, I whittled down that list to the 15 people who are in the book, and I’ll just — actually,
I was sort of curious to see. Since most of you haven’t read the book,
you can just answer this. So who here has heard of Jamie Zawinski?
Raise your hand. About 50 percent. Brad Fitzpatrick, fellow Googler? About the
same. Douglas Crockford? A little less.
Brendan Eich, inventor of JavaScript? Josh Bloch? Very nice.
Joe Armstrong, inventor of Erlang. A few. Simon Peyton Jones? More.
Peter Norvig? Very good. Who doesn’t know Peter Norvig?
Guy Steele? A lot. Yes, he’s everywhere. Dan Ingalls, co-inventor of Smalltalk.
Peter Deutsch? L Peter Deutsch. Fewer. Ken Thompson, another Googler. Yes, inventor
of Unix. Fran Allen. There are a few.
Turing Award winner. You guys should know who she is.
Bernie Cosell? Nobody. See? I’m really proud of having Bernie Cosell in
this book because — as you’ll hear — he’s a really interesting guy.
And Donald Knuth? All right. You? You’re out of here.>>[Laughter] Peter Seibel: So, I sat down with all of these
people. All of the interviews were done in person
over four to six hours, all recorded on little digital recorders and laboriously
transcribed and then edited down. Probably about two-thirds of the raw transcript
was cut out to make the book. So if you think the book’s too long, it could
have been a lot worse. And I sort of feel like this set of people
I interviewed — we’ve all heard perhaps of Sturgeon’s Law, that “90 percent of everything
is crud,” which he invented thinking about science fiction but realized applied to everything.
These people are the other ten percent; all right?
If you read programming blogs or forums or reddit or whatever, it sometimes can be dispiriting.
And you think, “Wow. All programmers actually are really idiots.” — including yourself
sometimes perhaps. But hopefully, if you read these interviews,
you’ll realize that we’re not all that way. These folks obviously had thought about their
craft. There were very few dogmatic points of view
on any of the latest back and forths about unit testing or test-driven development or
code ownership. Everyone had a sort of nuanced view of things,
which was refreshing. And basically, the goal of the book was to
talk to people about, you know, How did they become the programmers that they are today?
I think Apress sort of wanted a book that people could pick up and read and learn something
about how to be programmers. I think it got — it’s a little more subtle
than that however. I think Apress sort of thought, “Oh, we’ll
just go ask a bunch of people a bunch of questions. Then it’ll be very obvious.
People have actionable stuff they can take away and apply to their craft.”
But I don’t know if they’re disappointed, but at least some of my reviewers are disappointed.
I — like every author on the planet — look
at the Amazon reviews.
There are a couple of one-star reviews on Amazon. One guy said, “I wanted more insight into
how to become a great coder, but you won’t find that here.”
Another guy said, “I want to know how they solve problems.
That’s not a topic you’ll find covered in this book.”
But a friend of mine on the Lisp IRC Channel said to me the other day, “I was going to
say your book is more inspiring than enlightening, but after all, I think it is enlightening,
however subtly.” And I think he’s got it right.
I’m loath to agree with the one-star reviewers.>>[Laughter] Peter Seibel: — in that, there is no royal
road to programming, right? It would be great if someone would interview
a bunch of folks like this and produce a book that you would read it and then say, “Aha!
That’s how you become Donald Knuth. That’s how you become Guy Steele — in seven
days.” [laughter] Right? But we know that’s not really how it works.
And I was struck by that interviewing people that it’s hard to get at what makes great
programmers great. We know that the people in this book are great
at what they do, because they’ve produced great stuff, but it’s just a hard thing.
I mean, if you know — you know all of probably — everyone here is a programmer, I assume?
More or less. Okay. So you know what goes on in your mind and
you know probably how hard it is to explain like, How do you make the decisions you make
when you code? And so, it’s a hard thing to get at.
So I think we sort of have to read between the lines and also observe that these people
became who they became over a long time. And so, I started at the beginning of how
they started out. Four of them started with Basic.
Dijkstra is rolling over in his grave. Four of them started with Fortran and eight
of them started with Assembly. I think I counted Guy Steele in there twice.
And they started sort of how probably anyone started who started when they did.
Kids today starting will start somewhere else, but it’s not obvious that they started out
very differently. Except that they all did seem to have hands
on experience with computers, particularly some of them at a time when that was very
unusual. So Donald Knuth happened to be at Case —
what I guess is now Case Western Reserve — but they had an IBM 650, and they let
undergraduates touch it. So he got to sit down with the machine and
debug programs and look at the programs in the manual and say, “I could do better than
that.” And then discover that, in fact, he could.
But also how hard it was to debug a program. I think he said he “wrote a hundred line program
that had a hundred bugs in it.” [laughter] So, sort of looking at what people
did, it really comes down to basics. The old three R’s — Reading, Writing, Arithmetic.
Reading — some author or novelist said once, “There are two ways to learn to write —
read and write.” And it seems to me, looking at these interviews, that that’s true for
programmers as well. Almost everybody I talked to made some mention
of reading other people’s source code. A lot of them, that was really a formative
experience, as I’ll show in a second. I didn’t get the sense, however, that programmers
read as much maybe as they should. Or even some people sort of said we should
do it, but even they didn’t really do it. Like, I imagine novelists probably read a
lot of novels. Like just regularly read novels.
Even among the people I interviewed, I didn’t get the sense that anybody just sort of regularly
picked up a piece of code to read for fun. Some did, a little.
Brad Fitzpatrick actually sort of struck me as one who does.
He just grabbed the Android source code, got the Chrome source code, and just looked at
it to sort of see how it worked — for no particular reason.
But mostly people read stuff that they were working on or that their team was working
on with some exceptions, and I’ll go over them.
So Jamie Zawinski started out — for those of you who don’t know, he was an early employee
at Netscape, wrote the Unix version of Mosaic — or whatever it’s called, Netscape, I guess,
before it was Mozilla — but he actually got his start really at CMU.
He was hired as a high school student to work at what was then Scott Fahlman’s AI lab at CMU and
got to work on Lisp machines and got this very old-school education.
The Lisp culture was almost dying out, and he, as a very young person, got
sort of enmeshed in it and sort of got it when other kids his age would have been playing
with Apple IIs and stuff. So he was there working on these Lisp machines,
and so he basically said he ended up reading a lot of the code, you know, for these Lisp
machines, just looking at how stuff worked. And I asked him if he was reading code that
he wanted to work on or is it just for curiosity. And he said, “Mostly I just wondered how that
works.” The impulse to take things apart is a big
part of what gets people into this line of work.
And I certainly saw that in a lot of people. Brad Fitzpatrick described how he didn’t use
to read code. He had been programming since he was five years
old on a homebrew Apple II that his father built.
The story of how that computer came into existence is in the book and it’s amusing.
But he said, “There were a number of years when I wrote a lot of code and never read
anyone else’s. Then I got on the Internet, and there’s
all this open source I could contribute to, but I was just scared shitless that it wasn’t
my code and the whole design wasn’t in my head — that I couldn’t dive in and understand
it.” And he describes — but he did eventually
start sending in some patches to Gaim, the GTK instant messenger.
And he said, “I was digging around in that code and I just saw the whole design.
I’d just seen parts of it, and I understood. I realized, after looking at other people’s
code, that it wasn’t that I memorized my own code.
I was starting to see patterns. I would see their code and I was like, “Oh,
okay. I understand the structure that they’re going with.”
[talking] And so he said, after that, he really enjoyed reading code, because as he says,
“Whenever I didn’t understand some pattern, I was like, ‘Why the fuck did he do it like
this?’ And I would look around” — Brad also gets the prize for most cursing in his interview.
[laughter] It’s actually cut down in the book — “I would look around more, and I’d be
like, ‘Wow, that’s a really clever way to do that.’ So I see how that pays off.”
And so he said, “I would have done that earlier, but I was afraid to do it, because I was thinking
if it wasn’t my code, I wouldn’t understand it.” [talking] So he sort of came around.
And as I said, he’s the one who does seem to sort of just read code for fun.
Like I said, Android, Chrome, Firefox, OpenOffice.
Douglas Crockford is a big fan of Knuth’s Literate Programming, but even he said he
mostly read Knuth’s prose instead of the actual code.
He’s also a big fan of reading as a group activity — sort of ‘inspections’ kind of
reading — and also uses that in interviews. If you interview over at — you get tired
of working here, you go over to Google, interview with Doug Crockford, he’s going to say, “Bring
a piece of code that you wrote, and let’s read through it together as a way of understanding
your skills.” Brendan Eich started out —
he was really a physics undergrad, but he got into some computing, and they got hold
of the Unix source code, and they looked at that, and at the C preprocessor, which he described as an
amazing mess. And so he started trying to understand that,
trying to write a better one. Now, he does read some other code, I think —
frameworks and stuff, and JavaScript obviously; he’s also interested in Python and Ruby.
Josh Bloch didn’t really say anything in particular about reading code, but I’m sure he approves.
Peter Norvig, when he was young — in grad school, whatever — he was at Berkeley, and
they had the source code of the Symbolics Lisp machines.
And so, he took a look at that. And again, it was just sort of what he was
interested in. He was like, “Oh, this has an interesting
feature in it. Let me look into it.
How do they open files across the network, you know, just the same way as they open files
locally” — which was newer then. And Guy Steele — as always — was very articulate
about stuff. He wrote a Lisp very early on, probably when he was in high school —
a Lisp implementation for, I guess, the IBM 1130 — and he really credits the ability
to read code. He was hanging out at MIT as a high school
student. So they had the famous drawer of source code
listings. He says, “I would not have been able to implement
lisp for an 1130 without having access to existing implementations of lisp on another
computer. That was a very important part of my education.”
And so he said, “Part of the problem that we face nowadays — now that software has
become valuable and most software of any size is commercial — is that we don’t have a lot
of examples of good code to read.” And he goes on to say, “Open source has helped
with that.” He also read TeX.
So, how many people have actually looked at TeX, the program?
Knuth’s book. Yeah, a few. I found — I asked everybody about Literate
Programming and, obviously, Knuth was in favor. [laughter] Everyone else was a little mixed.
You know, there were people who were like this, “Interesting.”
You know, “It’s interesting to read.” Or it’d be, “An interesting experiment to
do, but I wouldn’t want to code that way all the time.”
Other people like — Ken Thompson is like, “No, that’s just — that’s dumb.
It can’t work, because you’re writing everything twice.”
And other people have said, “Yeah, it’s interesting, but that toolkit was never right.”
They only had it for C, or whatever. Guy Steele said, maybe if they had good tool
chains for literate programming in Lisp, he would have done it.
So it sort of struck me that Knuth’s been out there arguing for this and really getting
nowhere on that front. Peter Deutsch — he didn’t really talk a
lot about code reading, but he again was at MIT with the drawer full of listings.
And I think he was sort of famous, because he was at MIT when he
was like 13 years old, writing a Lisp implementation
for the PDP-1.
And I think he was sort of famous for pulling listings out of the drawer and rewriting them to be twice
as good — and really annoying people twice his age.
Ken Thompson also. So he’s a great example of the early code
reading there being very important. He was at Cal.
They had this drum computer called the G-15, and it had an interpreter for a language called
Intercom on it. And they would use it a little bit in his EE
classes. And so, a friend of his — a grad student
— had written an interpreter for Intercom. And so, he got a listing of that.
And over vacation — Christmas vacation or something — he said, “I read it and just
dissected it. I didn’t know the language it was written
in — which happened to be NELIAC — and it was just a marvelously written program.
And I learned programming — NELIAC, Intercom, how to interpret something,
everything — from that. I just sat and read it for probably the entire
vacation, a week, and then came back and asked some questions about it — nagging little
bugs kind of things. After that, I knew how to program, and I was
pretty good at it.” [laughter] Fran Allen, who’s been a researcher at IBM forever.
She basically joined IBM in time to teach Fortran to IBM scientists.
Fortran had just been invented and she was on track to be a math teacher, but needed
to pay off her school debt, so took a job at IBM.
They said, “Oh, you’re a teacher. Teach the scientist to use Fortran on all
their scientific stuff, because how else will we make everyone else use it if we won’t use
it?” And she had to drag them kicking and screaming.
I observed that that’s the last time scientists have adopted a new language.
[laughter] But she too — other than the teaching — she started out as a programmer at IBM.
She worked on this operating system called the Monitored Automatic Debugging System.
And she said she really remembers reading the original program.
It was very elegant. Later on, she led research teams.
She led the team that basically invented Static Single Assignment, you know, the compiler technique.
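To make that name-drop concrete — this is not Allen's team's actual algorithm, just a toy sketch of the SSA idea for straight-line code: every assignment creates a fresh numbered version of its target variable, and every use refers to the latest version, so each variable is assigned exactly once.

```java
import java.util.*;

// Toy SSA renaming for straight-line statements of the form
// "target = operand [op operand]". No branches, so no phi nodes.
public class SsaRename {
    public static List<String> toSsa(List<String> stmts) {
        Map<String, Integer> version = new HashMap<>();  // latest version per name
        List<String> out = new ArrayList<>();
        for (String stmt : stmts) {
            String[] sides = stmt.split("=", 2);
            String target = sides[0].trim();
            StringBuilder rhs = new StringBuilder();
            // Replace each identifier on the right with its current version.
            for (String tok : sides[1].trim().split("\\s+")) {
                if (version.containsKey(tok)) {
                    tok = tok + version.get(tok);
                }
                if (rhs.length() > 0) rhs.append(' ');
                rhs.append(tok);
            }
            // The assignment itself creates a fresh version of the target.
            int v = version.merge(target, 1, Integer::sum);
            out.add(target + v + " = " + rhs);
        }
        return out;
    }

    public static void main(String[] args) {
        // "x = 1; x = x + 2; y = x * 3" becomes
        // "x1 = 1; x2 = x1 + 2; y1 = x2 * 3"
        toSsa(List.of("x = 1", "x = x + 2", "y = x * 3"))
            .forEach(System.out::println);
    }
}
```

The payoff, and the reason compilers use it, is that once every variable has exactly one definition, dataflow questions like "where did this value come from?" have a single answer.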
And one of her employees had built a parser. This is actually the guy who wrote the parser
that’s part of the Jikes Java compiler. I don’t know if you guys are familiar with
that. And so, she wanted to understand that.
She said, “It’s probably the best parser in the world.”
It’s on open source now, “and I wanted to understand it, so I took it and read it.
And I knew that Philippe Charles — the man who had written it — was a beautiful programmer.
The way I would approach understanding the new language or new implementation of some
very complex program, would be to take a program from somebody that I knew was a great programmer
and read it.” And that sort of actually gets at — I’m going
to jump ahead a little bit, but Guy Steele said something I alluded to a
little bit before. He said, “One of the problems that a lot of people seem
to face with reading code is, it’s hard to find good code.” I mean, there’s open source;
there’s a gazillion things. But how do you know which stuff we
really ought to read? And Guy Steele said basically, “It’s hard
to find good code that’s worth reading. We haven’t developed a body of accepted literature
that says, ‘This is great code; everybody should read this.'” So it tends to be one-page
snippets and papers, chunks of code out of existing stuff.
So, you know, even he who was saying that and saying how important code reading was
— I asked him about some code he had actually read, and he gave me an example, but it had
been three or four years earlier. So, it’s not like he’s picking up new code
just to read for fun anymore, other than the stuff he’s working on.
Some of the folks I talked to actually gave pretty good advice about how to read code,
I thought. In some ways maybe the most practical advice
in the book, because it’s a daunting thing. Here’s a big pile of code, How do you get
into it? Some people can just sort of start reading
and suck it all into their brain and eventually it’s all there and they can understand it.
But I’ve never been able to do that. So Zawinski gave some advice.
Just, you know, dive in and start reading it, and you can start at the bottom.
He said, “If you want to understand how Emacs works, start at the bottom.
What are cons cells made of?” If that doesn’t work for you, sometimes starting
with the build system can give you an idea of how things are put together.
Just trying to build the damn thing will really show you — actually, Brad Fitzpatrick
echoed a similar thought, which is just, “pipe find into less while you’re trying to build
the thing and sort of look around and you’ll see how things are structured.”
And then, I think it was Zawinski who said, “Once you get it built, you know, now you’ve
built your own version of Emacs or Firefox. Now, you can just make one stupid little change
— change the title of some window — and now, you’ve actually started working on the
program and understanding a little bit how it fits together.”
Guy Steele talked about taking one function, you know.
You want to look at Emacs — think about, Well, somewhere it’s got to insert a character.
Let’s find that. Take a task — something that’s going to happen — and then just find
the code that shows you how, and where, that happens.
You know, eventually there’s going to be something that increments some counter that moves the
cursor along in the buffer. And as you’ve traced that whole path, you’ll
have looked at a lot of the code. Now, go back up to the top, pick another thing,
try that. Brendan Eiche took a different approach which
is often to take a big program, sort of hard to read, because, you know, they’re hard.
Throw it into the debugger and just trace around sort of more dynamically.
Get a view that way of what’s happening as the program runs.
And that will drag you through the source code in a way that’s actually — you’re going
to see the flow of control. But he also did say, “Really understanding
it is this Gestalt process that involves looking at different angles of top and bottom and
different views of it, playing it in the debugger, stepping through in the debugger, incredibly
tedious as that can be.” And he sort of cautioned that, “Simply reading
source code without firing it up and watching it in the debugger can be misleading in the
sense that you can go a long way reading source, but you can also get stuck.
You can get bored and convince yourself that you understand something that you don’t.”
I think the Zawinski approach of trying to change something is sort of also a guard against
that, right? If you actually try and change it, you’ll
discover where your understanding has failed. Another way to read code, which comes up more
in debugging, two people mentioned. So Bernie Cosell — who none of you have heard
of. Bernie Cosell was one of the three software
guys who wrote the software for the IMPs — the Interface Message Processors — at BBN,
the first four nodes of which were the first four nodes of what is now the Internet.
So BBN had been contracted by ARPA to build this thing as basically an experiment.
And Bernie Cosell and two other guys wrote all the software, in assembly, for a ruggedized
Honeywell 516. So — I’ve read in some, you know,
old software books, methodology books, they say, you know, “One thing you should never
do is patch the binary.” You know, you read that now and you’re like,
“What do they mean by that, ‘patch the binary’?” [laughter] “That sounds insane.”
So when Bernie Cosell joined the IMP project, he was the third one on, just by a couple of months.
And so, the two other guys had done a little bit of work.
And one of the other guys seriously thought the best way to maintain their source was
they had, you know, the assembly listing and they would assemble the thing, and it would
be running. And then they’d find a bug.
And so, they would find, you know, the patch you needed to make.
And he would write in his notebook, you know, “At this place, put a jump to over here.
Have this little code. And then jump back to this address.”
And they would load that patch onto the system, and he would have it in his notebook.
So there was no source code listing of this system as it was running.
Because they just had patch after patch after patch piled on.
Now, the guy who was doing this was very disciplined with the notebook and he recorded everything
that happened, but to get the source, you had to take the original listing that was
how many weeks old, plus this guy’s notebook, and apply the patches in the right order.
So Bernie Cosell came in and said — like I think any sane person would say — “This
is crazy!” And he went through the exercise, over a weekend, of applying all the patches
and generating a new listing. Come Monday, there
was a still-running system, but now it was generated from a clean listing.
And he said, “From here on out, we’re putting the patches in every night and generating
a new listing.” So all of these guys were clearly sort of
masterful assembly programmers, that they could even begin to get away with that.
And Cosell was at — this was when BBN was just a hotbed of innovation.
I mean, they were doing the Internet. They were doing AI stuff.
They had a lot of connections with MIT there in Cambridge.
And they tended to hire people like Cosell, who were MIT dropouts, because they were just
as smart and a lot cheaper. [laughter] So he had sort of bailed on MIT
in his sophomore — junior — year. Went to work at BBN.
And he developed a reputation — he’s mentioned in the book “Where Wizards
Stay Up Late,” which is about the invention of the Internet —
of being this masterful debugger.
And I asked him about that. And he said, “Well, that’s sort of fake,”
because what happened was, they had these bugs that nobody could figure out.
“And so, they’d give it to me. And I’d go and I’d read
the code and I’d read the code and I’d read the code until I got to a point where I didn’t
understand what it was doing, and then, I would rewrite that part.”
And he said, “This is a terrible way — I can’t believe I got away with it.”
But he basically would build his own model — not of how the code did work, but how
it ought to work — and get far enough in that then he’d say, “Okay, what it seems
to be doing doesn’t match what I think it ought to be doing; I’m just going to make
it do what I think it ought to be doing.” And apparently he got away with that over decades.
Another person who has apparently adopted this technique is Peter Norvig, who was also
a little shame-faced about it, but — which goes a bit, I think also, to the point of
how hard it is to read code. If you didn’t write it and you try and dig
into it, eventually it’s just like, “You know what?
I don’t understand this. I’ll just rewrite it.
That’ll be easier than figuring it out.” So the one guy who was a real inspiration
on this point — reading code — is Donald Knuth — inspiration on many things.
I mean, so this guy obviously reads. I mean, his job is to sort of read almost
everything in an area and distill it down into something the rest of us can pretend
to understand. So he talked about looking at Babylonian manuscripts
— at how they described algorithms in ancient Babylonia 4,000 years ago — just to sort of
see how they thought about algorithms. And then, he found a Sanskrit document from
the 13th century about combinatorial math and really felt that this was actually quite
sweet. He felt like the guy who wrote this thing
in the 13th century in Sanskrit — probably there was nobody he knew who understood what
he was talking about. [laughter] But he had these ideas about combinatorial
— what we now call combinatorial math — and Knuth found a translation of this document,
and he felt like this guy was talking to him. Like, he understood him.
And he’s like, “I had those same thoughts as I was getting started in computer programming.”
And so, this poor guy — he’ll never know — but he actually did find someone who did
understand him, in Donald Knuth. And so, Knuth talked — this was in the context
of how important reading source materials is to him, source materials and
also computer code — he said, “I was unable to pass that on to any of my students.
There are people alive now in computer science who are doing this well — a few.
But I could count on the fingers of one hand the people who love source materials the way
I do.” And he went on to describe all his collections of source code, various compilers
from the 60’s that were written in interesting ways, and Dijkstra’s source code to the THE
operating system, which he hasn’t read, but he’s holding for a rainy day.
[laughter] And described one time he broke his arm — he fell off his bike and broke
his arm and was laid up and couldn’t really do much.
And so, he read source code for a month and that was a really important experience for
him. And so, I was sort of asking him the standard
question like, “Well, how do you do this?” And again, there’s no royal road.
Like, he didn’t have any easy answer to how you read source code, but I’ll read this passage.
It’s a little long, but I’ll read it, because it’s so inspiring to me about how he does
it. So he was saying, “Well, it’s really worth
it for what it builds in your brain — reading source code.
So how do I do it? There was a machine called the Bunker Ramo
300, and somebody told me that the Fortran compiler for this machine was really amazingly
fast. And nobody had any idea how it worked.
I got a copy of the source code listing for it.
I didn’t have a manual for the machine, so I wasn’t even sure what the machine language
was. But, I took it as an interesting challenge.
I could figure out begin, and then I would start to decode.
The operation codes had some two-letter mnemonics, and I could start to figure out,
“This probably was a load instruction; this probably was a branch.”
And I knew it was a Fortran compiler. So at some point, it looked at column seven
of a card and that was where it would tell if it was a comment or not.
After three hours, I had figured out a little bit about the machine.
And then, I found these big branching tables. So it was a puzzle.
And I looked at how these primitives are used. How does that get used by higher levels in
the system? And that helped me get around.
But really” — [talking] oops, I’m reading the wrong thing.
[reading] “It was a puzzle. And I kept making little charts like I’m working
at a security agency trying to decode a secret code, but I knew it worked and I knew it was
a Fortran compiler. It wasn’t encrypted in the sense that it was
intentionally obscure. It was only in code, because I hadn’t gotten
the manual for the machine. Eventually, I was able to figure out why this
compiler was so fast.” [talking] And then, being the algorithms guy
— [reading] “Unfortunately, it wasn’t because the algorithms were brilliant.
It was just because they’d used unstructured programming and hand-optimized the code to the hilt.
It was just basically the way you solve some kind of unknown puzzle:
make tables and charts, get a little more information here, make a hypothesis.
In general, when I’m reading technical papers, it’s the same challenge.
I’m trying to get into the author’s mind, trying to figure out what the concept is.
The more you learn to read other people’s stuff, the more you’re able to invent your
own in the future, it seems to me.” [talking] And, you know, I can just picture
Knuth here decoding — I mean, he doesn’t know what these op codes mean and he’s reading
the source code. So then it’s like, “Oh, I should be able to
read a C program, and understand it, right?” [laughter] Like, I know how the language works.
So he echoed sort of Guy Steele’s thing, saying, “We ought to publish code.
The Lions book is available. Bill Atkinson’s programs are now publicly
available thanks to Apple. And it won’t be too long before we’re able
to read that.” So I said, “Well, you know, open source is
out there.” And he said, “Yeah, that’s right.”
But he also really echoed the idea that you should read more — read code that’s not
— what does he say, “Don’t read the people who code like you.”
So I think, for him, the fact that he was decoding this machine — this architecture
that he didn’t even know — was more valuable than just reading a bunch of code and something
that would have been a little more accessible. So he obviously is the father of literate
programming. He’s the one who really advocates for people
reading code. It was interesting to me that everyone sort
of thought that was a good idea, but there wasn’t as much of it as you might have thought.
The other bit — we’ve done reading — is writing, the other one.
And I mean writing code. A lot of people also thought writing English,
or prose, in your native language had some connection.
A lot of people thought so — Douglas Crockford said, “I am a writer.”
I asked people, you know, “Are you an artist, a scientist, or a craftsman?” He said, “I’m
a writer. Sometimes I write in English.
Sometimes I write in code.” Other people thought there were just similarities in the way your
brain worked between writing prose and writing code.
Though, actually, Guy Steele — who’s probably one of the great technical writers — was
the one who thought they were the most dissimilar. He felt like writing for a computer is very
different, because the computer is so literal-minded you just can’t get away with as much as you
can writing for people. But, when it comes to writing code, this seems
— again, no royal road. You want to become a good programmer?
Write a lot of code. I think that’s a necessary, though sadly perhaps not sufficient, requirement. A lot of these people, like probably a lot of the people in this room, were just driven to code.
I mean, they just code. That’s what they do. Joe Armstrong said, “The really good programmers
spend a lot of time programming. I haven’t seen very good programmers who don’t
spend a lot of time programming. If I don’t program for two or three days,
I need to do it.” And then he went on to say, “You get better
at it. You get quicker at it.
The side effect of writing all this other stuff” — and he was talking about just all
these random projects that he was working on — “the side effect of writing all this
other stuff is, when you get to doing ordinary problems, you can do them very quickly.”
And Knuth — I asked him if he still enjoyed programming, and he said, “Oh, my God,
yes. I’ve got this need to program.
I wake up in the morning with sentences of a literate program.
Before breakfast — I’m sure poets must feel this — I have to go to the computer and write
this paragraph, and then I can eat and I’m happy.
It’s a compulsion. That I have to admit.” So basically all of these people, you know,
were the hacker — they had the hacker thing going.
They just had to hack. Zawinski was working at CMU on stuff related
to AI, but he also was just digging into the graphics code on the Lisp machines and writing
screensavers, which eventually landed him a job with Peter Norvig at Berkeley.
And when he was waiting for the linguists to tell him what to do, he spent more time writing
screensavers, and said later when he was working a gazillion hours a week at Netscape, he sometimes
thought, “Why did I leave the job where I was just writing screensavers?” [laughter]
Brad Fitzpatrick is just a coding machine as far as I can tell.
You know, started when he was five — like I said — on his Apple.
And his dad said Brad passed him up at six or seven, and he just worked on, you know, the stuff he was working on. He wrote LiveJournal because it was fun.
And a lot of people seem to be driven by the desire to have something.
They weren’t just coding in the abstract. They were coding because they wanted to solve
a problem, you know? Brad wanted to have an online social website
so he could chat with his friends and post stupid stuff.
And so, he just did that. He describes in the book how he implemented
the comment system on LiveJournal as a between-classes hack to annoy one of his friends, because his friend had said something stupid and there was, at that time, no way to comment on LiveJournal pages. And so he said, “I need to make fun of him.
I need to put in a comment system.” [laughter] And so, his friend when he saw
the comment was like, “What the fuck? We can comment now?” You know, a lot of people
started young. Josh Bloch here was writing chat programs to be annoying for his science fair. Joe Armstrong — we were talking about what
Erlang was good for and what it wasn’t. And he was saying, “Well, I do some image manipulation, but I just have a C program that does the actual image manipulation, because Erlang wouldn’t be that good for it.” And I said, “Yeah, and plus, ImageMagick has already been written. No need to rewrite it.”
And he said, “Oh, that doesn’t worry me in the slightest.
I think if I was doing it in OCaml, then I would go down and do it, because OCaml can do that kind of efficiency. If I was an OCaml programmer — okay, what do I have to do? Re-implement ImageMagick? Right.
Off we go.” And I said, “Just because it’s fun?” And he
said, “Yeah, I like programming. Why not? You know?
I’ve always been saying that Erlang is bad for image processing.
I’ve never actually tried. I feel it would be bad, but that might be
false. I should try.
Hmm… interesting. You shouldn’t tempt me.” Simon Peyton Jones spent his college years
between nine p.m. and three a.m. building hardware and trying to write a compiler in
BCPL while earning a degree during the day. Guy Steele was everywhere.
One of my favorite things from Guy Steele’s interview was he taught himself APL from the
printout. He went — there was a big trade show, and
IBM was there demonstrating their new APL, and he went up to the booth sort of at the
end of the show, and sort of looked — I don’t know.
Big puppy dog eyes, or something — at the woman who was cleaning up the booth as she
was taking the printout on the terminal where they had been demonstrating the new APL for
the whole show. And she said, “Do you want this?” And he said,
[nodding] And she gave it to him. So he had days of printout of APL interactions,
and from that, taught himself the language. He also made money in high school writing
COBOL, of all things — that was his first job as a programmer — writing a grading system
for, unfortunately, a different high school, so he didn’t get to hack his own grades.
And he learned enough Lisp just on his own to be the first person to get a perfect score on a quiz — this guy at MIT who was hiring Lisp programmers gave everybody a quiz, and he gave it to young 16-year-old Guy Steele, and Guy Steele aced it.
And Ken Thompson, also another coding machine — he just did whatever he wanted. When he wrote Unix at Bell Labs, he thought he was going to be fired for it, because they had just come off Multics, which was seen as a huge disaster. And since they had come off that project, officially they were doing research, but there were some things they were supposed to research and some not, and operating systems was one of the “Not to be Researched” things.
But he had an itch to write Unix, and he figured, “Well, I’ll write this, and they’ll probably
fire me. But whatever, this is what I want to do.”
And so, that’s what he did. The rest is history — he did computer chess
— hardware-aided computer chess, whatever struck his fancy.
Bernie Cosell, same way. And Knuth took a decade off to write his own typesetting system. All right, so.
And then, I’ll just do a little bit on — we’ve done reading, we’ve done writing…
Arithmetic. I asked everybody about how much math was really required to be a programmer. And computer science certainly came out of math departments.
Actually, Thompson had an interesting observation that computer science came — at different
universities — out of two places. Some places, it came out of math — Cornell,
whatever — and some places, it came out of E.E. — like Cal — and you really saw this
split. You know, the theory guys came out of math,
and then, the systems — the people who built Unix and whatever — came out of the E.E.
track. But so, I asked people, “Do you need to learn a lot of math?” And then also — sort of related to that — about formal proofs. There was some suggestion that the standard math curriculum is not that useful for programmers. Like, ultimately, calculus is not really what you want to learn so much as discrete math, obviously, and statistics or probability might really be more useful than calculus. On the topic of proofs — this was a
little interesting to me. Almost everybody I talked to pooh-poohed the notion of formal proofs. Again, Dijkstra is rolling in his grave, but
basically, the overall sentiment was pretty consistent, you know.
Crockford said, “I looked at them in the ’70s, but it just wasn’t working. Things are too hard. Software is so complicated and goes wrong in so many ways.” You know, people would say, “Assertions are useful, but full-on formal proofs are just not going to happen.”
Armstrong described taking a course in denotational semantics and spending 14 pages trying to prove that, in two different formal schemes, you know, if you let X=3 and let Y=4, then X+Y=7. And then, 14 pages later, he’s proven that. And he’s like, “Well, how am I going to prove the correctness of my Erlang compiler?” Norvig said, “I rarely have a program that I can
prove correct. Is Google correct?” You get back these ten pages — if it crashes, it’s incorrect, but if you get back ten pages, are those the correct ten pages? So that was pretty much down the line — nobody was interested in proving things correct.
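To be fair to the formal-methods side, the specific fact in Armstrong’s example is trivial for a modern proof assistant. In Lean, for instance, it is a one-liner; the 14 pages were about constructing the semantics by hand, and scaling any of this up to a real compiler is the genuinely hard part.

```lean
-- The arithmetic fact from Armstrong's anecdote, as a Lean 4 proof:
-- both sides reduce to the same numeral, so reflexivity closes it.
example : 3 + 4 = 7 := rfl
```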
Guy Steele, and even Knuth, likes to prove things informally correct, but it’s the same thing, you know — you wouldn’t know what all the assertions for even a simple program would be. Guy Steele gave an excellent example of the perhaps limited applicability of proofs. He was given a paper to review for CACM that was done by a student of David Gries, who was, himself, a student of Dijkstra.
And it was a proof of the correctness of this parallel garbage collector. And they gave Steele the paper to review, and he ground through it, checking the proof.
And it took him 25 hours to check every step of this proof, and at the end of the 25 hours,
he said, “There’s a step here that I can’t make look right.”
And so, he turned that back in as his review, and it turned out that was a bug in the proof
that was therefore a bug in the program, because the proof proved that the program worked.
And so, then they found there was one little thing that had to be swapped and then they
redid the whole proof and they sent it back to him.
And he rechecked the proof, which took another 25 hours.
And he could find no flaws in the proof. And so, you know, this sort of cuts both ways, in the sense that I asked, “Could you have just spent 25 hours and found the bug in the program directly?” And he said, “No, no way.” It was this incredibly complex parallel garbage collector with all these interactions, and the proof abstracted it in a way that let him find the bug in the proof, and that pointed to the bug in the code, but he never would have just said, “Ah, there’s a bug in the code.” On the other hand, it took him 50 hours, and
they still don’t know, right? I observed that, you know, Dijkstra has this
famous quote about, “You can’t prove by testing that a program is bug-free.
You can only prove that you failed to find any bugs with your test.”
And I said, “Well, it sort of sounds to me like, with the proof, you can’t prove a program
is bug-free. You can only prove that, as far as you understand
your own proof, it hasn’t turned up any bugs.” And Steele basically agreed with that.
So, I found an interesting example of how it can work.
It’s not totally an exotic quest to try and prove things correct, but also, how hard it
is until we get a little further along with automated theorem proving.
So I guess — oh, one last bit. So while we’re on Dijkstra, another one of
my favorite moments — obviously sad that Dijkstra wasn’t around to be included in this
book, so he couldn’t defend himself. But my favorite moment was, I asked a lot
of people about — Dijkstra has this famous paper, “On the Cruelty of Really Teaching Computing Science,” in which he basically says, “Undergrads should come in and be given formal symbol
manipulation stuff with the predicate calculus for years of their education before they’re
even allowed to touch — to actually program.” He describes, “We’ll use this language, which
we’ll be very careful to make sure has not been implemented anywhere on campus.
So nobody can actually program in it.” And nobody seemed to think that was such a
great idea. [laughter] Josh Bloch said, “That’s crazy. There’s a joy in telling the computer to do something and watching it do it.
I would not deprive students of that joy.” But my favorite moment involved — with all
due respect to Josh — was when Knuth started talking smack about Dijkstra.
That’s good stuff. [laughter] Because who else really can anymore?
So Knuth said — you know, I asked him about this thing, and he said, “But that’s not the
way he learned either. He said a lot of great things and inspirational
things, but he’s not always right. Neither am I, but my take on it is this —
take a scientist in any field. The scientist gets older and says, “Oh, yes.
Some of the things I’ve been doing have a really great payoff, and other things I’m
not using anymore. I’m not going to have my students waste time
on the stuff that doesn’t make giant steps. I’m not going to talk about low level stuff
at all. These theoretical concepts are really so powerful,
that’s the whole story. Forget about how I got to this point.”
I think that’s a fundamental error made by scientists in every field.
They don’t realize that when you’re learning something, you’ve got to see something at
all levels. You’ve got to see the floor before you can
build the ceiling. That all goes into the brain, it gets shoved
down to the point where the older people forget that they needed it.”
So, unfortunately, we can’t ask Dijkstra to respond to that.
But so in the interest of leaving some time for questions, I think I’ll just leave it
there. Actually, let me just close with this.
Several people — including your own Ken Thompson — were sort of wondering about where modern
programming was going. And, you know, Ken Thompson said, “I don’t
understand the code they write here at Google. It makes no sense to me.” And Bernie Cosell,
who helped build the early Internet, is now actually a sheep farmer in Virginia.
So he’s basically out of the game. Though he — like everyone who’s allegedly
retired, you know, the people I talk to — they can’t help but program.
But he says, “I don’t envy modern programmers. It’s going to get worse.
The simple things are getting packaged into libraries, leaving only the hard things.
The stuff is getting so complicated, but the standards that people are expecting are stunning.”
And he talked about — actually used Google maps as an example of something that like,
he knows what’s going on under the covers, but he’s like, “I don’t think I could write
that code” — which I doubt a little bit. And so, he closed saying, “There’s a good
time to be an over-the-hill programmer emeritus, because you have a few props because you did
it once. But the world is so wondrous that you can
take advantage of it and maybe even get a little occasional credit for it without having
to still be able to do it. Whereas, if you’re in college, if you major
in computer science, and you have to go out there and have to figure out how you’re going
to add to this pile of stuff? Save me.”
[laughter] So you guys are out here adding to the pile of stuff.
I’ll take your questions.>>[Clapping] Q Peter, the way I see the difference between
what really good programmers can do and what ordinary people like me can do is like the
difference between me and a chess grand master is they can keep many, many, many more things
in mind at the same time than I can, which gives them the ability to find connections
and solve problems much more easily. Do other people see things that way? Peter Seibel: I think there were some —
I asked a lot of people about sort of that topic — and actually, the analogy applies to chess as well a little bit, in the sense that I observed that sometimes the smartest
— ‘smartest’ in some definition of smart — people write the worst code — the most
spaghetti code, because they can keep it in their minds, right?
There’s all of these little tendrils that are all interlocked, and it all fits in their
head, and so they can do it. Whereas other people who are less smart in
that way, or smarter in another way, realize that, even if you can do that, it’s better not to, and find better ways of organizing things.
And a lot of people I posed that to — like, “Have you found that?” — said, “Yes.” I mean, there are at least two kinds of programmers in that sense.
I’m trying to think who really echoed that. I think Josh Bloch did.
But to get back to the chess analogy, that’s sort of like, some people can just — this
is at a much lower level, but I used to play chess with my dad when I was a kid, and he
used to beat me all the time. And then I actually learned how to play chess,
and I killed him. Because he was just basically smart and could
calculate a few moves ahead, which was enough to beat a little kid, but really had no understanding
of what he was doing. So it’s sort of the same thing.
But I think that with grand master chess players and also with great programmers, it’s not
just that they can sort of do more of the same thing.
They look at it differently, right? A grand master looks at a chessboard and just
sees the position and sees where the forces are flowing.
And so, it’s not that they’re calculating — I mean they can — when they get to the
end game, they calculate like crazy, but most of the time, they just look at the board and
say, “Oh, yeah. This is where the forces are flowing.”
I mean — I’m just speculating here — the same for programmers.
Some people just sort of see how things fit together at a maybe deeper level and see the
consequences of bad choices. Q So I noticed that pretty much everything
you touched on is about the programmer working on something by himself.
That’s mostly not what we do here. I don’t know — I mean, is it true that all
great programmers have to work by themselves because there’s too much of a hindrance working
with others, or is there some insight into how we can work in teams? Peter Seibel: Right. So the question was,
So today, I’ve talked mostly about things that apply to people working alone, what about
working in teams? Which is obviously prevalent.
Yeah, I did. So buy the book and read it, because I did talk about that too.
Probably, due to the folks that I interviewed, I mean, a lot of them sort of came from a
little earlier era when the cowboy — I think half the people I talked to said, “I’m the
last cowboy coder.” So. [laughter] So I guess they can fight about
that. But, I did talk about how people like to work
with others. XP was a little more in good light when I
started thinking about the questions. I was sort of asking that.
But, you know, I asked people, “Did you ever pair program?” Not a lot of pair programming,
but lots of variants on it. People, you know, Josh called it “buddy programming”
and Joe Armstrong did a very similar thing. Write a bunch of code and then swap, you know.
Joe Armstrong wrote the early version of Erlang, and then his friend Robert Virding took it
and completely rewrote it and then gave it back.
And Armstrong would completely rewrite that. And it went back and forth, and they had a
very different style. And they actually improved each other’s —
he thought — code. Ken Thompson talks about how he liked to split
things up in teams. Peter Norvig raised the issue when I asked what things people need to learn that they’re not learning in school. And he said, “Well, learning to work in teams is not yet taught as much as it ought to be — and is really the important thing.” Actually, before this book came out, I was at a breakfast at a Lisp get-together, and Peter was there, and I was fresh off of PCL, and I was asking people, you know, “What
book should I write next?” And he said, “You should write a book about programming in groups.”
And so, I went to Apress and said, “You know, I’m thinking about writing a book about programming
in groups.” And they said, “Okay, that’s great.
We’ll be happy to publish that, but first, you should really do this book of interviews.”
Because this book was Apress’ idea, because of the earlier book, Founders at Work.
And so, they had the idea for it, and they said, “Well, you should really do this.
And then, it’ll take you a few months.” So that was several years ago.
[laughter] I don’t write books fast. But anyway, you know, that is — I think a
book done like this 20 years from now will have a very different — will have learned
some stuff about that, that sadly, in a way, the people I interviewed didn’t have as much
to say about because of the era they came up in for the most part.
Jamie Zawinski described in great detail, however, how the Netscape people would scream
and curse at each other all the time and how that was very productive. Q Did you notice a consistency of opinion
in terms of the tools of programming, such as IDEs vs.
plain-text editors, or working interactively vs.
sitting down with a printout? Peter Seibel: Right. So the question was,
“Was there consistency in tools?” I’m tempted to be flip and say, “Yes, everybody uses Emacs” — because that was almost true. That camp was divided between the people who were ashamed to still be using Emacs and the people who were proud to still be using Emacs. [laughter] Some people were like, “Oh, I should really learn how to use an IDE, but I don’t.” And other people were like, “Yeah, I looked at those IDEs. They’re terrible. I’m sticking with Emacs.” Though, on the tool front, partly just because of
the different eras that people started, there are different tool sets.
Dan Ingalls wrote the first version of Smalltalk in BASIC, because that was the interactive
programming environment he had available. He’s always had a preference for interactive
programming environments, as you might guess, given the way Smalltalk turned out.
And so, he wrote the first version in BASIC, because that was what he had.
He knew Fortran inside out, but that wouldn’t have been a fun way to write it.
So a lot of people came up in the era of punch cards and dealt with that in different ways.
Like I mentioned earlier, a lot of people — even people who came up in that era had
hands-on experience, like I said, in a way that you might not expect.
So they — you know, it was punch cards, but they would have access to the machine so they
could work sort of interactively that way. But as far as the actual tools people use
today, you know, it’s sort of what you would expect.
There’s a lot of Emacs, and some people use IDEs.
And Josh Bloch still prints stuff out and really looks at it — spreads it out all over
the floor. I think Guy Steele echoed that.
This is not now, but Guy Steele described implementing — making a big change to the
Maclisp system where they wanted to change the way I/O worked, and he printed out the whole listing and took it to his parents’ vacation house and spread it out all over
the floor and just worked with paper. So there’s still that.
It’s useful for some things. But no big shockers on the tool front, unless
you’re expecting everybody to be using IDEs. Then you’d be shocked. Q It’s been, I think, 20 years since the book,
Programmers at Work, was published. Any difference between those interviews and,
you know, the culture shifts in the interviews that you did? Peter Seibel: So the question was there was
a book 20 years ago called Programmers at Work and what’s changed?
Well, a bunch of stuff has changed. That book was — there’s a couple differences
just between the books. Because Programmers at Work was sort of at
the dawn of the PC Revolution, or whatever. They interviewed Bill Gates as a programmer,
because he sort of still was. And a lot of the people they interviewed were
working on microcomputers when those were new.
And so, in some ways, this book sort of goes on both sides of that, because a lot of the
people I interviewed really started before that era and worked on bigger machines.
There was this — we sometimes forget — there was this huge, in a way, step backwards.
At the time that PC’s and micros came out, you know, all of a sudden it was back to writing
things in assembly and there was a point at which Microsoft said, “Okay, now we can start
using C.” C is not so expensive that we can’t — it’s
not too big and bloated — we can move from assembly to C.
But at that same time, people were working on workstations.
They had Lisp machines and Smalltalk, and people were doing sort of
serious computation that got a little bit lost, and now, we’re sort of finally coming
back around. The PC’s have ramped up, and now they’re powerful
enough that we can put up with the inefficiencies of things like Python and Ruby and so forth.
So the other big difference between the two books is just that mine is a little more technically-oriented
I think. Programmers at Work was sort of — everyone
was aware that there were these PC’s, and they had — IBM had the Charlie Chaplin ads
and, What is it all about? And so, Lammers was trying to get at that
for a little more general audience whereas mine was aimed at, “You are a programmer.
You want to know how these folks work.” or think, or live, or whatever.
So yeah. And then, obviously anything that talks about
the Internet is, you know, that’s changed how people look at programming.
Open source is much bigger now. Okay. We have time for one more question,
so you’re it. Q So did anybody say something about negative aspects of what they see nowadays, besides the obvious complexity of software? Some trends they see as regrettable or something to be avoided? Peter Seibel: Right. So the question is, Did
anyone have any worries about trends that are going the wrong direction as far as they’re
concerned? Other than the complexity of software.
So that would be the main one. And I don’t even know if people regretted
that so much as, you know, just saw it as inevitable.
I guess, in a way — there was some of that — and probably
the most interesting was Donald Knuth. And it’s hard to say, given who he is.
It’s just part of — he’s a bit of a throwback.
The kinds of stuff he works on, he works on alone, and it’s really not that big, you know,
compared to something like what you guys are building here.
But he was sort of very vehement about not liking black boxes.
You know, he said, “I recognize the use of black boxes and abstractions,” but he said,
“I like to be able to open them up.” You know and, “If there’s an algorithm and
it’s packaged up, I always think I can open it up and do it better.”
And he gave an example — this is sort of a simple example — but if you have a, you
know, matrix multiply algorithm and then you want to use that sort of generic algorithm
for multiplying complex matrices, then each complex multiply costs four real multiplications, but if you can open it up on the inside, there’s an identity that lets you do it with three.
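The identity he is alluding to is presumably the classic three-multiplication complex product, often attributed to Gauss. A minimal sketch in Python (the function name is mine):

```python
def complex_mul_3(a, b, c, d):
    """Compute (a + b*i) * (c + d*i) using 3 real multiplications
    instead of the naive 4, via the classic Gauss-style identity."""
    k1 = c * (a + b)   # = a*c + b*c
    k2 = a * (d - c)   # = a*d - a*c
    k3 = b * (c + d)   # = b*c + b*d
    return (k1 - k3, k1 + k2)  # real part a*c - b*d, imaginary part a*d + b*c
```

Each saved multiplication costs a couple of extra additions, which is why a generic black-box multiply might not bother; inside a large complex matrix multiply, where multiplications dominate, the trade pays off.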
So you know, and I’m like, “Well, yeah. YOU can open up any box and make it better.”
[laughter] “I’m not sure that’s a good strategy for everyone.”
But he — I mean, he seemed to regret that, you know. And somewhere
here, he said, “There’s overemphasis on reusable software where you never get to open up the
box and see what’s inside. It’s nice to have these black boxes, but almost
always, if you can look inside the box, you can improve it and make it better once you
know what’s inside.” And he just was saying, “So you get these
libraries, and now you know that when you call this subroutine you pass X0, Y0, X1, Y1, but when you call this other subroutine, it’s X0, X1, Y0, Y1, and you get that right, and that’s
your job.” And he seemed to think that was sort of sad.
It’s like, “It’s no fun.” He couldn’t really say whether, you know,
Do we have to give up that fun to build big systems, or should we just stop trying to
build big systems? That’s hard to say.
And he wasn’t the only one. Joe Armstrong also echoed
this idea of, “It’s good to open stuff up and not be put off by the abstractions.”
And that sort of struck me as what a friend of mine used to talk about — Jedi programming. You are a Jedi. You have to build your own lightsaber.
So there’s something to taking these things apart and figuring out how they work, but
obviously, you know, if we take apart everything, we’ll never get anywhere.
There it is. Thank you.>>[Clapping]