Computational Thinking – Computer Science for Business Leaders – July 2016


DAVID MALAN: Good morning. My name is David Malan. This is Computer Science
for Business Leaders. And we’ll do some group
introductions before long. But why don’t we dive right in this
morning to one of our first lessons, which is going to be on
computational thinking. Indeed, today and
tomorrow, among the goals are to give you really kind
of a top-down sense of what computer science itself is all
about, and computer engineering, particularly so that
everyone in the room, even just after a couple of days’ time,
can hold their own all the more, so to speak, with engineers,
can try to estimate, perhaps, a little more effectively
how long projects might take or what might be involved or,
at the end of the day, just what kinds of questions to
actually ask so as to wrap your minds around technical projects. It looks like we have
a mix of folks, though, and to give you a sense of who else is
in the classroom on an aggregate scale, we’ll give you a sense
of some of the answers that you gave us to the Google
form that we circulated in advance. But here’s where we are today. So we’re going to start off
with computational thinking and talking about some fairly high
level concepts like abstraction and algorithms and representation. But we’ll distill those into things
much more concrete before long. After a short break, we’ll then
dive into internet technology. So how does the internet work? Most everyone here uses it every day. Most everyone here is on it right now. But what, actually, is
underneath the hood? And what are some of the
things that can go wrong? And why was it designed
the way it was designed? And what are the kinds of
questions to ask, then, when building a business on top of it? Indeed, after lunch,
we’ll focus specifically on a more modern incarnation of these
technologies known as cloud computing. This in and of itself is mostly
just a buzzword, frankly. But we’ll distill
exactly what that means and what some of the
ingredients and building blocks are for using the cloud. And then, this afternoon,
after another break, we’ll take a look at web development. We’ll just scratch the surface, but
it will be an opportunity, hands-on with laptops, to get your hands dirty, if you’ve never done it before, with a
little bit of languages like HTML and CSS and the
general Linux architecture so that, when you walk
out today, we’ve tried to make things a little
more concrete, still, based on the higher-level discussions. Tomorrow, meanwhile, we’ll start
with a high-level discussion of issues of security and privacy
in society, more generally– things like, a few months back, Apple created
a bit of a stir butting heads with the FBI, as you may recall, as
they tried to get into someone’s iPhone. We’ll talk about that, what
it means for encryption, and what that technology actually is. We’ll look again at another industry topic like Dropbox, or the sharing of files, which, at first glance, might seem like a wonderful service and exists from Microsoft and Google and any number of other firms. But what does it actually
mean to be storing your files on services like Dropbox
for your own privacy and security? Then we’ll talk about
programming, so what it actually means to write software, what
some of the basic building blocks are, followed by technology stacks,
so a general way of describing what are the sort of ingredients
that you can bring to bear when trying to build a business
or a website or a mobile application. What’s the sort of
alphabet soup these days? And then, lastly, web
programming– looking not just at the aesthetics of web pages
using languages like HTML and CSS, but actual web programming using
languages like JavaScript or others. And we’ll talk about some of
the related technologies there. So here’s who’s in the room. When asked via that
Google form, how would you describe your comfort with technology? It’s split between
“very” and “somewhat,” with a few of you saying “not very.” So this is a nice mix. And certainly feel free to
ask any and all questions as we proceed today and tomorrow. And we’ll see, perhaps
based on questions from me to you, just how very
“very” is, perhaps. But we’ll get there. And then, when asked, as well, “Do
you have any programming experience in any language,” this
was a bit more of a mix, with 35% of folks saying “no prior
programming experience,” some of you saying “some,”
and then, just a few of you saying “a lot,” which is a
good mix, too, because, hopefully, then when we do the
hands-on portion, there’ll be a little something new for everyone. So if you’d like to follow along today
or tomorrow, all of these slides, including any edits we might make
in real time, are at this URL here. So if there’s one URL you
want to keep open today, let me pause for just a moment and
suggest you go to this bitly URL, cs-for-business-leaders-201607. I’ll linger for just a moment on that. In the meantime, we can
get these out of the way. Just let me know if you have
any trouble viewing those. All right. And typing seems to be slowing,
so if you don’t have it yet, just look at the person
next to you, if you could. So we are now at computational thinking. All right, so what does
this actually mean? This is kind of one of the
buzzwords within academia for describing one of
the returns that you get from studies in computer
science, what you get out of it, what you learn how to do. And indeed, many fields
teach you how to think, or so they claim in course catalogs,
descriptions, and so forth. But computational thinking is
actually a sort of ingredient that we can leverage. It’s a way of thinking more
algorithmically, so to speak, more methodically, and bringing to bear, to
a problem, sort of a more organized way of thought. But to get there, let
me propose that there are three ingredients to this notion
of computational thinking. Specifically, let me propose
that computational thinking can be distilled as input,
going into something called an algorithm, producing outputs. So in other words,
computational thinking is all about building this
process, this pipeline. But this invites the question,
well, what do we mean by inputs, and what do we mean by outputs? Well, inputs are just whatever the
problem is that we have to solve. In a little bit, we’ll play around with
an old school technology, this phone book. This might be the input to a problem–
look someone up in this phone book. So here is the physical input. And the additional ingredient,
the additional input, is someone’s name, whom
we might want to look up. And the output, hopefully, is going
to be, from that problem– what, intuitively? A phone number. So hopefully, that’s
going to be the output. Meanwhile, the algorithm is the
process that my hands engage in, in order to find someone’s name in this
phone book and, in turn, their number. So that would be the algorithm
or the problem solving steps in between the inputs and the outputs. But how does a computer, and how do we
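That inputs-algorithm-outputs pipeline can be sketched in a few lines of Python (purely for illustration; the names and numbers here are made up):

```python
# Inputs: a phone book and a name. Algorithm: the lookup in between.
# Output: a phone number.
phone_book = {"Alice": "555-0100", "Bob": "555-0199"}

def look_up(book, name):
    return book.get(name)  # the number, or None if the name isn't listed

print(look_up(phone_book, "Alice"))  # 555-0100
```

Whatever process `look_up` uses internally, the shape is the same: inputs go in, outputs come out.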
humans, represent inputs and outputs? Well, at the end of
the day, most everyone here probably knows that computers only
understand what alphabet, so to speak? AUDIENCE: Binary. DAVID MALAN: Binary. So, 0’s and 1’s. And even those of you who said you were
only somewhat or not very comfortable probably at least know or have heard
that computers, indeed, only understand 0’s and 1’s, the so-called binary
system– “bi,” meaning two, and that’s two because you only
have 0 and 1 in your vocabulary. We humans, by contrast have used the
decimal system, “dec,” meaning 10, so we get to speak in terms
of 0 through 9, not to mention alphabetical letters and more. So if computers only
have 0’s and 1’s, though, how could we possibly
go about representing the number 2 and the number 3 and
the number 4, let alone a, b, and c, let alone images and movies and audio
files and any number of other formats that we just take for
granted these days? Well, think back to
grade school, perhaps, when you first learned that this
pattern of symbols– 1, 2, 3– represents what number in decimal? 123. Excellent. So that all comes to us very
intuitively, certainly, these days. But why is that? You might not have thought about it
for some time, but if you’re like me, you probably learned back
in the day that this 3 is in the ones place, or the ones column,
and the 2 is in the tens place, or the tens column, and the 1 in the hundreds place. And then the thousands place, ten thousands place, and so forth. But why is that significant? Well, 1, 2, 3 is just
a pattern of symbols. That’s our vocabulary,
0 through 9, so the fact that we’ve ascribed meaning to the
places in which these digits are, means that we can do 100 times 1– this is the arithmetic– and then, that’s plus 10 times 2, and that’s plus 1 times 3. So this, of course, is 100 plus
20 plus 3, which gives us 123. Now at the end of the day, we’ve still
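That place-value arithmetic can be sketched in a few lines of Python (a sketch purely for illustration; the digits and variable names are made up):

```python
# Recompute 123 from its symbols using the place values just described:
# 100 times 1, plus 10 times 2, plus 1 times 3.
digits = [1, 2, 3]
value = 0
for d in digits:
    value = value * 10 + d  # shift everything one decimal place, then add
print(value)  # 123
```

Each pass through the loop multiplies by 10 because there are 10 symbols in the decimal vocabulary; a binary version of the same loop would multiply by 2 instead.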
written down the exact same thing, but we’ve gone from just a
pattern of symbols– 1, 2, 3– to a higher-level notion of
a number, a decimal number, that we know now, intuitively, is 123. Well, how many of you–
just to get a litmus test of before and after–
how many of you know binary, could recite numbers in binary? OK, good. Otherwise, this is going
to be very underwhelming. So it turns out all of you, I claim,
a bit boldly, already know binary or certainly could make
this leap within seconds. So instead of having 1’s, 2’s, 3’s, and
0’s and 4’s and 5’s and 6’s, 7’s, 8’s, 9’s in our vocabulary, now we’re
limiting ourselves to just 0’s and 1’s. And so if we have, say, three digits,
just for the sake of discussion, we also now– if we only have 0’s and
1’s– are going to want to change these placeholders because before, 1, 10,
100, 1,000– those were powers of 10. And that was significant
because we had 10 digits. So just intuitively,
or take a guess, what are the powers going to
need to be if you only have two digits at your disposal? Powers of 2. So we still start with a ones
column, which is 2 to the 0, but then we have 2 to the 1,
which is the twos column, 2 to the 2, which is the fours column, 2 to the 3,
which is going to be 8, 16, 32, 64. And so the columns are
still powers of some value. But in this case, it’s
powers of 2 instead of 10. So based on that logic, if I
told you a computer was storing this pattern of bits, or
binary digits, what number is the computer thinking of in decimal? Just 0, right? If the logic is the same–
4 times 0, plus 2 times 0, plus 1 times 0 gives us the
higher-level notion of 0. So this is how a computer
would store the number 0. So even if you’ve never spoken
or thought about binary before, how do we represent, say,
the number 1 in binary? Yeah. 0, 0, 1. And of course, to represent the
number 2, we’re going to do what? This. Yeah. Trick question. So this would be 3, just based on
the process we’ve been using before. So that’s not correct. This would be the number 2. This now would intentionally
be the number 3. The number 4, of course,
would be 1, 0, 0. And then we can count up further. But actually, let’s count up as high as
we can with three binary digits, a.k.a. bits, what’s the largest
number we can count to? Yeah, 7. And you know this because, if you only
have 0’s and 1’s– and we already know that putting 0’s everywhere
gives us 0– clearly, the largest number
has got to be 1, 1, 1. So that’s 4 plus 2 plus 1 is 7. But how, then, does a
computer count as high as 8? Yeah. So we need more powers. We need more bits. So just like you might
carry into the next place using the decimal system or grade
school arithmetic, same idea here. If we want to now count
as high as 8, that’s fine. But we’re going to need another
bit in the so-called eights place so that we can represent this. Now as an aside, how many
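Here is a small sketch of that carry into the eights place, in Python (illustrative only):

```python
# With three bits, 0b111 == 7 is the ceiling; the number 8 needs a
# fourth bit, 0b1000, the so-called eights place.
for n in range(9):
    print(n, format(n, "04b"))  # e.g., 7 -> 0111, 8 -> 1000
```

In general, n bits can count from 0 up to 2 to the n, minus 1.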
bits does a computer typically use to store numbers? Does anyone know? It’s bigger than 4, otherwise we
wouldn’t have much to do in a computer. AUDIENCE: 16. DAVID MALAN: 16, used to be 16. Nowadays, it’s a little bigger. 32 is very common, and very
much in vogue these days is 64. So even if you’ve never really
thought about what it means, if you’ve heard the expression
32-bit or 64-bit– it’s usually in the context of CPUs or computers
or operating systems– well, that signifies, essentially,
how many bits that computer uses to store numbers. Specifically, it means how many bits it uses to store addresses of memory. But more on memory in just a little bit. But that’s to say, if a computer
has a ones place, a twos place, a fours place, eights,
16, 32 total such bits, the largest value a computer
can represent is 2 to the 32 (strictly, 2 to the 32 minus 1). And so, if there’s one number,
roughly, you take away from today, let’s consider this one. How high can a computer count if a
computer can count as high as 2 to the 32? 2 to the 32– we could do this. So 2 times 2 is 4, times
2 is 16, 32, 64, 128, 256, 512, 1024, 2048. It’s going to get hard soon, so save me. So it’s going to be a pretty big number. Turns out, this is roughly 4
billion– roughly 4 billion. Now, for those of you who
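For the curious, the arithmetic checks out in any language with big integers; a Python sketch:

```python
# Distinct values representable with 31, 32, and 64 bits.
print(2 ** 31)  # 2147483648: roughly 2 billion
print(2 ** 32)  # 4294967296: roughly 4 billion
print(2 ** 64)  # 18446744073709551616
```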
might have grown up with PCs and might have built them or
bought them, certainly, yourself, you might have at some point realized
or been told that your computer can only support up to 2 gigabytes of RAM. Did anyone ever run
into this limitation? So if you ever ran into an
upper bound on how much memory your computer could
support, sometimes it’s just because the manufacturer
had some hard-coded limitation. Apple does this just for money and
financial reasons or engineering reasons. But back in the day, especially
with PCs that you could upgrade, there was still an upper bound on how
much RAM you could put in a computer. You could only, in many machines, put
in 2 gigabytes because, at the time, they were using 32-bit
values– and actually 31 bits, because they were stealing one of them to represent the sign for negative numbers. But more on that, perhaps, another time. So that meant not 4
billion, but 2 billion. And if you’ve got memory in your
computer–and we’ll talk more about memory in a bit– if you’ve
got memory in your computer, but you can physically only
count as high as 2 billion, you better have no more than 2 billion
bytes of memory because everything at 2 billion and 1 or beyond, essentially,
would just be inaccessible. You wouldn’t have the vocabulary
with which to express it. Nowadays, this is less of an issue. And computers these days are
doing 64-bit values, 2 to the 64. And this is not a
number I can pronounce, but I can bring up a calculator. If you’ve ever wondered
what 2 to the 32 is– that, of course, is 4 billion,
which looks like that. And just to give you a sense
of the size of 2 to the 64, that’s a lot more bytes. That’s way more RAM than we would ever
be able to afford or fit, physically, probably, in a computer. At least right now. So why is all of this important? Well, we only seem to
have, at the moment, a conceptual way of thinking about
what goes on inside of a computer. We’re just doing things with markers
and with numbers on the screen. But at the end of the day, the
computer is a physical device, and a phone, which is essentially
a computer, is a physical device. So what’s going on underneath the hood? How do we get from these abstract
ideas of counting to 7 or 4 billion to actually doing that,
physically, in a computer? Well, what’s one of the only
inputs into my computer? At the end of the day, it’s
this thing here, the power cord. And even if you’re not
an electrical engineer, what comes out of the power cord? Electricity comes out of
the holes in the wall. We can keep things very real today. And you probably remember, generally,
that there are electrons involved– electricity flowing
in one direction or another. And that suffices for
today’s purposes, certainly, to know that electricity is an input. And you’ll note, certainly, that
when you have electricity, you can turn things on. So if I want to turn on
this light on my phone, you can think of that as
consuming some electricity. Electricity is now
flowing through the device because I’ve turned the switch on. And now, when I hit it again, it’s off. So it turns out computers
do fundamentally work in quite the same way. If you think of your computer as
having a whole bunch of little switches or light bulbs, if you will,
inside, if those are on, you can think of the
computer as representing a 1. And if they’re off, you can think
of the computer as representing a 0. So my phone right now is in
the off state, so to speak. And now, once I turn the switch
on, it’s in the on state. So I might say that my phone
is currently representing a 0. And now it’s representing
a 1, 0, 1, and so on. But unfortunately, with just
one switch, one light bulb, I can only count so high. And it’s around this time
I like to borrow one phone. Can I– a phone? That one, I can turn on. May I? I don’t need to unlock it. I just need to turn the flashlight on. So everything is safe, still. So if now I have two phones at my
disposal, you can think of this one as being in the ones place,
this one as being in twos place. So quite trivially, what number
am I representing right now? AUDIENCE: 0. DAVID MALAN: Both are off. Right. So they’re both off. And if I go ahead and
turn on this one, I’m now representing 1 because this is the
ones place, this is the twos place. Now if I turn this one
on, I’m representing– AUDIENCE: 3. DAVID MALAN: –3 because 1 and 1. And if I turn off this one,
I’m representing, of course, 2. Now with two phones, I can only
count as high as 3– 0, 1, 2, 3– because that’s how many ways I
can permute the on/off switches. Now certainly– thank
you– a typical laptop has many more than just two switches. It might have– turn
this off– it might have thousands of switches or millions
of switches or tens of millions of switches these days. And indeed, if you’ve ever heard
the expression “transistor”– as in, back in the day, a transistor radio– well, inside of a computer, a transistor is really just a tiny little switch that, if it’s on, allows a little bit of electricity
to flow and get stored, perhaps, and so that gives us a 1. And if the switch is
off, that gives us a 0. So inside, physically, a computer,
there might be millions or tens of millions of these switches. And that’s how we can store
so much darn information or compute so much
information all at once. Unfortunately, there is a downside
of using electricity and switches in this way. And I deliberately turned my
flashlight off a moment ago because I was probably worried
that what might happen? The battery might die. So if the battery dies, or
if, in the case of my laptop, my battery dies and this
power cord comes off, presumably all those little light
bulbs, all those little transistors, naturally switch off, which is
not good for your Excel files and your Word documents
and anything else that you might be storing
locally on your computer. So what do computers instead use to
store data persistently or long term? Yes, memory. And it turns out, there are different types of memory. Let’s distill this for just a moment. We’ve been talking
about bits, of course, which might be 1’s or 0’s, which
might be represented, physically, with little transistors
turning on and off. And so this takes
different forms, but let’s go ahead and describe
a few types of memory. One, there’s little
things called registers. These are typically 32 or 64 bits of
memory inside of a computer’s CPU. And we humans really never should
care about how many registers a computer has. It’s not a selling point
of a computer, per se. It’s a very low-level
implementation detail. But it’s about the
smallest unit of measure that a computer or its CPU,
central processing unit, might actually care about. There’s then stuff called
RAM, which is what all of us are intuitively describing, most
likely, as memory, random access memory. This is the stuff that does
get lost when you lose power, or when your battery dies. It’s for non-persistent data. And RAM is where programs
live when you’re using them or when files are open. So if you double click Microsoft Word
or some such program on your desktop, and a window opens up,
essentially what’s happened is that program,
Microsoft Word, has been copied from somewhere permanently
on your computer into somewhere temporary on your computer called RAM. Why that is, we’ll see in just a moment. And where it came from was probably
something called a hard drive. Now back in the day, these might
be called HDDs, hard disk drives. The catch is, they’re not
really hard disks anymore. They’re not spinning
platters, as we’ll soon see. They’re also called SSDs
now for solid state drives, which means they’re purely electronic. But they store their charges
even after you lose power. But this is kind of the
pipeline from one to the other. And it turns out we
could do this all day. There’s something called an L1 cache in here, level 1. There’s an L2 cache in here. Those two are very low-level
details, but this is just to say when we humans
say memory, frankly, we could technically mean a
whole bunch of different things. But odds are humans mean this one here. But it’s different from this
kind of memory, your hard drive, because this is where your
data is stored permanently. So when you install Microsoft
Word or Excel or create a file and go to File, Save, all of those
0’s and 1’s are somehow stored here. But when you double click on
the program or open the file, they’re temporarily copied
in duplicate, essentially, here while you’re using them. Now, why might there be this duality? If you put on your
so-called engineering hat, why would we have so many
different types of memory? Because, as an aside, the stuff that’s
in here, eventually ends up in here, and a little bit in here, and even
less of it, but some of it, in here. So like all of these steps,
moving things around, why would we have this pipeline
from long term to super short term, would you think, intuitively? AUDIENCE: Speed. DAVID MALAN: Speed. What do you mean? AUDIENCE: When you’re
dealing with less material. DAVID MALAN: Yeah, so less material–
so speed, though, in what sense? Is one of these slower or
faster, would you conjecture? AUDIENCE: [INAUDIBLE]
the RAM is smaller. DAVID MALAN: Oh, so the RAM is smaller. So that’s, indeed, the case. A computer might have 2 gigabytes of
RAM these days, 8 gigabytes of RAM, but low numbers of gigabytes. Whereas a hard disk might
have 512 gigabytes or 1,024 gigabytes, a.k.a. a terabyte,
or even more than that. But this seems silly. Why have less of this, if
this is the good stuff, this is where your programs
and files are living when you actually care to use them? AUDIENCE: One has the working memory,
and the other’s the stored memory. DAVID MALAN: True. One is working memory. One is stored memory. But why not just use your
hard drive for working memory? Why complicate things? AUDIENCE: You don’t want
to lose the old stuff. DAVID MALAN: You don’t want to lose–
but if I’m storing everything always on my hard drive, it would stand
to reason that I’ll never lose it. Whereas if I load something into
RAM while temporarily working on it, power goes out, that’s
when all of us start to swear because that’s when
you’ve lost data, potentially. AUDIENCE: I guess he was saying like
speed and the amount of space you put [INAUDIBLE]. DAVID MALAN: Speed, yeah. And let me fill in a blank here. It’s indeed the case that this
is bigger, this is smaller. This is bits, this is, maybe, terabytes. But this is also super fast, and this
is, relatively speaking, super slow. So there is this trade off
between speed and space. You want a huge amount
of space, ideally, for all of your personal files
and work files and so forth, so you don’t have to run out
of space and delete things and just generally do
that kind of dance. But this space, as a result of being
cheap, tends to be pretty slow. And as a result of being mechanical,
as we’ll see in a moment, also tends to be slow. Back in the day, this was a physical
device with a moving platter, not unlike record players
of yesteryear, that is just subject to laws of motion
and physical speed limitations, whereas this is purely electronic. And even RAM is purely electronic
and therefore much faster. So if you have more of this and
less of this, but this is slower, and this is faster, why not just
have more of the fast stuff? Just to be clear, this is
more, less, fast, slow. So why not just have more
fast somewhere in the middle? Like more RAM? AUDIENCE: Does it take more electricity? DAVID MALAN: It doesn’t necessarily
take more electricity, but good thought. AUDIENCE: Expensive. DAVID MALAN: Yeah. It’s really this issue right here. It just happens to be the case,
still, that RAM, byte for byte, is just a lot more expensive than hard disk space. So yeah, you could
have a terabyte of RAM, but you’re really going to pay for it. And no one really
supports systems like that because, per the intuition that some of you had– this is really just working memory– I don’t need to run all
of my programs at once. I might run one, maybe half a dozen
if I have multiple windows going. But it’s a subset, most likely, of all
of the stuff I’ve installed over time. I’m not going to have every Word
document open that I’ve ever created. I’m probably going to have one or a few. So working memory can be,
logistically, just smaller. And so even though
it’s more expensive, we can tolerate that because
we don’t need as much. And so these are some of the trade offs. And when you’re buying
a consumer PC or Mac, frankly, the only ones you
really have choice over, discretion over, perhaps, are those two. And more RAM tends to
mean more expensive. But why might you want to have more
RAM, according to this logic here? Why might you want to spend
a few hundred dollars more to get the nicer Mac or
the nicer PC with more RAM, if everything else is constant? Faster– why? That’s correct, but why does more
RAM make your computer faster? AUDIENCE: Bigger working memory? DAVID MALAN: Bigger working memory. AUDIENCE: [INAUDIBLE] pull more things
from your hard drive [INAUDIBLE] using them all at the same time? DAVID MALAN: Yeah, that’s
what it boils down to. But there’s a reason that it
actually feels, to the human, faster. It turns out that, even if you
only have a gigabyte of RAM, you can actually cram more than
a gigabyte of files and programs into RAM, which seems a
little counterintuitive. But that’s because of a
technology called virtual memory. And computers have had this for
20, 30 years now, in some form. Back in the day, you used to have to
pay for and install special software to add virtual memory to your computer. It was a feature. Nowadays, it just comes with Windows
and Mac OS and Linux and Unix and other operating systems. But virtual memory is a feature
of modern operating systems that creates the illusion that,
even if you might have, physically, 1 gig of RAM, the computer will let
you think you have 2 gigabytes of RAM. But the moment you try to
add the billionth plus 1 byte to your RAM, what the computer does
is it does a little switcheroo. It looks at your RAM, realizes
yes, Excel is running, but you haven’t touched that
spreadsheet for a minute or for an hour or for a week. I’m going to secretly move Excel
from RAM back to your hard drive in some temporary scratch
locations, or some temporary space that you don’t even have
to know or care about, so that you can load that additional
file that you just double clicked on and go about your business. So what happens, though, if I
just said, oh, wait a minute. Now I do want to play
with that Excel file. So I click the little icon in my toolbar
or whatnot for the Excel program. That then gets foregrounded. And if you’ve ever seen this happen,
visually, sometimes computers have this like– it kind
of feels painfully slow as the window is going from the
background, or minimized state, to foreground. Why is that so slow? What might be happening? AUDIENCE: [INAUDIBLE]. DAVID MALAN: Yeah. Go ahead. Oh, from– AUDIENCE: I was just saying, it’s
retrieving it from the hard drive so it’s taking [INAUDIBLE] memory. DAVID MALAN: Exactly. Remember that this is slower. This is faster. And so if Excel was secretly
swapped out by your operating system so that you could use that memory for
something else, the moment you want Excel back, your computer
has to secretly move it back from the hard drive to RAM. And that’s what creates
that feeling of slowness. So it’s not so much that
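That swapping can be modeled with a toy sketch (a deliberate simplification for illustration; real virtual memory moves fixed-size pages, not whole programs):

```python
from collections import OrderedDict

RAM_SLOTS = 2            # pretend RAM holds only two programs at once
ram = OrderedDict()      # insertion order doubles as a recency list

def touch(program):
    """Use a program, swapping it in from 'disk' if it isn't in RAM."""
    if program in ram:
        ram.move_to_end(program)   # mark as recently used
        return "fast (already in RAM)"
    if len(ram) == RAM_SLOTS:
        ram.popitem(last=False)    # swap out the least recently used
    ram[program] = True            # swap in from disk
    return "slow (loaded from disk)"

print(touch("Excel"))    # slow (loaded from disk)
print(touch("Word"))     # slow (loaded from disk)
print(touch("Browser"))  # slow; Excel gets swapped out behind the scenes
print(touch("Excel"))    # slow again: the sluggish window redraw
```

With only two slots, returning to Excel after opening two other programs forces a trip back to the slow disk, which is exactly the delay described above.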
just adding RAM to a computer makes your computer faster because
if you’re only using a few programs or files at any one time, you could
have an infinite amount of memory. If you’re not going to
use it, it’s not going to have any impact on your computer. But if you have relatively
little RAM, but you’ve been using your computer
and programs and files so much that you’re constantly swapping,
so to speak, unbeknownst to you, but underneath the hood, between RAM
and hard disk RAM and hard disk again and again because of virtual memory,
then yes, more RAM will help. So in a consumer perspective,
frankly, just spending on RAM tends to be a good thing. There tends to be an upper
bound, realistically, anyway because this way you don’t have
to think about these kinds of things, as well. But that’s a slippery slope. Of course, paying for
more hard disk space is great, too, because you never
have to worry about deleting things. Paying for a faster CPU,
central processing unit, will just make everything faster
because the computer can think faster. So even though I say more is better,
it all, of course, has this impact. But this is the first such example of
a trend we’ll see today and tomorrow, of these trade offs. And in retrospect, it
might seem pretty obvious. So better is more expensive,
faster is probably more expensive, but I can get less of it to
sort of balance my budget. This notion of trade offs is going to
be constant throughout computer science. In this case, it might be something like
speed and cost or speed and quantity. But we’re going to see other
such trade offs, as well. And that really, at
the end of the day, is what’s interesting about or
hard about engineering– having to make those educated decisions
as to which one is more important and when. So let’s now scale up from
something conceptual to something physical, but small, like a phone
and a flashlight, to actual hardware, something like a hard drive,
with which some of you might be familiar from yesteryear. Most of your laptops today
probably have what are called SSDs, solid state drives. But if your computer has
a fan, or if you’ve ever heard clicking, which
is generally not good, that’s because you have a hard disk
that works a little something like this. Let me go ahead and hit play. SPEAKER 12 (ON VIDEO): The hard
drive is where your PC stores most of its permanent data. To do that, the data
travels from RAM along with software signals that tell the
hard drive how to store that data. The hard drive circuits translate those
signals into voltage fluctuations. These, in turn, control the hard drive’s
moving parts, some of the few moving parts left in the modern computer. Some of the signals control a motor,
which spins metal-coated platters. Your data is actually
stored on these platters. Other signals move the read/write heads
to read or write data on the platters. This machinery is so precise
that a human hair couldn’t even pass between the heads
and spinning platters, yet it all works at terrific speeds. DAVID MALAN: So now let’s zoom
in a little lower-level on where the bits actually are on a physical
device like that, in a second video. SPEAKER 13 (ON VIDEO): Let’s look
at what we just saw in slow motion. When a brief pulse of electricity
is sent to the read/write head, it flips on a tiny electromagnet
for a fraction of a second. The magnet creates a
field, which changes the polarity of a tiny, tiny
portion of the metal particles which coat each platter surface. A patterned series of these tiny,
charged up areas on the disk represents a single bit of data in the
binary number system used by computers. Now, if the current is sent one
way through the read/write head, the area is polarized in one direction. If the current is sent in
the opposite direction, the polarization is reversed. How do you get data off the hard disk? Just reverse the process. So it’s the particles on the
disk that get the current in the read/write head moving. Put together millions of
these magnetized segments, and you’ve got a file. Now, the pieces of a single file may be
scattered all over a drive’s platters, kind of like the mess
of papers on your desk. So a special, extra file keeps
track of where everything is. Don’t you wish you had
something like that? DAVID MALAN: OK. So this is all to say
that, not only do we have a conceptual mental model for
how to count higher than 0 and 1, to 2, to 3, to 4, we also
have a physical way now of representing these 0’s and 1’s,
not only temporarily in stuff like RAM and inside of your computer by way of
transistors turning things on and off, but also permanently. So to be clear, I am often reminded when
we talk about hard disks and storage of this guy from many years ago. Although, maybe he still exists. Wooly Willy. If you remember this little toy–
it’s just a plastic container, below which is a face. And then there’s all these
little black magnetic particles. And you get a little red stick
that itself has a magnet on it. And you can draw mustaches and
hair and so forth on his face. But those little magnetic particles
are, essentially, a much larger analog of this thing here. So as the video discusses, having
these 0’s and 1’s represented by these little blue
and red ovals, that’s like saying if the particle is oriented
like this, it might represent a 1. And if it, instead, is this way,
south, north, instead of north, south, it might represent a 0. If you ever, years ago,
remember floppy disks, and you ever moved that little
metal sheath over the floppy disk and touched it, there was,
indeed, a floppy disk inside. But what you’re touching is
a platter, not unlike this, that had a lot of magnetic particles
that, unfortunately, you just moved or destroyed or corrupted,
essentially, by touching them. And that’s why there was that
metal sheath on top of it. All right. So if we can store
permanently or temporarily 0’s and 1’s, that’s all fine and good. But it seems that all computers are good
for is computing things like numbers or calculating, really. How do we get from 0’s and 1’s
and 2’s and 3’s and just numbers, more generally– 4 billion, maybe
that high– to things like letters, so that we can actually implement
Microsoft Word and email? And how do we then get higher still
to things like images and movies? Let’s start with the first. If all you have is 0’s and 1’s in your
alphabet, how could you send messages, textually, to someone else
if we have just 0’s and 1’s? Yeah. AUDIENCE: It could be the A
would be 1, B would be 2, C 3– DAVID MALAN: Yeah. OK. What do people think about that? A is 1, B is 2, C is 3, and so forth. Like it? Yes? No? It’s kind of arbitrary, but it’s clean. It’s nice. And so long as all of us humans agree,
it’s actually perfectly correct. Now, it turns out the
world didn’t quite standardize on 1, 2, 3 for A, B, C. They
started counting a little higher just because of punctuation and
other symbols on the keyboard. But indeed, that’s exactly the case. It turns out that the
world decided some time ago that, if a computer is to
store the capital letter A for the purposes of a
document or an email or whatnot, it’s actually going to represent
that A using the decimal number 65 or, more specifically, whatever pattern
of bits represents the number 65. And let me try to figure this out. So if I have the ones place, the
twos place, the fours, the eights, 16’s, 32’s, 64’s, in this case,
I need 1, 2, 3, 4, 5, 6, 7. And indeed, it turns out that 7 is
going to be significant in a moment. How do I represent the
number 65 using binary? I need 7 bits, seven 0’s
and 1’s, but what pattern? AUDIENCE: [INAUDIBLE]. DAVID MALAN: Yeah. So thankfully, this one’s actually a
little easy because you just need a 1 at both ends. You need a 64, and you need a 1. So that gives us 65, which is to say,
if you’re writing a Word document or you’re writing an email, at the
end of the day, what you’re really doing when you hit the capital
letter A on your keyboard is telling the computer,
store the decimal number 65. But what does that mean? It can’t store 6, 5. But it can store 0’s and 1’s. So you’re really telling the computer
when you hit the capital letter A, give me seven bits, permute
them to look like this, and store those bits somewhere
in the computer’s memory. And indeed, it turns out that the
capital letter B, by convention, is 66. C is 67. Dot, dot, dot. Lowercase A is 97. Lowercase B is 98. And there’s a couple of hundred more
because of all the various symbols on the keyboard. And at the end of the
day, this is a system called 7-bit ASCII, which is just a
fancy way of saying American Standard Code for Information Interchange, which
is not something you need to remember. But it’s a slightly more refined system
than just the obvious A is 1, B is 2, C is 3. They also took into account
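That convention maps directly onto what most programming languages expose. As a quick sketch in Python, purely illustrative and not part of the lecture demo:

```python
# 7-bit ASCII as described above: capital A is 65, capital B is 66,
# lowercase a is 97, and each code fits in just 7 bits.
for letter in "ABa":
    code = ord(letter)          # letter -> decimal number
    bits = format(code, "07b")  # decimal number -> 7-bit pattern
    print(letter, code, bits)

# Capital A really is a 1 at both ends: a 64 and a 1.
assert format(65, "07b") == "1000001"
```

Running it prints, for example, `A 65 1000001`.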
punctuation and other symbols, as well. Now it turns out that 7
bits isn’t all that much. Not to dwell too much on
math today, but how high can you count if you only have 7 bits? How many values can you represent? It’s 128, the numbers 0 through 127. Why is that? Well, if you have two possible
values here, two possible values here, two possible values here,
and so on for all 7 bits, it’s 2 times 2 times 2, seven times over, which is 2 to the seventh,
which does equal 128. That’s all fine and good for American
English because we’ve got A through Z, so that’s 26. But then lowercase and
uppercase– so that’s 52. I like exclamation points. So that’s 53. There’s space bar, 54. There’s bunches of others. But certainly, in other languages,
especially certain Asian languages where there’s a whole symbology,
128 total possible characters isn’t going to cut it. So this was very American-centric–
American Standard Code for Information Interchange. So nowadays, not only is it
more common to use, certainly, 8-bit for extended ASCII, but
actually, 16-bit values called Unicode. And Unicode is just
a smarter system that actually accommodates the fact
that not everyone in the world only needs 7 bits. We sometimes need more to
represent certain symbols. And I guess, probably the last
math exercise– what’s 2 to the 16? How many characters can we handle now? How many keys on the
keyboard, so to speak? It’s bigger. 65,536. I know that only because I ask
that rhetorical question so often. So that’s a lot. And that’s decent, certainly,
to represent textual characters. So at the end of the day,
what’s the key ingredient? Well, we just had to decide on a system. We just had to decide,
somewhat arbitrarily, but globally consistently,
65 shall represent capital A. 66 shall represent capital
B. And it’s this mapping, this code, that we all just need to agree upon. But then how does the computer
know when a pattern of bits, 0’s and 1’s, represents A or 65? How might we address this? Yeah, Avi? AVI: Just one switch
that changes the mode. DAVID MALAN: Yeah. One switch that changes the mode. Not bad. Not bad. Unfortunately, the catch there is this–
we’re only, at the moment, narrowly talking about numbers and letters. But what about images and movies and
any number of other file formats? That seems a slippery slope if we need
a way of encoding the type of data, because there could be an infinite number
of file types that people create over time. So the storage space is just
going to grow and grow and grow. So not bad, but it’s
probably going to run into problems as soon as we want to
do more than just numbers and letters. What might you do instead? AUDIENCE: Have another code for 65. DAVID MALAN: I’m sorry? AUDIENCE: Have another code for 65. DAVID MALAN: So have
another code for 65. It’s not bad. But at the end of the day, that code is
going to have to be a pattern of bits. So we could absolutely come up
with some other pattern of bits, but the problem is there’s an infinite
number of numbers in the world. We can count up past 4 billion, as
humans, to 8 billion, to 16 billion, to a trillion. And unless we decide no one can ever
use the following numbers in a math program, you can’t really
reserve any patterns of bits for your A’s and your B’s and your C’s. So we need them both
to co-exist, somehow, where sometimes this represents 65,
but other times it represents A. Yeah? Grace? GRACE: [INAUDIBLE] logic
in the individual program that knows how to read the file then? DAVID MALAN: Yeah. Why don’t we kick it to
the program to decide? So it’s not bad intuition to kind of
try to embed the information in the data itself. But it’s easier and,
daresay, more scalable to just leave it to the program
that you’re using to decide, am I looking at numbers,
or am I looking at letters? And to oversimplify, Microsoft Word,
when it sees a pattern of bits, presumably is going to
interpret that pattern of bits as letter, letter,
letter, letter, letter because, together, that gives you a
whole essay or document or whatnot. Meanwhile Excel might view that
same pattern of bits as numbers. But of course, Excel
can handle other things. An email program might
view it as letters. A calculator program
might view it as numbers. But of course, those programs
could use different patterns for different types of
data– numbers and letters. So it all is context dependent. And so Grace is spot on. It really just depends on the program. It is up to the program
to decide whether it’s looking at numbers or letters. Yeah? AUDIENCE: Does it have something to do
with the [INAUDIBLE] strings, binary [INAUDIBLE] that we find
in programming, maybe? DAVID MALAN: It does. Yeah, absolutely. In fact, we’ll see a little
bit of this tomorrow when we talk about programming, specifically. A lot of programming languages
actually put the burden on the programmer, the author of
the software, to decide in advance, do you want to represent
a character or a number with the following sequence of bits? And these are called data
types in programming languages. So it turns out, we
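To make the same-bits, different-interpretations point concrete, here is an illustrative sketch in Python, again not from the lecture itself:

```python
# The same two bytes, read two ways: as text by one "program,"
# as a number by another. The bits themselves never change.
data = bytes([65, 66])  # the patterns that ASCII calls A and B

as_text = data.decode("ascii")           # a word processor's view: "AB"
as_number = int.from_bytes(data, "big")  # a calculator's view: 65*256 + 66

print(as_text, as_number)
```

Only the context, that is, the program doing the reading, decides which interpretation is meant.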
leave it to the program. And it just has to decide how it’s going
to interpret these patterns of bits. All right, so if that’s the
case, we now have this scheme for encoding letters and numbers. What about colors? Let’s kind of step things up a notch. How could we represent colors? Yeah, Sarah? SARAH: [INAUDIBLE] similar string of
numbers, like, what is it, hexogonal– DAVID MALAN: Yeah. Hexadecimal. SARAH: Hexadecimal. DAVID MALAN: Yep. So we do have systems like that. And in fact, we’ll see this,
actually, in HTML later today. You can see this in Photoshop, actually. And in fact, I don’t know if I’m going
to be able to launch Photoshop on here in this account. Let me see. I’ll accept that. Oh, we got away with
it, this guest account. No, dammit. Wait, let’s see if we
can use it, anyway. OK we can. All right, this feature is free. So notice here I have the ability
to type in this number down here. I’m going to go ahead
and type in 000000. And notice what color I get– is black. And this is actually equivalent
to this here, 0, 0, 0. And these letters, you might
recall from yesteryear, RGB. It’s an old school expression,
but still has meaning. What’s RGB? AUDIENCE: Red, green blue. DAVID MALAN: Red, green, blue. So it turns out that you can
make any color of the rainbow by just mixing in some red, some green,
and blue– so different wavelengths of light. So let me go ahead and try that. Let me give a little
bit– and notice right now we’re at black because I
have 0 of each of those. So the absence of any of
those colors shall be black. And now let me give myself
a little bit of red. That didn’t really change things. And actually, let me
change this to black, so we can actually see a difference
here, so new and current. So let’s give it a little red. Definitely don’t notice the
difference on this projector. How about 10 of red? No. How about 100 of red? OK. So once we ramp it up to 100,
that’s a good amount of red. Let me try 200. Oh, that’s really red. And now 255, really red. How about 1,000? That’s a bug because you can
only count as high as 255. So here we’re already seeing
issues of representation. Photoshop, to represent the
amount of red in a program, is using how many bits, apparently? AUDIENCE: 8. DAVID MALAN: 8, yeah. So I keep asking numbers,
but I think I’m only asking the same three numbers all the
time, or maybe the same four numbers. 128, 256, 65,536, or 4 billion shall be
the answer to any math question today. So 8 bits is sufficient
because 2 to the 8 is 256. So indeed, it looks like Photoshop,
to represent the amount of red in the program, is using 8 bits
where, apparently, 0 means no red, and 255 means a lot of red. So let me try that. 255 gives me, indeed, what
looks like a lot of red. Now, if I change that back to
0, how about a lot of green? Ah. You see that’s how we
can represent green. And if we get away from that and do
255, we might represent a lot of blue in this way. And now we can really start to have fun. If we do a lot of red,
a lot of green and blue, we get white, as though you’re combining
all those different wavelengths of light. How about an even mix of 100, 100, 100? That’s, apparently,
how you represent gray. Anyone want to request a value? There’s not much
intellectual upside of this, but we’ll just try one more color. 50– so give me 50 red, 50
green, 50 blue– still gray. Even quantity seems to give us gray. So how about we ramp this up to 250? That’s an even. Still red. How about 100? Oh, there’s orange. 150– so that’s how you make
orange, by mixing in that. And what if we ramp this up? Now it’s a little more peach-like
on my screen, at least. That’s how you get pink. And we could do this all day long and
really not get anything out of it. So what’s the real takeaway here? RGB is really telling us
how much red do you want? How much green? How much blue? Each of these is 8-bit values. So we have a number and
a number and a number. And combining those numbers–
8 plus 8 plus 8 bits– gives me what I would
generally call 24-bit color. So if you’ve ever seen that, maybe
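Those three 8-bit values are exactly what a six-digit hex code like Photoshop’s packs together. An illustrative sketch in Python, where the function name is made up for the example:

```python
def rgb_to_hex(red, green, blue):
    # Pack three 8-bit values (0-255 each) into the six-digit
    # hexadecimal code that tools like Photoshop display.
    return f"{red:02x}{green:02x}{blue:02x}"

print(rgb_to_hex(0, 0, 0))        # 000000, black: no red, green, or blue
print(rgb_to_hex(255, 0, 0))      # ff0000, a lot of red
print(rgb_to_hex(100, 100, 100))  # 646464, an even mix: gray
print(rgb_to_hex(255, 255, 255))  # ffffff, white
```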
even in your Mac or PC– if you ever futz around your display Control
Panel or the equivalent, sometimes you’ll see, do you want
thousands of colors? Millions of colors? Or even bigger numbers still? That’s just the
user-friendly way of saying, do you want 16-bit color whereby
you’re using only 16 bits to represent the color of a dot on a screen? Do you want 24-bit
color, which means do you want to use 24 bits to
represent the color of each dot? Or perhaps even more still? Back in the day, when some
of you first got computers, how many bit color was there? 16 was actually pretty good. Games were starting to look
pretty nice in 16-bit color. Windows 3.1 was probably 16-bit color. Or, no, it was probably 8-bit color. So 8-bit, back in the day, but some of
you surely had older computers still. Even my first games were two colors
for which you only need one bit. And it wasn’t even black and white,
it was probably black and green, where green was white. And so that was
monochrome, which is really just a fancy way of saying 1-bit color. Or maybe this is the fancy
way of saying monochrome. So what’s the takeaway here? We can represent colors
simply by using numbers, for instance, 0 to 255, where
0 means very little of this, 255 means a lot of this. And so long as we humans
standardize the world of colors, also, as we did for letters, with
something like RGB– how much red, how much green, how much blue do you
want– we can mix those together. And now, in the context of graphical
programs or maybe web browsers, we can say, oh, this
pattern of 24 bits is meant to be represented as the
color of a dot on a screen. I shall interpret it as such and not
as a number or as a really big letter or as a sequence of three letters. So this gives us the ability to
represent the color of one dot, one so-called pixel, on your screen. And pixels are so small
you can’t really see them. But if we went up to the
TV, on the screen here, or if you have an older school TV
at home, if you go up close to it, you can probably see tiny,
little dots, essentially, pixels. That’s not an image. That’s just a dot. So what, then, is an image? How does a computer represent images? AUDIENCE: A sequence of pixels. DAVID MALAN: A sequence
of pixels, right. An image, at the end of the day,
especially old school comic books, is really just a sequence of
pixels, something like this. Whereby this is a 4 by 4 grid of pixels. And so long as I allocate 24 bits for
this color, 24 bits for this color, 24 bits for this color, and so
forth, I can represent a graphic. And in fact, let me see if I
can find one real fast. Actually, let’s see if we
can make one real fast. So if I draw a grid like this. Let’s see if I can allocate
enough– not to scale. But if I have a grid,
each of which allows me to store the color of that pixel in
the screen, you can imagine now using, let’s just say, 1-bit color. So here is that old monochrome
screen from yesteryear. The grid just means you
have rows and columns. And either I want to put a 1
or a 0 in each of these boxes. Well, how do you represent an image? Well, let me try something
kind of arbitrary. How about 1, 1, 1, 1, 1, 1–
no, not there– 1, 1, 1, 1, 1– this is not to scale. Dammit. OK, 1, 1, 1, 1, 1, 1. What did I draw? AUDIENCE: A circle. DAVID MALAN: It’s just
a filled-in circle. It was supposed to be a
smiley face, but then I realized I didn’t leave enough
room for the eyes or the nose. So this is a circle. This is how a computer
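A grid like that is easy to simulate. A sketch in Python, with a made-up 4 by 4 shape rather than the exact grid drawn on the board:

```python
# A tiny 1-bit "image": each cell is one pixel.
# 1 means lit (green, say); 0 means dark.
image = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]

for row in image:
    # Render each 1 as a filled cell and each 0 as an empty one.
    print("".join("#" if pixel else "." for pixel in row))
```

Run it, and a crude filled-in circle appears, four characters wide and four tall.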
would represent a circle. But it really is as simple as that. So if you think back to those green and
black screens, all that was going on was the computer had a pattern
of bits, but in this case, it was 1 bit per cell in this
grid of rows and columns. And if it were a 1, it should be green. And if it were a 0, it
should be black, or vice versa. Doesn’t matter, so long
as we’re consistent. So now this is an image. An image is a sequence of pixels. A pixel is just a
pattern of 0’s and 1’s. Those 0’s and 1’s are just
three different numbers. And those numbers are,
respectively, just patterns of bits. Notice the layering. And this, too, is going to be
a theme of today or tomorrow. Lowest level, which we won’t really
come back to beyond today, is the bits, the 0’s and 1’s. Now we’re at this point in the
story where we’ve forgotten about bits and letters and pixels. We’re just talking about images. What comes next? Well, what’s a video? How do you implement a
video as an engineer? Grace? GRACE: [INAUDIBLE]. I mean, it’s sequential images. DAVID MALAN: Yeah. It’s just sequential images. So the phone book doesn’t
quite behave in the same way, but if you remember those
little books from childhood where you flip the
pages really fast, you would actually see Mickey Mouse
or whatever moving as though there were actually a video in your hands. But really, all that’s
happening is you’re seeing an artist’s sequence of
pictures where Mickey in one picture looks like this, then the next one he
looks like this, this, this, this– until he’s waving at you. But that’s just because it’s 100
different pixel pictures where in each one, his hand is moving as
though it’s in a waving movement. And so that’s all a video is. A video that you might double click
or watch on YouTube or live really is just a sequence of images, moving
pictures, as they used to be called. Motion pictures are just pictures
that move, that fly by the screen. We humans typically expect at least
24 frames or pictures per second, and TV commonly has 30 frames per second. Beyond that, we don’t really
notice the difference. Although, if any of you
have– a lot of smart TVs these days have auto motion
plus or 60 Hertz or 120 Hertz. If you’ve ever noticed
that all of your TV shows on your nice, new
TV look like soap operas, like daytime soap operas where the
movement is just sort of unnaturally smooth– and yet, that is
more natural because it’s capturing more of the
image– that’s because you have this feature that manufacturers
decided would be better whereby even if a show or movie has
been shot 24 frames per second, they actually fill in the gaps. They interpolate what the scenes
might have looked like for those split seconds between those frames. And the result is what looks more
like a soap opera, which tends to be shot at 30 frames per second. So they’re just smoother,
which is not necessarily the director or the
photographer’s intention. So that’s all related there, too. So if a video is just a sequence of
pictures, well, to represent a circle, we need this many bits. If we want to represent a whole second
in a movie, we might need 24 times this many bits or 30
times this many bits if we want 24 pictures per
second or 30 frames per second. And this is why videos
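That multiplication is worth doing once. A back-of-the-envelope sketch in Python, where the 1280 by 720 resolution is an assumption for illustration and the video is uncompressed:

```python
# Uncompressed video: every frame stores every pixel's color.
# The 1280x720 resolution is an assumed example.
width, height = 1280, 720
bits_per_pixel = 24       # 24-bit color, as above
frames_per_second = 24    # film-style frame rate

bits_per_frame = width * height * bits_per_pixel
bits_per_second = bits_per_frame * frames_per_second
megabytes_per_second = bits_per_second / 8 / 1024 / 1024

print(round(megabytes_per_second, 1))  # about 63 MB for one single second
```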
are so damn large. There are so many individual pictures
in them, those bits need to be stored. And that, too, is a bit of a white lie. In reality, there are fancy algorithms
or processes for compressing things. So you don’t need one
true image per frame. But that’s actually a perfect segue
to the last of our ingredients, which is this one here. Algorithms. But where have we left things now? We started with bits, we went to numbers. We went to letters. We then went to colors or pixels. We then went to images. We then went to videos. And now we’ve pretty much
abstracted so far away, we don’t even need to care
about the 0’s and 1’s anymore. And indeed, when you walk out
of here today and tomorrow, you shouldn’t really care much
about the 0’s and 1’s that underlie any of your data because of
this feature known as abstraction. And indeed, this is one of the powerful
tools in a computer scientist’s toolkit. We’re just playing
with it verbally today, but indeed, when writing
software, abstraction is actually a feature
of programming languages that we’ll see a little
bit of tomorrow whereby you don’t need to know or care
how something is implemented. You can think of a computer
as just a black box that has certain functionality
built in because other humans before you have abstracted
away these lower-level details. Or more casually, you can think of it
as standing on the shoulders of people who came before you. Someone, years ago, figured
out how to store 0’s and 1’s. Someone after that figured out
how to store letters and pixels. But thereafter, we humans
can build on top of that work and implement things
like pictures and movies and yet other file formats, still. And so that general feature
of abstraction or layering is actually a powerful
ingredient when it comes to actually solving some problems. JP: David? DAVID MALAN: Yes. JP: [INAUDIBLE]. DAVID MALAN: Yes. Of course. JP: OK. DAVID MALAN: JP. JP: [INAUDIBLE] because I think
[INAUDIBLE] talking about that. So we’re back to the bits. That bit we saw there on the screen,
is it being used in several codes? So one bit’s being
used in several codes, like [INAUDIBLE], letter, number. Or is it being used once? And then, how are we to know to
use it as a bit? [INAUDIBLE]. DAVID MALAN: Good question– the latter. A bit is only used one time, so if the
picture that’s on the screen right now is just the colorful
version of this– So if this is a top-down view of a
platter inside of your hard drive– which, a platter is just
like an old school record. And on this platter are
whole bunches of bits that are either north,
south or south, north. So they’re not all 1’s. These are just actually–
do we have any other colors? No other colors here. So there’s all these little magnetic
particles, much like the Wooly Willy photograph I pulled up earlier. So if you create a Microsoft Word file,
the operating system, Windows, or Mac OS or Linux or whatever else, decides,
OK, that’s a pretty medium-sized file, this shall be your
resume.docx or some such file. If you then have an image like
reunion.jpg from your family reunion, the operating system
might decide, all right, it’s actually a pretty big image. You took it at really high resolution. These bits shall be used
for your reunion.jpg. JP: At the level of bits, if you
have a high-definition screen, right, every pixel you have,
you know, [INAUDIBLE]. DAVID MALAN: Indeed. JP: [INAUDIBLE]. DAVID MALAN: Absolutely. JP: Because the number
of bits is insanely high. DAVID MALAN: It is. It is. So on our screen alone– so
we can see this, actually. In Mac OS, and Windows has an
analog here, if I go to Display, I am currently in a
resolution called 720p, which means 720 pixels,
top to bottom, and 1280– this is implicit– 1,280 pixels across. Which means on my Mac screen here alone,
I have this many pixels, so 921,600, just on my screen alone. JP: But every pixel
has a different color. DAVID MALAN: Every pixel
has a different color. And if my computer is
in 24-bit color, that means I’m using this many
bits just to represent what you are seeing on my screen right now. So this is– 22 million
bits are being used just to display what’s on the screen. But that’s OK because
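That arithmetic, as a quick illustrative sketch in Python:

```python
# 1280x720 pixels at 24-bit color, as on the screen here.
pixels = 1280 * 720           # 921,600 pixels
bits = pixels * 24            # about 22 million bits
bytes_total = bits // 8       # a byte is 8 bits
kilobytes = bytes_total / 1024
megabytes = kilobytes / 1024

print(bits, bytes_total, round(megabytes, 1))
```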
let’s consider this. So if that’s bits– I didn’t
use this word formally before but a byte, which is the unit of
measure with which most humans speak, is just 8 bits. So to represent everything
on my screen right now is actually just about 2.8 million bytes. And how many bytes are
in, say, a kilobyte? This is 2,700 kilobytes. And if I divide it again by 1,024,
it takes roughly 2.6 megabytes of information, about 2.8 million bytes,
to store the colors of everything you see on my screen here, whether it’s
black and white or perfectly colorful. JP: How does it know that that
particular section is [INAUDIBLE]? DAVID MALAN: Good question. So the latter piece of your question
is answered by the operating system. One of the purposes in
life of Mac OS or Windows is to manage the space
on your hard drive. And essentially, all of the
bytes on your hard drive, much like all of the bytes in RAM,
the technology we discussed earlier, have addresses. So if you have a 4 terabyte hard
drive, which is actually– no, let’s call it a 4 gigabyte
hard drive, which is actually super small these days, certainly. But if you can only
store 4 gigabytes, you can think of each of the areas of this
hard drive– if each of these squares represents a byte, you can think of
this as being 0, 1, 2, 3, 4, 5, 6. You can imagine numbering all of
the bytes of space on a hard drive. What an operating system
has underneath the hood is a directory, many
directories, actually, that has at least two columns–
the file and its location. So resume.docx might have an
entry in this table, a.k.a. a directory, and it might be at
location 7, whereas reunion.jpg– this is a little farther to the right. So I’m going to say that
this is at location 117. I made that one up. So your operating system
literally has a table, not unlike what Microsoft Excel might
do or Google spreadsheets, that just maps file names to locations on disk. And in fact, this is a
good opportunity to raise one of tomorrow’s topics,
privacy and security. What does it mean when
you delete resume.docx? And better yet, suppose
it’s not resume.docx, suppose it’s your financial
information or passwords.docx. You decide, oh, I probably
don’t want this anymore. I don’t want someone seeing it. So you delete it. What does it mean in Mac OS or
Windows when you drag a file to the trash can or the recycle bin? AUDIENCE: It might just take
it out of the directory. DAVID MALAN: It just takes
it out of the directory. And can you recover the file, still? AUDIENCE: [INAUDIBLE]. DAVID MALAN: Yeah,
it’s in the trash can. So really, there’s a separate
table for what’s in the trash can. But suppose you’re smarter than that. You know that just dragging something
to the trash can doesn’t remove it until you empty the trash. It’s still actually there. So what happens when Mac OS
or Windows users actually empty the trash can or recycle bin? AUDIENCE: It deletes the location. DAVID MALAN: That’s all it does. When you delete a file, typically,
on a modern operating system, all you’re doing is that. You’re forgetting where the file is. But what have I not
changed on the white board? AUDIENCE: The disk. DAVID MALAN: Right. I have not touched
the magnetic particles on the hard drive, which is to
say, if you remember programs like Norton Utilities
or other such programs from yesteryear that claim to be able
to undelete files, how do they do that? Well, they scour your hard
drive looking for what looks like the remnants
of files like resume.docx. And if it finds them,
with high probability, it creates a new file with
those same bits inside of them. So this is a side effect of
computers remembering where things are by way of these directories. But deleting is usually
just a matter of forgetting, not securely changing those 0’s and 1’s
to random nonsense or all 1’s or all 0’s. So this was a bit of a
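The cheat-sheet idea can be modeled in a few lines of Python, with the file names and locations made up, as on the white board:

```python
# A toy "directory": the OS's cheat sheet mapping file names
# to locations on disk (the numbers are invented for the example).
directory = {"resume.docx": 7, "reunion.jpg": 117}

# "Emptying the trash" typically just forgets the location...
del directory["resume.docx"]

# ...while the underlying bits stay on the platters, untouched,
# which is exactly what undelete utilities go hunting for.
print(directory)  # {'reunion.jpg': 117}
```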
tangent from your question, but does that make sense as to how the
computer is storing that information? In short, the computer
has a little cheat sheet that remembers which bytes
are used for which files. Yeah. AUDIENCE: But over time, didn’t
you use your computer? [INAUDIBLE] the bytes [? started being ?]
organized, and the files will eventually be deleted? DAVID MALAN: They will
not so much get deleted. They will get overwritten or reused. Correct. You can’t just delete bits. You can’t remove– I mean, if you
opened the old school floppy disk and rubbed it, you would move
the magnetic particles off. That doesn’t, of course, happen
underneath the hood of your computer. But yes, you’ll eventually
overwrite files. And I’m oversimplifying. It turns out reunion.jpg
wouldn’t necessarily be all contiguous like this. Hard drives and SSDs get
fragmented, with some file bits over here, over here,
over here, over here. So really, you can think of
this as being like a list. Well, some of the bytes are at 117. Others start at 200. Others start at 350, and so forth. So there’s this whole notion
of linking files together just so that you can consume
the whole hard drive. However, if some of
you had PCs years ago, you might have had something like
Norton Utilities, one of whose features was to defragment a hard disk. And this was something
tech people would sometimes recommend– that you should actually
move around all the 0’s and 1’s so that all your files are,
in fact, next to each other for efficiency on the disk. The reality is these days disks
are so fast that this is not a good use of time. So if people are still telling
you to defragment your hard drive, they’re living in the
past, for the most part. Let the hard drive do the thinking. Don’t let utilities that you pay
for have to worry about it anymore. Used to be useful. Other questions? Yeah? AUDIENCE: [INAUDIBLE] would be 16-bit. Is that the reason for the
Excel sheet [INAUDIBLE]? DAVID MALAN: Yeah. So for years Excel would only
allow you to have 65,536 rows, after which you just had
to upgrade and pay for Microsoft Access, a database program. That was because they
were using 16-bit values. And I know this only
because in graduate school I was working with
really large data sets. And it was just convenient to be able
to load them into Excel sometimes. And I remember swearing under my breath
when I couldn’t load my really big data set because Excel wouldn’t support it. So they’ve since upgraded. They now use bigger
values for their rows. Yeah, David? DAVID: You said, when you talked
about 32-bits and [INAUDIBLE] 64-bits. But now machines have
[INAUDIBLE] 32-bit registers, 64-bit registers, and it seems like
they put them in blocks and [INAUDIBLE]. What’s that terminology
for and why’s it used? DAVID MALAN: Sure. So a register is really
the smallest usable chunk of memory inside of a computer. Those registers are typically
inside of the CPU itself. When we say, Intel inside,
that means that it’s an Intel CPU, central processing unit. And the CPU is the device
that, at the end of the day, does all of the thinking
in the computer. It does all of the arithmetic. And indeed, we didn’t
go down to this level. It turns out that all of the
operations we know on a computer really do boil down to addition
and subtraction and multiplication and just moving memory around. It really is that low level. So that’s what the CPU does. So when someone claims
that a computer has a 32-bit register or
64-bit register, that just means that when the CPU understands
operations like add or subtract or multiply, the value on the
left and the value on the right are each 32- or each 64-bits. So the bigger the register,
the bigger the math the computer can do all at once. So computers can still do
math on really big numbers, like 33-bit values or
65-bit values, but they need to use multiple numbers
or multiple sets of bits to represent bigger numbers. The bigger a bite a computer can take
out of a problem, though, typically, the better. So 64-bit registers or a 64-bit CPU can
generally do more work per unit of time than a 32-bit computer. So more is better. And we’re kind of in that stage of life
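One way to make register width concrete is to mimic it in Python, whose integers are otherwise arbitrarily large. Masking to 32 bits here stands in for a real register; the function names are illustrative:

```python
MASK32 = 0xFFFFFFFF   # the largest value 32 bits can hold

def add32(a, b):
    """Add the way a lone 32-bit register would: results wrap around."""
    return (a + b) & MASK32

print(add32(0xFFFFFFFF, 1))   # 0, because the true answer needs a 33rd bit

# To handle bigger numbers, use two registers: a low word and a high
# word, with a carry between them -- the "multiple sets of bits" idea.
def add64_with_32bit_registers(a, b):
    low = (a & MASK32) + (b & MASK32)
    carry = low >> 32
    high = (a >> 32) + (b >> 32) + carry
    return ((high & MASK32) << 32) | (low & MASK32)

print(add64_with_32bit_registers(0xFFFFFFFF, 1))   # 4294967296
```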
where 64-bit machines are all the rage. And most of us in this room, if not
everyone, have 64-bit computers. And I don’t know if it’s available. Mac OS doesn’t say it here. Does it say somewhere? If I poked around
enough– let me try this. OK. Here we go. So this is an esoteric command that
most people would never execute. But I just asked my computer via this
text-based interface for information about it. And the fact that it says
x86_64 here and here, that means this is a 64-bit computer. All Macs are this way. PCs– some 32-bit machines still exist. And the symptoms you would run
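For the curious, Python can report the same information; these are analogous checks, not the exact command run in class:

```python
import platform
import struct

print(platform.machine())        # e.g. 'x86_64' on a 64-bit Intel machine
print(struct.calcsize("P") * 8)  # pointer width in bits: 64 on a 64-bit build
```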
into is if you try and install a new program on old hardware,
you might not be able to. For instance, Windows 10 maybe
only works on 64-bit machines. I’m not sure. I’d have to Google that. But newer software often no
longer supports older hardware. So that’s the only time you
would really care these days. Felicia? FELICIA: [INAUDIBLE] like
the comment in memory, like if the program
is running in memory. That’s a speed factor. You’re referencing that because? DAVID MALAN: Can you be more specific? FELICIA: [INAUDIBLE] program is
like [INAUDIBLE] intelligence tools like [INAUDIBLE]. [? Oh, ?] it runs in memory, as if
it’s a [INAUDIBLE] feature that– DAVID MALAN: Can you elaborate on what
those programs are supposed to do, and I can try to infer what they mean? FELICIA: [INAUDIBLE] do a calculation,
drawing graphs, [INAUDIBLE]. DAVID MALAN: I’d have to look closer. It sounds like stupid
marketing speak, to me. FELICIA: OK. DAVID MALAN: I think
that “in memory” is meaningless. That’s where computers store all
of their data while being used. I’m probably doing them
a slight disservice. AUDIENCE: Well, I use
them, and basically, it’s a difference that it’s an analytics
tool that is pre-loaded in that case. So rather than querying against a
database, it’s done its aggregation. DAVID MALAN: Oh, OK. AUDIENCE: And as you’re
doing analytics it gets– it’s pre-aggregated, pre-analyzed data. You’re exploring, but you’re not
hitting against the database. DAVID MALAN: OK. So that’s a little more illuminating. The reality is– and we’ll really
talk about this later today, but databases tend to
be relatively slow. But it’s where you store data
long term, much like for files, get stored on a hard disk. But in so far as they’re slow, if you do
want to do a lot of real-time analysis, it’s better to read all of
the data from the database where it could be living in disk
space, physical disk space, into RAM. But even then, it’s not
necessarily the database. You can have databases that, themselves,
exist entirely in memory or in RAM, not on disk. So that’s probably what they’re
pointing to and probably how they’re justifying charging you
more for the additional RAM that’s requisite for that. So before I impugn some
software I’ve never used– but it sounds like a little bit
of marketing speak. I would hope it would be in memory. OK, other questions? All right. Well, let’s address the last of
these ingredients, algorithms, and make this a little more concrete. We’ve talked, daresay, ad
nauseam about inputs and outputs and how we might
represent them and how we might abstract from the
lowest-level details to the highest-level
incarnations of them. But what’s important when it
comes to things like algorithms? Well, I promised that
I would take out this. And this is a problem that most
of us don’t really have anymore, looking up someone’s phone
number in a device like this. These days, you take out your phone. But even the phone is just kind of the
digital incarnation of this because on Android or iOS, if you
go to the contacts app, everything is still
sorted alphabetically. So even though what we’re
about to do is very physical, turns out that even digitally,
the problem is the same. It’s just implemented not with
physical hands but with software. So more on that in a bit. So this is a phone book. Let’s assume it’s all white pages. And let’s assume I’m looking
for someone like Mike Smith. If I want to look up Mike
Smith in the phone book, I could open up the phone book,
look on page 1, look down. I’m in the A section, so
Mike’s obviously not here. So I keep turning the page,
turning the page, turning the page. So these are step-by-step
instructions for solving this problem. I’m looking at– I’m supposed to be
looking at each page for Mike Smith until I find him. Is this algorithm, these
step-by-step instructions, correct for finding Mike Smith? AUDIENCE: Yes. DAVID MALAN: Yeah. If Mike is in here, I will find him. But what’s the obvious objection? It’s just slow. This is silly. So in grade school, we learned to count by twos,
like 2, 4, 6, 8– so this algorithm is twice as
fast– though my demonstration is inaccurate because I’m not really
doing two at a time. But if I did this accurately,
it’s twice as fast. Is it correct? Yes? I heard a yes. I heard a no. It’s not precise. How so? AUDIENCE: [INAUDIBLE]. DAVID MALAN: Yeah. So it’s buggy, so to speak. It has a potential mistake
because what if Mike just happens to be sandwiched
between two pages as I fly by? So is this irrecoverable,
or can I fix this? Or do I have to go back to the
one page at a time approach? What’s a fix for this algorithm if
I’m flying through the phone book two pages at a time, and I don’t
want to risk missing Mike altogether? AUDIENCE: [INAUDIBLE]. DAVID MALAN: Yeah. I know it’s alphabetical, so maybe my
heuristic is, if I hit the T section, I’d better double back at least one page
because I’ve clearly gone past Mike. Or maybe he wasn’t there, but I’d
better go back one page at a time. So you might fly through
the problem twice as fast, but you might have to
double back a little. So overall, the net effect is
certainly a faster algorithm, but it’s not quite as simple
as just two pages at a time. But none of us are going to
do either of these algorithms. What’s a typical human
going to do 10 years ago? Or recently, I suppose? AUDIENCE: Look for the first letter. DAVID MALAN: Look for the first letter. OK. But I don’t know where
the first letter is. AUDIENCE: [INAUDIBLE]. DAVID MALAN: 2/3 of the way through. All right. So here we are. I didn’t go quite far enough. I’m in the M section. So what do I do? What comes next? I start turning one page at a time? AUDIENCE: [INAUDIBLE]. DAVID MALAN: Yeah, a second letter. But I’m in the M section, so
I’m still looking for Smith, S. AUDIENCE: Flop over, like– DAVID MALAN: Yeah, flop
over like this much. So it’s kind of the same problem. A moment ago I was here. I went roughly 2/3 or
halfway, and I ended up here. But now I know, by
process of elimination, Mike has got to be in this half of
the phone book, if he’s in here, because he’s an S, and
this is the M page. So both literally and figuratively,
we can tear this problem in half once. So close. Almost recycle. So now we have fundamentally
the same problem, but instead of it being 1,000
pages, now it’s only 500 pages. But it’s the same problem. How do I find Mike Smith? I don’t want to devolve back
to the original algorithm. I, again, want to go, let’s
say, roughly to the middle. Keep it simple. And now, oh. I went a little too far. Now I’m in the T section. But again, I can tear
the problem in half, going to a problem that’s now 250 pages,
throw that half of the problem away. Problem is still the same. And what’s sort of amazing
about this approach is that the algorithm is not changing. The input is changing. The input’s getting halved and
halved and halved and halved. So the problem is getting smaller
and smaller until, eventually, if I repeat this process,
I’m left, theoretically, with just one page on which
Mike either is or isn’t. And I can decide whether to call him
or give up, if he’s not actually there. So if this did, indeed, have
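The page-halving strategy being demonstrated here is what programmers call binary search. A minimal sketch in Python (the names here are illustrative, not anything shown in class):

```python
def find_page(pages, name):
    """Binary search an alphabetized phone book; return (page, steps)."""
    low, high = 0, len(pages) - 1
    steps = 0
    while low <= high:
        steps += 1
        middle = (low + high) // 2   # open to roughly the middle
        if pages[middle] == name:
            return middle, steps     # found him
        elif pages[middle] < name:
            low = middle + 1         # tear away the left half
        else:
            high = middle - 1        # tear away the right half
    return None, steps               # he's just not in the book

# A 1,000-page book takes at most about log2(1,000), i.e. 10, looks.
book = sorted(f"person{i:04d}" for i in range(1000))
page, steps = find_page(book, "person0742")
print(page, steps)   # finds the right page in no more than 10 steps
```

Note that the input must already be sorted, just as a phone book is alphabetized; that ordering is what earns the logarithmic step count.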
1,000 pages, that first algorithm, one page at a time, how many steps might
it have taken me to find Mike Smith? Or anyone, for that matter? What’s the upper bound? In the worst case, how many
times can I turn the page to look for someone who
may or may not be there? 1,000, right? Maybe it’s not Smith. It’s someone with the last
name of Z. And so I only really know if I go all the way to the end. It might be 1,000 pages. So that’s something. How about, though, the second
algorithm, two pages at a time? AUDIENCE: 500. DAVID MALAN: Yeah. It’s like 500, 501 just in case I
have to double back one page to see if someone was in between two pages. So that’s good. That’s half as big. But what about if the
phone book is 1,000 pages, and I use that third approach where
I use Grace’s intuition of sort of splitting it and
splitting it or, if you will, dividing and conquering the
problem again and again? How many steps does that take? AUDIENCE: 16. DAVID MALAN: 16? Even fewer if it’s 1,000,
but that order of magnitude– a little more– 10 or so. You can go from 1,000 to
500 to 250 to 125 to 63-ish. We have to deal with the
rounding errors a little bit. But we get to one page
much, much faster. So if we were to plot this– and
we can reuse this space here. If we were to plot the relationship
among all of these numbers, let’s consider where we’re at. So we don’t have to worry
too much about numbers. Let’s just worry about shapes. If this is an xy plot, and
this is the size of my problem that a computer scientist
would usually describe as n– n is the number of pages,
the size of the input, whatever your unit of measure is– and then
this axis is time– number of seconds, number of page turns, however
you want to count things up– the first algorithm has
a very linear relationship whereby, if there’s more
pages in the phone book, it’s going to take me more seconds. Or put another way, if Verizon,
or whoever the phone book person is, doubles the number of pages–
instead of it being 1,000 pages here and ending up here, if
we have 2,000 pages, it’s going to double the amount of time. This isn’t quite to scale, but this
would have intersected twice as high on the chart. Meanwhile, the second algorithm,
where I go two pages at a time, still has a linear relationship, but
it’s faster, so it’s a little lower. In other words, I’m going to
call this on the order of n. This is a computer science
notation for on the order of or the size of the problem. Whereas this one, I’m going
to call order of n over 2. And I’m cheating a little bit. We normally wouldn’t
say over 2, even here. But the shape is fundamentally the same. It’s still a straight line. But if there’s 1,000 pages, my first
algorithm might take 1,000 seconds. But my second algorithm is only
going to take 500– and again, not quite to scale. Otherwise, this would
be split perfectly. But that third algorithm is
kind of an interesting beast because the bigger the problem gets,
the more pages in the phone book, the bigger my bytes automatically gets. The first two algorithms, I’m always
taking a one-page byte or two-page byte out of it. But the third algorithm
is kind of impressive. It takes 500-page bytes, then a little
smaller, but still 250-page bytes, then 125-page bytes, which is
certainly bigger than 1 and 2. So what might the shape
of that graph look like? A straight line, but much lower? It’s more of a curve. And if you remember
your logarithms, it’s something that looks a little like this. It never gets perfectly flat. It’s always increasing, but it
increases ever so slowly over time. And you can appreciate
this in the following way. So 1, we already know that
if you have 1,000 pages, this third algorithm
was the fastest so far. But this algorithm is way more
powerful and impressive when you think about even bigger inputs. Suppose that Verizon had a
4 billion page phone book– promised we’ll stick with these same
numbers– so 4 billion page phone book. With that first algorithm,
one page at a time, how many pages might I turn
before concluding that someone is in or not in the phone book? 4 billion, right? That person might be
at the very end, and I might have to turn all 4 billion pages. Second algorithm, 2 billion because
I’m going two pages at a time, plus 1, so 2 billion and 1 in
case I have to double back. That’s faster. That’s twice as fast. But how many times can
you divide a 4 billion page phone book in
half, in half, in half, until you’re left with just one page? Only 32. And that’s kind of amazing. You start with a 4 billion page problem. And those first two algorithms
take 4 billion or 2 billion steps, give or take, worst case. By contrast, this fairly intuitive thing
that we’ve been doing for 20, 30, 40, 50 years is so much more
powerful out of the box because the problem can only be
divided in half 32 times before you actually get
to the page in question. And that’s why this line is so low. Even as the problem gets
massive, it’s still not going to take you all that many steps. Conversely, if Verizon
doubles the number of pages next year, from 1,000 to 2,000,
each of those first two algorithms takes twice as long, either
2,000 steps or a full 1,000 steps for a 2,000 page problem. By contrast, if Verizon doubles
the size of the phone book from 1,000 to 2,000
pages, and previously it took 10 steps to find
Mike Smith, how many steps does it take if Verizon doubles
the phone book from 1,000 to 2,000 pages if we use the third algorithm? AUDIENCE: 11. DAVID MALAN: 11. And that’s what’s really powerful. And so, indeed, one of the exciting
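The arithmetic behind those answers is easy to check. A quick sketch in Python (the function names are illustrative, matching the three approaches just described):

```python
import math

def one_page_at_a_time(n):
    return n                 # worst case: turn every page

def two_pages_at_a_time(n):
    return n // 2 + 1        # plus one step of doubling back

def divide_and_conquer(n):
    return math.ceil(math.log2(n))   # halvings until one page is left

n = 4_000_000_000
print(one_page_at_a_time(n))    # 4000000000
print(two_pages_at_a_time(n))   # 2000000001
print(divide_and_conquer(n))    # 32

# And doubling the book barely matters to the third algorithm:
print(divide_and_conquer(1000), divide_and_conquer(2000))   # 10 11
```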
takeaways of computer science is that, even though it, by
the nature of the hardware devices with which
we’re surrounded, might seem very unfamiliar, if not a little
intimidating, the ideas that engineers and, specifically, programmers
bring to bear on problems are often not that dissimilar
to intuition we’ve long had. It’s just a matter of
applying that same intuition to bigger and bigger problems. We don’t really have 4 billion page
phone book problems, necessarily. But Google certainly has a 4 billion
page web page search problem– or trillions of pages these
days, or financial transactions in other companies, or logs in
some data analytics companies. And so there are so many
applications of big data, so to speak, whereby
getting the right algorithm, the right middle ingredient
here, really, really, really has an impact on the performance. And in fact, your
question of hardware, too, is sort of a different
view of the problem. Rather than focus on the algorithms,
they were focusing on the hardware. And simply having more RAM, presumably,
with which to query the data means things can operate faster
because of that trade off between cost and speed and size of memory. So these ideas are recurring and
they really aren’t that dissimilar. So there is one catch here. Unfortunately, computers only really
understand what you tell them. They’re fairly dumb devices. They only know what we’ve
trained them to know. They can’t just infer
or fill in the blanks. They can only do literally
what they’ve been told. And so a useful way of
seeing that is the following. Might someone be comfy volunteering
and coming up front here? You have to be comfortable appearing
on video and, in turn, on the internet. But no dumb questions here. We’re all friends. Just need someone to come
look at what’s on my screen without anyone else seeing
or looking behind you. I forgot about that one. No one look behind. I need one volunteer who
didn’t look behind already. One volunteer. One volunteer. There’s a room full of 25 people. No one is volunteering. Who wants extra lunch? OK, Grace, come on up. Thank you. All right. So Grace is going to come on down. And honor system– no
one look at that TV because I’m not sure how to turn it
off without physically unplugging it at the moment. And let me go ahead and do this. We’re going to give
everyone else in the room, if you don’t mind helping out for just a
moment– can you just pass those around on the left. And I’ll pass this around on the right. Everyone just take one sheet of paper. All right. OK. There we go. So Grace, if you’d like
to come over here– Grace, now, is going to be,
we’ll say, the programmer. And everyone else in the
room should be the computer, playing the role of computer,
so much so that you’re only allowed to do what you’re told to do. You can’t make assumptions. And so any abstractions
that you might have come in the room with this
morning can’t necessarily be leveraged because you can’t ask
any questions or for clarifications. But we’re going to see just how well
Grace can communicate or can program the room– such that she’s going
to give you some instructions, you’re going to do what she says
with respect to your pencil or pen and the piece of paper, and we’ll see
if you can draw exactly what Grace sees before herself here. So if you could tell people
how to draw this picture. And let me hide it in this folder. OK. All right. Execute. GRACE: You’re going to draw four
shapes on a horizontal line. Don’t draw a line– horizontally in
the middle third of your page, which should be landscape. The farthest left shape is a square that
is about 1/3 the height of the page. And it’s in the middle between
the top and the bottom. Directly to the right of that, in
the second quarter of the page, from left to right, is a circle. Then there is a diagonal
line in the third quarter of the page that is from
approximately the bottom of the circle to the top of the circle, same
height as the square and the circle, and then a triangle that is also
approximately the same height as those shapes in the final,
right-hand quarter of the page. DAVID MALAN: All right. End scene. Thank you. GRACE: OK. DAVID MALAN: And now let me come around. And you’re welcome to return. I’ll come around to everyone else just
so we can project some answers here. And so one of the takeaways here, to set
the stage for what we’ll do tomorrow, especially when we translate
computational thinking to programming, is just how much in the way of
detail we humans take for granted, and how a programmer,
an engineer, really has to think through
details at the lowest level so that what appears on the screen
ultimately, or what the software does, is actually what’s intended but is
expressed in as clear a way as possible. If I can steal some of these
sheets– I don’t need to grab all. Let me just grab a few
from each side, over here. Thank you. Over here. OK. Very good. Thank you. Thank you. Sure. Thank you. OK. Sorry. There you go. OK. That’s OK. Let me grab a couple
from the other side. All right. OK. May I reach here? Thank you. All right. Wonderful. So we have a nice variety of
interpretations here, if you will. And let’s take a look at what
some of them came out to be. So here we have this one here. All right. Quite a few of you had
something that looked like that. Indeed, here’s another such drawing. And notice, actually, they
differ only in so far as which way the line goes, if you can see. Then we had some other
interpretations here. This one had it touching. So that’s a little different,
but a similar angle. This one I like to see, so
definitely a little different. The line is going the
right way, but it’s inside. This one was sort of an
undo, perhaps, over there. This one is cute in some way, over here. And then this one, too,
was a little different. So reasonable people
will certainly disagree. It is perhaps one of the takeaways here. What Grace was actually
drawing was this. So it looks like just over
a majority did indeed get this right. But you can notice the sort
of ambiguity sometimes. And she did take some
liberties, certainly, like assuming that we all know
what a square is, which is fine. But think about a square. What is a square? That’s kind of an abstraction for a
lower-level implementation of what really is just four lines– four
lines that are all connected and form right angles with each other. But no one would say, draw four lines,
each at a right angle and equidistant. We just say, draw a square. And so that is an example
of an abstraction that’s useful because, otherwise, you get
lost in the weeds, so to speak, when trying to communicate. But it does leave possible
some ambiguities, as we saw. The line going from
bottom left to top right was misinterpreted in
a couple of places. The triangle could,
perhaps, have been rotated. I’m not sure if you specified
if the flat edge was on the bottom, though most people
just seem to assume as much. But that was not terribly precise. And indeed, it’s no
coincidence, perhaps, that some of the earliest programming
environments– if any of you have heard of Logo, for instance–
were sort of graphical in nature. You might write textual
commands, but all you could do is tell a turtle in one
version of the software’s case, to go up, down, left, or
right and it could put down a marker or a paintbrush, essentially. And if you turned on the paintbrush,
it could actually draw a line as it moved around the screen. And so in this way, you could
imagine that Grace could have really kind of belabored the point here. And she could have said something like,
OK, put the paper in landscape mode. Lift up your pen. Hold it over the left third of the page. Touch pen to paper. Move it down 2 inches. Move it right 2 inches. Move it up 2 inches. Move it left 2 inches. And with, maybe, higher
probability, we would have all been able to draw
it perfectly the same. But we would lack, then,
the sort of mental model for what it is we’re doing. And so these abstractions both help,
as in the case of draw a square, but might potentially
hurt if you’re just trying to communicate to a
higher, sort of, form device. And so if you think about maybe some
of our more modern technology, iPhones and Androids and things
like Siri and Cortana and the like– those are trying to be
more human-like, so that you could say, hey, Siri, draw me a square. Whereas back in the day,
or even in this room, we might need to be ever more precise
as to what that actually means. Let’s try just one other example,
flipping the tables this time if we could, whereby we need one more
brave volunteer to join me up here. Everyone’s really busy checking
their mail now or something. OK. Come on down. And we have Griff. Come on down. All right. So Griff, the only rule here
is you take the pen this time, face only this direction without
looking at any of the three screens that are about to spoil what this is. And everyone else here is going
to play the role of programmer. And we’ll sort of round robin, take
one instruction from each person, see if we can’t learn a little
something from the previous example as to precision, while still making it
possible for Griff to get this right. All right. So don’t look anywhere
other than straight ahead. GRIFF: OK. DAVID MALAN: All right. And we’re going to
give you this one here. All right. Would someone in the
audience– not Griff, who should only be looking straight
ahead– like to propose how to begin? Yes? Owen? OWEN: Draw a circle near
the top of the board. DAVID MALAN: OK. Someone else? Yes? AUDIENCE: Draw a straight line
from the bottom tip of the circle to almost the middle of the board. DAVID MALAN: Straight line to
almost the middle the board. Good. AUDIENCE: I think that’s right. DAVID MALAN: Oh, a little
longer, little longer. AUDIENCE: To 3/4 of the board. DAVID MALAN: 3/4 of the board. There we go. OK. Someone else? Someone else? Yeah, Dan? DAN: From the end of that line, draw
down, angled slightly to the left, like, let’s say, a 15 degree angle– AUDIENCE: 7 o’clock. DAN: Yeah, thank you, toward 7 o’clock. DAVID MALAN: OK. 7 o’clock. Someone else? AUDIENCE: Put an identical line
to the top of your original line that you drew– yep, an identical
line also at that angle. DAVID MALAN: Nice. A little copy, paste there. What’s next? AUDIENCE: Draw another
line from where you just ended back to close to the end
of [INAUDIBLE] straight line that’s coming down. AUDIENCE: Yeah. DAVID MALAN: Seems the computer is able
to ask some questions here, but OK. All right. Almost there. What’s last? Or next to last? Anyone? Yeah, Dan again? DAN: From the point in the circle
where the second line goes out, draw that same angle,
but going to the right. So the first line will
go to about 5 o’clock, and then it’ll be another angle. The point, the two lines intersecting
with a circle from there, put your pen there. And now draw a short
line towards 5 o’clock. A little bit longer. Keep going. Yeah. Keep going there. And now, from the end
of that point, angle it a little bit to the right
again, towards, say, 4 o’clock. Yeah. DAVID MALAN: All right. Almost there. And I forgot one piece, one more step. One step remains, pun sort of intended. OK. AUDIENCE: Now you do the clock. Go back 2/3 of the way
down the long line. Draw a line to, I guess, 4:30. Half the distance of that big line. DAVID MALAN: Repeat. AUDIENCE: Start there. DAVID MALAN: Start there. AUDIENCE: Draw a line
that’s as long as half [INAUDIBLE] of that line toward 4:30. [INAUDIBLE] DAVID MALAN: And? Just that? AUDIENCE: Oh, and then another line
from there till 5:00 the same length. DAVID MALAN: All right. And Griff, what do you
appear to have drawn? GRIFF: A person walking. DAVID MALAN: A person walking. All right, how about a
round of applause for Griff. Thank you. And most of us, upon first glance,
would describe this as what? Draw a stick figure. And what’s interesting
about this is that, by contrast, whereas the square and
the circle and the triangle on the line all have, perhaps, a more obvious
semantic meaning, draw a stick figure is probably not nearly
precise enough for anyone to have recreated precisely this
stick figure that I just so happened to find on Google Images. There’s enough sort of kinks in his
legs and his arms at certain angles that much more precision
was, indeed, necessary. And what’s interesting, too,
is that on a few occasions, we were sort of struggling to orient
Griff as to where he should go. And so it felt like there
was this opportunity even to add some metadata to the picture,
whereby, maybe, every time we had him put his pen down, we could
say, define that point as A or B or just some way of verbally
back referencing it later. And indeed, we’ll see a little bit
of this tomorrow with programming. It’s actually quite useful to declare
things like variables, placeholders, so that you can reset yourself
to some known past position. And we were using far more words to
sort of reorient Griff, when, had we simply put some bread crumbs,
defined some variables, so to speak, we could have jumped
right back to that point.
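That breadcrumbs idea can be sketched in Python: define a name for a position once, and a single word later resets you to it. The names and coordinates below are invented for illustration:

```python
points = {}        # our breadcrumbs: names mapped to positions
pen = (0, 0)       # where the pen currently is

def mark(name, position):
    """Define a variable: remember a position under a short name."""
    points[name] = position

def goto(name):
    """Reset to a previously named position, no re-describing needed."""
    return points[name]

# "Define that point as A" ... wander off drawing a leg ...
mark("A", (3, 7))
pen = (10, 2)
# ... then one word brings us right back.
pen = goto("A")
print(pen)   # (3, 7)
```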