Misunderstanding Computers

Why do we insist on seeing the computer as a magic box for controlling other people?
Why do we want so much to control others when we won't control ourselves?

Computer memory is just fancy paper, CPUs are just fancy pens with fancy erasers, and the network is just a fancy backyard fence.

(original post -- defining computers site)

Sunday, June 9, 2013

Classes of Machines, Revisited

I think I've found a place to get some traction.

I will have to write under the assumption that I will be misunderstood. There is no way to avoid that, however, and keeping silent is no longer an option.

Instead of four levels of complexity, I'm going to go for just two. There are two classes of machines:
  • Machines that can be understood, and
  • Machines that cannot be understood.

Now there is a nice division, and a proper binary partition, I think. Except, it's semantically incomplete. Understood by whom? Is it
  • Machines I can understand, and
  • Machines I don't understand.
Or is it
  • Machines that some human somewhere can understand, eventually, and
  • Machines that no mortal human anywhere could ever understand?
Okay, it's neither of the above. Not only have I moved the boundaries of the partition, but I have failed to make proper partitions in either of the latter two. Let's try this:

There are machines that a human might reasonably expect to understand in a lifetime of attempts to characterize them, and there are machines that would take longer to understand than any human is either willing or able to devote the time to understanding.

Now the boundary is fuzzy, but it provides a better working basis for discussion.

The next problem is to try to characterize the boundary.

Whoops. There are two ways to cut this boundary. One is the boundary between real machines and ideal machines:
  • A human might understand an ideal machine.
  • A human can never fully describe and understand a real machine.
This is an important division, and it has serious implications in the patent office as well as in the laboratory. It's a hard fact for proud humans, whether engineers, sales crew, customers, managers, bureaucrats, or monarchs. It's a bitter pill to swallow.

Fortunately, we can understand many real machines sufficiently for practical purposes. That is, we can build understandable models of ideal machines that match real machines closely enough for many practical purposes.

That makes the pill a little easier to get down. Unfortunately for patent examiners, judges and juries, plaintiffs and defendants, the question of which practical purposes count remains open for any specific machine.

There is another division, somewhat orthogonal to the division between engineering specifications and mathematical models on the one hand, and actual machines on the other.

It applies to our engineering models, and it projects onto the real machines modelled thereby.

Again, the boundary is a bit fuzzy, but it involves an arcane device from computer science called a stack.

A stack is a place to remember things. We push facts onto the stack, do some work that might make us forget those facts, and then come back to the stack. The "last-in, first-out" nature of the stack helps us keep our work organized and flowing. Until we find that something in our work alters a fact we have buried down the stack somewhere.

So it's useful to have another stack, and we can shift facts from one stack to the other, keeping them in order, and then shift them back when we are done.
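The two ideas above can be sketched in a few lines of Python, using plain lists as stacks. (The "fact" strings and variable names here are illustrative, not from the original post.)

```python
# A stack is a place to remember things: push facts on, and the
# last one in is the first one back out.
stack = []
stack.append("fact A")   # push
stack.append("fact B")   # push
top = stack.pop()        # pop: "fact B" comes back before "fact A"

# With a second stack, we can shift facts over (which reverses
# their order) and shift them back (which restores it).
other = []
stack.append("fact B")
while stack:
    other.append(stack.pop())   # other now holds the facts newest-first
while other:
    stack.append(other.pop())   # original order restored
```

Note that a single shift reverses the order of the facts; it takes the round trip to put them back the way they were, which is part of why shifting too much between stacks gets hard to follow.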

Some people don't like stacks. Too many constraints. It's easier to have a pigeon-hole rack of boxes to keep those pesky facts in, and then you can just grab whatever facts you need when you need them. Pigeon-hole racks are useful for many things, like post-office boxes and such. But they really don't provide a basis for remembering what you need to work on next in solving a problem, or in maintaining control of a machine. Stacks provide that organization.
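A pigeon-hole rack, by contrast, is random-access storage: in Python terms, something like a dict. This small sketch (box labels invented for illustration) shows the appeal and the problem at once. Any fact is reachable at any moment, but nothing in the structure tells you what to work on next.

```python
# A pigeon-hole rack of boxes: grab whatever fact you need,
# whenever you need it.
rack = {"box 1": "fact C", "box 3": "fact A", "box 7": "fact B"}
fact = rack["box 7"]   # direct access, no pushing or popping

# Unlike a stack, the rack imposes no order of work: after reading
# "box 7" there is no structural hint about which box comes next.
```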

This is the best basis for the partition that I know of:
  • Machines that only have to track a few facts (operating states) can usually be understood. But they aren't very flexible. Think of a simple light switch.
  • Machines that have a single stack to track facts can be described and understood in most cases. The stack provides our basis for understanding. If we or the machines lose track of what we are doing, we can go back to the stack to remind us.
  • Machines that have two (or more) stacks are a bit trickier, but still are generally within reach, as long as we don't end up shifting too many things from one stack to another.
  • Machines that keep too many facts (states) randomly accessible are easy to lose control of, easy to think we understand when we don't.
And the partition between machines we can understand and machines we can't falls between the last two classes.
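The first two classes above can be made concrete in a short sketch, assuming Python and invented names throughout. The light switch is a machine with a few operating states and no stack; the second function is a machine that needs a single stack, because no fixed set of states can check arbitrarily deep nesting.

```python
# Class 1: a few operating states, no stack. A light switch.
def toggle(state):
    return "off" if state == "on" else "on"

# Class 2: a single stack makes the machine describable but far
# more capable -- here, checking balanced parentheses, a job that
# a fixed number of states alone cannot do for arbitrary depth.
def balanced(text):
    stack = []
    for ch in text:
        if ch == "(":
            stack.append(ch)          # remember the open paren
        elif ch == ")":
            if not stack:             # a close with nothing to match
                return False
            stack.pop()               # matched; forget it
    return not stack                  # balanced only if nothing is left over
```

If the machine loses track of where it is, the stack itself says what remains to be matched; that is the "basis for understanding" in the second class.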

Unfortunately, human language, when analyzed mathematically, falls into the latter class. So does pretty much every machine, tool, or system with enough flexibility to be useful. Animals, also, when we try to analyze them in some methodical way, fall into the last class. Humans? Of course we fall into that last class. Simple feels free, but we quickly find simple too confining. Free of cares is not freedom.

The tools we call computers? Well, that's a good topic for discussion and consideration. Later.