Misunderstanding Computers

Why do we insist on seeing the computer as a magic box for controlling other people?
Why do we want so much to control others when we won't control ourselves?

Computer memory is just fancy paper, CPUs are just fancy pens with fancy erasers, and the network is just a fancy backyard fence.

(original post -- defining computers site)

Sunday, June 9, 2013

Classes of Machines, Revisited

I think I've found a place to get some traction.

Although I will have to write under the assumption that I will be misunderstood. There is no way to avoid that, however, and keeping silent is not an option any more.

Instead of four levels of complexity, I'm going to go for just two. There are two classes of machines:
  • Machines that can be understood, and
  • Machines that cannot be understood.

Now there is a nice division, and a proper binary partition, I think. Except, it's semantically incomplete. Understood by whom? Is it
  • Machines I can understand, and
  • Machines I don't understand.
Or is it
  • Machines that some human somewhere can understand, eventually, and
  • Machines that no mortal human anywhere could ever understand?
Okay, it's neither of the above. Not only have I moved the boundaries of the partition, but I have failed to make proper partitions in either of the latter two. Let's try this:

There are machines that a human might reasonably expect to understand in a lifetime of attempts to characterize them, and there are machines that would take longer to understand than any human is either willing or able to devote the time to understanding.

Now the boundary is fuzzy, but it provides a better working basis for discussion.

The next problem is to try to characterize the boundary.

Whoops. There are two ways to cut this boundary. One is the boundary between real machines and ideal machines:
  • A human might understand an ideal machine.
  • A human can never fully describe and understand a real machine.
This is an important division, and it has serious implications in the patent office as well as in the laboratory. It's a hard fact for proud humans, whether engineers, sales crew, customers, managers, bureaucrats, or monarchs. It's a bitter pill to swallow.

Fortunately, we can understand many real machines sufficiently for practical purposes. That is, we can build understandable models of ideal machines that match real machines closely enough for many practical purposes.

That makes the pill a little easier to get down. Unfortunately for patent examiners, judges and juries, plaintiffs and defendants, the question of which practical purposes remains open for any specific machine.

There is another division, somewhat orthogonal to the division between engineering specifications and mathematical models on the one hand, and actual machines on the other.

It applies to our engineering models, and projects onto the real machines modelled thereby.

Again, the boundary is a bit fuzzy, but it involves an arcane device from computer science called a stack.

A stack is a place to remember things. We push facts onto the stack, do some work that might make us forget those facts, and then come back to the stack. The "last-in, first-out" nature of the stack helps us keep our work organized and flowing. Until we find that something in our work alters a fact we have buried down the stack somewhere.

So it's useful to have another stack, and we can shift facts from one stack to the other, keeping them in order, and then shift them back when we are done.
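That two-stack dance can be sketched in a few lines. Here is a minimal illustration of my own (the post names no language or code): shifting facts aside onto a holding stack to reach one buried down the main stack, then shifting them back in order:

```python
# A sketch, not from the original post: two stacks let us alter a fact
# buried below the top without losing the order of everything above it.

def update_buried_fact(stack, key, new_value):
    """Pop entries onto a holding stack until `key` surfaces,
    change it, then shift everything back in order."""
    holding = []                      # the second stack
    while stack and stack[-1][0] != key:
        holding.append(stack.pop())   # shift facts aside, top first
    if stack:
        stack[-1] = (key, new_value)  # alter the buried fact
    while holding:
        stack.append(holding.pop())   # shift them back when we are done
    return stack

work = [("task", "write"), ("line", 42), ("mode", "edit")]
update_buried_fact(work, "line", 99)
# work is now [("task", "write"), ("line", 99), ("mode", "edit")]
```

The holding stack reverses the order of the facts as it receives them and reverses it again on the way back, so everything ends up where it started.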

Some people don't like stacks. Too many constraints. It's easier to have a pigeon-hole rack of boxes to keep those pesky facts in, and then you can just grab whatever facts you need when you need them. Pigeon-hole racks are useful for many things, like post-office boxes and such. But they really don't provide a basis for remembering what you need to work on next in solving a problem, or in maintaining control of a machine. Stacks provide that organization.

This is the best basis for the partition that I know of:
  • Machines that only have to track a few facts (operating states) can usually be understood. But they aren't very flexible. Think of a simple light switch.
  • Machines that have a single stack to track facts can be described and understood in most cases. The stack provides our basis for understanding. If we or the machines lose track of what we are doing, we can go back to the stack to remind us.
  • Machines that have two (or more) stacks are a bit trickier, but still are generally within reach, as long as we don't end up shifting too many things from one stack to another.
  • Machines that keep too many facts (states) randomly accessible are easy to lose control of, easy to think we understand when we don't.
And the partition between machines which we can understand and those we can't falls between the last two classes.
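As an illustration of the second class in the list above, here is a tiny single-stack machine, my own sketch rather than anything from the post: a balanced-bracket checker. At every step the stack holds exactly the facts still pending, which is what keeps the machine's behavior understandable:

```python
# My illustration, not the post's: a single-stack machine whose entire
# memory is the stack of brackets it still "owes" a closing match.

def balanced(text):
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)          # push a fact: a bracket still owed
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False          # the stack says we owed something else
    return not stack                  # understood: nothing left owing

print(balanced("([]{})"))  # True
print(balanced("([)]"))    # False
```

If we lose track of where we are in the input, the stack reminds us, which is exactly the property the list above claims for single-stack machines.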

Unfortunately, human language, when analyzed mathematically, falls into the latter class. So does pretty much every machine, tool, or system with enough flexibility to be useful. Animals, too, when we try to analyze them in some methodical way, fall into the last class. Humans? Of course we fall into that last class. Simple feels free, but we quickly find the simple too confining. Being free of cares is not freedom.

The tools we call computers? Well, that's a good topic for discussion and consideration. Later.

Monday, February 11, 2013

Security Tactics 1 -- Don't Be Valuable

Now that you know what you want to protect and what its value is, and you have spent some time evaluating the cost of security vs. the cost of replacement, you are ready to actually start on a tactic.

Passwords? Security systems?

No.

The first and most fundamental tactic in security is to reduce your potential loss in the case of a successful attack.

Lemme 'splain:

I mentioned the cost of replacement when I talked about planning the costs of security in the last rant.

The absolutely best way to reduce the cost of security is to bring the cost of replacement down to zero.

If you have nothing to protect, you don't need to spend money, time, or other resources protecting anything.

Moreover, you don't care if people walk off with anything, so you basically go into a sharing mode. That reduces the motivation of many attackers, since there is nothing to steal.

Sharing is a good way to turn enemies into friends, which is another good way of reducing the number of potential attackers.

Well, if everything were infinitely reproducible, we could basically get rid of both military warfare and excessive economic competition.

Okay, it's an ideal. But it is a meaningful ideal. If you are having serious security problems, you should re-evaluate your resources, operations, facilities, etc. If there are things you don't need to protect, quit trying to protect them, and security issues disappear like snow in the tropical sun.

And, until you take this step, everything else is just a band-aid.

Security Basics 3 -- Matching Measures to Value

The third principle is to match your security measures to the value of what you are protecting.

As I said before, you don't usually want to secure a ten thousand dollar touring bicycle with a three dollar lock on a flimsy chain that could be cut through by a determined kid with diagonal cutting pliers.

Nor do you usually want to protect a two hundred dollar utility bike with a thousand dollar chain lock.

Generally, you want to spend something around a tenth (plus or minus a bit) of the cost of replacement on protection measures.

Now, I just said a mouthful there. Let me unpack it.

I didn't exactly say it before, but knowing the value of something includes knowing its replacement value, or, rather, how much it would cost to replace.

Replacement value. Cost of replacement. Not the same, and neither the same as the actual value, much less the perceived value.

Everything that you might want to protect has a replacement value or a cost of replacement.

You cannot secure something that is priceless. Period.

If you don't understand why, go back to the popular song from the '60s, "One Tin Soldier" (Lambert/Potter).

Of course, there are other issues relative to priceless stuff, primarily that what is priceless to the owner of the company is generally not priceless to the company itself. If the company itself has something that the company considers priceless, the accountants are not doing their jobs.

If the company has something that it considers priceless, that thing will sooner or later cause things at the company to go seriously wonky. If not corrected, it will destroy the company. You can't operate a company long-term unless everything the company owns has a given and fairly reasonable cost of replacement.

If the company has something priceless, call in the boss and the accountants and whoever else it takes, and get a cost of replacement assigned to it.

Often, the actual cost of replacement, sentiment aside, will be surprisingly low. That's no offense to the boss. If it could be valued, it wouldn't be priceless.

Why roughly a tenth of the cost of replacement?

I'm reading the mind of the thief or other attacker. He's saying to himself something like

I'm not going to be able to sell this thing for the full value. If I have to carry in a thousand dollars worth of tools to steal something worth a thousand dollars, when the risks include having to leave the tools behind, I'm going to get a real job.

Yeah. I'm guessing when I say a tenth. That's why I say plus or minus. The object is to spend just enough to discourage most potential thieves.

What you're doing is augmenting insurance. Insurance attempts to take care of things after the probabilistic event of an intrusion or theft. Security reduces the probability of the event. Together, you want to bring the costs down to a manageable level.
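A back-of-envelope sketch of that trade-off may help. The probabilities and prices here are invented for illustration and are not from the post:

```python
# A hypothetical sketch: total expected cost is what you spend on
# security plus the probability-weighted cost of replacement.

def expected_cost(replacement, p_loss, security_spend):
    return security_spend + p_loss * replacement

bike = 1000  # replacement cost of the bike

# no lock: a casual thief faces no deterrent (the probability is a guess)
unprotected = expected_cost(bike, p_loss=0.50, security_spend=0)

# spend a tenth on a lock: most casual thieves move on (also a guess)
protected = expected_cost(bike, p_loss=0.05, security_spend=100)

print(unprotected, protected)  # 500.0 150.0
```

Spending a tenth pays for itself here only because the lock cuts the odds substantially; the point of the sketch is the shape of the trade-off, not the particular numbers.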

And adjusting the cost of security measures to the value of the thing being protected is one way to manage the costs.

Really, a tenth is a bit high, but we aren't ready to calculate for real, just yet.

One last thing before I move on:

If the argument of replacement vs. protection runs into the problem of having to replace something repeatedly, you will have to shift from security tactics to war tactics, but that is also a topic to be dealt with later. (I will deal with it partially in the next post.)

Security Basics 2 -- What Are You Protecting?

The second rule of computer security is to know what you are protecting (or, rather, trying to protect).

If you don't know what it is you are protecting, you'll tend to leave the valuables in the middle of the road while you haul meaningless junk into the safe.

Or you will spend hundreds of thousands of dollars trying to protect something worth only a thousand or so.

You also need to know what it is you need to do with the valuables. If you don't know this, you'll tend to leave the valuables in the middle of the road while you are busy building walls, safes, locks, gates, etc., in buildings where you never intend to take them.

(Of course, certain large system houses -- cough -- MS -- cough -- IBM, too -- ahem -- Cisco -- gack, Apple, too? erk -- erm, well, certain, uhm, most large systems houses are just delighted to help you build security measures you will never need or use. Especially, if you never use them, no one will know that they don't really work.)

If you know what you are protecting and how you need to use it, you can focus your resources on real protection measures. In other words, you are less likely to run out of resources for security before you can actually get meaningful measures implemented.

("Measures" is such a buzzword. It just happens to be the best word I can think of, since security is a lot more than just walls, gates, locks, passwords, strongboxes, sandboxes, etc. Well, buzzwords are only really buzzwords when misused.)

Now, there are some hidden issues here.

Not only do you need to know what you are protecting, you also need to know its value. You don't usually want to spend thousands of dollars protecting imitation jewelry, and you don't usually want to leave real jewelry in a cheap lockbox you bought at a discount shop.

Hmm. I was going to leave the question of whether there is such a thing as "real" jewelry begging, but it is one way to approach another hidden issue, which also happens to be a core issue.

One geek's aunt left him her wedding ring set when she died. The geek's wife now has that set. It appraises in the thousands of dollars range, but, because he didn't work hard and sweat blood to buy it, it is not worth very much to her. Maybe she's being unreasonable, maybe she isn't. But these sorts of things need to be known when deciding how to allocate security resources.

(It seems like I should offer some advice, but each situation is different, and I want to talk about matching value to resources elsewhere.)

Knowing what you are trying to protect includes knowing how it is valued, and who values it that way.

Once you know what you are protecting, you need to match your efforts to its value.

Security Basics 1 -- Perceived Value

The first principle of computer security is the same as in the real world:

If what you have is perceived to be valuable, there will be people who will decide they want it.

So the first rule of security is to avoid making something look more valuable than it is.

(A derived rule is to try to make it appear less valuable, but such attempts are generally all too easy to see through, and thus backfire. Going that route should be reserved for special cases, not engaged in without careful planning, and definitely left alone if you haven't thoroughly understood all the principles of security.)

Think about the old MS/PC-DOS machines. Internal storage was small. Networking was primitive. Data tended to be stored off-line. The biggest security problems were computer viruses written mostly by kids who had no idea of the value of the data their toys were mucking around in.

Well, the data itself wasn't that valuable either, because it was hard to dig into, hard to aggregate, hard to interpret.

Security was not a big problem because of a lack of value, and a lack of perceived value.

To get a grip on perceived value, however, you need to know what you are protecting.