10.16.2010

False-Light Security

Security is a losing battle.

If I told you to build a lock that could never be defeated, you could try. But you'd never succeed. Suppose you create a self-destructing lock that eliminates all visible, auditory, and molecular traces of the item you're trying to protect...

When you look at something, what do you see?

If you said you see whatever you're looking at, you're wrong...

If you said you see the light that reflected off of whatever you're looking at, you're closer...

If you said you see what your brain interprets that light to be, then you're even closer...

So now visualize what happens in the area outside the scope of the lock in question. Even if the item itself is obliterated and gone forever, the light that bounced off of it is cast out into space, and can be viewed by any passing starship.

Assuming a technologically advanced species, the item protected by that lock could be, and likely would be, recreated down to its molecular structure.

A long time ago I mentioned to friends an idea I had called the Light-Scholars. The basic gist of it is that if faster-than-light travel is ever invented, the complete, unabridged history of every atom of the Human Race could be recorded and stored for future generations, based solely on the fact that we could teleport to a distant part of space, point powerful telescopes back at Earth, and record. What we'd see is not what Earth is, but what Earth was. Light that traveled from Earth out into empty space. Light with no lifespan.

Understanding this, we're capable of impressive feats of intelligence.

For example, if what we see might not be correct, how could we ever trust what we see?

It's an interesting paradox.

But we're actually going to apply it.

In a typical Hollywood-style hacking scene, a username and password are used to gain entrance into an impressive, intimidating mainframe-type system. If the wrong password is typed, the person attempting to log in gets an error message and the opportunity to re-type it. Likewise, if someone trying to log in quickly to check mail gets their password wrong courtesy of a typo, they also get that error and a chance to type it in again.

This is the wrong way to do it on both counts.

If the hacker uses the wrong username/password, they should be sent to a generic Windows screen with almost no content to look through. A keen eye would spot the forgeries and fakes here, and they'd have to log out and back in to attempt again.

If the genuine user makes a typo on their password, the same thing should happen. They might be logging on just to surf a few pages on the net, at which point they'd log back out. This actually gives them more security.

It's like the difference between logging in as a user and logging in as root. But to take things even further, it could serve as a panic button - if someone is alerted every time the decoy environment is activated, it could warn that an employee might be held at gunpoint (or blackmailed) and forced to log in, at which point further action could be taken.
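To make the idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical - the user store, the decoy session, and the alert hook are placeholders, not any real system's API - but it captures the behavior described above: a wrong password never produces an error, it quietly drops the visitor into a fake environment and tips someone off.

import hashlib
import hmac

# Hypothetical credential store: username -> hash of the real password.
# A real system would use a proper password hash (bcrypt/scrypt) with per-user salts.
USERS = {
    "alice": hashlib.sha256(b"salt:correct-password").hexdigest(),
}

def hash_password(password: str) -> str:
    return hashlib.sha256(b"salt:" + password.encode()).hexdigest()

def log_in(username: str, password: str) -> str:
    """Never returns an error: a bad login lands in a decoy environment."""
    stored = USERS.get(username)
    supplied = hash_password(password)
    if stored is not None and hmac.compare_digest(stored, supplied):
        return start_real_session(username)
    # Wrong username or password: no error message, no retry prompt.
    notify_admin(username)  # the "panic button" side effect
    return start_decoy_session(username)

def start_real_session(username: str) -> str:
    return f"real session for {username}"

def start_decoy_session(username: str) -> str:
    # A sparse, generic desktop with nothing sensitive on it; a careful
    # intruder might spot the fakes, but only after logging out to retry.
    return f"decoy session for {username}"

def notify_admin(username: str) -> None:
    # Placeholder: might page security, since a decoy activation could mean
    # a simple typo, or someone being forced to log in under duress.
    print(f"[alert] decoy session activated for account '{username}'")

if __name__ == "__main__":
    print(log_in("alice", "correct-password"))  # real session
    print(log_in("alice", "typo-password"))     # decoy session, alert fired

The point of the sketch is the shape of the control flow: there is no failure branch visible to the person at the keyboard, only two sessions that look alike from the login prompt.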

The key to the system is that you're falsifying information that people accept as truth. You're not building an unbeatable system; you're attacking a very human weakness: the assumption that what you see is what's really there. In short, instead of forcing a person to walk around in platemail armor and drive around in a tank, you're letting them stay anonymous, walking around town next to a more inviting target that turns out to be a fake.

Obviously it's not foolproof. But as our world becomes more and more digital, and as new technologies improve the speed and accuracy of everything we do, it's going to become more and more important to attack the human side of the algorithms.
