DRAFT DRAFT DRAFT: PARTIAL DRAFT THAT IS ONLY PARTLY COHERENT AND THAT IS INCORRECT IN SOME RESPECTS!

Trust, Rules Of Engagement

Up to this point we have glossed over the matter of establishing trust between objects. In most ways trust flows just as human intuition suggests. Unfortunately, the rules are subtler than naive analysis makes them appear, which leaves room for error. So even though your intuition will be correct 99% of the time, it is inadequate for building a truly secure system, because a system that is 1% insecure is not secure. To really build secure systems, one needs to understand the rules of trust transfer explicitly.

Trust is not actually a part of the implementation of a system. Trust is a part of the specification of a system. The capability transfers within a system represent one possible implementation of the trust transfers described in the specification. If more capability is transferred in the implementation than the trust in the spec allows, it is a security violation. You cannot have a secure system without a specification of all the trust relationships. (Author's note: it is theoretically possible to have a secure system without such a spec, but the chances of success are ludicrously small.)

If you do not have a trust specification for your system, do not despair. It is possible to create the spec for the trust relationships while you are implementing the capabilities. The rules and conventions laid out below are designed to allow you to use this approach.

In capability based security, all trust is based on prior relationships. Because all trust is based on prior relationships, if you understand the rules by which trust is transferred along with objects, you understand all the valid trust relationships in the system. If there is a trust relationship not explained by these transfer rules, it is a breach of security.

Rules for constructing a documentation convention:

In many ways, the rules below embody one particular documentation convention for creating the trust specification. Many such conventions are possible, and one could argue over which is best. To be best, a convention should meet the following criteria:

If a different set of rules meets these criteria better than the set presented below, modifying the rules (and documenting the change in the rules!) may be an efficient strategy.

Critical Terms

Rules need to be concise to be followed. To achieve this goal, it helps to have carefully defined terminology that can embody key concepts very clearly. Therefore, the rules below use the following terminology, which the programmer of rule-compliant software must understand:

POV, client, server

In the following discussion, we will refer to the object whose point of view we are using to assess trust and security as the Point Of View object, or POV. Objects with references to the POV are known as clients. Objects to which the POV has references are called servers when considered from the POV's position. All the rules defined below are rules to apply when examining an object as the POV.

Auditability, failures, breaches

In order for us to have confidence that a system is secure, the system must be auditable. To be auditable, it must be possible to prove that the system is secure, and equally possible to prove that it is not. Auditability is a direct consequence of having a spec against which you can compare the code. The rules described below are sufficient for creating auditability. Please note that, in practice, auditability is more than just a confidence builder: it is the only path to real security.

To audit a system, we have constructed the rules to support the following approach:

All trust analysis is reducible to the examination of individual messages. To prove that a particular message is secure, examine the message first using the sender of the message as the POV, then examine the message again with the receiver of the message as the POV.

Look at all the messages from a single POV: if each message passed to or from the POV is provably secure, the POV is a secure player.

To prove that a whole system is secure, then, iterate through every object in the system, taking each object in turn as the POV. If every message for every object-as-POV follows every rule, the whole system is secure. If one message for one POV violates one rule, the system is not secure.
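
As a sketch of this audit procedure (the SystemModel, Message, and Rule types below are invented for illustration; they are not part of any real library):

    import java.util.List;

    // Hypothetical model types, invented for illustration only.
    interface Message { Object sender(); Object receiver(); }
    interface Rule { boolean permits(Object pov, Message m); }
    interface SystemModel {
        List<Object>  allObjects();
        List<Message> messagesInvolving(Object pov);
    }

    final class Auditor {
        // The system is secure only if every message, examined from
        // the POV of both its sender and its receiver (each endpoint
        // shows up as the POV on some pass of the outer loop), obeys
        // every rule.
        static boolean auditSystem(SystemModel system, List<Rule> rules) {
            for (Object pov : system.allObjects()) {
                for (Message m : system.messagesInvolving(pov)) {
                    for (Rule rule : rules) {
                        if (!rule.permits(pov, m)) {
                            return false; // one violation breaks security
                        }
                    }
                }
            }
            return true;
        }
    }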

One last note on the reliability of security. Unfortunately, even if a system passes a full security audit, it is still possible for capabilities to go to the wrong people and objects, because the specification may be wrong. If the code implements the spec correctly but something bad still happens, we will refer to it as a security failure. It is a security failure if, for example, your system administrator gives his password to a reporter from the National Enquirer (we could say that we did not spec the behavior of the system administrator correctly :-).

If the code does not properly implement the spec, we refer to it as a security breach. Not all breaches lead to failures, and not all failures can be prevented by fixing breaches. But breaches are certainly the place to start, and often, fixing them will be enough.

"more" trusted, "differently" trusted

Often we intuitively think of trust in terms of "more" and "less". Because of the fine-grained access right control in capability based systems, this picture is incomplete. If you trust A with append access to a file, and you trust B with read access, you neither trust A more than B, nor vice versa: instead, you trust them differently. Nonetheless, "more" and "less" are useful concepts even in capability systems. You trust object A more than object B if you are willing to grant object A all the capabilities you grant to B, plus at least one more.
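
A small Java illustration (the interface names are invented for this example):

    // Two capability interfaces that are incomparable: neither
    // capability set contains the other, so a holder of one is
    // trusted differently, not more or less, than a holder of the
    // other.
    interface AppendAccess { void append(String line); }
    interface ReadAccess   { String read(); }

    // An object granted both of these plus one more is trusted
    // strictly more than an object granted only one of them.
    interface DeleteAccess { void delete(); }
    interface FullAccess extends AppendAccess, ReadAccess, DeleteAccess {}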

frontend trust, total trust, distrust, working trust

Because of the fine-grained nature of capabilities, the question "Do you trust an object?" cannot generally be answered "yes" or "no". However, the yes/no categorization can be recovered by asking more rigorous questions. In general, the intuitive human meaning of the "do you trust an object?" question boils down to one of the four sharper questions named in this section's heading: Do you give the object frontend trust (may it invoke your public interface)? Do you give it total trust (would you hand it every capability you hold, backend included)? Do you distrust it (would you grant it no capabilities at all)? Or do you give it working trust (just the capabilities it needs for the task at hand)?

If you always replace the question "do you trust A" with one of these four questions, you will be confused less often, and your chances of creating a secure system will rise dramatically.

At the mercy of

There are objects which the POV must trust to some extent because there really isn't a choice. If the programmer worries about trusting these objects, he is wasting his time; he might as well relax and take advantage of the trust relationship he must have with them. We say that the POV is at the mercy of such an object. The creator of the POV is one such object: the POV's total trust is at the mercy of its creator.

Platform, distrusted platform

The platform is the collection of all the underlying objects that an object depends upon for existence. A typical platform for building a secure system would include the hardware (CPU, disk drives, RAM, etc.), an operating system, the JVM, and the E Vat.

If any element of a platform cannot be given total trust, the whole platform is referred to as a distrusted platform. The typical way a platform becomes distrusted is if the hardware is not under the control of someone who is totally trusted.

Fully inspectable objects, wide open objects

A fully inspectable object is an object that, one way or another, winds up granting to its clients all the capabilities embodied in its servers. A trivial fully inspectable object would be an object that has as many public methods as it has servers, where for each server there is one public method that simply hands the server out to the client. Fully inspectable objects give total trust to anyone who gets their frontend trust, because the frontend and the backend are the same. Usually in this document we refer to such objects as wide open.
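
For example, here is such a trivial wide open object, sketched in Java (the server types are stand-ins invented for illustration):

    // One public method per server, each simply handing the server
    // out.  Frontend trust in this object is total trust.
    final class WideOpen {
        private final Object serverA;
        private final Object serverB;

        WideOpen(Object serverA, Object serverB) {
            this.serverA = serverA;
            this.serverB = serverB;
        }

        public Object getServerA() { return serverA; }
        public Object getServerB() { return serverB; }
    }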

utilities, provable utilities

Objects that have no capabilities (i.e., objects that simply perform computations or just carry data) are called utilities. Utility-ness is transitive: if all the servers for an object are utilities, the object itself is also a utility. If an object is provably a utility, it can be passed around the system with confidence that it will cause no security breaches. To be a provable utility, an object must be immutable, and all possible subclasses must also be immutable. In Java, this means the class must not be subclassable (i.e., it must be final). In E, this means the class must be constant (final classes are a subset of constant classes). Java's primitive types (int, boolean, float, double) are provable utilities, as are immutable final classes such as String.
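
A minimal Java sketch of a provable utility (the Money class is invented for illustration):

    // A provable utility: final (no subclasses possible), all fields
    // final and themselves immutable, and no references to
    // capability-bearing servers -- it only carries and computes data.
    final class Money {
        private final long cents;
        private final String currency; // String is itself immutable

        Money(long cents, String currency) {
            this.cents = cents;
            this.currency = currency;
        }

        long cents() { return cents; }
        String currency() { return currency; }

        // Pure computation; grants no authority.  (The sketch ignores
        // currency-mismatch checking.)
        Money plus(Money other) {
            return new Money(cents + other.cents, currency);
        }
    }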

Closely held

A powerful set of capabilities is said to be closely held if the set is carefully protected from the outside world by another set of objects that have protection as an explicit goal. Objects inside the closely held world can have total trust in each other, and no auditing is required. The protection objects, however, must be audited very carefully.

Guests, Stewards, and Crew

The concept of a closely held object makes the most sense in the context of the other kinds of objects that are not closely held. A powerful metaphor has been found in the realm of ocean cruise liners. Guests are the passengers, in whom no real trust is placed. Crew are the objects that make everything operate, and they have access to very powerful capabilities: they can sink the ship, for example. Stewards are the objects that fulfill guest requests, often with assistance from the crew. In this metaphor, the crew is closely held. The stewards are the objects that have, as one purpose, the presentation of facades that give particular limited capabilities to guests under certain circumstances. Stewards are the objects that must be audited most closely for security breaches. It is possible for objects in the crew to have total trust in one another, although, for reasons discussed later, it is always better to follow the Principle of Least Authority, even among crew.

Alerts

The auditable security documentation used in conjunction with the following rules comes in the form of alerts, of two kinds. The first kind of alert informs the client that some of the exposed POV capabilities are exposed only via objects handed in to the POV.

Objects may have capabilities that it only makes sense to expose to objects handed to them. Though these capabilities are not directly exposed in the public methods, they are actually part of the frontend: an object could hand itself in to acquire the capability if it wanted to. If a POV exposes a capability to objects handed to it, and that capability is not directly exposed through the public method interface of the POV, the programmer must document an alert stating that the capability is so exposed.
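
As a sketch of how this arises (the Archive, Visitor, and FileHandle types are invented for illustration):

    // The public frontend looks harmless, but the full FileHandle
    // capability is handed to any visitor a client passes in, so it
    // is effectively part of the frontend and must carry an alert.
    interface Visitor { void visit(FileHandle h); }
    interface FileHandle { String readAll(); void delete(); }

    final class Archive {
        private final FileHandle file; // powerful hidden server

        Archive(FileHandle file) { this.file = file; }

        public String summary() { return "archive of one file"; }

        // ALERT: exposes the FileHandle capability to handed-in
        // objects, even though no public method returns it directly.
        public void accept(Visitor v) {
            v.visit(file);
        }
    }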

The second kind of alert warns that an object being handed over will receive more authority than the Principle of Least Authority recommends.

Truths

A truth is something the POV developer really doesn't have a choice about with respect to trust relationships. Truths are not things you can alter; the rules, programmers, and users must ensure that the system remains secure in the presence of these truths.

The POV necessarily totally trusts the object that created it.

All the POV's references to other objects ultimately derive from its relationship to this creator: either the creator gave the reference to the POV during initialization, or made someone else a client of the POV, who in turn handed the POV a reference. As noted earlier, the POV is at the mercy of its creator.

The POV necessarily totally trusts the platform upon which it runs.

The POV is at the mercy of its platform.

The POV necessarily gives frontend trust to all clients.

If someone asks the POV to do something, the POV programmer can just relax and let the POV do it. Please note, this is the most fundamental underlying truth of a capability based system. All other features and rules of trust-transfer in a capability based system are designed to guarantee that this form of frontend trust will not break. If you find yourself wondering whether you can give POV frontend trust to absolutely all clients, you probably have a security breach. All security breaches in a capability based system ultimately express themselves in the granting of some POV's frontend trust to the wrong object.

Rules

The POV must give no capabilities to an object that is distrusted.

This follows pretty directly from the definitions of the terms. You can give references to utilities to distrusted objects.

The POV cannot trust an object more than it trusts the platform upon which the object runs.

Consequently, you can trust an object running on someone else's computer, for example, only to the extent that you trust the person who owns the computer. This has the following corollary:

Facades for powerful objects must run only on platforms in which the powerful object can place full frontend trust.

Because powerful objects tend to be wide open, this often translates as a requirement that the facades must run on platforms of total trust.

In the simple case, this corollary means that the facade must reside on the same machine as the object for which it is a facade. Note: the facade may have a proxy on a distrusted platform, but the facade itself must co-reside with its object.

The POV may always send/receive a provable utility.

If the POV sends a utility, such as a string with information, to the wrong party, the security breach actually occurred when the wrong party got the capability to request the information.

In the absence of an alert, the POV must not grant an object a capability that is not obviously exposed in the POV frontend.

The importance of documenting the hand-out of implicit capability with an alert cannot be overemphasized. If capabilities are handed to objects received from clients, a client could easily send itself, or a special subclass of the expected type, to funnel those capabilities back to the client. These capabilities really are part of the frontend whether you like it or not, and they can easily be turned into a breach that is very difficult to find during the audit.
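
Continuing the invented Archive sketch from the Alerts section, a client can reclaim the implicit capability like this:

    // The interfaces are repeated so this sketch stands alone.
    interface Visitor { void visit(FileHandle h); }
    interface FileHandle { String readAll(); void delete(); }

    // The client hands in an object whose only job is to stash the
    // capability and return it to the client.
    final class CapabilityThief implements Visitor {
        FileHandle stolen; // filled in when the POV calls visit()

        @Override
        public void visit(FileHandle h) {
            stolen = h; // the "hidden" backend capability, now ours
        }
    }
    // Usage: hand a CapabilityThief to archive.accept(), then read
    // its stolen field -- which is why the exposure needs an alert.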

In the absence of an alert, the POV must not grant an object more than working trust.

This is essentially the enforcement of the Principle of Least Authority. The more rigorously you follow the Principle of Least Authority, the less documentation you must produce.

The creator of a facade faces an interesting documentation question. In a sense, the facade must by definition be granted more authority than it will use: it has a reference to a powerful object, an object more powerful than it would technically need to fulfill its frontend contract. For example, if a facade for the powerful object already existed, a new identical facade could get along quite nicely with just a reference to the pre-existing facade. We will assert, however, that in order to do its job, the facade must receive an object of the type specified at the point where it receives the reference to the powerful object it represents. If the facade is handed an object with only the powerful capabilities specified in the type spec, the creator of the facade is not required to document an alert. If, however, the creator hands the facade a subclass of the specified type that has even more power, this must be documented with an alert.
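
A minimal Java sketch of this situation (all the type names are invented for illustration):

    // The facade's constructor declares exactly the power it needs;
    // handing it that type needs no alert, handing it a more
    // powerful subtype does.
    interface FileReader { String read(); }
    interface FileReadWriter extends FileReader { void write(String s); }

    final class ReadOnlyFacade {
        private final FileReader reader; // declared power: read only

        ReadOnlyFacade(FileReader reader) { this.reader = reader; }

        public String read() { return reader.read(); }
    }

    // new ReadOnlyFacade(plainReader) -- no alert needed.
    // new ReadOnlyFacade(readWriter)  -- a FileReadWriter is more
    //   powerful than the declared FileReader type: document an alert.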

The POV programmer must not enlarge, or make different, the capability represented by the POV, or the capability delivered in the frontend of the POV, without a full audit of all client objects.

Changing the capability represented by an object can occur if the object simply gets a new public function that hands out a string, such as a list of salaries. Changing the capability delivered means adding a public function that hands out a reference to another capability.
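
A small Java sketch of the distinction (the Payroll and SalaryTable types are invented):

    interface SalaryTable { String salaryListing(); }

    final class Payroll {
        private final SalaryTable table;

        Payroll(SalaryTable table) { this.table = table; }

        // Enlarging the capability REPRESENTED: a new public method
        // that hands out information (a string of salaries).
        public String listSalaries() { return table.salaryListing(); }

        // Enlarging the capability DELIVERED: a new public method
        // that hands out a reference to another capability outright.
        public SalaryTable getTable() { return table; }
    }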

This rule means that expanding the capabilities delivered or represented in the frontend of an object is a major security upheaval. In many cases, it will make more sense to create a new object with the different/greater capabilities, and hand this new object to the clients who need it.

If the POV programmer reduces the capabilities in the frontend, no audit is required.

The POV can trust an object to the same extent that the POV trusts the object that hands it to the POV.

This has a couple of standard forms, based on the POV's relationship to its creator, and its relationship to its clients. These subrules are listed here:

In the absence of documentation, the POV totally trusts the servers handed to it by the POV's creator.

The POV may promise in documentation that the level of trust it will confer upon objects handed to it by the creator will be less than total trust. Only in this case may the creator hand the POV an object in which the creator would place less than total POV trust. A creator should not hand the POV an object to which it would not give total POV trust unless the POV has made such a documented promise. If the creator hands the POV an object that the POV cannot totally trust, in the absence of such documentation, security is breached.
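
One way such a promise might look in source documentation (a sketch only; this draft does not fix the convention, and the Logger class is invented for illustration):

    // Invented sketch of a documented promise, written as a comment
    // on the constructor through which the creator hands in a server.
    final class Logger {
        private final Appendable sink;

        // PROMISE: this object grants the handed-in sink no more than
        // working trust -- it will only ever call append() on it, and
        // it will never hand the sink to any other object.
        Logger(Appendable sink) { this.sink = sink; }

        void log(String line) {
            try {
                sink.append(line).append('\n');
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
        }
    }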

In the absence of promises, the POV has total trust in objects handed to it via its package interface.

This subrule is valid in E, but not in Java.

In the absence of promises, the POV can give frontend trust to objects sent to the POV by the POV clients.

After all, the client could have given the new object a reference to the POV without the POV's permission. The POV is not in a position to assess whether this has happened or not. So the POV might as well accept this situation as the default assumption: assume that the handed-in object has POV frontend trust.

In the absence of documentation, if multiple servers will vouch for an object, the POV can grant the vouched-for object a union of all the trusts the POV has in all the vouching servers.

This is the only mechanism in these rules wherein the trust in an object can be increased. It is described in more detail in the Advanced Design Pattern section.

Applying the rules in some very simple situations:

Object A requests B from C

There can be no security breach or security problem in this limited encounter. Granting authority over B is part of C's function. A is merely using C's frontend as expressed. If any security breach occurred, it occurred earlier, when someone handed A a reference to C.

Object A, with references to B and C, hands B over to C

Object A must ask two questions: Does A trust C with B's frontend? Does A trust B with the working trust C will have to grant B?
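
Expressed as a minimal Java sketch (the types A, B, and C are invented placeholders):

    interface B {}
    interface C { void take(B b); }

    final class A {
        private final B b;
        private final C c;

        A(B b, C c) { this.b = b; this.c = c; }

        void introduce() {
            // Question 1: does A trust C with B's frontend?
            // Question 2: does A trust B with the working trust that
            //             C will have to grant B in order to use it?
            c.take(b);
        }
    }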

Briefest possible summary:

Herewith the shortest possible description of the rules, encompassing the critical things you need to remember while implementing code. This is not all of the rules, but these are the non-obvious ones that must never be forgotten on a day-to-day basis:

Example Audits

Shortcuts for Developers and Auditors

Utilities only need to be inspected for capabilities.

Guest objects and crew objects do not need security auditing.

Only capability-passing events need to be questioned as possible breaches.

You don't really need to look at every message; you need to look at every passing of a capability. In the Salary Printer example, it turns out that the efficient algorithm for performing an audit is to make a list of all the situations in which someone forces someone else to accept an object (the concept of force: I made you take it if I send it to you via your method void SetObject(Obj)). If you request an object from me, via Obj SendObject(), this cannot be the source of a breach; you are just using my frontend as a client. Use the sender of the object as the POV: if the sender is willing to give the sent object working trust, and is willing to grant the recipient frontend trust on the sent object, there is no security breach.
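
A minimal Java sketch of the push/pull distinction (the Capability and Receiver types are invented; the method names follow the SetObject/SendObject examples above):

    // Only "push" events, where one object forces another to accept
    // a reference, need to be audited as possible breaches.
    interface Capability {}

    interface Receiver {
        // PUSH: the caller forces the receiver to take obj.  Audit
        // this call with the caller as POV: is the caller willing to
        // give obj working trust, and to grant the receiver frontend
        // trust on obj?
        void setObject(Capability obj);

        // PULL: the caller merely uses the receiver's frontend as a
        // client.  If a breach occurs here, it happened earlier, when
        // the caller acquired its reference to the receiver.
        Capability sendObject();
    }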

Stuff to Ponder

You can trust a "final" object more than you can trust the guy who sent it to you. While true, this requires a serious inspection of the object to see whether it has a reference back to its boss and sends capabilities back. Far better to avoid this kind of trust. It needs to be documented like hellfire if you do it anyway. If the final object is modified, it must be inspected like hellfire again; but how would you know it was modified? It was the other guy's problem.

Does crew code need to be audited?

untrusted

??? Here we will refer to objects granted less than frontend trust as untrusted.

Chip thinks you do not necessarily totally trust your creator

I can trust an object that I gave you myself with more trust than I grant you.

Promises are no longer needed given the concept of working trust

circular reasoning

Test question: for the facade creator, when does the frontend convention require promises, and why? When does the least-authority convention need alerts, and why? When does the alert/promise blend need documentation, and is it a good idea? Explain the differences.

How would Creator know he was handing a Super File Handler to the Facade if Creator's own creator gives Creator a factory for Super File Handlers? Sure, the Creator's creator has to document the alert, but isn't that awfully far away? Maybe not...