Subject: svn commit: r930432 - in /websites/staging/ace/trunk/content: ./ docs/ docs/analysis/ docs/design/ docs/design/src/
Date: Tue, 25 Nov 2014 11:48:22 -0000
To: commits@ace.apache.org
From: buildbot@apache.org
Message-Id: <20141125114822.D4D5A23888FE@eris.apache.org>

Author: buildbot
Date: Tue Nov 25 11:48:22 2014
New Revision: 930432

Log:
Staging update by buildbot for ace

Added:
    websites/staging/ace/trunk/content/docs/design/auditlog-analysis.html
    websites/staging/ace/trunk/content/docs/design/bundlerepository-analysis.html
    websites/staging/ace/trunk/content/docs/design/security-analysis-flow.svg   (with props)
    websites/staging/ace/trunk/content/docs/design/security-analysis.html
    websites/staging/ace/trunk/content/docs/design/src/security-analysis-flow.graffle   (with props)
    websites/staging/ace/trunk/content/docs/design/template-mechanism.html
Removed:
    websites/staging/ace/trunk/content/docs/analysis/
Modified:
    websites/staging/ace/trunk/content/   (props changed)
    websites/staging/ace/trunk/content/docs/design/index.html
    websites/staging/ace/trunk/content/docs/index.html

Propchange: websites/staging/ace/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Tue Nov 25 11:48:22 2014
@@ -1 +1 @@
-1641583
+1641589

Added: websites/staging/ace/trunk/content/docs/design/auditlog-analysis.html
==============================================================================
--- websites/staging/ace/trunk/content/docs/design/auditlog-analysis.html (added)
+++ websites/staging/ace/trunk/content/docs/design/auditlog-analysis.html Tue Nov 25 11:48:22 2014
@@ -0,0 +1,201 @@
Audit Log Analysis

An audit log is a full historic account of all events that are relevant to a certain object. In this case, we keep an audit log for each target that is managed by the provisioning server.

Problem

+

The first issue is where to maintain the audit log. On the one hand, one can maintain it on the target, but since the management agent talks to the server, it could keep the log too.

+

Then there is the question of how to maintain the log. What events should be in it, and what is an event?

+

Finally, the audit log should be readable and query-able, so people can review it.

+

The following use cases can be defined:

+
    +
  • Store event. Stores a new event to the audit log.
  • +
  • Get events. Queries (a subset of) events.
  • +
  • Merge events. Merges a set of (new) events with the existing events.
  • +
+

Context

+

We basically have two contexts:

+
    +
  • Target, limited resources, so we should use something really "lean and mean".
  • +
  • Server, scalable solution, expect people to query for (large numbers of) events.
  • +
+

Possible solutions

+

As with all repositories, there should be one location where it is edited. In this case, the logical place to do that is on the target itself, since that is where the changes actually occur. In theory, the server also knows, but that theory breaks down if things fail on the target or other parties start manipulating the life cycle of bundles. The target itself can detect such activities.

+

The next question is what needs to be logged. And how do we get access to these events?

+

When storing events, each event can get a unique sequence number. Sequence numbers start with 1 and can be used to determine if you have the complete log.

+

Assuming the target has limited storage, it might not be possible to keep the full log available locally. There are a couple of reasons to replicate this log to a central server:

+
    +
  • space, as said the full log might not fit;
  • +
  • safety, when the target is somehow (partly) erased or compromised, we don't want to loose the log;
  • +
  • remote diagnostics, we want to get an overview of the audit log without actually connecting to the target directly.
  • +
+

When replicating, the following scenarios can occur:

+
    +
  1. The target has lost its whole log and really wants to (re)start from sequence number 1.
  2. +
  3. The server has lost its whole log and receives a partial log.
  4. +
+

Starting with the second scenario, the server always simply collects incoming audit logs, so its memory can be restored from any number of targets or relay servers that report everything they know (again). Hopefully that will lead to a complete log again. If not, there's not much we can do.

+

The first scenario is potentially more problematic, since the target has no way of knowing (for sure) at which sequence number it had arrived when everything was lost. In theory it might ask (relay) servers, but even those might not have been up to date, so that does not work. The only thing it can do here is: Start a new log at sequence number 1. That means we can have more than one log in these cases, and that again means we need to be able to identify which log (of each target) we're talking about. Therefore, when a new log is created, it should contain some unique identifier for that log (an identifier that should not depend on stored information, so for example we could use the current time in milliseconds, that should be fairly unique, or just some random number).

+

How to find the central server? Use the discovery service!? This is not that big of a deal.

+

Events should at least contain:

+
    +
  • a datestamp, indicating when the event occurred;
  • +
  • a checksum and/or signature;
  • +
  • a short, human readable message explaining the event;
  • +
  • details:
      +
    • in the form of a (possibly multi-line) document
    • +
    • in the form of a set of properties
    • +
    +
  • +
+

The server will add:

+
    +
  • the target ID of the target that logged the event.
  • +
+
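As a sketch, such an event could be modeled as a small value object like the one below. The class and field names are illustrative assumptions, not the actual Apache ACE API; only the listed properties come from this analysis.

    import java.util.Map;

    /** Illustrative sketch of an audit log event; names are assumptions, not the actual ACE types. */
    public final class AuditEvent {

        private final long logId;                   // identifies the log instance (e.g. creation time or a random number)
        private final long sequenceNumber;          // 1-based position within one log
        private final long timestamp;               // when the event occurred, in milliseconds
        private final String message;               // short, human readable description
        private final Map<String, String> details;  // optional structured details
        private final String signature;             // checksum and/or signature over the other fields
        private String targetId;                    // added by the server, not by the target

        public AuditEvent(long logId, long sequenceNumber, long timestamp,
                          String message, Map<String, String> details, String signature) {
            this.logId = logId;
            this.sequenceNumber = sequenceNumber;
            this.timestamp = timestamp;
            this.message = message;
            this.details = details;
            this.signature = signature;
        }

        /** Called on the server side when the event is received from a target. */
        public void setTargetId(String targetId) {
            this.targetId = targetId;
        }

        public long getLogId() { return logId; }
        public long getSequenceNumber() { return sequenceNumber; }
    }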

Storage will be resolve differently on the server and target. On the target, using any kind of database would amount to having to include a considerable library, which makes these solutions impractical there. We might want to consider something like that for the server though. The options we have, are:

+
    +
  • Relational database
  • +
  • Object database
  • +
  • XML
  • +
  • DIY
  • +
+

How do events get logged?

+
    +
  • explicitly, our management agent calls an AuditLog service method;
  • +
  • implicitly, by logging (certain) events in the system;
  • +
+

Implicit algorithms can be build on top of the AuditLog service. What we need to monitor is the life cycle layer, which basically means adding a BundleListener and an FrameworkListener. Those capture all state changes of the framework. Technically we can either directly add those listeners, or use EventAdmin if that is available.

+
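A minimal sketch of what that could look like, assuming a hypothetical AuditLog service with a simple log(String) method; the listener registration itself uses the standard OSGi API.

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleEvent;
    import org.osgi.framework.BundleListener;
    import org.osgi.framework.FrameworkEvent;
    import org.osgi.framework.FrameworkListener;

    /** Hypothetical write-side of the AuditLog service used in this sketch. */
    interface AuditLog {
        void log(String message);
    }

    /** Forwards framework life cycle changes to the audit log. */
    public class LifecycleAuditLogger implements BundleListener, FrameworkListener {

        private final AuditLog auditLog;

        public LifecycleAuditLogger(AuditLog auditLog) {
            this.auditLog = auditLog;
        }

        /** Register for both bundle and framework events. */
        public void start(BundleContext context) {
            context.addBundleListener(this);
            context.addFrameworkListener(this);
        }

        public void bundleChanged(BundleEvent event) {
            auditLog.log("bundle " + event.getBundle().getSymbolicName()
                + " changed state, event type " + event.getType());
        }

        public void frameworkEvent(FrameworkEvent event) {
            auditLog.log("framework event, type " + event.getType());
        }
    }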

What would be the best way for the target to send audit log updates to the server? I don't think we want the server to poll here, so the target should send updates (periodically). So how does it know what to send?

+
    +
  • it could keep track of the last event it sent, sending newer ones after that;
  • +
  • it could ask for the list of events the server has;
  • +
  • it could send its highest log event number, and get back a list of missing events on the server, and then respond with the missing events.
  • +
  • it could just send everything.
  • +
+

Discussion

+

Having two layers for the audit log makes sense:

+
    +
  • The first, lowest, layer is the AuditLog service that gives access to the log. On the one hand it allows people to log messages, on the other it should provide query access. Those should be split into two different interfaces.
  • +
  • The second layer can build on top of that. It can either be removed completely, which means the responsibility for logging becomes that of the application (probably the management agent). It can be implemented using listeners. Finally, it can be implemented using events.
  • +
+
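A sketch of how that split could look, reusing the AuditEvent sketch above; the interface and method names are illustrative, not the actual ACE API.

    import java.util.List;

    /** Write side: used by the management agent (or listeners) to record events. */
    interface AuditLogWriter {
        void log(AuditEvent event);
    }

    /** Read side: used to query (a subset of) the recorded events. */
    interface AuditLogReader {
        /** Returns all events with a sequence number in the given (inclusive) range. */
        List<AuditEvent> getEvents(long logId, long fromSequence, long toSequence);

        /** Returns the highest sequence number currently present for the given log. */
        long getHighestSequenceNumber(long logId);
    }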

On the target we should implement a storage solution ourselves, to keep the actual code small. The code should be able to log events quickly (as that will happen far more often than retrieving them).

+

Communication between the target and server should be initiated by the target. The target can basically send two commands to the server:

+
    +
  1. My audit log contains sequence number 4-8, tell me your numbers. The server then responds (for example) with 1-6. This indicates we need to send 7-8.
  2. +
  3. Here you have events 7-8, can you send me 1-3? The server stores its missing events, and sends you the events it has (always check if what you get is what you requested).
  4. +
+

This is setup in this way so the same commands can also be used by relay servers to replicate logs between server and target.

+
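An illustrative helper for that bookkeeping on the target side, under the assumption that each side holds a single contiguous range of sequence numbers; this is not the actual ACE protocol code.

    /** Illustrative helper for the range bookkeeping in the synchronization protocol sketched above. */
    public final class LogSyncCalculator {

        /** Simple inclusive range of sequence numbers, e.g. 4-8. */
        public static final class Range {
            public final long from;
            public final long to;

            public Range(long from, long to) {
                this.from = from;
                this.to = to;
            }

            public boolean isEmpty() {
                return from > to;
            }
        }

        /**
         * The target owns {@code local} (e.g. 4-8) and the server reported {@code remote} (e.g. 1-6).
         * The result is the part of the local range the server does not have yet (here 7-8), which
         * the target should send with its next command.
         */
        public static Range eventsToSend(Range local, Range remote) {
            long firstMissing = Math.max(local.from, remote.to + 1);
            return new Range(firstMissing, local.to);
        }
    }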

Conclusion

+
    +
  • The audit log is maintained on the target.
  • +
  • On the target, we implement the storage mechanism ourselves to ensure we have a solution with a very small footprint.
  • +
  • On the server, we use an XStream based solution to store the logs of all the targets.
  • +
  • Our communication protocol between target and (relay)server however, should probably not rely on XML.
  • +
  • Our communication protocol between server and (relay)server might rely on XML (determine at design time what makes most sense).
  • +
+
Added: websites/staging/ace/trunk/content/docs/design/bundlerepository-analysis.html
==============================================================================
--- websites/staging/ace/trunk/content/docs/design/bundlerepository-analysis.html (added)
+++ websites/staging/ace/trunk/content/docs/design/bundlerepository-analysis.html Tue Nov 25 11:48:22 2014
@@ -0,0 +1,136 @@
Bundle Repository Analysis

The bundle repository stores the actual bundles and other artifacts. It is kept external to the system so we can leverage existing repositories and better protect the intellectual property of our users.

Problem

+

The bundle repository is an external repository that stores the actual bundle data and other artifacts. We keep this data external to our system to better protect the intellectual property of our users. Having only the meta-data in our system ensures the bundles and artifacts themselves can remain on a separate, protected network, even when the provisioning server itself is used in a hosted or cloud environment.

+

Access to the bundle repository is URL based.

The use cases are:

  • Get bundle, which returns the full bundle. This use case is mandatory, as it is the main goal for having a bundle repository.
  • Get bundle metadata, which returns only the metadata. This one is nice to have, as it helps on slow connections when we only want the metadata.
  • Get a list of (a subset of) all bundles in the repository. When provisioning, we already know what we want. When managing the shop we might have a use for querying features, and we should seriously look at OBR as an implementation. Also, as part of the Equinox provisioning effort, they are defining a similar model.
  • Install/update bundle, which makes the repository editable from the outside.
  • Delete bundle, mentioned separately here because of the danger of deleting bundles that might still be in use (the repository has no way of knowing what is in use). An interface covering these use cases is sketched below.
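A sketch of an interface covering these use cases; the names and signatures are illustrative assumptions, not the actual ACE or OBR API.

    import java.io.InputStream;
    import java.net.URL;
    import java.util.List;
    import java.util.Map;

    /** Illustrative sketch of an editable, URL-based bundle repository covering the use cases above. */
    public interface BundleRepository {

        /** Get bundle: returns a URL from which the full bundle can be downloaded. */
        URL getBundle(String symbolicName, String version);

        /** Get bundle metadata: returns only manifest-style metadata, useful on slow connections. */
        Map<String, String> getBundleMetadata(String symbolicName, String version);

        /** Get a list of (a subset of) all bundles, using a simple filter expression. */
        List<String> listBundles(String filter);

        /** Install or update a bundle, making the repository editable from the outside. */
        URL installBundle(InputStream bundleData);

        /** Delete a bundle; the repository cannot know whether the bundle is still in use. */
        void deleteBundle(String symbolicName, String version);
    }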

Context

+

Whilst we will no doubt create our own bundle repository, it would be a big bonus if we could work with other bundle repositories. OBR comes to mind, but there might be others. Therefore it's important to create an implementation that maps easily onto (for example) an HTTP based repository.

+

Our requirement to have URL based access to bundles ensures we can do that.

+

Possible solutions

+

As mentioned before, we basically have two solutions:

+
    +
  1. use an existing solution;
  2. +
  3. creating our own.
  4. +
+

Discussion

+

Most use cases can be done either way. If you look at the OSGi Alliance's RFC-112 for OBR, the only thing it does not support is manipulating a repository. You could argue that's because it is beyond the scope, and because currently, OBR can be implemented using any webserver (it's basically just a set of bundles and a single XML descriptor).

+

Conclusion

+

I think we should create our own implementation of OBR, extending it with editing capabilities, and perhaps subsetting it (at least initially, we might not want a whole requirements, capability and dependency mechanism in there right now, as that's something we deal with inside our provisioning system).

+

At the same time, adding these editing capabilities should not mean we cannot still generate static files that can be deployed on an external HTTP server. We do want to add an API for editing, but we don't want to make the whole repository depend on the capability to run code on that server, since we might want to do all maintenance on some client that simply uploads files to a server.

+
+
+

Copyright © 2012-2014 The Apache Software Foundation, Licensed under the Apache License, Version 2.0.
Apache ACE, the Apache ACE logo, Apache and the Apache feather logo are trademarks of The Apache Software Foundation. All other marks mentioned may be trademarks or registered trademarks of their respective owners.

+
+
Modified: websites/staging/ace/trunk/content/docs/design/index.html
==============================================================================
--- websites/staging/ace/trunk/content/docs/design/index.html (original)
+++ websites/staging/ace/trunk/content/docs/design/index.html Tue Nov 25 11:48:22 2014
@@ -101,14 +101,18 @@

Home » Docs » Design

Design

-

Design documentation

-

The following documents explain some more details on the design of various aspects in -Apache ACE. Read them if you want to know more about why certain functionality exists and -why it is implemented in this way.

+

Design and analysis documentation

+

The following documents explain some more details on the analysis and design of various +aspects in Apache ACE. Read them if you want to know more about why certain functionality +exists and why it is implemented in this way.


Added: websites/staging/ace/trunk/content/docs/design/security-analysis-flow.svg
==============================================================================
Binary file - no diff available.

Propchange: websites/staging/ace/trunk/content/docs/design/security-analysis-flow.svg
------------------------------------------------------------------------------
    svn:mime-type = image/svg+xml

Added: websites/staging/ace/trunk/content/docs/design/security-analysis.html
==============================================================================
--- websites/staging/ace/trunk/content/docs/design/security-analysis.html (added)
+++ websites/staging/ace/trunk/content/docs/design/security-analysis.html Tue Nov 25 11:48:22 2014
@@ -0,0 +1,155 @@
Security Analysis

Security is an important concern for ACE. The analysis needs to differentiate between the individual needs of each sub-system and the overall flow inside the system. Furthermore, several scenarios need to be taken into account and addressed. In general, safety issues are not part of this analysis but will be addressed separately.

Threat scenarios and possible countermeasures are subdivided by, and investigated with regard to, authentication, authorization, integrity, non-repudiation, and confidentiality. We need answers to the following questions: what kinds of "attacks", from both external and internal interfaces, can we identify (threats); how can we authenticate the different actors (human and machine) so we really know who we are talking to (authentication); who is allowed to do what in the system (authorization); who did what at which point in time (non-repudiation); and how do we encrypt and ensure the integrity of the communication, software, and configuration data (confidentiality)?

Security on the target and the relay server needs special attention, because they are most likely provided by a third party, might be accessible from the outside, and are not easily reachable for maintenance. It is, for example, possible that a target is at a remote location, accessible via the internet, and requires days to be reached physically.

Threat Scenarios

+

This analysis focuses on the OSGi framework and management agent part of the system and its interaction with a (relay) server as well as between the client (for this analysis we assume the client is a separate node, for our web based UI it just happens to be part of the server) and a server. The most likely scenarios are forced breakdown of the system (denial of service attack), malicious data that might change system behavior, attempts to take over control, and espionage.

+
    +
  1. (D)DOS - In general, it is not possible to prevent denial of service attacks. Attackers normally can find a way to overload the system. Regarding the management agent it would be for example possible to provide the agent with a huge amount of data to install so that the target either is running out of disk space or out of other processing resources. The same is possible for any other entity in the system if an attacker finds a way to make it accept data.
  2. +
  3. Malicious Data - An attacker might use malicious data as part of a DOS attack but it could be also used to gain control over the system or change some aspects of its behavior to make it easier to take over control or cause other harm.
  4. +
  5. Hostile Takeover - Attackers might be interested in taking control over (parts of) the system in order to either do espionage, change the behavior of the system to do work for them, or plainly destroy/disable entities (e.g., to harm competitors).
  6. +
  7. Eavesdropping - An attacker might be able to listen in on the communication between a target and its (relay-) server or the client and the server. This might allow to learn about the configuration of a target and getting hold of the installed software.
  8. +
  9. Physical Access - Another type of attack would be to gain physical access e.g., disassemble a target or a relay server in an attempt to steal its data and/or impersonate it. Probably the only way to avoid that is hardware encryption, which for ACE is out of scope (but can be used to further harden the system).
  10. +
+

Countermeasures

+

On the target there are two entities that are important namely, the (relay) server which is providing the target with instructions and data/code, and the management agent (i.e., the target itself). Regarding the communication between a client and the server the secure checkout and commit of object repository versions are important as well as the auditlog. The interaction between the server and a relay server is a two way data exchange where the relay server is comparable to a target in regard to the instructions and data/code it needs to get from the server and to a server that sends the auditlog to a client. One plus point from the security side is that the target is only polling the server – hence, it is not accepting any connection requests from the outside. This reduces the risk of a DOS attack but by no means makes it invulnerable against it (especially since there is a high likelihood that the underlying platform is vulnerable to DOS attacks as well). One way of workin g around the polling restrictions are ARP and DNS injection attacks that might make the target contact the wrong server. This allows for malicious data, DOS attacks, and hostile takeovers.

+

A good start to limit attack possibilities is to decouple the sub-net of the target from the internet / external world by using relay servers but this doesn't prevent the mentioned attacks and threats in all cases. Furthermore, relay servers need to support both polling and being polled due to their different roles (they are polled by the targets, need to poll deployment packages or object repositories from the server, and push the auditlogs of targets to the server). Finally, the server is only polled.

+

Authentication

+

As mentioned above, the most likely way of attacking a target or relay server is to spoof its connection to the server (whether it is a relay server or the real one). It is dangerous to rely on DNS and/or IP addresses because both might be wrong. Given the issues at stake, authentication will need to be based on certificates. An entity of the system should have a certificate (that has the id as part of it's common name) as its identity.

+

Furthermore, it needs to have a keystore of trusted root certificates (CA) and a certificate revocation list (CRL). The (relay) server needs to have a certificate as its identity that is part of a chain of trust to one of the trusted root certificates of the target or client and vice versa. Basically, this can be achieved via two ways, one is to use https with server and client certificates; the other to use certificates to sign all messages/data using our own protocol.

+
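A minimal sketch of the check on the receiving side, using the standard Java PKIX machinery, under the assumption that the trusted roots and the CRLs have already been loaded into a KeyStore and a CertStore; this is illustrative, not ACE's actual implementation.

    import java.security.GeneralSecurityException;
    import java.security.KeyStore;
    import java.security.cert.*;
    import java.util.List;

    public final class ChainOfTrustValidator {

        private final KeyStore trustStore; // trusted root certificates (CAs)
        private final CertStore crlStore;  // contains the certificate revocation list(s)

        public ChainOfTrustValidator(KeyStore trustStore, CertStore crlStore) {
            this.trustStore = trustStore;
            this.crlStore = crlStore;
        }

        /** Returns true if the presented chain leads to a trusted root and no certificate is revoked. */
        public boolean isTrusted(List<X509Certificate> chain) {
            try {
                CertificateFactory cf = CertificateFactory.getInstance("X.509");
                CertPath path = cf.generateCertPath(chain);

                PKIXParameters params = new PKIXParameters(trustStore);
                params.addCertStore(crlStore);     // make the CRLs available to the validator
                params.setRevocationEnabled(true);

                CertPathValidator.getInstance("PKIX").validate(path, params);
                return true;
            } catch (GeneralSecurityException e) {
                return false;                      // not in a chain of trust, or revoked
            }
        }
    }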

Authorization

+

We have to differentiate between several areas where authorization is needed. The provisioning part needs to make sure it is installing deployment packages from an authorized server.

+

The target itself is running an OSGi framework and can subsequently, make use of the built-in security. This is needed if deployed software components can not be trusted and would be advisable to foster "least privilege" security in general. However, the management agent will need to be able to cooperate with the framework infrastructure to set-up needed rights. Special care needs to be taken to avoid installing malicious software in a framework with security disabled or with too powerful a set of rights. Due to the life-cycle capabilities of OSGi, a malicious or faulty bundle could for example uninstall the management agent itself if the bundle is started in the absence of security or with admin permission (This aspect is not part of this analysis and will be discussed as a separate user story).

+

Assuming the additional requirements in regard to integrity and authentication are satisfied it should be sufficient to ensure the server is authorized to make changes to the target – hence, in a certificate based approach separate chains of trust can be used to determine whether a server is trusted and is authoritative for a given target. In other words, the certificate of the server can be treated as a capability (revocation is then possible via a certificate revocation list). The same applies for clients and relay servers, respectively.

+

Integrity

+

Due to the fact that authorization to provision a given version (i.e., a set of bundles) is mainly based on whether or not the current authenticated server is authoritative for a target it is of great importance that the actual deployment package has not been tampered with.

+

The deployment admin specification already defines a way to ensure integrity building upon the fact that deployment packages are Java JAR files (which can be signed). Therefore, it makes sense to only allow deployment packages that are signed by a certificate that the target has in a chain of trust.

+

Furthermore, taking into account relay servers the trusted certificates can be limited further to for example only allow the actual server certificate.

+

Deployment packages can be signed by any number of certificates so it is possible to sign a deployment package multiple times in order to make it available to different targets that follow non uniform certificate trust strategies. The same is possible for the object repositories and the auditlog.

+
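A sketch of that verification step on the target: read the signed deployment package with verification enabled and collect the signer certificates, which can then be checked against the chain of trust (for example with the validator sketched under Authentication). This is illustrative, not the actual Deployment Admin implementation.

    import java.io.File;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;
    import java.util.ArrayList;
    import java.util.Enumeration;
    import java.util.List;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public final class DeploymentPackageVerifier {

        /**
         * Reads every entry of the signed deployment package with verification enabled and collects
         * the signer certificates. A tampered entry causes JarFile to throw a SecurityException.
         */
        public List<X509Certificate> verifyAndGetSigners(File packageFile) throws IOException {
            List<X509Certificate> signers = new ArrayList<>();
            try (JarFile jar = new JarFile(packageFile, true /* verify */)) {
                byte[] buffer = new byte[8192];
                for (Enumeration<JarEntry> entries = jar.entries(); entries.hasMoreElements();) {
                    JarEntry entry = entries.nextElement();
                    try (InputStream in = jar.getInputStream(entry)) {
                        while (in.read(buffer) != -1) {
                            // an entry must be read completely before its certificates are available
                        }
                    }
                    Certificate[] certs = entry.getCertificates();
                    if (certs != null) {
                        for (Certificate cert : certs) {
                            if (cert instanceof X509Certificate) {
                                signers.add((X509Certificate) cert);
                            }
                        }
                    }
                }
            }
            return signers;
        }
    }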

Non Repudiation

+

Several entities can be responsible for changes in the system. The individual entities need to make sure they record in a non repudiation fashion who was doing what for any action taken. Conversely, the server and possibly the relay servers need a way to ensure that for example auditlog entries are really from the target they are claimed to be.

+

One way to tackle this is to use certificates to sign all data and to make sure that for all data accepted from a different entity, the signature (including the fingerprint of the signing certificate) is recorded. Taking the auditlog as an example, a target would use its certificate to sign all entries in the auditlog. Subsequently, a server or a client can be certain that a given auditlog is originating from the target it is claimed to come from (assuming the private key of the target certificate has not been exploited).

+

Furthermore, it will be easy to invalidate data from compromised entities by adding their certificates to the certificate revocation list.

+

Another, more involved example, can be a target that receives a deployment package and installs it. In this case, the manifest containing all the signatures of the content of the signed deployment package as well as all the fingerprints of the certificates that signed it need to be added to the targets auditlog and this entry would be signed by the target certificate. After the log is synchronized back to the server (possibly via several relay servers or even manually) the server can determine who signed the deployment package and where it has been installed. The same applies for clients.

+
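A sketch of how a target could sign a serialized audit log entry with its private key, and how a server could verify it against the target certificate; the serialization format, key handling, and the choice of SHA256withRSA are assumptions made for the example.

    import java.nio.charset.StandardCharsets;
    import java.security.GeneralSecurityException;
    import java.security.PrivateKey;
    import java.security.Signature;
    import java.security.cert.X509Certificate;
    import java.util.Base64;

    /** Illustrative signing helper for audit log entries; entry serialization is assumed to happen elsewhere. */
    public final class AuditLogEntrySigner {

        private final PrivateKey targetKey; // the target's private key, belonging to its certificate

        public AuditLogEntrySigner(PrivateKey targetKey) {
            this.targetKey = targetKey;
        }

        /** On the target: create a detached, Base64-encoded signature for a serialized entry. */
        public String sign(String serializedEntry) throws GeneralSecurityException {
            Signature signature = Signature.getInstance("SHA256withRSA");
            signature.initSign(targetKey);
            signature.update(serializedEntry.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(signature.sign());
        }

        /** On the server: verify an entry against the certificate the target claims to own. */
        public static boolean verify(String serializedEntry, String base64Signature,
                                     X509Certificate targetCertificate) throws GeneralSecurityException {
            Signature signature = Signature.getInstance("SHA256withRSA");
            signature.initVerify(targetCertificate);
            signature.update(serializedEntry.getBytes(StandardCharsets.UTF_8));
            return signature.verify(Base64.getDecoder().decode(base64Signature));
        }
    }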

Confidentiality

+

In most cases the software that needs to be provisioned as well as the configuration of the targets needs to be kept confidential since it may contain business secrets. This can only be ensured by means of encryption because we have to take scenarios into account where communication happens via a none secure channel like the internet.

+

One secure set-up would be to use asynchronous encryption which would furthermore not rely on a point-to-point protocol but rather enable all the way confidentiality. Alas, the deployment packages might be big and asynchronous encryption would be to slow in this case.

+

The alternative is to use SSL (most likely by means of HTTPS). The downside of SSL as for example in HTTPS is that it is often hard to set-up and relatively inconvenient and static to use if the possibility of a man in the middle attack needs to be ruled out.

+

Possibly the biggest problem, in our scenario, is that we can not assume that the common name of an entity reflects its IP/DNS name. Relay servers might be operating in networks not under the customers or our control and the same applies to targets and clients (which could have dynamic IP's and hostnames for example). This problem can be overcome by ignoring the common name in regard to authentication which might make it necessary to create some integration code for certain platforms and containers (e.g., the JVM, by default, assumes that it can resolve the common name as a host name). The downside is that such an approach would open the possibility for man in the middle attacks. Only in combination with client certificates this can be prevented (alas, this might need some more adaption on the server side).

+

Finally, the certificates on both, the server and the target side, respectively, would need to be in a chain of trust. Assuming this precondition holds, the only way to eavesdrop would then be to exploit one of the certificate's private key (e.g., via disassembling the target by an attacker that has physical access or by means of gaining access to the target via a different vulnerability). Such a key could be blacklisted by adding it to the certificate revocation list upon discovery of its exploitation.

+

Encryption

+

The physical access threat makes it possible that attackers might get hold of data (like installed bundles). Https and certificates can prevent eavesdropping while data is distributed but if an attacker can get hold of the target or a relay server it is still possible to access the data. As mentioned above, for the target the only way to prevent this would be hardware supported encryption but for relay servers it is sufficient to encrypt the data itself. We might need to support this eventually but it is not looked into further in this analysis.

+

Certificate based Flow Analysis

+

All entities (the server, the client, the relay server, and the target), have a CRL and a keystore; the former contains revoked certificates and the later the known and trusted certificate authorities. In general, for all involved certificates, for a certificate to be valid it has to be the case that it is in a chain-of-trust relation to at least one of the trusted certificate authorities and is not revoked. Furthermore, there exists a special trusted certificate known as the server authority and vice versa for the target and client. The interaction between the entities is via HTTPS and needs a valid server and client certificate. The common name of the certificate represents the target, client, or server id, respectively. As a further restriction the server certificate has to be in a chain of trust to the server certificate authority, the client certificate has to be in a chain of trust to the client certificate authority, and the target certificate has to be in a chain of trust to the target certificate authority. The data exchanged between the entities needs to be signed by the respective counterpart certificate authority. For example, a deployment package send from the server to the target needs to be signed by a valid certificate that is in a chain of trust to the server certificate authority and auditlog entries send from the target to the server must be signed by its target certificate. In other words, the signer needs to be the one that created the specific data. CLR and keystore can be treated as yet another object repository (because they need to be signed) – hence, they can be synced from a server to clients, relay servers, and subsequently, targets.

+

+

Conclusion

+

The set-up takes aforementioned countermeasure to the identified threat into account. The https connection ensures the confidentiality via encryption. Due to the server and client certificate connection authentication and authorization are addressed. The requirement of separately signed content provides integrity and non repudiation in the absence of compromised certificate private keys. Certificates with known exploited keys can be revoked by adding them to the CRLs. Authority derives from the chain of trust relation to the server and target certificate authority.

+
+
+

Copyright © 2012-2014 The Apache Software Foundation, Licensed under the Apache License, Version 2.0.
Apache ACE, the Apache ACE logo, Apache and the Apache feather logo are trademarks of The Apache Software Foundation. All other marks mentioned may be trademarks or registered trademarks of their respective owners.

+
+
Added: websites/staging/ace/trunk/content/docs/design/src/security-analysis-flow.graffle
==============================================================================
Binary file - no diff available.

Propchange: websites/staging/ace/trunk/content/docs/design/src/security-analysis-flow.graffle
------------------------------------------------------------------------------
    svn:mime-type = application/xml

Added: websites/staging/ace/trunk/content/docs/design/template-mechanism.html
==============================================================================
--- websites/staging/ace/trunk/content/docs/design/template-mechanism.html (added)
+++ websites/staging/ace/trunk/content/docs/design/template-mechanism.html Tue Nov 25 11:48:22 2014
@@ -0,0 +1,133 @@
Template Mechanism

Some artifacts (see Object Graph in Client) may need some customization before being provisioned; e.g., configuration files might need some information that is managed by one of the distributions.

The customization is done when a new version is created, i.e., on a call to approve() on a StatefulTargetObject. A customized version of the artifact (which is located somewhere in an OBR, reachable using a URL) is uploaded to the same OBR, and the URL to the customized version is stored in the DeploymentVersionObject.

Proposed design

+

In addition to the interfaces ArtifactHelper and ArtifactRecognizer, we introduce a ArtifactPreprocessor, which has a single method preprocess(ArtifactObject object, Properties props), in which Properties contains customization information (see below), and the method returns the URL of the altered artifact (or, if nothing has changed, the original artifact, or, if this changed artifact is identical to one that has already been created before, that old URL). This ArtifactPreprocessor can be published as a service (see the section on remoting below), but for local purposes, the ArtifactHelper interface gets an extra method getPreprocessor(), which returns an instance of the preprocessor to be used for the type of artifact this helper helps.

+

As an added service, we could create a basic preprocessor, VelocityBasedPreprocessor which uses the Velocity template engine to process an artifact and store it in a configured OBR; this preprocessor can be instantiated and returned by each ArtifactHelper that needs a basic processor (if no processing can be done for some type of artifact, getPreprocessor should return null).

+
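A sketch of that design: the preprocess signature follows the description above, while ArtifactObject is reduced to a minimal assumed shape and the OBR upload is stubbed out with a temporary file, so this is illustrative rather than the actual ACE implementation.

    import java.io.File;
    import java.io.InputStream;
    import java.io.StringWriter;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.util.Properties;

    import org.apache.velocity.VelocityContext;
    import org.apache.velocity.app.VelocityEngine;

    /** Assumed, minimal shape of an artifact in the client object model (illustrative only). */
    interface ArtifactObject {
        URL getURL();
    }

    /** The preprocessor contract described above: one method returning the URL of the processed artifact. */
    interface ArtifactPreprocessor {
        URL preprocess(ArtifactObject object, Properties props) throws Exception;
    }

    /** Illustrative Velocity-based implementation; storing into the OBR is stubbed out. */
    class VelocityBasedPreprocessor implements ArtifactPreprocessor {
        private final VelocityEngine engine = new VelocityEngine();

        public URL preprocess(ArtifactObject object, Properties props) throws Exception {
            String template = read(object.getURL());

            // Expose the customization tree (attributes, tags, children) to the template.
            VelocityContext context = new VelocityContext();
            for (Object key : props.keySet()) {
                context.put(String.valueOf(key), props.get(key));
            }

            StringWriter result = new StringWriter();
            engine.evaluate(context, result, "preprocess", template);

            if (result.toString().equals(template)) {
                return object.getURL(); // nothing changed, keep pointing at the original artifact
            }
            return store(result.toString()); // a real implementation would upload to the configured OBR
        }

        private String read(URL url) throws Exception {
            try (InputStream in = url.openStream()) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        }

        private URL store(String content) throws Exception {
            File file = File.createTempFile("customized-artifact", ".tmp");
            Files.write(file.toPath(), content.getBytes(StandardCharsets.UTF_8));
            return file.toURI().toURL();
        }
    }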

Customization information

+

For each template that has 'holes' to fill in, it can 'reach' all RepositoryObjects that are reachable from the TargetObject this template will be provisioned to, leading to a tree of data. Inspired by Velocity's way of finding contextual data, we propose to store the for each RepositoryObject in its own Properties object, adding its attributes and tags to it as two Properties objects using the keys "attributes" and "tags", and a List summing up all children (so, for a target, all its distributions) using the key "children"; in the end, this becomes a tree of Properties objects.

+

This way, the Velocity template can use syntax like

+
#foreach( $license in $gateway.children)
    #if ($license.attributes.vendor=="luminis")
        Default license by luminis
    #else
        Custom license by $license.attributes.vendor
    #end
#end
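The tree of Properties objects that feeds such a template could be built along these lines; RepoNode is a hypothetical stand-in for the actual repository objects, so this is a sketch, not the real client API.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;

    /** Builds the per-object Properties node described above (illustrative only). */
    final class TemplateContextBuilder {

        /** Hypothetical view of a repository object: attributes, tags and associated children. */
        interface RepoNode {
            Properties getAttributes();
            Properties getTags();
            List<RepoNode> getChildren();
        }

        static Properties toContext(RepoNode node) {
            Properties result = new Properties();
            result.put("attributes", node.getAttributes());
            result.put("tags", node.getTags());

            List<Properties> children = new ArrayList<>();
            for (RepoNode child : node.getChildren()) {
                children.add(toContext(child)); // recurse: every child becomes its own Properties node
            }
            result.put("children", children);
            return result;
        }
    }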

Support for remoting

+

Some customers might want to keep all information hidden from us, only allowing us the metadata on the server. In this case, we can deploy a ArtifactPreprocessor on the customer's site, which is then responsible for doing everything a local ArtifactPreprocessor can do, and returning a URL to the altered artifact. Then, in stead of returning an instance of the ArtifactPreprocessor, the ArtifactHelper will return some RemoteArtifactPreprocessor which implements the ArtifactPreprocessor interface, but talks to a servlet on the customer's server.

+

On the 'needsApprove' state in the StatefulTargetObject

+

With the mechanism above, determineStoreState in StatefulTargetObject would need to create a full deployment version every time we need to know whether approval is necessary. This is undesirable, because, in a remoting scenario, it means we have to pass lots of data to a servlet, oftentimes only to find out that we created a version identical to the one we already had. +So, in stead of this rigid semantics, the 'needsApprove' state will become more of a 'tainted' state, which becomes true when something happens that could have an impact on this StatefulTargetObject. We can quite easily determine what targets are affected by a given change in the model by following the associations from that object to the targets.

+
+
+

Copyright © 2012-2014 The Apache Software Foundation, Licensed under the Apache License, Version 2.0.
Apache ACE, the Apache ACE logo, Apache and the Apache feather logo are trademarks of The Apache Software Foundation. All other marks mentioned may be trademarks or registered trademarks of their respective owners.

+
+
Modified: websites/staging/ace/trunk/content/docs/index.html
==============================================================================
--- websites/staging/ace/trunk/content/docs/index.html (original)
+++ websites/staging/ace/trunk/content/docs/index.html Tue Nov 25 11:48:22 2014
@@ -130,8 +130,6 @@ background.

guide;
  • adding support for new types of artifacts is described in custom artifact type documentation;
  • -
  • if you are interested about the various roles and terminology used in Apache ACE, read - roles and terminology page;
  • to handle a large number of targets, you can make use of intermediate relay servers;
  • configuring HTTP Basic authentication in ACE is described in the authentication @@ -140,6 +138,8 @@ background.

    certificates;
  • various deployment strategies for Apache ACE are described in the ACE deployment strategies document;
  • +
  • if you are interested in performing load tests, or want to get started with automating + ACE deployments, read all about it in our test script document;
  • Developing for Apache ACE

    There are several resources available on extending and developing for Apache ACE, such as:

    @@ -157,20 +157,11 @@ background.

    Background information, designs and analysis