ace-users mailing list archives

From Bree Van Oss <Bree.Van...@myfuelmaster.com>
Subject [feedback] hurdles when working with ACE
Date Wed, 03 Feb 2016 18:28:29 GMT
Hi Jan Willem,

Following is a list of issues we have encountered with ACE (2.0.1), along with a few suggestions.

Before getting into that, I'm curious whether you have considered providing plugins for
one or more deployment/configuration-management platforms (e.g. Chef, Ansible, Salt)? As I
see it, ACE's primary value-add is its ability to perform incremental, live updates to an
OSGi container. It would be awesome to leverage the larger communities that have grown up
around these toolsets.

Issues

The UI's lack of support for concurrent workspace edits leads to conflicts in a high-use
environment like our internal ACE server, which is used to provision testers' environments.

The current UI is clunky and unpredictable.

We've had difficulty using GoSH as a scripting language. Its syntax is non-obvious and does
not seem to follow any "standard" syntax conventions.

We need a well-defined, documented way to hook into the deployment lifecycle in the agent to
support (at least) the following:
- Execute code when a deployment starts and/or completes (with success or failure).
- Programmatically abort a deployment attempt.
- Define deployment windows (for example, configure a deployment via the UI, but know that
the deployment won't start until the target is within the deployment window).
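To make the ask concrete, here is a rough Java sketch of the kind of listener SPI we have in
mind. To be clear, none of these types exist in ACE today; every name below is invented for
illustration.

```java
// Hypothetical deployment-lifecycle SPI -- nothing here is a real ACE API.
import java.time.LocalTime;

public class DeploymentHookSketch {

    /** Outcome reported when a deployment run finishes. */
    public enum Result { SUCCESS, FAILURE }

    /** Hook the agent would invoke around each deployment attempt. */
    public interface DeploymentListener {
        /** Return false to abort (or defer) the deployment before it starts. */
        boolean deploymentStarting(String targetId);

        /** Called when the deployment finishes, with its result. */
        void deploymentCompleted(String targetId, Result result);
    }

    /** Example: only allow deployments inside a configured time window. */
    public static class WindowedListener implements DeploymentListener {
        private final LocalTime open;
        private final LocalTime close;
        Result lastResult;

        public WindowedListener(LocalTime open, LocalTime close) {
            this.open = open;
            this.close = close;
        }

        @Override
        public boolean deploymentStarting(String targetId) {
            LocalTime now = LocalTime.now();
            // Defer the deployment when the target is outside the window.
            return !now.isBefore(open) && now.isBefore(close);
        }

        @Override
        public void deploymentCompleted(String targetId, Result result) {
            lastResult = result; // e.g. log, alert, or trigger a rollback here
        }
    }
}
```

With something like this, the deployment-window case becomes a one-class listener instead of
a fork of the agent.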

Managing all the various config files has been problematic; often the same values are
replicated across multiple config files.

We had to extend the ConnectionManager and ConnectionFactory (server and agent) to provide
support for HTTPS with PKCS#11.
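For reference, the JDK already ships a route for this via the SunPKCS11 provider, which is
driven by a small config file. The name, library path, and slot below are placeholders for
whatever HSM or smart-card middleware is in use:

```
# Placeholder SunPKCS11 provider config (e.g. pkcs11.cfg)
name = AceHsm
library = /usr/lib/opensc-pkcs11.so
slot = 0
```

Once such a provider is registered, client keys are reachable through
KeyStore.getInstance("PKCS11"). Ideally ACE's ConnectionFactory would accept a
pre-configured provider/keystore like this without requiring us to subclass it.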

Certificate validation should be implemented using a PKIX TrustManager via Jetty. No custom
code should be required for basic validation, revocation checking, etc.
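For comparison, here is roughly what that looks like with plain JDK APIs (no ACE- or
Jetty-specific types; the class and method names here are our own, but the JSSE calls are
standard):

```java
// Build standard PKIX trust managers, optionally with revocation checking,
// using only JDK APIs. Jetty (or ACE's connection layer) could then be handed
// the resulting TrustManagers instead of custom validation code.
import java.security.KeyStore;
import java.security.cert.PKIXBuilderParameters;
import java.security.cert.X509CertSelector;
import javax.net.ssl.CertPathTrustManagerParameters;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;

public class PkixTrust {

    /** Create PKIX trust managers from a trust store. */
    public static TrustManager[] pkixTrustManagers(KeyStore trustStore,
            boolean checkRevocation) throws Exception {
        PKIXBuilderParameters params =
                new PKIXBuilderParameters(trustStore, new X509CertSelector());
        // Turns on CRL/OCSP checking in the platform's PKIX implementation.
        params.setRevocationEnabled(checkRevocation);

        TrustManagerFactory tmf = TrustManagerFactory.getInstance("PKIX");
        tmf.init(new CertPathTrustManagerParameters(params));
        return tmf.getTrustManagers();
    }
}
```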

We have to clean our internal Dev/QA ACE instance every few weeks because the server becomes
unresponsive. Restarting ACE does not resolve the issue.
- We're using CDS with around 150 artifacts in each distribution and between 10 and 20 targets
at any given time, so I'm sure there's a lot of metadata, but it's frustrating to deal with
these performance issues.
- We're very concerned that this issue will bite us in production. We will need to keep at
least two or three releases in ACE, and other than completely blowing ACE (OBR and bundle
cache) away, we don't have a good mechanism to clean up old distributions.

Suggestions

UI/REST-API revamp
- support multiple users performing concurrent updates

Persistence revamp
- support multi-user scenarios
- support for Nexus as OBR

Rolling, file-based logging out of the box
- Log target activity (deployment started/completed, results, etc)
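To illustrate the kind of defaults we'd expect, something as small as a stock
java.util.logging configuration would cover it (the file pattern and sizes below are only
examples):

```
# Example rolling-log defaults (java.util.logging properties)
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = %h/ace/agent-%g.log
java.util.logging.FileHandler.limit = 1048576
java.util.logging.FileHandler.count = 10
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
```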

Deployment history
- Via the UI

Better support for config files when using a CDS-style deployment scenario
- When using CDS, config files that have not changed are _always_ uploaded
- Config files uploaded using CDS are not grouped like multiple versions of JARs are

Upgrade Jetty from 7 to latest (9.x)

Upgrade Felix Dependency Manager

Provide a secure (HTTPS) configuration out of the box.

Provide a way to "cascade delete" a distribution via the UI. Delete should remove the distribution,
related features and related artifacts, if they aren't being referenced by another distribution.

Migrate to Git from SVN to facilitate more community involvement

Consider developing ACE-based plugins for DevOps tools like Chef and Ansible.

Thanks,
Bree

P.S. I sent this to the mailing list before I was subscribed and I think the initial mailing
was dropped. Apologies if this results in a duplicate submission.
