Date: Wed, 11 Feb 2009 13:08:09 -0600 (CST)
From: Chance Yeoman
To: dev@geronimo.apache.org
Subject: Re: Pulling Geronimo Configuration

On Feb 11, 2009, at 9:32 AM, Chance Yeoman wrote:
>
> Thank you for the information,
>
> My understanding of the plugin-based farming is incomplete as well.
> From the Geronimo 2.1 documentation of plugin-based farming, it did
> not seem as if node server configuration was checked and updated
> upon startup. Is this new to Geronimo 2.2 farming?

I think the plugin-based farming is only in 2.2. IIRC the
deployment-based farming in 2.1 relies on pushing stuff.

>
> As an alternative to multicast, our configuration could use a
> configured admin server approach to node discovery by using a
> virtual IP that would route to a master server. Unlike
> multicasting, this approach would require configuration external to
> Geronimo to remain flexible, like virtual IP routing or DNS
> management. While multicast would be a better choice for most
> configurations, would it be plausible to include admin server
> hostname/IP-based node discovery as a configuration option, with
> multicast as the default?

That sounds reasonable to me, although I don't know how virtual IP
works. Since we're talking about a new feature it would be better to
move the discussion to the dev list, and if you would open a jira that
would also help.

Depending on your requirements I can think of a couple of possible
strategies:

1. When a node starts up it requests the plugin list from the admin
   server. In this case the admin server doesn't track the node
   members, and if you update a plugin list, nodes won't know until
   they restart.

2. When a node starts up it starts pinging the admin server. The admin
   server tracks cluster members similarly to how it does now with
   multicast. Changes to plugin lists will be propagated quickly to
   running nodes.

I think (2) would be easier to implement as it just replaces the
multicast heartbeat with a more configured one. Would you be
interested in contributing an implementation?

thanks
david jencks


Absolutely. I'm taking a peek at the MulticastDiscoveryAgent code to
get an idea of how a new implementation would plug in. I agree that
strategy (2) would be a good place to start, as the behavior of the
server farm would be the same regardless of the node discovery
mechanism.

One requirement that this would not address is the ability to rotate
plugin deployments to cluster member nodes one at a time or in
separate batches rather than all at once. This functionality would
probably belong elsewhere, though.

I will open a jira about optionally configuring an admin host for node
discovery and begin work on an implementation.

Thanks for your help,

Chance
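
(For illustration only: a minimal sketch of what strategy (2) might
look like on the node side, with the multicast group swapped for a
single configured admin host. Every class, port, and parameter name
below is invented for this sketch; it is not the existing Geronimo
farming or MulticastDiscoveryAgent API.)

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /**
     * Hypothetical node-side agent: instead of joining a multicast
     * group it sends its heartbeat to one configured admin host.
     */
    public class ConfiguredAdminDiscoveryAgent {

        private final InetSocketAddress adminAddress; // admin host + agreed port
        private final String nodeId;                  // unique per cluster member
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public ConfiguredAdminDiscoveryAgent(String adminHost, int adminPort, String nodeId) {
            this.adminAddress = new InetSocketAddress(adminHost, adminPort);
            this.nodeId = nodeId;
        }

        /** Start the heartbeat; the admin side would treat each ping like
         *  a multicast heartbeat and track this member. */
        public void start() {
            scheduler.scheduleAtFixedRate(this::ping, 0, 5, TimeUnit.SECONDS);
        }

        private void ping() {
            byte[] payload = nodeId.getBytes(StandardCharsets.UTF_8);
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.send(new DatagramPacket(payload, payload.length, adminAddress));
            } catch (Exception e) {
                // admin server unreachable; keep trying -- it may come back or fail over
            }
        }

        public void stop() {
            scheduler.shutdownNow();
        }
    }

Since each ping would be handled the same way a multicast heartbeat is
handled today, the rest of the farming machinery should not need to
change.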
>
> Thank you,
>
> Chance
>
> --
> Center for the Application of Information Technologies
>
> ----- Original Message -----
> From: "David Jencks"
> To: user@geronimo.apache.org
> Sent: Tuesday, February 10, 2009 6:32:11 PM GMT -06:00 US/Canada Central
> Subject: Re: Pulling Geronimo Configuration
>
>
> On Feb 10, 2009, at 1:52 PM, Russell E Glaue wrote:
>
>> David Jencks wrote:
>>>
>>> On Feb 10, 2009, at 8:09 AM, Chance Yeoman wrote:
>>>
>>>>
>>>> Hello All,
>>>>
>>>> I am interested in setting up geronimo installations that can pull
>>>> installed plugins and their dependencies exclusively from a
>>>> repository within a master geronimo server. I hope to eventually
>>>> have an automated process allowing cluster members to poll a
>>>> cluster-specific geronimo server repository for available, locally
>>>> uninstalled plugins. My goal is to be able to more easily manage
>>>> geographically separated cluster members and to quickly add or
>>>> reinitialize nodes.
>>>>
>>>> I've been having trouble getting started, as I receive HTTP 401
>>>> responses when installing remote plugins using the admin interface,
>>>> even with security turned off on the maven-repo URL. I can list the
>>>> contents of the remote server's repository, but not install plugins.
>>>
>>> That's pretty odd. Can you show the urls being used? You should be
>>> able to check that it's working with a browser.
>>>
>>>>
>>>> My question is: Is using the GeronimoAsMavenServlet even the correct
>>>> approach to pull-based configuration? How have others implemented
>>>> configuration pulling? Any advice would be greatly appreciated.
>>>
>>> If you use a geronimo server as the plugin repo then
>>> GeronimoAsMavenServlet is the correct approach. However, if I were
>>> you I would give significant consideration to using nexus as the
>>> plugin repo. I think you will have a much easier time integrating
>>> this with a reasonable build/qa process. In particular, if you build
>>> the plugins using maven with the car-maven-plugin, you can set the
>>> distribution management repos to be the nexus server and have mvn
>>> deploy or mvn release make the plugin available to the appropriate
>>> production servers.
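
(Side note on the HTTP 401 mentioned earlier in the thread: the remote
maven-repo URL can be sanity-checked outside the console with a plain
GET, which shows whether the servlet itself is demanding credentials.
The host and path below are placeholders only; use whatever URL your
server actually exposes for GeronimoAsMavenServlet.)

    import java.net.HttpURLConnection;
    import java.net.URL;

    /** Quick reachability check for a remote plugin repository URL. */
    public class RepoCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder URL -- substitute the maven-repo URL you are using.
            String repoUrl = args.length > 0 ? args[0]
                    : "http://master-host:8080/plugin/maven-repo/";
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(repoUrl).openConnection();
            conn.setRequestMethod("GET");
            System.out.println(repoUrl + " -> HTTP " + conn.getResponseCode());
            // A 401 here means the URL itself demands credentials,
            // independent of anything the plugin installer sends.
            String auth = conn.getHeaderField("WWW-Authenticate");
            if (auth != null) {
                System.out.println("WWW-Authenticate: " + auth);
            }
        }
    }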
>>
>> For satisfying this scenario, how does nexus compare to Archiva, or
>> Artifactory?
>>
>> Archiva: http://archiva.apache.org/
>> Artifactory: http://www.jfrog.org/products.php
>
> I only have experience with nexus and it's worked great for me. I'm
> not thrilled with the license. I haven't actually looked but have a
> strong impression that it has a lot more/better features than the
> older managers.
>
>>
>>>
>>> I hope you are also aware of the plugin-based clustering/farming
>>> support that may provide the features you need for easy rollout to
>>> multiple servers. If the existing features there don't exactly match
>>> your needs please work with us to improve this. For instance IIUC
>>> since you indicate your cluster members are geographically separate
>>> the current multicast discovery of cluster members may not work for
>>> you... however changing this to a hardcoded set of servers should be
>>> pretty easy. Or perhaps you want a hybrid approach where a bunch of
>>> multicast-connected sub-clusters aggregate to a controller.
>>
>> I think the desire is to pull down the artifacts, initiated from the
>> end geronimo server. So if Geronimo starts up, it can go to the
>> central Maven repo and see if it needs to pull down anything for
>> configuration.
>>
>> The plugin-based farming, from my understanding, does the opposite.
>
> Your understanding is incomplete. With plugin-based farming the
> actual artifacts are pulled by each cluster member from the
> repository.
>
>> The central server pushes out the new artifacts to the end web
>> servers. And perhaps this introduces a few possibly undesired
>> circumstances:
>>
>> 1. Centrally pushed out, all servers receive the updates at one time,
>> not staggering the updates. Unless you put the servers into multiple
>> groups so that each group can receive updates at different times.
>> But that is more administration.
>> 2. If a server is offline when the push-out occurs, it is out of date
>> when it comes back online. Some kind of re-sync has to happen.
>>
>> If the end geronimo server does a pull on start-up, then it will
>> always be in sync at run time. If we know what triggers the pull, an
>> administrator can program this into a distributed command (like Rio,
>> or RHN Satellite command) to tell the server to sync itself.
>
> Plugin-based farming does pretty much this administration step. The
> admin server keeps (in a db) plugins, plugin lists, clusters, and
> plugin-list to plugin and cluster to plugin list associations. It
> listens on a multicast address. When a cluster member starts up it
> starts a heartbeat ping on that multicast address. When the admin
> server recognizes a new cluster member it sends it a list of all the
> plugins that are supposed to be installed on it. The cluster member
> then installs all the missing plugins on the list.
>
> If you don't like multicast you have to figure out some other way for
> the cluster members to find the admin server, such as by telling it.
> Then when the admin server fails and you have to move it you need a
> way to tell all the cluster members to look elsewhere. I know
> multicast is often frowned on but I couldn't think of a plausible
> alternative that seemed like it would actually work. If you have any
> ideas I'd love to hear them.
>
> If you don't have any need for dynamic plugin administration but are
> happy to kill, reinstall, and restart a cluster member whenever the
> plugins change then you could do something pretty easily with gshell
> to start the server and install a list of plugins on it.... you can
> script this very easily so you'd only be shipping a script to the
> cluster members.
>
> thanks
> david jencks
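
(For completeness, a sketch of the admin-side bookkeeping that the
configured-admin-host variant discussed above would need. It mirrors
what David describes for multicast: notice heartbeats, spot new members
and hand them their plugin list, and drop members whose heartbeats
stop. The class name and timeout are made up for the sketch.)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /** Hypothetical admin-side registry fed by the unicast heartbeats. */
    public class HeartbeatRegistry {

        private static final long EXPIRY_MILLIS = 30_000; // assumed timeout

        private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

        /** Record a heartbeat; returns true when the node is new, which
         *  is the moment the admin server would push its plugin list. */
        public boolean recordHeartbeat(String nodeId) {
            return lastSeen.put(nodeId, System.currentTimeMillis()) == null;
        }

        /** Called periodically to forget members that stopped pinging. */
        public void expireStaleNodes() {
            long cutoff = System.currentTimeMillis() - EXPIRY_MILLIS;
            lastSeen.entrySet().removeIf(e -> e.getValue() < cutoff);
        }
    }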
>
>>
>>
>> -RG
>>
>>
>>>
>>> thanks
>>> david jencks
>>>
>>>>
>>>> Thank you,
>>>> Chance
>>
>