geronimo-user mailing list archives

From "chi runhua" <chirun...@gmail.com>
Subject Re: feature? ContainerGroup
Date Wed, 10 Dec 2008 06:01:29 GMT
Excellent discussion. Besides adding the WebContainer and the relevant log via
plan.xml, I am wondering whether Geronimo would be more accessible if the Admin
Console had a wizard for this great feature?


Jeff


On Tue, Dec 9, 2008 at 2:24 AM, Russell E Glaue <rglaue@cait.org> wrote:

> David Jencks wrote:
> > Hi Russell,
> >
> > This is getting interesting :-)  Thanks for taking the time to explain
> > what you want in such detail.
>
> This is very good. This thread gives me more insight into Geronimo operations
> than is described in the wiki and other documentation.
> And I hope it does the same for other readers of this thread trying to deploy
> Geronimo at an enterprise level like I am.
>
> >
> > Some comments inline and more at the end.
> >
> > On Dec 5, 2008, at 12:03 PM, Russell E Glaue wrote:
> >
> >> David Jencks wrote:
> >>>
> >>> On Dec 5, 2008, at 7:46 AM, Russell E Glaue wrote:
> >>>
> >>>> Given that Geronimo only allows one statically configured Jetty
> >>>> instance, configured as a GBean, to get more instances you have to
> >>>> install additional Jetty gbeans under a new container name.
> >
> > I'm wondering why you say jetty is statically configured.  I look at
> > geronimo as a bunch of plugins and we happen to supply a server with one
> > particular jetty plugin installed.  You are free to include a different
> > jetty plugin or additional jetty plugins or set up more jetty servers in
> > your web app plans.  So I fear that we have really failed to communicate
> > the essence of geronimo :-(
>
> I saw it as "statically configured" because you could not add more Jetty
> containers through a simple configuration change in var/config/config.xml.
> However, strictly speaking, Geronimo has not statically configured the Jetty
> Container either.
>
> I now understand what was being described. I thought I was being told that I
> had to copy the entire Jetty gbean code into a new gbean and rename the actual
> class names.
> But this is not true.
> I understand now that one simply writes a new gbean plan (plan.xml) file,
> packages only that single file into a gbean, and then deploys it to get the
> additional Jetty container.
>
> The examples you referenced below, particularly the app-per-port-jetty
> sample,
> finally made everything very clear.
>
>
> https://svn.apache.org/repos/asf/geronimo/samples/trunk/samples/app-per-port/app-per-port-jetty/src/main/plan/plan.xml
>
> As the example is written, this single plan file is all that is needed to
> deploy
> two separate Jetty containers with separate log files.
>
> Thank you for this example.
> All this time, I was looking to make these changes in the
> var/config/config.xml file. Now I understand that the config.xml file is only
> used to make localized changes to already deployed gbeans. I cannot create a
> new instance of a gbean in this config.xml file; new gbean instances must be
> deployed in a plan.
>
> Since the Jetty Container code already exists in Geronimo, the code does
> not
> need to be redeployed in a new gbean. All that is needed is to deploy a new
> plan
> file to create a new Jetty Container instance based on the existing Jetty
> Container code.
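>
> To make this concrete for myself, here is a minimal sketch of such a
> plan-only module. The moduleId coordinates and gbean name are hypothetical,
> and the jetty6 dependency coordinates are my assumption; the gbean class and
> attributes are taken from your example further below.
> -
> <module xmlns="http://geronimo.apache.org/xml/ns/deployment-1.2">
>     <environment>
>         <moduleId>
>             <groupId>org.example</groupId>
>             <artifactId>second-jetty-container</artifactId>
>             <version>1.0</version>
>             <type>car</type>
>         </moduleId>
>         <dependencies>
>             <!-- pulls in the existing Jetty classes; no code is packaged here -->
>             <dependency>
>                 <groupId>org.apache.geronimo.configs</groupId>
>                 <artifactId>jetty6</artifactId>
>                 <type>car</type>
>             </dependency>
>         </dependencies>
>     </environment>
>     <!-- a second Jetty container instance, reusing the existing Jetty code -->
>     <gbean name="JettyWebContainer2"
>            class="org.apache.geronimo.jetty6.JettyContainerImpl">
>         <attribute name="jettyHome">var/jetty2</attribute>
>         <reference name="ServerInfo">
>             <name>ServerInfo</name>
>         </reference>
>     </gbean>
> </module>
> -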
>
> >
> >>>>
> >>>>
> >>>> Plus, there is only one NCSARequestLog gbean, which has hard-coded the
> >>>> logfile name and location, and we are not able to change the name and
> >>>> location without recompiling.
> >>>>
> >>>> Should Geronimo be modified to allow these to be dynamic?
> >>>>
> >
> > You can change the files etc for the request log pretty easily in
> > config.xml by including something like this inside the module element
> > for jetty:
> >
> >
> >    <gbean name="JettyRequestLog">
> >         <attribute name="filename">---YOUR FILE NAME---</attribute>
> >         <attribute name="logDateFormat">--YOUR DATE FORMAT---</attribute>
> >         <attribute name="logTimeZone">GMT</attribute>
> >     </gbean>
> >
> >
> > There are a bunch more things you can configure... "filename",
> > "logDateFormat", "logTimeZone",
> >                 "retainDays", "extended", "append", "ignorePaths",
> > "preferProxiedForAddress"
> > but I'm not sure what all of them mean.
>
> I understand what they all mean, and they are documented on the Geronimo wiki
> for anyone looking for this information.
>
> Yes, I have verified this works. Thank you.
> Because of the realization I described above, I understand I am only changing
> the configuration of an existing "JettyRequestLog" gbean.
> All this time I was attempting to edit var/config/config.xml to create a new
> instance of a JettyRequestLog and put that reference in the JettyWebContainer,
> which always failed.
> A new plan must be deployed that contains a new instance of the
> "JettyRequestLog" gbean. Then I can change that configuration.
>
> Again, the examples you referenced below, particularly the
> app-per-port-jetty
> sample, finally made everything very clear.
>
> >
> >>>>
> >>>> I think that being able to spawn multiple containers and reconfigure
> >>>> log files
> >>>> should be something dynamic, and configurable in the
> >>>> var/config/config.xml file
> >>>> without having to add additional gbeans/plugins.
> >>>>
> >>>> Something like the following could result for configuration:
> >>>> -
> >>>> <module name="JettyWebContainer2"
> >>>> gbean="org.apache.geronimo.jetty6.connector.JettyWebContainerFactory">
> >>>>   ...
> >>>> </module>
> >>>>
> >>>> <module name="NCSARequestLog2"
> >>>>
> gbeanInfo="org.apache.geronimo.jetty6.requestlog.NCSARequestLogFactory">
> >>>>
> >>>>   ...
> >>>> </module>
> >>>>
> >>>> <module name="ContainerGroup"
> >>>> gbean="org.apache.geronimo.jetty6.ContainerGroup">
> >>>>   <reference name="JettyContainer">
> >>>>       <name>JettyWebContainer2</name>
> >>>>   </reference>
> >>>>   <reference name="RequestLog">
> >>>>       <name>NCSARequestLog2</name>
> >>>>   </reference>
> >>>>   <reference name="ThreadPool">
> >>>>     <name>DefaultThreadPool</name>
> >>>>   </reference>
> >>>>   <reference name="ServerInfo">
> >>>>     <name>ServerInfo</name>
> >>>>   </reference>
> >>>> </module>
> >>>> -
> >>>>
> >>>> Where JettyWebContainerFactory and NCSARequestLogFactory produce
> >>>> separate
> >>>> running instances of JettyWebContainer and NCSARequestLog
> respectively.
> >>>>
> >>>>
> >>>> Or with the Factory option, perhaps all that is needed is:
> >>>> -
> >>>> <module name="NCSARequestLog2"
> >>>>
> gbeanInfo="org.apache.geronimo.jetty6.requestlog.NCSARequestLogFactory">
> >>>>
> >>>>   ...
> >>>> </module>
> >>>>
> >>>> <module name="JettyWebContainer2"
> >>>> gbean="org.apache.geronimo.jetty6.connector.JettyWebContainerFactory">
> >>>>   ...
> >>>>   <reference name="RequestLog">
> >>>>       <name>NCSARequestLog2</name>
> >>>>   </reference>
> >>>> </module>
> >>>> -
> >>>>
> >>>>
> >>>> But I think it would be a good idea to be able to group configurations
> >>>> together.
> >>>>
> >>>> <class1>
> >>>>   <listen socket/>
> >>>>   <virtual server 1>
> >>>>       <hostname alias/>
> >>>>       <log configuration/>
> >>>>       <web application 1/>
> >>>>   </virtual server 1>
> >>>>   <virtual server 2>
> >>>>       <hostname alias/>
> >>>>       <log configuration/>
> >>>>       <web application 1/>
> >>>>   </virtual server 2>
> >>>>   <log configuration/>
> >>>> </class1>
> >>>>
> >>>> <class2>
> >>>>   <listen socket/>
> >>>>   <virtual server 1>
> >>>>       <hostname alias/>
> >>>>       <log configuration/>
> >>>>       <web application 1/>
> >>>>       <web application 2/>
> >>>>       <web application 3/>
> >>>>   </virtual server 1>
> >>>>   <log configuration/>
> >>>> </class2>
> >>>>
> >>>> Where each class is a separate Web Container.
> >>>>
> >>>>
> >>>> What is the general opinion about this?
> >>>
> >>> I'm not sure I understand how what you are proposing actually differs
> >>> from what we have now in any but cosmetic aspects.  How does the
> >>> proposed WebContainerFactory differ from the existing WebContainer?
> >>
> >> Right now, as I understand it, Geronimo only has one running (default)
> >> instance
> >> of Jetty inside of it.
> >> And as I understand it, if I want multiple web applications separated
> >> out among
> >> two port numbers, I have to compile and add a gbean of the Jetty
> >> Container, and
> >> install it into Geronimo. This would allow me to have a second
> >> instance of Jetty
> >> inside of Geronimo, which appears to be necessary if I want to deploy
> >> webapps to
> >> two different ports.
> >
> > I'm not sure what you mean by "compile".  You need to write some kind of
> > geronimo plan for the new web server and deploy it.  Options are, in my
> > order of preference (based on decreasing reusability):
> > 1. write a geronimo plugin maven project, build the plugin in your build
> > system, and assemble a custom server including it. (I'd include
> > deploying your app as a plugin, with the jetty gbeans in the plan, under
> > this option -- this is similar to (3))
> > 2. Use the same plan as from (1) but just deploy it into an existing
> > geronimo server
> > 3. Include the gbeans from the same plan in the web app plan for the app
> > that will be using the server
> > 4. include the gbean defs in config.xml in the existing jetty module.
> > (I think this requires a slightly different name for the gbeans.)
> >
> > I really don't recommend (4) because there is no way to keep track of
> > what you've done or transfer the result to any other server.
>
> Option #2 (and #1) is how the sample app you provided, app-per-port-jetty, is
> accomplished.
> The app-per-port-jetty sample only has a single plan file.
> You build that in Maven as a gbean plugin, and deploy it to Geronimo.
>
> My error, as I said before, was that I thought I had to copy the actual
> JettyContainer code into a new gbean, renaming the class names, and
> recompiling.
> There is no code compilation needed: no code is actually deployed in the
> gbean; only a single plan file is deployed.
>
>
> Below, I described something like a "factory" because I was looking at it
> from the perspective of creating the result of these gbeans dynamically in the
> config.xml file.
> So instead of building the single plan file into a gbean and deploying it in
> Geronimo to get the result, I was proposing that something like the
> configuration defined in the plan file be put in the var/config/config.xml
> file instead, to get the same results.
>
> However, the gbean plugin is the better approach. Putting this information in
> the config.xml file would require a restart/reload of Geronimo.
> With a gbean, we instead simply deploy and undeploy.
> Also, as a gbean, we can do something very wonderful in managing a server
> farm, as you have illustrated below: dynamically distributing the gbean to
> servers based on deployment needs.
>
> >
> >>
> >>
> >> Now say that I want to deploy 8 different ports, with lots of web
> >> applications
> >> distributed among those separate ports. As I continue to understand
> >> it, I use
> >> the default Jetty Container, and I must compile and add 7 more gbeans
> >> of the
> >> Jetty Container into Geronimo.
> >>
> >> This is not dynamic, because if I want to add a new port, I have to
> >> compile and
> >> add in a JettyContainer gbean. If I want to delete a port, I ideally
> >> will have
> >> to remove a JettyContainer gbean.
> >>
> >> By having a factory, like JettyContainerFactory, one could hopefully
> >> eliminate ever having to compile and install a new JettyContainer gbean.
> >>
> >> So, instead of compiling and installing a JettyContainer gbean, as has been
> >> described, in order to deploy onto another port one would simply add a few
> >> lines of configuration to var/config/config.xml.
> >> This configuration calls something like a JettyContainerFactory, which
> >> creates,
> >> initializes and starts a new JettyContainer that can be used to serve
> >> webapps on
> >> a different port as has been discussed.
> >>
> >> As to my discussion, I am referring to my previous thread 2008-12-04
> >> "How to
> >> deploy web application to one port".
> >>
> >> So how would WebContainerFactory differ from the existing WebContainer?
> >> "WebContainerFactory" creates, and initializes "WebContainer" as
> >> called in the
> >> var/config/config.xml file.
> >> Without this type of feature, as I understand it, I have to compile,
> >> and install
> >> a new WebContainer (JettyWebContainer) gbean as a plugin to Geronimo,
> >> and call
> >> that container in my configuration (ref: "Structure.", "Tomcat
> >> configuration."
> >>
> http://cwiki.apache.org/GMOxDOC22/app-per-port-running-multiple-web-apps-on-different-ports.html
> ).
> >>
> >> So I would not have to install anything, but just add a configuration
> >> to call
> >> the factory to create a new instance dynamically instead.
> >>
> >>>
> >>> We could come up with an xml schema for configuring jetty containers --
> >>> this would map into the existing gbeans.  This is pretty much a
> cosmetic
> >>> change but would require a bunch of coding.
> >>
> >> I am not quite sure what you are illustrating, but I think that is
> >> what I am
> >> trying to illustrate with the "factory". However, I do not see it as a
> >> cosmetic
> >> change, because in my proposal, I see this feature as eliminating the
> >> need to
> >> ever compile and install a new JettyWebContainer plugin in order to be
> >> able to
> >> serve through Geronimo on two different ports.
> >>
> >>>
> >>> Right now I'm suggesting you come up with a plugin modified from the
> >>> existing jetty plugin, build it with maven, and install it in your
> >>> server.... or, imitating the sample app, just add the gbeans for the
> new
> >>> container to your app plan.  I don't understand which of these steps
> you
> >>> consider onerous or why.
> >>
> >> If I only want to ever install one plugin modified from the existing
> >> jetty
> >> plugin, and never touch this again, then it is not onerous.
> >> However, I want to be able to do this dynamically. I have a server
> >> farm of 48
> >> servers. I support about 30 projects and 70 to 80 web applications.
> >> They are all
> >> clustered, and load balanced behind a controller and accelerator. I
> >> serve each
> >> of the 30 projects on their own port. We are adding more projects and
> >> more web
> >> applications about every 3 months.
> >> Having to compile and install this plugin on 48 servers every time I want
> >> to add support for another port becomes onerous. It is not onerous if all I
> >> have to do is add a few lines of configuration to do this for me.
> >
> > I'd like you to look at two samples:
> >
> > 1. I added a couple of jetty samples to the app-per-port trunk samples.
> > Samples are set up so they work better if you check out all of
> > https://svn.apache.org/repos/asf/geronimo/samples/trunk
> > and build everything first.  After that you can build the
> > samples/app-per-port separately if you want.
> >
> > The two jetty samples are..
> > a. app-per-port-war1-jetty.  This is a plugin that just includes one
> > sample war and a jetty server for it.
> > b. app-per-port-jetty.  This is a plugin that includes an ear with two
> > web apps on different ports and is set up so the ports are not available
> > until after the apps start.  This introduces a bit more complexity but
> > is how the tomcat sample works.
> >
> > Anyway I think you'll see that it is pretty easy to include the
> > configuration of a jetty server in the web app plan (cf (a)).
> >
> > This is set up to work against geronimo trunk (2.2-SNAPSHOT) but should
> > work just the same against earlier versions.
>
> Yes, this sample shows how easy it is.
>
> >
> > 2. We've been working on clustering support recently and have some
> > things working. (this is only available in trunk).  We'd certainly
> > appreciate your comments on it.   There's some documentation here:
> >
> > http://cwiki.apache.org/GMOxDOC22/plugin-based-farming.html
>
> I reviewed this briefly, and one missing item that immediately comes to mind
> is in the "Farming using Plugins" section.
> It says:
> -
> When the administration server detects this as new it instructs the new
> node to
> install all the plugin lists associated with that farm. If the plugins are
> already installed, this is a no-op; otherwise the plugins are downloaded
> and
> installed from the plugin repository specified in the plugin list.
> -
>
> It says, -If the plugin is already installed, it is not reinstalled.-
>
> What needs to be taken into account is the case where the plugin was already
> installed previously, but has been updated since the last install.
>
> Case in point: if myFunPlugin, version 1.3, was installed in July, and the
> server is ordered to install myFunPlugin in August, the server should not
> simply say "I already have it installed, so I do not have to install it
> again."
>
> The server should also not simply say "I already have it installed, so I will
> uninstall it and install it again."
>
> The server should say "Hmmm. I already have the myFunPlugin plugin installed.
> Well, let's see, the version I have installed is 1.3, and the current version
> of this plugin is 1.4. Okay, I need to install the new version of this
> plugin."
>
> On the next day's server restart, the server should see that the currently
> installed version is 1.4 and the current version is 1.4; in this case this
> would be a "no-op" and it should do nothing.
>
> >
> >  The sample I think is most useful is at
> >
> >
> https://svn.apache.org/repos/asf/geronimo/sandbox/djencks/samples/admin/failover
> >
> >
> > This particular  example is set up to use a nexus instance as the
> > geronimo plugin repository, so the idea is that your dev/qa processes
> > will end up deploying the plugin into a nexus instance.  The
> > cluster/farm controller will then tell the servers in the cluster to
> > install the plugin from that nexus server.  There are lots of other
> > choices for plugin repository: the failover sample in
> >
> > https://svn.apache.org/repos/asf/geronimo/sandbox/failover
> >
> > uses the cluster controller geronimo instance as the plugin repository:
> > this is set up so that you deploy the app to the controller server
> > (using an appropriate plan) and then distribute it to the cluster
> > members: thus it doesn't need a maven build for the plugin.
>
> I looked at the 5 sample projects in the first directory, and the 4 in the
> second directory.
> I can guess from their names what they might do; however, there is nothing
> contained in them that explains their purpose and usage well enough for me.
>
> I do see The Grinder version 3 in the samples failover-tomcat-demo and
> failover-jetty-demo.
> We use The Grinder here to load-test our servers.
> I do not see how they are used to implement failover.
>
> >
> > These samples use an ejb app as the clustered app, and also demonstrate
> > ejb client failover.  The cluster distribution stuff works just the same no
> > matter what kind of plugin or app you are distributing: we have some web
> > clustering support (using wadi) but don't have a web failover solution.
>
> When you say "web failover" you mean a scenario where servers A and B are
> running, and servers C and D are on standby. Server B is detected to go down,
> and the apps on server B are redeployed to server C and/or D.
>
> Yes, we are researching this level of technology here in our center.
> We use virtual instances, however, and a virt is paused and transferred to
> another server once problems are experienced. Or, if user demand goes beyond
> our deployed server capacity, new virt servers are brought online and added to
> a cluster.
>
> We have Red Hat Satellite Server, and we can "kick-start" new virts
> dynamically
> as needed.
>
> >
> >>
> >>
> >> On that same line, I also need to be able to configure logging for each
> >> JettyContainer on each port (project), configuring the log name and
> >> location. I
> >> need separate log files, and I also need the syslog facility.
> >
> > What is the syslog facility?  Do you know if it is available in plain
> > jetty?
>
> Plain Jetty has it, but only as built into log4j.
> Here is the sample:
>
> file: Jetty/extra/resources/log4j.xml
> -
> <?xml version="1.0" encoding="UTF-8"?>
> <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
> <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/"
> debug="false">
> ...
> <!-- Syslog events
> <appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
> <param name="Facility" value="LOCAL7"/>
> <param name="FacilityPrinting" value="true"/>
> <param name="SyslogHost" value="localhost"/>
> </appender>
> -->
> ...
> </log4j:configuration>
> -
>
> However, we use syslog to send all our access logs to a centralized server.
> Although geronimo.log, the "error" log, is log4j, configured in
> var/log/server-log4j.properties, it looks like NCSARequestLog, the "access"
> log, is not log4j.
>
> Do you know any more about this?
>
> Syslog is an operating-system-level logging mechanism.
> OS applications send their logs to the syslog daemon in addition to, or
> instead of, a file.
> The syslog daemon manages the log information, sending it to disk or over the
> network to a centralized logging server.
>
> http://en.wikipedia.org/wiki/Syslog
>
> The syslog facility is something like a group. Each facility is configured to
> operate in some fashion, so you can do something like send this facility to
> this log file, and send that facility to a centralized log server.
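>
> Since geronimo.log is driven by log4j, I imagine a syslog appender could be
> added to var/log/server-log4j.properties along these lines. This is only a
> sketch: the appender name, the facility, and the existing appender names on
> the root logger line are assumptions on my part.
> -
> # send the server log to the local syslog daemon as well
> log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
> log4j.appender.SYSLOG.SyslogHost=localhost
> log4j.appender.SYSLOG.Facility=LOCAL7
> log4j.appender.SYSLOG.FacilityPrinting=true
> log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
> log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n
> # then append SYSLOG to the existing root logger definition, e.g.
> # log4j.rootLogger=INFO, CONSOLE, FILE, SYSLOG
> -
> The access log (NCSARequestLog) would not be covered by this, since, as noted
> above, it does not appear to go through log4j.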
>
> >
> >>
> >>
> >> I would like for us to migrate to Geronimo as our next platform. But I
> >> would
> >> need to have support in Geronimo like this.
> >>
> >> We currently use the Sun Java Web Server 6, which does something like
> >> "container
> >> grouping". It allows us to say, create this group, and in this group
> >> start up a
> >> JVM Web Container, listen on this socket (host, port), send your logs
> >> to this
> >> file, and serve these web applications. I can create as many groups as
> >> I want.
> >> And each group is separate from each other.
> >>
> >
> > This sounds a lot like what you get in geronimo if you deploy additional
> > jetty servers as plugins and specify which jetty server each app will
> > use.  There's a good chance the sun server makes it easier to set up
> > these additional containers, but can you see a functional difference?
>
> Yes, I now see the functional similarities.
> Especially since, in the plugin plan, I can configure a particular web
> application and which web container it is deployed to.
> -
> <application>
>    <module>
>        <web>war1.war</web>
>        <web-app>
>            <web-container>
>                <gbean-link>JettyWebContainer1</gbean-link>
>            </web-container>
>        </web-app>
>    </module>
>    ...
> </application>
> -
>
> The schema for this configuration is "schema/geronimo-application-2.0.xsd".
> The <web></web> element is the URI path to the web archive, relative to the
> enterprise application package's main directory.
>
> So it would appear that this all has to be contained in an EAR or plugin.
> I cannot point this configuration at a web application on disk.
>
> However, it can be deployed from disk using the --inPlace option of the
> deploy command, as long as we define the web container (i.e.
> JettyWebContainer2) in the web application's geronimo-web.xml file.
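>
> For my own reference, a sketch of what such a geronimo-web.xml might look
> like. The moduleId values and context-root are made up, and the namespace
> version and element order are my reading of the 2.0 schemas, so they may
> differ by release; the web-container/gbean-link element follows the same idea
> as the application plan above.
> -
> <web-app xmlns="http://geronimo.apache.org/xml/ns/j2ee/web-2.0.1">
>     <environment>
>         <moduleId>
>             <groupId>org.example</groupId>
>             <artifactId>war1</artifactId>
>             <version>1.0</version>
>             <type>war</type>
>         </moduleId>
>     </environment>
>     <!-- context-root and target container chosen by the developer -->
>     <context-root>/war1</context-root>
>     <web-container>
>         <gbean-link>JettyWebContainer2</gbean-link>
>     </web-container>
> </web-app>
> -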
>
>
> So, what is the functional difference?
>
> Sun Web Server allows this information, including an expanded web application
> on disk, to be defined in the main server configuration, and the web
> application does not need to have something similar to "geronimo-web.xml".
>
> As a result, the developers can remain ignorant of the internal Sun web server
> configuration, and there is no chance of them messing this up and/or
> accidentally deploying to the wrong web container.
> Also, the context-root is defined in this server configuration, so that cannot
> be messed up by the developers either.
>
> With Geronimo, we can still deploy the web applications to the web container
> after the web container has been deployed in a plugin via the plugin's plan.
> We just use the `deploy --inPlace ...` command.
> However, the developer must have a geronimo-web.xml file that defines the web
> container the web application is deployed to.
> The context-root must be defined in geronimo-web.xml by the developer as well.
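>
> As a usage sketch (the path, user, and password here are placeholders, and
> the exact deployer invocation may differ):
> -
> bin/deploy.sh --user system --password manager deploy --inPlace /opt/apps/war1
> -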
>
>
> As a result, more server-type administration must be handled by the
> developer, and taken away from the web server farm administrators.
>
> This is not necessarily a bad idea, but it will require more specific training
> for the developers, and more errors are prone to pop up as a result.
>
> With the Sun web server configuration, since the server farm administrators
> defined the on-disk location of the extracted web app war and the context-root
> of the web app, this configuration was dictated by a central configuration
> system and set of administration tasks, which made it impossible for anyone to
> mess up.
>
>
> Actually there is an argument for "best practices" in both methods. So one
> method may not be better than the other.
>
>
> >>
> >>>
> >>> I'll try to look into the NCSA request log configuration soon....
> >>
> >> Thanks.
> >> I have tried to configure this without recompiling. I cannot
> >> initialize a new
> >> NCSARequestLog in the configuration, nor can I change the log file
> >> name and
> >> location, and so it appears my only option is to recompile.
> >> And for multiple installed plugin WebContainers, as has been proposed
> >> to me, it
> >> would seem I have to also recompile and install a NCSARequestLog
> >> plugin too.
> >>
> >> So this will only add to the burden of compiling and installing
> >> additional
> >> plugins in order to get additional web containers so that Geronimo can
> >> serve web
> >> applications on different ports and log to different files for each
> >> container.
> >
> > The sample app hopefully convinced you that you can add a jetty server
> > for your web app by including something like this in the geronimo plan
> > for the app:
> >
> >    <gbean name="JettyWebConnector"
> > class="org.apache.geronimo.jetty6.connector.HTTPSelectChannelConnector">
> >         <attribute name="host">localhost</attribute>
> >         <attribute name="port">8082</attribute>
> >         <attribute name="headerBufferSizeBytes">8192</attribute>
> >         <reference name="JettyContainer">
> >             <name>JettyWebContainer</name>
> >         </reference>
> >         <reference name="ThreadPool">
> >             <name>DefaultThreadPool</name>
> >         </reference>
> >         <attribute name="maxThreads">50</attribute>
> >     </gbean>
> >
> >     <gbean name="JettyWebContainer"
> > class="org.apache.geronimo.jetty6.JettyContainerImpl">
> >         <attribute name="jettyHome">var/jetty1</attribute>
> >         <!--<reference name="WebManager">-->
> >             <!--<name>JettyWebManager</name>-->
> >         <!--</reference>-->
> >         <reference name="ServerInfo">
> >             <name>ServerInfo</name>
> >         </reference>
> >     </gbean>
> >
> >     <gbean name="JettyRequestLog"
> > class="org.apache.geronimo.jetty6.requestlog.NCSARequestLog">
> >         <reference name="JettyContainer">
> >             <name>JettyWebContainer</name>
> >         </reference>
> >         <reference name="ServerInfo">
> >             <name>ServerInfo</name>
> >         </reference>
> >         <attribute
> > name="filename">var/log/jetty1_yyyy_mm_dd.log</attribute>
> >         <attribute name="logDateFormat">dd/MMM/yyyy:HH:mm:ss
> > ZZZ</attribute>
> >         <attribute name="logTimeZone">GMT</attribute>
> >     </gbean>
> >
> >
> > We could come up with some custom xml that would look something like
> > this for the same configuration:
> >
> >     <jetty-web-container name="JettyWebContainer1">
> >         <jettyHome>var/jetty1</jettyHome>
> >         <ServerInfo>
> >             <name>ServerInfo</name>
> >         </ServerInfo>
> >         <HTTPSelectChannelConnector>
> >             <host>localhost</host>
> >             <port>8082</port>
> >             <headerBufferSizeBytes>8192</headerBufferSizeBytes>
> >             <ThreadPool>
> >                 <name>DefaultThreadPool</name>
> >             </ThreadPool>
> >             <maxThreads>50</maxThreads>
> >         </HTTPSelectChannelConnector>
> >         <NCSARequestLog>
> >             <filename>var/log/jetty1_yyyy_mm_dd.log</filename>
> >             <logDateFormat>dd/MMM/yyyy:HH:mm:ss ZZZ</logDateFormat>
> >             <logTimeZone>GMT</logTimeZone>
> >         </NCSARequestLog>
> >     </jetty-web-container>
> >
> > (sorry about the indenting)
> >
> > Does this seem a lot easier to understand than the "gbean" version?
>
> I understand the gbean version better, because in that version I can now see
> the two levels of configuration.
>
> In Sun Web Server, there is only one level of configuration. Everything is
> configured in the server configuration file, and web application specifics
> in
> the deployment plan, sun-web.xml.
>
> In Geronimo, there are two levels.
> At the first level, the underlying gbean plugin defines the instance of a
> gbean,
> plus default configuration.
> At the second level, the server configuration file allows localized
> configuration of those instances.
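>
> So, once the gbean instance exists at the plugin level, the second level
> would presumably look something like this in var/config/config.xml (the
> module name and gbean name are the hypothetical ones from my plan sketch
> above, and the attribute value is just a placeholder):
> -
> <module name="org.example/second-jetty-container/1.0/car">
>     <gbean name="JettyWebContainer2">
>         <attribute name="jettyHome">var/jetty2</attribute>
>     </gbean>
> </module>
> -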
>
>
> The difference is that for Sun, when you define a new localized configuration
> for an instance, Sun Web Server dynamically spawns off a new instance with
> your localized configuration - like dynamically installing a new plugin and
> configuring it.
> Geronimo requires the instance to be defined at the plugin level first, before
> it can be configured in the server configuration.
>
> Another difference is that Sun Web Server requires a restart after doing this
> type of configuration, whereas Geronimo does not, requiring only a simple
> deploy/undeploy of this configuration. It would seem, however, that if you
> wanted to make a localized configuration change to the plugin, that would
> require a restart of Geronimo, since you would be editing the main server
> configuration file.
>
> >
> > thanks
> > david jencks
> >
> >
> >>
> >>
> >>>
> >>> thanks
> >>> david jencks
> >>>
> >>>>
> >>>>
> >>>>
> >>>> -RG
> >>>
>
>
> Overall, I think the information revealed in this thread is not understood by
> the general users of Geronimo, and I think it is information that is sought
> after.
> The wiki presents a lot of information; however, there is no simple
> introduction to Geronimo terms and ideas.
>
> For example, I wrote a 3-part article for MySQL Dev Zone:
> http://dev.mysql.com/tech-resources/articles/failover-strategy-part1.html
>
> My approach was to provide a slow introduction to the topics of "fail-over"
> and "load-balancing", providing as many written examples as I could, starting
> from simple illustrations and building on them.
> I think the information in the wiki could approach the subject we are
> discussing in the same manner.
> The sample apps you linked in this thread are really what cleared up any
> confusion I had.
>
> Additionally, I do not see any clear information in the wiki on how to set up
> something like what we are discussing in this thread.
> For example, nowhere in the wiki can I find how to change the default name
> and
> location of the log files.
> There is no clear documentation for setting up multiple containers and
> deploying
> web applications to them.
>
> I think there could be scenario-based documentation that illustrates how to
> go
> from the packaged software to the resulting installation to satisfy the
> scenario.
>
> For example, see my previous thread "GERONIMO_HOME vs. GERONIMO_BASE", which
> resulted in a bug fix and the eventual elimination of GERONIMO_BASE for
> Geronimo 2.1.4.
> Documentation on how Geronimo is intended to be configured is not complete,
> and documentation on how the internals are to be used in deployment is not
> complete.
> Although very good, what we have a lot of documentation on is the resulting
> changes the author has made to achieve some desired function. Or, in other
> words, 'this is what to configure to achieve point B.'
>
> It is all very good. However, I think some additional documentation with the
> approach I describe would help people get from point A to point B.
> The users just need to understand where the starting point is; then, like me,
> it will unlock any confusion about the remaining written documentation.
>
> If my organization can successfully move forward with deploying Geronimo in
> the enterprise, I will be able to contribute this work to the community, and
> can document my process as I move forward.
>
>
> Also, I am interested in working with the Geronimo farming technology. It
> seems
> like something that would be really useful to my organization.
>
>
> Thank you very much.
> -RG
>
