logging-commits mailing list archives

From rpo...@apache.org
Subject svn commit: r1481255 [2/2] - in /logging/log4j/log4j2/trunk/src/site: site.xml xdoc/manual/appenders.xml
Date Sat, 11 May 2013 03:36:25 GMT
Modified: logging/log4j/log4j2/trunk/src/site/xdoc/manual/appenders.xml
URL: http://svn.apache.org/viewvc/logging/log4j/log4j2/trunk/src/site/xdoc/manual/appenders.xml?rev=1481255&r1=1481254&r2=1481255&view=diff
==============================================================================
--- logging/log4j/log4j2/trunk/src/site/xdoc/manual/appenders.xml (original)
+++ logging/log4j/log4j2/trunk/src/site/xdoc/manual/appenders.xml Sat May 11 03:36:25 2013
@@ -278,97 +278,108 @@
 </configuration>]]></pre>
           </p>
         </subsection>
-        <a name="FileAppender"/>
-        <subsection name="FileAppender">
-          <p>The FileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter. The
-            FileAppender uses a FileManager (which extends OutputStreamManager) to actually perform the file I/O. While
-            FileAppenders from different Configurations cannot be shared, the FileManagers can be if the Manager is
-            accessible. For example, two webapps in a servlet container can have their own configuration and safely
-            write to the same file if Log4J is in a ClassLoader that is common to both of them.</p>
-          <table>
-            <tr>
-              <th>Parameter Name</th>
-              <th>Type</th>
-              <th>Description</th>
-            </tr>
-            <tr>
-              <td>append</td>
-              <td>boolean</td>
-              <td>When true - the default, records will be appended to the end of the file. When set to false,
-                the file will be cleared before new records are written.</td>
-            </tr>
-            <tr>
-              <td>bufferedIO</td>
-              <td>boolean</td>
-              <td>When true - the default, records will be written to a buffer and the data will be written to
-                disk when the buffer is full or, if immediateFlush is set, when the record is written.
-                File locking cannot be used with bufferedIO. Performance tests have shown that using buffered I/O
-                significantly improves performance, even if immediateFlush is enabled.</td>
-            </tr>
-            <tr>
-              <td>filter</td>
-              <td>Filter</td>
-              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
-              may be used by using a CompositeFilter.</td>
-            </tr>
-            <tr>
-              <td>fileName</td>
-              <td>String</td>
-              <td>The name of the file to write to. If the file, or any of its parent directories, do not exist,
-                they will be created.</td>
-            </tr>
-            <tr>
-              <td>immediateFlush</td>
-              <td>boolean</td>
-              <td><p>When set to true - the default, each write will be followed by a flush.
-                This will guarantee the data is written
-                to disk but could impact performance.</p>
-                <p>Flushing after every write is only useful when using this
-				appender with synchronous loggers. Asynchronous loggers and
-				appenders will automatically flush at the end of a batch of events, 
-				even if immediateFlush is set to false. This also guarantees
-				the data is written to disk but is more efficient.</p>
-              </td>
-            </tr>
-            <tr>
-              <td>layout</td>
-              <td>Layout</td>
-              <td>The Layout to use to format the LogEvent</td>
-            </tr>
-            <tr>
-              <td>locking</td>
-              <td>boolean</td>
-              <td>When set to true, I/O operations will occur only while the file lock is held allowing FileAppenders
-                in multiple JVMs and potentially multiple hosts to write to the same file simultaneously. This
-                will significantly impact performance so should be used carefully. Furthermore, on many systems
-                the file lock is "advisory" meaning that other applications can perform operations on the file
-                without acquiring a lock. The default value is false.</td>
-            </tr>
-
-            <tr>
-              <td>name</td>
-              <td>String</td>
-              <td>The name of the Appender.</td>
-            </tr>
-            <tr>
-              <td>suppressExceptions</td>
-              <td>boolean</td>
-              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to
-                false exceptions will be percolated to the caller.</td>
-            </tr>
-            <caption align="top">FileAppender Parameters</caption>
-          </table>
-           <p>
-            Here is a sample File configuration:
+			<a name="FastFileAppender" />
+			<subsection name="FastFileAppender">
+			<p><i>Experimental, may replace FileAppender in a future release.</i></p>
+				<p>
+					The FastFileAppender is similar to the standard
+					<a href="#FileAppender">FileAppender</a>
+					except it is always buffered (this cannot be switched off)
+					and internally it uses a
+					<tt>ByteBuffer + RandomAccessFile</tt>
+					instead of a
+					<tt>BufferedOutputStream</tt>.
+					We saw a 20-200% performance improvement compared to
+					FileAppender with "bufferedIO=true" in our
+					<a href="async.html#FastFileAppenderPerformance">measurements</a>.
+					Similar to the FileAppender,
+					FastFileAppender uses a FastFileManager to actually perform the
+					file I/O. While FastFileAppenders
+					from different Configurations
+					cannot be shared, the FastFileManagers can be if the Manager is
+					accessible. For example, two webapps in a
+					servlet container can have
+					their own configuration and safely
+					write to the same file if Log4j
+					is in a ClassLoader that is common to
+					both of them.
+				</p>
+				<table>
+					<tr>
+						<th>Parameter Name</th>
+						<th>Type</th>
+						<th>Description</th>
+					</tr>
+					<tr>
+						<td>append</td>
+						<td>boolean</td>
+						<td>When true - the default, records will be appended to the end
+							of the file. When set to false,
+							the file will be cleared before
+							new records are written.
+						</td>
+					</tr>
+					<tr>
+						<td>fileName</td>
+						<td>String</td>
+						<td>The name of the file to write to. If the file, or any of its
+							parent directories, do not exist,
+							they will be created.
+						</td>
+					</tr>
+					<tr>
+						<td>filter</td>
+						<td>Filter</td>
+						<td>A Filter to determine if the event should be handled by this
+							Appender. More than one Filter
+							may be used by using a CompositeFilter.
+						</td>
+					</tr>
+					<tr>
+						<td>immediateFlush</td>
+						<td>boolean</td>
+		              <td><p>When set to true - the default, each write will be followed by a flush.
+		                This will guarantee the data is written
+		                to disk but could impact performance.</p>
+		                <p>Flushing after every write is only useful when using this
+						appender with synchronous loggers. Asynchronous loggers and
+						appenders will automatically flush at the end of a batch of events, 
+						even if immediateFlush is set to false. This also guarantees
+						the data is written to disk but is more efficient.</p>
+		              </td>
+					</tr>
+					<tr>
+						<td>layout</td>
+						<td>Layout</td>
+						<td>The Layout to use to format the LogEvent</td>
+					</tr>
+					<tr>
+						<td>name</td>
+						<td>String</td>
+						<td>The name of the Appender.</td>
+					</tr>
+					<tr>
+						<td>suppressExceptions</td>
+						<td>boolean</td>
+						<td>The default is true, causing exceptions to be internally
+							logged and then ignored. When set to
+							false exceptions will be
+							percolated to the caller.
+						</td>
+					</tr>
+					<caption align="top">FastFileAppender Parameters</caption>
+				</table>
+				<p>
+					Here is a sample FastFile configuration:
 
-            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+					<pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
 <configuration status="warn" name="MyApp" packages="">
   <appenders>
-    <File name="MyFile" fileName="logs/app.log">
+    <FastFile name="MyFile" fileName="logs/app.log">
       <PatternLayout>
         <pattern>%d %p %c{1.} [%t] %m%n</pattern>
       </PatternLayout>
-    </File>
+    </FastFile>
   </appenders>
   <loggers>
     <root level="error">
@@ -376,279 +387,276 @@
     </root>
   </loggers>
 </configuration>]]></pre>
-          </p>
-        </subsection>
-        <a name="FlumeAppender"/>
-        <subsection name="FlumeAppender">
-          <p><i>This is an optional component supplied in a separate jar.</i></p>
-          <p><a href="http://flume.apache.org/index.html">Apache Flume</a> is a distributed, reliable,
-            and available system for efficiently collecting, aggregating, and moving large amounts of log data
-            from many different sources to a centralized data store. The FlumeAppender takes LogEvents and sends
-            them to a Flume agent as serialized Avro events for consumption.</p>
-          <p>
-            The Flume Appender supports three modes of operation.
-            <ol>
-              <li>It can act as a remote Flume client which sends Flume events via Avro to a Flume Agent configured
-              with an Avro Source.</li>
-              <li>It can act as an embedded Flume Agent where Flume events pass directly into Flume for processing.</li>
-              <li>It can persist events to a local BerkeleyDB datastore and then asynchronously send the events to
-              Flume, similar to the embedded Flume Agent but without most of the Flume dependencies.</li>
-            </ol>
-            Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel and then
-            control will be immediately returned to the application. All interaction with remote agents will occur
-            asynchronously. Setting the "type" attribute to "Embedded" will force the use of the embedded agent. In
-            addition, configuring agent properties in the appender configuration will also cause the embedded agent
-            to be used.
-          </p>
-          <table>
-            <tr>
-              <th>Parameter Name</th>
-              <th>Type</th>
-              <th>Description</th>
-            </tr>
-            <tr>
-              <td>agents</td>
-              <td>Agent[]</td>
-              <td>An array of Agents to which the logging events should be sent. If more than one agent is specified
-                the first Agent will be the primary and subsequent Agents will be used in the order specified as
-                secondaries should the primary Agent fail. Each Agent definition supplies the Agent's host and port.
-                The specification of agents and properties is mutually exclusive. If both are configured an
-                error will result.</td>
-            </tr>
-            <tr>
-              <td>agentRetries</td>
-              <td>integer</td>
-              <td>The number of times the agent should be retried before failing to a secondary. This parameter is
-                ignored when type="persistent" is specified (agents are tried once before failing to the next).</td>
-            </tr>
-            <tr>
-              <td>batchSize</td>
-              <td>integer</td>
-              <td>Specifies the number of events that should be sent as a batch. The default is 1. <i>This
-                parameter only applies to the Flume NG Appender.</i></td>
-            </tr>
-            <tr>
-              <td>compress</td>
-              <td>boolean</td>
-              <td>When set to true the message body will be compressed using gzip</td>
-            </tr>
-            <tr>
-              <td>connectTimeout</td>
-              <td>integer</td>
-              <td>The number of milliseconds Flume will wait before timing out the connection.</td>
-            </tr>
-            <tr>
-              <td>dataDir</td>
-              <td>String</td>
-              <td>Directory where the Flume write ahead log should be written. Valid only when embedded is set
-                to true and Agent elements are used instead of Property elements.</td>
-            </tr>
-            <tr>
-              <td>filter</td>
-              <td>Filter</td>
-              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
-              may be used by using a CompositeFilter.</td>
-            </tr>
-            <tr>
-              <td>eventPrefix</td>
-              <td>String</td>
-              <td>The character string to prepend to each event attribute in order to distinguish it from MDC attributes.
-                The default is an empty string.</td>
-            </tr>
-            <tr>
-              <td>flumeEventFactory</td>
-              <td>FlumeEventFactory</td>
-              <td>Factory that generates the Flume events from Log4j events. The default factory is the
-                FlumeAvroAppender itself.</td>
-            </tr>
-            <tr>
-              <td>layout</td>
-              <td>Layout</td>
-              <td>The Layout to use to format the LogEvent. If no layout is specified RFC5424Layout will be used.</td>
-            </tr>
-            <tr>
-              <td>maxDelay</td>
-              <td>integer</td>
-              <td>The maximum number of seconds to wait for batchSize events before publishing the batch.</td>
-            </tr>
-            <tr>
-              <td>mdcExcludes</td>
-              <td>String</td>
-              <td>A comma separated list of mdc keys that should be excluded from the FlumeEvent. This is mutually
-                exclusive with the mdcIncludes attribute.</td>
-            </tr>
-            <tr>
-              <td>mdcIncludes</td>
-              <td>String</td>
-              <td>A comma separated list of mdc keys that should be included in the FlumeEvent. Any keys in the MDC
-                not found in the list will be excluded. This option is mutually exclusive with the mdcExcludes
-                attribute.</td>
-            </tr>
-            <tr>
-              <td>mdcRequired</td>
-              <td>String</td>
-              <td>A comma separated list of mdc keys that must be present in the MDC. If a key is not present a
-                LoggingException will be thrown.</td>
-            </tr>
-            <tr>
-              <td>mdcPrefix</td>
-              <td>String</td>
-              <td>A string that should be prepended to each MDC key in order to distinguish it from event attributes.
-                The default string is "mdc:".</td>
-            </tr>
-            <tr>
-              <td>name</td>
-              <td>String</td>
-              <td>The name of the Appender.</td>
-            </tr>
-            <tr>
-              <td>properties</td>
-              <td>Property[]</td>
-              <td><p>One or more Property elements that are used to configure the Flume Agent. The properties must be
-                configured without the agent name (the appender name is used for this) and no sources can be
-                configured. All other Flume configuration properties are allowed. Specifying both Agent and Property
-                elements will result in an error.</p>
-                <p>When used in Persistent mode, the valid properties are:
-                  <ol>
-                  <li>"keyProvider" to specify the name of the plugin to provide the secret key for encryption.</li>
-                </ol></p>
-              </td>
-            </tr>
-            <tr>
-              <td>requestTimeout</td>
-              <td>integer</td>
-              <td>The number of milliseconds Flume will wait before timing out the request.</td>
-            </tr>
+				</p>
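+				<p>
+					The attributes documented above can be combined on the FastFile
+					element. As a minimal illustrative sketch (using only the parameters
+					from the table above), an appender that relies on buffered writes
+					rather than flushing after every event, and that clears the file at
+					startup, might be configured as follows:
+
+					<pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <!-- immediateFlush="false": data is flushed when the buffer fills (or when an
+         asynchronous logger finishes a batch) instead of after every write.
+         append="false": the file is cleared before new records are written. -->
+    <FastFile name="MyFile" fileName="logs/app.log" immediateFlush="false" append="false">
+      <PatternLayout>
+        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
+      </PatternLayout>
+    </FastFile>
+  </appenders>
+  <loggers>
+    <root level="error">
+      <appender-ref ref="MyFile"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+				</p>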
+			</subsection>
+			<a name="FastRollingFileAppender" />
+			<subsection name="FastRollingFileAppender">
+			<p><i>Experimental, may replace RollingFileAppender in a future release.</i></p>
+				<p>
+					The FastRollingFileAppender is similar to the standard
+					<a href="#RollingFileAppender">RollingFileAppender</a>
+					except it is always buffered (this cannot be switched off)
+					and
+					internally it uses a
+					<tt>ByteBuffer + RandomAccessFile</tt>
+					instead of a
+					<tt>BufferedOutputStream</tt>.
+					We saw a 20-200% performance improvement compared to
+					RollingFileAppender with "bufferedIO=true"
+					in our
+					<a href="async.html#FastFileAppenderPerformance">measurements</a>.
 
-            <tr>
-              <td>suppressExceptions</td>
-              <td>boolean</td>
-              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to
-                false exceptions will be percolated to the caller.</td>
-            </tr>
-            <tr>
-              <td>type</td>
-              <td>enumeration</td>
-              <td>One of "Avro", "Embedded", or "Persistent" to indicate which variation of the Appender is desired.</td>
-            </tr>
-            <caption align="top">FlumeAppender Parameters</caption>
-          </table>
-            <p>
-              A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
-              compresses the body, and formats the body using the RFC5424Layout:
+					The FastRollingFileAppender writes
+					to the File named in the
+					fileName parameter
+					and rolls the file over according to the
+					TriggeringPolicy
+					and the RolloverPolicy.
 
-            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="warn" name="MyApp" packages="">
-  <appenders>
-    <Flume name="eventLogger" suppressExceptions="false" compress="true">
-      <Agent host="192.168.10.101" port="8800"/>
-      <Agent host="192.168.10.102" port="8800"/>
-      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
-    </Flume>
-  </appenders>
-  <loggers>
-    <root level="error">
-      <appender-ref ref="eventLogger"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
-          </p>
-          <p>
-            A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
-            compresses the body, formats the body using the RFC5424Layout, and persists encrypted events to disk:
+					Similar to the RollingFileAppender,
+					FastRollingFileAppender uses a FastRollingFileManager
+					to actually perform the
+					file I/O and perform the rollover. While FastRollingFileAppenders
+					from different Configurations cannot be
+					shared, the FastRollingFileManagers can be
+					if the Manager is accessible.
+					For example, two webapps in a servlet
+					container can have their own configuration and safely write to the
+					same file if Log4j is in a ClassLoader that is common to both of them.
+				</p>
+				<p>
+					A FastRollingFileAppender requires a
+					<a href="#TriggeringPolicies">TriggeringPolicy</a>
+					and a
+					<a href="#RolloverStrategies">RolloverStrategy</a>.
+					The triggering policy determines if a rollover should
+					be performed
+					while the RolloverStrategy defines how the rollover
+					should be done.
+					If no RolloverStrategy
+					is configured, FastRollingFileAppender will
+					use the
+					<a href="#DefaultRolloverStrategy">DefaultRolloverStrategy</a>.
+				</p>
+				<p>
+					File locking is not supported by the FastRollingFileAppender.
+				</p>
+				<table>
+					<tr>
+						<th>Parameter Name</th>
+						<th>Type</th>
+						<th>Description</th>
+					</tr>
+					<tr>
+						<td>append</td>
+						<td>boolean</td>
+						<td>When true - the default, records will be appended to the end
+							of the file. When set to false,
+							the file will be cleared before
+							new records are written.
+						</td>
+					</tr>
+					<tr>
+						<td>filter</td>
+						<td>Filter</td>
+						<td>A Filter to determine if the event should be handled by this
+							Appender. More than one Filter
+							may be used by using a
+							CompositeFilter.
+						</td>
+					</tr>
+					<tr>
+						<td>fileName</td>
+						<td>String</td>
+						<td>The name of the file to write to. If the file, or any of its
+							parent directories, do not exist,
+							they will be created.
+						</td>
+					</tr>
+					<tr>
+						<td>filePattern</td>
+						<td>String</td>
+						<td>
+							The pattern of the file name of the archived log file. The format
+							of the pattern is
+							dependent on the RolloverPolicy that is
+							used. The DefaultRolloverPolicy
+							will accept both
+							a date/time
+							pattern compatible with
+							<a
+								href="http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html">
+								SimpleDateFormat</a>
 
-            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="warn" name="MyApp" packages="">
-  <appenders>
-    <Flume name="eventLogger" suppressExceptions="false" compress="true" type="persistent" dataDir="./logData">
-      <Agent host="192.168.10.101" port="8800"/>
-      <Agent host="192.168.10.102" port="8800"/>
-      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
-      <Property name="keyProvider">MySecretProvider</Property>
-    </Flume>
-  </appenders>
-  <loggers>
-    <root level="error">
-      <appender-ref ref="eventLogger"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
-          </p>
-          <p>
-            A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
-            compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume
-            Agent.
-          </p>
-          <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="warn" name="MyApp" packages="">
-  <appenders>
-    <Flume name="eventLogger" suppressExceptions="false" compress="true" type="Embedded">
-      <Agent host="192.168.10.101" port="8800"/>
-      <Agent host="192.168.10.102" port="8800"/>
-      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
-    </Flume>
-    <Console name="STDOUT">
-      <PatternLayout pattern="%d [%p] %c %m%n"/>
-    </Console>
-  </appenders>
-  <loggers>
-    <logger name="EventLogger" level="info">
-      <appender-ref ref="eventLogger"/>
-    </logger>
-    <root level="warn">
-      <appender-ref ref="STDOUT"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
-          <p>
-            A sample FlumeAppender configuration that is configured with a primary and a secondary agent using
-            Flume configuration properties, compresses the body, formats the body using RFC5424Layout and passes the
-            events to an embedded Flume Agent.
-          </p>
-          <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="error" name="MyApp" packages="">
-  <appenders>
-    <Flume name="eventLogger" suppressExceptions="false" compress="true" type="Embedded">
-      <Property name="channels">file</Property>
-      <Property name="channels.file.type">file</Property>
-      <Property name="channels.file.checkpointDir">target/file-channel/checkpoint</Property>
-      <Property name="channels.file.dataDirs">target/file-channel/data</Property>
-      <Property name="sinks">agent1 agent2</Property>
-      <Property name="sinks.agent1.channel">file</Property>
-      <Property name="sinks.agent1.type">avro</Property>
-      <Property name="sinks.agent1.hostname">192.168.10.101</Property>
-      <Property name="sinks.agent1.port">8800</Property>
-      <Property name="sinks.agent1.batch-size">100</Property>
-      <Property name="sinks.agent2.channel">file</Property>
-      <Property name="sinks.agent2.type">avro</Property>
-      <Property name="sinks.agent2.hostname">192.168.10.102</Property>
-      <Property name="sinks.agent2.port">8800</Property>
-      <Property name="sinks.agent2.batch-size">100</Property>
-      <Property name="sinkgroups">group1</Property>
-      <Property name="sinkgroups.group1.sinks">agent1 agent2</Property>
-      <Property name="sinkgroups.group1.processor.type">failover</Property>
-      <Property name="sinkgroups.group1.processor.priority.agent1">10</Property>
-      <Property name="sinkgroups.group1.processor.priority.agent2">5</Property>
-      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
-    </Flume>
-    <Console name="STDOUT">
-      <PatternLayout pattern="%d [%p] %c %m%n"/>
-    </Console>
+							and/or a %i which represents an integer counter. The pattern
+							also supports interpolation at
+							runtime so any of the Lookups (such
+							as the
+							<a href="./lookups.html#DateLookup">DateLookup</a>)
+							can be included in the pattern.
+						</td>
+					</tr>
+					<tr>
+						<td>immediateFlush</td>
+						<td>boolean</td>
+		              <td><p>When set to true - the default, each write will be followed by a flush.
+		                This will guarantee the data is written
+		                to disk but could impact performance.</p>
+		                <p>Flushing after every write is only useful when using this
+						appender with synchronous loggers. Asynchronous loggers and
+						appenders will automatically flush at the end of a batch of events, 
+						even if immediateFlush is set to false. This also guarantees
+						the data is written to disk but is more efficient.</p>
+		              </td>
+					</tr>
+					<tr>
+						<td>layout</td>
+						<td>Layout</td>
+						<td>The Layout to use to format the LogEvent</td>
+					</tr>
+
+					<tr>
+						<td>name</td>
+						<td>String</td>
+						<td>The name of the Appender.</td>
+					</tr>
+					<tr>
+						<td>policy</td>
+						<td>TriggeringPolicy</td>
+						<td>The policy to use to determine if a rollover should occur.
+						</td>
+					</tr>
+					<tr>
+						<td>strategy</td>
+						<td>RolloverStrategy</td>
+						<td>The strategy to use to determine the name and location of the
+							archive file.
+						</td>
+					</tr>
+					<tr>
+						<td>suppressExceptions</td>
+						<td>boolean</td>
+						<td>The default is true, causing exceptions to be internally
+							logged and then ignored. When set to
+							false exceptions will be
+							percolated to the caller.
+						</td>
+					</tr>
+					<caption align="top">FastRollingFileAppender Parameters</caption>
+				</table>
+				<a name="FRFA_TriggeringPolicies" />
+				<h4>Triggering Policies</h4>
+				<p>
+					See
+					<a href="#TriggeringPolicies">RollingFileAppender Triggering Policies</a>.
+				</p>
+				<a name="FRFA_RolloverStrategies" />
+				<h4>Rollover Strategies</h4>
+				<p>
+					See
+					<a href="#RolloverStrategies">RollingFileAppender Rollover Strategies</a>.
+				</p>
+
+				<p>
+					Below is a sample configuration that uses a FastRollingFileAppender
+					with both the time and size based triggering policies, creates
+					up to 7 archives on the same day (1-7) that are stored in a
+					directory based on the current year and month, and compresses
+					each archive using gzip:
+
+					<pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <FastRollingFile name="FastRollingFile" fileName="logs/app.log"
+                 filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
+      <PatternLayout>
+        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
+      </PatternLayout>
+      <Policies>
+        <TimeBasedTriggeringPolicy />
+        <SizeBasedTriggeringPolicy size="250 MB"/>
+      </Policies>
+    </FastRollingFile>
   </appenders>
   <loggers>
-    <logger name="EventLogger" level="info">
-      <appender-ref ref="eventLogger"/>
-    </logger>
-    <root level="warn">
-      <appender-ref ref="STDOUT"/>
+    <root level="error">
+      <appender-ref ref="FastRollingFile"/>
     </root>
   </loggers>
 </configuration>]]></pre>
-        </subsection>
-        <a name="JDBCAppender"/>
-        <subsection name="JDBCAppender">
-          <p>The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured
-          to obtain JDBC connections using the DriverManager, a JNDI DataSource or a custom factory method.</p>
+				</p>
+				<p>
+					This second example shows a rollover strategy that will keep up to
+					20 files before removing them.
+					<pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <FastRollingFile name="FastRollingFile" fileName="logs/app.log"
+                 filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
+      <PatternLayout>
+        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
+      </PatternLayout>
+      <Policies>
+        <TimeBasedTriggeringPolicy />
+        <SizeBasedTriggeringPolicy size="250 MB"/>
+      </Policies>
+      <DefaultRolloverStrategy max="20"/>
+    </FastRollingFile>
+  </appenders>
+  <loggers>
+    <root level="error">
+      <appender-ref ref="FastRollingFile"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+				</p>
+				<p>
+					Below is a sample configuration that uses a FastRollingFileAppender
+					with both the time and size based triggering policies, creates
+					up to 7 archives on the same day (1-7) that are stored in a
+					directory based on the current year and month, compresses each
+					archive using gzip, and rolls the file over every 6 hours when the
+					hour is divisible by 6:
+
+					<pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <FastRollingFile name="FastRollingFile" fileName="logs/app.log"
+                 filePattern="logs/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz">
+      <PatternLayout>
+        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
+      </PatternLayout>
+      <Policies>
+        <TimeBasedTriggeringPolicy interval="6" modulate="true"/>
+        <SizeBasedTriggeringPolicy size="250 MB"/>
+      </Policies>
+    </FastRollingFile>
+  </appenders>
+  <loggers>
+    <root level="error">
+      <appender-ref ref="FastRollingFile"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+				</p>
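+				<p>
+					Finally, a minimal illustrative sketch: a FastRollingFileAppender may
+					be configured with only a size based triggering policy, relying on the
+					implicit DefaultRolloverStrategy described above, so the %i counter is
+					the only variable part of the archive name:
+
+					<pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <!-- No RolloverStrategy is configured, so the DefaultRolloverStrategy is used. -->
+    <FastRollingFile name="FastRollingFile" fileName="logs/app.log"
+                 filePattern="logs/app-%i.log.gz">
+      <PatternLayout>
+        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
+      </PatternLayout>
+      <Policies>
+        <SizeBasedTriggeringPolicy size="250 MB"/>
+      </Policies>
+    </FastRollingFile>
+  </appenders>
+  <loggers>
+    <root level="error">
+      <appender-ref ref="FastRollingFile"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+				</p>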
+			</subsection>
+        <a name="FileAppender"/>
+        <subsection name="FileAppender">
+          <p>The FileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter. The
+            FileAppender uses a FileManager (which extends OutputStreamManager) to actually perform the file I/O. While
+            FileAppenders from different Configurations cannot be shared, the FileManagers can be if the Manager is
+            accessible. For example, two webapps in a servlet container can have their own configuration and safely
+            write to the same file if Log4J is in a ClassLoader that is common to both of them.</p>
           <table>
             <tr>
               <th>Parameter Name</th>
@@ -656,110 +664,114 @@
               <th>Description</th>
             </tr>
             <tr>
-              <td>name</td>
-              <td>String</td>
-              <td>The name of the Appender.</td>
+              <td>append</td>
+              <td>boolean</td>
+              <td>When true - the default, records will be appended to the end of the file. When set to false,
+                the file will be cleared before new records are written.</td>
             </tr>
             <tr>
-              <td>suppressExceptions</td>
+              <td>bufferedIO</td>
               <td>boolean</td>
-              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to false
-                exceptions will be percolated to the caller.</td>
+              <td>When true - the default, records will be written to a buffer and the data will be written to
+                disk when the buffer is full or, if immediateFlush is set, when the record is written.
+                File locking cannot be used with bufferedIO. Performance tests have shown that using buffered I/O
+                significantly improves performance, even if immediateFlush is enabled.</td>
             </tr>
             <tr>
               <td>filter</td>
               <td>Filter</td>
-              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter may be
-                used by using a CompositeFilter.</td>
+              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
+              may be used by using a CompositeFilter.</td>
             </tr>
             <tr>
-              <td>bufferSize</td>
-              <td>int</td>
-              <td>If an integer greater than 0, this causes the appender to buffer log events and flush whenever the
-                buffer reaches this size.</td>
+              <td>fileName</td>
+              <td>String</td>
+              <td>The name of the file to write to. If the file, or any of its parent directories, do not exist,
+                they will be created.</td>
             </tr>
             <tr>
-              <td>connectionSource</td>
-              <td>ConnectionSource</td>
-              <td>The connection source from which database connections should be retrieved.</td>
+              <td>immediateFlush</td>
+              <td>boolean</td>
+              <td><p>When set to true - the default, each write will be followed by a flush.
+                This will guarantee the data is written
+                to disk but could impact performance.</p>
+                <p>Flushing after every write is only useful when using this
+				appender with synchronous loggers. Asynchronous loggers and
+				appenders will automatically flush at the end of a batch of events, 
+				even if immediateFlush is set to false. This also guarantees
+				the data is written to disk but is more efficient.</p>
+              </td>
             </tr>
             <tr>
-              <td>tableName</td>
-              <td>String</td>
-              <td>The name of the database table to insert log events into.</td>
+              <td>layout</td>
+              <td>Layout</td>
+              <td>The Layout to use to format the LogEvent</td>
             </tr>
             <tr>
-              <td>columnConfigs</td>
-              <td>ColumnConfig[]</td>
-              <td>Information about the columns that log event data should be inserted into and how to insert that data.
-                This is represented with multiple &lt;Column /&gt; elements.</td>
+              <td>locking</td>
+              <td>boolean</td>
+              <td>When set to true, I/O operations will occur only while the file lock is held allowing FileAppenders
+                in multiple JVMs and potentially multiple hosts to write to the same file simultaneously. This
+                will significantly impact performance so should be used carefully. Furthermore, on many systems
+                the file lock is "advisory" meaning that other applications can perform operations on the file
+                without acquiring a lock. The default value is false.</td>
             </tr>
-          </table>
-          <p>
-            Here are a few sample configurations for the JDBCAppender:
-
-            <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="error">
-  <appenders>
-    <Jdbc name="databaseAppender" tableName="application_log">
-      <DriverManager jdbcUrl="jdbc:mysql://example.org:3306/exampleDb" username="logging" password="abc123" />
-      <Column name="eventDate" isEventTimestamp="true" />
-      <Column name="level" pattern="%level" />
-      <Column name="logger" pattern="%logger" />
-      <Column name="message" pattern="%message" />
-      <Column name="exception" pattern="%ex{full}" />
-    </Jdbc>
-  </appenders>
-  <loggers>
-    <root level="warn">
-      <appender-ref ref="databaseAppender"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
 
-            <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="error">
-  <appenders>
-    <Jdbc name="databaseAppender" tableName="dbo.application_log">
-      <DataSource jndiName="java:/comp/env/jdbc/ApplicationDataSource" />
-      <Column name="eventDate" isEventTimestamp="true" />
-      <Column name="level" pattern="%level" />
-      <Column name="logger" pattern="%logger" />
-      <Column name="message" pattern="%message" />
-      <Column name="exception" pattern="%ex{full}" />
-    </Jdbc>
-  </appenders>
-  <loggers>
-    <root level="warn">
-      <appender-ref ref="databaseAppender"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
+            <tr>
+              <td>name</td>
+              <td>String</td>
+              <td>The name of the Appender.</td>
+            </tr>
+            <tr>
+              <td>suppressExceptions</td>
+              <td>boolean</td>
+              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to
+                false exceptions will be percolated to the caller.</td>
+            </tr>
+            <caption align="top">FileAppender Parameters</caption>
+          </table>
+           <p>
+            Here is a sample File configuration:
 
-            <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="error">
+            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
   <appenders>
-    <Jdbc name="databaseAppender" tableName="LOGGING.APPLICATION_LOG">
-      <ConnectionFactory class="net.example.db.ConnectionFactory" method="getNewDatabaseConnection" />
-      <Column name="EVENT_ID" literal="LOGGING.APPLICATION_LOG_SEQUENCE.NEXTVAL" />
-      <Column name="EVENT_DATE" isEventTimestamp="true" />
-      <Column name="LEVEL" pattern="%level" />
-      <Column name="LOGGER" pattern="%logger" />
-      <Column name="MESSAGE" pattern="%message" />
-      <Column name="THROWABLE" pattern="%ex{full}" />
-    </Jdbc>
+    <File name="MyFile" fileName="logs/app.log">
+      <PatternLayout>
+        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
+      </PatternLayout>
+    </File>
   </appenders>
   <loggers>
-    <root level="warn">
-      <appender-ref ref="databaseAppender"/>
+    <root level="error">
+      <appender-ref ref="MyFile"/>
     </root>
   </loggers>
 </configuration>]]></pre>
           </p>
         </subsection>
-        <a name="JMSQueueAppender"/>
-        <subsection name="JMSQueueAppender">
-          <p>The JMSQueueAppender sends the formatted log event to a JMS Queue.</p>
+        <a name="FlumeAppender"/>
+        <subsection name="FlumeAppender">
+          <p><i>This is an optional component supplied in a separate jar.</i></p>
+          <p><a href="http://flume.apache.org/index.html">Apache Flume</a> is a distributed, reliable,
+            and available system for efficiently collecting, aggregating, and moving large amounts of log data
+            from many different sources to a centralized data store. The FlumeAppender takes LogEvents and sends
+            them to a Flume agent as serialized Avro events for consumption.</p>
+          <p>
+            The Flume Appender supports three modes of operation.
+            <ol>
+              <li>It can act as a remote Flume client which sends Flume events via Avro to a Flume Agent configured
+              with an Avro Source.</li>
+              <li>It can act as an embedded Flume Agent where Flume events pass directly into Flume for processing.</li>
+              <li>It can persist events to a local BerkeleyDB datastore and then asynchronously send the events to
+              Flume, similar to the embedded Flume Agent but without most of the Flume dependencies.</li>
+            </ol>
+            Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel and then
+            control will be immediately returned to the application. All interaction with remote agents will occur
+            asynchronously. Setting the "type" attribute to "Embedded" will force the use of the embedded agent. In
+            addition, configuring agent properties in the appender configuration will also cause the embedded agent
+            to be used.
+          </p>
           <table>
             <tr>
               <th>Parameter Name</th>
@@ -767,19 +779,41 @@
               <th>Description</th>
             </tr>
             <tr>
-              <td>factoryBindingName</td>
-              <td>String</td>
-              <td>The name to locate in the Context that provides the
-                <a href="http://download.oracle.com/javaee/5/api/javax/jms/QueueConnectionFactory.html">QueueConnectionFactory</a>.</td>
+              <td>agents</td>
+              <td>Agent[]</td>
+              <td>An array of Agents to which the logging events should be sent. If more than one agent is specified
+                the first Agent will be the primary and subsequent Agents will be used in the order specified as
+                secondaries should the primary Agent fail. Each Agent definition supplies the Agent's host and port.
+                The specification of agents and properties is mutually exclusive. If both are configured an
+                error will result.</td>
             </tr>
             <tr>
-              <td>factoryName</td>
+              <td>agentRetries</td>
+              <td>integer</td>
+              <td>The number of times the agent should be retried before failing to a secondary. This parameter is
+                ignored when type="persistent" is specified (agents are tried once before failing to the next).</td>
+            </tr>
+            <tr>
+              <td>batchSize</td>
+              <td>integer</td>
+              <td>Specifies the number of events that should be sent as a batch. The default is 1. <i>This
+                parameter only applies to the Flume NG Appender.</i></td>
+            </tr>
+            <tr>
+              <td>compress</td>
+              <td>boolean</td>
+              <td>When set to true the message body will be compressed using gzip</td>
+            </tr>
+            <tr>
+              <td>connectTimeout</td>
+              <td>integer</td>
+              <td>The number of milliseconds Flume will wait before timing out the connection.</td>
+            </tr>
+            <tr>
+              <td>dataDir</td>
               <td>String</td>
-              <td>The fully qualified class name that should be used to define the Initial Context Factory as
-                defined in <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#INITIAL_CONTEXT_FACTORY">INITIAL_CONTEXT_FACTORY</a>.
-                If no value is provided the
-                default InitialContextFactory will be used. If a factoryName is specified without a providerURL
-                a warning message will be logged as this is likely to cause problems.</td>
+              <td>Directory where the Flume write ahead log should be written. Valid only when embedded is set
+                to true and Agent elements are used instead of Property elements.</td>
             </tr>
             <tr>
               <td>filter</td>
@@ -788,46 +822,76 @@
               may be used by using a CompositeFilter.</td>
             </tr>
             <tr>
+              <td>eventPrefix</td>
+              <td>String</td>
+              <td>The character string to prepend to each event attribute in order to distinguish it from MDC attributes.
+                The default is an empty string.</td>
+            </tr>
+            <tr>
+              <td>flumeEventFactory</td>
+              <td>FlumeEventFactory</td>
+              <td>Factory that generates the Flume events from Log4j events. The default factory is the
+                FlumeAvroAppender itself.</td>
+            </tr>
+            <tr>
               <td>layout</td>
               <td>Layout</td>
-              <td>The Layout to use to format the LogEvent. If no layout is specified SerializedLayout will be used.</td>
+              <td>The Layout to use to format the LogEvent. If no layout is specified RFC5424Layout will be used.</td>
             </tr>
             <tr>
-              <td>name</td>
-              <td>String</td>
-              <td>The name of the Appender.</td>
+              <td>maxDelay</td>
+              <td>integer</td>
+              <td>The maximum number of seconds to wait for batchSize events before publishing the batch.</td>
             </tr>
             <tr>
-              <td>password</td>
+              <td>mdcExcludes</td>
               <td>String</td>
-              <td>The password to use to create the queue connection.</td>
+              <td>A comma separated list of mdc keys that should be excluded from the FlumeEvent. This is mutually
+                exclusive with the mdcIncludes attribute.</td>
             </tr>
             <tr>
-              <td>providerURL</td>
+              <td>mdcIncludes</td>
               <td>String</td>
-              <td>The URL of the provider to use as defined by
-                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#PROVIDER_URL">PROVIDER_URL</a>.
-                If this value is null the default system provider will be used.</td>
+              <td>A comma separated list of mdc keys that should be included in the FlumeEvent. Any keys in the MDC
+                not found in the list will be excluded. This option is mutually exclusive with the mdcExcludes
+                attribute.</td>
             </tr>
             <tr>
-              <td>queueBindingName</td>
+              <td>mdcRequired</td>
               <td>String</td>
-              <td>The name to use to locate the <a href="http://download.oracle.com/javaee/5/api/javax/jms/Queue.html">Queue</a>.</td>
+              <td>A comma separated list of mdc keys that must be present in the MDC. If a key is not present a
+                LoggingException will be thrown.</td>
             </tr>
             <tr>
-              <td>securityPrincipalName</td>
+              <td>mdcPrefix</td>
               <td>String</td>
-              <td>The name of the identity of the Principal as specified by
-                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#SECURITY_PRINCIPAL">SECURITY_PRINCIPAL</a>.
-                If a securityPrincipalName is specified without securityCredentials a warning message will be
-                logged as this is likely to cause problems.</td>
+              <td>A string that should be prepended to each MDC key in order to distinguish it from event attributes.
+                The default string is "mdc:".</td>
             </tr>
             <tr>
-              <td>securityCredentials</td>
+              <td>name</td>
               <td>String</td>
-              <td>The security credentials for the principal as specified by
-                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#SECURITY_CREDENTIALS">SECURITY_CREDENTIALS</a>.</td>
+              <td>The name of the Appender.</td>
+            </tr>
+            <tr>
+              <td>properties</td>
+              <td>Property[]</td>
+              <td><p>One or more Property elements that are used to configure the Flume Agent. The properties must be
+                configured without the agent name (the appender name is used for this) and no sources can be
+                configured. All other Flume configuration properties are allowed. Specifying both Agent and Property
+                elements will result in an error.</p>
+                <p>When used in Persistent mode, the valid properties are:
+                  <ol>
+                  <li>"keyProvider" to specify the name of the plugin to provide the secret key for encryption.</li>
+                </ol></p>
+              </td>
+            </tr>
+            <tr>
+              <td>requestTimeout</td>
+              <td>integer</td>
+              <td>The number of milliseconds Flume will wait before timing out the request.</td>
             </tr>
+
             <tr>
               <td>suppressExceptions</td>
               <td>boolean</td>
@@ -835,50 +899,250 @@
                 false exceptions will be percolated to the caller.</td>
             </tr>
             <tr>
-              <td>urlPkgPrefixes</td>
-              <td>String</td>
-              <td>A colon-separated list of package prefixes for the class name of the factory class that will create
-                a URL context factory as defined by
-                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#URL_PKG_PREFIXES">URL_PKG_PREFIXES</a>.</td>
-            </tr>
-             <tr>
-              <td>userName</td>
-              <td>String</td>
-              <td>The user id used to create the queue connection.</td>
+              <td>type</td>
+              <td>enumeration</td>
+              <td>One of "Avro", "Embedded", or "Persistent" to indicate which variation of the Appender is desired.</td>
             </tr>
-            <caption align="top">JMSQueueAppender Parameters</caption>
+            <caption align="top">FlumeAppender Parameters</caption>
           </table>
-           <p>
-            Here is a sample JMSQueueAppender configuration:
+            <p>
+              A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
+              compresses the body, and formats the body using the RFC5424Layout:
 
             <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
 <configuration status="warn" name="MyApp" packages="">
   <appenders>
-    <JMSQueue name="jmsQueue" queueBindingName="MyQueue"
-              factoryBindingName="MyQueueConnectionFactory"/>
+    <Flume name="eventLogger" suppressExceptions="false" compress="true">
+      <Agent host="192.168.10.101" port="8800"/>
+      <Agent host="192.168.10.102" port="8800"/>
+      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
+    </Flume>
   </appenders>
   <loggers>
     <root level="error">
-      <appender-ref ref="jmsQueue"/>
+      <appender-ref ref="eventLogger"/>
     </root>
   </loggers>
 </configuration>]]></pre>
           </p>
-        </subsection>
-        <a name="JMSTopicAppender"/>
-        <subsection name="JMSTopicAppender">
-          <p>The JMSTopicAppender sends the formatted log event to a JMS Topic.</p>
-          <table>
-            <tr>
-              <th>Parameter Name</th>
-              <th>Type</th>
-              <th>Description</th>
-            </tr>
-            <tr>
-              <td>factoryBindingName</td>
-              <td>String</td>
-              <td>The name to locate in the Context that provides the
-                <a href="http://download.oracle.com/javaee/5/api/javax/jms/TopicConnectionFactory.html">TopicConnectionFactory</a>.</td>
+          <p>
+            A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
+            compresses the body, formats the body using the RFC5424Layout, and persists encrypted events to disk:
+
+            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <Flume name="eventLogger" suppressExceptions="false" compress="true" type="persistent" dataDir="./logData">
+      <Agent host="192.168.10.101" port="8800"/>
+      <Agent host="192.168.10.102" port="8800"/>
+      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
+      <Property name="keyProvider">MySecretProvider</Property>
+    </Flume>
+  </appenders>
+  <loggers>
+    <root level="error">
+      <appender-ref ref="eventLogger"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+          </p>
+          <p>
+            A sample FlumeAppender configuration that is configured with a primary and a secondary agent,
+            compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume
+            Agent.
+          </p>
+          <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <Flume name="eventLogger" suppressExceptions="false" compress="true" type="Embedded">
+      <Agent host="192.168.10.101" port="8800"/>
+      <Agent host="192.168.10.102" port="8800"/>
+      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
+    </Flume>
+    <Console name="STDOUT">
+      <PatternLayout pattern="%d [%p] %c %m%n"/>
+    </Console>
+  </appenders>
+  <loggers>
+    <logger name="EventLogger" level="info">
+      <appender-ref ref="eventLogger"/>
+    </logger>
+    <root level="warn">
+      <appender-ref ref="STDOUT"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+          <p>
+            A sample FlumeAppender configuration that is configured with a primary and a secondary agent using
+            Flume configuration properties, compresses the body, formats the body using RFC5424Layout and passes the
+            events to an embedded Flume Agent.
+          </p>
+          <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="error" name="MyApp" packages="">
+  <appenders>
+    <Flume name="eventLogger" suppressExceptions="false" compress="true" type="Embedded">
+      <Property name="channels">file</Property>
+      <Property name="channels.file.type">file</Property>
+      <Property name="channels.file.checkpointDir">target/file-channel/checkpoint</Property>
+      <Property name="channels.file.dataDirs">target/file-channel/data</Property>
+      <Property name="sinks">agent1 agent2</Property>
+      <Property name="sinks.agent1.channel">file</Property>
+      <Property name="sinks.agent1.type">avro</Property>
+      <Property name="sinks.agent1.hostname">192.168.10.101</Property>
+      <Property name="sinks.agent1.port">8800</Property>
+      <Property name="sinks.agent1.batch-size">100</Property>
+      <Property name="sinks.agent2.channel">file</Property>
+      <Property name="sinks.agent2.type">avro</Property>
+      <Property name="sinks.agent2.hostname">192.168.10.102</Property>
+      <Property name="sinks.agent2.port">8800</Property>
+      <Property name="sinks.agent2.batch-size">100</Property>
+      <Property name="sinkgroups">group1</Property>
+      <Property name="sinkgroups.group1.sinks">agent1 agent2</Property>
+      <Property name="sinkgroups.group1.processor.type">failover</Property>
+      <Property name="sinkgroups.group1.processor.priority.agent1">10</Property>
+      <Property name="sinkgroups.group1.processor.priority.agent2">5</Property>
+      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
+    </Flume>
+    <Console name="STDOUT">
+      <PatternLayout pattern="%d [%p] %c %m%n"/>
+    </Console>
+  </appenders>
+  <loggers>
+    <logger name="EventLogger" level="info">
+      <appender-ref ref="eventLogger"/>
+    </logger>
+    <root level="warn">
+      <appender-ref ref="STDOUT"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+        </subsection>
+        <a name="JDBCAppender"/>
+        <subsection name="JDBCAppender">
+          <p>The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured
+          to obtain JDBC connections using the DriverManager, a JNDI DataSource or a custom factory method.</p>
+          <table>
+            <tr>
+              <th>Parameter Name</th>
+              <th>Type</th>
+              <th>Description</th>
+            </tr>
+            <tr>
+              <td>name</td>
+              <td>String</td>
+              <td>The name of the Appender.</td>
+            </tr>
+            <tr>
+              <td>suppressExceptions</td>
+              <td>boolean</td>
+              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to false,
+                exceptions will be percolated to the caller.</td>
+            </tr>
+            <tr>
+              <td>filter</td>
+              <td>Filter</td>
+              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter may be
+                used by using a CompositeFilter.</td>
+            </tr>
+            <tr>
+              <td>bufferSize</td>
+              <td>int</td>
+              <td>If an integer greater than 0, this causes the appender to buffer log events and flush whenever the
+                buffer reaches this size.</td>
+            </tr>
+            <tr>
+              <td>connectionSource</td>
+              <td>ConnectionSource</td>
+              <td>The connection source from which database connections should be retrieved.</td>
+            </tr>
+            <tr>
+              <td>tableName</td>
+              <td>String</td>
+              <td>The name of the database table to insert log events into.</td>
+            </tr>
+            <tr>
+              <td>columnConfigs</td>
+              <td>ColumnConfig[]</td>
+              <td>Information about the columns that log event data should be inserted into and how to insert that data.
+                This is represented with multiple &lt;Column /&gt; elements.</td>
+            </tr>
+          </table>
+          <p>
+            Here are a few sample configurations for the JDBCAppender:
+
+            <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="error">
+  <appenders>
+    <Jdbc name="databaseAppender" tableName="application_log">
+      <DriverManager jdbcUrl="jdbc:mysql://example.org:3306/exampleDb" username="logging" password="abc123" />
+      <Column name="eventDate" isEventTimestamp="true" />
+      <Column name="level" pattern="%level" />
+      <Column name="logger" pattern="%logger" />
+      <Column name="message" pattern="%message" />
+      <Column name="exception" pattern="%ex{full}" />
+    </Jdbc>
+  </appenders>
+  <loggers>
+    <root level="warn">
+      <appender-ref ref="databaseAppender"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+
+            <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="error">
+  <appenders>
+    <Jdbc name="databaseAppender" tableName="dbo.application_log">
+      <DataSource jndiName="java:/comp/env/jdbc/ApplicationDataSource" />
+      <Column name="eventDate" isEventTimestamp="true" />
+      <Column name="level" pattern="%level" />
+      <Column name="logger" pattern="%logger" />
+      <Column name="message" pattern="%message" />
+      <Column name="exception" pattern="%ex{full}" />
+    </Jdbc>
+  </appenders>
+  <loggers>
+    <root level="warn">
+      <appender-ref ref="databaseAppender"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+
+            <pre class="prettyprint linenums lang-xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="error">
+  <appenders>
+    <Jdbc name="databaseAppender" tableName="LOGGING.APPLICATION_LOG">
+      <ConnectionFactory class="net.example.db.ConnectionFactory" method="getNewDatabaseConnection" />
+      <Column name="EVENT_ID" literal="LOGGING.APPLICATION_LOG_SEQUENCE.NEXTVAL" />
+      <Column name="EVENT_DATE" isEventTimestamp="true" />
+      <Column name="LEVEL" pattern="%level" />
+      <Column name="LOGGER" pattern="%logger" />
+      <Column name="MESSAGE" pattern="%message" />
+      <Column name="THROWABLE" pattern="%ex{full}" />
+    </Jdbc>
+  </appenders>
+  <loggers>
+    <root level="warn">
+      <appender-ref ref="databaseAppender"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
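+
+            The last sample obtains its connections from an application-supplied factory method. As an illustrative
+            sketch only, the hypothetical net.example.db.ConnectionFactory referenced above could simply delegate to
+            the DriverManager; how the class actually creates (or pools) connections is up to the application, and
+            the JDBC URL and credentials shown here are assumptions.
+
+            <pre class="prettyprint linenums lang-java"><![CDATA[package net.example.db;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+
+public final class ConnectionFactory {
+
+    // Assumed connection details; any mechanism that yields a usable Connection would do.
+    private static final String URL = "jdbc:oracle:thin:@example.org:1521:ORCL";
+    private static final String USER = "logging";
+    private static final String PASSWORD = "abc123";
+
+    private ConnectionFactory() {
+    }
+
+    // The method named by <ConnectionFactory method="getNewDatabaseConnection"/> in the sample above.
+    public static Connection getNewDatabaseConnection() throws SQLException {
+        return DriverManager.getConnection(URL, USER, PASSWORD);
+    }
+}]]></pre>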
+          </p>
+        </subsection>
+        <a name="JMSQueueAppender"/>
+        <subsection name="JMSQueueAppender">
+          <p>The JMSQueueAppender sends the formatted log event to a JMS Queue.</p>
+          <table>
+            <tr>
+              <th>Parameter Name</th>
+              <th>Type</th>
+              <th>Description</th>
+            </tr>
+            <tr>
+              <td>factoryBindingName</td>
+              <td>String</td>
+              <td>The name to locate in the Context that provides the
+                <a href="http://download.oracle.com/javaee/5/api/javax/jms/QueueConnectionFactory.html">QueueConnectionFactory</a>.</td>
             </tr>
             <tr>
               <td>factoryName</td>
@@ -918,10 +1182,9 @@
                 If this value is null the default system provider will be used.</td>
             </tr>
             <tr>
-              <td>topicBindingName</td>
+              <td>queueBindingName</td>
               <td>String</td>
-              <td>The name to use to locate the
-                <a href="http://download.oracle.com/javaee/5/api/javax/jms/Topic.html">Topic</a>.</td>
+              <td>The name to use to locate the <a href="http://download.oracle.com/javaee/5/api/javax/jms/Queue.html">Queue</a>.</td>
             </tr>
             <tr>
               <td>securityPrincipalName</td>
@@ -955,16 +1218,16 @@
               <td>String</td>
               <td>The user id used to create the queue connection.</td>
             </tr>
-            <caption align="top">JMSTopicAppender Parameters</caption>
+            <caption align="top">JMSQueueAppender Parameters</caption>
           </table>
            <p>
-            Here is a sample JMSTopicAppender configuration:
+            Here is a sample JMSQueueAppender configuration:
 
             <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
 <configuration status="warn" name="MyApp" packages="">
   <appenders>
-    <JMSTopic name="jmsTopic" topicBindingName="MyTopic"
-              factoryBindingName="MyTopicConnectionFactory"/>
+    <JMSQueue name="jmsQueue" queueBindingName="MyQueue"
+              factoryBindingName="MyQueueConnectionFactory"/>
   </appenders>
   <loggers>
     <root level="error">
@@ -974,11 +1237,9 @@
 </configuration>]]></pre>
           </p>
         </subsection>
-        <a name="JPAAppender"/>
-        <subsection name="JPAAppender">
-          <p>The JPAAppender writes log events to a relational database table using the Java Persistence API.
-            It requires the API and a provider implementation be on the classpath. It also requires a decorated entity
-            configured to persist to the table desired.</p>
+        <a name="JMSTopicAppender"/>
+        <subsection name="JMSTopicAppender">
+          <p>The JMSTopicAppender sends the formatted log event to a JMS Topic.</p>
           <table>
             <tr>
               <th>Parameter Name</th>
@@ -986,20 +1247,131 @@
               <th>Description</th>
             </tr>
             <tr>
-              <td>name</td>
+              <td>factoryBindingName</td>
               <td>String</td>
-              <td>The name of the Appender.</td>
+              <td>The name to locate in the Context that provides the
+                <a href="http://download.oracle.com/javaee/5/api/javax/jms/TopicConnectionFactory.html">TopicConnectionFactory</a>.</td>
             </tr>
             <tr>
-              <td>suppressExceptions</td>
-              <td>boolean</td>
-              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to false
-                exceptions will be percolated to the caller.</td>
+              <td>factoryName</td>
+              <td>String</td>
+              <td>The fully qualified class name that should be used to define the Initial Context Factory as
+                defined in <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#INITIAL_CONTEXT_FACTORY">INITIAL_CONTEXT_FACTORY</a>.
+                If no value is provided the
+                default InitialContextFactory will be used. If a factoryName is specified without a providerURL
+                a warning message will be logged as this is likely to cause problems.</td>
             </tr>
             <tr>
               <td>filter</td>
               <td>Filter</td>
-              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter may be
+              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter
+              may be used by using a CompositeFilter.</td>
+            </tr>
+            <tr>
+              <td>layout</td>
+              <td>Layout</td>
+              <td>The Layout to use to format the LogEvent. If no layout is specified, SerializedLayout will be used.</td>
+            </tr>
+            <tr>
+              <td>name</td>
+              <td>String</td>
+              <td>The name of the Appender.</td>
+            </tr>
+            <tr>
+              <td>password</td>
+              <td>String</td>
+              <td>The password to use to create the topic connection.</td>
+            </tr>
+            <tr>
+              <td>providerURL</td>
+              <td>String</td>
+              <td>The URL of the provider to use as defined by
+                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#PROVIDER_URL">PROVIDER_URL</a>.
+                If this value is null the default system provider will be used.</td>
+            </tr>
+            <tr>
+              <td>topicBindingName</td>
+              <td>String</td>
+              <td>The name to use to locate the
+                <a href="http://download.oracle.com/javaee/5/api/javax/jms/Topic.html">Topic</a>.</td>
+            </tr>
+            <tr>
+              <td>securityPrincipalName</td>
+              <td>String</td>
+              <td>The name of the identity of the Principal as specified by
+                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#SECURITY_PRINCIPAL">SECURITY_PRINCIPAL</a>.
+                If a securityPrincipalName is specified without securityCredentials a warning message will be
+                logged as this is likely to cause problems.</td>
+            </tr>
+            <tr>
+              <td>securityCredentials</td>
+              <td>String</td>
+              <td>The security credentials for the principal as specified by
+                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#SECURITY_CREDENTIALS">SECURITY_CREDENTIALS</a>.</td>
+            </tr>
+            <tr>
+              <td>suppressExceptions</td>
+              <td>boolean</td>
+              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to
+                false, exceptions will be percolated to the caller.</td>
+            </tr>
+            <tr>
+              <td>urlPkgPrefixes</td>
+              <td>String</td>
+              <td>A colon-separated list of package prefixes for the class name of the factory class that will create
+                a URL context factory as defined by
+                <a href="http://download.oracle.com/javase/6/docs/api/javax/naming/Context.html#URL_PKG_PREFIXES">URL_PKG_PREFIXES</a>.</td>
+            </tr>
+             <tr>
+              <td>userName</td>
+              <td>String</td>
+              <td>The user id used to create the topic connection.</td>
+            </tr>
+            <caption align="top">JMSTopicAppender Parameters</caption>
+          </table>
+           <p>
+            Here is a sample JMSTopicAppender configuration:
+
+            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
+<configuration status="warn" name="MyApp" packages="">
+  <appenders>
+    <JMSTopic name="jmsTopic" topicBindingName="MyTopic"
+              factoryBindingName="MyTopicConnectionFactory"/>
+  </appenders>
+  <loggers>
+    <root level="error">
+      <appender-ref ref="jmsQueue"/>
+    </root>
+  </loggers>
+</configuration>]]></pre>
+          </p>
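+          <p>
+            As an illustrative sketch only, a consumer could look up the same JNDI names the appender uses and read
+            messages with the plain JMS API. Whether a received message is an ObjectMessage (for example, carrying a
+            serialized LogEvent when the default SerializedLayout is in use) or a TextMessage depends on the
+            configured layout, so the sketch checks for both:
+
+            <pre class="prettyprint linenums lang-java"><![CDATA[import javax.jms.Message;
+import javax.jms.ObjectMessage;
+import javax.jms.Session;
+import javax.jms.TextMessage;
+import javax.jms.Topic;
+import javax.jms.TopicConnection;
+import javax.jms.TopicConnectionFactory;
+import javax.jms.TopicSession;
+import javax.jms.TopicSubscriber;
+import javax.naming.InitialContext;
+
+public class LogEventSubscriber {
+
+    public static void main(String[] args) throws Exception {
+        InitialContext ctx = new InitialContext();
+        // The same JNDI names the sample appender configuration uses.
+        TopicConnectionFactory factory =
+                (TopicConnectionFactory) ctx.lookup("MyTopicConnectionFactory");
+        Topic topic = (Topic) ctx.lookup("MyTopic");
+
+        TopicConnection connection = factory.createTopicConnection();
+        TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
+        TopicSubscriber subscriber = session.createSubscriber(topic);
+        connection.start();
+
+        Message message = subscriber.receive(); // blocks until an event arrives
+        if (message instanceof ObjectMessage) {
+            System.out.println(((ObjectMessage) message).getObject());
+        } else if (message instanceof TextMessage) {
+            System.out.println(((TextMessage) message).getText());
+        }
+
+        connection.close();
+    }
+}]]></pre>
+          </p>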
+        </subsection>
+        <a name="JPAAppender"/>
+        <subsection name="JPAAppender">
+          <p>The JPAAppender writes log events to a relational database table using the Java Persistence API.
+            It requires the API and a provider implementation to be on the classpath. It also requires a decorated
+            entity configured to persist to the desired table.</p>
+          <table>
+            <tr>
+              <th>Parameter Name</th>
+              <th>Type</th>
+              <th>Description</th>
+            </tr>
+            <tr>
+              <td>name</td>
+              <td>String</td>
+              <td>The name of the Appender.</td>
+            </tr>
+            <tr>
+              <td>suppressExceptions</td>
+              <td>boolean</td>
+              <td>The default is true, causing exceptions to be internally logged and then ignored. When set to false,
+                exceptions will be percolated to the caller.</td>
+            </tr>
+            <tr>
+              <td>filter</td>
+              <td>Filter</td>
+              <td>A Filter to determine if the event should be handled by this Appender. More than one Filter may be
                 used by using a CompositeFilter.</td>
             </tr>
             <tr>
@@ -1472,528 +1844,164 @@ public class JpaLogEntity extends LogEve
                     <td>integer</td>
                     <td>How often a rollover should occur based on the most specific time unit in the date pattern.
                      For example, with a date pattern with hours as the most specific item and an increment of 4 rollovers
-                      would occur every 4 hours.
-                      The default value is 1.</td>
-                  </tr>
-                  <tr>
-                    <td>modulate</td>
-                    <td>boolean</td>
-                    <td>Indicates whether the interval should be adjusted to cause the next rollover to occur on
-                      the interval boundary. For example, if the item is hours, the current hour is 3 am and the
-                      interval is 4, then the first rollover will occur at 4 am and the next ones will occur at
-                      8 am, noon, 4pm, etc.</td>
-                  </tr>
-                  <caption align="top">TimeBasedTriggeringPolicy Parameters</caption>
-                </table>
-          <a name="RolloverStrategies"/>
-          <h4>Rollover Strategies</h4>
-            <a name="DefaultRolloverStrategy"/>
-            <h5>Default Rollover Strategy</h5>
-              <p>
-                The default rollover strategy accepts both a date/time pattern and an integer from the filePattern
-                attribute specified on the RollingFileAppender itself. If the date/time pattern
-                is present it will be replaced with the current date and time values. If the pattern contains an integer
-                it will be incremented on each rollover. If the pattern contains both a date/time and integer
-                in the pattern the integer will be incremented until the result of the date/time pattern changes. If
-                the file pattern ends with ".gz" or ".zip" the resulting archive will be compressed using the
-                compression scheme that matches the suffix. The pattern may also contain lookup references that
-                can be resolved at runtime such as is shown in the example below.
-              </p>
-              <p>The default rollover strategy supports two variations for incrementing the counter. The first is
-                the "fixed window" strategy. To illustrate how it works, suppose that the min attribute is set to 1,
-                the max attribute is set to 3, the file name is "foo.log", and the file name pattern is "foo-%i.log".
-              </p>
-
-              <table>
-                <tr>
-                  <th>Number of rollovers</th>
-                  <th>Active output target</th>
-                  <th>Archived log files</th>
-                  <th>Description</th>
-                </tr>
-                <tr>
-                  <td>0</td>
-                  <td>foo.log</td>
-                  <td>-</td>
-                  <td>All logging is going to the initial file.</td>
-                </tr>
-                <tr>
-                  <td>1</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log</td>
-                  <td>During the first rollover foo.log is renamed to foo-1.log. A new foo.log file is created and
-                  starts being written to.</td>
-                </tr>
-                <tr>
-                  <td>2</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log, foo-2.log</td>
-                  <td>During the second rollover foo-1.log is renamed to foo-2.log and foo.log is renamed to
-                    foo-1.log. A new foo.log file is created and starts being written to.</td>
-                </tr>
-                <tr>
-                  <td>3</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log, foo-2.log, foo-3.log</td>
-                  <td>During the third rollover foo-2.log is renamed to foo-3.log, foo-1.log is renamed to foo-2.log and
-                    foo.log is renamed to foo-1.log. A new foo.log file is created and starts being written to.</td>
-                </tr>
-                <tr>
-                  <td>4</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log, foo-2.log, foo-3.log</td>
-                  <td>In the fourth and subsequent rollovers, foo-3.log is deleted, foo-2.log is renamed to foo-3.log,
-                    foo-1.log is renamed to foo-2.log and foo.log is renamed to foo-1.log. A new foo.log file is
-                    created and starts being written to.</td>
-                </tr>
-              </table>
-              <p>By way of contrast, when the fileIndex attribute is set to "max" but all the other settings
-                are the same, the following actions will be performed.
-              </p>
-              <table>
-                <tr>
-                  <th>Number of rollovers</th>
-                  <th>Active output target</th>
-                  <th>Archived log files</th>
-                  <th>Description</th>
-                </tr>
-                <tr>
-                  <td>0</td>
-                  <td>foo.log</td>
-                  <td>-</td>
-                  <td>All logging is going to the initial file.</td>
-                </tr>
-                <tr>
-                  <td>1</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log</td>
-                  <td>During the first rollover foo.log is renamed to foo-1.log. A new foo.log file is created and
-                    starts being written to.</td>
-                </tr>
-                <tr>
-                  <td>2</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log, foo-2.log</td>
-                  <td>During the second rollover foo.log is renamed to foo-2.log. A new foo.log file is created
-                    and starts being written to.</td>
-                </tr>
-                <tr>
-                  <td>3</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log, foo-2.log, foo-3.log</td>
-                  <td>During the third rollover foo.log is renamed to foo-3.log. A new foo.log file is created and
-                    starts being written to.</td>
-                </tr>
-                <tr>
-                  <td>4</td>
-                  <td>foo.log</td>
-                  <td>foo-1.log, foo-2.log, foo-3.log</td>
-                  <td>In the fourth and subsequent rollovers, foo-1.log is deleted, foo-2.log is renamed to foo-1.log,
-                    foo-3.log is renamed to foo-2.log and foo.log is renamed to foo-3.log. A new foo.log file is
-                    created and starts being written to.</td>
-                </tr>
-              </table>
-              <table>
-                <tr>
-                  <th>Parameter Name</th>
-                  <th>Type</th>
-                  <th>Description</th>
-                </tr>
-                <tr>
-                  <td>fileIndex</td>
-                  <td>String</td>
-                  <td>If set to "max" (the default), files with a higher index will be newer than files with a
-                    smaller index. If set to "min", file renaming and the counter will follow the Fixed Window strategy
-                    described above.</td>
-                </tr>
-                <tr>
-                  <td>min</td>
-                  <td>integer</td>
-                  <td>The minimum value of the counter. The default value is 1.</td>
-                </tr>
-                <tr>
-                  <td>max</td>
-                  <td>integer</td>
-                  <td>The maximum value of the counter. Once this value is reached, older archives will be
-                    deleted on subsequent rollovers.</td>
-                </tr>
-                <caption align="top">DefaultRolloverStrategy Parameters</caption>
-              </table>
-
-          <p>
-            Below is a sample configuration that uses a RollingFileAppender with both the time and size based
-            triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory
-            based on the current year and month, and will compress each
-            archive using gzip:
-
-            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="warn" name="MyApp" packages="">
-  <appenders>
-    <RollingFile name="RollingFile" fileName="logs/app.log"
-                 filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
-      <PatternLayout>
-        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
-      </PatternLayout>
-      <Policies>
-        <TimeBasedTriggeringPolicy />
-        <SizeBasedTriggeringPolicy size="250 MB"/>
-      </Policies>
-    </RollingFile>
-  </appenders>
-  <loggers>
-    <root level="error">
-      <appender-ref ref="RollingFile"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
-          </p>
-          <p>
-            This second example shows a rollover strategy that will keep up to 20 files before removing them.
-          <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="warn" name="MyApp" packages="">
-  <appenders>
-    <RollingFile name="RollingFile" fileName="logs/app.log"
-                 filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
-      <PatternLayout>
-        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
-      </PatternLayout>
-      <Policies>
-        <TimeBasedTriggeringPolicy />
-        <SizeBasedTriggeringPolicy size="250 MB"/>
-      </Policies>
-      <DefaultRolloverStrategy max="20"/>
-    </RollingFile>
-  </appenders>
-  <loggers>
-    <root level="error">
-      <appender-ref ref="RollingFile"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
-        </p>
-          <p>
-            Below is a sample configuration that uses a RollingFileAppender with both the time and size based
-            triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory
-            based on the current year and month, and will compress each
-            archive using gzip and will roll every 6 hours when the hour is divisible by 6:
-
-            <pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="warn" name="MyApp" packages="">
-  <appenders>
-    <RollingFile name="RollingFile" fileName="logs/app.log"
-                 filePattern="logs/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz">
-      <PatternLayout>
-        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
-      </PatternLayout>
-      <Policies>
-        <TimeBasedTriggeringPolicy interval="6" modulate="true"/>
-        <SizeBasedTriggeringPolicy size="250 MB"/>
-      </Policies>
-    </RollingFile>
-  </appenders>
-  <loggers>
-    <root level="error">
-      <appender-ref ref="RollingFile"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
-          </p>
-        </subsection>
-			<a name="FastFileAppender" />
-			<subsection name="FastFileAppender">
-			<p><i>Experimental, may replace FileAppender in a future release.</i></p>
-				<p>
-					The FastFileAppender is similar to the standard
-					<a href="#FileAppender">FileAppender</a>
-					except it is always buffered (this cannot be switched off)
-					and internally it uses a
-					<tt>ByteBuffer + RandomAccessFile</tt>
-					instead of a
-					<tt>BufferedOutputStream</tt>.
-					We saw a 20-200% performance improvement compared to
-					FileAppender with "bufferedIO=true" in our
-					<a href="async.html#FastFileAppenderPerformance">measurements</a>.
-					Similar to the FileAppender,
-					FastFileAppender uses a FastFileManager to actually perform the
-					file I/O. While FastFileAppender
-					from different Configurations
-					cannot be shared, the FastFileManagers can be if the Manager is
-					accessible. For example, two webapps in a
-					servlet container can have
-					their own configuration and safely
-					write to the same file if Log4j
-					is in a ClassLoader that is common to
-					both of them.
-				</p>
-				<table>
-					<tr>
-						<th>Parameter Name</th>
-						<th>Type</th>
-						<th>Description</th>
-					</tr>
-					<tr>
-						<td>append</td>
-						<td>boolean</td>
-						<td>When true - the default, records will be appended to the end
-							of the file. When set to false,
-							the file will be cleared before
-							new records are written.
-						</td>
-					</tr>
-					<tr>
-						<td>fileName</td>
-						<td>String</td>
-						<td>The name of the file to write to. If the file, or any of its
-							parent directories, do not exist,
-							they will be created.
-						</td>
-					</tr>
-					<tr>
-						<td>filters</td>
-						<td>Filter</td>
-						<td>A Filter to determine if the event should be handled by this
-							Appender. More than one Filter
-							may be used by using a CompositeFilter.
-						</td>
-					</tr>
-					<tr>
-						<td>immediateFlush</td>
-						<td>boolean</td>
-		              <td><p>When set to true - the default, each write will be followed by a flush.
-		                This will guarantee the data is written
-		                to disk but could impact performance.</p>
-		                <p>Flushing after every write is only useful when using this
-						appender with synchronous loggers. Asynchronous loggers and
-						appenders will automatically flush at the end of a batch of events, 
-						even if immediateFlush is set to false. This also guarantees
-						the data is written to disk but is more efficient.</p>
-		              </td>
-					</tr>
-					<tr>
-						<td>layout</td>
-						<td>Layout</td>
-						<td>The Layout to use to format the LogEvent</td>
-					</tr>
-					<tr>
-						<td>name</td>
-						<td>String</td>
-						<td>The name of the Appender.</td>
-					</tr>
-					<tr>
-						<td>suppressExceptions</td>
-						<td>boolean</td>
-						<td>The default is true, causing exceptions to be internally
-							logged and then ignored. When set to
-							false exceptions will be
-							percolated to the caller.
-						</td>
-					</tr>
-					<caption align="top">FastFileAppender Parameters</caption>
-				</table>
-				<p>
-					Here is a sample FastFile configuration:
-
-					<pre class="prettyprint linenums"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
-<configuration status="warn" name="MyApp" packages="">
-  <appenders>
-    <FastFile name="MyFile" fileName="logs/app.log">
-      <PatternLayout>
-        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
-      </PatternLayout>
-    </FastFile>
-  </appenders>
-  <loggers>
-    <root level="error">
-      <appender-ref ref="MyFile"/>
-    </root>
-  </loggers>
-</configuration>]]></pre>
-				</p>
-			</subsection>
-			<a name="FastRollingFileAppender" />
-			<subsection name="FastRollingFileAppender">
-			<p><i>Experimental, may replace RollingFileAppender in a future release.</i></p>
-				<p>
-					The FastRollingFileAppender is similar to the standard
-					<a href="#RollingFileAppender">RollingFileAppender</a>
-					except it is always buffered (this cannot be switched off)
-					and
-					internally it uses a
-					<tt>ByteBuffer + RandomAccessFile</tt>
-					instead of a
-					<tt>BufferedOutputStream</tt>.
-					We saw a 20-200% performance improvement compared to
-					RollingFileAppender with "bufferedIO=true"
-					in our
-					<a href="async.html#FastFileAppenderPerformance">measurements</a>.
-
-					The FastRollingFileAppender writes
-					to the File named in the
-					fileName parameter
-					and rolls the file over according the
-					TriggeringPolicy
-					and the RolloverPolicy.
-
-					Similar to the RollingFileAppender,
-					FastRollingFileAppender uses a FastRollingFileManager
-					to actually perform the
-					file I/O and perform the rollover. While FastRollingFileAppender
-					from different Configurations cannot be
-					shared, the FastRollingFileManagers can be
-					if the Manager is accessible.
-					For example, two webapps in a servlet
-					container can have their own configuration and safely write to the
-					same file if Log4j is in a ClassLoader that is common to both of them.
-				</p>
-				<p>
-					A FastRollingFileAppender requires a
-					<a href="#TriggeringPolicies">TriggeringPolicy</a>
-					and a
-					<a href="#RolloverStrategies">RolloverStrategy</a>.
-					The triggering policy determines if a rollover should
-					be performed
-					while the RolloverStrategy defines how the rollover
-					should be done.
-					If no RolloverStrategy
-					is configured, FastRollingFileAppender will
-					use the
-					<a href="#DefaultRolloverStrategy">DefaultRolloverStrategy</a>.
-				</p>
-				<p>
-					File locking is not supported by the FastRollingFileAppender.
-				</p>
-				<table>
-					<tr>
-						<th>Parameter Name</th>
-						<th>Type</th>
-						<th>Description</th>
-					</tr>
-					<tr>
-						<td>append</td>
-						<td>boolean</td>
-						<td>When true - the default, records will be appended to the end
-							of the file. When set to false,
-							the file will be cleared before
-							new records are written.
-						</td>
-					</tr>
-					<tr>
-						<td>filter</td>
-						<td>Filter</td>
-						<td>A Filter to determine if the event should be handled by this
-							Appender. More than one Filter
-							may be used by using a
-							CompositeFilter.
-						</td>
-					</tr>
-					<tr>
-						<td>fileName</td>
-						<td>String</td>
-						<td>The name of the file to write to. If the file, or any of its
-							parent directories, do not exist,
-							they will be created.
-						</td>
-					</tr>
-					<tr>
-						<td>filePattern</td>
-						<td>String</td>
-						<td>
-							The pattern of the file name of the archived log file. The format
-							of the pattern should is
-							dependent on the RolloverPolicy that is
-							used. The DefaultRolloverPolicy
-							will accept both
-							a date/time
-							pattern compatible with
-							<a
-								href="http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html">
-								SimpleDateFormat</a>
-
-							and/or a %i which represents an integer counter. The pattern
-							also supports interpolation at
-							runtime so any of the Lookups (such
-							as the
-							<a href="./lookups.html#DateLookup">DateLookup</a>
-							can
-							be included in the pattern.
-						</td>
-					</tr>
-					<tr>
-						<td>immediateFlush</td>
-						<td>boolean</td>
-		              <td><p>When set to true - the default, each write will be followed by a flush.
-		                This will guarantee the data is written
-		                to disk but could impact performance.</p>
-		                <p>Flushing after every write is only useful when using this
-						appender with synchronous loggers. Asynchronous loggers and
-						appenders will automatically flush at the end of a batch of events, 
-						even if immediateFlush is set to false. This also guarantees
-						the data is written to disk but is more efficient.</p>
-		              </td>
-					</tr>
-					<tr>
-						<td>layout</td>
-						<td>Layout</td>

[... 311 lines stripped ...]

