karaf-commits mailing list archives

From jbono...@apache.org
Subject [11/15] karaf git commit: KARAF-3679 - Switch user guide to asciidoc
Date Tue, 05 Jan 2016 14:02:36 GMT
http://git-wip-us.apache.org/repos/asf/karaf/blob/9f08eb9e/manual/src/main/asciidoc/user-guide/jdbc.adoc
----------------------------------------------------------------------
diff --git a/manual/src/main/asciidoc/user-guide/jdbc.adoc b/manual/src/main/asciidoc/user-guide/jdbc.adoc
new file mode 100644
index 0000000..e3d9527
--- /dev/null
+++ b/manual/src/main/asciidoc/user-guide/jdbc.adoc
@@ -0,0 +1,227 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+==== DataSources (JDBC)
+
+The Apache Karaf DataSources (JDBC) is an optional enterprise feature.
+
+You have to install the `jdbc` feature first:
+
+----
+karaf@root()> feature:install jdbc
+----
+
+This feature provides an OSGi service to create/delete JDBC datasources in the container and perform database operations (SQL queries).
+
+This JDBC OSGi service can be manipulated programmatically (see the developer guide for details), using the `jdbc:*` commands, or using the JDBC MBean.
+
+===== Commands
+
+====== `jdbc:create`
+
+The `jdbc:create` command automatically creates a datasource definition file in the Apache Karaf `deploy` folder.
+
+The `jdbc:create` command accepts a set of options and a name argument:
+
+----
+karaf@root()> jdbc:create --help
+DESCRIPTION
+        jdbc:create
+
+        Create a JDBC datasource
+
+SYNTAX
+        jdbc:create [options] name
+
+ARGUMENTS
+        name
+                The JDBC datasource name
+
+OPTIONS
+        -u, --username
+                The database username
+        -v, --version
+                The version of the driver to use
+        -t, --type
+                The JDBC datasource type (generic, MySQL, Oracle, Postgres, H2, HSQL, Derby, MSSQL)
+        -url
+                The JDBC URL to use
+        -p, --password
+                The database password
+        -i, --install-bundles
+                Try to install the bundles providing the JDBC driver
+        -d, --driver
+                The classname of the JDBC driver to use. NB: this option is used only the type generic
+        --help
+                Display this help message
+
+----
+
+* the `name` argument is required. It's the name of the datasource. The name is used to identify the datasource, and to create the datasource definition file (`deploy/datasource-[name].xml`).
+* the `-u` option is optional. It defines the database username.
+* the `-v` option is optional. It "forces" a given JDBC driver version (only used with the `-i` option).
+* the `-t` option is required. It defines the JDBC datasource type. Accepted values are: MySQL, Oracle, Postgres, Derby, H2, HSQL, MSSQL, Generic. Generic is a generic configuration file using DBCP to create a pooled datasource. When using generic, it's up to you to install the JDBC driver and configure the `deploy/datasource-[name].xml` datasource file.
+* the `-url` option is optional. It defines the JDBC URL to access to the database.
+* the `-p` option is optional. It defines the database password.
+* the `-d` option is optional. It defines the JDBC driver classname to use (only used with the generic type).
+* the `-i` option is optional. If specified, the command will try to automatically install the OSGi bundles providing the JDBC driver (depending on the datasource type specified by the `-t` option).
+
+For instance, to create an embedded Apache Derby database in Apache Karaf, you can do:
+
+----
+karaf@root()> jdbc:create -t derby -u test -i test
+----
+
+We can note that the Derby bundle has been installed automatically, and the datasource has been created:
+
+----
+karaf@root()> la
+...
+87 | Active   |  80 | 10.8.2000002.1181258  | Apache Derby 10.8
+88 | Active   |  80 | 0.0.0                 | datasource-test.xml
+----
+
+We can see the `deploy/datasource-test.xml` datasource file.
+
+====== `jdbc:delete`
+
+The `jdbc:delete` command deletes a datasource by removing the `deploy/datasource-[name].xml` datasource file:
+
+----
+karaf@root()> jdbc:delete test
+----
+
+[NOTE]
+====
+The `jdbc:delete` command does not uninstall the JDBC driver bundles and does not remove the files created by the JDBC driver (or the database in case of an embedded database).
+It's up to you to remove them.
+====
+
+====== `jdbc:datasources`
+
+The `jdbc:datasources` command lists the JDBC datasources:
+
+----
+karaf@root()> jdbc:datasources
+Name       | Product      | Version              | URL
+------------------------------------------------------------------
+/jdbc/test | Apache Derby | 10.8.2.2 - (1181258) | jdbc:derby:test
+----
+
+====== `jdbc:info`
+
+The `jdbc:info` command provides details about a JDBC datasource:
+
+----
+karaf@root()> jdbc:info /jdbc/test
+Property       | Value
+--------------------------------------------------
+driver.version | 10.8.2.2 - (1181258)
+username       | APP
+db.version     | 10.8.2.2 - (1181258)
+db.product     | Apache Derby
+driver.name    | Apache Derby Embedded JDBC Driver
+url            | jdbc:derby:test
+----
+
+====== `jdbc:execute`
+
+The `jdbc:execute` command executes a SQL query that doesn't return any result on a given JDBC datasource.
+
+Typically, you can use the `jdbc:execute` command to create tables, insert values into tables, etc.
+
+For instance, we can create a `person` table on our `test` datasource:
+
+----
+karaf@root()> jdbc:execute /jdbc/test "create table person(name varchar(100), nick varchar(100))"
+----
+
+And we can insert some records in the `person` table:
+
+----
+karaf@root()> jdbc:execute /jdbc/test "insert into person(name, nick) values('foo','bar')"
+karaf@root()> jdbc:execute /jdbc/test "insert into person(name, nick) values('test','test')"
+----
+
+====== `jdbc:query`
+
+The `jdbc:query` command is similar to the `jdbc:execute` one but it displays the query result.
+
+For instance, to display the content of the `person` table, we can do:
+
+----
+karaf@root()> jdbc:query /jdbc/test "select * from person"
+NICK       | NAME
+--------------------------------
+bar        | foo
+test       | test
+----
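+
+From an application deployed in Apache Karaf, the same query can also be run with the plain JDBC API, looking up the
+datasource through the `osgi:service` JNDI scheme described in the Naming (JNDI) section. The following is a minimal
+sketch, assuming the datasource created above is published as an OSGi service with the
+`osgi.jndi.service.name=jdbc/test` service property:
+
+----
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import javax.naming.InitialContext;
+import javax.sql.DataSource;
+
+public class PersonReader {
+
+    public void dump() throws Exception {
+        // lookup the DataSource OSGi service corresponding to the /jdbc/test datasource
+        DataSource dataSource = (DataSource) new InitialContext()
+                .lookup("osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/test)");
+        try (Connection connection = dataSource.getConnection();
+             Statement statement = connection.createStatement();
+             ResultSet resultSet = statement.executeQuery("select * from person")) {
+            while (resultSet.next()) {
+                System.out.println(resultSet.getString("name") + " | " + resultSet.getString("nick"));
+            }
+        }
+    }
+}
+----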
+
+====== `jdbc:tables`
+
+The `jdbc:tables` command displays all tables available on a given JDBC datasource:
+
+----
+karaf@root()> jdbc:tables /jdbc/test
+REF_GENERATION | TYPE_NAME | TABLE_NAME       | TYPE_CAT | REMARKS | TYPE_SCHEM | TABLE_TYPE   | TABLE_SCHEM | TABLE_CAT | SELF_REFERENCING_COL_NAME
+----------------------------------------------------------------------------------------------------------------------------------------------------
+               |           | SYSALIASES       |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSCHECKS        |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSCOLPERMS      |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSCOLUMNS       |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSCONGLOMERATES |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSCONSTRAINTS   |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSDEPENDS       |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSFILES         |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSFOREIGNKEYS   |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSKEYS          |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSPERMS         |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSROLES         |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSROUTINEPERMS  |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSSCHEMAS       |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSSEQUENCES     |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSSTATEMENTS    |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSSTATISTICS    |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSTABLEPERMS    |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSTABLES        |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSTRIGGERS      |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSVIEWS         |          |         |            | SYSTEM TABLE | SYS         |           |
+               |           | SYSDUMMY1        |          |         |            | SYSTEM TABLE | SYSIBM      |           |
+               |           | PERSON           |          |         |            | TABLE        | APP         |           |
+----
+
+===== JMX JDBC MBean
+
+The JMX JDBC MBean provides the JDBC datasources, and the operations to manipulate datasources and database.
+
+The object name to use is `org.apache.karaf:type=jdbc,name=*`.
+
+====== Attributes
+
+The `Datasources` attribute provides tabular data describing all JDBC datasources, containing:
+
+* `name` is the JDBC datasource name
+* `product` is the database backend product
+* `url` is the JDBC URL used by the datasource
+* `version` is the database backend version.
+
+====== Operations
+
+* `create(name, type, jdbcDriverClassName, version, url, user, password, installBundles)` creates a JDBC datasource (the arguments correspond to the options of the `jdbc:create` command).
+* `delete(name)` deletes a JDBC datasource.
+* `info(datasource)` returns a Map (String/String) of details about a JDBC `datasource`.
+* `tables(datasource)` returns a tabular data containing the tables available on a JDBC `datasource`.
+* `execute(datasource, command)` executes a SQL command on the given JDBC `datasource`.
+* `query(datasource, query)` executes a SQL query on the given JDBC `datasource` and returns the execution result as tabular data.
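+
+For instance, a remote JMX client can invoke the `query` operation as in the following sketch (the JMX service URL,
+the karaf/karaf credentials, and the `name=root` instance name are assumptions matching a default local installation;
+adapt them to your environment):
+
+----
+import java.util.Collections;
+import javax.management.MBeanServerConnection;
+import javax.management.ObjectName;
+import javax.management.openmbean.TabularData;
+import javax.management.remote.JMXConnector;
+import javax.management.remote.JMXConnectorFactory;
+import javax.management.remote.JMXServiceURL;
+
+public class JdbcMBeanClient {
+
+    public static void main(String[] args) throws Exception {
+        // connect to the Karaf MBean server (default local URL and credentials assumed)
+        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root");
+        JMXConnector connector = JMXConnectorFactory.connect(url,
+                Collections.singletonMap(JMXConnector.CREDENTIALS, new String[]{"karaf", "karaf"}));
+        try {
+            MBeanServerConnection mbeanServer = connector.getMBeanServerConnection();
+            ObjectName jdbc = new ObjectName("org.apache.karaf:type=jdbc,name=root");
+            // execute a SQL query on the /jdbc/test datasource and display the tabular result
+            TabularData result = (TabularData) mbeanServer.invoke(jdbc, "query",
+                    new Object[]{"/jdbc/test", "select * from person"},
+                    new String[]{String.class.getName(), String.class.getName()});
+            System.out.println(result);
+        } finally {
+            connector.close();
+        }
+    }
+}
+----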
+

http://git-wip-us.apache.org/repos/asf/karaf/blob/9f08eb9e/manual/src/main/asciidoc/user-guide/jms.adoc
----------------------------------------------------------------------
diff --git a/manual/src/main/asciidoc/user-guide/jms.adoc b/manual/src/main/asciidoc/user-guide/jms.adoc
new file mode 100644
index 0000000..e5227a3
--- /dev/null
+++ b/manual/src/main/asciidoc/user-guide/jms.adoc
@@ -0,0 +1,318 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+==== MOM (JMS)
+
+The Apache Karaf MOM (Messaging Oriented Middleware/JMS) is an optional enterprise feature.
+
+You have to install the `jms` feature first:
+
+----
+karaf@root()> feature:install jms
+----
+
+The `jms` feature doesn't install a JMS broker: it just installs the OSGi service, commands, and MBean used to interact
+with a JMS broker (not the broker itself).
+
+It means that you have to install a JMS broker yourself.
+
+This JMS broker can be available:
+
+* outside of Apache Karaf, as a standalone broker. In that case, Apache Karaf JMS will remotely connect to the JMS broker.
+ For instance, you can use this topology with Apache ActiveMQ or IBM WebsphereMQ.
+* embedded in Apache Karaf. With this topology, Apache Karaf itself provides a JMS broker service. Apache ActiveMQ provides
+ native support for Apache Karaf.
+
+For instance, you can install Apache ActiveMQ directly in Apache Karaf:
+
+----
+karaf@root()> feature:repo-add activemq
+Adding feature url mvn:org.apache.activemq/activemq-karaf/LATEST/xml/features
+karaf@root()> feature:install activemq-broker
+----
+
+The `activemq-broker` feature installs:
+
+* an Apache ActiveMQ broker directly in Apache Karaf, bound to port `61616` by default.
+* the Apache ActiveMQ WebConsole bound to `http://0.0.0.0:8181/activemqweb` by default.
+
+The Apache Karaf `jms` feature provides an OSGi service to create/delete JMS connection factories in the container
+and perform JMS operations (send or consume messages, get information about a JMS broker, list the destinations, ...).
+
+This JMS OSGi service can be manipulated programmatically (see the developer guide for details), using the `jms:*` commands, or using the JMS MBean.
+
+===== Commands
+
+====== `jms:create`
+
+The `jms:create` command creates a JMS connection factory in the Apache Karaf container. It automatically creates a
+blueprint XML file in the `deploy` folder containing the JMS connection factory definition corresponding
+to the type that you specify.
+
+The `jms:create` command accepts different arguments and options:
+
+----
+karaf@root()> jms:create --help
+DESCRIPTION
+        jms:create
+
+        Create a JMS connection factory.
+
+SYNTAX
+        jms:create [options] name
+
+ARGUMENTS
+        name
+                The JMS connection factory name
+
+OPTIONS
+        -t, --type
+                The JMS connection factory type (ActiveMQ or WebsphereMQ)
+                (defaults to ActiveMQ)
+        -u, --username
+                Username to connect to the JMS broker
+                (defaults to karaf)
+        --help
+                Display this help message
+        --url
+                URL of the JMS broker. For WebsphereMQ type, the URL is hostname/port/queuemanager/channel
+                (defaults to tcp://localhost:61616)
+        -p, --password
+                Password to connect to the JMS broker
+                (defaults to karaf)
+
+----
+
+* the `name` argument is required. It's the name of the JMS connection factory. The name is used to identify the connection factory, and to create the connection factory definition file (`deploy/connectionfactory-[name].xml`).
+* the `-t` (`--type`) option is required. It's the type of the JMS connection factory. Currently only the `activemq` and `webspheremq` types are supported. If you want to use another type of JMS connection factory, you can create the `deploy/connectionfactory-[name].xml` file by hand (using one as a template).
+* the `--url` option is required. It's the URL used by the JMS connection factory to connect to the broker. If the type is `activemq`, the URL looks like `tcp://localhost:61616`. If the type is `webspheremq`, the URL looks like `host/port/queuemanager/channel`.
+* the `-u` (`--username`) option is optional (karaf by default). If the broker requires authentication, it's the username used.
+* the `-p` (`--password`) option is optional (karaf by default). If the broker requires authentication, it's the password used.
+
+For instance, to create a JMS connection factory for an Apache ActiveMQ broker, you can do:
+
+----
+karaf@root()> jms:create -t activemq --url tcp://localhost:61616 test
+----
+
+[NOTE]
+====
+The `jms:create` command doesn't install any feature or bundle providing the JMS connection factory classes (and dependencies).
+You have to install the required features (for instance `activemq-broker` feature for Apache ActiveMQ), or bundles (for IBM WebsphereMQ) providing the JMS connection factory packages and classes.
+====
+
+In the previous example, we assume that you previously installed the `activemq-broker` feature.
+
+We can see the created JMS connection factory:
+
+----
+karaf@root()> la
+...
+151 | Active   |  80 | 0.0.0                 | connectionfactory-test.xml
+----
+
+The `connectionfactory-test.xml` file has been created in the `deploy` folder.
+
+By default, the `jms:create` command constructs a JNDI name for the connection factory: `/jms/[name]`.
+
+It means that the connection factory name to use for the other `jms:*` commands is `/jms/[name]`.
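+
+This JNDI name also makes the connection factory usable from application code with the standard JMS API. The following
+is a minimal sketch, assuming the connection factory created above is published as an OSGi service with the
+`osgi.jndi.service.name=jms/test` service property and that the broker accepts the karaf/karaf credentials:
+
+----
+import javax.jms.Connection;
+import javax.jms.ConnectionFactory;
+import javax.jms.MessageProducer;
+import javax.jms.Session;
+import javax.naming.InitialContext;
+
+public class Producer {
+
+    public void send() throws Exception {
+        // lookup the connection factory created by jms:create test
+        ConnectionFactory factory = (ConnectionFactory) new InitialContext()
+                .lookup("osgi:service/javax.jms.ConnectionFactory/(osgi.jndi.service.name=jms/test)");
+        Connection connection = factory.createConnection("karaf", "karaf");
+        try {
+            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+            MessageProducer producer = session.createProducer(session.createQueue("MyQueue"));
+            producer.send(session.createTextMessage("Hello World"));
+        } finally {
+            connection.close();
+        }
+    }
+}
+----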
+
+====== `jms:delete`
+
+The `jms:delete` command deletes a JMS connection factory. The `name` argument is the name that you used at creation time:
+
+----
+karaf@root()> jms:delete test
+----
+
+====== `jms:connectionfactories`
+
+The `jms:connectionfactories` command lists the JMS connection factories:
+
+----
+karaf@root()> jms:connectionfactories 
+JMS Connection Factory
+----------------------
+/jms/test     
+----
+
+====== `jms:info`
+
+The `jms:info` command provides details about the JMS connection factory:
+
+----
+karaf@root()> jms:info /jms/test
+Property | Value
+-------------------
+product  | ActiveMQ
+version  | 5.9.0
+----
+
+You can see the JMS broker product and version.
+
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+
+====== `jms:queues`
+
+The `jms:queues` command lists the JMS queues available on a JMS broker. For instance:
+
+----
+karaf@root()> jms:queues /jms/test
+JMS Queues
+----------
+MyQueue
+----
+
+where `/jms/test` is the name of the JMS connection factory.
+
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+
+[NOTE]
+====
+Depending on the JMS connection factory type, this command may not work.
+For now, the command works only with Apache ActiveMQ.
+====
+
+====== `jms:topics`
+
+The `jms:topics` command lists the JMS topics available on a JMS broker. For instance:
+
+----
+karaf@root()> jms:topics /jms/test
+JMS Topics
+----------
+MyTopic
+----
+
+where `/jms/test` is the name of the JMS connection factory.
+
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+
+[NOTE]
+====
+Depending on the JMS connection factory type, this command may not work.
+For now, the command works only with Apache ActiveMQ.
+====
+
+====== `jms:send`
+
+The `jms:send` command sends a message to a given JMS queue.
+
+For instance, to send a message containing `Hello World` in the `MyQueue` queue, you can do:
+
+----
+karaf@root()> jms:send /jms/test MyQueue "Hello World"
+----
+
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+
+====== `jms:consume`
+
+The `jms:consume` command consumes messages from a JMS queue.
+
+For instance, to consume all messages from `MyQueue`, you can do:
+
+----
+karaf@root()> jms:consume /jms/test MyQueue
+2 message(s) consumed
+----
+
+If you want to consume only some messages, you can define a selector using the `-s` (`--selector`) option.
+
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+
+[NOTE]
+====
+The `jms:consume` command just consumes (so removes) messages from a JMS queue. It doesn't display the messages.
+If you want to see the details of messages, you can use the `jms:browse` command.
+====
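+
+Programmatically, the equivalent of `jms:consume` is a plain JMS consumer. A minimal sketch, using the same
+hypothetical `jms/test` JNDI service name as in the producer example above:
+
+----
+import javax.jms.Connection;
+import javax.jms.ConnectionFactory;
+import javax.jms.Message;
+import javax.jms.MessageConsumer;
+import javax.jms.Session;
+import javax.naming.InitialContext;
+
+public class Consumer {
+
+    public void consume() throws Exception {
+        ConnectionFactory factory = (ConnectionFactory) new InitialContext()
+                .lookup("osgi:service/javax.jms.ConnectionFactory/(osgi.jndi.service.name=jms/test)");
+        Connection connection = factory.createConnection("karaf", "karaf");
+        try {
+            connection.start();
+            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+            MessageConsumer consumer = session.createConsumer(session.createQueue("MyQueue"));
+            // wait up to 5 seconds for a single message
+            Message message = consumer.receive(5000);
+            System.out.println(message);
+        } finally {
+            connection.close();
+        }
+    }
+}
+----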
+
+====== `jms:count`
+
+The `jms:count` command counts the number of pending messages in a JMS queue.
+
+For instance, if you want to know the number of messages on `MyQueue`, you can do:
+
+----
+karaf@root()> jms:count /jms/test MyQueue
+Messages Count
+--------------
+8
+----
+
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+
+====== `jms:browse`
+
+The `jms:browse` command browses a JMS queue and displays details about the messages.
+
+For instance, to browse the `MyQueue` queue:
+
+----
+karaf@root()> jms:browse /jms/test MyQueue
+Message ID                              | Content        | Charset | Type | Correlation ID | Delivery Mode | Destination     | Expiration | Priority | Redelivered | ReplyTo | Timestamp
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ID:vostro-59602-1387462183019-3:1:1:1:1 | Hello World    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:10:12 CET 2013
+ID:vostro-59602-1387462183019-3:2:1:1:1 | Hello ActiveMQ | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:10:16 CET 2013
+ID:vostro-59602-1387462183019-3:3:1:1:1 | Hello Karaf    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:10:19 CET 2013
+----
+
+By default, the message properties are not displayed. You can use the `-v` (`--verbose`) option to display the properties:
+
+----
+karaf@root()> jms:browse -v /jms/test MyQueue
+Message ID                              | Content        | Charset | Type | Correlation ID | Delivery Mode | Destination     | Expiration | Priority | Redelivered | ReplyTo | Timestamp                    | Properties
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ID:vostro-59602-1387462183019-3:1:1:1:1 | Hello World    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:10:12 CET 2013 |
+ID:vostro-59602-1387462183019-3:2:1:1:1 | Hello ActiveMQ | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:10:16 CET 2013 |
+ID:vostro-59602-1387462183019-3:3:1:1:1 | Hello Karaf    | UTF-8   |      |                | Persistent    | queue://MyQueue | Never      | 4        | false       |         | Thu Dec 19 15:10:19 CET 2013 |
+----
+
+If you want to browse only some messages, you can define a selector using the `-s` (`--selector`) option.
+
+If the JMS broker requires authentication, you can use the `-u` (`--username`) and `-p` (`--password`) options.
+
+====== `jms:move`
+
+The `jms:move` command consumes all messages from a JMS queue and sends them to another one.
+
+For instance, to move all messages from `MyQueue` queue to `AnotherQueue` queue, you can do:
+
+----
+karaf@root()> jms:move /jms/test MyQueue AnotherQueue
+3 message(s) moved
+----
+
+===== JMX JMS MBean
+
+The JMX JMS MBean provides the attributes and operations to manipulate the JMS connection factories and JMS messages.
+
+The object name to use is `org.apache.karaf:type=jms,name=*`.
+
+====== Attributes
+
+The `Connectionfactories` attribute provides the list of all JMS connection factory names.
+
+====== Operations
+
+* `create(name, type, url)` creates a JMS connection factory.
+* `delete(name)` deletes a JMS connection factory.
+* `Map<String, String> info(connectionFactory, username, password)` gets details about a JMS connection factory and broker.
+* `int count(connectionFactory, queue, username, password)` counts the number of pending messages on a JMS queue.
+* `List<String> queues(connectionFactory, username, password)` lists the JMS queues available on the JMS broker.
+* `List<String> topics(connectionFactory, username, password)` lists the JMS topics available on the JMS broker.
+* `TabularData browse(connectionFactory, queue, selector, username, password)` browses a JMS queue and provides a table of JMS messages.
+* `send(connectionFactory, queue, content, replyTo, username, password)` sends a JMS message to a target queue.
+* `int consume(connectionFactory, queue, selector, username, password)` consumes JMS messages from a JMS queue.
+* `int move(connectionFactory, source, destination, selector, username, password)` moves messages from one JMS queue to another.
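+
+For instance, once connected to the MBean server (as shown for the JDBC MBean), a client can list the connection
+factories and send a message. This is a sketch assuming the operation signatures listed above and the default
+`name=root` instance name:
+
+----
+import java.util.List;
+import javax.management.MBeanServerConnection;
+import javax.management.ObjectName;
+
+public class JmsMBeanClient {
+
+    public void sendHello(MBeanServerConnection mbeanServer) throws Exception {
+        ObjectName jms = new ObjectName("org.apache.karaf:type=jms,name=root");
+        // list the registered JMS connection factory names
+        List<?> factories = (List<?>) mbeanServer.getAttribute(jms, "Connectionfactories");
+        System.out.println(factories);
+        // send "Hello World" to MyQueue through the /jms/test connection factory (no replyTo)
+        mbeanServer.invoke(jms, "send",
+                new Object[]{"/jms/test", "MyQueue", "Hello World", null, "karaf", "karaf"},
+                new String[]{String.class.getName(), String.class.getName(), String.class.getName(),
+                             String.class.getName(), String.class.getName(), String.class.getName()});
+    }
+}
+----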
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/karaf/blob/9f08eb9e/manual/src/main/asciidoc/user-guide/jndi.adoc
----------------------------------------------------------------------
diff --git a/manual/src/main/asciidoc/user-guide/jndi.adoc b/manual/src/main/asciidoc/user-guide/jndi.adoc
new file mode 100644
index 0000000..d11d86b
--- /dev/null
+++ b/manual/src/main/asciidoc/user-guide/jndi.adoc
@@ -0,0 +1,224 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+==== Naming (JNDI)
+
+The Apache Karaf Naming (JNDI) is an optional enterprise feature.
+
+You have to install the `jndi` feature first:
+
+----
+karaf@root()> feature:install jndi
+----
+
+Apache Karaf provides complete JNDI support.
+
+You have two parts in the Apache Karaf JNDI support:
+
+* a fully compliant implementation of the OSGi Alliance JNDI Service specification.
+* a more "regular" JNDI context, containing different names that you can administrate.
+
+===== OSGi Services Registry and JNDI
+
+The OSGi Service Registry provides centralized register/query capabilities for OSGi services.
+
+A common pattern outside of OSGi is to make use of the JNDI API to access services from a directory system.
+The OSGi service registry can be viewed as an example of such a system.
+
+Apache Karaf supports the `osgi:service` lookup scheme as defined by the JNDI Service Specification.
+
+The lookup syntax is:
+
+----
+osgi:service/<interface>[/<filter>]
+----
+
+For instance, you can directly use JNDI to get an OSGi service:
+
+----
+Context ctx = new InitialContext();
+Runnable r = (Runnable) ctx.lookup("osgi:service/java.lang.Runnable");
+----
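+
+The filter part is optional: when several services expose the same interface, you can narrow the lookup with an LDAP
+filter on the service properties. For instance (the `javax.sql.DataSource` interface and the `osgi.jndi.service.name`
+property used here are just examples):
+
+----
+Context ctx = new InitialContext();
+DataSource ds = (DataSource) ctx.lookup("osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/test)");
+----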
+
+===== JNDI service
+
+Apache Karaf also supports regular JNDI, including a directory system where you can register name bindings, sub-contexts, etc.
+
+It supports the standard JNDI API:
+
+----
+Context ctx = new InitialContext();
+Runnable r = (Runnable) ctx.lookup("this/is/the/name");
+----
+
+It also allows you to bind some OSGi services as "pure" JNDI names. In that case, you don't have to use the specific
+`osgi:service` scheme.
+
+===== Commands
+
+Apache Karaf provides specific commands to manipulate the JNDI service.
+
+====== `jndi:names`
+
+The `jndi:names` command lists all JNDI names. It groups both the JNDI names from the `osgi:service` scheme and the
+regular JNDI names:
+
+----
+karaf@root()> jndi:names
+JNDI Name         | Class Name
+------------------------------------------------------------------
+osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl
+jndi/service      | org.apache.karaf.jndi.internal.JndiServiceImpl
+----
+
+We can see here the `osgi:service/jndi` name (using the `osgi:service` scheme) and `jndi/service` name (using the
+regular JNDI service).
+
+The `jndi:names` command accepts an optional `context` argument to list names on the given context.
+
+For instance, you can list only names in the `jndi` sub-context:
+
+----
+karaf@root()> jndi:names jndi
+JNDI Name | Class Name
+----------------------------------------------------------
+service   | org.apache.karaf.jndi.internal.JndiServiceImpl
+----
+
+[NOTE]
+====
+The `jndi:names` command lists only names (fully qualified names). It means that empty JNDI sub-contexts are not displayed.
+To display all JNDI sub-contexts (empty or not), you can use the `jndi:contexts` command.
+====
+
+====== `jndi:contexts`
+
+The `jndi:contexts` command lists all JNDI sub-contexts:
+
+----
+karaf@root()> jndi:contexts
+JNDI Sub-Context
+----------------
+other/context
+foo/bar
+----
+
+====== `jndi:create`
+
+The `jndi:create` command creates a new JNDI sub-context:
+
+----
+karaf@root()> jndi:create my/company
+----
+
+====== `jndi:delete`
+
+The `jndi:delete` command deletes a JNDI sub-context:
+
+----
+karaf@root()> jndi:delete my/company
+----
+
+====== `jndi:alias`
+
+The `jndi:alias` command creates a new JNDI name (alias) for an existing one.
+
+The existing JNDI name can be a regular one:
+
+----
+karaf@root()> jndi:alias bean/services/jndi aliases/services/jndi
+karaf@root()> jndi:names
+JNDI Name             | Class Name
+----------------------------------------------------------------------
+osgi:service/jndi     | org.apache.karaf.jndi.internal.JndiServiceImpl
+bean/services/jndi    | org.apache.karaf.jndi.internal.JndiServiceImpl
+aliases/services/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl
+----
+
+or a name from the `osgi:service` scheme:
+
+----
+karaf@root()> jndi:alias osgi:service/jndi alias/jndi/service
+karaf@root()> jndi:names
+JNDI Name          | Class Name
+-------------------------------------------------------------------
+osgi:service/jndi  | org.apache.karaf.jndi.internal.JndiServiceImpl
+alias/jndi/service | org.apache.karaf.jndi.internal.JndiServiceImpl
+----
+
+[NOTE]
+====
+The `jndi:alias` command automatically creates all required JNDI sub-contexts.
+====
+
+====== `jndi:bind`
+
+The `jndi:bind` command binds an OSGi service with a JNDI name.
+
+The `jndi:bind` command requires an OSGi service ID and a JNDI name. The OSGi service ID can be found using the `service:list` command.
+
+For instance, we can bind the OSGi service with ID 344 with the JNDI name `services/kar`:
+
+----
+karaf@root()> jndi:bind 344 services/kar
+karaf@root()> jndi:names
+JNDI Name         | Class Name
+-------------------------------------------------------------------------------
+osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl
+services/kar      | org.apache.karaf.kar.internal.KarServiceImpl
+----
+
+====== `jndi:unbind`
+
+The `jndi:unbind` command unbinds a given JNDI name:
+
+----
+karaf@root()> jndi:names
+JNDI Name         | Class Name
+-------------------------------------------------------------------------------
+osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl
+services/kar      | org.apache.karaf.kar.internal.KarServiceImpl
+karaf@root()> jndi:unbind services/kar
+karaf@root()> jndi:names
+JNDI Name         | Class Name
+-------------------------------------------------------------------------------
+osgi:service/jndi | org.apache.karaf.jndi.internal.JndiServiceImpl
+----
+
+[NOTE]
+====
+It's not possible to unbind a name from the `osgi:service` scheme, as it's linked to an OSGi service.
+====
+
+===== JMX JndiMBean
+
+The JMX JndiMBean provides the JNDI names, and the operations to manipulate the JNDI service.
+
+The object name to use is `org.apache.karaf:type=jndi,name=*`.
+
+====== Attributes
+
+The `Names` attribute provides a map containing all JNDI names and class names from both the `osgi:service` scheme
+and the regular JNDI service.
+
+The `Contexts` attribute provides a list containing all JNDI sub-contexts.
+
+====== Operations
+
+* `getNames(context)` provides a map containing JNDI names and class names in a given JNDI sub-context.
+* `create(context)` creates a new JNDI sub-context.
+* `delete(context)` deletes a JNDI sub-context.
+* `alias(name, alias)` creates a JNDI name (alias) for a given one.
+* `bind(serviceId, name)` binds a JNDI name using an OSGi service (identified by its ID).
+* `unbind(name)` unbinds a JNDI name.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/karaf/blob/9f08eb9e/manual/src/main/asciidoc/user-guide/jpa.adoc
----------------------------------------------------------------------
diff --git a/manual/src/main/asciidoc/user-guide/jpa.adoc b/manual/src/main/asciidoc/user-guide/jpa.adoc
new file mode 100644
index 0000000..f89bcde
--- /dev/null
+++ b/manual/src/main/asciidoc/user-guide/jpa.adoc
@@ -0,0 +1,40 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+==== Persistence (JPA)
+
+Apache Karaf makes JPA persistence providers (such as Apache OpenJPA) easy to use (in an OSGi way) and provides
+container managed persistence for applications (using Blueprint).
+
+Apache Karaf embeds Aries JPA, providing a very easy way to develop applications that use JPA persistence.
+
+See the developer guide for details about developing applications that use JPA.
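+
+For instance, a persistence bundle typically embeds a `META-INF/persistence.xml` describing the persistence unit. The
+following is a minimal sketch (the unit name, the entity class, and the `jdbc/test` datasource name are hypothetical):
+
+----
+<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
+    <persistence-unit name="my-unit" transaction-type="JTA">
+        <!-- datasource looked up through the OSGi JNDI scheme -->
+        <jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/test)</jta-data-source>
+        <class>my.company.Person</class>
+    </persistence-unit>
+</persistence>
+----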
+
+===== Persistence engine features
+
+Apache Karaf provides a set of ready to use persistence engine features:
+
+* Apache OpenJPA. The `openjpa` feature installs the `jpa` feature with Apache OpenJPA as the persistence engine:
+
+----
+karaf@root()> feature:install openjpa
+----
+
+* Hibernate. The `hibernate` feature installs the `jpa` feature with the Hibernate persistence engine:
+
+----
+karaf@root()> feature:install hibernate
+----
+
+* EclipseLink. The `eclipselink` feature will be available in the next Apache Karaf release.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/karaf/blob/9f08eb9e/manual/src/main/asciidoc/user-guide/jta.adoc
----------------------------------------------------------------------
diff --git a/manual/src/main/asciidoc/user-guide/jta.adoc b/manual/src/main/asciidoc/user-guide/jta.adoc
new file mode 100644
index 0000000..7b5f8ba
--- /dev/null
+++ b/manual/src/main/asciidoc/user-guide/jta.adoc
@@ -0,0 +1,120 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+==== Transaction (JTA)
+
+Apache Karaf provides container managed transactions, available as OSGi services.
+
+As with most of the enterprise features, it's an optional feature that you can install with:
+
+----
+karaf@root()> feature:install transaction
+----
+
+However, the `transaction` feature is installed (as a transitive dependency) when installing enterprise features
+(like `jdbc` or `jms` features for instance).
+
+===== Apache Aries Transaction and ObjectWeb HOWL
+
+The `transaction` feature uses Apache Aries and ObjectWeb HOWL. Apache Aries Transaction "exposes" the transaction
+manager as an OSGi service. The actual implementation of the transaction manager is ObjectWeb HOWL.
+
+ObjectWeb HOWL is a logger implementation providing features required by the ObjectWeb JOTM project, with a public API
+that is generally usable by any Transaction Manager.
+ObjectWeb HOWL uses unformatted binary logs to maximize performance and specifies a journalization API with methods
+necessary to support JOTM recovery operations.
+
+ObjectWeb HOWL is intended to be used for logging of temporary data such as XA transaction events.
+It is not a replacement for traditional log kits such as LOG4J and Java SE Logging.
+
+In Apache Karaf, ObjectWeb HOWL (High-speed ObjectWeb Logger) is used to implement TransactionLog (in Aries Transaction),
+providing a high-performance transaction manager in an OSGi way.
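+
+The transaction manager is then usable from application code through the standard JTA API. A minimal sketch, assuming
+the `jndi` feature is installed so that the `UserTransaction` OSGi service can be looked up with the `osgi:service`
+scheme:
+
+----
+import javax.naming.InitialContext;
+import javax.transaction.UserTransaction;
+
+public class TransactionalWork {
+
+    public void doWork() throws Exception {
+        UserTransaction transaction = (UserTransaction) new InitialContext()
+                .lookup("osgi:service/javax.transaction.UserTransaction");
+        transaction.begin();
+        try {
+            // enlist and use XA resources (JDBC, JMS, ...) here
+            transaction.commit();
+        } catch (Exception e) {
+            transaction.rollback();
+            throw e;
+        }
+    }
+}
+----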
+
+===== Configuration
+
+The installation of the `transaction` feature installs a new configuration: `org.apache.aries.transaction`.
+
+You can see the configuration properties using:
+
+----
+karaf@root()> config:list "(service.pid=org.apache.aries.transaction)"
+----------------------------------------------------------------
+Pid:            org.apache.aries.transaction
+BundleLocation: mvn:org.apache.aries.transaction/org.apache.aries.transaction.manager/1.1.0
+Properties:
+   aries.transaction.recoverable = true
+   aries.transaction.timeout = 600
+   service.pid = org.apache.aries.transaction
+   org.apache.karaf.features.configKey = org.apache.aries.transaction
+   aries.transaction.howl.maxBlocksPerFile = 512
+   aries.transaction.howl.maxLogFiles = 2
+   aries.transaction.howl.logFileDir = /opt/apache-karaf-3.0.0/data/txlog
+   aries.transaction.howl.bufferSizeKBytes = 4
+----
+
+* `aries.transaction.recoverable` property is a flag to enable support of recoverable resources or not. A recoverable
+ resource is a transactional object whose state is saved to stable storage if the transaction is committed, and whose
+ state can be reset to what it was at the beginning of the transaction if the transaction is rolled back.
+ At commit time, the transaction manager uses the two-phase XA protocol when communicating with the recoverable resource
+ to ensure transactional integrity when more than one recoverable resource is involved in the transaction being committed.
+ Transactional databases and message brokers like Apache ActiveMQ are examples of recoverable resources.
+ A recoverable resource is represented using the javax.transaction.xa.XAResource interface in JTA.
+ Default is `true`.
+* `aries.transaction.timeout` property is the transaction timeout. If a transaction has a lifetime longer than this timeout,
+ a transaction exception is raised and the transaction is rolled back. Default is `600` (10 minutes).
+* `aries.transaction.howl.logFileDir` property is the directory where the transaction logs (journal) are stored.
+ Default is `KARAF_DATA/txlog`.
+* `aries.transaction.howl.maxLogFiles` property is the maximum number of transaction log files to retain. Combined with the
+ `aries.transaction.howl.maxBlocksPerFile`, it defines the transaction retention.
+
+You can change the configuration directly using the `config:*` commands, or the Config MBean.
+
+For instance, to increase the transaction timeout, you can do:
+
+----
+karaf@root()> config:edit org.apache.aries.transaction
+karaf@root()> config:property-set aries.transaction.timeout 1200
+karaf@root()> config:update
+karaf@root()> config:list "(service.pid=org.apache.aries.transaction)"
+----------------------------------------------------------------
+Pid:            org.apache.aries.transaction
+BundleLocation: mvn:org.apache.aries.transaction/org.apache.aries.transaction.manager/1.1.0
+Properties:
+   aries.transaction.recoverable = true
+   aries.transaction.timeout = 1200
+   service.pid = org.apache.aries.transaction
+   org.apache.karaf.features.configKey = org.apache.aries.transaction
+   aries.transaction.howl.maxBlocksPerFile = 512
+   aries.transaction.howl.maxLogFiles = 2
+   aries.transaction.howl.logFileDir = /opt/apache-karaf-3.0.0/data/txlog
+   aries.transaction.howl.bufferSizeKBytes = 4
+----
+
+[NOTE]
+====
+The `transaction` feature defines the configuration in memory by default. It means that changes that you make will
+be lost when Apache Karaf restarts.
+If you want to define your own transaction configuration at startup, you have to create an `etc/org.apache.aries.transaction.cfg`
+configuration file and set the properties and values in the file. For instance:
+
+----
+# etc/org.apache.aries.transaction.cfg
+aries.transaction.recoverable = true
+aries.transaction.timeout = 1200
+aries.transaction.howl.maxBlocksPerFile = 512
+aries.transaction.howl.maxLogFiles = 2
+aries.transaction.howl.logFileDir = /opt/apache-karaf-3.0.0/data/txlog
+aries.transaction.howl.bufferSizeKBytes = 4
+----
+====

http://git-wip-us.apache.org/repos/asf/karaf/blob/9f08eb9e/manual/src/main/asciidoc/user-guide/kar.adoc
----------------------------------------------------------------------
diff --git a/manual/src/main/asciidoc/user-guide/kar.adoc b/manual/src/main/asciidoc/user-guide/kar.adoc
new file mode 100644
index 0000000..59dcbc0
--- /dev/null
+++ b/manual/src/main/asciidoc/user-guide/kar.adoc
@@ -0,0 +1,313 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+=== KAR
+
+As described in the link:provisioning[Provisioning section], Apache Karaf features describe applications.
+
+A feature defines different resources to resolve using URLs (for instance, bundle URLs or configuration file URLs).
+As described in the link:urls[Artifacts repositories and URLs section], Apache Karaf looks for artifacts (bundles,
+configuration files, ...) in the artifact repositories.
+Apache Karaf may need to download artifacts from remote repositories.
+
+Apache Karaf provides a special type of artifact that packages a features XML and all resources described in the features
+of this XML. This artifact is named a KAR (KAraf aRchive).
+
+Basically, a KAR file is a jar (so a zip archive) which contains a set of features XML files and the bundle jar files
+referenced by those features.
+
+A KAR file contains a `repository` folder containing:
+
+* a set of features XML files
+* the artifacts following the Maven directory structure (`groupId/artifactId/version/artifactId-version.type`).
+
+For instance, the `spring-3.0.0.kar` contains:
+
+----
+~$ unzip -l spring-3.0.0.kar
+Archive:  spring-3.0.0.kar
+  Length      Date    Time    Name
+---------  ---------- -----   ----
+      143  2013-12-06 10:52   META-INF/MANIFEST.MF
+    12186  2013-12-06 10:52   repository/org/apache/karaf/features/spring/3.0.0/spring-3.0.0-features.xml
+   575389  2013-12-06 10:52   repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar
+   232019  2013-12-06 10:52   repository/commons-beanutils/commons-beanutils/1.8.3/commons-beanutils-1.8.3.jar
+   673109  2013-12-06 10:52   repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.struts/1.3.10_1/org.apache.servicemix.bundles.struts-1.3.10_1.jar
+    37084  2013-12-06 10:52   repository/org/springframework/org.springframework.web.struts/3.2.4.RELEASE/org.springframework.web.struts-3.2.4.RELEASE.jar
+     7411  2013-12-06 10:52   repository/org/springframework/org.springframework.instrument/3.2.4.RELEASE/org.springframework.instrument-3.2.4.RELEASE.jar
+   246881  2013-12-06 10:52   repository/org/springframework/org.springframework.transaction/3.2.4.RELEASE/org.springframework.transaction-3.2.4.RELEASE.jar
+    16513  2013-12-06 10:52   repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.aopalliance/1.0_6/org.apache.servicemix.bundles.aopalliance-1.0_6.jar
+   881124  2013-12-06 10:52   repository/org/springframework/org.springframework.core/3.2.4.RELEASE/org.springframework.core-3.2.4.RELEASE.jar
+   199240  2013-12-06 10:52   repository/org/springframework/org.springframework.expression/3.2.4.RELEASE/org.springframework.expression-3.2.4.RELEASE.jar
+   614646  2013-12-06 10:52   repository/org/springframework/org.springframework.beans/3.2.4.RELEASE/org.springframework.beans-3.2.4.RELEASE.jar
+   340841  2013-12-06 10:52   repository/org/springframework/org.springframework.aop/3.2.4.RELEASE/org.springframework.aop-3.2.4.RELEASE.jar
+   877369  2013-12-06 10:52   repository/org/springframework/org.springframework.context/3.2.4.RELEASE/org.springframework.context-3.2.4.RELEASE.jar
+   130224  2013-12-06 10:52   repository/org/springframework/org.springframework.context.support/3.2.4.RELEASE/org.springframework.context.support-3.2.4.RELEASE.jar
+    30640  2013-12-06 10:52   repository/org/apache/karaf/deployer/org.apache.karaf.deployer.spring/3.0.0/org.apache.karaf.deployer.spring-3.0.0.jar
+    51951  2013-12-06 10:52   repository/org/springframework/org.springframework.aspects/3.2.4.RELEASE/org.springframework.aspects-3.2.4.RELEASE.jar
+   411175  2013-12-06 10:52   repository/org/springframework/org.springframework.jdbc/3.2.4.RELEASE/org.springframework.jdbc-3.2.4.RELEASE.jar
+    48049  2013-12-06 10:52   repository/javax/portlet/portlet-api/2.0/portlet-api-2.0.jar
+   190883  2013-12-06 10:52   repository/org/springframework/org.springframework.web.portlet/3.2.4.RELEASE/org.springframework.web.portlet-3.2.4.RELEASE.jar
+   635680  2013-12-06 10:52   repository/org/springframework/org.springframework.web/3.2.4.RELEASE/org.springframework.web-3.2.4.RELEASE.jar
+   645946  2013-12-06 10:52   repository/org/springframework/org.springframework.web.servlet/3.2.4.RELEASE/org.springframework.web.servlet-3.2.4.RELEASE.jar
+   464911  2013-12-06 10:52   repository/org/springframework/org.springframework.test/3.2.4.RELEASE/org.springframework.test-3.2.4.RELEASE.jar
+    69784  2013-12-06 10:52   repository/org/springframework/osgi/spring-osgi-web/1.2.1/spring-osgi-web-1.2.1.jar
+    16030  2013-12-06 10:52   repository/org/apache/geronimo/specs/geronimo-jta_1.1_spec/1.1.1/geronimo-jta_1.1_spec-1.1.1.jar
+    32359  2013-12-06 10:52   repository/org/apache/geronimo/specs/geronimo-jms_1.1_spec/1.1.1/geronimo-jms_1.1_spec-1.1.1.jar
+   208684  2013-12-06 10:52   repository/org/springframework/org.springframework.jms/3.2.4.RELEASE/org.springframework.jms-3.2.4.RELEASE.jar
+    75672  2013-12-06 10:52   repository/org/springframework/org.springframework.oxm/3.2.4.RELEASE/org.springframework.oxm-3.2.4.RELEASE.jar
+   393607  2013-12-06 10:52   repository/org/springframework/org.springframework.orm/3.2.4.RELEASE/org.springframework.orm-3.2.4.RELEASE.jar
+   338559  2013-12-06 10:52   repository/org/apache/servicemix/bundles/org.apache.servicemix.bundles.cglib/3.0_1/org.apache.servicemix.bundles.cglib-3.0_1.jar
+    35859  2013-12-06 10:52   repository/org/springframework/osgi/spring-osgi-io/1.2.1/spring-osgi-io-1.2.1.jar
+   362889  2013-12-06 10:52   repository/org/springframework/osgi/spring-osgi-core/1.2.1/spring-osgi-core-1.2.1.jar
+   120822  2013-12-06 10:52   repository/org/springframework/osgi/spring-osgi-extender/1.2.1/spring-osgi-extender-1.2.1.jar
+    24231  2013-12-06 10:52   repository/org/springframework/osgi/spring-osgi-annotation/1.2.1/spring-osgi-annotation-1.2.1.jar
+    12597  2013-12-06 10:52   repository/org/apache/karaf/bundle/org.apache.karaf.bundle.springstate/3.0.0/org.apache.karaf.bundle.springstate-3.0.0.jar
+    31903  2013-12-06 10:52   repository/org/eclipse/gemini/blueprint/gemini-blueprint-io/1.0.0.RELEASE/gemini-blueprint-io-1.0.0.RELEASE.jar
+   578205  2013-12-06 10:52   repository/org/eclipse/gemini/blueprint/gemini-blueprint-core/1.0.0.RELEASE/gemini-blueprint-core-1.0.0.RELEASE.jar
+   178525  2013-12-06 10:52   repository/org/eclipse/gemini/blueprint/gemini-blueprint-extender/1.0.0.RELEASE/gemini-blueprint-extender-1.0.0.RELEASE.jar
+---------                     -------
+  9803140                     38 files
+----
+
+As a KAR file is a simple zip file, you can create the KAR file by hand.
+
+For instance, the following Unix commands create a very simple KAR file:
+
+----
+~$ mkdir repository
+~$ cp /path/to/features.xml repository/features.xml
+~$ cp /path/to/my.jar repository/my/project/my/1.0.0/my-1.0.0.jar
+~$ zip -r my.kar repository
+updating: repository/ (stored 0%)
+  adding: repository/my/project/my/1.0.0/my-1.0.0.jar (deflated 0%)
+----
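+
+The features XML packaged in the KAR is a regular Karaf features descriptor. A minimal sketch referencing the
+`my-1.0.0.jar` bundle copied above (the feature and repository names are just examples):
+
+----
+<features xmlns="http://karaf.apache.org/xmlns/features/v1.2.0" name="my-repository">
+    <feature name="my-feature" version="1.0.0">
+        <bundle>mvn:my.project/my/1.0.0</bundle>
+    </feature>
+</features>
+----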
+
+You can create KAR files using Apache Maven, or directly in the Apache Karaf console.
+
+==== Maven
+
+Apache Karaf provides a Maven plugin: `karaf-maven-plugin`.
+
+The Apache Karaf Maven plugin provides the `features-create-kar` goal.
+
+The `features-create-kar` goal:
+
+. Reads all features specified in the features XML.
+. For each feature described in the features XML, resolves the bundles described in the feature.
+. Finally packages the features XML and the resolved bundles into a zip file (the KAR archive).
+
+For instance, you can use the following POM to create `my-kar.kar`:
+
+----
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <modelVersion>4.0.0</modelVersion>
+
+    <groupId>my.groupId</groupId>
+    <artifactId>my-kar</artifactId>
+    <version>1.0</version>
+    <packaging>pom</packaging>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.karaf.tooling</groupId>
+                <artifactId>karaf-maven-plugin</artifactId>
+                <version>3.0.0</version>
+                <executions>
+                    <execution>
+                        <id>features-create-kar</id>
+                        <goals>
+                            <goal>features-create-kar</goal>
+                        </goals>
+                        <configuration>
+                            <featuresFile>src/main/resources/features.xml</featuresFile>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+
+</project>
+----
+
+To create the KAR file, simply type:
+
+----
+~$ mvn install
+----
+
+You will find the KAR file in the `target` directory.
+
+==== Commands
+
+Apache Karaf provides `kar:*` commands to manage KAR archives.
+
+===== `kar:list`
+
+The `kar:list` command lists the installed KAR archives.
+
+----
+karaf@root()> kar:list
+KAR Name
+-------------------
+my-kar-1.0-SNAPSHOT
+----
+
+A KAR is identified by its name.
+
+===== `kar:create`
+
+Instead of using the `karaf-maven-plugin` or creating the KAR archive by hand, you can use the `kar:create` command.
+
+The `kar:create` command creates a KAR file using a registered features repository.
+
+For instance, say you want to create a KAR file for the Pax Web features repository.
+
+The `feature:repo-list` command gives you the list of registered features repositories:
+
+----
+karaf@root()> feature:repo-list
+Repository                       | URL
+-------------------------------------------------------------------------------------------------------
+standard-3.0.0                   | mvn:org.apache.karaf.features/standard/3.0.0/xml/features
+enterprise-3.0.0                 | mvn:org.apache.karaf.features/enterprise/3.0.0/xml/features
+spring-3.0.0                     | mvn:org.apache.karaf.features/spring/3.0.0/xml/features
+org.ops4j.pax.web-3.0.5          | mvn:org.ops4j.pax.web/pax-web-features/3.0.5/xml/features
+----
+
+You can use one of these features repositories to create the kar file:
+
+----
+karaf@root()> kar:create org.ops4j.pax.web-3.0.5
+Adding feature pax-war
+Adding feature pax-http-whiteboard
+Adding feature pax-jetty
+Adding feature pax-tomcat
+Adding feature pax-http
+Kar file created : /opt/apache-karaf-3.0.0/data/kar/org.ops4j.pax.web-3.0.5.kar
+----
+
+You can see that the KAR file has been created in the `KARAF_DATA/kar` folder.
+
+By default, the `kar:create` command creates a KAR file, packaging all features in the features descriptor.
+
+You can provide the list of features that you want to package into the KAR file:
+
+----
+karaf@root()> kar:create org.ops4j.pax.web-3.0.5 pax-jetty pax-tomcat
+Adding feature pax-jetty
+Adding feature pax-tomcat
+Kar file created : /opt/apache-karaf-3.0.0/data/kar/org.ops4j.pax.web-3.0.5.kar
+----
+
+===== `kar:install`
+
+You can deploy a KAR file using the `kar:install` command.
+
+The `kar:install` command expects the KAR URL. Any URL described in the link:urls[Artifacts repositories and URLs section]
+is supported by the `kar:install` command:
+
+----
+karaf@root()> kar:install file:/tmp/my-kar-1.0-SNAPSHOT.kar
+----
+
+The KAR file is uncompressed and populates the `KARAF_BASE/system` folder.
+
+The Apache Karaf KAR service looks for features XML files in the KAR file, registers them, and automatically
+installs all features described in the features repositories present in the KAR file.
+
+===== `kar:uninstall`
+
+The `kar:uninstall` command uninstalls a KAR file (identified by its name).
+
+Uninstalling means that:
+
+* the features previously installed by the KAR file are uninstalled
+* all files previously "populated" by the KAR file are deleted (from the `KARAF_DATA/system` repository)
+
+For instance, to uninstall the previously installed `my-kar-1.0-SNAPSHOT.kar` KAR file:
+
+----
+karaf@root()> kar:uninstall my-kar-1.0-SNAPSHOT
+----
+
+==== Deployer
+
+Apache Karaf also provides a KAR deployer. It means that you can drop a KAR file directly in the `deploy` folder.
+
+Apache Karaf will automatically install KAR files from the `deploy` folder.
+
+You can change the behaviour of the KAR deployer in the `etc/org.apache.karaf.kar.cfg` configuration file:
+
+----
+################################################################################
+#
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+################################################################################
+
+#
+# Enable or disable the refresh of the bundles when installing
+# the features contained in a KAR file
+#
+noAutoRefreshBundles=false
+----
+
+By default, when the KAR deployer installs features, it refreshes the bundles already installed.
+You can disable the automatic bundle refresh by setting the `noAutoRefreshBundles` property to `true`.
+
+==== JMX KarMBean
+
+On the JMX layer, you have an MBean dedicated to the management of the KAR files.
+
+The ObjectName to use is `org.apache.karaf:type=kar,name=*`.
+
+===== Attributes
+
+The `Kars` attribute provides the list of installed KAR file names.
+
+===== Operations
+
+* `install(url)` installs the KAR file at the given `url`.
+* `create(repository, features)` creates a KAR file using the given features `repository` name, and optionally the
+list of `features` to include in the KAR file.
+* `uninstall(name)` uninstalls a KAR file with the given `name`.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/karaf/blob/9f08eb9e/manual/src/main/asciidoc/user-guide/log.adoc
----------------------------------------------------------------------
diff --git a/manual/src/main/asciidoc/user-guide/log.adoc b/manual/src/main/asciidoc/user-guide/log.adoc
new file mode 100644
index 0000000..59e14e6
--- /dev/null
+++ b/manual/src/main/asciidoc/user-guide/log.adoc
@@ -0,0 +1,565 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+=== Log
+
+Apache Karaf provides a very dynamic and powerful logging system.
+
+It supports:
+
+* the OSGi Log Service
+* the Apache Log4j framework
+* the Apache Commons Logging framework
+* the Logback framework
+* the SLF4J framework
+* the native Java Util Logging framework
+
+This means that applications can use any of these logging frameworks; Apache Karaf uses its central log system to manage the
+loggers, appenders, etc.
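+
+For instance, a bundle can simply log through the SLF4J API and the messages end up in the Apache Karaf central log system.
+The following sketch is purely illustrative (the class and message are hypothetical):
+
+----
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class MyService {
+
+    private static final Logger LOGGER = LoggerFactory.getLogger(MyService.class);
+
+    public void doSomething() {
+        // This message is routed by pax-logging to the configured appenders
+        LOGGER.info("Doing something");
+    }
+}
+----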
+
+==== Configuration files
+
+The initial log configuration is loaded from `etc/org.ops4j.pax.logging.cfg`.
+
+This file is a http://logging.apache.org/log4j/1.2/manual.html[standard Log4j configuration file].
+
+You can find the different Log4j elements in this file:
+
+* loggers
+* appenders
+* layouts
+
+You can add your own initial configuration directly in the file.
+
+The default configuration is the following:
+
+----
+################################################################################
+#
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+################################################################################
+
+# Root logger
+log4j.rootLogger=INFO, out, osgi:*
+log4j.throwableRenderer=org.apache.log4j.OsgiThrowableRenderer
+
+# CONSOLE appender not used by default
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
+
+# File appender
+log4j.appender.out=org.apache.log4j.RollingFileAppender
+log4j.appender.out.layout=org.apache.log4j.PatternLayout
+log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
+log4j.appender.out.file=${karaf.data}/log/karaf.log
+log4j.appender.out.append=true
+log4j.appender.out.maxFileSize=1MB
+log4j.appender.out.maxBackupIndex=10
+
+# Sift appender
+log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
+log4j.appender.sift.key=bundle.name
+log4j.appender.sift.default=karaf
+log4j.appender.sift.appender=org.apache.log4j.FileAppender
+log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
+log4j.appender.sift.appender.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %m%n
+log4j.appender.sift.appender.file=${karaf.data}/log/$\\{bundle.name\\}.log
+log4j.appender.sift.appender.append=true
+----
+
+The default configuration only defines the `ROOT` logger, with the `INFO` log level, using the `out` file appender.
+You can change the log level to any valid Log4j value (from the most to the least verbose): TRACE, DEBUG, INFO, WARN, ERROR, FATAL.
+
+The `osgi:*` appender is a special appender to send the log message to the OSGi Log Service.
+
+A `stdout` console appender is pre-configured, but not enabled by default. This appender allows you to display log
+messages directly on the standard output. It's useful if you plan to run Apache Karaf in server mode (without console).
+
+To enable it, you have to add the `stdout` appender to the `rootLogger`:
+
+----
+log4j.rootLogger=INFO, out, stdout, osgi:*
+----
+
+The `out` appender is the default one. It's a rolling file appender that maintains and rotates 10 log files of 1MB each.
+The log files are located in `data/log/karaf.log` by default.
+
+The `sift` appender is not enabled by default. This appender allows you to have one log file per deployed bundle.
+By default, the log file name format uses the bundle symbolic name (in the `data/log` folder).
+
+You can edit this file at runtime: any change will be reloaded and be effective immediately (no need to restart Apache Karaf).
+
+Another configuration file is used by Apache Karaf: `etc/org.apache.karaf.log.cfg`. This file configures the Log Service
+used by the log commands (see below).
+
+==== Commands
+
+Instead of changing the `etc/org.ops4j.pax.logging.cfg` file, Apache Karaf provides a set of commands allowing you to
+dynamically change the log configuration and see the log content:
+
+===== `log:clear`
+
+The `log:clear` command clears the log entries.
+
+===== `log:display`
+
+The `log:display` command displays the log entries.
+
+By default, it displays the log entries of the `rootLogger`:
+
+----
+karaf@root()> log:display
+2013-11-29 19:12:46,208 | INFO  | FelixStartLevel  | SecurityUtils                    | 16 - org.apache.sshd.core - 0.9.0 | BouncyCastle not registered, using the default JCE provider
+2013-11-29 19:12:47,368 | INFO  | FelixStartLevel  | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Starting JMX OSGi agent
+----
+
+You can also display the log entries from a specific logger, using the `logger` argument:
+
+----
+karaf@root()> log:display ssh
+2013-11-29 19:12:46,208 | INFO  | FelixStartLevel  | SecurityUtils                    | 16 - org.apache.sshd.core - 0.9.0 | BouncyCastle not registered, using the default JCE provider
+----
+
+By default, all log entries are displayed. This could be very long if your Apache Karaf container has been running for a long time.
+You can limit the number of entries to display using the `-n` option:
+
+----
+karaf@root()> log:display -n 5
+2013-11-30 06:53:24,143 | INFO  | JMX OSGi Agent   | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Registering org.osgi.jmx.framework.BundleStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@27cc75cb with name osgi.core:type=bundleState,version=1.7,framework=org.apache.felix.framework,uuid=5335370f-9dee-449f-9b1c-cabe74432ed1
+2013-11-30 06:53:24,150 | INFO  | JMX OSGi Agent   | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Registering org.osgi.jmx.framework.PackageStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@27cc75cb with name osgi.core:type=packageState,version=1.5,framework=org.apache.felix.framework,uuid=5335370f-9dee-449f-9b1c-cabe74432ed1
+2013-11-30 06:53:24,150 | INFO  | JMX OSGi Agent   | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Registering org.osgi.jmx.framework.ServiceStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@27cc75cb with name osgi.core:type=serviceState,version=1.7,framework=org.apache.felix.framework,uuid=5335370f-9dee-449f-9b1c-cabe74432ed1
+2013-11-30 06:53:24,152 | INFO  | JMX OSGi Agent   | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Registering org.osgi.jmx.framework.wiring.BundleWiringStateMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@27cc75cb with name osgi.core:type=wiringState,version=1.1,framework=org.apache.felix.framework,uuid=5335370f-9dee-449f-9b1c-cabe74432ed1
+2013-11-30 06:53:24,530 | INFO  | FelixStartLevel  | RegionsPersistenceImpl           | 78 - org.apache.karaf.region.persist - 3.0.0 | Loading region digraph persistence
+----
+
+You can also limit the number of entries stored and retained using the `size` property in the `etc/org.apache.karaf.log.cfg` file:
+
+----
+#
+# The number of log statements to be displayed using log:display. It also defines the number
+# of lines searched for exceptions using log:display exception. You can override this value
+# at runtime using -n in log:display.
+#
+size = 500
+----
+
+By default, each log level is displayed with a different color: ERROR/FATAL are in red, DEBUG in purple, INFO in cyan, etc.
+You can disable the coloring using the `--no-color` option.
+
+The log entries format pattern doesn't use the conversion pattern defined in the `etc/org.ops4j.pax.logging.cfg` file.
+By default, it uses the `pattern` property defined in `etc/org.apache.karaf.log.cfg`:
+
+----
+#
+# The pattern used to format the log statement when using log:display. This pattern is according
+# to the log4j layout. You can override this parameter at runtime using log:display with -p.
+#
+pattern = %d{ISO8601} | %-5.5p | %-16.16t | %-32.32c{1} | %X{bundle.id} - %X{bundle.name} - %X{bundle.version} | %m%n
+----
+
+You can also change the pattern dynamically (for one execution) using the `-p` option:
+
+----
+karaf@root()> log:display -p "%d - %c - %m%n"
+2013-11-30 07:01:58,007 - org.apache.sshd.common.util.SecurityUtils - BouncyCastle not registered, using the default JCE provider
+2013-11-30 07:01:58,725 - org.apache.aries.jmx.core - Starting JMX OSGi agent
+2013-11-30 07:01:58,744 - org.apache.aries.jmx.core - Registering MBean with ObjectName [osgi.compendium:service=cm,version=1.3,framework=org.apache.felix.framework,uuid=6361fc65-8df4-4886-b0a6-479df2d61c83] for service with service.id [13]
+2013-11-30 07:01:58,747 - org.apache.aries.jmx.core - Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@27cc75cb with name osgi.compendium:service=cm,version=1.3,framework=org.apache.felix.framework,uuid=6361fc65-8df4-4886-b0a6-479df2d61c83
+----
+
+The pattern is a regular Log4j pattern where you can use keywords like %d for the date, %c for the class, %m for the log
+message, etc.
+
+===== `log:exception-display`
+
+The `log:exception-display` command displays the last exception that occurred.
+
+As with the `log:display` command, the `log:exception-display` command uses the `rootLogger` by default, but you can
+specify a logger with the `logger` argument.
+
+===== `log:get`
+
+The `log:get` command shows the current log level of a logger.
+
+By default, the log level shown is the one of the root logger:
+
+----
+karaf@root()> log:get
+Logger | Level
+--------------
+ROOT   | INFO
+----
+
+You can specify a particular logger using the `logger` argument:
+
+----
+karaf@root()> log:get ssh
+Logger | Level
+--------------
+ssh    | INFO
+----
+
+The `logger` argument accepts the `ALL` keyword to display the log level of all loggers (as a list).
+
+For instance, if you have defined your own logger in the `etc/org.ops4j.pax.logging.cfg` file like this:
+
+----
+log4j.logger.my.logger = DEBUG
+----
+
+you can see the list of loggers with the corresponding log level:
+
+----
+karaf@root()> log:get ALL
+Logger    | Level
+-----------------
+ROOT      | INFO
+my.logger | DEBUG
+----
+
+The `log:list` command is an alias to `log:get ALL`.
+
+===== `log:log`
+
+The `log:log` command allows you to manually add a message in the log. It's useful when you create Apache Karaf
+scripts:
+
+----
+karaf@root()> log:log "Hello World"
+karaf@root()> log:display
+2013-11-30 07:20:16,544 | INFO  | Local user karaf | command                          | 59 - org.apache.karaf.log.command - 3.0.0 | Hello World
+----
+
+By default, the log level is INFO, but you can specify a different log level using the `-l` option:
+
+----
+karaf@root()> log:log -l ERROR "Hello World"
+karaf@root()> log:display
+2013-11-30 07:21:38,902 | ERROR | Local user karaf | command                          | 59 - org.apache.karaf.log.command - 3.0.0 | Hello World
+----
+
+===== `log:set`
+
+The `log:set` command sets the log level of a logger.
+
+By default, it changes the log level of the `rootLogger`:
+
+----
+karaf@root()> log:set DEBUG
+karaf@root()> log:get
+Logger | Level
+--------------
+ROOT   | DEBUG
+----
+
+You can specify a particular logger using the `logger` argument, after the `level` one:
+
+----
+karaf@root()> log:set INFO my.logger
+karaf@root()> log:get my.logger
+Logger    | Level
+-----------------
+my.logger | INFO
+----
+
+The `level` argument accepts any Log4j log level: TRACE, DEBUG, INFO, WARN, ERROR, FATAL.
+
+But it also accepts the special DEFAULT keyword.
+
+The purpose of the DEFAULT keyword is to delete the current level of the logger (and only the level; the other properties,
+like the appender, are not deleted)
+in order to use the level of the parent logger (loggers are hierarchical).
+
+For instance, say you have defined the following loggers (in the `etc/org.ops4j.pax.logging.cfg` file):
+
+----
+rootLogger=INFO,out,osgi:*
+my.logger=INFO,appender1
+my.logger.custom=DEBUG,appender2
+----
+
+You can change the level of `my.logger.custom` logger:
+
+----
+karaf@root()> log:set INFO my.logger.custom
+----
+
+Now we have:
+
+----
+rootLogger=INFO,out,osgi:*
+my.logger=INFO,appender1
+my.logger.custom=INFO,appender2
+----
+
+You can use the DEFAULT keyword on `my.logger.custom` logger to remove the level:
+
+----
+karaf@root()> log:set DEFAULT my.logger.custom
+----
+
+Now we have:
+
+----
+rootLogger=INFO,out,osgi:*
+my.logger=INFO,appender1
+my.logger.custom=appender2
+----
+
+It means that, at runtime, the `my.logger.custom` logger uses the level of its parent `my.logger`, so `INFO`.
+
+Now, if we use the DEFAULT keyword with the `my.logger` logger:
+
+----
+karaf@root()> log:set DEFAULT my.logger
+----
+
+We have:
+
+----
+rootLogger=INFO,out,osgi:*
+my.logger=appender1
+my.logger.custom=appender2
+----
+
+So, both `my.logger.custom` and `my.logger` use the log level of the parent `rootLogger`.
+
+It's not possible to use the DEFAULT keyword with the `rootLogger`, as it doesn't have a parent.
+
+===== `log:tail`
+
+The `log:tail` command is exactly the same as `log:display`, but it continuously displays new log entries.
+
+You can use the same options and arguments as for the `log:display` command.
+
+By default, it displays the entries from the `rootLogger`:
+
+----
+karaf@root()> log:tail
+2013-11-30 07:40:28,152 | INFO  | FelixStartLevel  | SecurityUtils                    | 16 - org.apache.sshd.core - 0.9.0 | BouncyCastle not registered, using the default JCE provider
+2013-11-30 07:40:28,909 | INFO  | FelixStartLevel  | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Starting JMX OSGi agent
+2013-11-30 07:40:28,928 | INFO  | FelixStartLevel  | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Registering MBean with ObjectName [osgi.compendium:service=cm,version=1.3,framework=org.apache.felix.framework,uuid=b44a44b7-41cd-498f-936d-3b12d7aafa7b] for service with service.id [13]
+2013-11-30 07:40:28,936 | INFO  | JMX OSGi Agent   | core                             | 68 - org.apache.aries.jmx.core - 1.1.1 | Registering org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer com.sun.jmx.mbeanserver.JmxMBeanServer@27cc75cb with name osgi.compendium:service=cm,version=1.3,framework=org.apache.felix.framework,uuid=b44a44b7-41cd-498f-936d-3b12d7aafa7b
+----
+
+To exit from the `log:tail` command, just type CTRL-C.
+
+==== JMX LogMBean
+
+All actions that you can perform with the `log:*` commands can be performed using the LogMBean.
+
+The LogMBean object name is `org.apache.karaf:type=log,name=*`.
+
+===== Attributes
+
+* The `Level` attribute is the level of the ROOT logger.
+
+===== Operations
+
+* `getLevel(logger)` to get the log level of a specific logger. As this operation supports the ALL keyword, it returns a Map with the level of each logger.
+* `setLevel(level, logger)` to set the log level of a specific logger. This operation supports the DEFAULT keyword as for the `log:set` command.
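+
+As with the KarMBean, you can call these operations remotely. The following sketch reuses the same assumptions as the
+KarMBean client shown earlier (default JMX URL, `karaf`/`karaf` credentials, `root` instance name) and follows the
+operation signatures listed above:
+
+----
+import javax.management.MBeanServerConnection;
+import javax.management.ObjectName;
+import javax.management.remote.JMXConnector;
+import javax.management.remote.JMXConnectorFactory;
+import javax.management.remote.JMXServiceURL;
+
+import java.util.HashMap;
+import java.util.Map;
+
+public class LogMBeanClient {
+
+    public static void main(String[] args) throws Exception {
+        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root");
+        Map<String, Object> env = new HashMap<>();
+        env.put(JMXConnector.CREDENTIALS, new String[]{ "karaf", "karaf" });
+
+        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
+            MBeanServerConnection connection = connector.getMBeanServerConnection();
+            ObjectName log = new ObjectName("org.apache.karaf:type=log,name=root");
+
+            // Read the level of the ROOT logger (the Level attribute)
+            Object rootLevel = connection.getAttribute(log, "Level");
+            System.out.println("ROOT level: " + rootLevel);
+
+            // Equivalent of "log:set DEBUG my.logger", using the setLevel(level, logger) operation listed above
+            connection.invoke(log, "setLevel",
+                    new Object[]{ "DEBUG", "my.logger" },
+                    new String[]{ String.class.getName(), String.class.getName() });
+        }
+    }
+}
+----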
+
+==== Advanced configuration
+
+===== Filters
+
+You can use filters on an appender. Filters allow log events to be evaluated to determine if or how they should be published.
+
+Log4j provides ready-to-use filters:
+
+* The DenyAllFilter (`org.apache.log4j.varia.DenyAllFilter`) drops all logging events.
+ You can add this filter to the end of a filter chain to switch from the default "accept all unless instructed otherwise"
+ filtering behaviour to a "deny all unless instructed otherwise" behaviour.
+* The LevelMatchFilter (`org.apache.log4j.varia.LevelMatchFilter`) is a very simple filter based on level matching.
+ The filter admits two options, `LevelToMatch` and `AcceptOnMatch`. If there is an exact match between the value of
+ the `LevelToMatch` option and the level of the logging event, the event is accepted when the `AcceptOnMatch`
+ option is set to `true`, and rejected when `AcceptOnMatch` is set to `false`.
+* The LevelRangeFilter (`org.apache.log4j.varia.LevelRangeFilter`) is a very simple filter based on level matching,
+ which can be used to reject messages with priorities outside a certain range. The filter admits three options `LevelMin`,
+ `LevelMax` and `AcceptOnMatch`. If the log event level is between `LevelMin` and `LevelMax`, the log event is
+ accepted if `AcceptOnMatch` is true, or rejected if `AcceptOnMatch` is false.
+* The StringMatchFilter (`org.apache.log4j.varia.StringMatchFilter`) is a very simple filter based on string matching.
+ The filter admits two options `StringToMatch` and `AcceptOnMatch`. If there is a match between the `StringToMatch`
+ and the log event message, the log event is accepted if `AcceptOnMatch` is true, or rejected if `AcceptOnMatch` is false.
+
+The filter is defined directly on the appender, in the `etc/org.ops4j.pax.logging.cfg` configuration file.
+
+The format to use is:
+
+----
+log4j.appender.[appender-name].filter.[filter-name]=[filter-class]
+log4j.appender.[appender-name].filter.[filter-name].[option]=[value]
+----
+
+For instance, you can use a LevelRangeFilter named `f1` on the default `out` appender:
+
+----
+log4j.appender.out.filter.f1=org.apache.log4j.varia.LevelRangeFilter
+log4j.appender.out.filter.f1.LevelMax=FATAL
+log4j.appender.out.filter.f1.LevelMin=DEBUG
+----
+
+Thanks to this filter, the log files generated by the `out` appender will contain only log messages with a level
+between DEBUG and FATAL (log events with the TRACE level are rejected).
+
+===== Nested appenders
+
+A nested appender is a special kind of appender that you use "inside" another appender.
+It allows you to create some kind of "routing" between a chain of appenders.
+
+The most used "nested compliant" appender are:
+
+* The AsyncAppender (`org.apache.log4j.AsyncAppender`) logs events asynchronously. This appender collects the events
+ and dispatches them to all the appenders that are attached to it.
+* The RewriteAppender (`org.apache.log4j.rewrite.RewriteAppender`) forwards log events to another appender after possibly
+ rewriting the log event.
+
+This kind of appender accepts an `appenders` property in the appender definition:
+
+----
+log4j.appender.[appender-name].appenders=[comma-separated-list-of-appender-names]
+----
+
+For instance, you can create an AsyncAppender named `async` and asynchronously dispatch the log events to a JMS appender:
+
+----
+log4j.appender.async=org.apache.log4j.AsyncAppender
+log4j.appender.async.appenders=jms
+
+log4j.appender.jms=org.apache.log4j.net.JMSAppender
+...
+----
+
+===== Error handlers
+
+Sometimes, appenders can fail. For instance, a RollingFileAppender tries to write to the filesystem but the filesystem is full, or a JMS appender tries to send a message but the JMS broker is not available.
+
+As logs can be critical to you, you have to be informed when a log appender fails.
+
+That's the purpose of the error handlers. Appenders may delegate their error handling to error handlers, giving you a chance to react to appender errors.
+
+You have two error handlers available:
+
+* The OnlyOnceErrorHandler (`org.apache.log4j.helpers.OnlyOnceErrorHandler`) implements log4j's default error handling policy
+ which consists of emitting a message for the first error in an appender and ignoring all following errors. The error message
+ is printed on `System.err`.
+ This policy aims at protecting an otherwise working application from being flooded with error messages when logging fails.
+* The FallbackErrorHandler (`org.apache.log4j.varia.FallbackErrorHandler`) allows a secondary appender to take over if the primary appender fails.
+ The error message is printed on `System.err`, and logged in the secondary appender.
+
+You can define the error handler that you want to use for each appender using the `errorhandler` property on the appender definition itself:
+
+----
+log4j.appender.[appender-name].errorhandler=[error-handler-class]
+log4j.appender.[appender-name].errorhandler.root-ref=[true|false]
+log4j.appender.[appender-name].errorhandler.logger-ref=[logger-ref]
+log4j.appender.[appender-name].errorhandler.appender-ref=[appender-ref]
+----
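+
+For instance, a sketch (assuming a secondary file appender named `backup` is defined elsewhere in the configuration)
+that lets the `backup` appender take over when the default `out` appender fails:
+
+----
+log4j.appender.out.errorhandler=org.apache.log4j.varia.FallbackErrorHandler
+log4j.appender.out.errorhandler.root-ref=true
+log4j.appender.out.errorhandler.appender-ref=backup
+----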
+
+===== OSGi specific MDC attributes
+
+The `sift` appender is an OSGi-oriented appender allowing you to split the log events based on MDC (Mapped Diagnostic Context) attributes.
+
+MDC allows you to distinguish the different sources of log events.
+
+The `sift` appender provides the following OSGi-oriented MDC attributes by default:
+
+* `bundle.id` is the bundle ID
+* `bundle.name` is the bundle symbolic name
+* `bundle.version` is the bundle version
+
+You can use these MDC properties to create a log file per bundle:
+
+----
+log4j.appender.sift=org.apache.log4j.sift.MDCSiftingAppender
+log4j.appender.sift.key=bundle.name
+log4j.appender.sift.default=karaf
+log4j.appender.sift.appender=org.apache.log4j.FileAppender
+log4j.appender.sift.appender.layout=org.apache.log4j.PatternLayout
+log4j.appender.sift.appender.layout.ConversionPattern=%d{ABSOLUTE} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
+log4j.appender.sift.appender.file=${karaf.data}/log/$\\{bundle.name\\}.log
+log4j.appender.sift.appender.append=true
+----
+
+===== Enhanced OSGi stack trace renderer
+
+By default, Apache Karaf provides a special stack trace renderer, adding some OSGi-specific information.
+
+In the stack trace, in addition to the class throwing the exception, you can find a pattern `[id:name:version]` at the
+end of each stack trace line, where:
+
+* `id` is the bundle ID
+* `name` is the bundle name
+* `version` is the bundle version
+
+It's very helpful for diagnosing the source of an issue.
+
+For instance, in the following IllegalArgumentException stack trace, we can see the OSGi details about the source of the exception:
+
+----
+java.lang.IllegalArgumentException: Command not found:  *:foo
+	at org.apache.felix.gogo.runtime.shell.Closure.execute(Closure.java:225)[21:org.apache.karaf.shell.console:3.0.0]
+	at org.apache.felix.gogo.runtime.shell.Closure.executeStatement(Closure.java:162)[21:org.apache.karaf.shell.console:3.0.0]
+	at org.apache.felix.gogo.runtime.shell.Pipe.run(Pipe.java:101)[21:org.apache.karaf.shell.console:3.0.0]
+	at org.apache.felix.gogo.runtime.shell.Closure.execute(Closure.java:79)[21:org.apache.karaf.shell.console:3.0.0]
+	at org.apache.felix.gogo.runtime.shell.CommandSessionImpl.execute(CommandSessionImpl.java:71)[21:org.apache.karaf.shell.console:3.0.0]
+	at org.apache.karaf.shell.console.jline.Console.run(Console.java:169)[21:org.apache.karaf.shell.console:3.0.0]
+	at java.lang.Thread.run(Thread.java:637)[:1.7.0_21]
+----
+
+===== Custom appenders
+
+You can use your own appenders in Apache Karaf.
+
+The easiest way to do that is to package your appender as an OSGi bundle and attach it as a fragment of the
+`org.ops4j.pax.logging.pax-logging-service` bundle.
+
+For instance, you create `MyAppender`:
+
+----
+public class MyAppender extends AppenderSkeleton {
+...
+}
+----
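+
+A slightly more complete sketch (purely illustrative; the `org.mydomain` package and the console output are assumptions)
+simply implements the abstract methods of `AppenderSkeleton`:
+
+----
+package org.mydomain;
+
+import org.apache.log4j.AppenderSkeleton;
+import org.apache.log4j.spi.LoggingEvent;
+
+public class MyAppender extends AppenderSkeleton {
+
+    @Override
+    protected void append(LoggingEvent event) {
+        // Do something with the log event, here simply print the formatted message
+        String message = (layout != null) ? layout.format(event) : event.getRenderedMessage();
+        System.out.println(message);
+    }
+
+    @Override
+    public void close() {
+        // Release any resources held by the appender
+    }
+
+    @Override
+    public boolean requiresLayout() {
+        // This appender can use a layout if one is configured
+        return true;
+    }
+}
+----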
+
+You compile it and package it as an OSGi bundle with a MANIFEST looking like:
+
+----
+Manifest:
+Bundle-SymbolicName: org.mydomain.myappender       
+Fragment-Host: org.ops4j.pax.logging.pax-logging-service
+...
+----
+
+Copy your bundle into the Apache Karaf `system` folder. The `system` folder uses a standard Maven directory layout: groupId/artifactId/version.
+
+In the `etc/startup.properties` configuration file, you define your bundle in the list before the pax-logging-service bundle.
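+
+A hypothetical entry could look like the following (the path follows the Maven layout of the `system` folder and the value
+is the bundle start level; `8` here is an assumption, align it with the start level of the pax-logging-service entry
+actually present in your file):
+
+----
+org/mydomain/myappender/1.0.0/myappender-1.0.0.jar=8
+----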
+
+You have to restart Apache Karaf with a clean run (purging the `data` folder) in order to reload the system bundles.
+You can now use your appender directly in the `etc/org.ops4j.pax.logging.cfg` configuration file.
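+
+For instance, assuming the `MyAppender` class above lives in the `org.mydomain` package (an illustrative name), you can
+reference it like any other appender:
+
+----
+log4j.appender.custom=org.mydomain.MyAppender
+log4j.appender.custom.layout=org.apache.log4j.PatternLayout
+log4j.appender.custom.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %m%n
+
+# Attach the custom appender to the root logger
+log4j.rootLogger=INFO, out, custom, osgi:*
+----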

