camel-commits mailing list archives

From acosent...@apache.org
Subject camel git commit: Added camel-hdfs2 docs to gitbook
Date Thu, 14 Apr 2016 13:06:55 GMT
Repository: camel
Updated Branches:
  refs/heads/master 244868166 -> eb741ee4f


Added camel-hdfs2 docs to gitbook


Project: http://git-wip-us.apache.org/repos/asf/camel/repo
Commit: http://git-wip-us.apache.org/repos/asf/camel/commit/eb741ee4
Tree: http://git-wip-us.apache.org/repos/asf/camel/tree/eb741ee4
Diff: http://git-wip-us.apache.org/repos/asf/camel/diff/eb741ee4

Branch: refs/heads/master
Commit: eb741ee4f36548c7b065e68671b9f730146e4f8d
Parents: 2448681
Author: Andrea Cosentino <ancosen@gmail.com>
Authored: Thu Apr 14 15:06:12 2016 +0200
Committer: Andrea Cosentino <ancosen@gmail.com>
Committed: Thu Apr 14 15:06:37 2016 +0200

----------------------------------------------------------------------
 components/camel-hdfs2/src/main/docs/hdfs2.adoc | 280 +++++++++++++++++++
 docs/user-manual/en/SUMMARY.md                  |   1 +
 2 files changed, 281 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/camel/blob/eb741ee4/components/camel-hdfs2/src/main/docs/hdfs2.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs2/src/main/docs/hdfs2.adoc b/components/camel-hdfs2/src/main/docs/hdfs2.adoc
new file mode 100644
index 0000000..2bfe7f7
--- /dev/null
+++ b/components/camel-hdfs2/src/main/docs/hdfs2.adoc
@@ -0,0 +1,280 @@
+[[HDFS2-HDFS2Component]]
+HDFS2 Component
+~~~~~~~~~~~~~~~
+
+*Available as of Camel 2.13*
+
+The *hdfs2* component enables you to read and write messages from/to an
+HDFS file system using Hadoop 2.x. HDFS is the distributed file system
+at the heart of http://hadoop.apache.org[Hadoop].
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+[source,xml]
+------------------------------------------------------------
+<dependency>
+    <groupId>org.apache.camel</groupId>
+    <artifactId>camel-hdfs2</artifactId>
+    <version>x.x.x</version>
+    <!-- use the same version as your Camel core version -->
+</dependency>
+------------------------------------------------------------
+
+[[HDFS2-URIformat]]
+URI format
+^^^^^^^^^^
+
+[source,java]
+----------------------------------------
+hdfs2://hostname[:port][/path][?options]
+----------------------------------------
+
+You can append query options to the URI in the following format,
+`?option=value&option=value&...` +
+ The path is treated in the following way:
+
+1.  as a consumer, if it's a file, it just reads the file; otherwise, if
+it represents a directory, it scans all the files under the path
+satisfying the configured pattern. All the files under that directory
+must be of the same type.
+2.  as a producer, if at least one split strategy is defined, the path
+is considered a directory and under that directory the producer creates
+a different file per split, named using the configured
+link:uuidgenerator.html[UuidGenerator].
+
+
+When consuming from hdfs2 in normal mode, a file is split into chunks,
+producing a message per chunk. You can configure the size of the chunk
+using the chunkSize option. If you want to read from HDFS and write to
+a regular file using the file component, you can use fileExist=Append
+on the file endpoint to append each of the chunks together.
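+
+A minimal sketch of such a route (the host, port, paths and file name
+here are hypothetical):
+
+[source,java]
+-------------------------------------------------------------------
+from("hdfs2://localhost:8020/tmp/input/data.txt?chunkSize=4096")
+    // each 4096 byte chunk arrives as a separate message
+    .to("file:target/output?fileName=data.txt&fileExist=Append");
+-------------------------------------------------------------------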
+
+[[HDFS2-Options]]
+Options
+^^^^^^^
+
+
+// component options: START
+The HDFS2 component supports 1 option, which is listed below.
+
+[width="100%",cols="2s,1m,8",options="header"]
+|=======================================================================
+| Name | Java Type | Description
+| jAASConfiguration | Configuration | To use the given configuration for security with JAAS.
+|=======================================================================
+// component options: END
+
+// endpoint options: START
+The HDFS2 component supports 41 endpoint options, which are listed below:
+
+[width="100%",cols="2s,1,1m,1m,5",options="header"]
+|=======================================================================
+| Name | Group | Default | Java Type | Description
+| hostName | common |  | String | *Required* HDFS host to use
+| port | common | 8020 | int | HDFS port to use
+| path | common |  | String | *Required* The directory path to use
+| blockSize | common | 67108864 | long | The size of the HDFS blocks
+| bufferSize | common | 4096 | int | The buffer size used by HDFS
+| checkIdleInterval | common | 500 | int | How often (time in millis) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
+| chunkSize | common | 4096 | int | When reading a normal file this is split into chunks producing a message per chunk.
+| compressionCodec | common | DEFAULT | HdfsCompressionCodec | The compression codec to use
+| compressionType | common | NONE | CompressionType | The compression type to use (not in use by default)
+| connectOnStartup | common | true | boolean | Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up to 15 minutes to establish a connection, as it has a hardcoded 45 x 20 sec redelivery. Setting this option to false allows your application to start up without blocking for up to 15 minutes.
+| fileSystemType | common | HDFS | HdfsFileSystemType | Set to LOCAL to not use HDFS but local java.io.File instead.
+| fileType | common | NORMAL_FILE | HdfsFileType | The file type to use. For more details see the Hadoop HDFS documentation about the various file types.
+| keyType | common | NULL | WritableType | The type for the key in case of sequence or map files.
+| openedSuffix | common | opened | String | When a file is opened for reading/writing, the file is renamed with this suffix to avoid reading it during the writing phase.
+| owner | common |  | String | The file owner must match this owner for the consumer to pick up the file. Otherwise the file is skipped.
+| readSuffix | common | read | String | Once the file has been read it is renamed with this suffix to avoid reading it again.
+| replication | common | 3 | short | The HDFS replication factor
+| splitStrategy | common |  | String | In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So for the moment it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be: BYTES (a new file is created, and the old one closed, when the number of written bytes is more than value), MESSAGES (a new file is created, and the old one closed, when the number of written messages is more than value), or IDLE (a new file is created, and the old one closed, when no writing happened in the last value milliseconds).
+| valueType | common | BYTES | WritableType | The type for the value in case of sequence or map files.
+| bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN/ERROR level and ignored.
+| delay | consumer | 1000 | long | The interval (milliseconds) between the directory scans.
+| initialDelay | consumer |  | long | How long to wait (milliseconds) before the consumer starts scanning the directory.
+| pattern | consumer | * | String | The pattern used for scanning the directory
+| sendEmptyMessageWhenIdle | consumer | false | boolean | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.
+| exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN/ERROR level and ignored.
+| pollStrategy | consumer (advanced) |  | PollingConsumerPollStrategy | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.
+| append | producer | false | boolean | Append to existing file. Notice that not all HDFS file systems support the append option.
+| overwrite | producer | true | boolean | Whether to overwrite existing files with the same name.
+| exchangePattern | advanced | InOnly | ExchangePattern | Sets the default exchange pattern when creating an exchange.
+| synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported).
+| backoffErrorThreshold | scheduler |  | int | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.
+| backoffIdleThreshold | scheduler |  | int | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.
+| backoffMultiplier | scheduler |  | int | To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
+| greedy | scheduler | false | boolean | If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
+| runLoggingLevel | scheduler | TRACE | LoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
+| scheduledExecutorService | scheduler |  | ScheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
+| scheduler | scheduler | none | ScheduledPollConsumerScheduler | To use a cron scheduler from either the camel-spring or camel-quartz2 component.
+| schedulerProperties | scheduler |  | Map | To configure additional properties when using a custom scheduler or any of the Quartz2/Spring based schedulers.
+| startScheduler | scheduler | true | boolean | Whether the scheduler should be auto started.
+| timeUnit | scheduler | MILLISECONDS | TimeUnit | Time unit for initialDelay and delay options.
+| useFixedDelay | scheduler | true | boolean | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
+|=======================================================================
+// endpoint options: END
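+
+For instance, a consumer endpoint combining a few of these options
+might look like the following sketch (the host, port and path are
+hypothetical):
+
+[source,java]
+----------------------------------------------------------------------
+from("hdfs2://localhost:8020/tmp/input?pattern=*.txt&delay=5000"
+    + "&initialDelay=1000&sendEmptyMessageWhenIdle=true")
+    // log each polled chunk (or the empty message when idle)
+    .to("log:hdfs2.poll");
+----------------------------------------------------------------------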
+
+
+[[HDFS2-KeyTypeandValueType]]
+KeyType and ValueType
++++++++++++++++++++++
+
+* NULL means that the key or the value is absent
+* BYTE for writing a byte, the java Byte class is mapped into a BYTE
+* BYTES for writing a sequence of bytes. It maps the java ByteBuffer
+class
+* INT for writing java integer
+* FLOAT for writing java float
+* LONG for writing java long
+* DOUBLE for writing java double
+* TEXT for writing java strings
+
+BYTES is also used for everything else; for example, in Camel a file is
+sent around as an InputStream, and in this case it is written into a
+sequence file or a map file as a sequence of bytes.
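+
+For instance, to write each message body as a TEXT value into a
+sequence file with TEXT keys, a sketch could look like this (the host
+and path are hypothetical):
+
+[source,java]
+---------------------------------------------------------------------
+from("direct:toHdfs")
+    // SEQUENCE_FILE with TEXT key and TEXT value types
+    .to("hdfs2://localhost:8020/tmp/seq?fileType=SEQUENCE_FILE"
+        + "&keyType=TEXT&valueType=TEXT");
+---------------------------------------------------------------------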
+
+[[HDFS2-SplittingStrategy]]
+Splitting Strategy
+^^^^^^^^^^^^^^^^^^
+
+In the current version of Hadoop opening a file in append mode is
+disabled since it's not very reliable. So, for the moment, it's only
+possible to create new files. The Camel HDFS endpoint tries to solve
+this problem in this way:
+
+* If the split strategy option has been defined, the hdfs path will be
+used as a directory and files will be created using the configured
+link:uuidgenerator.html[UuidGenerator]
+* Every time a splitting condition is met, a new file is created. +
+ The splitStrategy option is defined as a string with the following
+syntax: splitStrategy=<ST>:<value>,<ST>:<value>,*
+
+where <ST> can be:
+
+* BYTES: a new file is created, and the old one closed, when the number
+of written bytes is more than <value>
+* MESSAGES: a new file is created, and the old one closed, when the
+number of written messages is more than <value>
+* IDLE: a new file is created, and the old one closed, when no writing
+happened in the last <value> milliseconds
+
+Note that this strategy currently requires either setting an IDLE value
+or setting the HdfsConstants.HDFS_CLOSE header to false to use the
+BYTES/MESSAGES configuration; otherwise, the file will be closed with
+each message.
+
+For example:
+
+[source,java]
+-----------------------------------------------------------------
+hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
+-----------------------------------------------------------------
+
+This means a new file is created either when it has been idle for more
+than 1 second or once more than 5 bytes have been written. So, running
+`hadoop fs -ls /tmp/simple-file` you'll see that multiple files have
+been created.
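+
+A sketch of a producer route using this URI (the timer endpoint and
+constant body are only illustrative):
+
+[source,java]
+-----------------------------------------------------------------------
+from("timer:tick?period=200")
+    .setBody(constant("abc"))
+    // a new file is created on 1s idle or after 5 written bytes
+    .to("hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5");
+-----------------------------------------------------------------------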
+
+[[HDFS2-MessageHeaders]]
+Message Headers
+^^^^^^^^^^^^^^^
+
+The following headers are supported by this component:
+
+[[HDFS2-Produceronly]]
+Producer only
++++++++++++++
+
+[width="100%",cols="10%,90%",options="header",]
+|=======================================================================
+|Header |Description
+
+|`CamelFileName` |*Camel 2.13:* Specifies the name of the file to write (relative to the
+endpoint path). The name can be a `String` or an
+link:expression.html[Expression] object. Only relevant when not using a
+split strategy.
+|=======================================================================
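+
+For example, a sketch that names each written file from a header
+(`Exchange.FILE_NAME` is the constant for `CamelFileName`; the endpoint
+is hypothetical):
+
+[source,java]
+--------------------------------------------------------------------
+from("direct:writeReport")
+    // write to <endpoint path>/report.txt
+    .setHeader(Exchange.FILE_NAME, constant("report.txt"))
+    .to("hdfs2://localhost:8020/reports");
+--------------------------------------------------------------------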
+
+[[HDFS2-Controllingtoclosefilestream]]
+Controlling to close file stream
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When using the link:hdfs2.html[HDFS2] producer *without* a split
+strategy, the file output stream is by default closed after the
+write. However, you may want to keep the stream open and only
+explicitly close the stream later. For that you can use the header
+`HdfsConstants.HDFS_CLOSE` (value = `"CamelHdfsClose"`) to control this.
+Setting this value to a boolean allows you to explicitly control whether
+the stream should be closed or not.
+
+Notice this does not apply if you use a split strategy, as there are
+various strategies that can control when the stream is closed.
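+
+A sketch of keeping the stream open across writes (the endpoint is
+hypothetical; `HdfsConstants` comes from camel-hdfs2):
+
+[source,java]
+--------------------------------------------------------------------
+from("direct:appendChunks")
+    // do not close the output stream after this write
+    .setHeader(HdfsConstants.HDFS_CLOSE, constant(false))
+    .to("hdfs2://localhost:8020/tmp/out.txt");
+--------------------------------------------------------------------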
+
+[[HDFS2-UsingthiscomponentinOSGi]]
+Using this component in OSGi
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are some quirks when running this component in an OSGi environment
+related to the mechanism Hadoop 2.x uses to discover different
+`org.apache.hadoop.fs.FileSystem` implementations. Hadoop 2.x uses
+`java.util.ServiceLoader` which looks for
+`/META-INF/services/org.apache.hadoop.fs.FileSystem` files defining
+available filesystem types and implementations. These resources are not
+available when running inside OSGi.
+
+As with the `camel-hdfs` component, the default configuration files need
+to be visible from the bundle class loader. A typical way to deal with
+this is to keep a copy of `core-default.xml` (and e.g., `hdfs-default.xml`)
+in your bundle root.
+
+[[HDFS2-Usingthiscomponentwithmanuallydefinedroutes]]
+Using this component with manually defined routes
++++++++++++++++++++++++++++++++++++++++++++++++++
+
+There are two options:
+
+1.  Package the `/META-INF/services/org.apache.hadoop.fs.FileSystem`
+resource with the bundle that defines the routes. This resource should
+list all the required Hadoop 2.x filesystem implementations.
+2.  Provide boilerplate initialization code which populates the internal,
+static cache inside the `org.apache.hadoop.fs.FileSystem` class:
+
+[source,java]
+----------------------------------------------------------------------------------------------------
+// explicitly map URI schemes to FileSystem implementations
+org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
+conf.setClass("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class, FileSystem.class);
+conf.setClass("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class, FileSystem.class);
+...
+// populate the static FileSystem cache (FileSystem.get expects a java.net.URI)
+FileSystem.get(java.net.URI.create("file:///"), conf);
+FileSystem.get(java.net.URI.create("hdfs://localhost:9000/"), conf);
+...
+----------------------------------------------------------------------------------------------------
+
+[[HDFS2-UsingthiscomponentwithBlueprintcontainer]]
+Using this component with Blueprint container
++++++++++++++++++++++++++++++++++++++++++++++
+
+Two options:
+
+1.  Package the `/META-INF/services/org.apache.hadoop.fs.FileSystem`
+resource with the bundle that contains the blueprint definition.
+2.  Add the following to the blueprint definition file:
+
+[source,xml]
+------------------------------------------------------------------------------------------------------
+<bean id="hdfsOsgiHelper" class="org.apache.camel.component.hdfs2.HdfsOsgiHelper">
+   <argument>
+      <map>
+         <entry key="file:///" value="org.apache.hadoop.fs.LocalFileSystem"  />
+         <entry key="hdfs://localhost:9000/" value="org.apache.hadoop.hdfs.DistributedFileSystem"
/>
+         ...
+      </map>
+   </argument>
+</bean>
+
+<bean id="hdfs2" class="org.apache.camel.component.hdfs2.HdfsComponent" depends-on="hdfsOsgiHelper"
/>
+------------------------------------------------------------------------------------------------------
+
+This way Hadoop 2.x will have the correct mapping of URI schemes to
+filesystem implementations.

http://git-wip-us.apache.org/repos/asf/camel/blob/eb741ee4/docs/user-manual/en/SUMMARY.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/SUMMARY.md b/docs/user-manual/en/SUMMARY.md
index 8ed37bf..f9bfff1 100644
--- a/docs/user-manual/en/SUMMARY.md
+++ b/docs/user-manual/en/SUMMARY.md
@@ -149,6 +149,7 @@
     * [Hazelcast](hazelcast.adoc)
     * [Hbase](hbase.adoc)
     * [HDFS](hdfs.adoc)
+    * [HDFS2](hdfs2.adoc)
     * [Http4](http4.adoc)
     * [Ironmq](ironmq.adoc)
     * [JMS](jms.adoc)

