camel-issues mailing list archives

From "Luca Burgazzoli (JIRA)" <>
Subject [jira] [Commented] (CAMEL-8211) Camel commands - camel-component-info
Date Wed, 02 Mar 2016 12:32:18 GMT


Luca Burgazzoli commented on CAMEL-8211:

I've played around with this a little and I've created/amended the following classes:

- CamelController
- AbstractCamelController
- CatalogComponentInfoCommand

Running the command to get information about the hdfs component produces:

HDFS :: For reading/writing from/to an HDFS filesystem using Hadoop 1.x.

label: hadoop,file
maven: org.apache.camel/camel-hdfs/2.17-SNAPSHOT


Key                      Description
---                      -----------
jAASConfiguration        To use the given configuration for security with JAAS.


Key                      Description
---                      -----------
hostName                 HDFS host to use
port                     HDFS port to use
path                     The directory path to use
blockSize                The size of the HDFS blocks
bufferSize               The buffer size used by HDFS
checkIdleInterval        How often (time in millis) to run the idle checker background
                         task. This option is only in use if the splitter strategy is IDLE.
chunkSize                When reading a normal file this is split into chunks producing a
                         message per chunk.
compressionCodec         The compression codec to use
compressionType          The compression type to use (not in use by default)
connectOnStartup         Whether to connect to the HDFS file system on starting the
                         producer/consumer. If false then the connection is created
                         on-demand. Notice that HDFS may take up to 15 minutes to establish
                         a connection, as it has a hardcoded 45 x 20 sec redelivery. Setting
                         this option to false allows your application to start up without
                         blocking for up to 15 minutes.
fileSystemType           Set to LOCAL to use the local file system instead of HDFS.
fileType                 The file type to use. For more details see the Hadoop HDFS
                         documentation about the various file types.
keyType                  The type for the key in case of sequence or map files.
openedSuffix             When a file is opened for reading/writing the file is renamed with
                         this suffix to avoid reading it during the writing phase.
owner                    The file owner must match this owner for the consumer to pick up
                         the file. Otherwise the file is skipped.
readSuffix               Once the file has been read it is renamed with this suffix to
                         avoid reading it again.
replication              The HDFS replication factor
splitStrategy            In the current version of Hadoop opening a file in append mode is
                         disabled since it's not very reliable. So for the moment it's only
                         possible to create new files. The Camel HDFS endpoint tries to
                         solve this problem in this way: if the split strategy option has
                         been defined, the hdfs path will be used as a directory and files
                         will be created using the configured UuidGenerator. Every time a
                         splitting condition is met a new file is created. The
                         splitStrategy option is defined as a string with the following
                         syntax: splitStrategy=ST:value,ST:value,... where ST can be:
                         BYTES (a new file is created, and the old one closed, when the
                         number of written bytes is more than value), MESSAGES (a new file
                         is created, and the old one closed, when the number of written
                         messages is more than value), or IDLE (a new file is created, and
                         the old one closed, when no writing happened in the last value
                         milliseconds).
valueType                The type for the value in case of sequence or map files
bridgeErrorHandler       Allows for bridging the consumer to the Camel routing Error
                         Handler, which means any exceptions that occur while the consumer
                         is trying to pick up incoming messages, or the likes, will now be
                         processed as a message and handled by the routing Error Handler.
                         By default the consumer will use the
                         org.apache.camel.spi.ExceptionHandler to deal with exceptions,
                         which will be logged at WARN/ERROR level and ignored.
delay                    The interval (milliseconds) between the directory scans.
initialDelay             For the consumer, how long to wait (milliseconds) before starting
                         to scan the directory.
pattern                  The pattern used for scanning the directory
sendEmptyMessageWhenIdle If the polling consumer did not poll any files you can enable this
                         option to send an empty message (no body) instead.
exceptionHandler         To let the consumer use a custom ExceptionHandler. Notice that if
                         the option bridgeErrorHandler is enabled then this option is not
                         in use. By default the consumer will deal with exceptions, which
                         will be logged at WARN/ERROR level and ignored.
pollStrategy             A pluggable org.apache.camel.PollingConsumerPollingStrategy
                         allowing you to provide your custom implementation to control
                         error handling that usually occurs during the poll operation,
                         before an Exchange has been created and routed in Camel.
append                   Append to existing file. Notice that not all HDFS file systems
                         support the append option.
overwrite                Whether to overwrite existing files with the same name
exchangePattern          Sets the default exchange pattern when creating an exchange
synchronous              Sets whether synchronous processing should be strictly used, or
                         Camel is allowed to use asynchronous processing (if supported).
backoffErrorThreshold    The number of subsequent error polls (failed due to some error)
                         that should happen before the backoffMultiplier should kick in.
backoffIdleThreshold     The number of subsequent idle polls that should happen before the
                         backoffMultiplier should kick in.
backoffMultiplier        To let the scheduled polling consumer back off if there have been
                         a number of subsequent idles/errors in a row. The multiplier is
                         then the number of polls that will be skipped before the next
                         actual attempt happens again. When this option is in use then
                         backoffIdleThreshold and/or backoffErrorThreshold must also be
                         configured.
greedy                   If greedy is enabled then the ScheduledPollConsumer will run
                         immediately again if the previous run polled 1 or more messages.
runLoggingLevel          The consumer logs a start/complete log line when it polls. This
                         option allows you to configure the logging level for that.
scheduledExecutorService Allows for configuring a custom/shared thread pool to use for the
                         consumer. By default each consumer has its own single-threaded
                         thread pool.
scheduler                To use a cron scheduler from either camel-spring or camel-quartz2
schedulerProperties      To configure additional properties when using a custom scheduler
                         or any of the Quartz2 or Spring based schedulers.
startScheduler           Whether the scheduler should be auto started.
timeUnit                 Time unit for initialDelay and delay options.
useFixedDelay            Controls if fixed delay or fixed rate is used. See
                         ScheduledExecutorService in JDK for details.
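As a side note, the splitStrategy syntax described in the table (pairs of strategy type and value) can be sketched with a small parser. This is only an illustrative sketch of the documented syntax, not Camel's actual implementation; the class and method names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: parse a splitStrategy string such as
// "BYTES:1048576,IDLE:60000" into (strategy, value) pairs, following the
// ST:value,ST:value,... syntax from the option description above.
public class SplitStrategyParser {

    // Allowed strategy types from the option description.
    private static final List<String> TYPES = List.of("BYTES", "MESSAGES", "IDLE");

    public static Map<String, Long> parse(String splitStrategy) {
        Map<String, Long> result = new LinkedHashMap<>();
        for (String pair : splitStrategy.split(",")) {
            String[] parts = pair.split(":");
            if (parts.length != 2 || !TYPES.contains(parts[0])) {
                throw new IllegalArgumentException("Invalid splitStrategy chunk: " + pair);
            }
            result.put(parts[0], Long.parseLong(parts[1]));
        }
        return result;
    }

    public static void main(String[] args) {
        // Roll a new file after 1 MB written or 60 s of inactivity.
        Map<String, Long> strategies = parse("BYTES:1048576,IDLE:60000");
        System.out.println(strategies); // {BYTES=1048576, IDLE=60000}
    }
}
```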

Is that what you'd expect?
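For readers unfamiliar with the backoff options in the table (backoffIdleThreshold, backoffErrorThreshold, backoffMultiplier), here is a minimal sketch of the described semantics: after a threshold of consecutive idle polls, the next multiplier polls are skipped. The class and method names are hypothetical; this is not Camel's ScheduledPollConsumer implementation.

```java
// Hypothetical sketch of the documented backoff behaviour: once
// backoffIdleThreshold consecutive idle polls occur, the next
// backoffMultiplier scheduled runs are skipped before polling resumes.
public class BackoffSketch {
    private final int idleThreshold;
    private final int multiplier;
    private int idleCount;
    private int skipsRemaining;

    public BackoffSketch(int idleThreshold, int multiplier) {
        this.idleThreshold = idleThreshold;
        this.multiplier = multiplier;
    }

    /** Returns true if this scheduled run should actually poll. */
    public boolean shouldPoll() {
        if (skipsRemaining > 0) {
            skipsRemaining--;
            return false;
        }
        return true;
    }

    /** Record the outcome of a poll: how many messages it returned. */
    public void recordPoll(int polledMessages) {
        if (polledMessages == 0) {
            idleCount++;
            if (idleCount >= idleThreshold) {
                // Threshold reached: skip the next 'multiplier' runs.
                skipsRemaining = multiplier;
                idleCount = 0;
            }
        } else {
            idleCount = 0;
        }
    }

    public static void main(String[] args) {
        BackoffSketch b = new BackoffSketch(2, 3);
        b.recordPoll(0);
        b.recordPoll(0); // second idle poll in a row: backoff kicks in
        System.out.println(b.shouldPoll()); // false
        System.out.println(b.shouldPoll()); // false
        System.out.println(b.shouldPoll()); // false
        System.out.println(b.shouldPoll()); // true (multiplier of 3 exhausted)
    }
}
```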

> Camel commands - camel-component-info
> -------------------------------------
>                 Key: CAMEL-8211
>                 URL:
>             Project: Camel
>          Issue Type: New Feature
>          Components: tooling
>    Affects Versions: 2.15.0
>            Reporter: Claus Ibsen
>            Priority: Minor
>             Fix For: Future
> A new camel-catalog-component-info command to display detailed information about the component.
> We should show:
> - component description
> - label(s)
> - maven coordinate
> - list of all its options and descriptions for those
> This allows users to use these commands in tooling to read the component documentation.
> In the future we may slurp in any files we have in the components so we can
> do all component documentation in the source code and not use the confluence wiki which gets
> out of sync etc.

This message was sent by Atlassian JIRA
