Date: Fri, 31 May 2019 04:24:19 +0000
From: davsclaus@apache.org
To: "commits@camel.apache.org"
Reply-To: dev@camel.apache.org
Subject: [camel] branch master updated: Regen
Message-ID: <155927665927.2010.11648411442772690096@gitbox.apache.org>
X-Git-Host: gitbox.apache.org
X-Git-Repo: camel
X-Git-Refname: refs/heads/master
X-Git-Reftype: branch
X-Git-Oldrev: e92b84bf1ed14c6373eafd330ef97032583d4de6
X-Git-Newrev: b03eaca034696be2b2d91f5e26a3affae9568a91
Auto-Submitted: auto-generated

This is an automated email from the ASF dual-hosted git repository.

davsclaus pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git


The following commit(s) were added to refs/heads/master by this push:
     new b03eaca  Regen
b03eaca is described below

commit b03eaca034696be2b2d91f5e26a3affae9568a91
Author: Claus Ibsen
AuthorDate: Fri May 31 06:16:28 2019 +0200

    Regen
---
 .../src/main/docs/tokenize-language.adoc      |  3 +-
 .../modules/ROOT/pages/claimCheck-eip.adoc    | 41 +++++++++++++++-------
 .../modules/ROOT/pages/tokenize-language.adoc |  3 +-
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/core/camel-base/src/main/docs/tokenize-language.adoc b/core/camel-base/src/main/docs/tokenize-language.adoc
index 63937d7..b2cdb73 100644
--- a/core/camel-base/src/main/docs/tokenize-language.adoc
+++ b/core/camel-base/src/main/docs/tokenize-language.adoc
@@ -17,7 +17,7 @@ see Splitter.
 === Tokenize Options
 
 // language options: START
-The Tokenize language supports 10 options, which are listed below.
+The Tokenize language supports 11 options, which are listed below.
@@ -32,6 +32,7 @@ The Tokenize language supports 10 options, which are listed below.
 | xml | false | Boolean | Whether the input is XML messages. This option must be set to true if working with XML payloads.
 | includeTokens | false | Boolean | Whether to include the tokens in the parts when using pairs The default value is false
 | group |  | String | To group N parts together, for example to split big files into chunks of 1000 lines. You can use simple language as the group to support dynamic group sizes.
+| groupDelimiter |  | String | Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter.
 | skipFirst | false | Boolean | To skip the very first element
 | trim | true | Boolean | Whether to trim the value to remove leading and trailing whitespaces and line breaks
 |===
diff --git a/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc b/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc
index a3fa60a..2643bfb 100644
--- a/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc
+++ b/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc
@@ -20,7 +20,7 @@ The Claim Check EIP supports 5 options which are listed below:
 |===
 | Name | Description | Default | Type
 | *operation* | *Required* The claim check operation to use. The following operations is supported: Get - Gets (does not remove) the claim check by the given key. GetAndRemove - Gets and remove the claim check by the given key. Set - Sets a new (will override if key already exists) claim check with the given key. Push - Sets a new claim check on the stack (does not use key). Pop - Gets the latest claim check from the stack (does not use key). |  | ClaimCheckOperation
-| *key* | To use a specific key for claim check id. |  | String
+| *key* | To use a specific key for claim check id (for dynamic keys use simple language syntax as the key). |  | String
 | *filter* | Specified a filter to control what data gets merging data back from the claim check repository. The following syntax is supported: body - to aggregate the message body attachments - to aggregate all the message attachments headers - to aggregate all the message headers header:pattern - to aggregate all the message headers that matches the pattern. The pattern uses the following rules are applied in this order: exact match, returns true wildcard match (pattern ends with a and [...]
 | *strategyRef* | To use a custom AggregationStrategy instead of the default implementation. Notice you cannot use both custom aggregation strategy and configure data at the same time. |  | String
 | *strategyMethodName* | This option can be used to explicit declare the method name to use, when using POJOs as the AggregationStrategy. |  | String
@@ -67,35 +67,30 @@ You can specify multiple rules separated by comma.
 
 For example to include the message body and all headers starting with _foo_:
 
-[text]
 ----
 body,header:foo*
 ----
 
 To only merge back the message body:
 
-[text]
 ----
 body
 ----
 
 To only merge back the message attachments:
 
-[text]
 ----
 attachments
 ----
 
 To only merge back headers:
 
-[text]
 ----
 headers
 ----
 
 To only merge back a header name foo:
 
-[text]
 ----
 header:foo
 ----
@@ -104,7 +99,7 @@ If the filter rule is specified as empty or as wildcard then everything is merge
 
 Notice that when merging back data, then any existing data is overwritten, and any other existing data is preserved.
 
-==== Fine grained filtering with include and explude pattern
+==== Fine grained filtering with include and exclude pattern
 
 The syntax also supports the following prefixes which can be used to specify include,exclude, or remove
 
@@ -129,12 +124,32 @@ You can also instruct to remove headers when merging data back, for example to r
 
 Note you cannot have both include (`+`) and exclude (`-`) `header:pattern` at the same time.
+=== Dynamic keys
+
+The claim check key is static, but you can use the `simple` language syntax to define dynamic keys,
+for example to use a header from the message named `myKey`:
+
+[source,java]
+----
+from("direct:start")
+  .to("mock:a")
+  .claimCheck(ClaimCheckOperation.Set, "${header.myKey}")
+  .transform().constant("Bye World")
+  .to("mock:b")
+  .claimCheck(ClaimCheckOperation.Get, "${header.myKey}")
+  .to("mock:c")
+  .transform().constant("Hi World")
+  .to("mock:d")
+  .claimCheck(ClaimCheckOperation.Get, "${header.myKey}")
+  .to("mock:e");
+----
+
 === Java Examples
 
 The following example shows the `Push` and `Pop` operations in action;
 
-[java]
+[source,java]
 ----
 from("direct:start")
   .to("mock:a")
@@ -151,7 +166,7 @@ then the original message body is retrieved and merged back so `mock:c` will ret
 
 Here is an example using `Get` and `Set` operations, which uses the key `foo`:
 
-[java]
+[source,java]
 ----
 from("direct:start")
   .to("mock:a")
@@ -171,7 +186,7 @@ to get the data once, you can use `GetAndRemove`.
 
 The last example shows how to use the `filter` option where we only want to get back header named `foo` or `bar`:
 
-[java]
+[source,java]
 ----
 from("direct:start")
   .to("mock:a")
@@ -189,7 +204,7 @@ from("direct:start")
 
 The following example shows the `Push` and `Pop` operations in action;
 
-[xml]
+[source,xml]
 ----
 
@@ -210,7 +225,7 @@ then the original message body is retrieved and merged back so `mock:c` will ret
 
 Here is an example using `Get` and `Set` operations, which uses the key `foo`:
 
-[xml]
+[source,xml]
 ----
 
@@ -236,7 +251,7 @@ to get the data once, you can use `GetAndRemove`.
 
 The last example shows how to use the `filter` option where we only want to get back header named `foo` or `bar`:
 
-[xml]
+[source,xml]
 ----
 
diff --git a/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc b/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc
index 63937d7..b2cdb73 100644
--- a/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc
+++ b/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc
@@ -17,7 +17,7 @@ see Splitter.
 === Tokenize Options
 
 // language options: START
-The Tokenize language supports 10 options, which are listed below.
+The Tokenize language supports 11 options, which are listed below.
@@ -32,6 +32,7 @@ The Tokenize language supports 10 options, which are listed below.
 | xml | false | Boolean | Whether the input is XML messages. This option must be set to true if working with XML payloads.
 | includeTokens | false | Boolean | Whether to include the tokens in the parts when using pairs The default value is false
 | group |  | String | To group N parts together, for example to split big files into chunks of 1000 lines. You can use simple language as the group to support dynamic group sizes.
+| groupDelimiter |  | String | Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter.
 | skipFirst | false | Boolean | To skip the very first element
 | trim | true | Boolean | Whether to trim the value to remove leading and trailing whitespaces and line breaks
 |===
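
The new `groupDelimiter` option documented in the tokenize-language.adoc change above is used together with `group` when a Splitter joins N parts back into one chunk. A minimal sketch in the XML DSL, assuming the option is exposed as a `groupDelimiter` attribute on `<tokenize>` like the other generated language options; the endpoints, group size, and comma delimiter are illustrative only:

[source,xml]
----
<route>
  <from uri="file:inbox"/>
  <split streaming="true">
    <!-- split on new lines and emit groups of 1000 lines,
         joined with a comma instead of the token itself -->
    <tokenize token="\n" group="1000" groupDelimiter=","/>
    <to uri="jms:queue:orders"/>
  </split>
</route>
----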
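
The claim check `filter` rules described in the claimCheck-eip.adoc change can also be set directly on the operation that merges data back. Since the XML route bodies above are truncated in this message, the following is only a hedged sketch using the documented `<claimCheck>` attributes (`operation`, `key`, `filter`); the endpoints and the `header:(foo|bar)` pattern are illustrative, not the exact content of claimCheck-eip.adoc:

[source,xml]
----
<route>
  <from uri="direct:start"/>
  <to uri="mock:a"/>
  <!-- park the current message body under the key foo -->
  <claimCheck operation="Set" key="foo"/>
  <transform>
    <constant>Bye World</constant>
  </transform>
  <to uri="mock:b"/>
  <!-- merge back only the headers named foo or bar -->
  <claimCheck operation="Get" key="foo" filter="header:(foo|bar)"/>
  <to uri="mock:c"/>
</route>
----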