camel-commits mailing list archives

From acosent...@apache.org
Subject [camel] 02/02: CAMEL-13806 - Regen and removed ejb-component from index
Date Wed, 31 Jul 2019 10:51:04 GMT
This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git

commit 765d7989e42399813b4b67df32bdb5b17a575c39
Author: Andrea Cosentino <ancosen@gmail.com>
AuthorDate: Wed Jul 31 12:42:53 2019 +0200

    CAMEL-13806 - Regen and removed ejb-component from index
---
 components/readme.adoc                             |   5 +-
 .../builder/endpoint/EndpointBuilderFactory.java   |   1 -
 .../endpoint/dsl/SparkEndpointBuilderFactory.java  | 244 ++++++++++++++-------
 docs/components/modules/ROOT/nav.adoc              |   1 -
 .../modules/ROOT/pages/jpa-component.adoc          |   6 +-
 docs/user-manual/modules/ROOT/pages/index.adoc     |   1 -
 6 files changed, 167 insertions(+), 91 deletions(-)

diff --git a/components/readme.adoc b/components/readme.adoc
index 194133d..f177460 100644
--- a/components/readme.adoc
+++ b/components/readme.adoc
@@ -1,7 +1,7 @@
 = Components
 
 // components: START
-Number of Components: 298 in 235 JAR artifacts (0 deprecated)
+Number of Components: 297 in 234 JAR artifacts (0 deprecated)
 
 [width="100%",cols="4,1,5",options="header"]
 |===
@@ -247,9 +247,6 @@ Number of Components: 298 in 235 JAR artifacts (0 deprecated)
 | link:camel-ehcache/src/main/docs/ehcache-component.adoc[Ehcache] (camel-ehcache) +
 `ehcache:cacheName` | 2.18 | The ehcache component enables you to perform caching operations using Ehcache as cache implementation.
 
-| link:camel-ejb/src/main/docs/ejb-component.adoc[EJB] (camel-ejb) +
-`ejb:beanName` | 2.4 | The ejb component is for invoking EJB Java beans from Camel.
-
 | link:camel-elasticsearch-rest/src/main/docs/elasticsearch-rest-component.adoc[Elastichsearch Rest] (camel-elasticsearch-rest) +
 `elasticsearch-rest:clusterName` | 2.21 | The elasticsearch component is used for interfacing with ElasticSearch server using REST API.
 
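Each index row above pairs a component with its URI scheme, such as `ehcache:cacheName` for the Ehcache entry that remains. For orientation only, a minimal, purely illustrative route against that scheme (the cache name and route endpoints are assumptions, not part of this commit) could look like:

    // Purely illustrative: "users" is an assumed cache name; real usage
    // normally also supplies a cache action via an endpoint option or header.
    from("direct:warm")
        .to("ehcache://users");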
diff --git a/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/EndpointBuilderFactory.java b/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/EndpointBuilderFactory.java
index 9d3ec10..b50f932 100644
--- a/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/EndpointBuilderFactory.java
+++ b/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/EndpointBuilderFactory.java
@@ -93,7 +93,6 @@ public interface EndpointBuilderFactory extends
         ECSEndpointBuilderFactory,
         EKSEndpointBuilderFactory,
         EhcacheEndpointBuilderFactory,
-        EjbEndpointBuilderFactory,
         ElasticsearchEndpointBuilderFactory,
         ElsqlEndpointBuilderFactory,
         EtcdEndpointBuilderFactory,
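The EndpointBuilderFactory interface aggregates the generated per-component factories, so removing EjbEndpointBuilderFactory above is what actually drops ejb endpoints from the type-safe Endpoint DSL. As a minimal sketch of how the factory is consumed (the route class, endpoint path and message body are assumptions, and it presumes the from(...) overload accepting endpoint builders that the Endpoint DSL provides):

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.builder.endpoint.EndpointBuilderFactory;

    // Illustrative route: mixing in EndpointBuilderFactory exposes the
    // generated default methods (spark(...), ehcache(...), and so on).
    public class DemoRoute extends RouteBuilder implements EndpointBuilderFactory {
        @Override
        public void configure() {
            // spark(...) now resolves to the regenerated spark-rest builder;
            // an ejb(...) method no longer exists after this commit.
            from(spark("get/hello"))
                .transform(constant("Hello from spark-rest"));
        }
    }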
diff --git a/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java b/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java
index 6ef351c..a7bae49 100644
--- a/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java
+++ b/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java
@@ -22,8 +22,8 @@ import org.apache.camel.builder.EndpointProducerBuilder;
 import org.apache.camel.builder.endpoint.AbstractEndpointBuilder;
 
 /**
- * The spark component can be used to send RDD or DataFrame jobs to Apache Spark
- * cluster.
+ * The spark-rest component is used for hosting REST services which has been
+ * defined using Camel rest-dsl.
  * 
  * Generated by camel-package-maven-plugin - do not edit this file!
  */
@@ -32,138 +32,166 @@ public interface SparkEndpointBuilderFactory {
 
 
     /**
-     * Builder for endpoint for the Apache Spark component.
+     * Builder for endpoint for the Spark Rest component.
      */
-    public interface SparkEndpointBuilder extends EndpointProducerBuilder {
+    public interface SparkEndpointBuilder extends EndpointConsumerBuilder {
         default AdvancedSparkEndpointBuilder advanced() {
             return (AdvancedSparkEndpointBuilder) this;
         }
         /**
-         * Indicates if results should be collected or counted.
+         * Accept type such as: 'text/xml', or 'application/json'. By default we
+         * accept all kinds of types.
          * 
-         * The option is a: <code>boolean</code> type.
-         * 
-         * Group: producer
-         */
-        default SparkEndpointBuilder collect(boolean collect) {
-            setProperty("collect", collect);
-            return this;
-        }
-        /**
-         * Indicates if results should be collected or counted.
-         * 
-         * The option will be converted to a <code>boolean</code> type.
+         * The option is a: <code>java.lang.String</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder collect(String collect) {
-            setProperty("collect", collect);
+        default SparkEndpointBuilder accept(String accept) {
+            setProperty("accept", accept);
             return this;
         }
         /**
-         * DataFrame to compute against.
+         * Determines whether or not the raw input stream from Spark
+         * HttpRequest#getContent() is cached or not (Camel will read the stream
+         * into a in light-weight memory based Stream caching) cache. By default
+         * Camel will cache the Netty input stream to support reading it
+         * multiple times to ensure Camel can retrieve all data from the stream.
+         * However you can set this option to true when you for example need to
+         * access the raw stream, such as streaming it directly to a file or
+         * other persistent store. Mind that if you enable this option, then you
+         * cannot read the Netty stream multiple times out of the box, and you
+         * would need manually to reset the reader index on the Spark raw
+         * stream.
          * 
-         * The option is a:
-         * <code>org.apache.spark.sql.Dataset&lt;org.apache.spark.sql.Row&gt;</code> type.
+         * The option is a: <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder dataFrame(Object dataFrame) {
-            setProperty("dataFrame", dataFrame);
+        default SparkEndpointBuilder disableStreamCache(
+                boolean disableStreamCache) {
+            setProperty("disableStreamCache", disableStreamCache);
             return this;
         }
         /**
-         * DataFrame to compute against.
+         * Determines whether or not the raw input stream from Spark
+         * HttpRequest#getContent() is cached or not (Camel will read the stream
+         * into a in light-weight memory based Stream caching) cache. By default
+         * Camel will cache the Netty input stream to support reading it
+         * multiple times to ensure Camel can retrieve all data from the stream.
+         * However you can set this option to true when you for example need to
+         * access the raw stream, such as streaming it directly to a file or
+         * other persistent store. Mind that if you enable this option, then you
+         * cannot read the Netty stream multiple times out of the box, and you
+         * would need manually to reset the reader index on the Spark raw
+         * stream.
          * 
-         * The option will be converted to a
-         * <code>org.apache.spark.sql.Dataset&lt;org.apache.spark.sql.Row&gt;</code> type.
+         * The option will be converted to a <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder dataFrame(String dataFrame) {
-            setProperty("dataFrame", dataFrame);
+        default SparkEndpointBuilder disableStreamCache(
+                String disableStreamCache) {
+            setProperty("disableStreamCache", disableStreamCache);
             return this;
         }
         /**
-         * Function performing action against an DataFrame.
+         * If this option is enabled, then during binding from Spark to Camel
+         * Message then the headers will be mapped as well (eg added as header
+         * to the Camel Message as well). You can turn off this option to
+         * disable this. The headers can still be accessed from the
+         * org.apache.camel.component.sparkrest.SparkMessage message with the
+         * method getRequest() that returns the Spark HTTP request instance.
          * 
-         * The option is a:
-         * <code>org.apache.camel.component.spark.DataFrameCallback</code> type.
+         * The option is a: <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder dataFrameCallback(Object dataFrameCallback) {
-            setProperty("dataFrameCallback", dataFrameCallback);
+        default SparkEndpointBuilder mapHeaders(boolean mapHeaders) {
+            setProperty("mapHeaders", mapHeaders);
             return this;
         }
         /**
-         * Function performing action against an DataFrame.
+         * If this option is enabled, then during binding from Spark to Camel
+         * Message then the headers will be mapped as well (eg added as header
+         * to the Camel Message as well). You can turn off this option to
+         * disable this. The headers can still be accessed from the
+         * org.apache.camel.component.sparkrest.SparkMessage message with the
+         * method getRequest() that returns the Spark HTTP request instance.
          * 
-         * The option will be converted to a
-         * <code>org.apache.camel.component.spark.DataFrameCallback</code> type.
+         * The option will be converted to a <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder dataFrameCallback(String dataFrameCallback) {
-            setProperty("dataFrameCallback", dataFrameCallback);
+        default SparkEndpointBuilder mapHeaders(String mapHeaders) {
+            setProperty("mapHeaders", mapHeaders);
             return this;
         }
         /**
-         * RDD to compute against.
+         * If enabled and an Exchange failed processing on the consumer side,
+         * and if the caused Exception was send back serialized in the response
+         * as a application/x-java-serialized-object content type. This is by
+         * default turned off. If you enable this then be aware that Java will
+         * deserialize the incoming data from the request to Java and that can
+         * be a potential security risk.
          * 
-         * The option is a: <code>org.apache.spark.api.java.JavaRDDLike</code>
-         * type.
+         * The option is a: <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder rdd(Object rdd) {
-            setProperty("rdd", rdd);
+        default SparkEndpointBuilder transferException(boolean transferException) {
+            setProperty("transferException", transferException);
             return this;
         }
         /**
-         * RDD to compute against.
+         * If enabled and an Exchange failed processing on the consumer side,
+         * and if the caused Exception was send back serialized in the response
+         * as a application/x-java-serialized-object content type. This is by
+         * default turned off. If you enable this then be aware that Java will
+         * deserialize the incoming data from the request to Java and that can
+         * be a potential security risk.
          * 
-         * The option will be converted to a
-         * <code>org.apache.spark.api.java.JavaRDDLike</code> type.
+         * The option will be converted to a <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder rdd(String rdd) {
-            setProperty("rdd", rdd);
+        default SparkEndpointBuilder transferException(String transferException) {
+            setProperty("transferException", transferException);
             return this;
         }
         /**
-         * Function performing action against an RDD.
+         * If this option is enabled, then during binding from Spark to Camel
+         * Message then the header values will be URL decoded (eg %20 will be a
+         * space character.).
          * 
-         * The option is a:
-         * <code>org.apache.camel.component.spark.RddCallback</code> type.
+         * The option is a: <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder rddCallback(Object rddCallback) {
-            setProperty("rddCallback", rddCallback);
+        default SparkEndpointBuilder urlDecodeHeaders(boolean urlDecodeHeaders) {
+            setProperty("urlDecodeHeaders", urlDecodeHeaders);
             return this;
         }
         /**
-         * Function performing action against an RDD.
+         * If this option is enabled, then during binding from Spark to Camel
+         * Message then the header values will be URL decoded (eg %20 will be a
+         * space character.).
          * 
-         * The option will be converted to a
-         * <code>org.apache.camel.component.spark.RddCallback</code> type.
+         * The option will be converted to a <code>boolean</code> type.
          * 
-         * Group: producer
+         * Group: consumer
          */
-        default SparkEndpointBuilder rddCallback(String rddCallback) {
-            setProperty("rddCallback", rddCallback);
+        default SparkEndpointBuilder urlDecodeHeaders(String urlDecodeHeaders) {
+            setProperty("urlDecodeHeaders", urlDecodeHeaders);
             return this;
         }
     }
 
     /**
-     * Advanced builder for endpoint for the Apache Spark component.
+     * Advanced builder for endpoint for the Spark Rest component.
      */
     public interface AdvancedSparkEndpointBuilder
             extends
-                EndpointProducerBuilder {
+                EndpointConsumerBuilder {
         default SparkEndpointBuilder basic() {
             return (SparkEndpointBuilder) this;
         }
@@ -194,6 +222,56 @@ public interface SparkEndpointBuilderFactory {
             return this;
         }
         /**
+         * Whether or not the consumer should try to find a target consumer by
+         * matching the URI prefix if no exact match is found.
+         * 
+         * The option is a: <code>boolean</code> type.
+         * 
+         * Group: advanced
+         */
+        default AdvancedSparkEndpointBuilder matchOnUriPrefix(
+                boolean matchOnUriPrefix) {
+            setProperty("matchOnUriPrefix", matchOnUriPrefix);
+            return this;
+        }
+        /**
+         * Whether or not the consumer should try to find a target consumer by
+         * matching the URI prefix if no exact match is found.
+         * 
+         * The option will be converted to a <code>boolean</code> type.
+         * 
+         * Group: advanced
+         */
+        default AdvancedSparkEndpointBuilder matchOnUriPrefix(
+                String matchOnUriPrefix) {
+            setProperty("matchOnUriPrefix", matchOnUriPrefix);
+            return this;
+        }
+        /**
+         * To use a custom SparkBinding to map to/from Camel message.
+         * 
+         * The option is a:
+         * <code>org.apache.camel.component.sparkrest.SparkBinding</code> type.
+         * 
+         * Group: advanced
+         */
+        default AdvancedSparkEndpointBuilder sparkBinding(Object sparkBinding) {
+            setProperty("sparkBinding", sparkBinding);
+            return this;
+        }
+        /**
+         * To use a custom SparkBinding to map to/from Camel message.
+         * 
+         * The option will be converted to a
+         * <code>org.apache.camel.component.sparkrest.SparkBinding</code> type.
+         * 
+         * Group: advanced
+         */
+        default AdvancedSparkEndpointBuilder sparkBinding(String sparkBinding) {
+            setProperty("sparkBinding", sparkBinding);
+            return this;
+        }
+        /**
          * Sets whether synchronous processing should be strictly used, or Camel
          * is allowed to use asynchronous processing (if supported).
          * 
@@ -219,24 +297,28 @@ public interface SparkEndpointBuilderFactory {
         }
     }
     /**
-     * Apache Spark (camel-spark)
-     * The spark component can be used to send RDD or DataFrame jobs to Apache
-     * Spark cluster.
+     * Spark Rest (camel-spark-rest)
+     * The spark-rest component is used for hosting REST services which has been
+     * defined using Camel rest-dsl.
+     * 
+     * Category: rest
+     * Available as of version: 2.14
+     * Maven coordinates: org.apache.camel:camel-spark-rest
      * 
-     * Category: bigdata,iot
-     * Available as of version: 2.17
-     * Maven coordinates: org.apache.camel:camel-spark
+     * Syntax: <code>spark-rest:verb:path</code>
      * 
-     * Syntax: <code>spark:endpointType</code>
+     * Path parameter: verb (required)
+     * get, post, put, patch, delete, head, trace, connect, or options.
+     * The value can be one of: get, post, put, patch, delete, head, trace,
+     * connect, options
      * 
-     * Path parameter: endpointType (required)
-     * Type of the endpoint (rdd, dataframe, hive).
-     * The value can be one of: rdd, dataframe, hive
+     * Path parameter: path (required)
+     * The content path which support Spark syntax.
      */
     default SparkEndpointBuilder spark(String path) {
        class SparkEndpointBuilderImpl extends AbstractEndpointBuilder implements SparkEndpointBuilder, AdvancedSparkEndpointBuilder {
             public SparkEndpointBuilderImpl(String path) {
-                super("spark", path);
+                super("spark-rest", path);
             }
         }
         return new SparkEndpointBuilderImpl(path);
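Net effect of the regeneration above: spark(path) now builds consumer endpoints for the spark-rest component (scheme "spark-rest", syntax verb:path) rather than Apache Spark producer endpoints. A minimal sketch using only methods visible in the regenerated builder, inside a route class like the one sketched earlier (the path and response body are assumptions):

    // advanced() switches to AdvancedSparkEndpointBuilder; basic() switches
    // back, so the chain still ends in a consumer endpoint for from(...).
    from(spark("get/hello")
            .accept("application/json")    // only accept JSON requests
            .mapHeaders(true)              // map HTTP headers onto the Camel message
            .advanced()
            .matchOnUriPrefix(true)        // fall back to URI prefix matching
            .basic())
        .transform(constant("{\"greeting\":\"hello\"}"));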
diff --git a/docs/components/modules/ROOT/nav.adoc b/docs/components/modules/ROOT/nav.adoc
index 091c5b2..244647b 100644
--- a/docs/components/modules/ROOT/nav.adoc
+++ b/docs/components/modules/ROOT/nav.adoc
@@ -95,7 +95,6 @@
 * xref:drill-component.adoc[Drill Component]
 * xref:dropbox-component.adoc[Dropbox Component]
 * xref:ehcache-component.adoc[Ehcache Component]
-* xref:ejb-component.adoc[EJB Component]
 * xref:elasticsearch-rest-component.adoc[Elastichsearch Rest Component]
 * xref:elsql-component.adoc[ElSQL Component]
 * xref:etcd-component.adoc[etcd Component]
diff --git a/docs/components/modules/ROOT/pages/jpa-component.adoc b/docs/components/modules/ROOT/pages/jpa-component.adoc
index a4c43fb..628535e 100644
--- a/docs/components/modules/ROOT/pages/jpa-component.adoc
+++ b/docs/components/modules/ROOT/pages/jpa-component.adoc
@@ -146,11 +146,9 @@ with the following path and query parameters:
 | *maximumResults* (common) | Set the maximum number of results to retrieve on the Query. | -1 | int
 | *namedQuery* (common) | To use a named query. |  | String
 | *nativeQuery* (common) | To use a custom native query. You may want to use the option resultClass also when using native queries. |  | String
-| *parameters* (common) | This key/value mapping is used for building the query parameters. It is expected to be of the generic type java.util.Map where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. When it's used for producer, Simple expression can be used as a parameter value. It allows you to retrieve parameter values from the message body, header and etc. |  | Map
 | *persistenceUnit* (common) | *Required* The JPA persistence unit used by default. | camel | String
 | *query* (common) | To use a custom query. |  | String
 | *resultClass* (common) | Defines the type of the returned payload (we will call entityManager.createNativeQuery(nativeQuery, resultClass) instead of entityManager.createNativeQuery(nativeQuery)). Without this option, we will return an object array. Only has an affect when using in conjunction with native query when consuming data. |  | Class
-| *sharedEntityManager* (common) | Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. | false | boolean
 | *bridgeErrorHandler* (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean
 | *consumeDelete* (consumer) | If true, the entity is deleted after it is consumed; if false, the entity is not deleted. | true | boolean
 | *consumeLockEntity* (consumer) | Specifies whether or not to set an exclusive lock on each entity bean while processing the results from polling. | true | boolean
@@ -163,15 +161,17 @@ with the following path and query parameters:
 | *transacted* (consumer) | Whether to run the consumer in transacted mode, by which all messages will either commit or rollback, when the entire batch has been processed. The default behavior (false) is to commit all the previously successfully processed messages, and only rollback the last failed message. | false | boolean
 | *exceptionHandler* (consumer) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
 | *exchangePattern* (consumer) | Sets the exchange pattern when the consumer creates an exchange. |  | ExchangePattern
+| *parameters* (consumer) | This key/value mapping is used for building the query parameters. It is expected to be of the generic type java.util.Map where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. When it's used for producer, Simple expression can be used as a parameter value. It allows you to retrieve parameter values from the message body, header and etc. |  | Map
 | *pollStrategy* (consumer) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. |  | PollingConsumerPoll Strategy
 | *flushOnSend* (producer) | Flushes the EntityManager after the entity bean has been persisted. | true | boolean
 | *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and [...]
 | *remove* (producer) | Indicates to use entityManager.remove(entity). | false | boolean
 | *useExecuteUpdate* (producer) | To configure whether to use executeUpdate() when producer executes a query. When you use INSERT, UPDATE or DELETE statement as a named query, you need to specify this option to 'true'. |  | Boolean
-| *usePassedInEntityManager* (producer) | If set to true, then Camel will use the EntityManager from the header JpaConstants.ENTITY_MANAGER instead of the configured entity manager on the component/endpoint. This allows end users to control which entity manager will be in use. | false | boolean
 | *usePersist* (producer) | Indicates to use entityManager.persist(entity) instead of entityManager.merge(entity). Note: entityManager.persist(entity) doesn't work for detached entities (where the EntityManager has to execute an UPDATE instead of an INSERT query)! | false | boolean
+| *usePassedInEntityManager* (producer) | If set to true, then Camel will use the EntityManager from the header JpaConstants.ENTITY_MANAGER instead of the configured entity manager on the component/endpoint. This allows end users to control which entity manager will be in use. | false | boolean
 | *basicPropertyBinding* (advanced) | Whether the endpoint should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
 | *entityManagerProperties* (advanced) | Additional properties for the entity manager to use. |  | Map
+| *sharedEntityManager* (advanced) | Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. | false | boolean
 | *synchronous* (advanced) | Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). | false | boolean
 | *backoffErrorThreshold* (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. |  | int
 | *backoffIdleThreshold* (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. |  | int
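The substance of the table change above: the parameters option is now documented under the consumer group, sharedEntityManager moves to advanced, and usePassedInEntityManager merely changes position within the producer group. A hedged sketch of a JPA consumer exercising the relocated parameters option (the entity class, named query and #paramMap registry bean are illustrative assumptions):

    // Poll entities via a named query; parameter values come from a
    // java.util.Map bound in the Camel registry under the name "paramMap".
    from("jpa://com.example.Order"
            + "?namedQuery=findPendingOrders"
            + "&parameters=#paramMap"
            + "&consumeDelete=false")    // keep entities after they are consumed
        .to("log:orders");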
diff --git a/docs/user-manual/modules/ROOT/pages/index.adoc b/docs/user-manual/modules/ROOT/pages/index.adoc
index 01e3796..a357c74 100644
--- a/docs/user-manual/modules/ROOT/pages/index.adoc
+++ b/docs/user-manual/modules/ROOT/pages/index.adoc
@@ -233,7 +233,6 @@ camel routes without them knowing
 ** xref:components::drill-component.adoc[Drill]
 ** xref:components::dropbox-component.adoc[Dropbox]
 ** xref:components::ehcache-component.adoc[Ehcache]
-** xref:components::ejb-component.adoc[EJB]
 ** xref:components::elasticsearch-rest-component.adoc[Elastichsearch Rest]
 ** xref:components::elsql-component.adoc[ElSQL]
 ** xref:components::etcd-component.adoc[etcd]

