camel-commits mailing list archives

From build...@apache.org
Subject svn commit: r974825 - in /websites/production/camel/content: apache-spark.html cache/main.pageCache
Date Mon, 07 Dec 2015 10:19:20 GMT
Author: buildbot
Date: Mon Dec  7 10:19:19 2015
New Revision: 974825

Log:
Production update by buildbot for camel

Modified:
    websites/production/camel/content/apache-spark.html
    websites/production/camel/content/cache/main.pageCache

Modified: websites/production/camel/content/apache-spark.html
==============================================================================
--- websites/production/camel/content/apache-spark.html (original)
+++ websites/production/camel/content/apache-spark.html Mon Dec  7 10:19:19 2015
@@ -85,7 +85,7 @@
 	<tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><h2 id="ApacheSpark-ApacheSparkcomponent">Apache
Spark component</h2><div class="confluence-information-macro confluence-information-macro-information"><span
class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>The Apache Spark component is
available starting from Camel <strong>2.17</strong>.</p></div></div><p>&#160;</p><p><span
style="line-height: 1.5625;font-size: 16.0px;">This documentation page covers the <a
shape="rect" class="external-link" href="http://spark.apache.org/">Apache Spark</a>
component for Apache Camel. The main purpose of the Spark integration with Camel is to
provide a bridge between Camel connectors and Spark tasks. In particular, the Camel connector provides
a way to route messages from various transports, dynamically choose a task to execute, use
the incoming message as input data for that task, and finally deliver the results of the execution back to the Camel pipeline.</span></p><h3 id="ApacheSpark-Supportedarchitecturalstyles"><span>Supported
architectural styles</span></h3><p><span style="line-height: 1.5625;font-size:
16.0px;">The Spark component can be used as a driver application deployed into an application
server (or executed as a fat jar).</span></p><p><span style="line-height:
1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper confluence-embedded-manual-size"><img
class="confluence-embedded-image" height="250" src="apache-spark.data/camel_spark_driver.png"
data-image-src="/confluence/download/attachments/61331559/camel_spark_driver.png?version=2&amp;modificationDate=1449478362672&amp;api=v2"
data-unresolved-comment-count="0" data-linked-resource-id="61331563" data-linked-resource-version="2"
data-linked-resource-type="attachment" data-linked-resource-default-alias="camel_spark_driver.png"
data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png"
data-linked-resource-container-id="61331559" data-linked-resource-container-version="3"></span><br
clear="none"></span></p><p><span style="line-height: 1.5625;font-size:
16.0px;">The Spark component can also be submitted as a job directly to the Spark cluster.</span></p><p><span
style="line-height: 1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper
confluence-embedded-manual-size"><img class="confluence-embedded-image" height="250"
src="apache-spark.data/camel_spark_cluster.png" data-image-src="/confluence/download/attachments/61331559/camel_spark_cluster.png?version=1&amp;modificationDate=1449478393084&amp;api=v2"
data-unresolved-comment-count="0" data-linked-resource-id="61331565" data-linked-resource-version="1"
data-linked-resource-type="attachment" data-linked-resource-default-alias="camel_spark_cluster.png"
data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png"
data-linked-resource-container-id="61331559" data-linked-resource-container-version="3"></span><br clear="none"></span></p><p><span
style="line-height: 1.5625;font-size: 16.0px;">While the Spark component is primarily designed
to work as a <em>long running job</em>&#160;serving as a bridge between a Spark
cluster and the other endpoints, you can also use it as a <em>fire-once</em> short
job.&#160;</span></p><div><span><br clear="none"></span></div><p><span
style="line-height: 1.5625;font-size: 16.0px;"><br clear="none"><br clear="none"></span></p><h3
id="ApacheSpark-KuraRouteractivator"><span style="line-height: 1.5625;font-size: 16.0px;">KuraRouter
activator</span></h3><p>Bundles deployed to Eclipse&#160;Kura&#160;are
usually <a shape="rect" class="external-link" href="http://eclipse.github.io/kura/doc/hello-example.html#create-java-class"
rel="nofollow">developed as bundle activators</a>. So the easiest way to deploy Apache
Camel routes into Kura is to create an OSGi bundle containing a class extending <code>org.apache.camel.kura.KuraRouter</code>:</p><div class="code panel pdl" style="border-width:
1px;"><div class="codeContent panelContent pdl">
+<div class="wiki-content maincontent"><h2 id="ApacheSpark-ApacheSparkcomponent">Apache
Spark component</h2><div class="confluence-information-macro confluence-information-macro-information"><span
class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>The Apache Spark component is
available starting from Camel <strong>2.17</strong>.</p></div></div><p>&#160;</p><p><span
style="line-height: 1.5625;font-size: 16.0px;">This documentation page covers the <a
shape="rect" class="external-link" href="http://spark.apache.org/">Apache Spark</a>
component for Apache Camel. The main purpose of the Spark integration with Camel is to
provide a bridge between Camel connectors and Spark tasks. In particular, the Camel connector provides
a way to route messages from various transports, dynamically choose a task to execute, use
the incoming message as input data for that task, and finally deliver the results of the execution back to the Camel pipeline.</span></p><h3 id="ApacheSpark-Supportedarchitecturalstyles"><span>Supported
architectural styles</span></h3><p><span style="line-height: 1.5625;font-size:
16.0px;">The Spark component can be used as a driver application deployed into an application
server (or executed as a fat jar).</span></p><p><span style="line-height:
1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper confluence-embedded-manual-size"><img
class="confluence-embedded-image" height="250" src="apache-spark.data/camel_spark_driver.png"
data-image-src="/confluence/download/attachments/61331559/camel_spark_driver.png?version=2&amp;modificationDate=1449478362000&amp;api=v2"
data-unresolved-comment-count="0" data-linked-resource-id="61331563" data-linked-resource-version="2"
data-linked-resource-type="attachment" data-linked-resource-default-alias="camel_spark_driver.png"
data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png"
data-linked-resource-container-id="61331559" data-linked-resource-container-version="4"></span><br
clear="none"></span></p><p><span style="line-height: 1.5625;font-size:
16.0px;">The Spark component can also be submitted as a job directly to the Spark cluster.</span></p><p><span
style="line-height: 1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper
confluence-embedded-manual-size"><img class="confluence-embedded-image" height="250"
src="apache-spark.data/camel_spark_cluster.png" data-image-src="/confluence/download/attachments/61331559/camel_spark_cluster.png?version=1&amp;modificationDate=1449478393000&amp;api=v2"
data-unresolved-comment-count="0" data-linked-resource-id="61331565" data-linked-resource-version="1"
data-linked-resource-type="attachment" data-linked-resource-default-alias="camel_spark_cluster.png"
data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png"
data-linked-resource-container-id="61331559" data-linked-resource-container-version="4"></span><br clear="none"></span></p><p><span
style="line-height: 1.5625;font-size: 16.0px;">While the Spark component is primarily designed
to work as a <em>long running job</em>&#160;serving as a bridge between a Spark
cluster and the other endpoints, you can also use it as a <em>fire-once</em> short
job.&#160;</span></p><div><span><br clear="none"></span></div><p><span
style="line-height: 1.5625;font-size: 16.0px;">&#160;</span></p><h3
id="ApacheSpark-RunningSparkinOSGiservers"><span>Running Spark in OSGi servers</span></h3><p><span
style="line-height: 1.5625;font-size: 16.0px;">&#160;</span></p><p>Currently
the Spark component doesn't support execution in the OSGi container. Spark has been designed
to be executed as a fat jar, usually submitted as a job to a cluster. For those reasons running
Spark in an OSGi server is at least challenging and is not support by Camel as well.</p><p><span
style="line-height: 1.5625;font-size: 16.0px;"><br clear="none"><br cl
 ear="none"></span></p><h3 id="ApacheSpark-KuraRouteractivator"><span
style="line-height: 1.5625;font-size: 16.0px;">KuraRouter activator</span></h3><p>Bundles
deployed to Eclipse&#160;Kura&#160;are usually <a shape="rect" class="external-link"
href="http://eclipse.github.io/kura/doc/hello-example.html#create-java-class" rel="nofollow">developed
as bundle activators</a>. So the easiest way to deploy Apache Camel routes into
Kura is to create an OSGi bundle containing a class extending <code>org.apache.camel.kura.KuraRouter</code>:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
 <script class="brush: java; gutter: false; theme: Default" type="syntaxhighlighter"><![CDATA[public
class MyKuraRouter extends KuraRouter {
 
   @Override

Modified: websites/production/camel/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.
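The <code>MyKuraRouter</code> snippet quoted in the diff above is cut off at the hunk boundary. For reference, a minimal activator of this kind is a subclass of <code>org.apache.camel.kura.KuraRouter</code> that overrides <code>configure()</code> with a Camel route; the endpoints in this sketch are illustrative and not taken from the commit:

```java
import org.apache.camel.kura.KuraRouter;

// Minimal sketch of deploying a Camel route to Eclipse Kura by extending
// KuraRouter, which acts as the bundle activator. The timer and HTTP
// endpoints below are illustrative examples, not part of this commit.
public class MyKuraRouter extends KuraRouter {

    @Override
    public void configure() throws Exception {
        // Fire every 10 seconds and push a message to a remote HTTP endpoint.
        from("timer:trigger?period=10000")
            .setBody(constant("ping"))
            .to("netty-http:http://example.com/api/events");
    }

}
```

Packaged as an OSGi bundle, this class is picked up by Kura on deployment and the route starts with the bundle.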


