falcon-commits mailing list archives

From srik...@apache.org
Subject svn commit: r1565098 [9/15] - in /incubator/falcon: site/ site/0.3-incubating/ site/0.3-incubating/docs/ site/0.3-incubating/docs/restapi/ site/0.4-incubating/ site/0.4-incubating/css/ site/0.4-incubating/docs/ site/0.4-incubating/docs/restapi/ site/0....
Date Thu, 06 Feb 2014 07:38:02 GMT
Modified: incubator/falcon/site/docs/EntitySpecification.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/EntitySpecification.html?rev=1565098&r1=1565097&r2=1565098&view=diff
==============================================================================
--- incubator/falcon/site/docs/EntitySpecification.html (original)
+++ incubator/falcon/site/docs/EntitySpecification.html Thu Feb  6 07:37:58 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Oct 28, 2013
+ | Generated by Apache Maven Doxia at Feb 6, 2014
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20131028" />
+    <meta name="Date-Revision-yyyymmdd" content="20140206" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - Contents</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -99,6 +99,9 @@
                       <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.3-incubating"  title="0.3-incubating">0.3-incubating</a>
 </li>
                   
+                      <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.4-incubating"  title="0.4-incubating">0.4-incubating</a>
+</li>
+                  
                       <li>      <a href="https://cwiki.apache.org/confluence/display/FALCON/Roadmap"  title="Roadmap">Roadmap</a>
 </li>
                           </ul>
@@ -112,6 +115,9 @@
                   
                       <li>      <a href="../0.3-incubating/index.html"  title="0.3-incubating">0.3-incubating</a>
 </li>
+                  
+                      <li>      <a href="../0.4-incubating/index.html"  title="0.4-incubating">0.4-incubating</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -144,6 +150,9 @@
                   
                       <li>      <a href="../docs/restapi/ResourceList.html"  title="Rest API">Rest API</a>
 </li>
+                  
+                      <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -227,7 +236,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2013-10-28</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
             
                             </ul>
       </div>
@@ -253,6 +262,9 @@
 &lt;interface type=&quot;workflow&quot; endpoint=&quot;http://localhost:11000/oozie/&quot; version=&quot;3.1&quot; /&gt;
 
 </pre></div><p>A workflow interface specifies the interface for the workflow engine; an example of its endpoint is the value of OOZIE_URL. Falcon uses this interface to schedule the processes referencing this cluster on the workflow engine defined here.</p><div class="source"><pre class="prettyprint">
+&lt;interface type=&quot;registry&quot; endpoint=&quot;thrift://localhost:9083&quot; version=&quot;0.11.0&quot; /&gt;
+
+</pre></div><p>A registry interface specifies the interface for a metadata catalog, such as the Hive Metastore (or HCatalog). Falcon uses this interface to register/de-register partitions for a given database and table, and also uses this information to schedule data-availability events based on partitions in the workflow engine. Although the Hive metastore supports both RPC and HTTP, Falcon ships with an implementation for RPC over Thrift.</p><div class="source"><pre class="prettyprint">
 &lt;interface type=&quot;messaging&quot; endpoint=&quot;tcp://localhost:61616?daemon=true&quot; version=&quot;5.4.6&quot; /&gt;
 
 </pre></div><p>A messaging interface specifies the interface for sending feed availability messages; its endpoint is the broker URL with a TCP address.</p><p>A cluster has a list of locations defined:</p><div class="source"><pre class="prettyprint">
@@ -265,25 +277,13 @@
 &lt;feed description=&quot;clicks log&quot; name=&quot;clicks&quot; xmlns=&quot;uri:falcon:feed:0.1&quot;
 xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
 
-</pre></div><p>A feed should have a unique name and this name is referenced by processes as input or output feed.</p><div class="source"><pre class="prettyprint">
-   &lt;partitions&gt;
-        &lt;partition name=&quot;country&quot; /&gt;
-        &lt;partition name=&quot;cluster&quot; /&gt;
-    &lt;/partitions&gt;
-
-</pre></div><p>A feed can define multiple partitions, if a referenced cluster defines partitions then the number of partitions in feed has to be equal to or more than the cluster partitions.</p><div class="source"><pre class="prettyprint">
-    &lt;groups&gt;online,bi&lt;/groups&gt;
-
-</pre></div><p>A feed specifies a list of comma separated groups, a group is a logical grouping of feeds and a group is said to be available if all the feeds belonging to a group are available. The frequency of all the feed which belong to the same group must be same.</p><div class="source"><pre class="prettyprint">
-    &lt;availabilityFlag&gt;_SUCCESS&lt;/availabilityFlag&gt;
-
-</pre></div><p>An availabilityFlag specifies the name of a file which when present/created in a feeds data directory,  the feed is termed as available. ex: _SUCCESS, if this element is ignored then Falcon would consider the presence of feed's data directory as feed availability.</p><div class="source"><pre class="prettyprint">
-    &lt;frequency&gt;minutes(20)&lt;/frequency&gt;
-
-</pre></div><p>A feed has a frequency which specifies the frequency by which this feed is generated.  ex: it can be generated every hour, every 5 minutes, daily, weekly etc. valid frequency type for a feed are minutes, hours, days, months. The values can be negative, zero or positive.</p><div class="source"><pre class="prettyprint">
-    &lt;late-arrival cut-off=&quot;hours(6)&quot; /&gt;
+</pre></div><p>A feed should have a unique name and this name is referenced by processes as input or output feed.</p></div><div class="section"><h4>Storage<a name="Storage"></a></h4><p>Falcon introduces a new abstraction to encapsulate the storage for a given feed, which can be expressed either as a path on the file system (File System Storage) or as a table in a catalog such as Hive (Catalog Storage).</p><div class="source"><pre class="prettyprint">
+    &lt;xs:choice minOccurs=&quot;1&quot; maxOccurs=&quot;1&quot;&gt;
+        &lt;xs:element type=&quot;locations&quot; name=&quot;locations&quot;/&gt;
+        &lt;xs:element type=&quot;catalog-table&quot; name=&quot;table&quot;/&gt;
+    &lt;/xs:choice&gt;
 
-</pre></div><p>A late-arrival specifies the cut-off period till which the feed is expected to arrive late and should be honored be processes referring to it as input feed by rerunning the instances in case the data arrives late with in a cut-off period. The cut-off period is specified by expression frequency(times), ex: if the feed can arrive late upto 8 hours then late-arrival's cut-off=&quot;hours(8)&quot;</p><div class="source"><pre class="prettyprint">
+</pre></div><p>A feed should contain one of the two storage options: locations on the file system, or a table in a catalog.</p></div><div class="section"><h5>File System Storage<a name="File_System_Storage"></a></h5><div class="source"><pre class="prettyprint">
         &lt;clusters&gt;
         &lt;cluster name=&quot;test-cluster&quot;&gt;
             &lt;validity start=&quot;2012-07-20T03:00Z&quot; end=&quot;2099-07-16T00:00Z&quot;/&gt;
@@ -301,15 +301,55 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
  &lt;location type=&quot;stats&quot; path=&quot;/projects/falcon/clicksStats&quot; /&gt;
  &lt;location type=&quot;meta&quot; path=&quot;/projects/falcon/clicksMetaData&quot; /&gt;
 
-</pre></div><p>A location tag specifies the type of location like data, meta, stats and the corresponding paths for them. A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is generated periodically. ex: type=&quot;data&quot; path=&quot;/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic&quot; The granularity of date pattern in the path should be atleast that of a frequency of a feed. Other location type which are supported are stats and meta paths, if a process references a feed then the meta and stats paths are available as a property in a process.</p><div class="source"><pre class="prettyprint">
+</pre></div><p>A location tag specifies the type of location, such as data, meta, or stats, and the corresponding paths for them. A feed should at least define the location for type data, which specifies the HDFS path pattern where the feed is generated periodically. ex: type=&quot;data&quot; path=&quot;/projects/TrafficHourly/${YEAR}-${MONTH}-${DAY}/traffic&quot; The granularity of the date pattern in the path should be at least that of the feed's frequency. The other supported location types are stats and meta paths; if a process references a feed, then the meta and stats paths are available as a property in the process.</p></div><div class="section"><h5>Catalog Storage (Table)<a name="Catalog_Storage_Table"></a></h5><p>A table tag specifies the table URI in the catalog registry as:</p><div class="source"><pre class="prettyprint">
+catalog:$database-name:$table-name#(partition-key=partition-value);(partition-key=partition-value);*
+
+</pre></div><p>This is modeled as a URI (similar to an ISBN URI). It does not have any reference to Hive or HCatalog. It's quite generic, so it can be tied to other implementations of a catalog registry. The catalog implementation specified in the startup config provides the implementation for the catalog URI.</p><p>The top-level partition has to be a dated pattern and the granularity of the date pattern should be at least that of the feed's frequency.</p><div class="source"><pre class="prettyprint">
+    &lt;xs:complexType name=&quot;catalog-table&quot;&gt;
+        &lt;xs:annotation&gt;
+            &lt;xs:documentation&gt;
+                catalog specifies the uri of a Hive table along with the partition spec.
+                uri=&quot;catalog:$database:$table#(partition-key=partition-value);+&quot;
+                Example: catalog:logs-db:clicks#ds=${YEAR}-${MONTH}-${DAY}
+            &lt;/xs:documentation&gt;
+        &lt;/xs:annotation&gt;
+        &lt;xs:attribute type=&quot;xs:string&quot; name=&quot;uri&quot; use=&quot;required&quot;/&gt;
+    &lt;/xs:complexType&gt;
+
+</pre></div><p>Examples:</p><div class="source"><pre class="prettyprint">
+&lt;table uri=&quot;catalog:default:clicks#ds=${YEAR}-${MONTH}-${DAY}-${HOUR};region=${region}&quot; /&gt;
+&lt;table uri=&quot;catalog:src_demo_db:customer_raw#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot; /&gt;
+&lt;table uri=&quot;catalog:tgt_demo_db:customer_bcp#ds=${YEAR}-${MONTH}-${DAY}-${HOUR}&quot; /&gt;
+
+</pre></div></div><div class="section"><h4>Partitions<a name="Partitions"></a></h4><div class="source"><pre class="prettyprint">
+   &lt;partitions&gt;
+        &lt;partition name=&quot;country&quot; /&gt;
+        &lt;partition name=&quot;cluster&quot; /&gt;
+    &lt;/partitions&gt;
+
+</pre></div><p>A feed can define multiple partitions. If a referenced cluster defines partitions, then the number of partitions in the feed has to be equal to or more than the cluster partitions.</p><p><b>Note:</b> This applies only to <a href="./FileSystem.html">FileSystem</a> storage, not Table storage, as partitions are defined and maintained in the Hive (HCatalog) registry.</p></div><div class="section"><h4>Groups<a name="Groups"></a></h4><div class="source"><pre class="prettyprint">
+    &lt;groups&gt;online,bi&lt;/groups&gt;
+
+</pre></div><p>A feed specifies a list of comma-separated groups. A group is a logical grouping of feeds, and a group is said to be available if all the feeds belonging to it are available. The frequency of all feeds which belong to the same group must be the same.</p></div><div class="section"><h4>Availability Flags<a name="Availability_Flags"></a></h4><div class="source"><pre class="prettyprint">
+    &lt;availabilityFlag&gt;_SUCCESS&lt;/availabilityFlag&gt;
+
+</pre></div><p>An availabilityFlag specifies the name of a file which, when present/created in a feed's data directory, marks the feed as available, e.g. _SUCCESS. If this element is omitted, Falcon considers the presence of the feed's data directory as feed availability.</p></div><div class="section"><h4>Frequency<a name="Frequency"></a></h4><div class="source"><pre class="prettyprint">
+    &lt;frequency&gt;minutes(20)&lt;/frequency&gt;
+
+</pre></div><p>A feed has a frequency which specifies how often this feed is generated, e.g. every hour, every 5 minutes, daily, weekly, etc. Valid frequency types for a feed are minutes, hours, days, and months. The values can be negative, zero or positive.</p></div><div class="section"><h4>Late Arrival<a name="Late_Arrival"></a></h4><div class="source"><pre class="prettyprint">
+    &lt;late-arrival cut-off=&quot;hours(6)&quot; /&gt;
+
+</pre></div><p>A late-arrival specifies the cut-off period up to which the feed is expected to arrive late, and should be honored by processes referring to it as an input feed by rerunning the instances in case the data arrives late within the cut-off period. The cut-off period is specified by the expression frequency(times), ex: if the feed can arrive late by up to 8 hours, then late-arrival's cut-off=&quot;hours(8)&quot;</p><p><b>Note:</b> This applies only to <a href="./FileSystem.html">FileSystem</a> storage, not Table storage, until a future time.</p></div><div class="section"><h5>Custom Properties<a name="Custom_Properties"></a></h5><div class="source"><pre class="prettyprint">
     &lt;properties&gt;
         &lt;property name=&quot;tmpFeedPath&quot; value=&quot;tmpFeedPathValue&quot; /&gt;
         &lt;property name=&quot;field2&quot; value=&quot;value2&quot; /&gt;
         &lt;property name=&quot;queueName&quot; value=&quot;hadoopQueue&quot;/&gt;
         &lt;property name=&quot;jobPriority&quot; value=&quot;VERY_HIGH&quot;/&gt;
+        &lt;property name=&quot;timeout&quot; value=&quot;hours(1)&quot;/&gt;
+        &lt;property name=&quot;parallel&quot; value=&quot;3&quot;/&gt;
     &lt;/properties&gt;
 
-</pre></div><p>A key-value pair, which are propagated to the workflow engine. &quot;queueName&quot; and &quot;jobPriority&quot; are special properties available to user to specify the hadoop job queue and priority, the same value is used by Falcons launcher job.</p></div><div class="section"><h3>Process Specification<a name="Process_Specification"></a></h3><p>A process defines configuration for a workflow. A workflow is a directed acyclic graph(DAG) which defines the job for the workflow engine. A process definition defines  the configurations required to run the workflow job. For example, process defines the frequency at which the workflow should run, the clusters on which the workflow should run, the inputs and outputs for the workflow, how the workflow failures should be handled, how the late inputs should be handled and so on.</p><p>The different details of process are:</p></div><div class="section"><h5>Name<a name="Name"></a></h5><p>Each process is identified with a unique name
 . Syntax:</p><div class="source"><pre class="prettyprint">
+</pre></div><p>Key-value pairs, which are propagated to the workflow engine. &quot;queueName&quot; and &quot;jobPriority&quot; are special properties available to the user to specify the hadoop job queue and priority; the same value is used by Falcon's launcher job. &quot;timeout&quot; and &quot;parallel&quot; are other special properties: timeout decides the replication instance's timeout value while waiting for the feed instance, and parallel decides the number of concurrent replication instances that can run at any given time.</p></div><div class="section"><h3>Process Specification<a name="Process_Specification"></a></h3><p>A process defines configuration for a workflow. A workflow is a directed acyclic graph (DAG) which defines the job for the workflow engine. A process definition defines the configurations required to run the workflow job. For example, a process defines the frequency at which the workflow should run, the clusters on which the workflow should run, the inputs and outputs for the workflow, how workflow failures should be handled, how late inputs should be handled and so on.</p><p>The different details of process are:</p></div><div class="section"><h5>Name<a name="Name"></a></h5><p>Each process is identified with a unique name. Syntax:</p><div class="source"><pre class="prettyprint">
 &lt;process name=&quot;[process name]&quot;&gt;
 ...
 &lt;/process&gt;
@@ -413,7 +453,53 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
 ...
 &lt;/process&gt;
 
-</pre></div><p>The input for the workflow is a hourly feed and takes 0th and 1st hour data of today(the day when the workflow runs). If the workflow is running for 2012-03-01T06:40Z, the inputs are /projects/bootcamp/feed1/2012-03-01-00/*/US and /projects/bootcamp/feed1/2012-03-01-01/*/US. The property for this input is input1=/projects/bootcamp/feed1/2012-03-01-00/*/US,/projects/bootcamp/feed1/2012-03-01-01/*/US</p></div><div class="section"><h5>Optional Inputs<a name="Optional_Inputs"></a></h5><p>User can metion one or more inputs as optional inputs. In such cases the job does not wait on those inputs which are mentioned as optional. If they are present it considers them otherwise continue with the comlpulsury ones.  Example:</p><div class="source"><pre class="prettyprint">
+</pre></div><p>The input for the workflow is an hourly feed and takes the 0th and 1st hour data of today (the day when the workflow runs). If the workflow is running for 2012-03-01T06:40Z, the inputs are /projects/bootcamp/feed1/2012-03-01-00/*/US and /projects/bootcamp/feed1/2012-03-01-01/*/US. The property for this input is input1=/projects/bootcamp/feed1/2012-03-01-00/*/US,/projects/bootcamp/feed1/2012-03-01-01/*/US</p><p>Also, feeds with Hive table storage can be used as inputs to a process. Several parameters from the inputs are passed as params to the user workflow or pig script.</p><div class="source"><pre class="prettyprint">
+    ${wf:conf('falcon_input_database')} - database name associated with the feed for a given input
+    ${wf:conf('falcon_input_table')} - table name associated with the feed for a given input
+    ${wf:conf('falcon_input_catalog_url')} - Hive metastore URI for this input feed
+    ${wf:conf('falcon_input_partition_filter_pig')} - value of ${coord:dataInPartitionFilter('$input', 'pig')}
+    ${wf:conf('falcon_input_partition_filter_hive')} - value of ${coord:dataInPartitionFilter('$input', 'hive')}
+    ${wf:conf('falcon_input_partition_filter_java')} - value of ${coord:dataInPartitionFilter('$input', 'java')}
+
+</pre></div><p><b>NOTE:</b> input is the name of the input configured in the process, which is input.getName().</p><div class="source"><pre class="prettyprint">&lt;input name=&quot;input&quot; feed=&quot;clicks-raw-table&quot; start=&quot;yesterday(0,0)&quot; end=&quot;yesterday(20,0)&quot;/&gt;
+</pre></div><p>Example workflow configuration:</p><div class="source"><pre class="prettyprint">
+&lt;configuration&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_database&lt;/name&gt;
+    &lt;value&gt;falcon_db&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_table&lt;/name&gt;
+    &lt;value&gt;input_table&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_catalog_url&lt;/name&gt;
+    &lt;value&gt;thrift://localhost:29083&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_storage_type&lt;/name&gt;
+    &lt;value&gt;TABLE&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;feedInstancePaths&lt;/name&gt;
+    &lt;value&gt;hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_partition_filter_java&lt;/name&gt;
+    &lt;value&gt;(ds='2012-04-21-00')&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_partition_filter_hive&lt;/name&gt;
+    &lt;value&gt;(ds='2012-04-21-00')&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_input_partition_filter_pig&lt;/name&gt;
+    &lt;value&gt;(ds=='2012-04-21-00')&lt;/value&gt;
+  &lt;/property&gt;
+  ...
+&lt;/configuration&gt;
+
+</pre></div></div><div class="section"><h5>Optional Inputs<a name="Optional_Inputs"></a></h5><p>User can mention one or more inputs as optional inputs. In such cases the job does not wait on those inputs which are mentioned as optional. If they are present it considers them otherwise continue with the compulsory ones. Example:</p><div class="source"><pre class="prettyprint">
 &lt;feed name=&quot;feed1&quot;&gt;
 ...
     &lt;partition name=&quot;isFraud&quot;/&gt;
@@ -435,7 +521,7 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
 ...
 &lt;/process&gt;
 
-</pre></div></div><div class="section"><h5>Outputs<a name="Outputs"></a></h5><p>Outputs define the output data that is generated by the workflow. A process can define 0 or more outputs. Each output is mapped to a feed and the output path is picked up from feed definition. The output instance that should be generated is specified in terms of <a href="./FalconDocumentation.html">EL expression</a>.</p><p>For each output, Falcon creates a property with output name that contains the path of output data. This can be used in workflows to store in the path. Syntax:</p><div class="source"><pre class="prettyprint">
+</pre></div><p><b>Note:</b> This is only supported for <a href="./FileSystem.html">FileSystem</a> storage but not Table storage at this point.</p></div><div class="section"><h5>Outputs<a name="Outputs"></a></h5><p>Outputs define the output data that is generated by the workflow. A process can define 0 or more outputs. Each output is mapped to a feed and the output path is picked up from feed definition. The output instance that should be generated is specified in terms of <a href="./FalconDocumentation.html">EL expression</a>.</p><p>For each output, Falcon creates a property with output name that contains the path of output data. This can be used in workflows to store in the path. Syntax:</p><div class="source"><pre class="prettyprint">
 &lt;process name=&quot;[process name]&quot;&gt;
 ...
     &lt;outputs&gt;
@@ -464,7 +550,43 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
 ...
 &lt;/process&gt;
 
-</pre></div><p>The output of the workflow is feed instance for today. If the workflow is running for 2012-03-01T06:40Z, the workflow generates output /projects/bootcamp/feed2/2012-03-01. The property for this output that is available for workflow is: output1=/projects/bootcamp/feed2/2012-03-01</p></div><div class="section"><h5>Properties<a name="Properties"></a></h5><p>The properties are key value pairs that are passed to the workflow. These properties are optional and can be used in workflow to parameterize the workflow. Synatx:</p><div class="source"><pre class="prettyprint">
+</pre></div><p>The output of the workflow is the feed instance for today. If the workflow is running for 2012-03-01T06:40Z, the workflow generates the output /projects/bootcamp/feed2/2012-03-01. The property for this output that is available to the workflow is: output1=/projects/bootcamp/feed2/2012-03-01</p><p>Also, feeds with Hive table storage can be used as outputs of a process. Several parameters from the outputs are passed as params to the user workflow or pig script.</p><div class="source"><pre class="prettyprint">
+    ${wf:conf('falcon_output_database')} - database name associated with the feed for a given output
+    ${wf:conf('falcon_output_table')} - table name associated with the feed for a given output
+    ${wf:conf('falcon_output_catalog_url')} - Hive metastore URI for the given output feed
+    ${wf:conf('falcon_output_dataout_partitions')} - value of ${coord:dataOutPartitions('$output')}
+
+</pre></div><p><b>NOTE:</b> output is the name of the output configured in the process, which is output.getName().</p><div class="source"><pre class="prettyprint">&lt;output name=&quot;output&quot; feed=&quot;clicks-summary-table&quot; instance=&quot;today(0,0)&quot;/&gt;
+</pre></div><p>Example workflow configuration:</p><div class="source"><pre class="prettyprint">
+&lt;configuration&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_database&lt;/name&gt;
+    &lt;value&gt;falcon_db&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_table&lt;/name&gt;
+    &lt;value&gt;output_table&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_catalog_url&lt;/name&gt;
+    &lt;value&gt;thrift://localhost:29083&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_storage_type&lt;/name&gt;
+    &lt;value&gt;TABLE&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;feedInstancePaths&lt;/name&gt;
+    &lt;value&gt;hcat://localhost:29083/falcon_db/output_table/ds=2012-04-21-00&lt;/value&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;falcon_output_dataout_partitions&lt;/name&gt;
+    &lt;value&gt;'ds=2012-04-21-00'&lt;/value&gt;
+  &lt;/property&gt;
+  ....
+&lt;/configuration&gt;
+
+</pre></div></div><div class="section"><h5>Properties<a name="Properties"></a></h5><p>The properties are key value pairs that are passed to the workflow. These properties are optional and can be used in workflow to parameterize the workflow. Synatx:</p><div class="source"><pre class="prettyprint">
 &lt;process name=&quot;[process name]&quot;&gt;
 ...
     &lt;properties&gt;
@@ -478,7 +600,7 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
         &lt;property name=&quot;queueName&quot; value=&quot;hadoopQueue&quot;/&gt;
         &lt;property name=&quot;jobPriority&quot; value=&quot;VERY_HIGH&quot;/&gt;
 
-</pre></div></div><div class="section"><h5>Workflow<a name="Workflow"></a></h5><p>The workflow defines the workflow engine that should be used and the path to the workflow on hdfs. The workflow definition on hdfs contains the actual job that should run and it should confirm to the workflow specification of the engine specified. The libraries required by the workflow should be in lib folder inside the workflow path.</p><p>The properties defined in the cluster and cluster properties(nameNode and jobTracker) will also be available for the workflow.</p><p>As of now, only oozie workflow engine is supported. Refer to oozie <a class="externalLink" href="http://incubator.apache.org/oozie/overview.html">workflow overview</a> and <a class="externalLink" href="http://incubator.apache.org/oozie/docs/3.1.3/docs/WorkflowFunctionalSpec.html">workflow specification</a> for details.   Syntax:</p><div class="source"><pre class="prettyprint">
+</pre></div></div><div class="section"><h5>Workflow<a name="Workflow"></a></h5><p>The workflow defines the workflow engine that should be used and the path to the workflow on hdfs. The workflow definition on hdfs contains the actual job that should run and it should conform to the workflow specification of the engine specified. The libraries required by the workflow should be in the lib folder inside the workflow path.</p><p>The properties defined in the cluster and the cluster properties (nameNode and jobTracker) will also be available to the workflow.</p><p>There are three engines supported today.</p></div><div class="section"><h6>Oozie<a name="Oozie"></a></h6><p>As part of oozie workflow engine support, users can embed an oozie workflow. Refer to the oozie <a class="externalLink" href="http://incubator.apache.org/oozie/overview.html">workflow overview</a> and <a class="externalLink" href="http://incubator.apache.org/oozie/docs/3.1.3/docs/WorkflowFunctionalSpec.html">workflow specification</a> for details.</p><p>Syntax:</p><div class="source"><pre class="prettyprint">
 &lt;process name=&quot;[process name]&quot;&gt;
 ...
     &lt;workflow engine=[workflow engine] path=[workflow path]/&gt;
@@ -492,7 +614,23 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
 ...
 &lt;/process&gt;
 
-</pre></div><p>This defines the workflow engine to be oozie and the workflow xml is defined at /projects/bootcamp/workflow/workflow.xml. The libraries are at /projects/bootcamp/workflow/lib.</p></div><div class="section"><h5>Retry<a name="Retry"></a></h5><p>Retry policy defines how the workflow failures should be handled. Two retry policies are defined: backoff and exp-backoff(exponential backoff). Depending on the delay and number of attempts, the workflow is re-tried after specific intervals. Syntax:</p><div class="source"><pre class="prettyprint">
+</pre></div><p>This defines the workflow engine to be oozie and the workflow xml is defined at /projects/bootcamp/workflow/workflow.xml. The libraries are at /projects/bootcamp/workflow/lib.</p></div><div class="section"><h6>Pig<a name="Pig"></a></h6><p>Falcon also adds the Pig engine which enables users to embed a Pig script as a process.</p><p>Example:</p><div class="source"><pre class="prettyprint">
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;workflow engine=&quot;pig&quot; path=&quot;/projects/bootcamp/pig.script&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div><p>This defines the workflow engine to be pig and the pig script is defined at /projects/bootcamp/pig.script.</p><p>Feeds with Hive table storage will send one more parameter apart from the general ones:</p><div class="source"><pre class="prettyprint">$input_filter
+</pre></div></div><div class="section"><h6>Hive<a name="Hive"></a></h6><p>Falcon also adds the Hive engine as part of Hive Integration which enables users to embed a Hive script as a process. This would enable users to create materialized queries in a declarative way.</p><p>Example:</p><div class="source"><pre class="prettyprint">
+&lt;process name=&quot;sample-process&quot;&gt;
+...
+    &lt;workflow engine=&quot;hive&quot; path=&quot;/projects/bootcamp/hive-script.hql&quot;/&gt;
+...
+&lt;/process&gt;
+
+</pre></div><p>This defines the workflow engine to be hive and the hive script is defined at /projects/bootcamp/hive-script.hql.</p><p>Feeds with Hive table storage will send one more parameter apart from the general ones:</p><div class="source"><pre class="prettyprint">$input_filter
+</pre></div></div><div class="section"><h5>Retry<a name="Retry"></a></h5><p>Retry policy defines how the workflow failures should be handled. Two retry policies are defined: backoff and exp-backoff(exponential backoff). Depending on the delay and number of attempts, the workflow is re-tried after specific intervals. Syntax:</p><div class="source"><pre class="prettyprint">
 &lt;process name=&quot;[process name]&quot;&gt;
 ...
     &lt;retry policy=[retry policy] delay=[retry delay] attempts=[retry attempts]/&gt;
@@ -536,7 +674,7 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
 ...
 &lt;/process&gt;
 
-</pre></div><p>This late handling specifies that late data detection should run at feed's late cut-off which is 6 hours in this case. If there is late data, Falcon should run the workflow specified at /projects/bootcamp/workflow/lateinput1/workflow.xml</p></div>
+</pre></div><p>This late handling specifies that late data detection should run at the feed's late cut-off, which is 6 hours in this case. If there is late data, Falcon runs the workflow specified at /projects/bootcamp/workflow/lateinput1/workflow.xml.</p><p><b>Note:</b> This is currently supported only for <a href="./FileSystem.html">FileSystem</a> storage, not Table storage.</p></div>
                   </div>
           </div>
 
@@ -544,7 +682,7 @@ xmlns:xsi=&quot;http://www.w3.org/2001/X
 
     <footer>
             <div class="container">
-              <div class="row span12">Copyright &copy;                    2013
+              <div class="row span12">Copyright &copy;                    2013-2014
                         <a href="http://www.apache.org">Apache Software Foundation</a>.
             All Rights Reserved.      
                     

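The EntitySpecification page above names two retry policies, backoff and exp-backoff. As a rough sketch of the difference — the exact retry semantics are implemented by Falcon/Oozie, and `delay_for_attempt` is a hypothetical helper, not part of Falcon — constant versus exponential spacing of retries could be computed like this:

```shell
delay_for_attempt() {
  # policy: "backoff" keeps the delay constant; "exp-backoff" doubles it
  # on every attempt. base is the configured delay (here, in minutes).
  policy=$1; base=$2; attempt=$3
  d=$base; i=1
  if [ "$policy" = "exp-backoff" ]; then
    while [ "$i" -lt "$attempt" ]; do d=$((d * 2)); i=$((i + 1)); done
  fi
  echo "$d"
}

delay_for_attempt backoff 10 3      # prints 10 (10, 10, 10, ...)
delay_for_attempt exp-backoff 10 3  # prints 40 (10 -> 20 -> 40)
```

So with `<retry policy=exp-backoff delay=minutes(10) attempts=3/>`, the third attempt would wait roughly four times the base delay, whereas plain backoff retries at a fixed interval.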
Modified: incubator/falcon/site/docs/FalconArchitecture.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/FalconArchitecture.html?rev=1565098&r1=1565097&r2=1565098&view=diff
==============================================================================
--- incubator/falcon/site/docs/FalconArchitecture.html (original)
+++ incubator/falcon/site/docs/FalconArchitecture.html Thu Feb  6 07:37:58 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Oct 28, 2013
+ | Generated by Apache Maven Doxia at Feb 6, 2014
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20131028" />
+    <meta name="Date-Revision-yyyymmdd" content="20140206" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - Contents</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -99,6 +99,9 @@
                       <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.3-incubating"  title="0.3-incubating">0.3-incubating</a>
 </li>
                   
+                      <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.4-incubating"  title="0.4-incubating">0.4-incubating</a>
+</li>
+                  
                       <li>      <a href="https://cwiki.apache.org/confluence/display/FALCON/Roadmap"  title="Roadmap">Roadmap</a>
 </li>
                           </ul>
@@ -112,6 +115,9 @@
                   
                       <li>      <a href="../0.3-incubating/index.html"  title="0.3-incubating">0.3-incubating</a>
 </li>
+                  
+                      <li>      <a href="../0.4-incubating/index.html"  title="0.4-incubating">0.4-incubating</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -144,6 +150,9 @@
                   
                       <li>      <a href="../docs/restapi/ResourceList.html"  title="Rest API">Rest API</a>
 </li>
+                  
+                      <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -227,7 +236,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2013-10-28</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
             
                             </ul>
       </div>
@@ -356,7 +365,7 @@ validity start=&quot;2009-01-01T00:00Z&q
 
     <footer>
             <div class="container">
-              <div class="row span12">Copyright &copy;                    2013
+              <div class="row span12">Copyright &copy;                    2013-2014
                         <a href="http://www.apache.org">Apache Software Foundation</a>.
             All Rights Reserved.      
                     

Modified: incubator/falcon/site/docs/FalconCLI.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/FalconCLI.html?rev=1565098&r1=1565097&r2=1565098&view=diff
==============================================================================
--- incubator/falcon/site/docs/FalconCLI.html (original)
+++ incubator/falcon/site/docs/FalconCLI.html Thu Feb  6 07:37:58 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Oct 28, 2013
+ | Generated by Apache Maven Doxia at Feb 6, 2014
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20131028" />
+    <meta name="Date-Revision-yyyymmdd" content="20140206" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - FalconCLI</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -99,6 +99,9 @@
                       <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.3-incubating"  title="0.3-incubating">0.3-incubating</a>
 </li>
                   
+                      <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.4-incubating"  title="0.4-incubating">0.4-incubating</a>
+</li>
+                  
                       <li>      <a href="https://cwiki.apache.org/confluence/display/FALCON/Roadmap"  title="Roadmap">Roadmap</a>
 </li>
                           </ul>
@@ -112,6 +115,9 @@
                   
                       <li>      <a href="../0.3-incubating/index.html"  title="0.3-incubating">0.3-incubating</a>
 </li>
+                  
+                      <li>      <a href="../0.4-incubating/index.html"  title="0.4-incubating">0.4-incubating</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -144,6 +150,9 @@
                   
                       <li>      <a href="../docs/restapi/ResourceList.html"  title="Rest API">Rest API</a>
 </li>
+                  
+                      <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -227,7 +236,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2013-10-28</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
             
                             </ul>
       </div>
@@ -236,90 +245,7 @@
                         
         <div id="bodyColumn" >
                                   
-            <div class="section"><h2>FalconCLI<a name="FalconCLI"></a></h2><p>FalconCLI is a interface between user and Falcon. It is a command line utility provided by Falcon. FalconCLI supports Entity Management, Instance Management and Admin operations.There is a set of web services that are used by FalconCLI to interact with Falcon.</p></div><div class="section"><h3>Entity Management Operations<a name="Entity_Management_Operations"></a></h3></div><div class="section"><h4>Submit<a name="Submit"></a></h4><p>Entity submit action allows a new cluster/feed/process to be setup within Falcon. Submitted entity is not scheduled, meaning it would simply be in the configuration store within Falcon. Besides validating against the schema for the corresponding entity being added, the Falcon system would also perform inter-field validations within the configuration file and validations across dependent entities.</p><div class="source"><pre class="prettyprint">
-Example: 
-$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
-
-</pre></div><p>Note: The url option in the above and all subsequent commands is optional. If not mentioned it will be picked from client.properties file. If the option is not provided and also not set in client.properties, Falcon CLI will fail.</p></div><div class="section"><h4>Schedule<a name="Schedule"></a></h4><p>Feeds or Processes that are already submitted and present in the config store can be scheduled. Upon schedule, Falcon system wraps the required repeatable action as a bundle of oozie coordinators and executes them on the Oozie scheduler. (It is possible to extend Falcon to use an alternate workflow engine other than Oozie). Falcon overrides the workflow instance's external id in Oozie to reflect the process/feed and the nominal time. This external Id can then be used for instance management functions.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [process|feed] -name &lt;&lt;name&gt;&gt; -schedule
-
-Example:
-$FALCON_HOME/bin/falcon entity  -type process -name sampleProcess -schedule
-
-</pre></div></div><div class="section"><h4>Suspend<a name="Suspend"></a></h4><p>This action is applicable only on scheduled entity. This triggers suspend on the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended process/feed.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -suspend
-
-</pre></div></div><div class="section"><h4>Resume<a name="Resume"></a></h4><p>Puts a suspended process/feed back to active, which in turn resumes applicable oozie bundle.</p><div class="source"><pre class="prettyprint">
-Usage:
- $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -resume
-
-</pre></div></div><div class="section"><h4>Delete<a name="Delete"></a></h4><p>Delete operation on the entity removes any scheduled activity on the workflow engine, besides removing the entity from the falcon configuration store. Delete operation on an entity would only succeed if there are no dependent entities on the deleted entity.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -delete
-
-</pre></div></div><div class="section"><h4>List<a name="List"></a></h4><p>List all the entities within the falcon config store for the entity type being requested. This will include both scheduled and submitted entity configurations.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -list
-
-</pre></div></div><div class="section"><h4>Update<a name="Update"></a></h4><p>Update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently not allowed. Feed update can cause cascading update to all the processes already scheduled. The following set of actions are performed in Oozie to realize an update.</p><p></p><ul><li>Suspend the previously scheduled Oozie coordinator. This is prevent any new action from being triggered.</li><li>Update the coordinator to set the end time to &quot;now&quot;</li><li>Resume the suspended coordiantors</li><li>Schedule as per the new process/feed definition with the start time as &quot;now&quot;</li></ul><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -update
-
-</pre></div></div><div class="section"><h4>Status<a name="Status"></a></h4><p>Status returns the current status of the entity.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -status
-
-</pre></div></div><div class="section"><h4>Dependency<a name="Dependency"></a></h4><p>Returns the dependencies of the requested entity. Dependency list include both forward and backward dependencies (depends on &amp; is dependent on). For ex, a feed would show process that are dependent on the feed and the clusters that it depends on.'</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -dependency
-
-</pre></div></div><div class="section"><h4>Definition<a name="Definition"></a></h4><p>Gets the current entity definition as stored in the configuration store. Please note that user documentations in the entity will not be retained.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -definition
-
-</pre></div></div><div class="section"><h3>Instance Management Options<a name="Instance_Management_Options"></a></h3><p>Instance Manager gives user the option to control individual instances of the process based on their instance start time (start time of that instance). Start time needs to be given in standard TZ format. Example:   01 Jan 2012 01:00  =&gt; 2012-01-01T01:00Z</p><p>All the instance management operations (except running) allow single instance or list of instance within a Date range to be acted on. Make sure the dates are valid. i.e are within the start and  end time of process itself.</p><p>For every query in instance management the process name is a compulsory parameter.</p><p>Parameters -start and -end are used to mention the date range within which you want the instance to be operated upon.</p><p>-start:   using only  &quot;-start&quot; without  &quot;-end&quot;  will conduct the desired operation only on single instance given by date along with start.</p><p>-end: 
  &quot;-end&quot;  can only be used along with &quot;-start&quot; . It corresponds to the end date till which instance need to operated upon.</p><p></p><ul><li>1. <b>status</b>: -status option via CLI can be used to get the status of a single or multiple instances.  If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state.Along with the status of the instance log location is also returned.</li></ul><p></p><ul><li>2.	<b>running</b>: -running returns all the running instance of the process. It does not take any start or end dates but simply return all the instances in state RUNNING at that given time.</li></ul><p></p><ul><li>3.	<b>rerun</b>: -rerun is the option that you will use most often from instance management. As the name suggest this option is used to rerun a particular instance or instances of the process. The rerun option reruns all parent workflow for the instance, which in turn rerun all the sub-workflows for it. Thi
 s option is valid for any instance in terminal state, i.e. KILLED, SUCCEEDED, FAILED. User can also set properties in the request, which will give options what types of actions should be rerun like, only failed, run all etc. These properties are dependent on the workflow engine being used along with falcon.</li></ul><p></p><ul><li>4. <b>suspend</b>: -suspend is used to suspend a instance or instances  for the given process. This option pauses the parent workflow at the state, which it was in at the time of execution of this command. This command is similar to SUSPEND process command in functionality only difference being, SUSPEND process suspends all the instance whereas suspend instance suspend only that instance or instances in the range.</li></ul><p></p><ul><li>5.	<b>resume</b>: -resume option is used to resume any instance that  is in suspended state.  (Note: due to a bug in oozie &#xef;&#xbf;&#xbd;resume option in some cases may not actually resume the suspended instance/ insta
 nces)</li><li>6. <b>kill</b>: -kill option can be used to kill an instance or multiple instances</li></ul><p>In all the cases where your request is syntactically correct but logically not, the instance / instances are returned with the same status as earlier. Example:  trying to resume a KILLED  / SUCCEEDED instance will return the instance with KILLED / SUCCEEDED, without actually performing any operation. This is so because only an instance in SUSPENDED state can be resumed. Same thing is valid for rerun a SUSPENDED or RUNNING options etc.</p></div><div class="section"><h4>Status<a name="Status"></a></h4><p>Status option via CLI can be used to get the status of a single or multiple instances.  If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status of the instance time is also returned. Log location gives the oozie workflow url If the instance is in WAITING state, missing dependencies are listed</p><
 p>Example : Suppose a process has 3 instance, one has succeeded,one is in running state and other one is waiting, the expected output is:</p><p>{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getStatus is successful&quot;,&quot;instances&quot;:[{&quot;instance&quot;:&quot;2012-05-07T05:02Z&quot;,&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},{&quot;instance&quot;:&quot;2012-05-07T05:07Z&quot;,&quot;status&quot;:&quot;RUNNING&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;}, {&quot;instance&quot;:&quot;2010-01-02T11:05Z&quot;,&quot;status&quot;:&quot;WAITING&quot;}]</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -status -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;
-
-</pre></div></div><div class="section"><h4>Kill<a name="Kill"></a></h4><p>Kill sub-command is used to kill all the instances of the specified process whose nominal time is between the given start time and end time.</p><p>Note:  1. For all the instance management sub-commands, if end time is not specified, Falcon will perform the actions on all the instances whose instance time falls after the start time.</p><p>2. The start time and end time needs to be specified in TZ format.  Example:   01 Jan 2012 01:00  =&gt; 2012-01-01T01:00Z</p><p>3. Process name is compulsory parameter for each instance management command.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -kill -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;
-
-</pre></div></div><div class="section"><h4>Suspend<a name="Suspend"></a></h4><p>Suspend is used to suspend a instance or instances  for the given process. This option pauses the parent workflow at the state, which it was in at the time of execution of this command.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -suspend -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;
-
-</pre></div></div><div class="section"><h4>Continue<a name="Continue"></a></h4><p>Continue option is used to continue the failed workflow instance. This option is valid only for process instances in terminal state, i.e. SUCCEDDED, KILLED or FAILED.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -re-run -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;
-
-</pre></div></div><div class="section"><h4>Rerun<a name="Rerun"></a></h4><p>Rerun option is used to rerun instances of a given process. This option is valid only for process instances in terminal state, i.e. SUCCEDDED, KILLED or FAILED. Optionally, you can specify the properties to override.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -re-run -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-file &lt;&lt;properties file&gt;&gt;]
-
-</pre></div></div><div class="section"><h4>Resume<a name="Resume"></a></h4><p>Resume option is used to resume any instance that  is in suspended state.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -resume -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;
-
-</pre></div></div><div class="section"><h4>Running<a name="Running"></a></h4><p>Running option provides all the running instances of the mentioned process.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -running
-
-</pre></div></div><div class="section"><h4>Logs<a name="Logs"></a></h4><p>Get logs for instance actions</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -logs -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;] [-runid &lt;&lt;runid&gt;&gt;]
-
-</pre></div></div><div class="section"><h3>Admin Options<a name="Admin_Options"></a></h3></div><div class="section"><h4>Help<a name="Help"></a></h4><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon admin -version
-
-</pre></div></div><div class="section"><h4>Version<a name="Version"></a></h4><p>Version returns the current verion of Falcon installed.</p><div class="source"><pre class="prettyprint">
-Usage:
-$FALCON_HOME/bin/falcon admin -help
-
-</pre></div></div>
+            <div class="section"><h2>FalconCLI<a name="FalconCLI"></a></h2><p>FalconCLI is an interface between the user and Falcon. It is a command line utility provided by Falcon. FalconCLI supports Entity Management, Instance Management and Admin operations. There is a set of web services that FalconCLI uses to interact with Falcon.</p></div><div class="section"><h3>Entity Management Operations<a name="Entity_Management_Operations"></a></h3></div><div class="section"><h4>Submit<a name="Submit"></a></h4><p>The submit option is used to set up the entity definition.</p><p>Example:  $FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml</p><p>Note: The url option in the above and all subsequent commands is optional. If not mentioned, it will be picked from the client.properties file. If the option is not provided and also not set in client.properties, Falcon CLI will fail.</p></div><div class="section"><h4>Schedule<a name="Schedule"></a></h4><p>Once submitted, an entity can be scheduled using the schedule option. Only process and feed entities can be scheduled.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [process|feed] -name &lt;&lt;name&gt;&gt; -schedule</p><p>Example: $FALCON_HOME/bin/falcon entity  -type process -name sampleProcess -schedule</p></div><div class="section"><h4>Suspend<a name="Suspend"></a></h4><p>Suspend on an entity results in suspension of the oozie bundle that was scheduled earlier through the schedule function. No further instances are executed on a suspended entity. Only schedulable entities (process/feed) can be suspended.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -suspend</p></div><div class="section"><h4>Resume<a name="Resume"></a></h4><p>Puts a suspended process/feed back to active, which in turn resumes the applicable oozie bundle.</p><p>Usage:  $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -resume</p></div><div class="section"><h4>Delete<a name="Delete"></a></h4><p>Delete removes the submitted entity definition for the specified entity and puts it into the archive.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -delete</p></div><div class="section"><h4>List<a name="List"></a></h4><p>Entities of a particular type can be listed with the list sub-command.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -list</p></div><div class="section"><h4>Update<a name="Update"></a></h4><p>Update operation allows an already submitted/scheduled entity to be updated. Cluster update is currently not allowed.</p><p>Usage: $FALCON_HOME/bin/falcon entity  -type [feed|process] -name &lt;&lt;name&gt;&gt; -update [-effective &lt;&lt;effective time&gt;&gt;]</p></div><div class="section"><h4>Status<a name="Status"></a></h4><p>Status returns the current status of the entity.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -status</p></div><div class="section"><h4>Dependency<a name="Dependency"></a></h4><p>The dependency option lists all the entities on which the specified entity depends. For example, for a feed, dependency returns the cluster name, and for a process it returns all the input feeds, output feeds and cluster names.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -dependency</p></div><div class="section"><h4>Definition<a name="Definition"></a></h4><p>The definition option returns the entity definition submitted earlier during the submit step.</p><p>Usage: $FALCON_HOME/bin/falcon entity -type [cluster|feed|process] -name &lt;&lt;name&gt;&gt; -definition</p></div><div class="section"><h3>Instance Management Options<a name="Instance_Management_Options"></a></h3></div><div class="section"><h4>Kill<a name="Kill"></a></h4><p>The kill sub-command is used to kill all the instances of the specified process whose nominal time is between the given start time and end time.</p><p>Note:  1. For all the instance management sub-commands, if the end time is not specified, Falcon will perform the actions on all the instances whose instance time falls after the start time.</p><p>2. The start time and end time need to be specified in TZ format.  Example:   01 Jan 2012 01:00  =&gt; 2012-01-01T01:00Z</p><p>3. The process name is a compulsory parameter for each instance management command.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -kill -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Suspend<a name="Suspend"></a></h4><p>Suspend is used to suspend an instance or instances of the given process. This option pauses the parent workflow in the state it was in at the time this command was executed.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -suspend -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Continue<a name="Continue"></a></h4><p>The continue option is used to continue a failed workflow instance. This option is valid only for process instances in a terminal state, i.e. SUCCEEDED, KILLED or FAILED.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -re-run -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Rerun<a name="Rerun"></a></h4><p>The rerun option is used to rerun instances of a given process. This option is valid only for process instances in a terminal state, i.e. SUCCEEDED, KILLED or FAILED. Optionally, you can specify the properties to override.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -re-run -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-file &lt;&lt;properties file&gt;&gt;]</p></div><div class="section"><h4>Resume<a name="Resume"></a></h4><p>The resume option is used to resume any instance that is in suspended state.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -resume -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Status<a name="Status"></a></h4><p>The status option via CLI can be used to get the status of a single or multiple instances. If the instance is not yet materialized but is within the process validity range, WAITING is returned as the state. Along with the status, the instance time is also returned. The log location gives the oozie workflow url. If the instance is in WAITING state, missing dependencies are listed.</p><p>Example: Suppose a process has 3 instances; one has succeeded, one is in running state and the other one is waiting. The expected output is:</p><p>{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getStatus is successful&quot;,&quot;instances&quot;:[{&quot;instance&quot;:&quot;2012-05-07T05:02Z&quot;,&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;},{&quot;instance&quot;:&quot;2012-05-07T05:07Z&quot;,&quot;status&quot;:&quot;RUNNING&quot;,&quot;logFile&quot;:&quot;http://oozie-dashboard-url&quot;}, {&quot;instance&quot;:&quot;2010-01-02T11:05Z&quot;,&quot;status&quot;:&quot;WAITING&quot;}]}</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -status -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Summary<a name="Summary"></a></h4><p>The summary option via CLI can be used to get the consolidated status of the instances between the specified time period. Each status, along with the corresponding instance count, is listed for each of the applicable colos. The unscheduled instances between the specified time period are included as UNSCHEDULED in the output to provide more clarity.</p><p>Example: Suppose a process has 3 instances; one has succeeded, one is in running state and the other one is waiting. The expected output is:</p><p>{&quot;status&quot;:&quot;SUCCEEDED&quot;,&quot;message&quot;:&quot;getSummary is successful&quot;, &quot;cluster&quot;: &lt;&lt;name&gt;&gt; [{&quot;SUCCEEDED&quot;:&quot;1&quot;}, {&quot;WAITING&quot;:&quot;1&quot;}, {&quot;RUNNING&quot;:&quot;1&quot;}]}</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -summary -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; -end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;</p></div><div class="section"><h4>Running<a name="Running"></a></h4><p>The running option lists all the running instances of the specified process.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -running</p></div><div class="section"><h4>Logs<a name="Logs"></a></h4><p>Gets logs for instance actions.</p><p>Usage: $FALCON_HOME/bin/falcon instance -type &lt;&lt;feed/process&gt;&gt; -name &lt;&lt;name&gt;&gt; -logs -start &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; [-end &quot;yyyy-MM-dd'T'HH:mm'Z'&quot;] [-runid &lt;&lt;runid&gt;&gt;]</p></div><div class="section"><h3>Admin Options<a name="Admin_Options"></a></h3></div><div class="section"><h4>Help<a name="Help"></a></h4><p>Usage: $FALCON_HOME/bin/falcon admin -help</p></div><div class="section"><h4>Version<a name="Version"></a></h4><p>Version returns the current version of Falcon installed.</p><p>Usage: $FALCON_HOME/bin/falcon admin -version</p></div>
                   </div>
           </div>
 
@@ -327,7 +253,7 @@ $FALCON_HOME/bin/falcon admin -help
 
     <footer>
             <div class="container">
-              <div class="row span12">Copyright &copy;                    2013
+              <div class="row span12">Copyright &copy;                    2013-2014
                         <a href="http://www.apache.org">Apache Software Foundation</a>.
             All Rights Reserved.      
                     

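The FalconCLI instance commands above all take -start/-end in the UTC format &quot;yyyy-MM-dd'T'HH:mm'Z'&quot; (e.g. 01 Jan 2012 01:00 =&gt; 2012-01-01T01:00Z). A small sketch of producing such a value from a human-readable time — this assumes GNU date; the falcon invocation shown in the comment reuses the sampleProcess name from the docs purely as an illustration:

```shell
# Convert a human-readable UTC time into the CLI's instance-time format.
start=$(date -u -d "2012-01-01 01:00 UTC" +%Y-%m-%dT%H:%MZ)
echo "$start"   # prints 2012-01-01T01:00Z

# The value would then be passed to an instance sub-command, e.g.:
# $FALCON_HOME/bin/falcon instance -type process -name sampleProcess \
#     -status -start "$start" -end "2012-01-02T01:00Z"
```

Note the trailing literal Z: the CLI expects times in UTC, so local times must be converted (here via `date -u`) before being passed to -start/-end.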
Modified: incubator/falcon/site/docs/GettingStarted.html
URL: http://svn.apache.org/viewvc/incubator/falcon/site/docs/GettingStarted.html?rev=1565098&r1=1565097&r2=1565098&view=diff
==============================================================================
--- incubator/falcon/site/docs/GettingStarted.html (original)
+++ incubator/falcon/site/docs/GettingStarted.html Thu Feb  6 07:37:58 2014
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at Oct 28, 2013
+ | Generated by Apache Maven Doxia at Feb 6, 2014
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20131028" />
+    <meta name="Date-Revision-yyyymmdd" content="20140206" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Falcon - Apache Falcon - Data management and processing platform</title>
     <link rel="stylesheet" href="../css/apache-maven-fluido-1.3.0.min.css" />
@@ -99,6 +99,9 @@
                       <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.3-incubating"  title="0.3-incubating">0.3-incubating</a>
 </li>
                   
+                      <li>      <a href="http://www.apache.org/dist/incubator/falcon/0.4-incubating"  title="0.4-incubating">0.4-incubating</a>
+</li>
+                  
                       <li>      <a href="https://cwiki.apache.org/confluence/display/FALCON/Roadmap"  title="Roadmap">Roadmap</a>
 </li>
                           </ul>
@@ -112,6 +115,9 @@
                   
                       <li>      <a href="../0.3-incubating/index.html"  title="0.3-incubating">0.3-incubating</a>
 </li>
+                  
+                      <li>      <a href="../0.4-incubating/index.html"  title="0.4-incubating">0.4-incubating</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -144,6 +150,9 @@
                   
                       <li>      <a href="../docs/restapi/ResourceList.html"  title="Rest API">Rest API</a>
 </li>
+                  
+                      <li>      <a href="../docs/HiveIntegration.html"  title="Hive Integration">Hive Integration</a>
+</li>
                           </ul>
       </li>
                 <li class="dropdown">
@@ -227,7 +236,7 @@
         
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2013-10-28</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 2014-02-06</li> 
             
                             </ul>
       </div>
@@ -244,7 +253,7 @@
 
     <footer>
             <div class="container">
-              <div class="row span12">Copyright &copy;                    2013
+              <div class="row span12">Copyright &copy;                    2013-2014
                         <a href="http://www.apache.org">Apache Software Foundation</a>.
             All Rights Reserved.      
                     


