Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.4.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.4.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.4.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hadoop-nar/1.4.0/org.apache.nifi.processors.hadoop.inotify.GetHDFSEvents/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +GetHDFSEvents

GetHDFSEvents

Description:

This processor polls the notification events provided by the HdfsAdmin API. Since this uses the HdfsAdmin APIs, it is required to run as an HDFS super user. Currently there are six types of events (append, close, create, metadata, rename, and unlink). Please see the org.apache.hadoop.hdfs.inotify.Event documentation for full explanations of each event. This processor will poll for new events based on a defined duration. For each event received, a new flow file will be created with the expected attributes and the event itself serialized to JSON and written to the flow file's content. For example, if event.type is APPEND then the content of the flow file will contain a JSON document with the information about the append event. If successful, the flow files are sent to the 'success' relationship. Be careful where the generated flow files are stored: if the flow files are stored in one of the processor's watch directories, there will be a never-ending flow of events. It is also important to be aware that this processor must consume all events; the filtering must happen within the processor, because the HDFS admin's event notifications API does not support filtering.
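The poll-filter-emit cycle described above can be sketched as follows. This is a hypothetical Python illustration; the real processor is Java code built on the HdfsAdmin inotify API, and the event dictionaries and function names here are made up for the sketch.

```python
import json

def filter_events(events, allowed_types, ignore_hidden=False):
    """Keep only events whose type is configured; filtering must happen
    here because the HDFS event notifications API returns all events."""
    allowed = {t.strip().lower() for t in allowed_types.split(",")}
    kept = []
    for event in events:
        if event["type"].lower() not in allowed:
            continue
        # Ignore Hidden Files: skip paths whose final component starts with '.'
        if ignore_hidden and event["path"].rsplit("/", 1)[-1].startswith("."):
            continue
        kept.append(event)
    return kept

def to_flowfile(event):
    """One flow file per event: documented attributes plus the event as JSON."""
    attributes = {
        "mime.type": "application/json",
        "hdfs.inotify.event.type": event["type"],
        "hdfs.inotify.event.path": event["path"],
    }
    return attributes, json.dumps(event)
```
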

Tags:

hadoop, events, inotify, notifications, filesystem

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Hadoop Configuration Resources | A file or comma separated list of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default configuration.
Supports Expression Language: true
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true
Kerberos Relogin Period | 4 hours | Period of time which should pass before attempting a kerberos relogin
Supports Expression Language: true
Additional Classpath Resources | A comma-separated list of paths to files and/or directories that will be added to the classpath. When specifying a directory, all files within the directory will be added to the classpath, but further sub-directories will not be included.
Poll Duration | 1 second | The time before the polling method returns with the next batch of events if they exist. It may exceed this amount of time by up to the time required for an RPC to the NameNode.
HDFS Path to Watch | The HDFS path to get event notifications for. This property accepts both expression language and regular expressions. This will be evaluated during the OnScheduled phase.
Supports Expression Language: true
Ignore Hidden Files | false
  • true
  • false
If true and the final component of the path associated with a given event starts with a '.', then that event will not be processed.
Event Types to Filter On | append, close, create, metadata, rename, unlink | A comma-separated list of event types to process. Valid event types are: append, close, create, metadata, rename, and unlink. Case does not matter.
IOException Retries During Event Polling | 3 | According to the HDFS admin API for event polling, it is good to retry at least a few times. This number defines how many times the poll will be retried if it throws an IOException.

Relationships:

Name | Description
success | A flow file with updated information about a specific event will be sent to this relationship.

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
mime.type | This is always application/json.
hdfs.inotify.event.type | This will specify the specific HDFS notification event type. Currently there are six types of events (append, close, create, metadata, rename, and unlink).
hdfs.inotify.event.path | The specific path that the event is tied to.

State management:

Scope | Description
CLUSTER | The last used transaction id is stored. This is used so that event polling can resume from where it left off after a restart.

Restricted:

This component is not restricted.

Input requirement:

This component does not allow an incoming relationship.

See Also:

GetHDFS, FetchHDFS, PutHDFS, ListHDFS

\ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.FetchHBaseRow/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.FetchHBaseRow/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.FetchHBaseRow/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.FetchHBaseRow/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +FetchHBaseRow

FetchHBaseRow

Description:

Fetches a row from an HBase table. The Destination property controls whether the cells are added as flow file attributes, or the row is written to the flow file content as JSON. This processor may be used to fetch a fixed row on an interval by specifying the table and row id directly in the processor, or it may be used to dynamically fetch rows by referencing the table and row id from incoming flow files.

Tags:

hbase, scan, fetch, get, enrich

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
HBase Client Service | Controller Service API: HBaseClientService (Implementation: HBase_1_1_2_ClientService) | Specifies the Controller Service to use for accessing HBase.
Table Name | The name of the HBase Table to fetch from.
Supports Expression Language: true
Row Identifier | The identifier of the row to fetch.
Supports Expression Language: true
Columns | An optional comma-separated list of "<colFamily>:<colQualifier>" pairs to fetch. To return all columns for a given family, leave off the qualifier such as "<colFamily1>,<colFamily2>".
Supports Expression Language: true
Destination | flowfile-attributes
  • flowfile-attributes Adds the JSON document representing the row that was fetched as an attribute named hbase.row. The format of the JSON document is determined by the JSON Format property. NOTE: Fetching many large rows into attributes may have a negative impact on performance.
  • flowfile-content Overwrites the FlowFile content with a JSON document representing the row that was fetched. The format of the JSON document is determined by the JSON Format property.
Indicates whether the row fetched from HBase is written to FlowFile content or FlowFile Attributes.
JSON Format | full-row
  • full-row Creates a JSON document with the format: {"row":<row-id>, "cells":[{"fam":<col-fam>, "qual":<col-qual>, "val":<value>, "ts":<timestamp>}]}.
  • col-qual-and-val Creates a JSON document with the format: {"<col-qual>":"<value>", "<col-qual>":"<value>"}.
Specifies how to represent the HBase row as a JSON document.
JSON Value Encoding | none
  • none Creates a String using the bytes of given data and the given Character Set.
  • base64 Creates a Base64 encoded String of the given data.
Specifies how to represent row ids, column families, column qualifiers, and values when stored in FlowFile attributes, or written to JSON.
Encode Character Set | UTF-8 | The character set used to encode the JSON representation of the row.
Decode Character Set | UTF-8 | The character set used to decode data from HBase.
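The two JSON Format options can be illustrated with a small sketch; the row id, cell family/qualifier, value, and timestamp below are made up for illustration:

```python
import json

# One fetched cell in the shape the full-row format documents.
cells = [{"fam": "cf", "qual": "name", "val": "alice", "ts": 1500000000000}]

# full-row: {"row":<row-id>, "cells":[{"fam":..., "qual":..., "val":..., "ts":...}]}
full_row = json.dumps({"row": "row-1", "cells": cells})

# col-qual-and-val: {"<col-qual>":"<value>", ...}
col_qual_and_val = json.dumps({c["qual"]: c["val"] for c in cells})
```
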

Relationships:

Name | Description
success | All successful fetches are routed to this relationship.
failure | All failed fetches are routed to this relationship.
not found | All fetches where the row id is not found are routed to this relationship.

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
hbase.table | The name of the HBase table that the row was fetched from
hbase.row | A JSON document representing the row. This attribute is only written when a Destination of flowfile-attributes is selected.
mime.type | Set to application/json when using a Destination of flowfile-content; not set or modified otherwise

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.GetHBase/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.GetHBase/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.GetHBase/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.GetHBase/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +GetHBase

GetHBase

Description:

This Processor polls HBase for any records in the specified table. The processor keeps track of the timestamp of the cells that it receives, so that as new records are pushed to HBase, they will automatically be pulled. Each record is output in JSON format, as {"row": "<row key>", "cells": { "<column 1 family>:<column 1 qualifier>": "<cell 1 value>", "<column 2 family>:<column 2 qualifier>": "<cell 2 value>", ... }}. For each record received, a Provenance RECEIVE event is emitted with the format hbase://<table name>/<row key>, where <row key> is the UTF-8 encoded value of the row's key.
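The documented output record and provenance URI can be sketched as follows; the table name, row key, and cell values here are hypothetical, chosen only to show the shape:

```python
import json

def to_record(row_key, cells):
    """Build the documented JSON shape; cells maps (family, qualifier) -> value."""
    return {
        "row": row_key,
        "cells": {f"{fam}:{qual}": val for (fam, qual), val in cells.items()},
    }

record = to_record("user-42", {("info", "name"): "alice"})
# Matching Provenance RECEIVE URI: hbase://<table name>/<row key>
provenance_uri = f"hbase://users/{record['row']}"
```
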

Tags:

hbase, get, ingest

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.

Name | Default Value | Allowable Values | Description
HBase Client Service | Controller Service API: HBaseClientService (Implementation: HBase_1_1_2_ClientService) | Specifies the Controller Service to use for accessing HBase.
Distributed Cache Service | Controller Service API: DistributedMapCacheClient (Implementations: HBase_1_1_2_ClientMapCacheService, RedisDistributedMapCacheClientService, DistributedMapCacheClientService) | Specifies the Controller Service that should be used to maintain state about what has been pulled from HBase so that if a new node begins pulling data, it won't duplicate all of the work that has been done.
Table Name | The name of the HBase Table to fetch data from
Columns | A comma-separated list of "<colFamily>:<colQualifier>" pairs to return when scanning. To return all columns for a given family, leave off the qualifier such as "<colFamily1>,<colFamily2>".
Filter Expression | An HBase filter expression that will be applied to the scan. This property cannot be used when also using the Columns property.
Initial Time Range | None
  • None
  • Current Time
The time range to use on the first scan of a table. None will pull the entire table on the first scan; Current Time will pull entries from that point forward.
Character Set | UTF-8 | Specifies which character set is used to encode the data in HBase

Relationships:

Name | Description
success | All FlowFiles are routed to this relationship

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
hbase.table | The name of the HBase table that the data was pulled from
mime.type | Set to application/json to indicate that output is JSON

State management:

Scope | Description
CLUSTER | After fetching from HBase, stores a timestamp of the last-modified cell that was found. In addition, it stores the ID of the row(s) and the value of each cell that has that timestamp as its modification date. This is stored across the cluster and allows the next fetch to avoid duplicating data, even if this Processor is run on Primary Node only and the Primary Node changes.

Restricted:

This component is not restricted.

Input requirement:

This component does not allow an incoming relationship. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseCell/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseCell/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseCell/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseCell/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +PutHBaseCell

PutHBaseCell

Description:

Adds the Contents of a FlowFile to HBase as the value of a single cell

Tags:

hadoop, hbase

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
HBase Client Service | Controller Service API: HBaseClientService (Implementation: HBase_1_1_2_ClientService) | Specifies the Controller Service to use for accessing HBase.
Table Name | The name of the HBase Table to put data into
Supports Expression Language: true
Row Identifier | Specifies the Row ID to use when inserting data into HBase
Supports Expression Language: true
Row Identifier Encoding Strategy | String
  • String Stores the value of the row id as a UTF-8 String.
  • Binary Stores the value of the row id as a binary byte array. It expects that the row id is a binary-formatted string.
Specifies the data type of the Row ID used when inserting data into HBase. The default behavior is to convert the row id to a UTF-8 byte array. Choosing Binary will convert a binary-formatted string to the correct byte[] representation. The Binary option should be used if you are using binary row keys in HBase.
Column Family | The Column Family to use when inserting data into HBase
Supports Expression Language: true
Column Qualifier | The Column Qualifier to use when inserting data into HBase
Supports Expression Language: true
Timestamp | The timestamp for the cells being created in HBase. This field can be left blank and HBase will use the current time.
Supports Expression Language: true
Batch Size | 25 | The maximum number of FlowFiles to process in a single execution. The FlowFiles will be grouped by table, and a single Put per table will be performed.
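The Row Identifier Encoding Strategy above can be sketched in Python. The String branch is the documented UTF-8 conversion; the Binary branch assumes an HBase `Bytes.toBytesBinary`-style convention where `\xNN` escapes in the string denote raw byte values, which is an assumption made for illustration:

```python
def encode_row_id(row_id: str, strategy: str) -> bytes:
    if strategy == "String":
        return row_id.encode("utf-8")  # default: UTF-8 bytes of the string
    # "Binary": parse a binary-formatted string. Assumes \xNN hex escapes
    # denote raw byte values (Bytes.toBytesBinary-style; an assumption).
    out = bytearray()
    i = 0
    while i < len(row_id):
        if row_id[i : i + 2] == "\\x":
            out.append(int(row_id[i + 2 : i + 4], 16))
            i += 4
        else:
            out.append(ord(row_id[i]))
            i += 1
    return bytes(out)
```
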

Relationships:

Name | Description
success | A FlowFile is routed to this relationship after it has been successfully stored in HBase
failure | A FlowFile is routed to this relationship if it cannot be sent to HBase

Reads Attributes:

None specified.

Writes Attributes:

None specified.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseJSON/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseJSON/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseJSON/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseJSON/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +PutHBaseJSON

PutHBaseJSON

Description:

Adds rows to HBase based on the contents of incoming JSON documents. Each FlowFile must contain a single UTF-8 encoded JSON document, and any FlowFiles where the root element is not a single document will be routed to failure. Each JSON field name and value will become a column qualifier and value of the HBase row. Any fields with a null value will be skipped, and fields with a complex value will be handled according to the Complex Field Strategy. The row id can be specified either directly on the processor through the Row Identifier property, or can be extracted from the JSON document by specifying the Row Identifier Field Name property. This processor will hold the contents of all FlowFiles for the given batch in memory at one time.
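The JSON-to-row mapping described above can be sketched as follows. This is a hypothetical illustration (the real processor is Java), and only the "Text" Complex Field Strategy is shown:

```python
import json

def json_to_row(document: str, row_id_field: str):
    """Map one JSON document to a row id and a dict of qualifier -> value."""
    doc = json.loads(document)
    if not isinstance(doc, dict):
        raise ValueError("root element must be a single JSON document")  # -> failure
    row_id = str(doc[row_id_field])  # Row Identifier Field Name
    cells = {}
    for name, value in doc.items():
        if name == row_id_field:
            continue
        if value is None:
            continue  # null fields are skipped
        if isinstance(value, (dict, list)):
            value = json.dumps(value)  # "Text": string representation of complex field
        cells[name] = str(value)
    return row_id, cells
```
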

Tags:

hadoop, hbase, put, json

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
HBase Client Service | Controller Service API: HBaseClientService (Implementation: HBase_1_1_2_ClientService) | Specifies the Controller Service to use for accessing HBase.
Table Name | The name of the HBase Table to put data into
Supports Expression Language: true
Row Identifier | Specifies the Row ID to use when inserting data into HBase
Supports Expression Language: true
Row Identifier Field Name | Specifies the name of a JSON element whose value should be used as the row id for the given JSON document.
Supports Expression Language: true
Row Identifier Encoding Strategy | String
  • String Stores the value of the row id as a UTF-8 String.
  • Binary Stores the value of the row id as a binary byte array. It expects that the row id is a binary-formatted string.
Specifies the data type of the Row ID used when inserting data into HBase. The default behavior is to convert the row id to a UTF-8 byte array. Choosing Binary will convert a binary-formatted string to the correct byte[] representation. The Binary option should be used if you are using binary row keys in HBase.
Column Family | The Column Family to use when inserting data into HBase
Supports Expression Language: true
Timestamp | The timestamp for the cells being created in HBase. This field can be left blank and HBase will use the current time.
Supports Expression Language: true
Batch Size | 25 | The maximum number of FlowFiles to process in a single execution. The FlowFiles will be grouped by table, and a single Put per table will be performed.
Complex Field Strategy | Text
  • Fail Route entire FlowFile to failure if any elements contain complex values.
  • Warn Provide a warning and do not include field in row sent to HBase.
  • Ignore Silently ignore and do not include in row sent to HBase.
  • Text Use the string representation of the complex field as the value of the given column.
Indicates how to handle complex fields, i.e. fields that do not have a single text value.
Field Encoding Strategy | String | Indicates how to store the value of each field in HBase. The default behavior is to convert each value from the JSON to a String, and store the UTF-8 bytes. Choosing Bytes will interpret the type of each field from the JSON, and convert the value to the byte representation of that type, meaning an integer will be stored as the byte representation of that integer.
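The Field Encoding Strategy can be sketched as follows. The Bytes branch assumes HBase's big-endian layouts (8-byte long for integers, 8-byte IEEE double for floats); the exact widths the processor derives per field are an assumption made for this illustration:

```python
import struct

def encode_field(value, strategy: str) -> bytes:
    if strategy == "String":
        return str(value).encode("utf-8")  # default: UTF-8 bytes of the string form
    # "Bytes": interpret the type and use its byte representation.
    if isinstance(value, bool):           # bool checked before int (bool is an int subclass)
        return b"\xff" if value else b"\x00"
    if isinstance(value, int):
        return struct.pack(">q", value)   # big-endian 8-byte long
    if isinstance(value, float):
        return struct.pack(">d", value)   # big-endian IEEE double
    return str(value).encode("utf-8")     # strings fall back to UTF-8
```
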

Relationships:

Name | Description
success | A FlowFile is routed to this relationship after it has been successfully stored in HBase
failure | A FlowFile is routed to this relationship if it cannot be sent to HBase

Reads Attributes:

None specified.

Writes Attributes:

None specified.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseRecord/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseRecord/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseRecord/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase-nar/1.4.0/org.apache.nifi.hbase.PutHBaseRecord/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +PutHBaseRecord

PutHBaseRecord

Description:

Adds rows to HBase based on the contents of a flowfile using a configured record reader.

Tags:

hadoop, hbase, put, record

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Record Reader | Controller Service API: RecordReaderFactory (Implementations: CSVReader, GrokReader, AvroReader, JsonTreeReader, JsonPathReader, ScriptedReader) | Specifies the Controller Service to use for parsing incoming data and determining the data's schema
HBase Client Service | Controller Service API: HBaseClientService (Implementation: HBase_1_1_2_ClientService) | Specifies the Controller Service to use for accessing HBase.
Table Name | The name of the HBase Table to put data into
Supports Expression Language: true
Row Identifier Field Name | Specifies the name of a record field whose value should be used as the row id for the given record.
Supports Expression Language: true
Row Identifier Encoding Strategy | String
  • String Stores the value of the row id as a UTF-8 String.
  • Binary Stores the value of the row id as a binary byte array. It expects that the row id is a binary-formatted string.
Specifies the data type of the Row ID used when inserting data into HBase. The default behavior is to convert the row id to a UTF-8 byte array. Choosing Binary will convert a binary-formatted string to the correct byte[] representation. The Binary option should be used if you are using binary row keys in HBase.
Column Family | The Column Family to use when inserting data into HBase
Supports Expression Language: true
Timestamp Field Name | Specifies the name of a record field whose value should be used as the timestamp for the cells in HBase. The value of this field must be a number, string, or date that can be converted to a long. If this field is left blank, HBase will use the current time.
Supports Expression Language: true
Batch Size | 1000 | The maximum number of records to be sent to HBase at any one time from the record set.
Complex Field Strategy | Text
  • Fail Route entire FlowFile to failure if any elements contain complex values.
  • Warn Provide a warning and do not include field in row sent to HBase.
  • Ignore Silently ignore and do not include in row sent to HBase.
  • Text Use the string representation of the complex field as the value of the given column.
Indicates how to handle complex fields, i.e. fields that do not have a single text value.
Field Encoding Strategy | String
  • String Stores the value of each field as a UTF-8 String.
  • Bytes Stores the value of each field as the byte representation of the type derived from the record.
Indicates how to store the value of each field in HBase. The default behavior is to convert each value from the record to a String, and store the UTF-8 bytes. Choosing Bytes will interpret the type of each field from the record, and convert the value to the byte representation of that type, meaning an integer will be stored as the byte representation of that integer.

Relationships:

Name | Description
success | A FlowFile is routed to this relationship after it has been successfully stored in HBase
failure | A FlowFile is routed to this relationship if it cannot be sent to HBase

Reads Attributes:

Name | Description
restart.index | Reads restart.index when it needs to replay part of a record set that did not get into HBase.

Writes Attributes:

Name | Description
restart.index | Writes restart.index when a batch fails to be inserted into HBase

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientMapCacheService/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +HBase_1_1_2_ClientMapCacheService

HBase_1_1_2_ClientMapCacheService

Description:

Provides the ability to use an HBase table as a cache, in place of a DistributedMapCache. Uses an HBase_1_1_2_ClientService controller service to communicate with HBase.
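An in-memory stand-in can show how a key/value cache maps onto an HBase table: the cache key becomes the row id, and the value is stored in a single cell using the documented default family "f" and qualifier "q". This is a hypothetical sketch, not the service's actual implementation:

```python
class HBaseMapCacheSketch:
    """Key/value cache semantics on top of a (simulated) HBase table."""

    def __init__(self, family: bytes = b"f", qualifier: bytes = b"q"):
        self.family, self.qualifier = family, qualifier
        self.rows = {}  # row id -> {(family, qualifier): value}; stands in for the table

    def put(self, key: bytes, value: bytes) -> None:
        # The cache key becomes the row id; the value lives in one cell.
        self.rows[key] = {(self.family, self.qualifier): value}

    def get(self, key: bytes):
        row = self.rows.get(key)
        return None if row is None else row[(self.family, self.qualifier)]
```
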

Tags:

distributed, cache, state, map, cluster, hbase

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
HBase Cache Table Name | Name of the table on HBase to use for the cache.
Supports Expression Language: true
HBase Client Service | Controller Service API: HBaseClientService (Implementation: HBase_1_1_2_ClientService) | Specifies the HBase Client Controller Service to use for accessing HBase.
HBase Column Family | f | Name of the column family on HBase to use for the cache.
Supports Expression Language: true
HBase Column Qualifier | q | Name of the column qualifier on HBase to use for the cache
Supports Expression Language: true

State management:

This component does not store state.

Restricted:

This component is not restricted.

See Also:

HBase_1_1_2_ClientService

\ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hbase_1_1_2-client-service-nar/1.4.0/org.apache.nifi.hbase.HBase_1_1_2_ClientService/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +HBase_1_1_2_ClientService

HBase_1_1_2_ClientService

Description:

Implementation of HBaseClientService for HBase 1.1.2. This service can be configured by providing a comma-separated list of configuration files, or by specifying values for the other properties. If configuration files are provided, they will be loaded first, and the values of the additional properties will override the values from the configuration files. In addition, any user defined properties on the processor will also be passed to the HBase configuration.
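The documented configuration precedence can be sketched as a simple merge: configuration files are loaded first, explicit service properties override them, and user-defined dynamic properties are applied last. The property keys below are hypothetical examples:

```python
def build_hbase_config(file_values, explicit_props, dynamic_props):
    """Merge configuration sources in the documented precedence order."""
    config = dict(file_values)     # e.g. values parsed from hbase-site.xml
    config.update(explicit_props)  # e.g. the ZooKeeper Quorum property overrides files
    config.update(dynamic_props)   # user-defined dynamic properties are set last
    return config
```
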

Tags:

hbase, client

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Hadoop Configuration Files | Comma-separated list of Hadoop Configuration files, such as hbase-site.xml and core-site.xml for kerberos, including full paths to the files.
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true
ZooKeeper Quorum | Comma-separated list of ZooKeeper hosts for HBase. Required if Hadoop Configuration Files are not provided.
ZooKeeper Client Port | The port on which ZooKeeper is accepting client connections. Required if Hadoop Configuration Files are not provided.
ZooKeeper ZNode Parent | The ZooKeeper ZNode Parent value for HBase (example: /hbase). Required if Hadoop Configuration Files are not provided.
HBase Client Retries | 1 | The number of times the HBase client will retry connecting. Required if Hadoop Configuration Files are not provided.
Phoenix Client JAR Location | The full path to the Phoenix client JAR. Required if Phoenix is installed on top of HBase.
Supports Expression Language: true

Dynamic Properties:

Dynamic Properties allow the user to specify both the name and value of a property.
Name | Value | Description
The name of an HBase configuration property. | The value of the given HBase configuration property. | These properties will be set on the HBase configuration after loading any provided configuration files.

State management:

This component does not store state.

Restricted:

This component is not restricted. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.dbcp.hive.HiveConnectionPool/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +HiveConnectionPool

HiveConnectionPool

Description:

Provides a Database Connection Pooling Service for Apache Hive. Connections can be requested from the pool and returned after usage.

Tags:

hive, dbcp, jdbc, database, connection, pooling, store

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, whether a property supports the NiFi Expression Language, and whether a property is considered "sensitive", meaning that its value will be encrypted. Before entering a value in a sensitive property, ensure that the nifi.properties file has an entry for the property nifi.sensitive.props.key.

NameDefault ValueAllowable ValuesDescription
Database Connection URLA database connection URL used to connect to a database. May contain database system name, host, port, database name and some parameters. The exact syntax of a database connection URL is specified by the Hive documentation. For example, the server principal is often included as a connection parameter when connecting to a secure Hive server.
Supports Expression Language: true
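As a hedged illustration of the connection URL described above, the sketch below assembles a HiveServer2 JDBC URL with an optional server principal parameter. The host, port, database, and principal are placeholder values, not values taken from this documentation.

```python
# Sketch: building a HiveServer2 JDBC URL; all concrete values below are
# illustrative placeholders.
def hive_jdbc_url(host, port, database, principal=None):
    url = f"jdbc:hive2://{host}:{port}/{database}"
    if principal:
        # On secured clusters the server principal is commonly appended
        # as a connection parameter.
        url += f";principal={principal}"
    return url

print(hive_jdbc_url("hive-host", 10000, "default", "hive/_HOST@EXAMPLE.COM"))
```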
Hive Configuration ResourcesA file or comma separated list of files which contains the Hive configuration (e.g., hive-site.xml). Without this, Hadoop will search the classpath for a 'hive-site.xml' file or will revert to a default configuration. Note that to enable authentication with Kerberos, for example, the appropriate properties must be set in the configuration files. Please see the Hive documentation for more details.
Supports Expression Language: true
Database UserDatabase user name
Supports Expression Language: true
PasswordThe password for the database user
Sensitive Property: true
Supports Expression Language: true
Max Wait Time500 millisThe maximum amount of time that the pool will wait (when there are no available connections) for a connection to be returned before failing, or -1 to wait indefinitely.
Supports Expression Language: true
Max Total Connections8The maximum number of active connections that can be allocated from this pool at the same time, or negative for no limit.
Supports Expression Language: true
Validation queryValidation query used to validate connections before returning them. When a borrowed connection is invalid, it gets dropped and a new valid connection will be returned. NOTE: Using validation may have a performance penalty.
Supports Expression Language: true
Kerberos PrincipalKerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true
Kerberos KeytabKerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true

State management:

This component does not store state.

Restricted:

This component is not restricted.

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.processors.hive.ConvertAvroToORC/index.html

ConvertAvroToORC

Description:

Converts an Avro record into ORC file format. This processor provides a direct mapping of an Avro record to an ORC record, such that the resulting ORC file will have the same hierarchical structure as the Avro document. If an incoming FlowFile contains a stream of multiple Avro records, the resultant FlowFile will contain an ORC file containing all of the Avro records. If an incoming FlowFile does not contain any records, an empty ORC file is the output. NOTE: Many Avro datatypes (e.g., collections, primitives, and unions of primitives) can be converted to ORC, but unions of collections and other complex datatypes may not be convertible to ORC.

Tags:

avro, orc, hive, convert

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

NameDefault ValueAllowable ValuesDescription
ORC Configuration ResourcesA file or comma separated list of files which contains the ORC configuration (e.g., hive-site.xml). Without this, Hadoop will search the classpath for a 'hive-site.xml' file or will revert to a default configuration. Please see the ORC documentation for more details.
Stripe Size64 MBThe size of the memory buffer (in bytes) for writing stripes to an ORC file
Buffer Size10 KBThe maximum size of the memory buffers (in bytes) used for compressing and storing a stripe in memory. This is a hint to the ORC writer, which may choose to use a smaller buffer size based on stripe size and number of columns for efficient stripe writing and memory utilization.
Compression TypeNONE
  • NONE
  • ZLIB
  • SNAPPY
  • LZO
No Description Provided.
Hive Table NameAn optional table name to insert into the hive.ddl attribute. The generated DDL can be used by a PutHiveQL processor (presumably after a PutHDFS processor) to create a table backed by the converted ORC file. If this property is not provided, the full name (including namespace) of the incoming Avro record will be normalized and used as the table name.
Supports Expression Language: true

Relationships:

NameDescription
successA FlowFile is routed to this relationship after it has been converted to ORC format.
failureA FlowFile is routed to this relationship if it cannot be parsed as Avro or cannot be converted to ORC for any reason

Reads Attributes:

None specified.

Writes Attributes:

NameDescription
mime.typeSets the MIME type to application/octet-stream
filenameSets the filename to the existing filename with the extension replaced by, or appended with, .orc
record.countSets the number of records in the ORC file.
hive.ddlCreates a partial Hive DDL statement for creating a table in Hive from this ORC file. This can be used in ReplaceText for setting the content to the DDL. To make it valid DDL, add "LOCATION '<path_to_orc_file_in_hdfs>'", where the path is the directory that contains this ORC file on HDFS. For example, ConvertAvroToORC can send flow files to a PutHDFS processor to send the file to HDFS, then to a ReplaceText to set the content to this DDL (plus the LOCATION clause as described), then to PutHiveQL processor to create the table if it doesn't exist.
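As a hedged sketch of the completion step described above, the snippet below appends a LOCATION clause to a partial DDL statement like the one carried in the hive.ddl attribute. The DDL text and HDFS directory are illustrative placeholders, not values produced by this processor.

```python
# Sketch: completing the partial DDL from the hive.ddl attribute by
# appending a LOCATION clause; both strings are illustrative placeholders.
partial_ddl = ("CREATE EXTERNAL TABLE IF NOT EXISTS my_table "
               "(id INT, name STRING) STORED AS ORC")
orc_dir = "/data/orc/my_table"  # HDFS directory that contains the ORC file

full_ddl = f"{partial_ddl} LOCATION '{orc_dir}'"
print(full_ddl)
```

In a flow this completion would typically happen in a ReplaceText processor, with the path pointing at the directory PutHDFS wrote the ORC file into.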

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement: This component requires an incoming relationship.

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.processors.hive.PutHiveQL/index.html

PutHiveQL

Description:

Executes a HiveQL DDL/DML command (e.g., UPDATE, INSERT). The content of an incoming FlowFile is expected to be the HiveQL command to execute. The HiveQL command may use the ? character as a placeholder for parameters. In this case, the parameters to use must exist as FlowFile attributes with the naming convention hiveql.args.N.type and hiveql.args.N.value, where N is a positive integer. The hiveql.args.N.type is expected to be a number indicating the JDBC Type. The content of the FlowFile is expected to be in UTF-8 format.
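The parameter convention above can be sketched as follows. The table, values, and JDBC type codes (4 = INTEGER, 12 = VARCHAR, per java.sql.Types) are illustrative, not taken from this documentation.

```python
# Sketch: FlowFile content and attributes for a parameterized HiveQL
# statement; PutHiveQL would bind parameter N from hiveql.args.N.type/value.
flowfile_content = "INSERT INTO users (id, name) VALUES (?, ?)"
flowfile_attributes = {
    "hiveql.args.1.type": "4",    # first ? bound as INTEGER (java.sql.Types)
    "hiveql.args.1.value": "42",
    "hiveql.args.2.type": "12",   # second ? bound as VARCHAR
    "hiveql.args.2.value": "alice",
}
print(flowfile_content.count("?"))
```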

Tags:

sql, hive, put, database, update, insert

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.

NameDefault ValueAllowable ValuesDescription
Hive Database Connection Pooling ServiceController Service API:
HiveDBCPService
Implementation: HiveConnectionPool
The Hive Controller Service that is used to obtain connection(s) to the Hive database
Batch Size100The preferred number of FlowFiles to put to the database in a single transaction
Character SetUTF-8Specifies the character set of the record data.
Statement Delimiter;Statement Delimiter used to separate SQL statements in a multiple statement script
Rollback On Failurefalse
  • true
  • false
Specifies how to handle errors. By default (false), if an error occurs while processing a FlowFile, the FlowFile will be routed to the 'failure' or 'retry' relationship based on the error type, and the processor can continue with the next FlowFile. Instead, you may want to roll back currently processed FlowFiles and stop further processing immediately. In that case, you can do so by enabling this 'Rollback On Failure' property. If enabled, failed FlowFiles will stay in the input relationship without being penalized and will be processed repeatedly until they are processed successfully or removed by other means. It is important to set an adequate 'Yield Duration' to avoid retrying too frequently.

Relationships:

NameDescription
retryA FlowFile is routed to this relationship if the database cannot be updated but attempting the operation again may succeed
successA FlowFile is routed to this relationship after the database is successfully updated
failureA FlowFile is routed to this relationship if the database cannot be updated and retrying the operation will also fail, such as an invalid query or an integrity constraint violation

Reads Attributes:

NameDescription
hiveql.args.N.typeIncoming FlowFiles are expected to be parametrized HiveQL statements. The type of each Parameter is specified as an integer that represents the JDBC Type of the parameter.
hiveql.args.N.valueIncoming FlowFiles are expected to be parametrized HiveQL statements. The value of the Parameters are specified as hiveql.args.1.value, hiveql.args.2.value, hiveql.args.3.value, and so on. The type of the hiveql.args.1.value Parameter is specified by the hiveql.args.1.type attribute.

Writes Attributes:

None specified.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

See Also:

SelectHiveQL

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.processors.hive.PutHiveStreaming/index.html

PutHiveStreaming

Description:

This processor uses Hive Streaming to send flow file data to an Apache Hive table. The incoming flow file is expected to be in Avro format and the table must exist in Hive. Please see the Hive documentation for requirements on the Hive table (format, partitions, etc.). The partition values are extracted from the Avro record based on the names of the partition columns as specified in the processor.

Tags:

hive, streaming, put, database, store

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

NameDefault ValueAllowable ValuesDescription
Hive Metastore URIThe URI location for the Hive Metastore. Note that this is not the location of the Hive Server. The default port for the Hive metastore is 9083.
Supports Expression Language: true
Hive Configuration ResourcesA file or comma separated list of files which contains the Hive configuration (e.g., hive-site.xml). Without this, Hadoop will search the classpath for a 'hive-site.xml' file or will revert to a default configuration. Note that to enable authentication with Kerberos, for example, the appropriate properties must be set in the configuration files. Please see the Hive documentation for more details.
Database NameThe name of the database in which to put the data.
Supports Expression Language: true
Table NameThe name of the database table in which to put the data.
Supports Expression Language: true
Partition ColumnsA comma-delimited list of column names on which the table has been partitioned. The order of values in this list must correspond exactly to the order of partition columns specified during the table creation.
Supports Expression Language: true
Auto-Create Partitionstrue
  • true
  • false
Flag indicating whether partitions should be automatically created
Max Open Connections8The maximum number of open connections that can be allocated from this pool at the same time, or negative for no limit.
Heartbeat Interval60Indicates that a heartbeat should be sent when the specified number of seconds has elapsed. A value of 0 indicates that no heartbeat should be sent.
Transactions per Batch100A hint to Hive Streaming indicating how many transactions the processor task will need. This value must be greater than 1.
Supports Expression Language: true
Records per Transaction10000Number of records to process before committing the transaction. This value must be greater than 1.
Supports Expression Language: true
Rollback On Failurefalse
  • true
  • false
Specifies how to handle errors. By default (false), if an error occurs while processing a FlowFile, the FlowFile will be routed to the 'failure' or 'retry' relationship based on the error type, and the processor can continue with the next FlowFile. Instead, you may want to roll back currently processed FlowFiles and stop further processing immediately. In that case, you can do so by enabling this 'Rollback On Failure' property. If enabled, failed FlowFiles will stay in the input relationship without being penalized and will be processed repeatedly until they are processed successfully or removed by other means. It is important to set an adequate 'Yield Duration' to avoid retrying too frequently. NOTE: When an error occurs after a Hive Streaming transaction derived from the same input FlowFile has already been committed (i.e., a FlowFile contains more records than 'Records per Transaction' and a failure occurs at the 2nd or a later transaction), the succeeded records will be transferred to the 'success' relationship while the original input FlowFile stays in the incoming queue. Duplicate records can be created for the succeeded ones when the same FlowFile is processed again.
Kerberos PrincipalKerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true
Kerberos KeytabKerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties
Supports Expression Language: true

Relationships:

NameDescription
retryThe incoming FlowFile is routed to this relationship if its records cannot be transmitted to Hive. Note that some records may have been processed successfully; they will be routed (as Avro flow files) to the success relationship. The combination of the retry, success, and failure relationships indicates how many records succeeded and/or failed. This can be used to provide a retry capability since full rollback is not possible.
successA FlowFile containing Avro records is routed to this relationship after the records have been successfully transmitted to Hive.
failureA FlowFile containing Avro records is routed to this relationship if the records could not be transmitted to Hive.

Reads Attributes:

None specified.

Writes Attributes:

NameDescription
hivestreaming.record.countThis attribute is written on the flow files routed to the 'success' and 'failure' relationships, and contains the number of records from the incoming flow file written successfully and unsuccessfully, respectively.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hive-nar/1.4.0/org.apache.nifi.processors.hive.SelectHiveQL/index.html

SelectHiveQL

Description:

Executes the provided HiveQL SELECT query against a Hive database connection. The query result will be converted to Avro or CSV format. Streaming is used so arbitrarily large result sets are supported. This processor can be scheduled to run on a timer or cron expression, using the standard scheduling methods, or it can be triggered by an incoming FlowFile. If it is triggered by an incoming FlowFile, then attributes of that FlowFile will be available when evaluating the select query. The FlowFile attribute 'selecthiveql.row.count' indicates how many rows were selected.
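To illustrate attribute evaluation in the select query, the sketch below is a rough stand-in for NiFi Expression Language substitution; the ${...} handling is simplified and the "table.name" and "load.date" attributes are hypothetical.

```python
import re

# Sketch: resolving a HiveQL Select Query that references FlowFile
# attributes via ${...}; a simplified stand-in for NiFi's EL evaluation.
query_template = "SELECT * FROM ${table.name} WHERE load_date = '${load.date}'"
attributes = {"table.name": "web_logs", "load.date": "2017-10-03"}

resolved = re.sub(r"\$\{([^}]+)\}",
                  lambda m: attributes[m.group(1)],
                  query_template)
print(resolved)
```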

Tags:

hive, sql, select, jdbc, query, database

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

NameDefault ValueAllowable ValuesDescription
Hive Database Connection Pooling ServiceController Service API:
HiveDBCPService
Implementation: HiveConnectionPool
The Hive Controller Service that is used to obtain connection(s) to the Hive database
HiveQL Select QueryHiveQL SELECT query to execute
Supports Expression Language: true
Output FormatAvro
  • Avro
  • CSV
How to represent the records coming from Hive (Avro or CSV)
CSV Headertrue
  • true
  • false
Include Header in Output
Alternate CSV HeaderComma separated list of header fields
Supports Expression Language: true
CSV Delimiter,CSV Delimiter used to separate fields
Supports Expression Language: true
CSV Quotetrue
  • true
  • false
Whether to force quoting of CSV fields. Note that this might conflict with the setting for CSV Escape.
CSV Escapetrue
  • true
  • false
Whether to escape CSV strings in output. Note that this might conflict with the setting for CSV Quote.
Character SetUTF-8Specifies the character set of the record data.

Relationships:

NameDescription
successSuccessfully created FlowFile from HiveQL query result set.
failureHiveQL query execution failed. Incoming FlowFile will be penalized and routed to this relationship

Reads Attributes:

None specified.

Writes Attributes:

NameDescription
mime.typeSets the MIME type for the outgoing flowfile to application/avro-binary for Avro or text/csv for CSV.
filenameAdds .avro or .csv to the filename attribute depending on which output format is selected.
selecthiveql.row.countIndicates how many rows were selected/returned by the query.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component allows an incoming relationship.

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-hl7-nar/1.4.0/org.apache.nifi.processors.hl7.ExtractHL7Attributes/index.html

ExtractHL7Attributes

Description:

Extracts information from an HL7 (Health Level 7) formatted FlowFile and adds the information as FlowFile Attributes. The attributes are named as <Segment Name> <dot> <Field Index>. If the segment is repeating, the naming will be <Segment Name> <underscore> <Segment Index> <dot> <Field Index>. For example, we may have an attribute named "MSH.12" with a value of "2.1" and an attribute named "OBX_11.3" with a value of "93000^CPT4".
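The naming rule above can be sketched as a small helper; the function name and the sample segment/field values are illustrative only.

```python
# Sketch of the attribute-naming rule: segment name, an optional segment
# (repetition) index, then the field index.
def hl7_attribute_name(segment, field, segment_index=None):
    if segment_index is None:
        return f"{segment}.{field}"              # e.g. "MSH.12"
    return f"{segment}_{segment_index}.{field}"  # e.g. "OBX_11.3"

print(hl7_attribute_name("OBX", 3, segment_index=11))
```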

Tags:

HL7, health level 7, healthcare, extract, attributes

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

NameDefault ValueAllowable ValuesDescription
Character EncodingUTF-8The Character Encoding that is used to encode the HL7 data
Supports Expression Language: true
Use Segment Namesfalse
  • true
  • false
Whether or not to use HL7 segment names in attributes
Parse Segment Fieldsfalse
  • true
  • false
Whether or not to parse HL7 segment fields into attributes
Skip Validationtrue
  • true
  • false
Whether or not to validate HL7 message values
HL7 Input Versionautodetect
  • autodetect
  • 2.2
  • 2.3
  • 2.3.1
  • 2.4
  • 2.5
  • 2.5.1
  • 2.6
The HL7 version to use for parsing and validation

Relationships:

NameDescription
successA FlowFile is routed to this relationship if it is properly parsed as HL7 and its attributes extracted
failureA FlowFile is routed to this relationship if it cannot be mapped to FlowFile Attributes. This would happen if the FlowFile does not contain valid HL7 data

Reads Attributes:

None specified.

Writes Attributes:

None specified.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.