Subject: svn commit: r1811008 [24/43] - in /nifi/site/trunk/docs: ./ nifi-docs/ nifi-docs/components/ nifi-docs/components/org.apache.nifi/ nifi-docs/components/org.apache.nifi/nifi-ambari-nar/ nifi-docs/components/org.apache.nifi/nifi-ambari-nar/1.4.0/ nifi-do...
Date: Tue, 03 Oct 2017 13:30:27 -0000
From: jstorck@apache.org
To: commits@nifi.apache.org

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html?rev=1811008&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/additionalDetails.html Tue Oct 3 13:30:16 2017
@@ -0,0 +1,229 @@

MergeRecord

Introduction

+

+ The MergeRecord Processor allows the user to take many FlowFiles that consist of record-oriented data (any data format for which there is + a Record Reader available) and combine the FlowFiles into one larger FlowFile. This may be preferable before pushing the data to a downstream + system that prefers larger batches of data, such as HDFS, or in order to improve performance of a NiFi flow by reducing the number of FlowFiles + that flow through the system (thereby reducing the contention placed on the FlowFile Repository, Provenance Repository, Content Repository, and + FlowFile Queues). +

+ +

+ The Processor creates several 'bins' to put the FlowFiles in. The maximum number of bins to use is set to 5 by default, but this can be changed by updating the value of the <Maximum number of Bins> property. The number of bins is bounded in order to avoid running out of Java heap space. Note: while the contents of a FlowFile are stored in the Content Repository and not in the Java heap space, the Processor must hold the FlowFile objects themselves in memory. As a result, these FlowFiles with their attributes can potentially take up a great deal of heap space and cause OutOfMemoryErrors to be thrown. To avoid this, if you expect to merge many small FlowFiles together, it is advisable to instead use a MergeRecord that merges no more than, say, 1,000 FlowFiles into a bundle and then use a second MergeRecord to merge these small bundles into larger bundles. For example, to merge 1,000,000 FlowFiles together, use a MergeRecord that sets <Maximum Number of Records> to 1,000 and route the "merged" Relationship to a second MergeRecord that also sets <Maximum Number of Records> to 1,000. The second MergeRecord will then merge 1,000 bundles of 1,000, which in effect produces bundles of 1,000,000.
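
The arithmetic behind the two-stage approach can be sketched in a few lines of plain Python (illustrative only, not NiFi code); the property names mirror the documentation above.

# Illustrative arithmetic only: why cascading two MergeRecord stages produces
# very large bundles while keeping the number of FlowFile objects held in heap
# per bin small.
records_per_bundle_stage1 = 1_000    # <Maximum Number of Records> on the first MergeRecord
bundles_per_bundle_stage2 = 1_000    # <Maximum Number of Records> on the second MergeRecord

records_per_final_bundle = records_per_bundle_stage1 * bundles_per_bundle_stage2
print(records_per_final_bundle)      # 1000000 records per final merged FlowFile

# Each stage holds on the order of 1,000 FlowFile objects per bin in heap,
# rather than 1,000,000 FlowFile objects for a single-stage merge.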

+ + + +

How FlowFiles are Binned

+

+ How the Processor determines which bin to place a FlowFile in depends on a few different configuration options. Firstly, the Merge Strategy is considered. The Merge Strategy can be set to one of two options: Bin Packing Algorithm, or Defragment. When the goal is to simply combine smaller FlowFiles into one larger FlowFile, the Bin Packing Algorithm should be used. This algorithm picks a bin based on whether or not the FlowFile can fit in the bin according to its size and the <Maximum Bin Size> property and whether or not the FlowFile is 'like' the other FlowFiles in the bin. What it means for two FlowFiles to be 'like FlowFiles' is discussed at the end of this section.
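
As an illustration of this bin-selection rule, here is a minimal sketch in plain Python (not NiFi internals); the FlowFile and Bin types are simplified stand-ins assumed for the example.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowFile:                 # simplified stand-in for a NiFi FlowFile
    schema: str
    size: int
    attributes: dict = field(default_factory=dict)

@dataclass
class Bin:                      # simplified stand-in for an internal bin
    schema: str
    size: int = 0
    correlation_value: Optional[str] = None

def choose_bin(flowfile, bins, max_bin_size, correlation_attribute=None):
    """Return the first existing bin the FlowFile fits into and is 'like', else None."""
    for b in bins:
        fits = b.size + flowfile.size <= max_bin_size
        alike = flowfile.schema == b.schema and (
            correlation_attribute is None
            or flowfile.attributes.get(correlation_attribute) == b.correlation_value
        )
        if fits and alike:
            return b
    return None   # caller creates a new bin (or merges the oldest bin if at the bin limit)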

+ +

+ The "Defragment" Merge Strategy can be used when records need to be explicitly assigned to the same bin. For example, if data is split apart using + the SplitRecord Processor, each 'split' can be processed independently and later merged back together using this Processor with the + Merge Strategy set to Defragment. In order for FlowFiles to be added to the same bin when using this configuration, the FlowFiles must have the same + value for the "fragment.identifier" attribute. Each FlowFile with the same identifier must also have the same value for the "fragment.count" attribute + (which indicates how many FlowFiles belong in the bin) and a unique value for the "fragment.index" attribute so that the FlowFiles can be ordered + correctly. +
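
A hedged sketch of the Defragment bookkeeping described above, in plain Python rather than NiFi code: FlowFiles are grouped by 'fragment.identifier', ordered by 'fragment.index', and 'fragment.count' tells how many belong in the bin.

from collections import defaultdict

def group_fragments(flowfiles):
    """Each FlowFile is represented here simply as a dict of its attributes."""
    bins = defaultdict(list)
    for ff in flowfiles:
        bins[ff["fragment.identifier"]].append(ff)
    for identifier, group in sorted(bins.items()):
        group.sort(key=lambda ff: int(ff["fragment.index"]))    # restore original order
        expected = int(group[0]["fragment.count"])
        yield identifier, group, len(group) == expected         # True once the bin is complete

splits = [
    {"fragment.identifier": "abc", "fragment.index": "1", "fragment.count": "2"},
    {"fragment.identifier": "abc", "fragment.index": "0", "fragment.count": "2"},
]
for identifier, group, complete in group_fragments(splits):
    print(identifier, [ff["fragment.index"] for ff in group], complete)   # abc ['0', '1'] True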

+ +

+ In order to be added to the same bin, two FlowFiles must be 'like FlowFiles.' In order for two FlowFiles to be like FlowFiles, they must have the same + schema, and if the <Correlation Attribute Name> property is set, they must have the same value for the specified attribute. For example, if the + <Correlation Attribute Name> is set to "filename" then two FlowFiles must have the same value for the "filename" attribute in order to be binned + together. If more than one attribute is needed in order to correlate two FlowFiles, it is recommended to use an UpdateAttribute processor before the + MergeRecord processor and combine the attributes. For example, if the goal is to bin together two FlowFiles only if they have the same value for the + "abc" attribute and the "xyz" attribute, then we could accomplish this by using UpdateAttribute and adding a property with name "correlation.attribute" + and a value of "abc=${abc},xyz=${xyz}" and then setting MergeRecord's <Correlation Attribute Name> property to "correlation.attribute". +
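
The effect of that UpdateAttribute step can be illustrated with a small Python sketch (illustrative only; the attribute names 'abc' and 'xyz' come from the example above): combining the two attributes into one composite value reduces multi-attribute correlation to single-attribute correlation.

def composite_correlation_value(attributes):
    # Mirrors the Expression Language value "abc=${abc},xyz=${xyz}" from the example.
    return "abc={abc},xyz={xyz}".format(abc=attributes.get("abc", ""),
                                        xyz=attributes.get("xyz", ""))

a = {"abc": "1", "xyz": "blue"}
b = {"abc": "1", "xyz": "blue"}
c = {"abc": "1", "xyz": "red"}
assert composite_correlation_value(a) == composite_correlation_value(b)   # binned together
assert composite_correlation_value(a) != composite_correlation_value(c)   # different bins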

+ +

+ It is often useful to bin together only Records that have the same value for some field. For example, if we have point-of-sale data, perhaps the desire + is to bin together records that belong to the same store, as identified by the 'storeId' field. This can be accomplished by making use of the PartitionRecord + Processor ahead of MergeRecord. This Processor will allow one or more fields to be configured as the partitioning criteria and will create attributes for those + corresponding values. An UpdateAttribute processor could then be used, if necessary, to combine multiple attributes into a single correlation attribute, + as described above. See documentation for those processors for more details. +

+ + + +

When a Bin is Merged

+

+ Above, we discussed how a bin is chosen for a given FlowFile. Once a bin has been created and FlowFiles added to it, we must have some way to determine when a bin is "full" so that we can merge those FlowFiles together into a "merged" FlowFile. There are a few criteria that are used to make a determination as to whether or not a bin should be merged.

+ +

+ If the <Merge Strategy> property is set to "Bin Packing Algorithm" then the following rules will be evaluated. Firstly, in order for a bin to be full, both of the thresholds specified by the <Minimum Bin Size> and the <Minimum Number of Records> properties must be satisfied. If one of these properties is not set, then it is ignored. Secondly, if either the <Maximum Bin Size> or the <Maximum Number of Records> property is reached, then the bin is merged. That is, both of the minimum values must be reached but only one of the maximum values need be reached. Note that the <Maximum Number of Records> property is a "soft limit," meaning that all records in a given input FlowFile will be added to the same bin, and as a result the number of records may exceed the maximum configured number of records. Once this happens, though, no more Records will be added to that same bin from another FlowFile. If the <Max Bin Age> is reached for a bin, then the FlowFiles in that bin will be merged, even if the minimum bin size and minimum number of records have not yet been met. Finally, if the maximum number of bins has been created (as specified by the <Maximum number of Bins> property), and some input FlowFiles cannot fit into any of the existing bins, then the oldest bin will be merged to make room. This is done because otherwise we would not be able to add any additional FlowFiles to the existing bins and would have to wait until the Max Bin Age is reached (if ever) in order to merge any FlowFiles.
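
The rules above can be summarized with a short decision-function sketch in plain Python (not the Processor's actual implementation); unset properties are treated as "no constraint", and the maximum-number-of-bins rule is handled separately because it depends on the other bins.

def should_merge(bin_records, bin_bytes, bin_age_seconds,
                 min_records=None, min_bytes=None,
                 max_records=None, max_bytes=None, max_age_seconds=None):
    mins_met = ((min_records is None or bin_records >= min_records) and
                (min_bytes is None or bin_bytes >= min_bytes))
    max_hit = ((max_records is not None and bin_records >= max_records) or
               (max_bytes is not None and bin_bytes >= max_bytes))
    aged_out = max_age_seconds is not None and bin_age_seconds >= max_age_seconds
    # Both minimums must be satisfied; hitting either maximum, or the Max Bin Age,
    # forces the merge regardless of the minimums.
    return aged_out or max_hit or mins_met

print(should_merge(bin_records=2, bin_bytes=10_000, bin_age_seconds=5,
                   min_records=3, max_records=1000, max_age_seconds=300))   # False: minimum not met
print(should_merge(bin_records=2, bin_bytes=10_000, bin_age_seconds=301,
                   min_records=3, max_records=1000, max_age_seconds=300))   # True: bin aged out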

+ +

+ If the <Merge Strategy> property is set to "Defragment" then a bin is full only when the number of FlowFiles in the bin is equal to the number specified by the "fragment.count" attribute of one of the FlowFiles in the bin. All FlowFiles that have this attribute must have the same value for it, or else they will be routed to the "failure" relationship. It is not necessary that all FlowFiles have this attribute, but at least one FlowFile in the bin must have it or the bin will never be complete. If all of the necessary FlowFiles are not binned together by the point at which the bin times out (as specified by the <Max Bin Age> property), then the FlowFiles will all be routed to the 'failure' relationship instead of being merged together.
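
For the Defragment strategy, the completion rule can be sketched as follows (plain Python, illustrative only): a bin is complete when it holds exactly 'fragment.count' FlowFiles, and a bin that times out first is routed to 'failure'.

def defragment_outcome(bin_flowfiles, bin_age_seconds, max_bin_age_seconds=None):
    """bin_flowfiles is a list of attribute dicts for the FlowFiles in one bin."""
    counts = {ff["fragment.count"] for ff in bin_flowfiles if "fragment.count" in ff}
    if len(counts) > 1:
        return "failure"            # conflicting fragment.count values in the same bin
    if counts and len(bin_flowfiles) == int(next(iter(counts))):
        return "merged"             # every expected fragment has arrived
    if max_bin_age_seconds is not None and bin_age_seconds >= max_bin_age_seconds:
        return "failure"            # timed out before the bin became complete
    return "waiting"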

+ +

+ Once a bin is merged into a single FlowFile, it can sometimes be useful to understand why exactly the bin was merged when it was. For example, if the maximum number of allowable bins is reached, a merged FlowFile may consist of far fewer records than expected. In order to help understand the behavior, the Processor will emit a JOIN Provenance Event when creating the merged FlowFile, and the JOIN event will include in it a "Details" field that explains why the bin was merged when it was. For example, the event will indicate "Records Merged due to: Bin is full" if the bin reached its minimum thresholds and no more subsequent FlowFiles were able to be added to it. Or it may indicate "Records Merged due to: Maximum number of bins has been exceeded" if the bin was merged due to the configured maximum number of bins being filled and needing to free up space for a new bin.

+ + +

When a Failure Occurs

+

+ When a bin is filled, the Processor is responsible for merging together all of the records in those FlowFiles into a single FlowFile. If the Processor fails + to do so for any reason (for example, a Record cannot be read from an input FlowFile), then all of the FlowFiles in that bin are routed to the 'failure' + Relationship. The Processor does not skip the single problematic FlowFile and merge the others. This behavior was chosen because of two different considerations. + Firstly, without those problematic records, the bin may not truly be full, as the minimum bin size may not be reached without those records. + Secondly, and more importantly, if the problematic FlowFile contains 100 "good" records before the problematic ones, those 100 records would already have been + written to the "merged" FlowFile. We cannot un-write those records. If we were to then send those 100 records on and route the problematic FlowFile to 'failure' + then in a situation where the "failure" relationship is eventually routed back to MergeRecord, we could end up continually duplicating those 100 successfully + processed records. +

+ + + +

Examples

+ +

+ To better understand how this Processor works, we will lay out a few examples. For the sake of simplicity of these examples, we will use CSV-formatted data and + write the merged data as CSV-formatted data, but + the format of the data is not really relevant, as long as there is a Record Reader that is capable of reading the data and a Record Writer capable of writing + the data in the desired format. +

+ + + +

Example 1 - Batching Together Many Small FlowFiles

+ +

+ When we want to batch together many small FlowFiles in order to create one larger FlowFile, we will accomplish this by using the "Bin Packing Algorithm" + Merge Strategy. The idea here is to bundle together as many FlowFiles as we can within our minimum and maximum number of records and bin size. + Consider that we have the following properties set: +

+ + + + + + + + + + + + + + + + + + +
Property Name | Property Value
Merge Strategy | Bin Packing Algorithm
Minimum Number of Records | 3
Maximum Number of Records | 5
+ +

+ Also consider that we have the following data on the queue, with the schema indicating a Name and an Age field: +

+ + + + + + + + + + + + + + + + + + + + + + +
FlowFile ID | FlowFile Contents
1 | Mark, 33
2 | John, 45
  | Jane, 43
3 | Jake, 3
4 | Jan, 2
+ +

+ In this case, because we have not configured a Correlation Attribute, and because all FlowFiles have the same schema, the Processor will attempt to add all of these FlowFiles to the same bin. Because the Minimum Number of Records is 3 and the Maximum Number of Records is 5, all of the FlowFiles will be added to the same bin. The output, then, is a single FlowFile with the following content:

+ + +
+Mark, 33
+John, 45
+Jane, 43
+Jake, 3
+Jan, 2
+
+
+ +

+ When the Processor runs, it will bin all of the FlowFiles that it can get from the queue. After that, it will merge any bin that is "full enough." So if we had only 3 FlowFiles on the queue, those 3 would have been added and merged together, and a new bin would have been created in the next iteration, once the 4th FlowFile showed up. However, if we had 8 FlowFiles queued up, only 5 would have been added to the first bin. The other 3 would have been added to a second bin, and that bin would then be merged since it also reached the minimum threshold of 3.
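
The 8-FlowFile scenario can be traced with a tiny Python sketch (illustrative only; it assumes each queued FlowFile carries a single record, as in the discussion above).

def bin_by_record_count(record_counts, max_records=5):
    bins, current = [], []
    for n in record_counts:
        if current and sum(current) + n > max_records:
            bins.append(current)          # bin reached its maximum; start a new one
            current = []
        current.append(n)
    if current:
        bins.append(current)
    return bins

# Eight queued FlowFiles with one record each: the first five fill one bin and the
# remaining three form a second bin that also satisfies the minimum of 3 records.
print(bin_by_record_count([1] * 8))       # [[1, 1, 1, 1, 1], [1, 1, 1]]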

+ + + \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MergeRecord/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +MergeRecord

MergeRecord

Description:

This Processor merges together multiple record-oriented FlowFiles into a single FlowFile that contains all of the Records of the input FlowFiles. This Processor works by creating 'bins' and then adding FlowFiles to these bins until they are full. Once a bin is full, all of the FlowFiles will be combined into a single output FlowFile, and that FlowFile will be routed to the 'merged' Relationship. A bin will consist of potentially many 'like FlowFiles'. In order for two FlowFiles to be considered 'like FlowFiles', they must have the same Schema (as identified by the Record Reader) and, if the <Correlation Attribute Name> property is set, the same value for the specified attribute. See Processor Usage and Additional Details for more information.

Additional Details...

Tags:

merge, record, content, correlation, stream, event

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.

Name | Default Value | Allowable Values | Description
Record Reader | | Controller Service API: RecordReaderFactory. Implementations: CSVReader, GrokReader, AvroReader, JsonTreeReader, JsonPathReader, ScriptedReader | Specifies the Controller Service to use for reading incoming data
Record Writer | | Controller Service API: RecordSetWriterFactory. Implementations: JsonRecordSetWriter, FreeFormTextRecordSetWriter, AvroRecordSetWriter, ScriptedRecordSetWriter, CSVRecordSetWriter | Specifies the Controller Service to use for writing out the records
Merge Strategy | Bin-Packing Algorithm | see bullets below | Specifies the algorithm used to merge records. The 'Defragment' algorithm combines fragments that are associated by attributes back into a single cohesive FlowFile. The 'Bin-Packing Algorithm' generates a FlowFile populated by arbitrarily chosen FlowFiles
  • Bin-Packing Algorithm: Generates 'bins' of FlowFiles and fills each bin as full as possible. FlowFiles are placed into a bin based on their size and optionally their attributes (if the <Correlation Attribute> property is set)
  • Defragment: Combines fragments that are associated by attributes back into a single cohesive FlowFile. If using this strategy, all FlowFiles must have the attributes <fragment.identifier> and <fragment.count>. All FlowFiles with the same value for "fragment.identifier" will be grouped together. All FlowFiles in this group must have the same value for the "fragment.count" attribute. The ordering of the Records that are output is not guaranteed.
Correlation Attribute Name | | | If specified, two FlowFiles will be binned together only if they have the same value for this Attribute. If not specified, FlowFiles are bundled by the order in which they are pulled from the queue.
Attribute Strategy | Keep Only Common Attributes | see bullets below | Determines which FlowFile attributes should be added to the bundle. If 'Keep All Unique Attributes' is selected, any attribute on any FlowFile that gets bundled will be kept unless its value conflicts with the value from another FlowFile. If 'Keep Only Common Attributes' is selected, only the attributes that exist on all FlowFiles in the bundle, with the same value, will be preserved.
  • Keep Only Common Attributes: Any attribute that is not the same on all FlowFiles in a bin will be dropped. Those that are the same across all FlowFiles will be retained.
  • Keep All Unique Attributes: Any attribute that has the same value for all FlowFiles in a bin, or has no value for a FlowFile, will be kept. For example, if a bin consists of 3 FlowFiles and 2 of them have a value of 'hello' for the 'greeting' attribute and the third FlowFile has no 'greeting' attribute, then the outbound FlowFile will get a 'greeting' attribute with the value 'hello'.
Minimum Number of Records | 1 | | The minimum number of records to include in a bin
Maximum Number of Records | 1000 | | The maximum number of Records to include in a bin. This is a 'soft limit' in that if a FlowFile is added to a bin, all records in that FlowFile will be added, so this limit may be exceeded by up to the number of records in the last input FlowFile.
Minimum Bin Size | 0 B | | The minimum size for the bin
Maximum Bin Size | | | The maximum size for the bundle. If not specified, there is no maximum. This is a 'soft limit' in that if a FlowFile is added to a bin, all records in that FlowFile will be added, so this limit may be exceeded by up to the number of bytes in the last input FlowFile.
Max Bin Age | | | The maximum age of a Bin that will trigger a Bin to be complete. Expected format is <duration> <time unit> where <duration> is a positive integer and time unit is one of seconds, minutes, hours
Maximum Number of Bins | 10 | | Specifies the maximum number of bins that can be held in memory at any one time. This number should not be smaller than the maximum number of concurrent threads for this Processor, or the bins that are created will often consist only of a single incoming FlowFile.

Relationships:

Name | Description
failure | If the bundle cannot be created, all FlowFiles that would have been used to create the bundle will be transferred to failure
original | The FlowFiles that were used to create the bundle
merged | The FlowFile containing the merged records

Reads Attributes:

Name | Description
fragment.identifier | Applicable only if the <Merge Strategy> property is set to Defragment. All FlowFiles with the same value for this attribute will be bundled together.
fragment.count | Applicable only if the <Merge Strategy> property is set to Defragment. This attribute must be present on all FlowFiles with the same value for the fragment.identifier attribute. All FlowFiles in the same bundle must have the same value for this attribute. The value of this attribute indicates how many FlowFiles should be expected in the given bundle.

Writes Attributes:

Name | Description
record.count | The merged FlowFile will have a 'record.count' attribute indicating the number of records that were written to the FlowFile.
mime.type | The MIME Type indicated by the Record Writer
merge.count | The number of FlowFiles that were merged into this bundle
merge.bin.age | The age of the bin, in milliseconds, when it was merged and output. Effectively this is the greatest amount of time that any FlowFile in this bundle remained waiting in this processor before it was output
<Attributes from Record Writer> | Any Attribute that the configured Record Writer returns will be added to the FlowFile.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

See Also:

MergeContent, SplitRecord, PartitionRecord

\ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ModifyBytes/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ModifyBytes/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ModifyBytes/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ModifyBytes/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +ModifyBytes

ModifyBytes

Description:

Discard a byte range at the start and end, or all content, of a binary file.

Tags:

binary, discard, keep

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Start Offset | 0 B | | Number of bytes removed at the beginning of the file. Supports Expression Language: true
End Offset | 0 B | | Number of bytes removed at the end of the file. Supports Expression Language: true
Remove All Content | false | true, false | Remove all content from the FlowFile, superseding the Start Offset and End Offset properties.

Relationships:

Name | Description
success | Processed FlowFiles.

Reads Attributes:

None specified.

Writes Attributes:

None specified.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MonitorActivity/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MonitorActivity/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MonitorActivity/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MonitorActivity/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +MonitorActivity

MonitorActivity

Description:

Monitors the flow for activity and sends out an indicator when the flow has not had any data for some specified amount of time and again when the flow's activity is restored

Tags:

monitor, flow, active, inactive, activity, detection

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Threshold Duration | 5 min | | Determines how much time must elapse before considering the flow to be inactive
Continually Send Messages | false | true, false | If true, will send an inactivity indicator continually every Threshold Duration amount of time until activity is restored; if false, will send an indicator only when the flow first becomes inactive
Inactivity Message | Lacking activity as of time: ${now():format('yyyy/MM/dd HH:mm:ss')}; flow has been inactive for ${inactivityDurationMillis:toNumber():divide(60000)} minutes | | The message that will be the content of FlowFiles that are sent to the 'inactive' relationship. Supports Expression Language: true
Activity Restored Message | Activity restored at time: ${now():format('yyyy/MM/dd HH:mm:ss')} after being inactive for ${inactivityDurationMillis:toNumber():divide(60000)} minutes | | The message that will be the content of FlowFiles that are sent to the 'activity.restored' relationship. Supports Expression Language: true
Copy Attributes | false | true, false | If true, will copy all FlowFile attributes from the FlowFile that resumed activity to the newly created indicator FlowFile
Monitoring Scope | node | node, cluster | Specifies how to determine the activeness of the flow. 'node' means that activeness is examined at each individual node separately, which can be useful if the DFM expects each node to receive FlowFiles in a distributed manner. With 'cluster', the flow is considered active while at least one node is actively receiving FlowFiles. If NiFi is running in standalone mode, this should be set to 'node'; if it is set to 'cluster', NiFi logs a warning message and acts with 'node' scope.
Reporting Node | all | all, primary | Specifies which node should send notification FlowFiles to the 'inactive' and 'activity.restored' relationships. With 'all', every node in the cluster sends notification FlowFiles. 'primary' means FlowFiles will be sent only from the primary node. If NiFi is running in standalone mode, this should be set to 'all'; even if it is set to 'primary', NiFi acts as 'all'.

Relationships:

Name | Description
inactive | This relationship is used to transfer an Inactivity indicator when no FlowFiles are routed to 'success' for the Threshold Duration amount of time
success | All incoming FlowFiles are routed to success
activity.restored | This relationship is used to transfer an Activity Restored indicator when FlowFiles are routed to 'success' following a period of inactivity

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
inactivityStartMillis | The time at which Inactivity began, in the form of milliseconds since Epoch
inactivityDurationMillis | The number of milliseconds that the inactivity has spanned

State management:

Scope | Description
CLUSTER | MonitorActivity stores the last timestamp at each node as state, so that it can examine activity cluster-wide. If 'Copy Attributes' is set to true, then FlowFile attributes are also persisted.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship. \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.Notify/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.Notify/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.Notify/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.Notify/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +Notify

Notify

Description:

Caches a release signal identifier in the distributed cache, optionally along with the FlowFile's attributes. Any flow files held at a corresponding Wait processor will be released once this signal in the cache is discovered.

Tags:

map, cache, notify, distributed, signal, release

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Release Signal Identifier | | | A value, or the results of an Attribute Expression Language statement, which will be evaluated against a FlowFile in order to determine the release signal cache key. Supports Expression Language: true
Signal Counter Name | default | | A value, or the results of an Attribute Expression Language statement, which will be evaluated against a FlowFile in order to determine the signal counter name. The signal counter name is useful when a corresponding Wait processor needs to know the number of occurrences of different types of events, such as success or failure, or destination data source names, etc. Supports Expression Language: true
Signal Counter Delta | 1 | | A value, or the results of an Attribute Expression Language statement, which will be evaluated against a FlowFile in order to determine the signal counter delta. Specifies how much the counter should increase. For example, if multiple signal events are processed upstream in a batch-oriented way, the number of events processed can be notified with this property at once. Zero (0) has a special meaning: it clears the target count back to 0, which is especially useful when used with Wait's Releasable FlowFile Count = Zero (0) mode, to provide 'open-close-gate' type of flow control. One (1) can open a corresponding Wait processor, and Zero (0) can negate it as if closing a gate. Supports Expression Language: true
Signal Buffer Count | 1 | | Specifies the maximum number of incoming FlowFiles that can be buffered until signals are notified to the cache service. A larger buffer can provide better performance, as it reduces the number of interactions with the cache service by grouping signals by signal identifier when multiple incoming FlowFiles share the same signal identifier.
Distributed Cache Service | | Controller Service API: AtomicDistributedMapCacheClient. Implementations: RedisDistributedMapCacheClientService, DistributedMapCacheClientService | The Controller Service that is used to cache release signals in order to release FlowFiles queued at a corresponding Wait processor
Attribute Cache Regex | | | Any attributes whose names match this regex will be stored in the distributed cache to be copied to any FlowFiles released from a corresponding Wait processor. Note that the uuid attribute will not be cached regardless of this value. If blank, no attributes will be cached.

Relationships:

Name | Description
success | All FlowFiles where the release signal has been successfully entered in the cache will be routed to this relationship
failure | When the cache cannot be reached, or if the Release Signal Identifier evaluates to null or empty, FlowFiles will be routed to this relationship

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
notified | All FlowFiles will have an attribute 'notified'. The value of this attribute is true if the FlowFile was notified, otherwise false.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

See Also:

DistributedMapCacheClientService, DistributedMapCacheServer, Wait

\ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseCEF/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseCEF/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseCEF/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseCEF/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1,2 @@ +ParseCEF

ParseCEF

Description:

Parses the contents of a CEF formatted message and adds attributes to the FlowFile for the header and extension parts of the CEF message. Note: This Processor expects CEF messages WITHOUT the syslog headers (i.e. starting at "CEF:0").

Tags:

logs, cef, attributes, system, event, message

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.

Name | Default Value | Allowable Values | Description
Parsed fields destination | flowfile-content | flowfile-content, flowfile-attribute | Indicates whether the results of the CEF parser are written to the FlowFile content or to FlowFile attributes; if using flowfile-attribute, fields will be populated as attributes. If set to flowfile-content, the CEF extension field will be converted into a flat JSON object.
Append raw message to JSON | true | | When using flowfile-content (i.e. JSON output), add the original CEF message to the resulting JSON object. The original message is added as a string to _raw.
Timezone | Local Timezone (system Default) | UTC, Local Timezone (system Default) | Timezone to be used when representing date fields. UTC will convert all dates to UTC, while Local Timezone will convert them to the timezone used by NiFi.
DateTime Locale | en-US | | The IETF BCP 47 representation of the Locale to be used when parsing date fields with long or short month names (e.g. may <en-US> vs. mai <fr-FR>). The default value is generally safe. Only change if having issues parsing CEF messages.

Relationships:

Name | Description
success | Any FlowFile that is successfully parsed as a CEF message will be transferred to this Relationship.
failure | Any FlowFile that could not be parsed as a CEF message will be transferred to this Relationship without any attributes being added

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
cef.header.version | The version of the CEF message.
cef.header.deviceVendor | The Device Vendor of the CEF message.
cef.header.deviceProduct | The Device Product of the CEF message.
cef.header.deviceVersion | The Device Version of the CEF message.
cef.header.deviceEventClassId | The Device Event Class ID of the CEF message.
cef.header.name | The name of the CEF message.
cef.header.severity | The severity of the CEF message.
cef.extension.* | The key and value generated by the parsing of the message.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

See Also:

ParseSyslog

\ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseSyslog/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseSyslog/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseSyslog/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.ParseSyslog/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +ParseSyslog

ParseSyslog

Description:

Attempts to parse the contents of a Syslog message in accordance with the RFC5424 and RFC3164 formats and adds attributes to the FlowFile for each of the parts of the Syslog message. Note: Be mindful that RFC3164 is informational and a wide range of different implementations are present in the wild. If messages fail parsing, consider using RFC5424 or a generic parsing processor such as ExtractGrok.

Tags:

logs, syslog, attributes, system, event, message

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.

Name | Default Value | Allowable Values | Description
Character Set | UTF-8 | | Specifies the character set of the Syslog messages

Relationships:

Name | Description
success | Any FlowFile that is successfully parsed as a Syslog message will be transferred to this Relationship.
failure | Any FlowFile that could not be parsed as a Syslog message will be transferred to this Relationship without any attributes being added

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
syslog.priority | The priority of the Syslog message.
syslog.severity | The severity of the Syslog message derived from the priority.
syslog.facility | The facility of the Syslog message derived from the priority.
syslog.version | The optional version from the Syslog message.
syslog.timestamp | The timestamp of the Syslog message.
syslog.hostname | The hostname or IP address of the Syslog message.
syslog.sender | The hostname of the Syslog server that sent the message.
syslog.body | The body of the Syslog message, everything after the hostname.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

See Also:

ListenSyslog, PutSyslog

\ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/additionalDetails.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1,190 @@ + + + + + + PartitionRecord + + + + + +

+ PartitionRecord allows the user to separate out records in a FlowFile such that each outgoing FlowFile + consists only of records that are "alike." To define what it means for two records to be alike, the Processor + makes use of NiFi's RecordPath DSL. +

+ +

+ In order to make the Processor valid, at least one user-defined property must be added to the Processor. + The value of the property must be a valid RecordPath. Expression Language is supported and will be evaluated before + attempting to compile the RecordPath. However, if Expression Language is used, the Processor is not able to validate + the RecordPath before-hand and may result in having FlowFiles fail processing if the RecordPath is not valid when being + used. +

+ +

+ Once one or more RecordPaths have been added, those RecordPaths are evaluated against each Record in an incoming FlowFile. In order for Record A and Record B to be considered "like records," both of them must have the same value for all RecordPaths that are configured. Only the values that are returned by the RecordPath are held in Java's heap. The records themselves are written immediately to the FlowFile content. This means that for most cases, heap usage is not a concern. However, if the RecordPath points to a large Record field that is different for each record in a FlowFile, then heap usage may be an important consideration. In such cases, SplitRecord may be useful to split a large FlowFile into smaller FlowFiles before partitioning.

+ +

+ Once a FlowFile has been written, we know that all of the Records within that FlowFile have the same value for the fields that are described by the configured RecordPaths. As a result, this means that we can promote those values to FlowFile Attributes. We do so by looking at the name of the property to which each RecordPath belongs. For example, if we have a property named country with a value of /geo/country/name, then each outbound FlowFile will have an attribute named country with the value of the /geo/country/name field. The addition of these attributes makes it very easy to perform tasks such as routing, or referencing the value in another Processor that can be used for configuring where to send the data, etc. However, for any RecordPath whose value is not a scalar value (i.e., the value is of type Array, Map, or Record), no attribute will be added.
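
A hedged sketch of that attribute promotion in plain Python (not NiFi's RecordPath engine): only scalar, non-null values are promoted, and the attribute name comes from the user-defined property name.

def promote_attributes(property_to_value):
    """Maps each user-defined property name (e.g. 'country') to the value its
    RecordPath produced for every record in the outgoing FlowFile."""
    attributes = {}
    for name, value in property_to_value.items():
        if value is not None and isinstance(value, (str, int, float, bool)):
            attributes[name] = str(value)     # scalar values become FlowFile attributes
        # None, arrays, maps, and records are skipped: no attribute is added
    return attributes

print(promote_attributes({"country": "US", "home": {"state": "NY"}}))   # {'country': 'US'}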

+ + + +

Examples

+ +

+ To better understand how this Processor works, we will lay out a few examples. For the sake of these examples, let's assume that our input + data is JSON formatted and looks like this: +

+ + +
+[ {
+  "name": "John Doe",
+  "dob": "11/30/1976",
+  "favorites": [ "spaghetti", "basketball", "blue" ],
+  "locations": {
+  	"home": {
+  		"number": 123,
+  		"street": "My Street",
+  		"city": "New York",
+  		"state": "NY",
+  		"country": "US"
+  	},
+  	"work": {
+  		"number": 321,
+  		"street": "Your Street",
+  		"city": "New York",
+  		"state": "NY",
+  		"country": "US"
+  	}
+  }
+}, {
+  "name": "Jane Doe",
+  "dob": "10/04/1979",
+  "favorites": [ "spaghetti", "football", "red" ],
+  "locations": {
+  	"home": {
+  		"number": 123,
+  		"street": "My Street",
+  		"city": "New York",
+  		"state": "NY",
+  		"country": "US"
+  	},
+  	"work": {
+  		"number": 456,
+  		"street": "Our Street",
+  		"city": "New York",
+  		"state": "NY",
+  		"country": "US"
+  	}
+  }
+}, {
+  "name": "Jacob Doe",
+  "dob": "04/02/2012",
+  "favorites": [ "chocolate", "running", "yellow" ],
+  "locations": {
+  	"home": {
+  		"number": 123,
+  		"street": "My Street",
+  		"city": "New York",
+  		"state": "NY",
+  		"country": "US"
+  	},
+  	"work": null
+  }
+}, {
+  "name": "Janet Doe",
+  "dob": "02/14/2007",
+  "favorites": [ "spaghetti", "reading", "white" ],
+  "locations": {
+  	"home": {
+  		"number": 1111,
+  		"street": "Far Away",
+  		"city": "San Francisco",
+  		"state": "CA",
+  		"country": "US"
+  	},
+  	"work": null
+  }
+}]
+
+
+ + +

Example 1 - Partition By Simple Field

+ +

+ For a simple case, let's partition all of the records based on the state that they live in. + We can add a property named state with a value of /locations/home/state. + The result will be that we will have two outbound FlowFiles. The first will contain an attribute with the name + state and a value of NY. This FlowFile will consist of 3 records: John Doe, Jane Doe, and Jacob Doe. + The second FlowFile will consist of a single record for Janet Doe and will contain an attribute named state that + has a value of CA. +
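
The grouping in this example can be reproduced with a few lines of plain Python (illustrative only; the records are abbreviated to the field the RecordPath /locations/home/state selects).

from collections import defaultdict

records = [
    {"name": "John Doe",  "locations": {"home": {"state": "NY"}}},
    {"name": "Jane Doe",  "locations": {"home": {"state": "NY"}}},
    {"name": "Jacob Doe", "locations": {"home": {"state": "NY"}}},
    {"name": "Janet Doe", "locations": {"home": {"state": "CA"}}},
]

groups = defaultdict(list)
for record in records:
    groups[record["locations"]["home"]["state"]].append(record["name"])

print(dict(groups))   # {'NY': ['John Doe', 'Jane Doe', 'Jacob Doe'], 'CA': ['Janet Doe']}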

+ + +

Example 2 - Partition By Nullable Value

+ +

+ In the above example, there are three different values for the work location. If we use a RecordPath of /locations/work/state + with a property name of state, then we will end up with two different FlowFiles. The first will contain records for John Doe and Jane Doe + because they have the same value for the given RecordPath. This FlowFile will have an attribute named state with a value of NY. +

+

+ The second FlowFile will contain the two records for Jacob Doe and Janet Doe, because the RecordPath will evaluate + to null for both of them. This FlowFile will have no state attribute (unless such an attribute existed on the incoming FlowFile, + in which case its value will be unaltered). +

+ + +

Example 3 - Partition By Multiple Values

+ +

+ Now let's say that we want to partition records based on multiple different fields. We now add two properties to the PartitionRecord processor. + The first property is named home and has a value of /locations/home. The second property is named favorite.food + and has a value of /favorites[0] to reference the first element in the "favorites" array. +

+ +

+ This will result in three different FlowFiles being created. The first FlowFile will contain records for John Doe and Jane Doe. It will contain an attribute named "favorite.food" with a value of "spaghetti." However, because the "home" RecordPath points to a Record field, no "home" attribute will be added. In this case, both of these records have the same value for the first element of the "favorites" array and the same home address. Janet Doe has the same value for the first element in the "favorites" array but has a different home address. Similarly, Jacob Doe has the same home address but a different value for the favorite food.

+ +

+ The second FlowFile will consist of a single record: Jacob Doe. This FlowFile will have an attribute named "favorite.food" with a value of "chocolate." + The third FlowFile will consist of a single record: Janet Doe. This FlowFile will have an attribute named "favorite.food" with a value of "spaghetti." +
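
The three-way split in this example can likewise be sketched in plain Python (illustrative only; the home addresses are abbreviated): records are grouped by the pair of values the two RecordPaths produce.

from collections import defaultdict

people = [
    ("John Doe",  ("123 My Street", "NY"), "spaghetti"),
    ("Jane Doe",  ("123 My Street", "NY"), "spaghetti"),
    ("Jacob Doe", ("123 My Street", "NY"), "chocolate"),
    ("Janet Doe", ("1111 Far Away", "CA"), "spaghetti"),
]

groups = defaultdict(list)
for name, home, favorite_food in people:
    groups[(home, favorite_food)].append(name)     # group on both values at once

for (home, food), names in groups.items():
    print(food, names)
# spaghetti ['John Doe', 'Jane Doe']
# chocolate ['Jacob Doe']
# spaghetti ['Janet Doe']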

+ + + \ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PartitionRecord/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +PartitionRecord

PartitionRecord

Description:

Receives Record-oriented data (i.e., data that can be read by the configured Record Reader) and evaluates one or more RecordPaths against each record in the incoming FlowFile. Each record is then grouped with other "like records" and a FlowFile is created for each group of "like records." What it means for two records to be "like records" is determined by user-defined properties. The user is required to enter at least one user-defined property whose value is a RecordPath. Two records are considered alike if they have the same value for all configured RecordPaths. Because we know that all records in a given output FlowFile have the same value for the fields that are specified by the RecordPath, an attribute is added for each field. See Additional Details on the Usage page for more information and examples.

Additional Details...

Tags:

record, partition, recordpath, rpath, segment, split, group, bin, organize

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.

Name | Default Value | Allowable Values | Description
Record Reader | | Controller Service API: RecordReaderFactory. Implementations: CSVReader, GrokReader, AvroReader, JsonTreeReader, JsonPathReader, ScriptedReader | Specifies the Controller Service to use for reading incoming data
Record Writer | | Controller Service API: RecordSetWriterFactory. Implementations: JsonRecordSetWriter, FreeFormTextRecordSetWriter, AvroRecordSetWriter, ScriptedRecordSetWriter, CSVRecordSetWriter | Specifies the Controller Service to use for writing out the records

Dynamic Properties:

Dynamic Properties allow the user to specify both the name and value of a property.
Name | Value | Description
The name given to the dynamic property is the name of the attribute that will be used to denote the value of the associated RecordPath. | A RecordPath that points to a field in the Record. | Each dynamic property represents a RecordPath that will be evaluated against each record in an incoming FlowFile. When the value of the RecordPath is determined for a Record, an attribute is added to the outgoing FlowFile. The name of the attribute is the same as the name of this property. The value of the attribute is the same as the value of the field in the Record that the RecordPath points to. Note that no attribute will be added if the value returned for the RecordPath is null or is not a scalar value (i.e., the value is an Array, Map, or Record). Supports Expression Language: true

Relationships:

Name | Description
success | FlowFiles that are successfully partitioned will be routed to this relationship
failure | If a FlowFile cannot be partitioned from the configured input format to the configured output format, the unchanged FlowFile will be routed to this relationship
original | Once all records in an incoming FlowFile have been partitioned, the original FlowFile is routed to this relationship.

Reads Attributes:

None specified.

Writes Attributes:

Name | Description
record.count | The number of records in an outgoing FlowFile
mime.type | The MIME Type that the configured Record Writer indicates is appropriate
<dynamic property name> | For each dynamic property that is added, an attribute may be added to the FlowFile. See the description for Dynamic Properties for more information.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

See Also:

ConvertRecord, SplitRecord, UpdateRecord, QueryRecord

\ No newline at end of file Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PostHTTP/index.html URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PostHTTP/index.html?rev=1811008&view=auto ============================================================================== --- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PostHTTP/index.html (added) +++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.PostHTTP/index.html Tue Oct 3 13:30:16 2017 @@ -0,0 +1 @@ +PostHTTP

PostHTTP

Description:

Performs an HTTP Post with the content of the FlowFile

Tags:

http, https, remote, copy, archive

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, whether a property supports the NiFi Expression Language, and whether a property is considered "sensitive", meaning that its value will be encrypted. Before entering a value in a sensitive property, ensure that the nifi.properties file has an entry for the property nifi.sensitive.props.key.

Name | Default Value | Allowable Values | Description
URL | | | The URL to POST to. The first part of the URL must be static. However, the path of the URL may be defined using the Attribute Expression Language. For example, https://${hostname} is not valid, but https://1.1.1.1:8080/files/${nf.file.name} is valid. Supports Expression Language: true
Max Batch Size | 100 MB | | If the Send as FlowFile property is true, specifies the max data size for a batch of FlowFiles to send in a single HTTP POST. If not specified, each FlowFile will be sent separately. If the Send as FlowFile property is false, this property is ignored
Max Data to Post per Second | | | The maximum amount of data to send per second; this allows the bandwidth to be throttled to a specified data rate; if not specified, the data rate is not throttled
SSL Context Service | | Controller Service API: SSLContextService. Implementations: StandardSSLContextService, StandardRestrictedSSLContextService | The Controller Service to use in order to obtain an SSL Context
Username | | | Username required to access the URL
Password | | | Password required to access the URL. Sensitive Property: true
Send as FlowFile | false | true, false | If true, will package the FlowFile's contents and attributes together and send the FlowFile Package; otherwise, will send only the FlowFile's content
Use Chunked Encoding | | true, false | Specifies whether or not to use Chunked Encoding to send the data. This property is ignored in the event the contents are compressed or sent as FlowFiles.
Compression Level | 0 | | Determines the GZIP Compression Level to use when sending the file; the value must be in the range of 0-9. A value of 0 indicates that the file will not be GZIP'ed
Connection Timeout | 30 sec | | How long to wait when attempting to connect to the remote server before giving up
Data Timeout | 30 sec | | How long to wait between receiving segments of data from the remote server before giving up and discarding the partial file
Attributes to Send as HTTP Headers (Regex) | | | Specifies the Regular Expression that determines the names of FlowFile attributes that should be sent as HTTP Headers
User Agent | Apache-HttpClient/4.5.3 (Java/1.8.0_102) | | What to report as the User Agent when we connect to the remote server
Proxy Host | | | The fully qualified hostname or IP address of the proxy server
Proxy Port | | | The port of the proxy server
Content-Type | ${mime.type} | | The Content-Type to specify for the content of the FlowFile being POSTed if Send as FlowFile is false. In the case of an empty value after evaluating an Expression Language expression, Content-Type defaults to application/octet-stream. Supports Expression Language: true

Relationships:

Name | Description
success | Files that are successfully sent will be transferred to success
failure | Files that fail to send will be transferred to failure

Reads Attributes:

None specified.

Writes Attributes:

None specified.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship. \ No newline at end of file