Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html?rev=1811008&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.avro.AvroRecordSetWriter/index.html Tue Oct 3 13:30:16 2017
@@ -0,0 +1 @@
+AvroRecordSetWriter

AvroRecordSetWriter

Description:

Writes the contents of a RecordSet in Binary Avro format.

Tags:

avro, result, set, writer, serializer, record, recordset, row

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Schema Write Strategy | avro-embedded
  • Embed Avro Schema The FlowFile will have the Avro schema embedded into the content, as is typical with Avro
  • Set 'schema.name' Attribute The FlowFile will be given an attribute named 'schema.name' and this attribute will indicate the name of the schema in the Schema Registry. Note that if the schema for a record is not obtained from a Schema Registry, then no attribute will be added.
  • Set 'avro.schema' Attribute The FlowFile will be given an attribute named 'avro.schema' and this attribute will contain the Avro Schema that describes the records in the FlowFile. The contents of the FlowFile need not be Avro, but the text of the schema will be used.
  • HWX Schema Reference Attributes The FlowFile will be given a set of 3 attributes to describe the schema: 'schema.identifier', 'schema.version', and 'schema.protocol.version'. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data.
  • HWX Content-Encoded Schema Reference The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, as found at https://github.com/hortonworks/registry. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data.
  • Confluent Schema Registry Reference The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data. This is based on the encoding used by version 3.2.x of the Confluent Schema Registry.
  • Do Not Write Schema Do not add any schema-related information to the FlowFile.
Specifies how the schema for a Record should be added to the data.
Schema Access Strategy | inherit-record-schema
  • Use 'Schema Name' Property The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service.
  • Inherit Record Schema The schema used to write records will be the same schema that was given to the Record when the Record was created.
  • Use 'Schema Text' Property The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions.
Specifies how to obtain the schema that is to be used for interpreting the data.
Schema Registry | Controller Service API: SchemaRegistry | Implementations: AvroSchemaRegistry, HortonworksSchemaRegistry, ConfluentSchemaRegistry
Specifies the Controller Service to use for the Schema Registry
Schema Name | ${schema.name} | Specifies the name of the schema to look up in the configured Schema Registry
Supports Expression Language: true
Schema Text | ${avro.schema} | The text of an Avro-formatted Schema
Supports Expression Language: true
Compression Format | NONE
  • BZIP2
  • DEFLATE
  • NONE
  • SNAPPY
  • LZO
Compression type to use when writing Avro files. Default is None.
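
The two content-encoded write strategies above describe fixed byte layouts that are prepended to the FlowFile content. A minimal Java sketch of those layouts, assuming only the sizes stated in the descriptions (the class and method names here are hypothetical, not NiFi API):

    import java.nio.ByteBuffer;

    // Illustrative only: the byte layouts described above, not NiFi's own code.
    public class SchemaRefEncoding {

        // HWX: 1 byte 'protocol version' + 8 bytes schema identifier + 4 bytes schema version.
        static byte[] hwxHeader(byte protocolVersion, long schemaId, int schemaVersion) {
            return ByteBuffer.allocate(1 + 8 + 4)
                    .put(protocolVersion)
                    .putLong(schemaId)
                    .putInt(schemaVersion)
                    .array();
        }

        // Confluent: a single 'Magic Byte' (0) + 4 bytes representing the schema identifier.
        static byte[] confluentHeader(int schemaId) {
            return ByteBuffer.allocate(1 + 4)
                    .put((byte) 0)
                    .putInt(schemaId)
                    .array();
        }

        public static void main(String[] args) {
            System.out.println(hwxHeader((byte) 1, 42L, 3).length); // 13 bytes
            System.out.println(confluentHeader(100001).length);     // 5 bytes
        }
    }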

State management:

This component does not store state.

Restricted:

This component is not restricted.
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/additionalDetails.html?rev=1811008&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/additionalDetails.html Tue Oct 3 13:30:16 2017
@@ -0,0 +1,334 @@

CSVReader

+ The CSVReader Controller Service expects input in such a way that the first line of a FlowFile specifies the name of each column in the data. Following the first line, the rest of the FlowFile is expected to be valid CSV data from which to form appropriate Records. The reader allows for customization of the CSV Format, such as which character should be used to separate CSV fields, which character should be used for quoting and when to quote fields, which character should denote a comment, etc.

+ + +

Schemas and Type Coercion

+ +

+ When a record is parsed from incoming data, it is separated into fields. Each of these fields is then looked up against the + configured schema (by field name) in order to determine what the type of the data should be. If the field is not present in + the schema, that field is omitted from the Record. If the field is found in the schema, the data type of the received data + is compared against the data type specified in the schema. If the types match, the value of that field is used as-is. If the + schema indicates that the field should be of a different type, then the Controller Service will attempt to coerce the data + into the type specified by the schema. If the field cannot be coerced into the specified type, an Exception will be thrown. +
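
A minimal Java sketch of that lookup-and-coerce flow. The types here are hypothetical stand-ins, not NiFi's record API; it only illustrates the match/coerce/throw behavior described above:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: field-by-field type coercion against a schema.
    public class CoercionSketch {

        static Object coerce(Object value, Class<?> targetType) {
            if (targetType.isInstance(value)) {
                return value; // types already match: use the value as-is
            }
            if (targetType == Double.class && value instanceof String) {
                return Double.parseDouble(((String) value).trim());
            }
            if (targetType == Integer.class && value instanceof String) {
                return Integer.parseInt(((String) value).trim());
            }
            // No applicable rule: the coercion fails with an Exception.
            throw new IllegalArgumentException(
                    "Cannot coerce " + value + " to " + targetType.getSimpleName());
        }

        public static void main(String[] args) {
            Map<String, Class<?>> schema = new HashMap<>();
            schema.put("id", Integer.class);
            schema.put("balance", Double.class);
            System.out.println(coerce("48.23", schema.get("balance"))); // 48.23
            System.out.println(coerce("1", schema.get("id")));          // 1
        }
    }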

+ +

+ The following rules apply when attempting to coerce a field value from one data type to another: +

+ + + +

+ If none of the above rules apply when attempting to coerce a value from one data type to another, the coercion will fail and an Exception + will be thrown. +

+ + + +

Examples

+ +

Example 1

+ +

+ As an example, consider a FlowFile whose contents consist of the following:

+ + +id, name, balance, join_date, notes
+1, John, 48.23, 04/03/2007, "Our very
+first customer!"
+2, Jane, 1245.89, 08/22/2009,
+3, Frank Franklin, "48481.29", 04/04/2016,
+
+ +

+ Additionally, let's consider that this Controller Service is configured with the Schema Registry pointing to an AvroSchemaRegistry and the schema is + configured as the following: +

+ + +
+{
+  "namespace": "nifi",
+  "name": "balances",
+  "type": "record",
+  "fields": [
+    { "name": "id", "type": "int" },
+    { "name": "name": "type": "string" },
+    { "name": "balance": "type": "double" },
+    { "name": "join_date", "type": {
+      "type": "int",
+      "logicalType": "date"
+    }},
+    { "name": "notes": "type": "string" }
+  ]
+}
+
+
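
As a quick sanity check, the corrected schema above parses cleanly with the Apache Avro library (this assumes org.apache.avro on the classpath; it is a verification sketch, not part of the Controller Service):

    import org.apache.avro.Schema;

    // Parse the 'balances' schema and confirm join_date carries the 'date' logical type.
    public class ParseBalancesSchema {
        public static void main(String[] args) {
            String json = "{"
                + " \"namespace\": \"nifi\", \"name\": \"balances\", \"type\": \"record\","
                + " \"fields\": ["
                + "   { \"name\": \"id\", \"type\": \"int\" },"
                + "   { \"name\": \"name\", \"type\": \"string\" },"
                + "   { \"name\": \"balance\", \"type\": \"double\" },"
                + "   { \"name\": \"join_date\", \"type\": { \"type\": \"int\", \"logicalType\": \"date\" } },"
                + "   { \"name\": \"notes\", \"type\": \"string\" }"
                + " ] }";
            Schema schema = new Schema.Parser().parse(json);
            System.out.println(schema.getFields().size());                                    // 5
            System.out.println(schema.getField("join_date").schema().getProp("logicalType")); // date
        }
    }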
+ +

+ In the example above, we see that the 'join_date' column is a Date type. In order for the CSV Reader to be able to properly parse a value as a date, + we need to provide the reader with the date format to use. In this example, we would configure the Date Format property to be MM/dd/yyyy + to indicate that it is a two-digit month, followed by a two-digit day, followed by a four-digit year - each separated by a slash. + In this case, the result will be that this FlowFile consists of 3 different records. The first record will contain the following values: +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
id | 1
name | John
balance | 48.23
join_date | 04/03/2007
notes | Our very
first customer!
+ +

+ The second record will contain the following values: +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
id | 2
name | Jane
balance | 1245.89
join_date | 08/22/2009
notes |
+ +

+ The third record will contain the following values: +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
id | 3
name | Frank Franklin
balance | 48481.29
join_date | 04/04/2016
notes |
+ + + +
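
The join_date handling in this example relies on the Date Format property. "Java Simple Date Format" refers to java.text.SimpleDateFormat, so the MM/dd/yyyy behavior can be seen directly (a standalone sketch, not NiFi code):

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    // With Date Format set to MM/dd/yyyy, join_date values parse like this.
    public class JoinDateParsing {
        public static void main(String[] args) throws ParseException {
            SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy");
            Date joinDate = format.parse("04/03/2007"); // the first record's join_date
            System.out.println(joinDate);
        }
    }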

Example 2 - Schema with CSV Header Line

+ +

+ When CSV data consists of a header line that outlines the column names, the reader provides + a couple of different properties for configuring how to handle these column names. The + "Schema Access Strategy" property as well as the associated properties ("Schema Registry," "Schema Text," and + "Schema Name" properties) can be used to specify how to obtain the schema. If the "Schema Access Strategy" is set + to "Use String Fields From Header" then the header line of the CSV will be used to determine the schema. Otherwise, + a schema will be referenced elsewhere. But what happens if a schema is obtained from a Schema Registry, for instance, + and the CSV Header indicates a different set of column names? +

+ +

+ For example, let's say that the following schema is obtained from the Schema Registry: +

+ + +
+{
+  "namespace": "nifi",
+  "name": "balances",
+  "type": "record",
+  "fields": [
+    { "name": "id", "type": "int" },
+    { "name": "name": "type": "string" },
+    { "name": "balance": "type": "double" },
+    { "name": "memo": "type": "string" }
+  ]
+}
+
+
+ +

+ And the CSV contains the following data: +

+ + +
+id, name, balance, notes
+1, John Doe, 123.45, First Customer
+
+
+ +

+ Note here that our schema indicates that the final column is named "memo" whereas the CSV Header indicates that it is named "notes." +

+ +

+ In this case, the reader will look at the "Ignore CSV Header Column Names" property. If this property is set to "true" then the column names + provided in the CSV will simply be ignored and the last column will be called "memo." However, if the "Ignore CSV Header Column Names" property + is set to "false" then the result will be that the last column will be named "notes" and each record will have a null value for the "memo" column. +
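
A tiny Java sketch of the naming rule just described (illustrative only; the variable names are hypothetical):

    // Field-name resolution for the mismatched last column, per the rule above.
    public class HeaderNameResolution {
        public static void main(String[] args) {
            boolean ignoreHeaderNames = true;  // the 'Ignore CSV Header Column Names' value
            String headerName = "notes";       // name found in the CSV header line
            String schemaName = "memo";        // name found in the configured schema
            String fieldName = ignoreHeaderNames ? schemaName : headerName;
            System.out.println(fieldName);     // "memo" when true, "notes" when false
        }
    }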

+ +

+ With "Ignore CSV Header Column Names" property set to "false":
+ + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
id | 1
name | John Doe
balance | 123.45
memo | First Customer
+

+ + +

+ With "Ignore CSV Header Column Names" property set to "true":
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
id | 1
name | John Doe
balance | 123.45
notes | First Customer
memo | null
+

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/index.html?rev=1811008&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVReader/index.html Tue Oct 3 13:30:16 2017
@@ -0,0 +1 @@
+CSVReader

CSVReader

Description:

Parses CSV-formatted data, returning each row in the CSV file as a separate record. This reader assumes that the first line in the content is the column names and all subsequent lines are the values. See Controller Service's Usage for further documentation.

Additional Details...

Tags:

csv, parse, record, row, reader, delimited, comma, separated, values

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Schema Access Strategy | csv-header-derived
  • Use 'Schema Name' Property The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service.
  • Use 'Schema Text' Property The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions.
  • HWX Schema Reference Attributes The FlowFile contains 3 Attributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'
  • HWX Content-Encoded Schema Reference The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry
  • Confluent Content-Encoded Schema Reference The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry.
  • Use String Fields From Header The first non-comment line of the CSV file is a header line that contains the names of the columns. The schema will be derived by using the column names in the header and assuming that all columns are of type String.
Specifies how to obtain the schema that is to be used for interpreting the data.
Schema Registry | Controller Service API: SchemaRegistry | Implementations: AvroSchemaRegistry, HortonworksSchemaRegistry, ConfluentSchemaRegistry
Specifies the Controller Service to use for the Schema Registry
Schema Name | ${schema.name} | Specifies the name of the schema to look up in the configured Schema Registry
Supports Expression Language: true
Schema Text | ${avro.schema} | The text of an Avro-formatted Schema
Supports Expression Language: true
Date Format | Specifies the format to use when reading/writing Date fields. If not specified, Date fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/01/2017).
Time Format | Specifies the format to use when reading/writing Time fields. If not specified, Time fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).
Timestamp Format | Specifies the format to use when reading/writing Timestamp fields. If not specified, Timestamp fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).
CSV Format | custom
  • Custom Format The format of the CSV is configured by using the properties of this Controller Service, such as Value Separator
  • RFC 4180 CSV data follows the RFC 4180 Specification defined at https://tools.ietf.org/html/rfc4180
  • Microsoft Excel CSV data follows the format used by Microsoft Excel
  • Tab-Delimited CSV data is Tab-Delimited instead of Comma Delimited
  • MySQL Format CSV data follows the format used by MySQL
  • Informix Unload The format used by Informix when issuing the UNLOAD TO file_name command
  • Informix Unload Escape Disabled The format used by Informix when issuing the UNLOAD TO file_name command with escaping disabled
Specifies which "format" the CSV data is in, or specifies if custom formatting should be used.
Value Separator | , | The character that is used to separate values/fields in a CSV Record
Treat First Line as Header | false
  • true
  • false
Specifies whether or not the first line of CSV should be considered a Header or should be considered a record. If the Schema Access Strategy indicates that the columns must be defined in the header, then this property will be ignored, since the header must always be present and won't be processed as a Record. Otherwise, if 'true', then the first line of CSV data will not be processed as a record and if 'false', then the first line will be interpreted as a record.
Ignore CSV Header Column Names | false
  • true
  • false
If the first line of a CSV is a header, and the configured schema does not match the fields named in the header line, this controls how the Reader will interpret the fields. If this property is true, then the field names mapped to each column are driven only by the configured schema and any fields not in the schema will be ignored. If this property is false, then the field names found in the CSV Header will be used as the names of the fields.
Quote Character"The character that is used to quote values so that escape characters do not have to be used
Escape Character\The character that is used to escape characters that would otherwise have a specific meaning to the CSV Parser.
Comment MarkerThe character that is used to denote the start of a comment. Any line that begins with this comment will be ignored.
Null StringSpecifies a String that, if present as a value in the CSV, should be considered a null field instead of using the literal value.
Trim Fieldstrue
  • true
  • false
Whether or not white space should be removed from the beginning and end of fields
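
The Custom Format properties above correspond closely to Apache Commons CSV's CSVFormat options, and the predefined formats in the CSV Format list appear to mirror that library's named formats. A sketch of assembling the same settings, assuming commons-csv 1.4+ (this mirrors the property list; it is not NiFi's implementation):

    import org.apache.commons.csv.CSVFormat;
    import org.apache.commons.csv.CSVParser;
    import org.apache.commons.csv.CSVRecord;

    // The 'Custom Format' reader properties expressed as a Commons CSV CSVFormat.
    public class CustomFormatSketch {
        public static void main(String[] args) throws Exception {
            CSVFormat format = CSVFormat.DEFAULT
                    .withDelimiter(',')          // Value Separator
                    .withQuote('"')              // Quote Character
                    .withEscape('\\')            // Escape Character
                    .withCommentMarker('#')      // Comment Marker
                    .withNullString("")          // Null String
                    .withTrim()                  // Trim Fields
                    .withFirstRecordAsHeader();  // Treat First Line as Header

            try (CSVParser parser = CSVParser.parse("id,name\n1,John Doe\n", format)) {
                for (CSVRecord record : parser) {
                    System.out.println(record.get("name")); // John Doe
                }
            }
        }
    }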

State management:

This component does not store state.

Restricted:

This component is not restricted.
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVRecordSetWriter/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVRecordSetWriter/index.html?rev=1811008&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVRecordSetWriter/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.csv.CSVRecordSetWriter/index.html Tue Oct 3 13:30:16 2017
@@ -0,0 +1 @@
+CSVRecordSetWriter

CSVRecordSetWriter

Description:

Writes the contents of a RecordSet as CSV data. The first line written will be the column names (unless the 'Include Header Line' property is false). All subsequent lines will be the values corresponding to the record fields.

Tags:

csv, result, set, recordset, record, writer, serializer, row, tsv, tab, separated, delimited

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Schema Write Strategy | schema-name
  • Set 'schema.name' Attribute The FlowFile will be given an attribute named 'schema.name' and this attribute will indicate the name of the schema in the Schema Registry. Note that if the schema for a record is not obtained from a Schema Registry, then no attribute will be added.
  • Set 'avro.schema' Attribute The FlowFile will be given an attribute named 'avro.schema' and this attribute will contain the Avro Schema that describes the records in the FlowFile. The contents of the FlowFile need not be Avro, but the text of the schema will be used.
  • HWX Schema Reference Attributes The FlowFile will be given a set of 3 attributes to describe the schema: 'schema.identifier', 'schema.version', and 'schema.protocol.version'. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data.
  • HWX Content-Encoded Schema Reference The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, as found at https://github.com/hortonworks/registry. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data.
  • Confluent Schema Registry Reference The content of the FlowFile will contain a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This will be prepended to each FlowFile. Note that if the schema for a record does not contain the necessary identifier and version, an Exception will be thrown when attempting to write the data. This is based on the encoding used by version 3.2.x of the Confluent Schema Registry.
  • Do Not Write Schema Do not add any schema-related information to the FlowFile.
Specifies how the schema for a Record should be added to the data.
Schema Access Strategy | inherit-record-schema
  • Use 'Schema Name' Property The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service.
  • Inherit Record Schema The schema used to write records will be the same schema that was given to the Record when the Record was created.
  • Use 'Schema Text' Property The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions.
Specifies how to obtain the schema that is to be used for interpreting the data.
Schema Registry | Controller Service API: SchemaRegistry | Implementations: AvroSchemaRegistry, HortonworksSchemaRegistry, ConfluentSchemaRegistry
Specifies the Controller Service to use for the Schema Registry
Schema Name | ${schema.name} | Specifies the name of the schema to look up in the configured Schema Registry
Supports Expression Language: true
Schema Text | ${avro.schema} | The text of an Avro-formatted Schema
Supports Expression Language: true
Date Format | Specifies the format to use when reading/writing Date fields. If not specified, Date fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters, as in 01/01/2017).
Time Format | Specifies the format to use when reading/writing Time fields. If not specified, Time fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, HH:mm:ss for a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 18:04:15).
Timestamp Format | Specifies the format to use when reading/writing Timestamp fields. If not specified, Timestamp fields will be assumed to be number of milliseconds since epoch (Midnight, Jan 1, 1970 GMT). If specified, the value must match the Java Simple Date Format (for example, MM/dd/yyyy HH:mm:ss for a two-digit month, followed by a two-digit day, followed by a four-digit year, all separated by '/' characters; and then followed by a two-digit hour in 24-hour format, followed by a two-digit minute, followed by a two-digit second, all separated by ':' characters, as in 01/01/2017 18:04:15).
CSV Format | custom
  • Custom Format The format of the CSV is configured by using the properties of this Controller Service, such as Value Separator
  • RFC 4180 CSV data follows the RFC 4180 Specification defined at https://tools.ietf.org/html/rfc4180
  • Microsoft Excel CSV data follows the format used by Microsoft Excel
  • Tab-Delimited CSV data is Tab-Delimited instead of Comma Delimited
  • MySQL Format CSV data follows the format used by MySQL
  • Informix Unload The format used by Informix when issuing the UNLOAD TO file_name command
  • Informix Unload Escape Disabled The format used by Informix when issuing the UNLOAD TO file_name command with escaping disabled
Specifies which "format" the CSV data is in, or specifies if custom formatting should be used.
Value Separator | , | The character that is used to separate values/fields in a CSV Record
Include Header Line | true
  • true
  • false
Specifies whether or not the CSV column names should be written out as the first line.
Quote Character"The character that is used to quote values so that escape characters do not have to be used
Escape Character\The character that is used to escape characters that would otherwise have a specific meaning to the CSV Parser.
Comment MarkerThe character that is used to denote the start of a comment. Any line that begins with this comment will be ignored.
Null StringSpecifies a String that, if present as a value in the CSV, should be considered a null field instead of using the literal value.
Trim Fieldstrue
  • true
  • false
Whether or not white space should be removed from the beginning and end of fields
Quote Mode | MINIMAL
  • Quote All Values All values will be quoted using the configured quote character.
  • Quote Minimal Values will be quoted only if they contain special characters such as the value separator, quote character, or characters in the record separator.
  • Quote Non-Numeric Values Values will be quoted unless the value is a number.
  • Do Not Quote Values Values will not be quoted. Instead, all special characters will be escaped using the configured escape character.
Specifies how fields should be quoted when they are written
Record Separator | Specifies the characters to use in order to separate CSV Records
Include Trailing Delimiter | false
  • true
  • false
If true, a trailing delimiter will be added to each CSV Record that is written. If false, the trailing delimiter will be omitted.
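
The Quote Mode values above line up with Apache Commons CSV's QuoteMode enum (ALL, MINIMAL, NON_NUMERIC, NONE). A sketch of MINIMAL quoting on write, assuming commons-csv 1.4+ (illustrative, not NiFi's implementation):

    import java.io.StringWriter;
    import org.apache.commons.csv.CSVFormat;
    import org.apache.commons.csv.CSVPrinter;
    import org.apache.commons.csv.QuoteMode;

    // MINIMAL quotes a value only when it needs it (e.g., it contains the delimiter).
    public class QuoteModeSketch {
        public static void main(String[] args) throws Exception {
            CSVFormat format = CSVFormat.DEFAULT
                    .withQuoteMode(QuoteMode.MINIMAL)  // Quote Mode
                    .withRecordSeparator("\n");        // Record Separator
            StringWriter out = new StringWriter();
            try (CSVPrinter printer = new CSVPrinter(out, format)) {
                printer.printRecord("1", "John Doe", "note with, comma");
            }
            System.out.print(out); // 1,John Doe,"note with, comma"
        }
    }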

State management:

This component does not store state.

Restricted:

This component is not restricted.
\ No newline at end of file

Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/additionalDetails.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/additionalDetails.html?rev=1811008&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/additionalDetails.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/additionalDetails.html Tue Oct 3 13:30:16 2017
@@ -0,0 +1,405 @@

GrokReader

+ The GrokReader Controller Service provides a means for parsing and structuring input that is made up of unstructured text, such as log files. Grok allows users to add a naming construct to Regular Expressions so that they can be composed into expressions that are easier to manage and work with. This Controller Service consists of one Required Property and a few Optional Properties. The optional property named Grok Pattern File specifies the filename of a file that contains Grok Patterns that can be used for parsing log data. If not specified, a default patterns file will be used; its contents are provided below. There are also properties for specifying the schema to use when parsing data. The schema is not required. However, when data is parsed, a Record is created that contains all of the fields present in the Grok Expression (explained below), and all fields are of type String. If a schema is chosen, a field can be declared to be a different, compatible type, such as a number. Additionally, if the schema does not contain one of the fields in the parsed data, that field will be ignored. This can be used to filter out fields that are not of interest.

+ +

+ The Required Property is named Grok Expression and specifies how to parse each incoming record. This is done by providing a Grok Expression such as: %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{DATA:class} %{GREEDYDATA:message}. This Expression will parse Apache NiFi log messages. This is accomplished by specifying that a line begins with the TIMESTAMP_ISO8601 pattern (which is a Regular Expression defined in the default Grok Patterns File). The value that matches this pattern is then given the name timestamp. As a result, the value that matches this pattern will be assigned to a field named timestamp in the Record produced by this Controller Service.
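
Under the hood, a Grok Expression is shorthand for a Regular Expression with named captures. A simplified Java equivalent of the expression above (the sub-patterns here are abbreviated stand-ins, not the full Grok definitions from the patterns file):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Each %{PATTERN:name} pair becomes a named capture group; the group name
    // becomes the Record field name. 'clazz' stands in for the Grok label 'class'.
    public class GrokAsRegex {
        public static void main(String[] args) {
            Pattern p = Pattern.compile(
                    "(?<timestamp>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3}) " // TIMESTAMP_ISO8601 (simplified)
                  + "(?<level>[A-Z]+) "                                               // LOGLEVEL (simplified)
                  + "\\[(?<thread>.*?)\\] "                                           // DATA:thread
                  + "(?<clazz>\\S+) "                                                 // DATA:class (simplified)
                  + "(?<message>.*)");                                                // GREEDYDATA:message
            Matcher m = p.matcher(
                    "2016-08-04 13:26:35,475 WARN [Curator-Framework-0] "
                  + "org.apache.curator.ConnectionState Connection attempt unsuccessful");
            if (m.matches()) {
                System.out.println(m.group("timestamp")); // 2016-08-04 13:26:35,475
                System.out.println(m.group("level"));     // WARN
            }
        }
    }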

+ +

+ If a line is encountered in the FlowFile that does not match the configured Grok Expression, it is assumed that the line + is part of the previous message. If the line is the start of a stack trace, then the entire stack trace is read in and assigned + to a field named STACK_TRACE. Otherwise, the line is appended to the last field defined in the Grok Expression. This + is done because typically the last field is a 'message' type of field, which can consist of new-lines. +

+ + +

Schemas and Type Coercion

+ +

+ When a record is parsed from incoming data, it is separated into fields. Each of these fields is then looked up against the + configured schema (by field name) in order to determine what the type of the data should be. If the field is not present in + the schema, that field is omitted from the Record. If the field is found in the schema, the data type of the received data + is compared against the data type specified in the schema. If the types match, the value of that field is used as-is. If the + schema indicates that the field should be of a different type, then the Controller Service will attempt to coerce the data + into the type specified by the schema. If the field cannot be coerced into the specified type, an Exception will be thrown. +

+ +

+ The following rules apply when attempting to coerce a field value from one data type to another: +

+ + + +

+ If none of the above rules apply when attempting to coerce a value from one data type to another, the coercion will fail and an Exception + will be thrown. +

+ + + +

+ Examples +

+ +

+ As an example, consider that this Controller Service is configured with the following properties: +

+ + + + + + + + + + + + +
Property Name | Property Value
Grok Expression | %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{DATA:class} %{GREEDYDATA:message}
+ +

+ Additionally, let's consider a FlowFile whose contents consist of the following:

+ +
+2016-08-04 13:26:32,473 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@1fa27ea5 has been interrupted; no longer leader for role 'Cluster Coordinator'
+2016-08-04 13:26:32,474 ERROR [Leader Election Notification Thread-2] o.apache.nifi.controller.FlowController One
+Two
+Three
+org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
+	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
+        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
+Caused by: org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    ... 12 common frames omitted
+2016-08-04 13:26:35,475 WARN [Curator-Framework-0] org.apache.curator.ConnectionState Connection attempt unsuccessful after 3008 (greater than max timeout of 3000). Resetting connection and trying again with a new connection.
+        
+ +

+ In this case, the result will be that this FlowFile consists of 3 different records. The first record will contain the following values: +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
timestamp | 2016-08-04 13:26:32,473
level | INFO
thread | Leader Election Notification Thread-1
class | o.a.n.c.l.e.CuratorLeaderElectionManager
message | org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@1fa27ea5 has been interrupted; no longer leader for role 'Cluster Coordinator'
STACK_TRACE | null
+ +

+ The second record will contain the following values: +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
timestamp | 2016-08-04 13:26:32,474
level | ERROR
thread | Leader Election Notification Thread-2
class | o.apache.nifi.controller.FlowController
message | One
Two
Three
STACK_TRACE |
+org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
+	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
+        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
+        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
+        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
+Caused by: org.apache.nifi.exception.UnitTestException: Testing to ensure we are able to capture stack traces
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.getElectedActiveCoordinatorAddress(NodeClusterCoordinator.java:185)
+    ... 12 common frames omitted
+
+ +

+ The third record will contain the following values: +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Name | Field Value
timestamp | 2016-08-04 13:26:35,475
level | WARN
thread | Curator-Framework-0
class | org.apache.curator.ConnectionState
message | Connection attempt unsuccessful after 3008 (greater than max timeout of 3000). Resetting connection and trying again with a new connection.
STACK_TRACE | null
+ + +

+

+ +

Default Patterns

+ +

+ The following patterns are available in the default Grok Pattern File: +

+ + +
+# Log Levels
+LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)|FINE|FINER|FINEST|CONFIG
+
+# Syslog Dates: Month Day HH:MM:SS
+SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
+PROG (?:[\w._/%-]+)
+SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
+SYSLOGHOST %{IPORHOST}
+SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
+HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}
+
+# Months: January, Feb, 3, 03, 12, December
+MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
+MONTHNUM (?:0?[1-9]|1[0-2])
+MONTHNUM2 (?:0[1-9]|1[0-2])
+MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
+
+# Days: Monday, Tue, Thu, etc...
+DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
+
+# Years?
+YEAR (?>\d\d){1,2}
+HOUR (?:2[0123]|[01]?[0-9])
+MINUTE (?:[0-5][0-9])
+# '60' is a leap second in most time standards and thus is valid.
+SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
+TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
+
+# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
+DATE_US_MONTH_DAY_YEAR %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
+DATE_US_YEAR_MONTH_DAY %{YEAR}[/-]%{MONTHNUM}[/-]%{MONTHDAY}
+DATE_US %{DATE_US_MONTH_DAY_YEAR}|%{DATE_US_YEAR_MONTH_DAY}
+DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
+ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
+ISO8601_SECOND (?:%{SECOND}|60)
+TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
+DATE %{DATE_US}|%{DATE_EU}
+DATESTAMP %{DATE}[- ]%{TIME}
+TZ (?:[PMCE][SD]T|UTC)
+DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
+DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
+DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
+DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}
+
+
+POSINT \b(?:[1-9][0-9]*)\b
+NONNEGINT \b(?:[0-9]+)\b
+WORD \b\w+\b
+NOTSPACE \S+
+SPACE \s*
+DATA .*?
+GREEDYDATA .*
+QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
+UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
+
+USERNAME [a-zA-Z0-9._-]+
+USER %{USERNAME}
+INT (?:[+-]?(?:[0-9]+))
+BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
+NUMBER (?:%{BASE10NUM})
+BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
+UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
+TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
+WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
+URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
+URIHOST %{IPORHOST}(?::%{POSINT:port})?
+# uripath comes loosely from RFC1738, but mostly from what Firefox
+# doesn't turn into %XX
+URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
+#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
+URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
+URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
+URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
+
+# Shortcuts
+QS %{QUOTEDSTRING}
+
+# Log formats
+SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
+COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
+COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
+		
+
Added: nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/index.html
URL: http://svn.apache.org/viewvc/nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/index.html?rev=1811008&view=auto
==============================================================================
--- nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/index.html (added)
+++ nifi/site/trunk/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.4.0/org.apache.nifi.grok.GrokReader/index.html Tue Oct 3 13:30:16 2017
@@ -0,0 +1 @@
+GrokReader

GrokReader

Description:

Provides a mechanism for reading unstructured text data, such as log files, and structuring the data so that it can be processed. The service is configured using Grok patterns. The service reads from a stream of data and splits each message that it finds into a separate Record, each containing the fields that are configured. If a line in the input does not match the expected message pattern, the line of text is either considered to be part of the previous message or is skipped, depending on the configuration, with the exception of stack traces. A stack trace that is found at the end of a log message is considered to be part of the previous message but is added to the 'stackTrace' field of the Record. If a record has no stack trace, it will have a NULL value for the stackTrace field (assuming that the schema does in fact include a stackTrace field of type String). Assuming that the schema includes a '_raw' field of type String, the raw message will be included in the Record.

Additional Details...

Tags:

grok, logs, logfiles, parse, unstructured, text, record, reader, regex, pattern, logstash

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Name | Default Value | Allowable Values | Description
Schema Access Strategy | string-fields-from-grok-expression
  • Use String Fields From Grok Expression The schema will be derived by using the field names present in the Grok Expression. All fields will be assumed to be of type String. Additionally, a field will be included with a name of 'stackTrace' and a type of String.
  • Use 'Schema Name' Property The name of the Schema to use is specified by the 'Schema Name' Property. The value of this property is used to lookup the Schema in the configured Schema Registry service.
  • Use 'Schema Text' Property The text of the Schema itself is specified by the 'Schema Text' Property. The value of this property must be a valid Avro Schema. If Expression Language is used, the value of the 'Schema Text' property must be valid after substituting the expressions.
  • HWX Schema Reference Attributes The FlowFile contains 3 Attributes that will be used to lookup a Schema from the configured Schema Registry: 'schema.identifier', 'schema.version', and 'schema.protocol.version'
  • HWX Content-Encoded Schema Reference The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single byte indicating the 'protocol version', followed by 8 bytes indicating the schema identifier, and finally 4 bytes indicating the schema version, as per the Hortonworks Schema Registry serializers and deserializers, found at https://github.com/hortonworks/registry
  • Confluent Content-Encoded Schema Reference The content of the FlowFile contains a reference to a schema in the Schema Registry service. The reference is encoded as a single 'Magic Byte' followed by 4 bytes representing the identifier of the schema, as outlined at http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html. This is based on version 3.2.x of the Confluent Schema Registry.
Specifies how to obtain the schema that is to be used for interpreting the data.
Schema Registry | Controller Service API: SchemaRegistry | Implementations: AvroSchemaRegistry, HortonworksSchemaRegistry, ConfluentSchemaRegistry
Specifies the Controller Service to use for the Schema Registry
Schema Name | ${schema.name} | Specifies the name of the schema to look up in the configured Schema Registry
Supports Expression Language: true
Schema Text | ${avro.schema} | The text of an Avro-formatted Schema
Supports Expression Language: true
Grok Pattern File | Path to a file that contains Grok Patterns to use for parsing logs. If not specified, a built-in default Pattern file will be used. If specified, all patterns in the given pattern file will override the default patterns. See the Controller Service's Additional Details for a list of pre-defined patterns.
Supports Expression Language: true
Grok Expression | Specifies the format of a log line in Grok format. This allows the Record Reader to understand how to parse each log line. If a line in the log file does not match this pattern, the line will be assumed to belong to the previous log message.
No Match Behavior | append-to-previous-message
  • Append to Previous Message The line of text that does not match the Grok Expression will be appended to the last field of the prior message.
  • Skip Line The line of text that does not match the Grok Expression will be skipped.
If a line of text is encountered and it does not match the given Grok Expression, and it is not part of a stack trace, this property specifies how the text should be processed.

State management:

This component does not store state.

Restricted:

This component is not restricted. \ No newline at end of file