PutGCSObject
Description:
Puts flow files to a Google Cloud Bucket.
Tags:
google, google cloud, gcs, archive, put
Properties:
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, whether a property supports the NiFi Expression Language, and whether a property is considered "sensitive", meaning that its value will be encrypted. Before entering a value in a sensitive property, ensure that the nifi.properties file has an entry for the property nifi.sensitive.props.key. Dynamic Properties allow the user to specify both the name and value of a property.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
GCP Credentials Provider Service | Controller Service API: GCPCredentialsService Implementation: GCPCredentialsControllerService | | The Controller Service used to obtain Google Cloud Platform credentials. |
Project ID | | | Google Cloud Project ID |
Number of retries | 6 | | How many retry attempts should be made before routing to the failure relationship. |
Bucket | ${gcs.bucket} | | Bucket of the object. Supports Expression Language: true |
Key | ${filename} | | Name of the object. Supports Expression Language: true |
Content Type | ${mime.type} | | Content Type for the file, i.e. text/plain Supports Expression Language: true |
MD5 Hash | | | MD5 Hash (encoded in Base64) of the file for server-side validation (see the checksum sketch after this processor's tables). Supports Expression Language: true |
CRC32C Checksum | | | CRC32C Checksum (encoded in Base64, big-endian order) of the file for server-side validation (see the checksum sketch after this processor's tables). Supports Expression Language: true |
Object ACL | | | Access Control to be attached to the object uploaded. Not providing this will revert to bucket defaults. |
Server Side Encryption Key | | | An AES256 Encryption Key (encoded in base64) for server-side encryption of the object. Sensitive Property: true Supports Expression Language: true |
Overwrite Object | true | | If false, the upload to GCS will succeed only if the object does not exist. |
Content Disposition Type | | | Type of RFC-6266 Content Disposition to be attached to the object |
Dynamic Properties:
Name | Value | Description |
---|---|---|
The name of a User-Defined Metadata field to add to the GCS Object | The value of a User-Defined Metadata field to add to the GCS Object | Allows user-defined metadata to be added to the GCS object as key/value pairs Supports Expression Language: true |
Name | Description |
---|---|
success | FlowFiles are routed to this relationship after a successful Google Cloud Storage operation. |
failure | FlowFiles are routed to this relationship if the Google Cloud Storage operation fails. |
Name | Description |
---|---|
filename | Uses the FlowFile's filename as the filename for the GCS object |
mime.type | Uses the FlowFile's MIME type as the content-type for the GCS object |
Name | Description |
---|---|
gcs.bucket | Bucket of the object. |
gcs.key | Name of the object. |
gcs.size | Size of the object. |
gcs.cache.control | Data cache control of the object. |
gcs.component.count | The number of components which make up the object. |
gcs.content.disposition | The data content disposition of the object. |
gcs.content.encoding | The content encoding of the object. |
gcs.content.language | The content language of the object. |
mime.type | The MIME/Content-Type of the object |
gcs.crc32c | The CRC32C checksum of object's data, encoded in base64 in big-endian order. |
gcs.create.time | The creation time of the object (milliseconds) |
gcs.update.time | The last modification time of the object (milliseconds) |
gcs.encryption.algorithm | The algorithm used to encrypt the object. |
gcs.encryption.sha256 | The SHA256 hash of the key used to encrypt the object |
gcs.etag | The HTTP 1.1 Entity tag for the object. |
gcs.generated.id | The service-generated ID for the object |
gcs.generation | The data generation of the object. |
gcs.md5 | The MD5 hash of the object's data encoded in base64. |
gcs.media.link | The media download link to the object. |
gcs.metageneration | The metageneration of the object. |
gcs.owner | The owner (uploader) of the object. |
gcs.owner.type | The ACL entity type of the uploader of the object. |
gcs.uri | The URI of the object as a string. |
See Also: FetchGCSObject, DeleteGCSObject, ListGCSBucket
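The MD5 Hash and CRC32C Checksum properties above expect Base64-encoded values computed over the file content, with the CRC32C value serialized in big-endian byte order. The following is a minimal Java sketch of producing both encodings for a local file; the file path and class name are hypothetical, and the snippet is only illustrative, not part of the processor.

```java
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.zip.CRC32C;

public class GcsChecksumSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical local file whose content will be uploaded by PutGCSObject.
        byte[] content = Files.readAllBytes(Paths.get("/tmp/example.bin"));

        // MD5 Hash property: MD5 digest of the content, Base64-encoded.
        byte[] md5 = MessageDigest.getInstance("MD5").digest(content);
        String md5Base64 = Base64.getEncoder().encodeToString(md5);

        // CRC32C Checksum property: 4-byte CRC32C value in big-endian order, Base64-encoded.
        CRC32C crc = new CRC32C(); // java.util.zip.CRC32C requires Java 9+
        crc.update(content, 0, content.length);
        byte[] crcBytes = ByteBuffer.allocate(4).putInt((int) crc.getValue()).array(); // big-endian by default
        String crc32cBase64 = Base64.getEncoder().encodeToString(crcBytes);

        System.out.println("MD5 Hash:        " + md5Base64);
        System.out.println("CRC32C Checksum: " + crc32cBase64);
    }
}
```

Values like these could be written to FlowFile attributes upstream and referenced from the MD5 Hash and CRC32C Checksum properties through the Expression Language.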
InvokeGRPC
Sends FlowFiles, optionally with content, to a configurable remote gRPC service endpoint. The remote gRPC service must abide by the service IDL defined in NiFi. gRPC isn't intended to carry large payloads, so this processor should be used only when FlowFile sizes are on the order of megabytes. The default maximum message size is 4MB (a client-channel sketch of this limit follows this processor's tables).
grpc, rpc, client
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Remote gRPC service hostname | Remote host which will be connected to | ||
Remote gRPC service port | Remote port which will be connected to | ||
Max Message Size | 4MB | The maximum size of FlowFiles that this processor will allow to be received. The default is 4MB. If FlowFiles exceed this size, you should consider using another transport mechanism as gRPC isn't designed for heavy payloads. | |
Use SSL/TLS | false | | Whether or not to use SSL/TLS to send the contents of the gRPC messages. |
SSL Context Service | Controller Service API: SSLContextService Implementations: StandardSSLContextService StandardRestrictedSSLContextService | The SSL Context Service used to provide client certificate information for TLS/SSL (https) connections. | |
Send FlowFile Content | true | | Whether or not to include the FlowFile content in the FlowFileRequest to the gRPC service. |
Always Output Response | false | | Will force a response FlowFile to be generated and routed to the 'Response' relationship regardless of what the server status code received is or if the processor is configured to put the server response body in the request attribute. In the latter configuration a request FlowFile with the response body in the attribute and a typical response FlowFile will be emitted to their respective relationships. |
Penalize on "No Retry" | false | | Enabling this property will penalize FlowFiles that are routed to the "No Retry" relationship. |
Name | Description |
---|---|
Original | The original FlowFile will be routed upon success. It will have new attributes detailing the success of the request. |
Failure | The original FlowFile will be routed on any type of connection failure, timeout or general exception. It will have new attributes detailing the request. |
Retry | The original FlowFile will be routed on any status code that can be retried. It will have new attributes detailing the request. |
No Retry | The original FlowFile will be routed on any status code that should NOT be retried. It will have new attributes detailing the request. |
Response | A Response FlowFile will be routed upon success. If the 'Always Output Response' property is true then the response will be sent to this relationship regardless of the status code received. |
Name | Description |
---|---|
invokegrpc.response.code | The response code that is returned (0 = ERROR, 1 = SUCCESS, 2 = RETRY) |
invokegrpc.response.body | The response message that is returned |
invokegrpc.service.host | The remote gRPC service hostname |
invokegrpc.service.port | The remote gRPC service port |
invokegrpc.java.exception.class | The Java exception class raised when the processor fails |
invokegrpc.java.exception.message | The Java exception message raised when the processor fails |
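The Max Message Size property above caps the size of messages the processor's underlying gRPC channel will accept. As a rough illustration, the grpc-java sketch below builds a client channel with an equivalent 4 MB limit; the host, port, plaintext transport, and class name are assumptions for the example, and the generated stub for NiFi's FlowFile service IDL is omitted.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class GrpcChannelSketch {
    public static void main(String[] args) {
        // Hypothetical endpoint, mirroring the "Remote gRPC service hostname"/"port" properties.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("grpc.example.com", 10443)
                .usePlaintext()                          // a TLS channel would be used when "Use SSL/TLS" is true
                .maxInboundMessageSize(4 * 1024 * 1024)  // analogous to "Max Message Size" = 4MB
                .build();

        // A stub generated from the service IDL would normally be created from this channel
        // and used to send FlowFileRequest messages; omitted here.
        channel.shutdownNow();
    }
}
```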
ListenGRPC
Starts a gRPC server and listens on the given port to transform the incoming messages into FlowFiles. The message format is defined by the standard gRPC protobuf IDL provided by NiFi. gRPC isn't intended to carry large payloads, so this processor should be used only when FlowFile sizes are on the order of megabytes. The default maximum message size is 4MB.
ingest, grpc, rpc, listen
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Local gRPC Service Port | The local port that the gRPC service will listen on. | ||
Use TLS | false | | Whether or not to use TLS to send the contents of the gRPC messages. |
SSL Context Service | Controller Service API: RestrictedSSLContextService Implementation: StandardRestrictedSSLContextService | The SSL Context Service used to provide client certificate information for TLS (https) connections. | |
Flow Control Window | 1MB | The initial HTTP/2 flow control window for both new streams and overall connection. Flow-control schemes ensure that streams on the same connection do not destructively interfere with each other. The default is 1MB. | |
Authorized DN Pattern | .* | A Regular Expression to apply against the Distinguished Name of incoming connections. If the Pattern does not match the DN, the connection will be refused. | |
Maximum Message Size | 4MB | The maximum size of FlowFiles that this processor will allow to be received. The default is 4MB. If FlowFiles exceed this size, you should consider using another transport mechanism as gRPC isn't designed for heavy payloads. |
Name | Description |
---|---|
Success | The FlowFile was received successfully. |
Name | Description |
---|---|
listengrpc.remote.user.dn | The DN of the user who sent the FlowFile to this NiFi |
listengrpc.remote.host | The IP of the client who sent the FlowFile to this NiFi |
CreateHadoopSequenceFile
This processor is used to create a Hadoop Sequence File, which essentially is a file of key/value pairs. The key will be a file name and the value will be the flow file content (a sketch of reading such a file with the Hadoop API follows this processor's tables). The processor will take either a merged (a.k.a. packaged) flow file or a singular flow file. Historically, this processor handled the merging by type and size or time prior to creating a SequenceFile output; it no longer does this. If creating a SequenceFile that contains multiple files of the same type is desired, precede this processor with a RouteOnAttribute processor to segregate files of the same type and follow that with a MergeContent processor to bundle up files. If the type of files is not important, just use the MergeContent processor. When using the MergeContent processor, the following Merge Formats are supported by this processor:
Creates Hadoop Sequence Files from incoming flow files
hadoop, sequence file, create, sequencefile
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Hadoop Configuration Resources | A file or comma separated list of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default configuration. Supports Expression Language: true | ||
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Relogin Period | 4 hours | Period of time which should pass before attempting a kerberos relogin Supports Expression Language: true | |
Additional Classpath Resources | A comma-separated list of paths to files and/or directories that will be added to the classpath. When specifying a directory, all files within the directory will be added to the classpath, but further sub-directories will not be included. | ||
Compression type | | Type of compression to use when creating Sequence File | |
Compression codec | NONE | | No Description Provided. |
Name | Description |
---|---|
success | Generated Sequence Files are sent to this relationship |
failure | Incoming files that failed to generate a Sequence File are sent to this relationship |
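Since the SequenceFile produced by this processor stores the file name as the key and the flow file content as the value, the output can be consumed with the standard Hadoop API. The sketch below assumes Text keys and BytesWritable values and a hypothetical output path; it is an illustration, not the processor's own code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class ReadSequenceFileSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical path to a SequenceFile produced by CreateHadoopSequenceFile.
        Path path = new Path("hdfs:///data/flowfiles.seq");

        try (SequenceFile.Reader reader =
                     new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
            Text key = new Text();                     // assumed key type: the original file name
            BytesWritable value = new BytesWritable(); // assumed value type: the flow file content
            while (reader.next(key, value)) {
                System.out.println(key + " -> " + value.getLength() + " bytes");
            }
        }
    }
}
```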
DeleteHDFS
Deletes one or more files or directories from HDFS. The path can be provided as an attribute from an incoming FlowFile, or a statically set path that is periodically removed. If this processor has an incoming connection, it will ignore running on a periodic basis and instead rely on incoming FlowFiles to trigger a delete. Note that you may use a wildcard character to match multiple files or directories (see the glob sketch after this processor's tables). If there are no incoming connections no flowfiles will be transferred to any output relationships. If there is an incoming flowfile then provided there are no detected failures it will be transferred to success, otherwise it will be sent to failure. If knowledge of globbed files deleted is necessary use ListHDFS first to produce a specific list of files to delete.
hadoop, HDFS, delete, remove, filesystem, restricted
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Hadoop Configuration Resources | A file or comma separated list of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default configuration. Supports Expression Language: true | ||
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Relogin Period | 4 hours | Period of time which should pass before attempting a kerberos relogin Supports Expression Language: true | |
Additional Classpath Resources | A comma-separated list of paths to files and/or directories that will be added to the classpath. When specifying a directory, all files within the directory will be added to the classpath, but further sub-directories will not be included. | ||
Path | The HDFS file or directory to delete. A wildcard expression may be used to only delete certain files Supports Expression Language: true | ||
Recursive | true | | Remove contents of a non-empty directory recursively |
Name | Description |
---|---|
success | When an incoming flowfile is used then if there are no errors invoking delete the flowfile will route here. |
failure | When an incoming flowfile is used and there is a failure while deleting then the flowfile will route here. |
Name | Description |
---|---|
hdfs.filename | HDFS file to be deleted |
hdfs.path | HDFS Path specified in the delete request |
hdfs.error.message | HDFS error message related to the hdfs.error.code |
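The wildcard behavior described for the Path property works like a Hadoop glob. As a rough illustration, the sketch below expands a glob with the Hadoop client API and deletes each match; the pattern, the recursive flag, and the class name are assumptions for the example, not the processor's implementation.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsGlobDeleteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // Hypothetical wildcard path, comparable to the processor's "Path" property.
            FileStatus[] matches = fs.globStatus(new Path("/tmp/staging/*.csv"));
            if (matches != null) {
                for (FileStatus status : matches) {
                    // 'true' removes directories recursively, like the "Recursive" property.
                    fs.delete(status.getPath(), true);
                }
            }
        }
    }
}
```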
FetchHDFS
Retrieves a file from HDFS. The content of the incoming FlowFile is replaced by the content of the file in HDFS. The file in HDFS is left intact without any changes being made to it.
hadoop, hdfs, get, ingest, fetch, source, restricted
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Hadoop Configuration Resources | A file or comma separated list of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default configuration (see the configuration sketch after this table). Supports Expression Language: true | ||
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Relogin Period | 4 hours | Period of time which should pass before attempting a kerberos relogin Supports Expression Language: true | |
Additional Classpath Resources | A comma-separated list of paths to files and/or directories that will be added to the classpath. When specifying a directory, all files within the directory will be added to the classpath, but further sub-directories will not be included. | ||
HDFS Filename | ${path}/${filename} | The name of the HDFS file to retrieve Supports Expression Language: true | |
Compression codec | NONE | | No Description Provided. |
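The Hadoop Configuration Resources property used by the HDFS processors points at site configuration files such as core-site.xml and hdfs-site.xml. The sketch below shows how such resources are added to a Hadoop client Configuration; the file locations and class name are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopConfigResourcesSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical locations, comparable to listing them comma separated
        // in the "Hadoop Configuration Resources" property.
        conf.addResource(new Path("file:///etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("file:///etc/hadoop/conf/hdfs-site.xml"));

        // Without such resources, Hadoop falls back to whatever core-site.xml and
        // hdfs-site.xml it finds on the classpath, or to its built-in defaults.
        try (FileSystem fs = FileSystem.get(conf)) {
            System.out.println("Default filesystem: " + fs.getUri());
        }
    }
}
```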
Name | Description |
---|---|
success | FlowFiles will be routed to this relationship once they have been updated with the content of the HDFS file |
comms.failure | FlowFiles will be routed to this relationship if the content of the HDFS file cannot be retrieved due to a communications failure. This generally indicates that the Fetch should be tried again. |
failure | FlowFiles will be routed to this relationship if the content of the HDFS file cannot be retrieved and trying again will likely not be helpful. This would occur, for instance, if the file is not found or if there is a permissions issue. |
Name | Description |
---|---|
hdfs.failure.reason | When a FlowFile is routed to 'failure', this attribute is added indicating why the file could not be fetched from HDFS |
GetHDFS
Fetch files from Hadoop Distributed File System (HDFS) into FlowFiles. This Processor will delete the file from HDFS after fetching it.
hadoop, HDFS, get, fetch, ingest, source, filesystem, restricted
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.
Name | Description |
---|---|
success | All files retrieved from HDFS are transferred to this relationship |
Name | Description |
---|---|
filename | The name of the file that was read from HDFS. |
path | The path is set to the relative path of the file's directory on HDFS. For example, if the Directory property is set to /tmp, then files picked up from /tmp will have the path attribute set to "./". If the Recurse Subdirectories property is set to true and a file is picked up from /tmp/abc/1/2/3, then the path attribute will be set to "abc/1/2/3". |
GetHDFSSequenceFile
Fetch sequence files from Hadoop Distributed File System (HDFS) into FlowFiles.
hadoop, HDFS, get, fetch, ingest, source, sequence file
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Hadoop Configuration Resources | A file or comma separated list of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default configuration. Supports Expression Language: true | ||
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Relogin Period | 4 hours | Period of time which should pass before attempting a kerberos relogin Supports Expression Language: true | |
Additional Classpath Resources | A comma-separated list of paths to files and/or directories that will be added to the classpath. When specifying a directory, all files within the directory will be added to the classpath, but further sub-directories will not be included. | ||
Directory | The HDFS directory from which files should be read Supports Expression Language: true | ||
Recurse Subdirectories | true | | Indicates whether to pull files from subdirectories of the HDFS directory |
Keep Source File | false | | Determines whether to delete the file from HDFS after it has been successfully transferred. If true, the file will be fetched repeatedly. This is intended for testing only. |
File Filter Regex | A Java Regular Expression for filtering Filenames; if a filter is supplied then only files whose names match that Regular Expression will be fetched, otherwise all files will be fetched | ||
Filter Match Name Only | true | | If true then File Filter Regex will match on just the filename, otherwise subdirectory names will be included with filename in the regex comparison |
Ignore Dotted Files | true | | If true, files whose names begin with a dot (".") will be ignored |
Minimum File Age | 0 sec | The minimum age that a file must be in order to be pulled; any file younger than this amount of time (based on last modification date) will be ignored | |
Maximum File Age | The maximum age that a file must be in order to be pulled; any file older than this amount of time (based on last modification date) will be ignored | ||
Polling Interval | 0 sec | Indicates how long to wait between performing directory listings | |
Batch Size | 100 | The maximum number of files to pull in each iteration, based on run schedule. | |
IO Buffer Size | Amount of memory to use to buffer file contents during IO. This overrides the Hadoop Configuration | ||
Compression codec | NONE | | No Description Provided. |
FlowFile Content | VALUE ONLY | | Indicate if the content is to be both the key and value of the Sequence File, or just the value. |
Name | Description |
---|---|
success | All files retrieved from HDFS are transferred to this relationship |
Name | Description |
---|---|
filename | The name of the file that was read from HDFS. |
path | The path is set to the relative path of the file's directory on HDFS. For example, if the Directory property is set to /tmp, then files picked up from /tmp will have the path attribute set to "./". If the Recurse Subdirectories property is set to true and a file is picked up from /tmp/abc/1/2/3, then the path attribute will be set to "abc/1/2/3". |
ListHDFS
Retrieves a listing of files from HDFS. Each time a listing is performed, the files with the latest timestamp will be excluded and picked up during the next execution of the processor. This is done to ensure that we do not miss any files, or produce duplicates, in the cases where files with the same timestamp are written immediately before and after a single execution of the processor (a simplified sketch of this timestamp handling follows the state table below). For each file that is listed in HDFS, this processor creates a FlowFile that represents the HDFS file to be fetched in conjunction with FetchHDFS. This Processor is designed to run on Primary Node only in a cluster. If the primary node changes, the new Primary Node will pick up where the previous node left off without duplicating all of the data. Unlike GetHDFS, this Processor does not delete any data from HDFS.
hadoop, HDFS, get, list, ingest, source, filesystem
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Hadoop Configuration Resources | A file or comma separated list of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default configuration. Supports Expression Language: true | ||
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Relogin Period | 4 hours | Period of time which should pass before attempting a kerberos relogin | |
Additional Classpath Resources | A comma-separated list of paths to files and/or directories that will be added to the classpath. When specifying a directory, all files with in the directory will be added to the classpath, but further sub-directories will not be included. | ||
Distributed Cache Service | Controller Service API: DistributedMapCacheClient Implementations: HBase_1_1_2_ClientMapCacheService RedisDistributedMapCacheClientService DistributedMapCacheClientService | Specifies the Controller Service that should be used to maintain state about what has been pulled from HDFS so that if a new node begins pulling data, it won't duplicate all of the work that has been done. | |
Directory | The HDFS directory from which files should be read Supports Expression Language: true | ||
Recurse Subdirectories | true | | Indicates whether to list files from subdirectories of the HDFS directory |
File Filter | [^\.].* | Only files whose names match the given regular expression will be picked up | |
Minimum File Age | The minimum age that a file must be in order to be pulled; any file younger than this amount of time (based on last modification date) will be ignored | ||
Maximum File Age | The maximum age that a file must be in order to be pulled; any file older than this amount of time (based on last modification date) will be ignored. Minimum value is 100ms. |
Name | Description |
---|---|
success | All FlowFiles are transferred to this relationship |
Name | Description |
---|---|
filename | The name of the file that was read from HDFS. |
path | The path is set to the absolute path of the file's directory on HDFS. For example, if the Directory property is set to /tmp, then files picked up from /tmp will have the path attribute set to "/tmp". If the Recurse Subdirectories property is set to true and a file is picked up from /tmp/abc/1/2/3, then the path attribute will be set to "/tmp/abc/1/2/3". |
hdfs.owner | The user that owns the file in HDFS |
hdfs.group | The group that owns the file in HDFS |
hdfs.lastModified | The timestamp of when the file in HDFS was last modified, as milliseconds since midnight Jan 1, 1970 UTC |
hdfs.length | The number of bytes in the file in HDFS |
hdfs.replication | The number of HDFS replicas for the file |
hdfs.permissions | The permissions for the file in HDFS. This is formatted as 3 characters for the owner, 3 for the group, and 3 for other users. For example rw-rw-r-- |
Scope | Description |
---|---|
CLUSTER | After performing a listing of HDFS files, the latest timestamp of all the files listed and the latest timestamp of all the files transferred are both stored. This allows the Processor to list only files that have been added or modified after this date the next time that the Processor is run, without having to store all of the actual filenames/paths which could lead to performance problems. State is stored across the cluster so that this Processor can be run on Primary Node only and if a new Primary Node is selected, the new node can pick up where the previous node left off, without duplicating the data. |
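A simplified illustration of the timestamp handling described above, not NiFi's actual implementation: files newer than the stored timestamp are emitted, while files carrying the newest timestamp seen in the current listing are held back so they can be picked up, together with any late arrivals sharing that timestamp, on the next run. The FileInfo record, the state field, and the sample data are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class ListingTimestampSketch {
    // Hypothetical stand-in for one entry of an HDFS directory listing.
    record FileInfo(String path, long modifiedMillis) {}

    // Newest timestamp emitted so far (in NiFi this would live in cluster-scoped state).
    static long lastEmitted = 0L;

    static List<FileInfo> selectFilesToEmit(List<FileInfo> listing) {
        long newest = listing.stream().mapToLong(FileInfo::modifiedMillis).max().orElse(0L);
        List<FileInfo> toEmit = new ArrayList<>();
        for (FileInfo f : listing) {
            if (f.modifiedMillis() <= lastEmitted) continue; // already emitted on a previous run
            if (f.modifiedMillis() == newest) continue;      // hold back the newest timestamp until next run
            toEmit.add(f);
        }
        for (FileInfo f : toEmit) {
            lastEmitted = Math.max(lastEmitted, f.modifiedMillis());
        }
        return toEmit;
    }

    public static void main(String[] args) {
        List<FileInfo> listing = List.of(
                new FileInfo("/tmp/a.txt", 1_000L),
                new FileInfo("/tmp/b.txt", 2_000L),
                new FileInfo("/tmp/c.txt", 2_000L));
        // a.txt is emitted now; b.txt and c.txt share the newest timestamp and wait for the next listing.
        selectFilesToEmit(listing).forEach(f -> System.out.println(f.path()));
    }
}
```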
PutHDFS
Write FlowFile data to Hadoop Distributed File System (HDFS)
hadoop, HDFS, put, copy, filesystem, restricted
In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.
Name | Default Value | Allowable Values | Description |
---|---|---|---|
Hadoop Configuration Resources | A file or comma separated list of files which contains the Hadoop file system configuration. Without this, Hadoop will search the classpath for a 'core-site.xml' and 'hdfs-site.xml' file or will revert to a default configuration. Supports Expression Language: true | ||
Kerberos Principal | Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Keytab | Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties Supports Expression Language: true | ||
Kerberos Relogin Period | 4 hours | Period of time which should pass before attempting a kerberos relogin Supports Expression Language: true | |
Additional Classpath Resources | A comma-separated list of paths to files and/or directories that will be added to the classpath. When specifying a directory, all files within the directory will be added to the classpath, but further sub-directories will not be included. | ||
Directory | The parent HDFS directory to which files should be written. The directory will be created if it doesn't exist. Supports Expression Language: true | ||
Conflict Resolution Strategy | fail | | Indicates what should happen when a file with the same name already exists in the output directory |
Block Size | Size of each block as written to HDFS. This overrides the Hadoop Configuration | ||
IO Buffer Size | Amount of memory to use to buffer file contents during IO. This overrides the Hadoop Configuration | ||
Replication | Number of times that HDFS will replicate each file. This overrides the Hadoop Configuration | ||
Permissions umask | A umask represented as an octal number which determines the permissions of files written to HDFS. This overrides the Hadoop Configuration dfs.umaskmode (see the umask sketch at the end of this section). | ||
Remote Owner | Changes the owner of the HDFS file to this value after it is written. This only works if NiFi is running as a user that has HDFS super user privilege to change owner Supports Expression Language: true | ||
Remote Group | Changes the group of the HDFS file to this value after it is written. This only works if NiFi is running as a user that has HDFS super user privilege to change group Supports Expression Language: true | ||
Compression codec | NONE | | No Description Provided. |
Name | Description |
---|---|
success | Files that have been successfully written to HDFS are transferred to this relationship |
failure | Files that could not be written to HDFS for some reason are transferred to this relationship |
Name | Description |
---|---|
filename | The name of the file written to HDFS comes from the value of this attribute. |
Name | Description |
---|---|
filename | The name of the file written to HDFS is stored in this attribute. |
absolute.hdfs.path | The absolute path to the file on HDFS is stored in this attribute. |
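The Permissions umask property above is an octal mask applied to the default permissions of newly written files. A small sketch of the arithmetic, assuming the common default of 666 (rw-rw-rw-) for new HDFS files; the chosen umask value and class name are only examples.

```java
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskSketch {
    public static void main(String[] args) {
        int defaultFilePerms = 0666; // assumed default for newly created HDFS files
        int umask = 0022;            // example value for the "Permissions umask" property

        int effective = defaultFilePerms & ~umask; // 0644
        System.out.printf("effective permissions: %o (%s)%n",
                effective, new FsPermission((short) effective));
    }
}
```

With a umask of 022, a file created with the default 666 ends up as 644, i.e. rw-r--r--.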