nifi-issues mailing list archives

From "Jeff Storck (JIRA)" <>
Subject [jira] [Commented] (NIFI-3472) PutHDFS Kerberos relogin not working (tgt) after ticket expires
Date Mon, 07 Aug 2017 13:34:00 GMT


Jeff Storck commented on NIFI-3472:

[~jomach] The fix I'm working on for this JIRA will be applied to all of NiFi's HDFS processors,
and eventually the HBase and Hive processors. I'll update the JIRA to reflect this.  Thanks!
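Fixes of this kind in Hadoop clients are typically built on the `UserGroupInformation` API: before each HDFS operation, check whether the TGT is near expiry and relogin from the keytab if so. The sketch below illustrates that check-before-use pattern only; the `KerberosRenewalHelper` class, its 80% renewal window, and the commented-out processor wiring are hypothetical illustrations, not the actual NIFI-3472 patch.

```java
// Sketch of the relogin pattern discussed in this issue: track when the last
// keytab login happened and refresh the TGT before the KDC-issued expiry,
// rather than waiting for "Failed to find any Kerberos tgt" at RPC time.
public final class KerberosRenewalHelper {

    // Relogin once 80% of the ticket lifetime has elapsed, so the refresh
    // happens well before the ticket actually expires. (Hypothetical policy.)
    private static final double RENEWAL_WINDOW = 0.80;

    public static boolean shouldRelogin(long lastLoginMillis,
                                        long nowMillis,
                                        long ticketLifetimeMillis) {
        final long elapsed = nowMillis - lastLoginMillis;
        return elapsed >= (long) (ticketLifetimeMillis * RENEWAL_WINDOW);
    }

    // In a real processor the check would wrap the Hadoop client call, e.g.:
    //
    //   UserGroupInformation ugi = UserGroupInformation
    //       .loginUserFromKeytabAndReturnUGI(principal, keytabPath);
    //   ...
    //   ugi.checkTGTAndReloginFromKeytab(); // no-op while the TGT is fresh
    //   ugi.doAs((PrivilegedExceptionAction<FileStatus>) () ->
    //       fileSystem.getFileStatus(path));
}
```

This also explains the workaround noted at the end of the issue: stopping and starting the processor forces a fresh keytab login, which is exactly what a periodic relogin check automates.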

> PutHDFS Kerberos relogin not working (tgt) after ticket expires
> ---------------------------------------------------------------
>                 Key: NIFI-3472
>                 URL:
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.0.0, 1.1.0, 1.1.1, 1.0.1
>            Reporter: Jeff Storck
>            Assignee: Jeff Storck
> PutHDFS is not able to relogin if the ticket expires.
> NiFi, running locally as standalone, was sending files to HDFS.  After suspending the
> system for the weekend, when the flow attempted to continue to process flowfiles, the
> following exception occurred:
> {code}2017-02-13 11:59:53,460 WARN [Timer-Driven Process Thread-10] org.apache.hadoop.ipc.Client
> Exception encountered while connecting to the server :
> GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level:
> Failed to find any Kerberos tgt)]
> 2017-02-13 11:59:53,463 INFO [Timer-Driven Process Thread-10]
> Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over [host:port]
> after 3 fail over attempts. Trying to fail over immediately.
> Failed on local exception:
> GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level:
> Failed to find any Kerberos tgt)]; Host Details : local host is: "[host:port]"; destination
> host is: [host:port];
> 	at ~[hadoop-common-2.7.3.jar:na]
> 	at ~[hadoop-common-2.7.3.jar:na]
> 	at ~[hadoop-common-2.7.3.jar:na]
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(
> 	at com.sun.proxy.$Proxy136.getFileInfo(Unknown Source) ~[na:na]
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(
> 	at sun.reflect.GeneratedMethodAccessor386.invoke(Unknown Source) ~[na:na]
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> 	at java.lang.reflect.Method.invoke( ~[na:1.8.0_102]
> 	at
> 	at
> 	at com.sun.proxy.$Proxy137.getFileInfo(Unknown Source) [na:na]
> 	at org.apache.hadoop.hdfs.DFSClient.getFileInfo( [hadoop-hdfs-2.7.3.jar:na]
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(
> 	at org.apache.nifi.processors.hadoop.PutHDFS$ [nifi-hdfs-processors-1.1.1.jar:1.1.1]
> 	at Method) [na:1.8.0_102]
> 	at [na:1.8.0_102]
> 	at
> 	at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger( [nifi-hdfs-processors-1.1.1.jar:1.1.1]
> 	at org.apache.nifi.processor.AbstractProcessor.onTrigger(
> 	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(
> 	at
> 	at
> 	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$
> 	at java.util.concurrent.Executors$ [na:1.8.0_102]
> 	at java.util.concurrent.FutureTask.runAndReset( [na:1.8.0_102]
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker( [na:1.8.0_102]
> 	at java.util.concurrent.ThreadPoolExecutor$ [na:1.8.0_102]
> 	at [na:1.8.0_102]{code}
> After stopping and starting the PutHDFS processor, flowfiles were able to be transferred
> to HDFS.

This message was sent by Atlassian JIRA
