nifi-users mailing list archives

From Sumanth Chinthagunta <xmlk...@gmail.com>
Subject Re: Writing files to MapR File system using putHDFS
Date Thu, 16 Jun 2016 05:33:34 GMT

Thanks for the instructions, Andre.
Should we build from the master branch on GitHub, or from 0.6.1?
Hopefully this lets me use the Kite and HBase processors with MapR.
-Sumo
Sent from my iPhone

> On Jun 15, 2016, at 4:53 PM, Andre <andre-lists@fucs.org> wrote:
> 
> Ravi,
> 
> You likely need to build the HBase connectors with the MapR libraries as well.
> 
> Can you please change your ~/.m2/settings.xml so that you have:
> 
>     <profile>
>       <id>mapr</id>
>       <repositories>
>         <repository>
>           <id>mapr-repo</id>
>           <name>MapR Repository</name>
>           <url>http://repository.mapr.com/maven/</url>
>           <releases>
>             <enabled>true</enabled>
>           </releases>
>           <snapshots>
>             <enabled>false</enabled>
>           </snapshots>
>         </repository>
>       </repositories>
>     </profile>
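> 
> (For completeness, a minimal sketch of where that block lives, assuming an otherwise standard ~/.m2/settings.xml; only the wrapper elements are shown:)
> 
>     <settings>
>       <profiles>
>         <!-- the mapr profile above goes here -->
>       </profiles>
>     </settings>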
> 
> and then compile nifi (without Sumo's patch) using:
> 
> mvn <whatever you usually use> -Pmapr -Dhadoop.version=2.7.0-mapr-1602 (assuming you are running 5.1 - otherwise check the docs for the appropriate version)
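> 
> For example, a minimal sketch of a full invocation (assuming a plain build with tests skipped; adjust hadoop.version to your MapR release):
> 
>     mvn clean install -DskipTests -Pmapr -Dhadoop.version=2.7.0-mapr-1602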
> 
> 
> This should cause the HBase connector to be compiled with the MapR libraries as dependencies.
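> 
> Once rebuilt, a rough sketch of pointing the HBase_1_1_2_ClientService at the MapR client configs (property name from the 0.6.x client service; the paths are assumptions for a default MapR client install, so adjust to yours):
> 
>     Hadoop Configuration Files: /opt/mapr/hadoop/hadoop-<version>/etc/hadoop/core-site.xml,/opt/mapr/hbase/hbase-<version>/conf/hbase-site.xml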
> 
> Cheers
> 
>> On Thu, Jun 16, 2016 at 9:32 AM, Ravi Papisetti (rpapiset) <rpapiset@cisco.com> wrote:
>> Thanks, Sumo. We are able to use the 0.6.1 .nar file to connect with MapR 5.1.
>> 
>> Can we use HBase_1_1_2_ClientService to connect with HBase 0.98.12? Again, we are using a MapR cluster (MapR-DB) here. The connection fails with the error below. Any thoughts?
>> 
>> 2016-06-15 18:22:33,131 ERROR [StandardProcessScheduler Thread-8] o.a.n.c.s.StandardControllerServiceNode Failed to invoke @OnEnabled method of HBase_1_1_2_ClientService[id=076c05f1-a2b9-4e6a-803b-c5eb76da1c6d] due to org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=1, exceptions:
>> Wed Jun 15 18:22:33 CDT 2016, RpcRetryingCaller{globalStartTime=1466032935495, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
>> 
>> Thanks,
>> Ravi Papisetti
>> Technical Leader
>> Services Technology Incubation Center
>> rpapiset@cisco.com
>> Phone: +1 512 340 3377
>> 
>> 
>> 
>> From: Sumanth Chinthagunta <xmlking@gmail.com>
>> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
>> Date: Wednesday, June 15, 2016 at 5:16 PM
>> To: "users@nifi.apache.org" <users@nifi.apache.org>
>> Subject: Re: Writing files to MapR File system using putHDFS
>> 
>> Just want to confirm that:
>> After a fresh MapR Client (Secure) installation on the NiFi host, I was able to use PutHDFS from NiFi without any of the core-site.xml changes I described in mapr-client.md.
>> We are now using the custom MapR nifi-hadoop-libraries-nar-0.6.1.nar bundle and /opt/mapr/conf/mapr-clusters.conf.
>> Once Matt's PR is released, ideally we won't have to do any custom configuration to make NiFi work with MapR.
>> 
>> PS: my MapR env is 4.0.2 
>> -Sumo 
>> 
>>> On Jun 14, 2016, at 3:55 PM, Andre <andre-lists@fucs.org> wrote:
>>> 
>>> Ravi,
>>> 
>>> "1. https://github.com/xmlking/mapr-nifi-hadoop-libraries-bundle/blob/master/mapr-client.md
– has a step to configure fs.defaultFS, should I configure the cluster resource manage here?
Or should I get maprfs url from our IT who is supporting the cluster?"
>>> 
>>> Depends on your configuration, but generally the fs.defaultFS can be configured just by using your own settings (and pointing the NiFi processor to your settings).
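>>> 
>>> For example, a minimal core-site.xml sketch on the client side (assuming you want whichever cluster the MapR client resolves from mapr-clusters.conf; maprfs:/// means exactly that):
>>> 
>>>     <configuration>
>>>       <property>
>>>         <name>fs.defaultFS</name>
>>>         <value>maprfs:///</value>
>>>       </property>
>>>     </configuration>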
>>> 
>>> Ideally you should be connecting to a cluster defined under /opt/mapr/conf/mapr-clusters.conf (assuming you haven't played with client configs).
>>> 
>>> Also keep in mind that the mapr-client.md file seems to be focused on insecure MapR clusters. Are you using insecure (terrible choice), MapR Hybrid security (not ideal IMHO), or Kerberos (recommended IMHO)?
>>> 
>>> Cheers
>>> 
>>>> On Wed, Jun 15, 2016 at 3:36 AM, Ravi Papisetti (rpapiset) <rpapiset@cisco.com> wrote:
>>>> Hi,
>>>> 
>>>> I have configured it the way mentioned in the e-mail below, still no luck :-(.
>>>> 
>>>> I have two questions:
>>>> 1. https://github.com/xmlking/mapr-nifi-hadoop-libraries-bundle/blob/master/mapr-client.md has a step to configure fs.defaultFS. Should I configure the cluster resource manager here, or should I get the maprfs URL from the IT team supporting the cluster?
>>>> 2. Matt: What is the MapR PR? Can you please elaborate?
>>>> 
>>>> Appreciate all your responses.
>>>> 
>>>> Thanks,
>>>> Ravi Papisetti
>>>> Technical Leader
>>>> Services Technology Incubation Center
>>>> rpapiset@cisco.com
>>>> Phone: +1 512 340 3377
>>>> 
>>>> 
>>>> From: Matt Burgess <mattyb149@gmail.com>
>>>> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
>>>> Date: Monday, June 13, 2016 at 8:26 PM
>>>> To: "users@nifi.apache.org" <users@nifi.apache.org>
>>>> Subject: Re: Writing files to MapR File system using putHDFS
>>>> 
>>>> Sumo,
>>>> 
>>>> I'll try the MapR PR with your additional settings below. If they work, they'll need to be added to the doc (or ideally, to the profile if possible). That's what I suspected had been missing, but I haven't had a chance to try it yet; will do that shortly :)
>>>> 
>>>> Thanks,
>>>> Matt
>>>> 
>>>> On Jun 13, 2016, at 9:17 PM, Sumanth Chinthagunta <xmlking@gmail.com> wrote:
>>>> 
>>>>> 
>>>>> I had been using a custom-built nifi-hadoop-libraries-nar-0.6.1.nar that worked with MapR 4.0.2.
>>>>> Make sure you add java.security.auth.login.config and follow the MapR client setup on the NiFi server (https://github.com/xmlking/mapr-nifi-hadoop-libraries-bundle/blob/master/mapr-client.md):
>>>>> $NIFI_HOME/conf/bootstrap.conf
>>>>> 
>>>>> java.arg.15=-Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf
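>>>>> 
>>>>> For context, a minimal sketch of how that sits alongside the stock entries (15 is just an unused slot; any free java.arg.N works):
>>>>> 
>>>>> # JVM memory settings (stock defaults)
>>>>> java.arg.2=-Xms512m
>>>>> java.arg.3=-Xmx512m
>>>>> # point JAAS at the MapR client login configuration
>>>>> java.arg.15=-Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf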
>>>>> 
>>>>> I just built the NAR with the MapR 2.7.0-mapr-1602 libs. I haven't tested with MapR 5.1, but you can try it and let us know.
>>>>>  
>>>>> Hadoop bundle for NiFi v0.6.1 and MapR 2.7.0-mapr-1602
>>>>> https://github.com/xmlking/mapr-nifi-hadoop-libraries-bundle/releases
>>>>> 
>>>>> -Sumo
>>>>> 
>>>>> 
>>>>>> On Jun 13, 2016, at 5:57 PM, Bryan Bende <bbende@gmail.com> wrote:
>>>>>> 
>>>>>> I'm not sure if this would make a difference, but typically the configuration resources would be the full paths to core-site.xml and hdfs-site.xml. Wondering if using those instead of yarn-site.xml changes anything.
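>>>>>> 
>>>>>> For instance, a rough sketch of the PutHDFS "Hadoop Configuration Resources" value on a MapR client node (the paths are assumptions for a default install; use whatever your client actually ships):
>>>>>> 
>>>>>>     Hadoop Configuration Resources: /opt/mapr/hadoop/hadoop-<version>/etc/hadoop/core-site.xml,/opt/mapr/hadoop/hadoop-<version>/etc/hadoop/hdfs-site.xml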
>>>>>> 
>>>>>>> On Monday, June 13, 2016, Ravi Papisetti (rpapiset) <rpapiset@cisco.com> wrote:
>>>>>>> Yes, Aldrin. I tried ListHDFS; it gets a similar error complaining the directory doesn't exist.
>>>>>>> 
>>>>>>> NiFi – 0.6.1
>>>>>>> MapR – 5.1
>>>>>>> 
>>>>>>> NiFi is a local standalone instance. The target cluster is enabled with token-based authentication. I am able to execute "hadoop fs -ls <path>" from the CLI on the node where NiFi is installed.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Ravi Papisetti
>>>>>>> Technical Leader
>>>>>>> Services Technology Incubation Center
>>>>>>> rpapiset@cisco.com
>>>>>>> Phone: +1 512 340 3377
>>>>>>> 
>>>>>>> 
>>>>>>> From: Aldrin Piri <aldrinpiri@gmail.com>
>>>>>>> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
>>>>>>> Date: Monday, June 13, 2016 at 6:24 PM
>>>>>>> To: "users@nifi.apache.org" <users@nifi.apache.org>
>>>>>>> Subject: Re: Writing files to MapR File system using putHDFS
>>>>>>> 
>>>>>>> Hi Ravi,
>>>>>>> 
>>>>>>> Could you provide some additional details in terms of both your NiFi environment and the MapR destination?
>>>>>>> 
>>>>>>> Is your NiFi a single instance or clustered? In the case of the latter, is security established for your ZooKeeper ensemble?
>>>>>>> 
>>>>>>> Is your target cluster Kerberized? What version are you running? Have you attempted to use the List/GetHDFS processors? Do they also have errors in reading?
>>>>>>> 
>>>>>>> Thanks!
>>>>>>> --aldrin
>>>>>>> 
>>>>>>>> On Mon, Jun 13, 2016 at 5:19 PM, Ravi Papisetti (rpapiset) <rpapiset@cisco.com> wrote:
>>>>>>>> Thanks Conrad for your reply.
>>>>>>>> 
>>>>>>>> Yes, I have configured PutHDFS with "Remote Owner" and "Remote Group" set to the same values as on HDFS. Also, the NiFi service is started under the same user.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> Ravi Papisetti
>>>>>>>> Technical Leader
>>>>>>>> Services Technology Incubation Center
>>>>>>>> rpapiset@cisco.com
>>>>>>>> Phone: +1 512 340 3377
>>>>>>>> 
>>>>>>>> 
>>>>>>>> From: Conrad Crampton <conrad.crampton@SecData.com>
>>>>>>>> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
>>>>>>>> Date: Monday, June 13, 2016 at 4:01 PM
>>>>>>>> To: "users@nifi.apache.org" <users@nifi.apache.org>
>>>>>>>> Subject: Re: Writing files to MapR File system using putHDFS
>>>>>>>> 
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>> Sounds like a permissions problem. Have you set the Remote Owner and Remote Group settings in the processor appropriately for the HDFS permissions?
>>>>>>>> 
>>>>>>>> Conrad
>>>>>>>> 
>>>>>>>>  
>>>>>>>> 
>>>>>>>> From: "Ravi Papisetti (rpapiset)" <rpapiset@cisco.com>
>>>>>>>> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
>>>>>>>> Date: Monday, 13 June 2016 at 21:25
>>>>>>>> To: "users@nifi.apache.org" <users@nifi.apache.org>,
"dev@nifi.apache.org" <dev@nifi.apache.org>
>>>>>>>> Subject: Writing files to MapR File system using putHDFS
>>>>>>>> 
>>>>>>>>  
>>>>>>>> 
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>>  
>>>>>>>> 
>>>>>>>> We just started exploring Apache NiFi for data onboarding into a MapR distribution. We have configured PutHDFS with the yarn-site.xml from the local MapR client (where the cluster information is provided), configured the "Directory" property with the MapR FS directory to write the files to, and configured NiFi to run as a user that has permission to write to MapR FS. In spite of that, we are getting the error below while writing the file to the given file system path. I suspect NiFi is either not talking to the cluster or talking as the wrong user. I would appreciate it if someone could guide me in troubleshooting this issue, or suggest a solution if we are doing something wrong:
>>>>>>>> 
>>>>>>>> The NiFi workflow is very simple: GetFile is configured to read from the local file system, connected to PutHDFS with yarn-site.xml and the directory information configured.
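>>>>>>>> 
>>>>>>>> (As a rough sketch, the flow and the properties involved look like this; property names are from NiFi 0.6.x and the values are placeholders:)
>>>>>>>> 
>>>>>>>>     GetFile
>>>>>>>>       Input Directory: /path/to/local/staging
>>>>>>>>     PutHDFS
>>>>>>>>       Hadoop Configuration Resources: /path/to/yarn-site.xml (or core-site.xml,hdfs-site.xml)
>>>>>>>>       Directory: /app/DataAnalyticsFramework/catalog/nifi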
>>>>>>>> 
>>>>>>>> 2016-06-13 15:14:36,305 INFO [Timer-Driven Process Thread-2] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=07abcfaa-fa8d-496b-81f0-b1b770672719] Kerberos relogin successful or ticket still valid
>>>>>>>> 2016-06-13 15:14:36,324 ERROR [Timer-Driven Process Thread-2] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=07abcfaa-fa8d-496b-81f0-b1b770672719] Failed to write to HDFS due to java.io.IOException: /app/DataAnalyticsFramework/catalog/nifi could not be created: java.io.IOException: /app/DataAnalyticsFramework/catalog/nifi could not be created
>>>>>>>> 2016-06-13 15:14:36,330 ERROR [Timer-Driven Process Thread-2] o.apache.nifi.processors.hadoop.PutHDFS
>>>>>>>> java.io.IOException: /app/DataAnalyticsFramework/catalog/nifi could not be created
>>>>>>>>         at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:238) ~[nifi-hdfs-processors-0.6.1.jar:0.6.1]
>>>>>>>>         at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-0.6.1.jar:0.6.1]
>>>>>>>>         at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059) [nifi-framework-core-0.6.1.jar:0.6.1]
>>>>>>>>         at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-0.6.1.jar:0.6.1]
>>>>>>>>         at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-0.6.1.jar:0.6.1]
>>>>>>>>         at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123) [nifi-framework-core-0.6.1.jar:0.6.1]
>>>>>>>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_101]
>>>>>>>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_101]
>>>>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_101]
>>>>>>>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_101]
>>>>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_101]
>>>>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_101]
>>>>>>>>         at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101]
>>>>>>>> 
>>>>>>>> Appreciate any help.
>>>>>>>> 
>>>>>>>>  
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> 
>>>>>>>> Ravi Papisetti
>>>>>>>> 
>>>>>>>> Technical Leader
>>>>>>>> 
>>>>>>>> Services Technology Incubation Center
>>>>>>>> 
>>>>>>>> rpapiset@cisco.com
>>>>>>>> 
>>>>>>>> Phone: +1 512 340 3377
>>>>>>>> 
>>>>>>>>  
>>>>>>>> 
>>>>>>>> 
>>>>>>>>  
>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> -- 
>>>>>> Sent from Gmail Mobile
> 
