From: Brahma Reddy Battula
To: user@hadoop.apache.org
Subject: RE: issue about pig can not know HDFS HA configuration
Date: Wed, 5 Nov 2014 13:27:51 +0000
Hello Jagannath,

The exception below occurs when the Pig client cannot find the HDFS configuration files. You need to do the following:


Set the PIG_CLASSPATH environment variable to the location of the cluster configuration directory (the directory that contains the core-site.xml, hdfs-site.xml and mapred-site.xml files):
  1. export PIG_CLASSPATH=/mycluster/conf
  2. Set the HADOOP_CONF_DIR environment variable to the location of the cluster configuration directory:
     export HADOOP_CONF_DIR=/mycluster/conf
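The two steps above amount to exporting both variables before launching Pig. A minimal sketch, assuming the configuration directory is /mycluster/conf (a placeholder; substitute your cluster's actual path):

```shell
# Placeholder path: point this at the directory that holds your cluster's
# core-site.xml, hdfs-site.xml and mapred-site.xml.
CONF_DIR=/mycluster/conf

# Hadoop clients read HADOOP_CONF_DIR; Pig additionally picks up
# whatever is on PIG_CLASSPATH, so set both to the same directory.
export HADOOP_CONF_DIR="$CONF_DIR"
export PIG_CLASSPATH="$HADOOP_CONF_DIR"

echo "PIG_CLASSPATH=$PIG_CLASSPATH"
```

With these set before running `pig`, the HA nameservice defined in hdfs-site.xml should be visible to the client, so the logical name no longer falls through to a plain DNS lookup.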




Thanks & Regards

Brahma Reddy Battula

 


From: Jagannath Naidu [jagannath.naidu@fosteringlinux.com]
Sent: Wednesday, November 05, 2014 5:11 PM
To: user@hadoop.apache.org
Subject: Re: issue about pig can not know HDFS HA configuration



On 5 November 2014 14:49, ch huang <justlooks@gmail.com> wrote:
hi, maillist:
   I set up NameNode HA in my HDFS cluster, but Pig does not seem to pick it up. Why?

2014-11-05 14:34:54,710 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area file:/tmp/hadoop-root/mapred/staging/root1861403840/.staging/job_local1861403840_0001
2014-11-05 14:34:54,716 [JobControl] WARN  org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118: java.net.UnknownHostException: develop

An unknown host exception; this can be the issue. Check that the host is discoverable, either from DNS or from /etc/hosts.
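As a quick check, `getent hosts` consults both /etc/hosts and DNS via the system resolver. A small sketch, using the name that failed in the stack trace:

```shell
# "develop" is the name from the stack trace; replace as needed.
HOST=develop

# getent exits non-zero when the name cannot be resolved.
if getent hosts "$HOST" > /dev/null; then
    echo "$HOST resolves"
else
    echo "$HOST does not resolve via /etc/hosts or DNS"
fi
```

Note that if the name is actually an HDFS HA nameservice rather than a real machine, it is not supposed to resolve at all; the client must instead see the HA settings in hdfs-site.xml.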
 
2014-11-05 14:34:54,717 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:DefaultJobName got an error while submitting
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: java.net.UnknownHostException: develop
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
        at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
        at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
        at java.lang.Thread.run(Thread.java:744)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: develop
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:237)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.setInputPath(HCatBaseInputFormat.java:326)
        at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:127)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
        ... 18 more
Caused by: java.net.UnknownHostException: develop
        ... 33 more




--

Jaggu Naidu