Subject: Re: Hive Query having virtual column INPUT__FILE__NAME in where clause gives exception
From: Jitendra Kumar Singh <jksingh26jun@gmail.com>
Date: Mon, 17 Jun 2013 12:31:14 +0530
To: user@hive.apache.org

Thanks, guys, for the replies.

The following query also did not work:

hive> select count(*), filename from (select INPUT__FILE__NAME as filename from netflow) tmp where filename='vzb.1351794600.0' group by filename;

FAILED: SemanticException java.lang.RuntimeException: cannot find field input__file__name from [org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@1d264bf5, org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@3d44d0c6

I forgot to mention that my table uses partitions.

Do you guys know any other way to filter files? One idea I have is sketched below.
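The workaround I can think of (untested on my side, so treat it as a sketch; the vzb_filtered directory name is just a placeholder) is to copy only the wanted files into a dedicated HDFS directory and point a second external table at it, so no filtering is needed at query time:

$ hadoop fs -mkdir /data/jk/vzb_filtered
$ hadoop fs -cp /data/jk/vzb/vzb.1351794600.0 /data/jk/vzb_filtered/

hive> -- same schema as netflow, but reading only the copied files
hive> CREATE EXTERNAL TABLE netflow_filtered LIKE netflow
    > LOCATION 'hdfs://192.168.0.224:9000/data/jk/vzb_filtered';

But since the table is partitioned, I would also have to recreate the partition directories and run ADD PARTITION for each one, so I am hoping there is a cleaner way.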
Thanks and Regards,
--
Jitendra Kumar Singh
Mobile: (+91) 9891314709


On Sat, Jun 15, 2013 at 12:33 PM, Navis 류승우 <navis.ryu@nexr.com> wrote:

> Firstly, the exception seems to be
> https://issues.apache.org/jira/browse/HIVE-3926.
>
> Secondly, file selection on a virtual column (file name, etc.) is
> https://issues.apache.org/jira/browse/HIVE-1662.
>
> Neither of them is fixed yet.
>
> 2013/6/14 Nitin Pawar <nitinpawar432@gmail.com>:
> > Jitendra,
> > I am really not sure you can use virtual columns in a where clause (I never
> > tried it, so I may be wrong as well).
> >
> > Can you try executing your query as below?
> >
> > select count(*), filename from (select INPUT__FILE__NAME as filename from
> > netflow) tmp where filename='vzb.1351794600.0';
> >
> > Please check the query syntax; I am only giving an idea and have not
> > verified the query.
> >
> >
> > On Fri, Jun 14, 2013 at 4:57 PM, Jitendra Kumar Singh
> > <jksingh26jun@gmail.com> wrote:
> >>
> >> Hi Guys,
> >>
> >> Executing a hive query with a filter on the virtual column INPUT__FILE__NAME
> >> results in the following exception:
> >>
> >> hive> select count(*) from netflow where
> >> INPUT__FILE__NAME='vzb.1351794600.0';
> >>
> >> FAILED: SemanticException java.lang.RuntimeException: cannot find field
> >> input__file__name from
> >> [org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@1d264bf5,
> >> org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@3d44d0c6,
> >> .
> >> .
> >> .
> >> org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyField@7e6bc5aa]
> >>
> >> This error is different from the one we get when the column name is wrong:
> >>
> >> hive> select count(*) from netflow where
> >> INPUT__FILE__NAM='vzb.1351794600.0';
> >>
> >> FAILED: SemanticException [Error 10004]: Line 1:35 Invalid table alias or
> >> column reference 'INPUT__FILE__NAM': (possible column names are: first,
> >> last, ....)
> >>
> >> But using this virtual column in the select clause works fine:
> >>
> >> hive> select INPUT__FILE__NAME from netflow group by INPUT__FILE__NAME;
> >>
> >> Total MapReduce jobs = 1
> >> Launching Job 1 out of 1
> >> Number of reduce tasks not specified. Estimated from input data size: 4
> >> In order to change the average load for a reducer (in bytes):
> >>   set hive.exec.reducers.bytes.per.reducer=<number>
> >> In order to limit the maximum number of reducers:
> >>   set hive.exec.reducers.max=<number>
> >> In order to set a constant number of reducers:
> >>   set mapred.reduce.tasks=<number>
> >> Starting Job = job_201306041359_0006, Tracking URL =
> >> http://192.168.0.224:50030/jobdetails.jsp?jobid=job_201306041359_0006
> >> Kill Command = /opt/hadoop/bin/../bin/hadoop job -kill job_201306041359_0006
> >> Hadoop job information for Stage-1: number of mappers: 12; number of reducers: 4
> >> 2013-06-14 18:20:10,265 Stage-1 map = 0%,  reduce = 0%
> >> 2013-06-14 18:20:33,363 Stage-1 map = 8%,  reduce = 0%
> >> .
> >> .
> >> .
> >> 2013-06-14 18:21:15,554 Stage-1 map = 100%,  reduce = 100%
> >> Ended Job = job_201306041359_0006
> >> MapReduce Jobs Launched:
> >> Job 0: Map: 12  Reduce: 4  HDFS Read: 3107826046 HDFS Write: 55 SUCCESS
> >> Total MapReduce CPU Time Spent: 0 msec
> >> OK
> >> hdfs://192.168.0.224:9000/data/jk/vzb/vzb.1351794600.0
> >> Time taken: 78.467 seconds
> >>
> >> I am trying to create an external hive table on already present HDFS data,
> >> and I have extra files in the folder that I want to ignore. This is similar
> >> to what is asked and suggested in the following Stack Overflow questions:
> >> "how to make hive take only specific files as input from hdfs folder when
> >> creating an external table in hive" and "can I point the location to
> >> specific files in a directory?" The table definition is sketched below.
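> >> For reference, it looks roughly like this (the column types, delimiter,
> >> and row format are simplified placeholders, not my real schema; 'first'
> >> and 'last' and the LOCATION path are real):
> >>
> >> CREATE EXTERNAL TABLE netflow (
> >>   `first` STRING,   -- real column name, placeholder type
> >>   `last`  STRING    -- real column name; many more columns elided
> >> )
> >> ROW FORMAT DELIMITED FIELDS TERMINATED BY ','   -- assumed row format
> >> LOCATION 'hdfs://192.168.0.224:9000/data/jk/vzb';
> >>
> >> As far as I know, LOCATION can only point at a directory, not at individual
> >> files, so the extra files under /data/jk/vzb get picked up as well, which is
> >> why I was hoping to filter on INPUT__FILE__NAME instead.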
> >>
> >> Any help would be appreciated. The full stack trace I am getting is as follows:
> >>
> >> 2013-06-14 15:01:32,608 ERROR ql.Driver
> >> (SessionState.java:printError(401)) - FAILED: SemanticException
> >> java.lang.RuntimeException: cannot find field input__
> >>
> >> org.apache.hadoop.hive.ql.parse.SemanticException:
> >> java.lang.RuntimeException: cannot find field input__file__name from
> >> [org.apache.hadoop.hive.serde2.object
> >>         at org.apache.hadoop.hive.ql.optimizer.pcr.PcrOpProcFactory$FilterPCR.process(PcrOpProcFactory.java:122)
> >>         at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
> >>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:87)
> >>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:124)
> >>         at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:101)
> >>         at org.apache.hadoop.hive.ql.optimizer.pcr.PartitionConditionRemover.transform(PartitionConditionRemover.java:86)
> >>         at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:102)
> >>         at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8163)
> >>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> >>         at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:50)
> >>         at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> >>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
> >>         at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
> >>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:893)
> >>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
> >>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> >>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
> >>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755)
> >>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> >>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> >>         at java.lang.reflect.Method.invoke(Unknown Source)
> >>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> >> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> >> java.lang.RuntimeException: cannot find field input__file__name from
> >> [org.apache.hadoop.hive.ser
> >>         at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:231)
> >>         at org.apache.hadoop.hive.ql.optimizer.pcr.PcrOpProcFactory$FilterPCR.process(PcrOpProcFactory.java:112)
> >>         ... 23 more
> >> Caused by: java.lang.RuntimeException: cannot find field input__file__name
> >> from
> >> [org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector$MyF
> >>         at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:344)
> >>         at org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector.getStructFieldRef(UnionStructObjectInspector.java:100)
> >>         at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.init
> >>
> >>
> >> Thanks and Regards,
> >> --
> >> Jitendra Kumar Singh
> >> Mobile: (+91) 9891314709
>
>
> --
> Nitin Pawar