Date: Tue, 31 Jan 2012 13:24:11 +0000 (UTC)
From: "Kevin Lion (Updated) (JIRA)"
To: pig-dev@hadoop.apache.org
Subject: [jira] [Updated] (PIG-2495) Using merge JOIN from a HBaseStorage produces an error

     [ https://issues.apache.org/jira/browse/PIG-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Lion updated PIG-2495:
----------------------------

    Summary: Using merge JOIN from a HBaseStorage produces an error  (was: Using merge JOIN from a HBaseStorage oproduce an error)

> Using merge JOIN from a HBaseStorage produces an error
> ------------------------------------------------------
>
>                 Key: PIG-2495
>                 URL: https://issues.apache.org/jira/browse/PIG-2495
>             Project: Pig
>          Issue Type: Bug
>    Affects Versions: 0.9.1, 0.9.2
>         Environment: HBase 0.90.3, Hadoop 0.20-append
>            Reporter: Kevin Lion
>
> To increase the performance of my computation, I would like to use a merge join between two tables, but it produces an error.
> Here is the script:
> {noformat}
> start_sessions = LOAD 'hbase://startSession.bea000000.dev.ubithere.com' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('meta:infoid meta:imei meta:timestamp', '-loadKey') AS (sid:chararray, infoid:chararray, imei:chararray, start:long);
> end_sessions = LOAD 'hbase://endSession.bea000000.dev.ubithere.com' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('meta:timestamp meta:locid', '-loadKey') AS (sid:chararray, end:long, locid:chararray);
> sessions = JOIN start_sessions BY sid, end_sessions BY sid USING 'merge';
> STORE sessions INTO 'sessionsTest' USING PigStorage ('*');
> {noformat}
> Here is the result of this script:
> {noformat}
> 2012-01-30 16:12:43,920 [main] INFO org.apache.pig.Main - Logging error messages to: /root/pig_1327939963919.log
> 2012-01-30 16:12:44,025 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://lxc233:9000
> 2012-01-30 16:12:44,102 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: lxc233:9001
> 2012-01-30 16:12:44,760 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: MERGE_JION
> 2012-01-30 16:12:44,923 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
> 2012-01-30 16:12:44,982 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 2
> 2012-01-30 16:12:44,982 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 2
> 2012-01-30 16:12:45,001 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
> 2012-01-30 16:12:45,006 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=lxc233.machine.com
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.6.0_22
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Sun Microsystems Inc.
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.22/jre
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/opt/hadoop/conf:/usr/lib/jvm/java-6-sun/jre/lib/tools.jar:/opt/hadoop:/opt/hadoop/hadoop-0.20-append-core.jar:/opt/hadoop/lib/commons-cli-1.2.jar:/opt/hadoop/lib/commons-codec-1.3.jar:/opt/hadoop/lib/commons-el-1.0.jar:/opt/hadoop/lib/commons-httpclient-3.0.1.jar:/opt/hadoop/lib/commons-logging-1.0.4.jar:/opt/hadoop/lib/commons-logging-api-1.0.4.jar:/opt/hadoop/lib/commons-net-1.4.1.jar:/opt/hadoop/lib/core-3.1.1.jar:/opt/hadoop/lib/hadoop-fairscheduler-0.20-append.jar:/opt/hadoop/lib/hadoop-gpl-compression-0.2.0-dev.jar:/opt/hadoop/lib/hadoop-lzo-0.4.14.jar:/opt/hadoop/lib/hsqldb-1.8.0.10.jar:/opt/hadoop/lib/jasper-compiler-5.5.12.jar:/opt/hadoop/lib/jasper-runtime-5.5.12.jar:/opt/hadoop/lib/jets3t-0.6.1.jar:/opt/hadoop/lib/jetty-6.1.14.jar:/opt/hadoop/lib/jetty-util-6.1.14.jar:/opt/hadoop/lib/junit-4.5.jar:/opt/hadoop/lib/kfs-0.2.2.jar:/opt/hadoop/lib/log4j-1.2.15.jar:/opt/hadoop/lib/mockito-all-1.8.2.jar:/opt/hadoop/lib/oro-2.0.8.jar:/opt/hadoop/lib/servlet-api-2.5-6.1.14.jar:/opt/hadoop/lib/slf4j-api-1.4.3.jar:/opt/hadoop/lib/slf4j-log4j12-1.4.3.jar:/opt/hadoop/lib/xmlenc-0.52.jar:/opt/hadoop/lib/jsp-2.1/jsp-2.1.jar:/opt/hadoop/lib/jsp-2.1/jsp-api-2.1.jar:/opt/pig/bin/../conf:/usr/lib/jvm/java-6-sun/jre/lib/tools.jar:/opt/hadoop/lib/commons-codec-1.3.jar:/opt/hbase/lib/guava-r06.jar:/opt/hbase/hbase-0.90.3.jar:/opt/hadoop/lib/log4j-1.2.15.jar:/opt/hadoop/lib/commons-cli-1.2.jar:/opt/hadoop/lib/commons-logging-1.0.4.jar:/opt/pig/pig-withouthadoop.jar:/opt/hadoop/conf_computation:/opt/hbase/conf:/opt/pig/bin/../lib/hadoop-0.20-append-core.jar:/opt/pig/bin/../lib/hadoop-gpl-compression-0.2.0-dev.jar:/opt/pig/bin/../lib/hbase-0.90.3.jar:/opt/pig/bin/../lib/pigudfs.jar:/opt/pig/bin/../lib/zookeeper-3.3.2.jar:/opt/pig/bin/../pig-withouthadoop.jar:
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/opt/hadoop/lib/native/Linux-amd64-64
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=2.6.32-5-amd64
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/root
> 2012-01-30 16:12:45,039 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=lxc233.machine.com:2222,lxc231.machine.com:2222,lxc234.machine.com:2222 sessionTimeout=180000 watcher=hconnection
> 2012-01-30 16:12:45,048 [main-SendThread()] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server lxc231.machine.com/192.168.1.231:2222
> 2012-01-30 16:12:45,049 [main-SendThread(lxc231.machine.com:2222)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to lxc231.machine.com/192.168.1.231:2222, initiating session
> 2012-01-30 16:12:45,081 [main-SendThread(lxc231.machine.com:2222)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server lxc231.machine.com/192.168.1.231:2222, sessionid = 0x134c294771a073f, negotiated timeout = 180000
> 2012-01-30 16:12:46,569 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
> 2012-01-30 16:12:46,590 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
> 2012-01-30 16:12:46,870 [Thread-13] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=lxc233.machine.com:2222,lxc231.machine.com:2222,lxc234.machine.com:2222 sessionTimeout=180000 watcher=hconnection
> 2012-01-30 16:12:46,871 [Thread-13-SendThread()] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server lxc233.machine.com/192.168.1.233:2222
> 2012-01-30 16:12:46,871 [Thread-13-SendThread(lxc233.machine.com:2222)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to lxc233.machine.com/192.168.1.233:2222, initiating session
> 2012-01-30 16:12:46,872 [Thread-13-SendThread(lxc233.machine.com:2222)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server lxc233.machine.com/192.168.1.233:2222, sessionid = 0x2343822449935e1, negotiated timeout = 180000
> 2012-01-30 16:12:46,880 [Thread-13] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=lxc233.machine.com:2222,lxc231.machine.com:2222,lxc234.machine.com:2222 sessionTimeout=180000 watcher=hconnection
> 2012-01-30 16:12:46,880 [Thread-13-SendThread()] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server lxc233.machine.com/192.168.1.233:2222
> 2012-01-30 16:12:46,880 [Thread-13-SendThread(lxc233.machine.com:2222)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to lxc233.machine.com/192.168.1.233:2222, initiating session
> 2012-01-30 16:12:46,882 [Thread-13-SendThread(lxc233.machine.com:2222)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server lxc233.machine.com/192.168.1.233:2222, sessionid = 0x2343822449935e2, negotiated timeout = 180000
> 2012-01-30 16:12:47,091 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
> 2012-01-30 16:12:47,703 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201201201546_0890
> 2012-01-30 16:12:47,703 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://lxc233:50030/jobdetails.jsp?jobid=job_201201201546_0890
> 2012-01-30 16:12:55,723 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 25% complete
> 2012-01-30 16:13:49,312 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 33% complete
> 2012-01-30 16:13:55,322 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
> 2012-01-30 16:13:57,327 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201201201546_0890 has failed! Stop running all dependent jobs
> 2012-01-30 16:13:57,327 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
> 2012-01-30 16:13:57,337 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR: Could create instance of class org.apache.pig.backend.hadoop.hbase.HBaseStorage$1, while attempting to de-serialize it. (no default constructor ?)
> 2012-01-30 16:13:57,337 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
> 2012-01-30 16:13:57,338 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
> HadoopVersion    PigVersion    UserId    StartedAt    FinishedAt    Features
> 0.20-append    0.9.2-SNAPSHOT    root    2012-01-30 16:12:44    2012-01-30 16:13:57    MERGE_JION
> Failed!
> Failed Jobs:
> JobId    Alias    Feature    Message    Outputs
> job_201201201546_0890    end_sessions    INDEXER    Message: Job failed!
> Input(s):
> Failed to read data from "hbase://endSession.bea000000.dev.ubithere.com"
> Output(s):
> Counters:
> Total records written : 0
> Total bytes written : 0
> Spillable Memory Manager spill count : 0
> Total bags proactively spilled: 0
> Total records proactively spilled: 0
> Job DAG:
> job_201201201546_0890    ->    null,
> null
> 2012-01-30 16:13:57,338 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> 2012-01-30 16:13:57,339 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2997: Encountered IOException. Could create instance of class org.apache.pig.backend.hadoop.hbase.HBaseStorage$1, while attempting to de-serialize it. (no default constructor ?)
> Details at logfile: /root/pig_1327939963919.log
> 2012-01-30 16:13:57,339 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message
> Details at logfile: /root/pig_1327939963919.log
> {noformat}
> And here is the result in the log file:
> {noformat}
> Backend error message
> ---------------------
> java.io.IOException: Could create instance of class org.apache.pig.backend.hadoop.hbase.HBaseStorage$1, while attempting to de-serialize it. (no default constructor ?)
>     at org.apache.pig.data.BinInterSedes.readWritable(BinInterSedes.java:235)
>     at org.apache.pig.data.BinInterSedes.readDatum(BinInterSedes.java:336)
>     at org.apache.pig.data.BinInterSedes.readDatum(BinInterSedes.java:251)
>     at org.apache.pig.data.BinInterSedes.addColsToTuple(BinInterSedes.java:556)
>     at org.apache.pig.data.BinSedesTuple.readFields(BinSedesTuple.java:64)
>     at org.apache.pig.impl.io.PigNullableWritable.readFields(PigNullableWritable.java:114)
>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>     at org.apache.hadoop.mapreduce.ReduceContext.nextKeyValue(ReduceContext.java:113)
>     at org.apache.hadoop.mapreduce.ReduceContext.nextKey(ReduceContext.java:92)
>     at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:175)
>     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>     at org.apache.hadoop.mapred.Child.main(Child.java:170)
> Caused by: java.lang.InstantiationException: org.apache.pig.backend.hadoop.hbase.HBaseStorage$1
>     at java.lang.Class.newInstance0(Class.java:340)
>     at java.lang.Class.newInstance(Class.java:308)
>     at org.apache.pig.data.BinInterSedes.readWritable(BinInterSedes.java:231)
>     ... 13 more
> Pig Stack Trace
> ---------------
> ERROR 2997: Encountered IOException. Could create instance of class org.apache.pig.backend.hadoop.hbase.HBaseStorage$1, while attempting to de-serialize it. (no default constructor ?)
> java.io.IOException: Could create instance of class org.apache.pig.backend.hadoop.hbase.HBaseStorage$1, while attempting to de-serialize it. (no default constructor ?)
>     at org.apache.pig.data.BinInterSedes.readWritable(BinInterSedes.java:235)
>     at org.apache.pig.data.BinInterSedes.readDatum(BinInterSedes.java:336)
>     at org.apache.pig.data.BinInterSedes.readDatum(BinInterSedes.java:251)
>     at org.apache.pig.data.BinInterSedes.addColsToTuple(BinInterSedes.java:556)
>     at org.apache.pig.data.BinSedesTuple.readFields(BinSedesTuple.java:64)
>     at org.apache.pig.impl.io.PigNullableWritable.readFields(PigNullableWritable.java:114)
>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
>     at org.apache.hadoop.mapreduce.ReduceContext.nextKeyValue(ReduceContext.java:113)
>     at org.apache.hadoop.mapreduce.ReduceContext.nextKey(ReduceContext.java:92)
>     at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:175)
>     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>     at org.apache.hadoop.mapred.Child.main(Child.java:170)
> Caused by: java.lang.InstantiationException: org.apache.pig.backend.hadoop.hbase.HBaseStorage$1
>     at java.lang.Class.newInstance0(Class.java:340)
>     at java.lang.Class.newInstance(Class.java:308)
>     at org.apache.pig.data.BinInterSedes.readWritable(BinInterSedes.java:231)
> ================================================================================
> Pig Stack Trace
> ---------------
> ERROR 2244: Job failed, hadoop does not return any error message
> org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
>     at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:139)
>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:192)
>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:164)
>     at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
>     at org.apache.pig.Main.run(Main.java:561)
>     at org.apache.pig.Main.main(Main.java:111)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> ================================================================================
> {noformat}
> The same script without the merge join works without any problem.
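> The InstantiationException above suggests what is going wrong: the reduce side of the index-building job (the failed job's feature is INDEXER) reflectively re-creates org.apache.pig.backend.hadoop.hbase.HBaseStorage$1, an anonymous inner class of HBaseStorage, while deserializing a tuple, and an anonymous inner class bound to an enclosing instance has no default constructor. The following is a minimal, self-contained Java sketch of that failure mode, not Pig code; the FakeWritable interface and the class names are hypothetical stand-ins used only for illustration.
> {noformat}
> import java.io.IOException;
>
> // Sketch: reflective instantiation of an anonymous inner class fails with
> // InstantiationException because such a class has no no-arg constructor.
> public class AnonymousWritableDemo {
>
>     // Hypothetical stand-in for a Writable-style interface (not a Pig/Hadoop API).
>     interface FakeWritable {
>         void readFields() throws IOException;
>     }
>
>     // Defined in an instance context, so the compiler-generated constructor of
>     // the anonymous class takes the enclosing instance as an argument --
>     // analogous to HBaseStorage$1 in the stack trace above.
>     private final FakeWritable anon = new FakeWritable() {
>         @Override
>         public void readFields() { /* no-op */ }
>     };
>
>     public static void main(String[] args) {
>         String className = new AnonymousWritableDemo().anon.getClass().getName();
>         System.out.println("Reflectively instantiating " + className);
>         try {
>             // What a generic deserializer does: look the class up by its
>             // serialized name and invoke the no-arg constructor.
>             Object o = Class.forName(className).newInstance();
>             System.out.println("Created " + o);
>         } catch (InstantiationException e) {
>             // Expected: the anonymous class has no default constructor.
>             System.out.println("Failed as expected: " + e);
>         } catch (ReflectiveOperationException e) {
>             System.out.println("Other reflective failure: " + e);
>         }
>     }
> }
> {noformat}
> If that is indeed the cause, the fix would presumably belong in Pig's HBaseStorage (serialize a named class that has a no-arg constructor) rather than in the script itself.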