From: "Koji Noguchi (JIRA)"
To: pig-dev@hadoop.apache.org
Date: Tue, 12 Mar 2019 04:29:00 +0000 (UTC)
Subject: [jira] [Updated] (PIG-5383) OrcStorage fails when "bytearray" represents unknown type

     [ https://issues.apache.org/jira/browse/PIG-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Noguchi updated PIG-5383:
------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 0.18.0
           Status: Resolved  (was: Patch Available)

Thanks for the review, Rohini!  Committed to trunk (0.18).

> OrcStorage fails when "bytearray" represents unknown type
> ---------------------------------------------------------
>
>                 Key: PIG-5383
>                 URL: https://issues.apache.org/jira/browse/PIG-5383
>             Project: Pig
>          Issue Type: Bug
>            Reporter: Koji Noguchi
>            Assignee: Koji Noguchi
>            Priority: Minor
>             Fix For: 0.18.0
>
>       Attachments: pig-5383-v01.patch
>
>
> In Pig, "bytearray" can be an array of bytes OR an unknown type.
> OrcStorage cannot handle the latter for writes and fails with
> {noformat}
> 2019-02-14 05:45:43,855 [PigTezLauncher-0] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=FAILED, progress=TotalTasks: 39549 Succeeded: 31451 Running: 0 Failed: 1 Killed: 8097 FailedTaskAttempts: 2865 KilledTaskAttempts: 1305, diagnostics=Vertex failed, vertexName=scope-56672, vertexId=vertex_, diagnostics=[Task failed, taskId=task_, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_:java.lang.ClassCastException: java.lang.Boolean cannot be cast to [B
> 	at org.apache.pig.impl.util.orc.OrcUtils$PigDataByteArrayObjectInspector.getPrimitiveWritableObject(OrcUtils.java:648)
> 	at org.apache.hadoop.hive.ql.io.orc.WriterImpl$BinaryTreeWriter.write(WriterImpl.java:1547)
> 	at org.apache.hadoop.hive.ql.io.orc.WriterImpl$MapTreeWriter.write(WriterImpl.java:1933)
> 	at org.apache.hadoop.hive.ql.io.orc.WriterImpl$StructTreeWriter.write(WriterImpl.java:1805)
> 	at org.apache.hadoop.hive.ql.io.orc.WriterImpl.addRow(WriterImpl.java:2477)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.write(OrcNewOutputFormat.java:53)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcNewOutputFormat$OrcRecordWriter.write(OrcNewOutputFormat.java:37)
> 	at org.apache.pig.builtin.OrcStorage.putNext(OrcStorage.java:262)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
> 	at org.apache.tez.mapreduce.output.MROutput$1.write(MROutput.java:557)
> 	at org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:129)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.runPipeline(POSplit.java:254)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:235)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.getNextTuple(POSplit.java:227)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.runPipeline(POSplit.java:254)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:235)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:240)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:240)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:240)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:240)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.processPlan(POSplit.java:240)
> 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POSplit.getNextTuple(POSplit.java:227)
> 	at org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:382)
> 	at org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:244)
> 	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> 	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
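
To picture the failure mode in the quoted description: the column is declared {{bytearray}}, but at runtime the tuple field holds some other Java object (a {{java.lang.Boolean}} in the trace above), and the ObjectInspector's cast to a byte-array form is what throws the {{ClassCastException}}. The sketch below is a minimal, self-contained illustration of that blind cast plus one possible defensive fallback; it is *not* the attached pig-5383-v01.patch, and the class/method names ({{ByteArrayInspectorSketch}}, {{safeToWritable}}) are invented for the example. Only {{DataByteArray}} and {{BytesWritable}} are real Pig/Hadoop classes.

{code:java}
// Hypothetical sketch of PIG-5383's failure mode; not the committed patch.
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.io.BytesWritable;
import org.apache.pig.data.DataByteArray;

public class ByteArrayInspectorSketch {

    // Mirrors the blind cast that fails in the trace above: the field is
    // declared "bytearray", but the runtime value may be any Java object
    // (e.g. a Boolean), so the downcast throws ClassCastException.
    static BytesWritable unsafeToWritable(Object o) {
        return o == null ? null : new BytesWritable(((DataByteArray) o).get());
    }

    // One possible defensive variant: pass real byte arrays through and fall
    // back to a string-encoded byte representation for values of unknown type
    // instead of throwing. (Illustrative only; the real fix may convert
    // differently.)
    static BytesWritable safeToWritable(Object o) {
        if (o == null) {
            return null;
        }
        if (o instanceof DataByteArray) {
            return new BytesWritable(((DataByteArray) o).get());
        }
        if (o instanceof byte[]) {
            return new BytesWritable((byte[]) o);
        }
        // Unknown type stored under the bytearray schema, e.g. a Boolean
        // coming from a schema-less UDF or loader.
        return new BytesWritable(String.valueOf(o).getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(safeToWritable(new DataByteArray("abc")).getLength()); // 3
        System.out.println(safeToWritable(Boolean.TRUE).getLength());             // 4 ("true")
        unsafeToWritable(Boolean.TRUE); // throws java.lang.ClassCastException
    }
}
{code}

A defensive conversion along the lines of {{safeToWritable}} would keep the ORC write path from aborting, at the cost of storing an implementation-defined byte encoding for values whose type Pig never knew.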