Subject: sqoop import to hive being killed by resource manager
From: Steve Howard <stevedhoward@gmail.com>
To: user@hive.apache.org
Date: Thu, 12 Mar 2015 20:24:09 -0400

Hi All,

We have not been able to get what is in the subject line to run. This is on Hive 0.14. While pulling a billion-row table from Oracle using 12 splits on the primary key, each job continually runs out of memory, as below...

15/03/13 00:22:23 INFO mapreduce.Job: Task Id : attempt_1426097251374_0011_m_000011_0, Status : FAILED
Container [pid=27919,containerID=container_1426097251374_0011_01_000013] is running beyond physical memory limits. Current usage: 513.5 MB of 512 MB physical memory used; 879.3 MB of 1.0 GB virtual memory used. Killing container.
Dump of the process-tree for container_1426097251374_0011_01_000013:
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 28078 27919 27919 27919 (java) 63513 834 912551936 131129 /usr/jdk64/jdk1.7.0_45/bin/java -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.2.0.0-2041 -Xmx410m -Djava.io.tmpdir=/mnt/hdfs/hadoop/yarn/local/usercache/hdfs/appcache/application_1426097251374_0011/container_1426097251374_0011_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/mnt/hdfs/hadoop/yarn/log/application_1426097251374_0011/container_1426097251374_0011_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 172.27.2.57 52335 attempt_1426097251374_0011_m_000011_0 13
        |- 27919 27917 27919 27919 (bash) 1 2 9424896 317 /bin/bash -c /usr/jdk64/jdk1.7.0_45/bin/java -server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.2.0.0-2041 -Xmx410m -Djava.io.tmpdir=/mnt/hdfs/hadoop/yarn/local/usercache/hdfs/appcache/application_1426097251374_0011/container_1426097251374_0011_01_000013/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/mnt/hdfs/hadoop/yarn/log/application_1426097251374_0011/container_1426097251374_0011_01_000013 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 172.27.2.57 52335 attempt_1426097251374_0011_m_000011_0 13 1>/mnt/hdfs/hadoop/yarn/log/application_1426097251374_0011/container_1426097251374_0011_01_000013/stdout 2>/mnt/hdfs/hadoop/yarn/log/application_1426097251374_0011/container_1426097251374_0011_01_000013/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

We have tried several different sizes for various switches, but the job always fails. Is this simply a function of the data, or is there another issue?
Thanks,

Steve
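[Editor's note: for readers hitting the same failure, the "switches" in question are typically the per-mapper YARN container size and JVM heap. The log above shows a 512 MB container running a JVM with -Xmx410m; JVM overhead beyond the heap pushed resident memory to 513.5 MB, so YARN's physical-memory check killed the container. A sketch of how one might raise those limits on the sqoop command line follows. The property names (mapreduce.map.memory.mb, mapreduce.map.java.opts) are the standard Hadoop 2.x ones; the values, connect string, table name, and split column are illustrative assumptions, not the poster's actual settings.]

```shell
# Give each map task a 2 GB container and cap the JVM heap at roughly
# 80% of that, leaving headroom for JVM overhead (metaspace, threads,
# direct buffers) so the container stays under YARN's physical limit.
# -D properties must come immediately after "import".
sqoop import \
  -D mapreduce.map.memory.mb=2048 \
  -D mapreduce.map.java.opts=-Xmx1638m \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott -P \
  --table BIG_TABLE \
  --split-by ID \
  --num-mappers 12 \
  --hive-import
```

If rows are wide, lowering the JDBC batch with --fetch-size can also reduce per-mapper memory pressure; the key invariant is simply that heap plus JVM overhead must fit inside mapreduce.map.memory.mb.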