hive-user mailing list archives

From "Mich Talebzadeh" <m...@peridale.co.uk>
Subject RE: A simple insert stuck in hive
Date Wed, 08 Apr 2015 09:36:39 GMT
Hi Sanjiv,

 

I can see that this can be a major problem, as fundamentally we don't have any real clue about the cause of the issue logged anywhere.

 

I did the following to try to resolve the issue:

 

1.     Rebooted the cluster. No luck

2.     Reformatted the NameNode and cleaned up the DataNode directories. Fresh start, no luck.

3.     Got rid of all files under $HADOOP_HOME/logs, as a nearly full log directory can sometimes cause this issue. No luck.

4.     I could not even use Sqoop to get data from an Oracle table into a text file, so it was not Hive. Increasingly it was pointing to the ResourceManager and MapReduce setup (see the Sqoop sketch further down).

5.     So I looked at both yarn-site.xml and mapred-site.xml. I cleaned out everything from yarn-site.xml except the following lines:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

 

6.     In mapred-site I have the following:

 

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.job.tracker</name>
    <value>rhes564:54311</value>
  </property>
  <property>
    <name>mapreduce.job.tracker.reserved.physicalmemory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx3072m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx6144m</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>400</value>
  </property>
</configuration>

 

I did not touch anything in mapred-site.xml; I just recycled Hadoop and it is now working.
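
Referring back to point 4: the idea was simply to run an independent MapReduce job (a Sqoop import of an Oracle table into a text file) to rule Hive itself out. A minimal sketch of such a command, where the connection string, credentials and table name are purely illustrative:

sqoop import \
  --connect jdbc:oracle:thin:@oracle_host:1521:ORCL \
  --username scott --password tiger \
  --table EMP \
  --target-dir /tmp/emp_txt \
  --as-textfile \
  -m 1

If even this sticks at map 0%, the problem is in YARN/MapReduce rather than in Hive or the metastore.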

 

HTH

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache

 

NOTE: The information in this email is proprietary and confidential. This message is for the
designated recipient only, if you are not the intended recipient, you should destroy it immediately.
Any information in this message shall not be understood as given or endorsed by Peridale Ltd,
its subsidiaries or their employees, unless expressly so stated. It is the responsibility
of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd,
its subsidiaries nor their employees accept any responsibility.

 

From: Sanjiv Singh [mailto:sanjiv.is.on@gmail.com] 
Sent: 08 April 2015 07:21
To: user@hive.apache.org
Subject: Re: A simple insert stuck in hive

 

Hi Mich,

 

I have faced the same issue while inserting into a table; nothing helped me with it.

Surprisingly, restarting the cluster worked for me. I am also interested in a solution to this.

 

Also, can you please check whether there is a lock on the table that is causing the mapper to wait for it:

 

SHOW LOCKS <TABLE_NAME>;
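
For example, against the table from your output (assuming the default database; note that SHOW LOCKS only reports anything if hive.support.concurrency and a lock manager are enabled):

SHOW LOCKS mytest;
SHOW LOCKS mytest EXTENDED;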

 

 

Regards,

Sanjiv Singh
Mob: +091 9990-447-339

 

On Wed, Apr 8, 2015 at 2:39 AM, Mich Talebzadeh <mich@peridale.co.uk> wrote:

Hi,

 

Today I have noticed the following issue.

 

A simple insert into a table just sits there, producing the following:

 

hive> insert into table mytest values(1,'test');
Query ID = hduser_20150407215959_bc030fac-258f-4996-b50f-3d2d49371cca
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1428439695331_0002, Tracking URL = http://rhes564:8088/proxy/application_1428439695331_0002/
Kill Command = /home/hduser/hadoop/hadoop-2.6.0/bin/hadoop job  -kill job_1428439695331_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-04-07 21:59:35,068 Stage-1 map = 0%,  reduce = 0%
2015-04-07 22:00:35,545 Stage-1 map = 0%,  reduce = 0%
2015-04-07 22:01:35,832 Stage-1 map = 0%,  reduce = 0%
2015-04-07 22:02:36,058 Stage-1 map = 0%,  reduce = 0%
2015-04-07 22:03:36,279 Stage-1 map = 0%,  reduce = 0%
2015-04-07 22:04:36,486 Stage-1 map = 0%,  reduce = 0%
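
To see whether the application ever actually received a container, the standard YARN commands can be used with the application ID from the output above (the last one needs log aggregation enabled):

yarn application -list
yarn application -status application_1428439695331_0002
yarn logs -applicationId application_1428439695331_0002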

 

I have been messing around with concurrency for Hive. That did not work. My metastore is built in Oracle, so I dropped that schema, recreated it from scratch, and got rid of the concurrency parameters. First I was getting “container is running beyond virtual memory limits” for the task, so I changed the following parameters in yarn-site.xml

 

 

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
  <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
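
As an aside, and not something I have changed here: another commonly cited way of dealing with the “running beyond virtual memory limits” error is to relax or disable YARN's virtual-memory check in yarn-site.xml, for example:

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- or, instead of disabling the check, raise the allowed virtual/physical memory ratio (default 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>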

 

and mapred-site.xml 

 

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6144m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>400</value>
</property>
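
As a general note rather than anything specific to this cluster: the usual guideline is that the -Xmx heap in mapreduce.*.java.opts should fit inside the corresponding mapreduce.*.memory.mb container size, typically around 80% of it, otherwise the container can be killed for exceeding its limit. An illustrative, consistent pair would be:

<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <!-- roughly 80% of the 4096 MB container -->
  <value>-Xmx3276m</value>
</property>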

 

However, nothing has helped, except that the virtual memory error has gone. Any ideas appreciated.

 

Thanks

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 


 

 

