hive-user mailing list archives

From Nimra Choudhary <nim...@microsoft.com>
Subject RE: dynamic partition import
Date Tue, 29 May 2012 09:41:58 GMT
All my data nodes are up and running with none blacklisted.

Regards,
Nimra

From: Nitin Pawar [mailto:nitinpawar432@gmail.com]
Sent: Tuesday, May 29, 2012 3:07 PM
To: user@hive.apache.org
Subject: Re: dynamic partition import

Can you check that at least one datanode is running and is not blacklisted?
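A quick way to do that check from the command line, assuming the Hadoop 1.x-era CLI that matches this cluster (the exact subcommand name is an assumption based on that version line):

```shell
# List live and dead datanodes, per-node capacity, and remaining space
# (Hadoop 1.x syntax; run as the HDFS superuser)
hadoop dfsadmin -report
```

The namenode web UI (port 50070 by default) shows the same information, including any nodes the namenode has excluded.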


On Tue, May 29, 2012 at 3:01 PM, Nimra Choudhary <nimrac@microsoft.com> wrote:

We are using dynamic partitioning and facing a similar problem. Below is the jobtracker
error log. We have a Hadoop cluster of 6 nodes with 1.16 TB capacity, of which over 700 GB is still free.

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException:
java.io.IOException: File /tmp/hive-nimrac/hive_2012-05-29_10-32-06_332_4238693577104368640/_tmp.-ext-10000/createddttm=2011-04-24/_tmp.000001_2
could only be replicated to 0 nodes, instead of 1
               at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1421)
               at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
               at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
               at java.lang.reflect.Method.invoke(Method.java:601)
               at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
               at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
               at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
               at java.security.AccessController.doPrivileged(Native Method)
               at javax.security.auth.Subject.doAs(Subject.java:415)
               at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
               at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)

               at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:576)
               at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
               at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
               at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
               at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
               at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
               at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
               at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
               at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
               ...

Is there any workaround or fix for this?
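When space and datanodes are fine, this "replicated to 0 nodes" failure under dynamic partitioning is often attributed to each writer holding one open file per partition, exhausting datanode transfer threads. A hedged sketch of settings commonly adjusted for this (the table names `target` and `source` are hypothetical; verify the property names against your Hive/Hadoop versions):

```sql
-- Allow dynamic partitions and raise the per-job/per-node limits
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=10000;
SET hive.exec.max.dynamic.partitions.pernode=1000;

-- Routing rows by the partition key means each reducer writes
-- only a few partitions, so far fewer files are open at once:
INSERT OVERWRITE TABLE target PARTITION (createddttm)
SELECT * FROM source DISTRIBUTE BY createddttm;
```

On the HDFS side, raising `dfs.datanode.max.xcievers` in hdfs-site.xml (a real 1.x property, despite the misspelling) and restarting the datanodes is another commonly cited mitigation.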

Regards,
Nimra




--
Nitin Pawar
