hadoop-user mailing list archives

From Nitin Pawar <nitinpawar...@gmail.com>
Subject Re: store file gives exception
Date Wed, 06 Mar 2013 12:35:09 GMT
In Hadoop you don't have to worry about data locality. The JobTracker will,
by default, try to schedule each task on a node where its input data is
located, provided that node has enough compute capacity. Also note that a
datanode just stores blocks of files; the blocks of a single file will be
spread across multiple datanodes.
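
If you still want to look up where the blocks of a given file live (say, to
verify locality for your own scheduling), the FileSystem API exposes that
directly. Below is a minimal sketch in Java; it assumes the Hadoop client
jars on the classpath and a core-site.xml that points at your namenode, and
the BlockLocator class name and its command-line argument are only
illustrative.

    import java.util.Arrays;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocator {
        public static void main(String[] args) throws Exception {
            // Reads the namenode address (fs.default.name) from core-site.xml.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // HDFS path to inspect, e.g. /user/bala/kumki/hosts.
            Path file = new Path(args[0]);
            FileStatus status = fs.getFileStatus(file);

            // One BlockLocation per block of the file; each lists the
            // datanodes that hold a replica of that block.
            BlockLocation[] blocks =
                    fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset " + block.getOffset()
                        + ", length " + block.getLength()
                        + ", hosts " + Arrays.toString(block.getHosts()));
            }
        }
    }

This is the same information the MapReduce framework consults when it
computes input splits, which is why you normally never need this lookup
yourself.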


On Wed, Mar 6, 2013 at 5:52 PM, AMARNATH, Balachandar <
BALACHANDAR.AMARNATH@airbus.com> wrote:

> Hi all,
>
> I thought the issue below was occurring because there was not enough free
> space, so I replaced the datanodes with nodes that have more space, and it
> worked.
>
> Now I have a working HDFS cluster. I am designing an application that needs
> to execute ‘a set of similar instructions’ (a job) over a large number of
> files, in parallel on different machines. I would like to schedule each job
> on the datanode that already holds its input file. First, I will store the
> files in HDFS. Then, to complete my task: is there a scheduler in the Hadoop
> framework that, given the input file required for a job, can return the name
> of the datanode where the file is actually stored? Am I making sense here?
>
> Regards
>
> Bala
>
> *From:* AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
> *Sent:* 06 March 2013 16:49
> *To:* user@hadoop.apache.org
> *Subject:* RE: store file gives exception
>
> Hi,
>
> I successfully installed a Hadoop cluster with three nodes (2 datanodes
> and 1 namenode). However, when I try to store a file, I get the following
> error.
>
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> 13/03/06 16:45:56 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/bala/kumki/hosts" - Aborting...
> put: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
> 13/03/06 16:45:56 ERROR hdfs.DFSClient: Exception closing file /user/bala/kumki/hosts : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/bala/kumki/hosts could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
> Any hints on how to fix this?
>
> Regards
>
> Bala
>
> *From:* AMARNATH, Balachandar [mailto:BALACHANDAR.AMARNATH@airbus.com]
> *Sent:* 06 March 2013 15:29
> *To:* user@hadoop.apache.org
> *Subject:* store file gives exception
>
> I have now taken the namenode out of safe mode using the admin command. I
> tried to put a file into HDFS and encountered this error.
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hosts could only be replicated to 0 nodes, instead of 1
>
> Any hints on how to fix this?
>
> This happens when the namenode is not a datanode. Am I making sense?
>
> With thanks and regards
>
> Balachandar
>
> The information in this e-mail is confidential. The contents may not be
> disclosed or used by anyone other than the addressee. Access to this e-mail
> by anyone else is unauthorised.
>
> If you are not the intended recipient, please notify Airbus immediately and
> delete this e-mail.
>
> Airbus cannot accept any responsibility for the accuracy or completeness of
> this e-mail as it has been sent over public networks. If you have any
> concerns over the content of this message or its accuracy or integrity,
> please contact Airbus immediately.
>
> All outgoing e-mails from Airbus are checked using regularly updated virus
> scanning software but you should take whatever measures you deem to be
> appropriate to ensure that this message and any attachments are virus free.


-- 
Nitin Pawar
