hadoop-common-user mailing list archives

From Uma Mahesh <mahesw...@huawei.com>
Subject RE: Error when Using URI in -put command
Date Thu, 21 Jul 2011 10:33:12 GMT
Hi Cheny,
  When the client creates the file, it has to talk to the NameNode. Since you
specified the destination path as a full URI containing the DataNode's IP and
port, the client treats that address as the NameNode's IP and port and tries
to connect there — which is why the call fails.

 An absolute path in DFS has the form hdfs://NN_IP:NN_Port/filename. A URI
with a different authority (host:port) is treated as referring to a separate
file system.
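For example (a sketch — `namenode.example.com:9000` and the paths are
placeholders; substitute the authority from your cluster's fs.default.name):

```shell
# Wrong: port 50010 is the DataNode's data-transfer port, not an RPC
# endpoint, so the client's RPC handshake fails (EOFException).
hadoop dfs -put localfile.txt hdfs://datanode1.example.com:50010/user/cheny/file.txt

# Right: address the NameNode explicitly, or just use a plain path and
# let the client pick up fs.default.name from core-site.xml.
hadoop dfs -put localfile.txt hdfs://namenode.example.com:9000/user/cheny/file.txt
hadoop dfs -put localfile.txt /user/cheny/file.txt
```

As for block placement: the NameNode chooses where replicas go, and with the
default placement policy the first replica lands on the writing client's own
node only when that client is itself a DataNode — you cannot steer it to a
particular DataNode through the -put URI.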


-----Original Message-----
From: Cheny [mailto:coconuttree9999@gmail.com] 
Sent: Thursday, July 21, 2011 7:04 AM
To: core-user@hadoop.apache.org
Subject: Error when Using URI in -put command

Setting replication aside: if I use the following command from a Hadoop client
outside the cluster (the client is not a DataNode),

hadoop dfs -put <localfilename> hdfs://<datanode ip>:50010/<filename>

can I make HDFS place the first block of the file on that specific DataNode?

I tried to do that and I got this error:

put: Call to /xxx.xxx.xxx.xxx(ip of my datanode):50010 failed on local
exception: java.io.EOFException

Any help is greatly appreciated.

