hadoop-common-user mailing list archives

From "Habermaas, William" <William.Haberm...@fatwire.com>
Subject RE: remotely downloading file
Date Fri, 03 Jun 2011 21:05:31 GMT
You can access HDFS for reading and writing from other machines. The API works through the
HDFS client, which can run anywhere on the network, not just on the namenode. You just need
the Hadoop core jar on the classpath wherever your application is going to run.
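A minimal sketch of what that looks like from an app server: point the client Configuration at the remote namenode and read through the FileSystem API. The host, port, and file path below are hypothetical placeholders, and a reachable HDFS cluster is assumed.

```java
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDownload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the remote namenode explicitly, rather
        // than relying on a local core-site.xml being present.
        // "namenode-host:9000" is an assumed address for illustration.
        conf.set("fs.default.name", "hdfs://namenode-host:9000");

        FileSystem fs = FileSystem.get(conf);
        Path src = new Path("/data/largefile.bin"); // hypothetical path
        InputStream in = fs.open(src);
        try {
            // Stream the bytes wherever they need to go: a servlet
            // response, a local temp file, etc.
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                // write buf[0..n) to the destination
            }
        } finally {
            in.close();
            fs.close();
        }
    }
}
```

The only Hadoop-side requirement is network access to the namenode and datanodes; the client fetches block locations from the namenode and then reads block data directly from the datanodes.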


-----Original Message-----
From: Joe Greenawalt [mailto:joe.greenawalt@gmail.com] 
Sent: Friday, June 03, 2011 4:55 PM
To: common-user@hadoop.apache.org
Subject: remotely downloading file

We're interested in using HDFS to store several large file sets to be
available for download by our customers in the following paradigm:

Customer  <-  | APPSERVER-CLUSTER {app1,app2,app3} |  <-  | HDFS |

After browsing the documentation, I had assumed that pulling the file from HDFS to the
APPSERVER-CLUSTER could be done programmatically and remotely.  But after
reading the API, it seems that writing Java code to interface with HDFS
needs to happen locally?  Is that correct?

If it is correct, what is the best/recommended way to
deliver downloadable files to the APPSERVERS (and vice versa), which are hosted on
the same network but on different machines?
