hadoop-common-user mailing list archives

From "Albert Strasheim" <full...@gmail.com>
Subject Using Hadoop as a shared file system
Date Mon, 11 Feb 2008 22:12:30 GMT
Hello all

I have a slightly unusual use case that I hope I can use Hadoop for (maybe 
after writing a bit of code).

My setup looks as follows:

I have one machine containing somewhere from 750 GB to 2 TB of data on a 
disk or some kind of RAID-0 array. I am in full control of this machine. I 
have a backup of the part of the data that I haven't generated (maybe 30% of 
the data, the rest is calculated from this original data).

I have another disparate collection of machines running any number of 
operating systems (Windows, Linux, Solaris, etc.). I might not have 
administrative access on these machines. I can probably run a JVM on them. 
These machines should not be considered reliable in any sense of the word. 
Hosting an HDFS on them would be a bad idea -- on a bad day all of them might 
be down, they might get reformatted, you name it.

I need to process this data using legacy applications (e.g., C++ programs, 
MATLAB scripts, Python scripts) and Java applications.

The legacy applications would be hard to parallelise across machines (e.g., a 
program that can't easily be compiled on all the different platforms, MATLAB 
licenses not being available for every machine, etc.), so I would like to run 
these legacy apps only on the machine that has direct access to the data. I 
would also prefer not to have to teach these legacy apps how to get data out 
of HDFS.

Through the magic of the JVM, I can consider parallelising the processing 
that is done with Java programs.

These Java programs (parallelised with Hadoop's MapReduce, or GridGain, or 
whatever) need an easy way to access the data on this single machine.
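Concretely, I picture the Java programs reading through Hadoop's FileSystem 
API, along these lines (the hostname, port, and file path below are just 
placeholders, not my actual setup):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadFromSharedFs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "datahost:9000" is a placeholder for the one machine
        // that has direct access to the data.
        conf.set("fs.default.name", "hdfs://datahost:9000");
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path to one of the data files.
        Path p = new Path("/data/part-00000");
        BufferedReader in =
            new BufferedReader(new InputStreamReader(fs.open(p)));
        String line;
        while ((line = in.readLine()) != null) {
            // process the line...
        }
        in.close();
    }
}
```

The nice thing is that nothing in this code cares whether the FileSystem 
behind the URI is a real distributed HDFS or something backed by a single 
machine's disk.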

Setting up a traditional shared file system (NFS and Samba come to mind) 
would be a pain for various reasons. A shared file system probably wouldn't 
be a bottleneck, since the amount of processing time required typically far 
outweighs the time it takes to access the data over the network (at least 
for the number of nodes I'm dealing with).

What I'm hoping to do is use Hadoop as my shared file system.

I imagine I would have to run the equivalent of a namenode+datanode+etc. on 
the machine that has direct access to the data, so that it appears as an HDFS 
to the other machines. Making a copy of the data into an HDFS instance isn't 
really an option, given the size of the data I'm dealing with, so I'm 
thinking along the lines of exposing a LocalFileSystem on this machine as a 
DistributedFileSystem to the other machines.
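For the single-machine case, I assume the configuration would look roughly 
like the usual pseudo-distributed setup in hadoop-site.xml, with replication 
turned down to 1 since there is only one datanode ("datahost" is again a 
placeholder):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://datahost:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```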

This has the added advantage that if I ever do get a stable HDFS setup 
going, my programs will be ready to deal with it.

Is something like this doable already? If not, where would one need to start 
filling in code to make it possible?

Thanks for reading.


