hadoop-common-user mailing list archives

From Snehal Nagmote <nagmote.sne...@gmail.com>
Subject Re: Using HDFS to serve www requests
Date Sun, 29 Mar 2009 09:24:42 GMT

Hello Sir,
I am doing my MTech at IIIT Hyderabad, and in our project we have a similar
requirement: accessing HDFS directly from an Apache Tomcat server. Could you
please explain how to do this, with an example, perhaps the same code you
modified? Does it require the Hadoop installation directory to sit on the
same machine, or would copying just the JARs and configuration files suffice?

Thanks in advance

Edward Capriolo wrote:
> It is a little more natural to connect to HDFS from Apache Tomcat.
> This lets you skip the FUSE mounts and just use the HDFS API.
> I have modified this code to run inside Tomcat:
> http://wiki.apache.org/hadoop/HadoopDfsReadWriteExample
> I will not testify to how well this setup performs under internet
> traffic, but it does work.
> GlusterFS is more like a traditional POSIX filesystem. It supports
> locking and appends, and you can do things like put the MySQL data
> directory on it.
> GlusterFS is geared toward storing data to be accessed with low latency.
> Nodes (bricks) are normally connected via GigE or InfiniBand. The
> GlusterFS volume is mounted directly on a Unix system.
> HDFS is a user-space filesystem, so its latency is higher. Nodes are
> connected by GigE, and it is closely coupled with MapReduce.
> You can use the API or the FUSE module to mount HDFS, but that is not
> a direct goal of Hadoop. Hope that helps.

Sent from the Hadoop core-user mailing list archive at Nabble.com.
