hadoop-common-user mailing list archives

From "Nikolay Grebnev" <nikolaygreb...@gmail.com>
Subject configuration questions
Date Fri, 28 Nov 2008 23:12:12 GMT
Hello,

I have installed Hadoop 0.19.0 on several servers.
Please help me with the configuration of the cluster.

1. How can I configure Hadoop so that all HDFS data is packed into a few big
files on the local filesystem? By default, every block is stored as a separate
file under
~/data/hadoop/dfs/data/current:
-rw-r--r--  1 hadoop users 715542 2008-11-29 01:01 blk_-8690966142665497288
-rw-r--r--  1 hadoop users     71 2008-11-29 01:16 blk_8457110993060324288_1062.meta
drwxr-xr-x  2 hadoop users  12288 2008-11-29 01:18 subdir0/
drwxr-xr-x  2 hadoop users  12288 2008-11-29 01:18 subdir1/
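
For context, this is the directory that dfs.data.dir points to in my
hadoop-site.xml; the entry below is only a sketch of what I believe is there
(the real value is an absolute path, "~" expanded by hand), and everything
else is left at the 0.19 defaults:

<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/data/hadoop/dfs/data</value>
  <!-- the datanode creates the current/ subdirectory underneath this path -->
</property>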

2. At present, when one datanode dies, I cannot put any new files into the
cluster. Can I configure the cluster so that it keeps working without delays
when one (or several) datanodes die, and so that the datanodes eventually
receive all the missing data once they come back up?
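
In case it is relevant, these are the replication-related properties that I
believe govern this behaviour; the names come from the 0.19 default
configuration, and the values below are the documented defaults, not
something I have verified on my cluster:

<property>
  <name>dfs.replication</name>
  <value>3</value>
  <!-- number of copies kept per block -->
</property>
<property>
  <name>dfs.replication.min</name>
  <value>1</value>
  <!-- minimum number of copies that must be written for a write to succeed -->
</property>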

3. When a datanode loses its network connection, can a local client still
work with the data stored on that datanode?

Best,
Nik
