hadoop-common-user mailing list archives

From Rohith Sharma K S <rohithsharm...@huawei.com>
Subject RE: Change the blocksize in 2.5.1
Date Thu, 20 Nov 2014 10:32:30 GMT
It seems HADOOP_CONF_DIR is pointing to a different location.
Maybe you can check that hdfs-site.xml is on the classpath when you execute the hdfs command.
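
As a quick sanity check, something like the following should show which configuration the hdfs client actually loads (the grep pattern is only an example):

  # Which configuration directory is the client reading?
  echo $HADOOP_CONF_DIR

  # Is that directory (with its hdfs-site.xml) on the client classpath?
  hadoop classpath | tr ':' '\n' | grep -i conf

  # What block size does the client resolve from the loaded configuration?
  hdfs getconf -confKey dfs.blocksize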


Thanks & Regards
Rohith Sharma K S

-----Original Message-----
From: Tomás Fernández Pena [mailto:tf.pena@gmail.com] On Behalf Of Tomás Fernández Pena
Sent: 20 November 2014 15:41
To: user@hadoop.apache.org
Subject: Change the blocksize in 2.5.1

Hello everyone,

I've just installed Hadoop 2.5.1 from source code, and I have problems changing the default
block size. In my hdfs-site.xml file I've set the property

  <property>
     <name>dfs.blocksize</name>
     <value>67108864</value>
  </property>
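
(For completeness, the property sits inside the usual <configuration> element; the rest of the file is otherwise the stock template, roughly:)

  <?xml version="1.0"?>
  <configuration>
     <property>
        <name>dfs.blocksize</name>
        <value>67108864</value>
     </property>
  </configuration>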

to have blocks of 64 MB, but it seems that the system ignores this setting. When I copy a new
file, it uses a block size of 128 MB. Only if I specify the block size when the file is created
(i.e. hdfs dfs -Ddfs.blocksize=$((64*1024*1024)) -put file .) does it use a block size of 64 MB.
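
To see the block size a file actually got, I'm checking roughly like this ("file" is the file I just uploaded, and the fsck path is just a placeholder for my home directory; %o should print the block size if that format specifier is available in this release):

  # print the block size of the stored file, in bytes
  hdfs dfs -stat "%o" file

  # or list its blocks explicitly
  hdfs fsck /user/<user>/file -files -blocks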

Any idea?

Best regards

Tomas
--
Tomás Fernández Pena
Centro de Investigacións en Tecnoloxías da Información, CITIUS. Univ.
Santiago de Compostela
Tel: +34 881816439, Fax: +34 881814112,
https://citius.usc.es/equipo/persoal-adscrito/?tf.pena
Pubkey 1024D/81F6435A, Fprint=D140 2ED1 94FE 0112 9D03 6BE7 2AFF EDED
81F6 435A

