hadoop-user mailing list archives

From "omprakash" <ompraka...@cdac.in>
Subject RE: Namenode not able to come out of SAFEMODE
Date Mon, 28 Aug 2017 06:15:51 GMT
Hi,

 

In our HDFS there are a lot of (about 10 million) ~4 KB files. That is why we had
to increase ipc.maximum.data.length so that the NameNode could accept such a large block report.
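
A quick way to confirm the file and block counts is an fsck summary (it can take a while over this many files; the grep just pulls out the summary lines):

$ hdfs fsck / | grep -E 'Total files|Total blocks'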

 

And yes, this is a problem we are facing in our scenario.

 

We have opted for HAR archives to bundle 100k files into one HAR, thus
reducing the total block count as a workaround to the problem.
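
The archives are built with the standard hadoop archive tool, roughly like this (the paths and archive name below are only an example, not our actual layout):

$ hadoop archive -archiveName batch-0001.har -p /data/small-files /data/har
$ hdfs dfs -ls har:///data/har/batch-0001.har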

 

Regards

Om Prakash

 

 

From: Gurmukh Singh [mailto:gurmukh.dhillon@yahoo.com] 
Sent: 25 August 2017 17:22
To: omprakash <omprakashp@cdac.in>; brahmareddy.battula@huawei.com
Cc: 'surendra lilhore' <surendra.lilhore@huawei.com>; user@hadoop.apache.org
Subject: Re: Namenode not able to come out of SAFEMODE

 

Hi Om,

You solved this issue by bumping up the IPC maximum data length, which is set to
64 MB by default:

$ hdfs getconf -confkey ipc.maximum.data.length
67108864

So it means the disk you are using is holding more than 1 million blocks.
Are you using a very small HDFS block size?

If you keep the HDFS block size at 64 MB, you can have a disk of about 64 TB
before you start seeing this error. It looks like you are doing something
wrong.

Having a large value for ipc.maximum.data.length will increase the protobuf
serialization and de-serialization time.




On 20/7/17 7:01 pm, omprakash wrote:

Hi,

 

Thanks for the quick reply. 

 

That was exactly the problem. The datanodes were throwing a "maximum IPC message
length size not enough" error, but we were focusing only on the NameNodes.

 

I changed the "ipc.maximum.data.length" property in core-site.xml file(I
found this solution of above error after searcing over internet) and
restarted the namenode and datanode. After an hour all the blocks get loaded
successfully.
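
For anyone hitting the same error, the change is a single property in core-site.xml on the namenodes and datanodes, followed by a restart. Roughly like this (the 128 MB value below is only illustrative; size it to your block report):

<property>
  <name>ipc.maximum.data.length</name>
  <!-- default is 67108864 (64 MB) -->
  <value>134217728</value>
</property>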

 

Thanks for the help again.

 

Regards

Om

 

From: surendra lilhore [mailto:surendra.lilhore@huawei.com] 
Sent: 20 July 2017 12:12
To: omprakash <omprakashp@cdac.in>; user@hadoop.apache.org
Subject: RE: Namenode not able to come out of SAFEMODE

 

Hi Omprakash, 

 

 

The reported blocks 0 needs additional 6132675 blocks to reach the threshold
0.9990 of total blocks 6138814. The number of live datanodes 0 has reached
the minimum number 0. 

 

-----------> By seeing this message looks like NameNode loaded the block
info into memory but the reported blocks from the Datanodes are "0". Maybe
datanodes are not running, if it is running then please check why it's not
registered with namenode. You can check the datanode log for more details.
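
A couple of quick checks along those lines (the log path is a placeholder; it depends on your HADOOP_LOG_DIR):

$ hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'
$ tail -n 200 /path/to/hadoop/logs/hadoop-*-datanode-*.log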

 

 

Regards,

Surendra 




From: omprakash [omprakashp@cdac.in]
Sent: Thursday, July 20, 2017 12:12 AM
To: user@hadoop.apache.org
Subject: Namenode not able to come out of SAFEMODE

Hi all,

 

I have a 3-node Hadoop cluster (Hadoop 2.8.0). I have deployed 2 namenodes
configured in HA mode using QJM. 2 datanodes are configured on the same
machines where the namenodes are installed. The 3rd node is used for quorum
purposes only.

 

Setup

Node1 -> nn1, dn1, jn1, zkfc1, zkServer1

Node2 -> nn2, dn2, jn2, zkfc2, zkServer2

Node3 -> jn3,  zkServer3

 

I stopped the cluster for some reason (power-cycled the servers), and since
then I have not been able to start the cluster successfully. After examining
the logs I found that the namenodes are in safe mode and neither of them is
able to load the blocks into memory. Below is the namenode status from the
namenode UI.

 

Safe mode is ON. The reported blocks 0 needs additional 6132675 blocks to
reach the threshold 0.9990 of total blocks 6138814. The number of live
datanodes 0 has reached the minimum number 0. Safe mode will be turned off
automatically once the thresholds have been reached.

61,56,984 files and directories, 61,38,814 blocks = 1,22,95,798 total
filesystem object(s).

Heap Memory used 5.6 GB of 7.12 GB Heap Memory. Max Heap Memory is 13.33 GB.

Non Heap Memory used 45.19 MB of 49.75 MB Commited Non Heap Memory. Max Non
Heap Memory is 130 MB.

 

I have tried increasing HADOOP_HEAPSIZE and increasing the heap size in
HADOOP_NAMENODE_OPTS, but with no success.
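
For reference, the safe mode status can be checked against each namenode individually with something like the following (the hostnames and the 8020 RPC port are placeholders for our nodes). Forcing safe mode off with -safemode leave is not a fix here, since the datanodes have not reported any blocks yet.

$ hdfs dfsadmin -fs hdfs://node1:8020 -safemode get
$ hdfs dfsadmin -fs hdfs://node2:8020 -safemode get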

Need help.

 

 

Regards

Omprakash Paliwal

HPC-Medical and Bioinformatics Applications Group

Centre for Development of Advanced Computing (C-DAC)

Pune University campus,

PUNE-411007

Maharashtra, India

email: omprakashp@cdac.in

Contact : +91-20-25704231

 



 


-------------------------------------------------------------------------------------------------------------------------------
[ C-DAC is on Social-Media too. Kindly follow us at:
Facebook: https://www.facebook.com/CDACINDIA & Twitter: @cdacindia ]

This e-mail is for the sole use of the intended recipient(s) and may
contain confidential and privileged information. If you are not the
intended recipient, please contact the sender by reply e-mail and destroy
all copies and the original message. Any unauthorized review, use,
disclosure, dissemination, forwarding, printing or copying of this email
is strictly prohibited and appropriate legal action will be taken.
-------------------------------------------------------------------------------------------------------------------------------

