hbase-user mailing list archives

From Suraj Varma <svarma...@gmail.com>
Subject Re: HMaster not failing over dead RegionServers
Date Mon, 02 Jul 2012 23:56:50 GMT
This looks like it is trying to reach a datanode ... doesn't it?
> 12/06/30 00:07:22 INFO ipc.Client: Retrying connect to server: /
Already tried 14 time(s).

Is this from a master log or from a region server log? (I'm guessing the
above is from a region server log, emitted while trying to replay hlogs.)

Some time back, we had a similar symptom (HLog splitting taking a long
time due to the retries) and found that even though a datanode had died,
its death was not being detected by the namenode. This led to the region
server retrying the dead datanodes over and over, stretching out the
splitting process.

See this thread:

We found that by default it takes around 15 minutes for a datanode death
to be detected by the NN ... and in the meantime the NN keeps handing
back the dead DN as a valid one when the RS tries to read the hlogs.
The parameters in question are heartbeat.recheck.interval and
dfs.heartbeat.interval ... tweaking these down made the recovery
much faster.
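As a back-of-the-envelope check, HDFS declares a DN dead only after a window of roughly 2 × heartbeat.recheck.interval + 10 × dfs.heartbeat.interval (the exact formula and property names vary by Hadoop version, so treat this as an estimate, not a spec):

```python
# Rough estimate of how long the NameNode takes to declare a DataNode dead.
# Formula commonly used by HDFS (may vary by version):
#   timeout = 2 * heartbeat.recheck.interval + 10 * dfs.heartbeat.interval

def dn_death_timeout_ms(recheck_interval_ms=300_000, heartbeat_interval_ms=3_000):
    """Return the approximate DN death-detection window in milliseconds."""
    return 2 * recheck_interval_ms + 10 * heartbeat_interval_ms

# With stock defaults (5 min recheck, 3 s heartbeat): ~10.5 minutes
print(dn_death_timeout_ms() / 60_000)  # 10.5

# Tweaking the recheck interval down to 30 s shrinks the window dramatically:
print(dn_death_timeout_ms(recheck_interval_ms=30_000) / 1_000)  # 90.0 seconds
```

This is why tweaking the recheck interval down has such a large effect on how long the RS keeps retrying a dead DN during log splitting.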
Also - hbase.rpc.timeout and zookeeper.session.timeout are two other
configurations that need to be tweaked down from their defaults for
quicker recovery.
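For reference, a sketch of what tuning those two down might look like in hbase-site.xml (values here are purely illustrative, not recommendations; the defaults differ across HBase versions):

```xml
<!-- hbase-site.xml : illustrative values only, tune for your cluster -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>10000</value>  <!-- 10 s, down from the (version-dependent) 60 s default -->
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>30000</value>  <!-- 30 s; also bounded by ZK's own min/max session limits -->
</property>
```

Note that zookeeper.session.timeout is only a request; the ZooKeeper ensemble clamps it to its configured min/max session bounds.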

Not sure if this is the case in your error - but it might be something
to investigate ...

On Sat, Jun 30, 2012 at 8:53 AM, Jimmy Xiang <jxiang@cloudera.com> wrote:
> Bryan,
> The master could not detect that the region server was dead.
> How do you set the zookeeper session timeout?
> Thanks,
> Jimmy
> On Sat, Jun 30, 2012 at 8:09 AM, Stack <stack@duboce.net> wrote:
>> On Sat, Jun 30, 2012 at 7:04 AM, Bryan Beaudreault
>> <bbeaudreault@hubspot.com> wrote:
>>> 12/06/30 00:07:22 INFO ipc.Client: Retrying connect to server: /
>>> Already tried 14 time(s).
>> This was one of the servers that went down?
>>> It was not following through the splitting of HLog files and didn't appear
>>> to be moving regions off failed hosts.  After giving it about 20 minutes to
>>> try to right itself, I tried restarting the service.  The restart script
>>> just hung for a while printing dots and nothing apparent was happening on
>>> the logs at the time.
>> Can we see the log, Bryan?
>> You might take a thread dump when it's hung up the next time, Bryan (would
>> be something for us to do a looksee on).
>>> Finally I kill -9 the process, so that another
>>> master could take over.  The new master seemed to start splitting logs, but
>>> eventually got into the same state of printing the above message.
>> You think it a particular log?
>>> Eventually it all worked out, but it took WAY too long (almost an hour, all
>>> said).  Is this something that is tunable?
>> Have the RS carry fewer WALs?  It's a configuration.
>>> They should have instantly been
>>> removed from the list instead of retrying so many times.  Each server was
>>> retried upwards of 30-40 times.
>> Yeah, that's a bit silly.
>> We're working on the MTTR in general.  Your logs would be of interest
>> to a few of us, if it's ok for someone else to take a look.
>> St.Ack
>>> I am running cdh3u2 (0.90.4).
>>> Thanks,
>>> Bryan
