hbase-dev mailing list archives

From Gagandeep Singh <gagandeep.si...@paxcel.net>
Subject Re: Data loss due to region server failure
Date Thu, 02 Sep 2010 10:34:18 GMT
Hi Daniel,

I have downloaded hadoop-0.20.2+320.tar.gz from this location:
http://archive.cloudera.com/cdh/3/
I have also changed the *dfs.support.append* flag to *true* in my
*hdfs-site.xml*, as mentioned here:
http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport
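
As a sketch, the corresponding hdfs-site.xml fragment would look like this (property name taken from the wiki page above; whether it actually enables sync depends on the Hadoop build, as discussed below):

```xml
<!-- hdfs-site.xml: enable append/sync support so HBase can flush its WAL
     to the datanodes. Must be set on the HDFS side and visible to HBase. -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```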

But data loss is still happening. Am I using the right version?
Are there any other settings I need to change so that data gets flushed to
HDFS?

Thanks,
Gagan



On Thu, Aug 26, 2010 at 11:57 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

> That, or use CDH3b2.
>
> J-D
>
> On Thu, Aug 26, 2010 at 11:22 AM, Gagandeep Singh
> <gagandeep.singh@paxcel.net> wrote:
> > Thanks Daniel
> >
> > It means I have to check out the code from the branch and build it on my
> > local machine.
> >
> > Gagan
> >
> >
> > On Thu, Aug 26, 2010 at 9:51 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
> >
> >> Then I would expect some form of data loss, yes, because stock Hadoop
> >> 0.20 doesn't have any form of fsync so HBase doesn't know whether the
> >> data made it to the datanodes when appending to the WAL. Please use
> >> the 0.20-append hadoop branch with HBase 0.89 or cloudera's CDH3b2.
> >>
> >> J-D
> >>
> >> On Thu, Aug 26, 2010 at 7:22 AM, Gagandeep Singh
> >> <gagandeep.singh@paxcel.net> wrote:
> >> > HBase - 0.20.5
> >> > Hadoop - 0.20.2
> >> >
> >> > Thanks,
> >> > Gagan
> >> >
> >> >
> >> >
> >> > On Thu, Aug 26, 2010 at 7:11 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
> >> >
> >> >> Hadoop and HBase version?
> >> >>
> >> >> J-D
> >> >>
> >> >> On Aug 26, 2010 5:36 AM, "Gagandeep Singh" <gagandeep.singh@paxcel.net> wrote:
> >> >>
> >> >> Hi Group,
> >> >>
> >> >> I am testing HBase/HDFS failover. I am inserting 1M records from my
> >> >> HBase client application. I batch my Put operations so that 10 records
> >> >> are added to a List<Put> before I call table.put(). I have not
> >> >> modified the default Put settings, which means all data is written to
> >> >> the WAL, so in case of a server failure my data should not be lost.
> >> >>
> >> >> But I noticed somewhat strange behavior: if I kill my region server
> >> >> while adding records, my application waits until the region's data is
> >> >> moved to another region server. But in doing so, all my data is lost
> >> >> and my table is emptied.
> >> >>
> >> >> Could you help me understand this behavior? Is there some kind of
> >> >> cache involved in writing, because of which my data is lost?
> >> >>
> >> >>
> >> >> Thanks,
> >> >> Gagan
> >> >>
> >> >
> >>
> >
>
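
As an aside, the batched write pattern described in the quoted message (10 records per List<Put>, then one table.put() call) is plain fixed-size batching. A minimal stand-alone sketch of that logic with no HBase dependency (class and method names are illustrative; the flush counter stands in for the actual table.put(batch) call):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchWriter {
    static final int BATCH_SIZE = 10;

    // Collects records into fixed-size batches and "flushes" each full batch,
    // mirroring the List<Put> + table.put(list) pattern from the message above.
    static int writeInBatches(int totalRecords) {
        List<Integer> batch = new ArrayList<>();
        int flushes = 0;
        for (int i = 0; i < totalRecords; i++) {
            batch.add(i);
            if (batch.size() == BATCH_SIZE) {
                flushes++;       // stands in for table.put(batch)
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            flushes++;           // flush any trailing partial batch
        }
        return flushes;
    }

    public static void main(String[] args) {
        // 1M records in batches of 10 -> 100,000 flushes
        System.out.println(writeInBatches(1_000_000));
    }
}
```

Note that without a working fsync in the underlying HDFS (the point of the whole thread), each of those flushes only hands the WAL edits to HDFS without a durability guarantee.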
