hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Never ending major compaction?
Date Wed, 02 Jan 2013 06:07:36 GMT
bq. It took about 6h to complete.

If the above behavior is reproducible, we should investigate more deeply.

Thanks for sharing.
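
bq. Is there a way to see the compaction queue?

Each region server exports a compactionQueueSize metric, so the queue can be
watched from outside. Below is a minimal sketch that reads it over JMX; the
port (10102) assumes you enabled JMX remote in hbase-env.sh, and the bean and
attribute names are the 0.94-era ones, so treat all of these as assumptions
to verify against your own deployment:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class CompactionQueueProbe {
        public static void main(String[] args) throws Exception {
            // Assumptions: region server host and JMX port; adjust to your cluster.
            String host = args.length > 0 ? args[0] : "localhost";
            String port = args.length > 1 ? args[1] : "10102";
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
                // 0.94-era bean name; newer releases moved metrics around.
                ObjectName rs = new ObjectName(
                    "hadoop:service=RegionServer,name=RegionServerStatistics");
                Object queued = mbsc.getAttribute(rs, "compactionQueueSize");
                System.out.println(host + " compactionQueueSize = " + queued);
            } finally {
                jmxc.close();
            }
        }
    }

The same number also shows up on each region server's info page (default port
60030), so a probe like this is mainly useful if you want to script the check
across all 7 servers.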

On Tue, Jan 1, 2013 at 6:14 PM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:

> Yes, I'm running on 0.94.3. The last major compaction ran yesterday;
> it runs almost daily. That's why I was surprised it took so long. I mean,
> I'm only compacting regions that moved, so it should be pretty quick.
> But that was not the case: it took about 6h to complete. Strange. Maybe
> something went wrong when I stopped/started HBase.
>
> Also, there was almost no activity on the network or on the CPUs. I
> will have to add disk monitoring in Ganglia to see whether I was limited
> by I/O...
>
> I looked at the region server logs and everything was fine. They were
> showing some compaction information every few seconds.
>
> JM
>
> 2013/1/1, Ted <yuzhihong@gmail.com>:
> > You're on HBase 0.94.3, right?
> >
> > When was the last time major compaction ran?
> >
> > Compaction is region server activity, so you should be able to find
> > some clue in the region server log (see the sketch after the quoted
> > thread).
> >
> > Cheers
> >
> > On Jan 1, 2013, at 11:42 AM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
> >
> >> Hi,
> >>
> >> I have a table with about 200,000 rows; keys are string[64] (ish) and
> >> values are string[512].
> >>
> >> It's split over 16 regions located on 7 region servers.
> >>
> >> So it's not a big table, and there is a lot of horsepower behind it.
> >>
> >> I triggered a major compaction a few hours ago, let's say about 5
> >> hours ago, and it's still compacting! But all server activity seems
> >> to be nil. CPU usage is almost 0.
> >>
> >> There is nothing in the master logs.
> >>
> >> How can I see what's going on? Is there a way to see the compaction
> >> queue?
> >>
> >> JM
> >
>
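
As Ted notes above, the clues live in the region server logs, and the
compaction lines can also be pulled out mechanically. A minimal sketch in
that vein, with the log path as an assumption (point it at wherever your
distribution writes region server logs):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class CompactionLogGrep {
        public static void main(String[] args) throws IOException {
            // Assumption: log location varies by install; pass the real path
            // as an argument if the default below does not match.
            String path = args.length > 0 ? args[0]
                : "/var/log/hbase/hbase-regionserver.log";
            BufferedReader in = new BufferedReader(new FileReader(path));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    // Region servers log compaction progress; keep any line
                    // that mentions it so start/finish times line up.
                    if (line.toLowerCase().contains("compact")) {
                        System.out.println(line);
                    }
                }
            } finally {
                in.close();
            }
        }
    }

Comparing timestamps on consecutive compaction lines should show whether one
region is stuck or the queue is simply being drained slowly.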
