hbase-user mailing list archives

From "聪聪" <175998...@qq.com>
Subject Re: How to limit a single row being filled with millions of columns?
Date Tue, 06 Dec 2016 12:55:45 GMT
I have many cells in one row, but each column is not too large. I want to find
the rows in the table that are filled with many columns. Do you have any suggestions?
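One way to locate such wide rows (a sketch, not something proposed in the thread): stream the cells out with a batched scan and tally distinct qualifiers per row key. Assuming the cells arrive as `(row_key, qualifier)` pairs, the counting logic could look like this:

```python
from collections import Counter

def widest_rows(cells, top_n=5):
    """Tally distinct (row_key, qualifier) pairs and return the rows
    with the most columns. `cells` is any iterable of
    (row_key, qualifier) tuples, e.g. streamed from a batched scan."""
    counts = Counter()
    seen = set()
    for row, qual in cells:
        if (row, qual) not in seen:  # count each column once per row
            seen.add((row, qual))
            counts[row] += 1
    return counts.most_common(top_n)

# Toy data: row r1 has 3 columns, row r2 has 1.
cells = [("r1", "a"), ("r1", "b"), ("r1", "c"), ("r2", "a")]
print(widest_rows(cells))  # [('r1', 3), ('r2', 1)]
```

On a real cluster the pairs would come from a scan with a batch size set (so a multi-million-column row does not arrive as one giant result), and the `seen` set could be dropped if the scan is known to return each column at most once.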
------------------ Original Message ------------------
From: "Phil Yang" <ud1937@gmail.com>
Date: Tuesday, December 6, 2016, 5:52 PM
To: "hbase-user" <user@hbase.apache.org>

Subject: Re: How to limit a single row being filled with millions of columns?



10 is a column-level setting; if you have many cells in one row but each
column is not too large, I don't think it will increase GC pressure. You may
need to check whether any single column's value is too large.

Thanks,
Phil
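The column-level batching Phil describes (the compaction writer handles cells N at a time, so only N cells need to be held on the heap at once) can be sketched as a chunked iteration, with 10 standing in for hbase.hstore.compaction.kv.max:

```python
from itertools import islice

def in_batches(cells, batch_size=10):
    """Yield cells in fixed-size batches, mirroring how compaction
    writes at most hbase.hstore.compaction.kv.max cells at a time."""
    it = iter(cells)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# 25 cells come out as batches of 10, 10, and 5:
sizes = [len(b) for b in in_batches(range(25))]
print(sizes)  # [10, 10, 5]
```

This is why a wide row alone need not blow up the heap during compaction: the heap cost per step is bounded by the batch size times the cell size, which is also why one very large cell value matters more than many small ones.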


2016-12-06 17:24 GMT+08:00 聪聪 <175998806@qq.com>:

> 1. The version of my cluster is hbase-0.98.6-cdh5.2.0. Because the table
> has a large amount of data and many rows, do you have any better ways to
> find big rows?
> 2. The default value of hbase.hstore.compaction.kv.max is 10. If there is
> a big row, what should we set the value to?
>
>
>
>
> ------------------ Original Message ------------------
> From: "Phil Yang" <yangzhe1991@apache.org>
> Date: Tuesday, December 6, 2016, 3:46 PM
> To: "hbase-user" <user@hbase.apache.org>
>
> Subject: Re: How to limit a single row being filled with millions of columns?
>
>
>
> We have hbase.hstore.compaction.kv.max to set the batch size during
> compaction, and its default value is 10, which means we write every 10
> cells to the writer. I think it can prevent using too much heap while
> compacting?
>
> Thanks,
> Phil
>
>
> 2016-12-06 15:29 GMT+08:00 Guanghao Zhang <zghaobac@gmail.com>:
>
> > Now the scan context has a size limit, and a scan can break between
> > cells. This should help here. What is the version of your cluster?
> >
> > 2016-12-06 13:35 GMT+08:00 聪聪 <175998806@qq.com>:
> >
> > > I am glad to receive your reply! How can I find a big row quickly?
> > > Otherwise, whenever we run a major compaction, the regionserver
> > > stops working for the whole time.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > ------------------ Original Message ------------------
> > > From: "Guanghao Zhang" <zghaobac@gmail.com>
> > > Date: Tuesday, December 6, 2016, 12:13 PM
> > > To: "user" <user@hbase.apache.org>
> > >
> > > Subject: Re: How to limit a single row being filled with millions of columns?
> > >
> > >
> > >
> > > There is a config, hbase.table.max.rowsize, but it is only used for
> > > user get/scan. It will throw a RowTooBigException when you scan a big
> > > row with millions of columns. But it doesn't work for compaction. If I
> > > am not wrong, there is no way to prevent a single row from being
> > > filled with millions of columns.
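For reference, the guard Guanghao mentions is set in hbase-site.xml; the fragment below is only a sketch, with the value shown being the documented default (1 GB):

```xml
<property>
  <name>hbase.table.max.rowsize</name>
  <!-- Max bytes a single row may return to a Get or Scan before a
       RowTooBigException is thrown; does not apply to compactions. -->
  <value>1073741824</value>
</property>
```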
> > >
> > > 2016-12-06 11:52 GMT+08:00 聪聪 <175998806@qq.com>:
> > >
> > > > Recently, I have had a problem that has confused me for a long time.
> > > > As we all know, in HBase a single row can hold millions of columns.
> > > > A full GC happens when a region undergoes major compaction, and it
> > > > causes the regionserver and HBase to stop working. Is there any good
> > > > way to prevent a single row from being written with too many columns?
> > > > Hope to hear from you soon!
> > >
> >
>