hbase-user mailing list archives

From Guanghao Zhang <zghao...@gmail.com>
Subject Re: How to limit a single row being filled with millions of columns?
Date Tue, 06 Dec 2016 07:29:42 GMT
Newer versions have a size limit in the scan context, and a scan can break between cells, which should help here. Which version is your cluster running?
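The "scan can break between cells" behavior mentioned above means a wide row can be returned in bounded chunks (partial results) instead of being buffered whole. A minimal sketch of the client-side idea, in plain Python rather than the real HBase client API (all names here are illustrative): count cells per row across partial chunks to spot oversized rows without holding any full row in memory at once.

```python
# Sketch: aggregate per-row cell counts from a scan whose results may be
# split mid-row into partial chunks, mimicking a size-limited HBase scan.
# find_wide_rows and the chunk format are illustrative, not HBase API.

def find_wide_rows(chunks, threshold):
    """chunks: iterable of (row_key, cells) pieces, where one row may
    span several consecutive chunks. Returns the rows whose total cell
    count exceeds threshold."""
    counts = {}
    for row_key, cells in chunks:
        counts[row_key] = counts.get(row_key, 0) + len(cells)
    return [row for row, n in counts.items() if n > threshold]

# Example: row "r1" comes back in two partial chunks.
chunks = [
    ("r1", ["c%d" % i for i in range(5000)]),        # first partial result
    ("r1", ["c%d" % i for i in range(5000, 9000)]),  # continuation of r1
    ("r2", ["c1", "c2"]),
]
print(find_wide_rows(chunks, threshold=1000))  # -> ['r1']
```

The same counting pass is one cheap way to answer the "how can I find a big row quickly" question below, since it only ever keeps counts, not cell data.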

2016-12-06 13:35 GMT+08:00 聪聪 <175998806@qq.com>:

> Thanks for your reply! How can I find a big row quickly? Otherwise, when a
> major compaction runs, the regionserver stops working for the whole time.
>
> ------------------ Original Message ------------------
> From: "Guanghao Zhang" <zghaobac@gmail.com>
> Date: Tuesday, December 6, 2016, 12:13 PM
> To: "user" <user@hbase.apache.org>
>
> Subject: Re: How to limit a single row being filled with millions of columns?
>
>
>
> There is a config, hbase.table.max.rowsize, but it only applies to user
> get/scan: it throws RowTooBigException when you read a big row with millions
> of columns. It does not apply to compaction. If I am not wrong, there is no
> way to prevent a single row from being filled with millions of columns.
>
> 2016-12-06 11:52 GMT+08:00 聪聪 <175998806@qq.com>:
>
> > Recently I have had a problem that has confused me for a long time. As we
> > all know, in HBase a single row can hold millions of columns. A full GC
> > happens when a region goes through a major compaction, which leaves the
> > regionserver and HBase unusable. Is there any good way to prevent too many
> > columns from being written into a single row?
> > Hope to hear from you soon!
>
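
As a concrete illustration of the hbase.table.max.rowsize setting discussed above: it is set in hbase-site.xml, applies only to user Gets and Scans (not compaction), and its shipped default is 1 GB. A sketch with an assumed example value of 100 MB:

```xml
<!-- hbase-site.xml: cap the total size a single row may reach in a
     Get or Scan before RowTooBigException is thrown. Example value
     100 MB; the default is 1073741824 (1 GB). -->
<property>
  <name>hbase.table.max.rowsize</name>
  <value>104857600</value>
</property>
```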
