hbase-user mailing list archives

From stack <st...@duboce.net>
Subject Re: HBase Block Size should we change it?
Date Fri, 22 Aug 2008 02:39:42 GMT
On Thu, Aug 21, 2008 at 1:44 PM, Jim Kellerman <jim@powerset.com> wrote:

> HBase uses Hadoop MapFiles. Currently, there is no way to change the block
> size on such a file without changing the block size on the whole cluster.
>
> I would rank this as a premature optimization at this point. There are a
> lot of other areas where HBase spends the majority of its time. Once we
> knock down the big trees, we can see where we should focus our efforts next.
>
> ---
> Jim Kellerman, Senior Engineer; Powerset (a Microsoft Company)
>
> > -----Original Message-----
> > From: news [mailto:news@ger.gmane.org] On Behalf Of Billy Pearson
> > Sent: Thursday, August 21, 2008 1:14 PM
> > To: hbase-user@hadoop.apache.org
> > Subject: HBase Block Size should we change it?
> >
> > Have we looked at the option to set the default block size of the
> > HStoreFiles?
> >
> > Reading over the Bigtable paper, they use a default 64KB block size and
> > 8KB on tables that do heavy random reads.
> >
> > Should we make the block size an option in HBase?
> > If so, should we set it in hbase-default.xml or at the table level?
> >
> > In the Bigtable paper it looks like they set it at the cluster level,
> > since they have different clusters of servers for different data
> > types/apps.
> >
> > I would like to see this looked at, as I think it might help our
> > performance numbers, at the cost of using more memory on the Hadoop
> > namenode for the extra blocks.
> >
>
>
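The per-table or per-cluster option Billy asks about did not exist in HBase at the time of this thread. Purely as a sketch of what such a knob might look like in hbase-default.xml, here is a hypothetical property (the name hbase.hstorefile.blocksize is invented for illustration and is not a real setting from this era):

```xml
<!-- Hypothetical sketch only: this property name is invented for
     illustration; no such setting existed in HBase when this thread
     was written. -->
<property>
  <name>hbase.hstorefile.blocksize</name>
  <value>65536</value> <!-- 64KB default, per the Bigtable paper -->
  <description>Block size used when writing HStoreFiles. A smaller
  value (e.g. 8192) might suit tables with heavy random reads, at the
  cost of more block index entries to track.</description>
</property>
```

A per-table variant would presumably override this cluster-wide default in the table descriptor, which is roughly the direction the Bigtable paper's per-locality-group tuning suggests.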
