hbase-user mailing list archives

From "Stuart Scott" <Stuart.Sc...@e-mis.com>
Subject RE: HBase Stability
Date Mon, 21 Mar 2011 20:44:11 GMT


Thanks for getting back to me so promptly.

I will get the latest version installed and see if that helps.


I've tried various methods for inserting the data, but the latest
version was just a simple 'table.put' within a loop, to try to eliminate
other issues. The content of each row was under 1 KB. I tried adding
periodic table flushes, etc., and it made no difference. We tried a Java
memory caching patch that we found; that made no difference either.
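For reference, the per-row put described above looks roughly like the sketch below against the 0.90-era Java client (current at the time of this thread). The table name, column family, qualifier, and row-key scheme are made up for illustration, and this needs a running cluster to execute:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SimplePutLoop {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // "test_table" and column family "cf" are hypothetical names.
        HTable table = new HTable(conf, "test_table");
        try {
            for (int i = 0; i < 1000000; i++) {
                Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
                put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"),
                        Bytes.toBytes("value-" + i));
                // With auto-flush on (the client default), each put
                // is sent to the region server as a separate RPC.
                table.put(put);
            }
        } finally {
            table.close();
        }
    }
}
```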


The data node machines are around 4-5 years old. What sort of minimum
spec would we be looking at to get reasonable performance? I was under
the impression that we could run the cluster on some basic servers and
still see reasonable performance.


Thanks again for your comments... nice to know someone is listening.

From: Ted Dunning [mailto:tdunning@maprtech.com] 
Sent: 21 March 2011 20:20
To: user@hbase.apache.org
Cc: Stuart Scott
Subject: Re: HBase Stability


No, MapReduce is not really necessary to add so few rows.


Our internal tests repeatedly load 10-100 million rows without much
fuss.  And that is on clusters ranging from 3 to 11 nodes.
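One client-side setting worth checking for a load like this: with the 0.90-era API, disabling auto-flush so that puts are buffered and shipped in batches is usually much cheaper than one RPC per row. A minimal sketch, assuming a hypothetical table "test_table" with column family "cf" (this also requires a running cluster):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedLoad {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "test_table"); // hypothetical table
        table.setAutoFlush(false);                     // buffer puts client-side
        table.setWriteBufferSize(4 * 1024 * 1024);     // ship a batch every ~4 MB
        try {
            for (int i = 0; i < 1000000; i++) {
                Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
                put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"),
                        Bytes.toBytes("value-" + i));
                table.put(put); // queued in the write buffer, not sent yet
            }
            table.flushCommits(); // push any puts still in the buffer
        } finally {
            table.close();        // close() also flushes outstanding commits
        }
    }
}
```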

On Mon, Mar 21, 2011 at 1:17 PM, Stuart Scott <Stuart.Scott@e-mis.com> wrote:

Is the only way to upload (say 1,000,000 rows) via MapReduce? Or should
we be able to just 'put' new records without the system falling apart?

