hbase-dev mailing list archives

From Elliott Clark <ecl...@stumbleupon.com>
Subject Re: Performance Testing
Date Thu, 21 Jun 2012 19:13:36 GMT
I actually think that more measurements are needed than just per release.
The best I could hope for would be a four-node+ cluster (one master and
three slaves) that, for every check-in on trunk, runs multiple different
perf workloads:

   - All Reads (Scans)
   - Large Writes (Should test compactions/flushes)
   - Read Dominated with 10% writes (rough sketch after the list)

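To make the third workload concrete, here is a minimal sketch of what a
read-dominated client with roughly 10% writes could look like against the
0.94-era HBase client API. The table name, column family, and key space
are all invented for illustration; this is just the shape of the thing,
not a real harness:

    import java.util.Random;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    // Read-dominated workload with ~10% writes. The "perf_test" table,
    // family "f", and 10M-row key space are made-up values.
    public class MixedWorkload {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "perf_test");
        byte[] family = Bytes.toBytes("f");
        byte[] qualifier = Bytes.toBytes("q");
        Random rand = new Random();
        long ops = 1000000;
        long start = System.currentTimeMillis();
        for (long i = 0; i < ops; i++) {
          byte[] row = Bytes.toBytes(rand.nextInt(10000000));
          if (rand.nextInt(10) == 0) {
            // ~10% of operations are writes
            Put put = new Put(row);
            put.add(family, qualifier, Bytes.toBytes(rand.nextLong()));
            table.put(put);
          } else {
            // ~90% of operations are point reads
            table.get(new Get(row));
          }
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println((ops * 1000.0 / elapsed) + " ops/sec");
        table.close();
      }
    }

The all-reads and large-writes workloads would be the same loop with the
mix turned all the way one direction or the other.
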
Then every check-in can be evaluated and large regressions can be treated
as bugs, and with that we can see the difference between the different
versions as well. http://arewefastyet.com/ is kind of the model that I
would love to see, and I'm more than willing to help wherever needed.
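
A minimal sketch of the kind of per-check-in gate I mean is below. The 10%
threshold and the idea of a stored baseline are assumptions for the sake of
the example, not anything we have today; arewefastyet does something far
richer:

    // Hypothetical regression gate: compare a run's throughput against a
    // baseline and fail loudly on a large drop, so CI treats the
    // regression as a failing build.
    public class RegressionGate {
      /** True if current throughput regressed more than 10% vs. baseline. */
      static boolean isRegression(double baselineOpsPerSec,
          double currentOpsPerSec) {
        return currentOpsPerSec < baselineOpsPerSec * 0.90;
      }

      public static void main(String[] args) {
        double baseline = Double.parseDouble(args[0]); // e.g. mean of last N runs
        double current = Double.parseDouble(args[1]);  // this check-in's run
        if (isRegression(baseline, current)) {
          System.err.println("PERF REGRESSION: " + current
              + " ops/sec vs baseline " + baseline);
          System.exit(1);
        }
      }
    }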

However, in reality, running every night will probably be more feasible,
and four nodes is probably not going to happen either.

On Thu, Jun 21, 2012 at 11:38 AM, Andrew Purtell <apurtell@apache.org> wrote:

> On Wed, Jun 20, 2012 at 10:37 PM, Ryan Ausanka-Crues
> <ryan@palominolabs.com> wrote:
> > I think it makes sense to start by defining the goals for the
> > performance testing project and then deciding what we'd like to
> > accomplish. As such, I'd start by soliciting ideas from everyone on
> > what they would like to see from the project. We can then collate
> > those thoughts and prioritize the different features. Does that sound
> > like a reasonable approach?
>
> In terms of defining a goal, the fundamental need I see for us as a
> project is to quantify performance from one release to the next, and
> thus be able to avoid regressions by noting adverse changes in release
> candidates.
>
> In terms of defining what "performance" means... well, that's an
> involved and separate discussion I think.
>
> Best regards,
>    - Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet
> Hein (via Tom White)
