cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-875) Performance regression tests, take 2
Date Mon, 22 Mar 2010 15:25:27 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12848163#action_12848163 ]

Jonathan Ellis commented on CASSANDRA-875:
------------------------------------------

> Would this tool be used as part of the continuous integration process?

The easier it is to integrate with Hudson, the more likely that is to happen.  But if there
is tension between those goals, I would rather spend time first on making a useful tool
than on making it Hudson-able.

> If so, is it aimed at entire coverage or just basic regression tests to make sure new
> feature <x> hasn't caused too much of a problem? Would it take into account different
> configurations for read/write heavy nodes?

"Yes."

The right approach is to get something simple working and add features as time permits.

> How generic should the performance test be? 

It should be Cassandra-specific, not generic to other databases.

> If you could provide any other sources that have been successful, it would be much appreciated.

There's vpork, a Voldemort benchmark, but my impression is that it's rather less sophisticated
than our own stress.py.

Brian has indicated that he wants to OSS his stuff, but lawyers are getting in the way; he's
probably willing to offer tips if you contact him.

> Performance regression tests, take 2
> ------------------------------------
>
>                 Key: CASSANDRA-875
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-875
>             Project: Cassandra
>          Issue Type: Test
>          Components: Tools
>            Reporter: Jonathan Ellis
>
> We have a stress test in contrib/py_stress, and Paul has a tool using libcloud to automate
> running it against an ephemeral cluster of rackspace cloud servers, but to really qualify
> as "performance regression tests" we need to
>  - test a wide variety of data types (skinny rows, wide rows, different comparator types,
>    different value byte[] sizes, etc)
>  - produce pretty graphs.  seriously.
>  - archive historical data somewhere for comparison (rackspace can provide a VM to host
>    a db for this, if the ASF doesn't have something in place for this kind of thing already)
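As a rough illustration of the requirements above, a harness could enumerate a matrix of workload shapes, run the stress tool for each, and archive results for later comparison. This is only a sketch: the names, the axes chosen, and the stubbed run_stress() are hypothetical, and a real harness would shell out to contrib/py_stress and write to the shared historical db mentioned in the ticket.

```python
import itertools
import sqlite3
import time

# Illustrative axes from the ticket description; a real matrix would also
# cover comparator types, cluster configurations, etc.
ROW_SHAPES = ["skinny", "wide"]
VALUE_SIZES = [64, 1024, 65536]  # value byte[] sizes in bytes

def run_stress(row_shape, value_size):
    """Placeholder for driving contrib/py_stress against a test cluster.
    Stubbed here so the sketch is runnable; a real version would return
    measured throughput in ops/sec."""
    return 0.0

def run_matrix(db_path=":memory:"):
    """Run every combination in the matrix and archive one row per run."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS results
                  (ts REAL, row_shape TEXT, value_size INTEGER,
                   ops_per_sec REAL)""")
    for row_shape, value_size in itertools.product(ROW_SHAPES, VALUE_SIZES):
        throughput = run_stress(row_shape, value_size)
        db.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                   (time.time(), row_shape, value_size, throughput))
    db.commit()
    return db.execute("SELECT COUNT(*) FROM results").fetchone()[0]
```

With the archived rows keyed by timestamp and workload shape, the "pretty graphs" step reduces to plotting ops_per_sec over time per (row_shape, value_size) pair.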

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

