cassandra-commits mailing list archives

From "Chris Goffinet (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-3945) Support incremental/batch sizes for BulkRecordWriter, due to GC overhead issues
Date Wed, 22 Feb 2012 08:14:49 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-3945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Goffinet updated CASSANDRA-3945:
--------------------------------------

    Description: When loading large amounts of data, the BulkRecordWriter currently writes
out all the sstables and only then streams them. This caused GC overhead issues for us because
of the heap sizes of our reducers: the number of SSTables that had to be kept open on disk
could kill the JVM process. We also wanted a way to stream sstables incrementally as they
were created. I added support for configuring this. The default behavior is to wait for all
sstables to be created, but any value >= 1 sets the batch size for incremental streaming.
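
A minimal sketch of how a job might set such a batch size through the Hadoop Configuration.
The property name "mapreduce.output.bulkoutputformat.batch.size" is an assumption for
illustration only; see the attached patch for the actual setting.

    import org.apache.hadoop.conf.Configuration;

    public class BulkLoadConfig
    {
        public static void main(String[] args)
        {
            Configuration conf = new Configuration();
            // Hypothetical property name, for illustration only.
            // Default (0): write all sstables first, then stream them at the end.
            // A value >= 1 streams sstables in batches of that size as they are
            // written, so fewer sstables stay open and GC pressure drops.
            conf.setInt("mapreduce.output.bulkoutputformat.batch.size", 4);
            System.out.println("batch size = "
                    + conf.getInt("mapreduce.output.bulkoutputformat.batch.size", 0));
        }
    }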
    
> Support incremental/batch sizes for BulkRecordWriter, due to GC overhead issues
> -------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-3945
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3945
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Chris Goffinet
>            Assignee: Chris Goffinet
>            Priority: Minor
>             Fix For: 1.1.0
>
>
> When loading large amounts of data, the BulkRecordWriter currently writes out all the
> sstables and only then streams them. This caused GC overhead issues for us because of the
> heap sizes of our reducers: the number of SSTables that had to be kept open on disk could
> kill the JVM process. We also wanted a way to stream sstables incrementally as they were
> created. I added support for configuring this. The default behavior is to wait for all
> sstables to be created, but any value >= 1 sets the batch size for incremental streaming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

       
