incubator-cassandra-user mailing list archives

From Dan Kuebrich <dan.kuebr...@gmail.com>
Subject Re: Upgrade to a different version?
Date Thu, 17 Mar 2011 21:59:03 GMT
Do people have success stories with 0.7.4?  It seems like the list only
hears if there's a major problem with a release, which means that if you're
trying to judge the stability of a release you're looking for silence.  But
maybe that means not many people have tried it yet.  Is there a record of
this anywhere?
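As an aside, the "move to another node and retry" approach Thibaut describes below can be sketched roughly like this. This is a hypothetical helper, not Hector's actual API; the class name, exception type, and host names are made up for illustration:

```java
import java.util.List;

// Rough sketch of "move to another node and retry". The class, exception,
// and host names here are hypothetical, not part of Hector or Cassandra.
public class FailoverRetry {

    static class NodeTimeoutException extends RuntimeException {
        NodeTimeoutException(String msg) { super(msg); }
    }

    interface NodeQuery<T> {
        T run(String host); // may throw NodeTimeoutException
    }

    // Try each host in turn; return the first successful result,
    // or rethrow the last timeout if every node failed.
    static <T> T queryWithFailover(List<String> hosts, NodeQuery<T> query) {
        NodeTimeoutException last = null;
        for (String host : hosts) {
            try {
                return query.run(host);
            } catch (NodeTimeoutException e) {
                last = e; // this node timed out: fall through to the next one
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        List<String> hosts = List.of("cass1:9160", "cass2:9160");
        // Simulated query: the first node always times out.
        String result = queryWithFailover(hosts, host -> {
            if (host.startsWith("cass1")) {
                throw new NodeTimeoutException("timeout on " + host);
            }
            return "ok from " + host;
        });
        System.out.println(result); // prints "ok from cass2:9160"
    }
}
```

In practice you would also want to bound the number of retries and back off between attempts, so a cluster-wide problem doesn't turn into a retry storm.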

On Thu, Mar 17, 2011 at 5:41 PM, Thibaut Britz <
thibaut.britz@trendiction.com> wrote:

> As for the version,
>
> we will wait a few more days, and if nothing really bad shows up, move to
> 0.7.4.
>
>
>
> On Thu, Mar 17, 2011 at 10:40 PM, Thibaut Britz <
> thibaut.britz@trendiction.com> wrote:
>
>> Hi Paul,
>>
>> It's more of a scientific mining app. We crawl websites and extract
>> information from these websites for our clients. For us, it doesn't really
>> matter if one Cassandra node replies after 1 second or a few ms, as long as
>> the throughput over time stays high. And so far, this seems to be the case.
>>
>> If you are using Hector, be sure to use the latest Hector version. There
>> were a few bugs related to error handling in earlier versions (e.g.
>> threads hanging forever waiting for an answer). I occasionally see
>> timeouts, but we then just move to another node and retry.
>>
>> Thibaut
>>
>>
>>
>> On Thu, Mar 17, 2011 at 6:53 PM, Paul Pak <ppak@yellowseo.com> wrote:
>>
>>> On 3/17/2011 1:06 PM, Thibaut Britz wrote:
>>> > If it helps you to sleep better,
>>> >
>>> > we use Cassandra (0.7.2 with the flush fix) in production on > 100
>>> > servers.
>>> >
>>> > Thibaut
>>> >
>>>
>>> Thanks Thibaut, believe it or not, it does. :)
>>>
>>> Is your use case a typical web app or something like a scientific/data
>>> mining app?  I ask because I'm wondering how you have managed to deal
>>> with the stop-the-world garbage collection issues that seem to hit most
>>> clusters under significant load and cause application timeouts.
>>> Have you found that Cassandra scales in read/write capacity reasonably
>>> well as you add nodes?
>>>
>>> Also, you may want to backport these fixes at a minimum:
>>>
>>>  * reduce memory use during streaming of multiple sstables
>>> (CASSANDRA-2301)
>>>  * update memtable_throughput to be a long (CASSANDRA-2158)
>>>
>>>
>>>
>>>
>>
>
