incubator-cassandra-user mailing list archives

From dir dir <sikerasa...@gmail.com>
Subject Re: Regarding Cassandra Scalability
Date Sun, 18 Apr 2010 16:14:49 GMT
Hi Gary,

>The main reason is that the compaction operation (removing deleted
>values) currently requires that an entire row be read into memory.

Thank you for your explanation, but I still don't follow.

As I understand it, row contents must fit in available memory in any case:
if they do not, the software will raise an out-of-memory exception. Since it
is already true that "the row contents must fit in available memory", why do
you list this as a problem that Cassandra cannot solve?

You say: "the compaction operation requires that an entire row be read into
memory."

Is this an out-of-memory problem? When do we need to perform the compaction
operation, and in what situations is it performed?

Thank you.

Dir.


On Sun, Apr 18, 2010 at 7:41 PM, Gary Dusbabek <gdusbabek@gmail.com> wrote:

> On Sat, Apr 17, 2010 at 10:50, dir dir <sikerasakti@gmail.com> wrote:
> >
> > What problems can’t it solve?
> >
> > No flexible indices
> > No querying on non-PK values
> > Not good for binary data (>64MB) unless you chunk it
> > Row contents must fit in available memory
> >
> > Gary Dusbabek says: "Row contents must fit in available memory." Honestly,
> > I do not understand the meaning of that statement. Thank you.
> >
> > Dir.
> >
>
> The main reason is that the compaction operation (removing deleted
> values) currently requires that an entire row be read into memory.
>
> Gary Dusbabek
>
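[Editor's note] Gary's point can be illustrated with a simplified sketch. This
is not Cassandra's actual code; it is a toy model of what compaction does: each
SSTable on disk may hold a fragment of the same row, and compaction merges all
fragments and purges deleted columns (tombstones). In early Cassandra the
merged result was materialized as a single in-memory structure, which is why a
row had to fit in available memory. The `TOMBSTONE` marker and dict-based row
layout here are illustrative assumptions.

```python
# Simplified model of row compaction (illustrative, not Cassandra's code).
# Compaction merges the per-SSTable fragments of one row, newest write wins,
# and drops tombstoned (deleted) columns. The merged row is built entirely
# in memory -- the constraint Gary describes.

TOMBSTONE = object()  # hypothetical marker for a deleted column

def compact_row(fragments):
    """Merge fragments of one row, ordered oldest to newest; purge tombstones."""
    merged = {}
    for fragment in fragments:
        merged.update(fragment)  # the whole merged row lives in memory here
    # Drop columns whose latest value is a deletion marker.
    return {col: val for col, val in merged.items() if val is not TOMBSTONE}

# Two SSTables each hold part of row "user:42"; the newer one deletes "email".
sstable1 = {"name": "dir", "email": "old@example.com"}
sstable2 = {"email": TOMBSTONE, "city": "Jakarta"}
print(compact_row([sstable1, sstable2]))  # {'name': 'dir', 'city': 'Jakarta'}
```

If a single row's fragments together exceed available heap, this merge step
fails, regardless of how the row was written, which is why it was listed as a
limitation rather than an ordinary out-of-memory bug.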
