cassandra-user mailing list archives

From Jonathan Haddad <...@jonhaddad.com>
Subject Re: Write/read heavy usecase in one cluster
Date Thu, 24 Dec 2015 00:37:10 GMT
While I would normally suggest splitting different systems onto different
hardware, you can easily get away with using 3 rather small machines for
this workload.  Just be sure not to use SimpleStrategy (use
NetworkTopologyStrategy instead) so you can split the keyspaces out to
different clusters later if you need to.
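A minimal sketch of that keyspace DDL in CQL (the keyspace name, datacenter name, and replication factor here are placeholders, not recommendations):

```cql
-- NetworkTopologyStrategy ties replication to named datacenters,
-- so a keyspace can later be moved or split out per-DC.
-- SimpleStrategy ignores topology and makes that migration much harder.
CREATE KEYSPACE login_events
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3
  };
```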

On Wed, Dec 23, 2015 at 2:21 PM Robert Wille <rwille@fold3.com> wrote:

> I would personally classify both of those use cases as light, and I
> wouldn’t have any qualms about using a single cluster for both of those.
>
> On Dec 23, 2015, at 3:06 PM, cass savy <casssavy@gmail.com> wrote:
>
> > How do you determine whether 2 different applications can share a
> cluster in prod?
> >
> >  1. Has anybody shared a prod cluster between a write-heavy use case
> that captures user login info (a few hundred requests per minute, hardly
> any reads per day) and a read-heavy use case (92% reads at 10k requests
> per minute, with a higher consistency level of QUORUM)?
> >
> >
> > 2. Use of in-memory tables for lookup tables that will be consulted on
> every request prior to writing to the transactional tables. Has anyone
> used this in prod, and what issues were encountered? What tuning or
> recommendations should be followed for prod?
> >
> > 3. Use of multiple data directories for different applications, e.g.
> separate data partitions for the write-heavy and read-heavy workloads and
> a separate one for the commitlog/caches?
> >
> > 4. We plan to use C* 2.1 with vnodes/Murmur3 for the above use cases.
> Need feedback on whether people have tried tuning heap size and off-heap
> parameters in C* 2.0 and above in prod.
> >
> > 5. Java 8 with C* 2.0 and higher: pros/cons, especially with the G1
> garbage collector?
>
>
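Regarding question 3, a sketch of the relevant cassandra.yaml settings (the mount paths are hypothetical examples):

```yaml
# cassandra.yaml: Cassandra distributes sstables across all listed
# data directories. Keeping the commitlog on its own device avoids
# seek contention between the commitlog's sequential writes and
# data-file reads/compaction.
data_file_directories:
    - /mnt/disk1/cassandra/data
    - /mnt/disk2/cassandra/data
commitlog_directory: /mnt/ssd1/cassandra/commitlog
saved_caches_directory: /mnt/ssd1/cassandra/saved_caches
```

Note that the split here is per-device, not per-application: Cassandra does not pin specific keyspaces to specific data directories.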
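Regarding question 5, G1 is enabled via JVM flags, typically in cassandra-env.sh; a sketch (the heap size and pause target are example values, not recommendations):

```sh
# Replace the default CMS/ParNew flags with G1. G1 generally only
# pays off with larger heaps (roughly 8 GB and up), and is tuned
# via a pause-time target rather than new-gen sizing.
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"
JVM_OPTS="$JVM_OPTS -Xms8G -Xmx8G"
```

If you enable G1, remove the CMS-specific flags (e.g. -XX:+UseConcMarkSweepGC and the new-gen sizing options) from the same file, since the two collectors' options conflict.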
