cassandra-user mailing list archives

From Jeff Jirsa <jji...@gmail.com>
Subject Re: [EXTERNAL] multiple Cassandra instances per server, possible?
Date Thu, 18 Apr 2019 13:57:35 GMT
Agreed that you can go larger than 1 TB on SSD.

You can do this safely with both instances in the same cluster if you guarantee two replicas
aren’t on the same machine. Cassandra provides a primitive to do this - rack awareness through
the network topology snitch. 
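
For illustration, a minimal sketch assuming GossipingPropertyFileSnitch (the dc/rack names
are made up): give both instances on a physical host the same rack, and NetworkTopologyStrategy
will place at most one replica of a range on that host, as long as you have at least as many
racks as your replication factor.

    # cassandra-rackdc.properties, identical for BOTH instances on physical host 1
    dc=dc1
    rack=host1    # one rack per physical machine

    -- keyspace placed with NetworkTopologyStrategy, RF 3 in dc1
    CREATE KEYSPACE app WITH replication =
      {'class': 'NetworkTopologyStrategy', 'dc1': 3};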

The limitation (until 4.0) is that you’ll need two IPs per machine, as both instances have
to run on the same port.
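
As a sketch with made-up addresses (standard cassandra.yaml keys), each instance binds its
own IP and keeps the default ports:

    # instance A, conf/cassandra.yaml
    listen_address: 10.0.0.11
    rpc_address: 10.0.0.11

    # instance B, conf/cassandra.yaml
    listen_address: 10.0.0.12
    rpc_address: 10.0.0.12

    # both instances keep storage_port: 7000 and native_transport_port: 9042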


-- 
Jeff Jirsa


> On Apr 18, 2019, at 6:45 AM, Durity, Sean R <SEAN_R_DURITY@homedepot.com> wrote:
> 
> What is the data problem that you are trying to solve with Cassandra? Is it high availability?
> Low latency queries? Large data volumes? High concurrent users? I would design the solution
> to fit the problem(s) you are solving.
>  
> For example, if high availability is the goal, I would be very cautious about 2 nodes/machine.
> If you need the full amount of the disk, you *can* have nodes larger than 1 TB. I agree
> that administration tasks (like adding/removing nodes, etc.) are more painful with large nodes,
> but not impossible. For large amounts of data, I like nodes that have about 2.5–3 TB of
> usable SSD disk.
>  
> It is possible that your nodes might be under-utilized, especially at first. But if the
> hardware is already available, you have to use what you have.
>  
> We have done multiple nodes on a single physical server, but they were two separate clusters
> (for the same application). In that case, we had a different install location and different
> ports for one of the clusters, as sketched below.
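> 
> As a rough sketch (paths and port numbers here are illustrative, not our actual values),
> the second cluster's cassandra.yaml shifted each port so both clusters could share the
> host's single IP:
> 
>     # /opt/cassandra-cluster2/conf/cassandra.yaml
>     cluster_name: 'cluster2'
>     storage_port: 7100             # default 7000
>     ssl_storage_port: 7101         # default 7001
>     native_transport_port: 9142    # default 9042
>     # JMX moved as well, via JMX_PORT in conf/cassandra-env.sh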
>  
> Sean Durity
>  
> From: William R <triole@protonmail.com.INVALID> 
> Sent: Thursday, April 18, 2019 9:14 AM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL] multiple Cassandra instances per server, possible?
>  
> Hi all,
>  
> In our small company we have 10 servers with 6 TB each (2 x 3 TB HDD), 128 GB RAM and 64 cores,
> and we are thinking of using them as Cassandra nodes. From what I am reading around, the community
> recommends that each node hold no more than 1 TB of data, so I am wondering whether it is
> possible to install 2 instances per server using Docker, so that each Docker instance can
> write to its own physical disk and use the rest of the hardware (CPU & RAM) more efficiently.
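> 
> A rough sketch of what I have in mind (image tag, addresses, paths and resource limits are
> hypothetical), pinning each container to its own disk and about half the host's resources:
> 
>     docker run -d --name cassandra-a --network host \
>       -e CASSANDRA_LISTEN_ADDRESS=10.0.0.11 \
>       -e CASSANDRA_RPC_ADDRESS=10.0.0.11 \
>       -v /mnt/disk1/cassandra:/var/lib/cassandra \
>       --cpus 30 --memory 60g \
>       cassandra:3.11
>     # cassandra-b: same, but with 10.0.0.12 and /mnt/disk2/cassandra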
>  
> I understand that with this setup there is the danger of creating a single point of failure
> for 2 Cassandra nodes, but apart from that, do you think this is a feasible setup to start
> the cluster with?
>  
> Apart from the Docker solution, do you recommend any other way to split the physical node
> into 2 instances? (VMware? Or maybe even 2 separate installations of Cassandra?)
>  
> Eventually we are aiming for a cluster consisting of 2 DCs with 10 nodes each (5 bare-metal
> servers with 2 Cassandra instances each).
>  
> Probably later, when we start introducing more nodes to the cluster, we can decommission
> the "double-instanced" ones and aim for a more homogeneous solution.
>  
> Thank you,
>  
> Wil
> 
> 
