From: Jeff Jirsa
Date: Thu, 18 Apr 2019 06:57:35 -0700
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] multiple Cassandra instances per server, possible?

Agreed that you can go larger than 1 TB on SSD.

You can do this safely with both instances in the same cluster if you guarantee two replicas aren't on the same machine. Cassandra provides a primitive to do this: rack awareness through the network topology snitch.

The limitation (until 4.0) is that you'll need two IPs per machine, as both instances have to run on the same port.

-- 
Jeff Jirsa

> On Apr 18, 2019, at 6:45 AM, Durity, Sean R <SEAN_R_DURITY@homedepot.com> wrote:
>
> What is the data problem that you are trying to solve with Cassandra? Is it high availability? Low latency queries? Large data volumes? High concurrent users? I would design the solution to fit the problem(s) you are solving.
>
> For example, if high availability is the goal, I would be very cautious about 2 nodes/machine. If you need the full amount of the disk – you *can* have larger nodes than 1 TB. I agree that administration tasks (like adding/removing nodes, etc.) are more painful with large nodes – but not impossible. For large amounts of data, I like nodes that have about 2.5 – 3 TB of usable SSD disk.
>
> It is possible that your nodes might be under-utilized, especially at first. But if the hardware is already available, you have to use what you have.
>
> We have done multiple nodes on single physical hardware, but they were two separate clusters (for the same application). In that case, we had a different install location and different ports for one of the clusters.
>
> Sean Durity
>
> From: William R <triole@protonmail.com.INVALID>
> Sent: Thursday, April 18, 2019 9:14 AM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL] multiple Cassandra instances per server, possible?
>
> Hi all,
>
> In our small company we have 10 nodes of 6 TB each (2 x 3 TB HD), 128 GB RAM and 64 cores, and we are thinking of using them as Cassandra nodes. From what I am reading around, the community recommends that every node should not keep more than 1 TB of data, so I am wondering if it is possible to install 2 instances per node using Docker, so that each Docker instance can write to its own physical disk and utilise the rest of the hardware (CPU & RAM) more efficiently.
>
> I understand that with this setup there is the danger of creating a single point of failure for 2 Cassandra nodes, but apart from that, do you think this is a feasible setup to start the cluster with?
>
> Apart from the Docker solution, do you recommend any other way to split the physical node into 2 instances? (VMware? Or maybe even 2 separate installations of Cassandra?)
>
> Eventually we are aiming for a cluster consisting of 2 DCs with 10 nodes each (5 bare-metal nodes with 2 Cassandra instances).
>
> Probably later, when we start introducing more nodes to the cluster, we can decommission the "double-instanced" ones and aim for a more homogeneous solution.
>
> Thank you,
>
> Wil
>
> The information in this Internet Email is confidential and may be legally privileged. It is intended solely for the addressee. Access to this Email by anyone else is unauthorized. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. When addressed to our clients any opinions or advice contained in this Email are subject to the terms and conditions expressed in any applicable governing The Home Depot terms of business or client engagement letter. The Home Depot disclaims all responsibility and liability for the accuracy and content of this attachment and for any damages or losses arising from any inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other items of a destructive nature, which may be contained in this attachment and shall not be liable for direct, indirect, consequential or special damages in connection with this e-mail message or its attachment.
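[Editor's note: Jeff's rack-awareness suggestion can be sketched concretely. With GossipingPropertyFileSnitch, each instance reads its own cassandra-rackdc.properties; if the two instances sharing a physical machine are assigned the same logical rack, NetworkTopologyStrategy spreads replicas across racks, so no two replicas of a range land on the same box (assuming at least as many racks as the replication factor). A minimal, illustrative config; the dc/rack names are assumptions, not from the thread:]

```properties
# cassandra-rackdc.properties -- identical for BOTH instances on physical host 1
# (used by GossipingPropertyFileSnitch; names below are illustrative)
dc=DC1
# one logical rack per physical machine, so rack-aware replication
# never places two replicas of the same token range on this box
rack=host1
```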
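[Editor's note: Sean's separate-install approach amounts to giving the second instance its own directories and ports. A hedged sketch of the cassandra.yaml deltas for instance B; the paths and port numbers are assumptions, though the keys are standard cassandra.yaml settings:]

```yaml
# instance-b cassandra.yaml deltas (instance A keeps the defaults)
data_file_directories:
    - /mnt/disk2/cassandra/data          # second physical disk
commitlog_directory: /mnt/disk2/cassandra/commitlog
saved_caches_directory: /mnt/disk2/cassandra/saved_caches
storage_port: 7001            # inter-node traffic; instance A stays on 7000
native_transport_port: 9043   # CQL clients; instance A stays on 9042
# JMX is set in cassandra-env.sh instead, e.g. JMX_PORT="7299"
```

Note that differing storage ports only work for Sean's two-separate-clusters case: before 4.0, all nodes of one cluster must gossip on the same storage_port, which is exactly why Jeff says a shared-cluster setup needs a second IP per machine instead.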
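[Editor's note: for the Docker route William asks about, a sketch along these lines is common; the image tag, IP addresses, and host paths are assumptions, not from the thread. Host networking plus one listen address per container satisfies Jeff's "two IPs, same port" constraint, and each container writes to its own physical disk:]

```shell
# Two Cassandra containers on one physical host, one per disk and per host IP.
# CASSANDRA_LISTEN_ADDRESS and CASSANDRA_SEEDS are environment variables
# supported by the official cassandra image; the addresses are illustrative.
docker run -d --name cass-a --network host \
  -v /mnt/disk1/cassandra:/var/lib/cassandra \
  -e CASSANDRA_LISTEN_ADDRESS=10.0.0.11 \
  -e CASSANDRA_SEEDS=10.0.0.11 \
  cassandra:3.11

docker run -d --name cass-b --network host \
  -v /mnt/disk2/cassandra:/var/lib/cassandra \
  -e CASSANDRA_LISTEN_ADDRESS=10.0.0.12 \
  -e CASSANDRA_SEEDS=10.0.0.11 \
  cassandra:3.11
```

With host networking, both containers would also contend for the default JMX port (7199), so one instance needs that overridden as well.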