From: Julien Anguenot
Subject: Re: how to force cassandra-stress to actually generate enough data
Date: Wed, 15 Jun 2016 07:46:22 -0500
To: user@cassandra.apache.org
I usually do a write-only bench run first. Doing 1B write iterations will produce 200GB+ of data on disk. You can then do mixed tests.
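As a back-of-envelope check on that figure (assuming roughly 200 bytes per row for the default stress schema; that per-row size is an assumption, not a measured value):

```shell
# Rough sanity check for the "1B writes -> 200GB+" claim.
# bytes_per_row is an assumed average for the default stress schema;
# actual on-disk size depends on schema, compression, and replication.
rows=1000000000
bytes_per_row=200
echo "$(( rows * bytes_per_row / 1000000000 )) GB before replication/compression"
```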

For instance, a write bench that would produce such a volume on a 3-node cluster:

./tools/bin/cassandra-stress write cl=LOCAL_QUORUM n=1000000000 -rate threads=10000 -node 1.2.3.1,1.2.3.2,1.2.3.4 -schema 'replication(strategy=NetworkTopologyStrategy,dallas=3)' -log file=raid5_ssd_1b_10kt_cl_quorum.log -graph file=raid5_ssd_1B_10kt_cl_quorum.html title=raid5_ssd_1B_10kt_cl_quorum revision=benchmark-0

After that you can do various mixed bench runs, with data, SSTables, and compactions kicking in.
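A mixed run of that kind might look like the sketch below; the ratio, counts, and node addresses are illustrative placeholders, not values from the original benchmark:

```shell
# Sketch of a follow-up mixed workload (1 write : 3 reads) against the
# dataset produced by the write phase. Thread count, n, and node
# addresses are placeholders, not values from the original run.
./tools/bin/cassandra-stress mixed ratio\(write=1,read=3\) cl=LOCAL_QUORUM \
  n=100000000 -rate threads=1000 \
  -node 1.2.3.1,1.2.3.2,1.2.3.4 \
  -log file=raid5_ssd_mixed_cl_quorum.log
```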

Not sure this is the best or advocated way to achieve the goal when starting with an empty disk and no dataset, though.
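For the wide-row problem raised in the quoted message below, the usual lever is a user-mode stress profile, where the `cluster` distribution on the clustering column controls rows per partition. A sketch only; the schema, file name, and all distributions and sizes here are assumptions for illustration:

```shell
# Sketch only: schema, file name, and distributions are assumptions.
# The cluster distribution on ck is what drives partition width.
cat > wide_rows.yaml <<'EOF'
keyspace: stresscql
keyspace_definition: |
  CREATE KEYSPACE stresscql WITH replication =
    { 'class': 'NetworkTopologyStrategy', 'dallas': 3 };
table: wide
table_definition: |
  CREATE TABLE wide (pk bigint, ck bigint, v blob, PRIMARY KEY (pk, ck));
columnspec:
  - name: pk
    population: uniform(1..660000)   # ~660k partitions, as in the question
  - name: ck
    cluster: uniform(1..100000)      # up to 100k rows per partition
  - name: v
    size: fixed(1024)                # ~1 KB per cell
insert:
  partitions: fixed(1)
  select: fixed(1)/100000            # insert a slice of each partition per op
EOF
./tools/bin/cassandra-stress user profile=wide_rows.yaml ops\(insert=1\) \
  n=100000000 -rate threads=400 -node 1.2.3.1,1.2.3.2,1.2.3.4
```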

   J.


> On Jun 15, 2016, at 7:24 AM, Peter Kovgan <peter.kovgan@ebsbrokertec.com> wrote:
>
> Hi,
>
> cassandra-stress is not really helping to populate the disk sufficiently.
>
> I tried several table structures, providing
>
> cluster: UNIFORM(1..10000000000) on the clustering parts of the PK.
>
> The partition part of the PK makes about 660,000 partitions.
>
> The hope was to create enough cells in a row, to make the row really WIDE.
>
> No matter what I tried, and no matter how long it ran, I see at most 2-3 SSTables per node and at most 300 MB of data per node.
>
> (I have 6 nodes and a very active 400-thread stress run.)
>
> It looks like it is impossible to make the row really wide and the disk really full.
>
> Is it intentional?
>
> I mean, if there was an intention to avoid really wide rows, why is there no hint about this in the docs?
>
> Do you have similar experience, and do you know how to resolve this?
>
> Thanks.


--
Julien Anguenot (@anguenot)
