Date: Fri, 17 May 2019 14:20:57 +0100 (BST)
From: Nux!
To: dev@cloudstack.apache.org
Cc: users
Subject: Re: Poor NVMe Performance with KVM

What happens when you set the deadline scheduler in both HV and guest?

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

----- Original Message -----
> From: "Ivan Kudryavtsev"
> To: "users", "dev"
> Sent: Friday, 17 May, 2019 14:16:31
> Subject: Re: Poor NVMe Performance with KVM

> BTW, you may think that the improvement is achieved by caching, but I clear
> the cache with
>
>     sync && echo 3 > /proc/sys/vm/drop_caches
>
> So I can't claim it for sure and need other opinions, but it looks like for
> NVMe, writethrough must be used if you want a high IO rate. At least with
> the Intel P4500.
>
> Fri, 17 May 2019, 20:04, Ivan Kudryavtsev:
>
>> Well, just FYI, I changed cache_mode from NULL (none) to writethrough
>> directly in the DB and the performance boosted greatly. It may be an
>> important feature for NVMe drives.
>>
>> Currently, on 4.11, the user can set the cache mode for disk offerings, but
>> cannot for service offerings, which are translated to cache=none in the
>> corresponding disk offerings.
>>
>> The only way is to use SQL to change that for root disk offerings. The
>> createServiceOffering API doesn't support cache mode. This can be a
>> serious limitation for NVMe users, because by default they could see poor
>> read/write performance.
>>
>> Fri, 17 May 2019, 19:30, Ivan Kudryavtsev:
>>
>>> Darius, thanks for your participation.
>>>
>>> First, I used the 4.14 kernel, which is the default one for my cluster.
>>> Next, I switched to 4.15 with dist-upgrade.
>>>
>>> Do you have an idea how to set the number of queues for virtio-scsi with
>>> CloudStack?
>>>
>>> Fri, 17 May 2019, 19:26, Darius Kasparavičius:
>>>
>>>> Hi,
>>>>
>>>> I can see a few issues with your XML file. You can try using "queues"
>>>> inside your disk definitions. This should help a little; I'm not sure by
>>>> how much in your case, but in my specific case it went up by almost the
>>>> number of queues. Also try cache directsync or writethrough.
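Darius's "queues" suggestion corresponds to libvirt's multiqueue setting on the virtio-scsi controller. Since CloudStack 4.11 exposes no offering parameter for it, a manual edit of the domain XML is one way to test the effect. The fragment below is a sketch: queues='4' is an assumed value (a common rule of thumb is one queue per vCPU), not something taken from this thread.

```shell
# Sketch of libvirt's virtio-scsi multiqueue knob (queues='4' is an
# example value, typically set to the VM's vCPU count).
cat > virtio-scsi-queues.xml <<'EOF'
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver queues='4'/>
</controller>
EOF

# To try it, replace the existing <controller type='scsi'> block via
# `virsh edit <vm-name>`, then power-cycle the VM (virsh destroy/start).
grep "driver queues" virtio-scsi-queues.xml
```

Note that edits made this way are typically overwritten when CloudStack regenerates the domain XML (e.g. on a stop/start from the API), so this is only useful for benchmarking.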
>>>> You should switch kernels if the bugs are still there with the 4.15
>>>> kernel.
>>>>
>>>> On Fri, May 17, 2019 at 12:14 PM Ivan Kudryavtsev wrote:
>>>> >
>>>> > Hello, colleagues.
>>>> >
>>>> > I hope someone can help me. I just deployed a new VM host with an
>>>> > Intel P4500 local-storage NVMe drive.
>>>> >
>>>> > From the hypervisor host I can get the expected performance with FIO:
>>>> > 200K read IOPS, 3 GB/s; write performance is also as high as expected.
>>>> >
>>>> > I've created a new KVM VM service offering with the virtio-scsi
>>>> > controller (tried virtio as well) and the VM is deployed. Now I try to
>>>> > benchmark it with FIO. The results are very strange:
>>>> >
>>>> > 1. Read/write with large blocks (1M) shows the expected performance
>>>> > (my limits are R=1000/W=500 MB/s).
>>>> >
>>>> > 2. Write with direct=0 reaches the expected 50K IOPS, while write with
>>>> > direct=1 gives very moderate 2-3K IOPS.
>>>> >
>>>> > 3. Read with direct=0 and direct=1 both give 3000 IOPS.
>>>> >
>>>> > During the benchmark I see VM IOWAIT=20%, while host IOWAIT is 0%,
>>>> > which is strange.
>>>> >
>>>> > So, basically, from inside the VM my NVMe works very slowly when small
>>>> > IOs are executed. From the host, it works great.
>>>> >
>>>> > I tried to mount the volume with NBD to /dev/nbd0 and benchmark it.
>>>> > Read performance is nice. Has anyone managed to get small-block IOPS
>>>> > out of NVMe with KVM?
>>>> >
>>>> > The filesystem is XFS; I previously tried EXT4 - the results are the
>>>> > same.
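The benchmark described above can be reproduced with fio along these lines. This is a reconstruction, not Ivan's exact invocation: the target file path, job count, and queue depth are assumptions.

```shell
# Build (and print) fio command lines matching the small-block tests
# described above. direct=1 bypasses the guest page cache and hits the
# virtualised I/O path, which is where the 2-3K IOPS figure appears.
fio_cmd() {  # $1 = target file, $2 = rw pattern, $3 = direct flag
  printf 'fio --name=%s --filename=%s --rw=%s --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio --direct=%s --size=4G --runtime=60 --group_reporting\n' \
    "$2" "$1" "$2" "$3"
}

fio_cmd /mnt/test/fio.dat randwrite 1   # slow case reported: ~2-3K IOPS
fio_cmd /mnt/test/fio.dat randwrite 0   # buffered case: ~50K IOPS
fio_cmd /mnt/test/fio.dat randread 1    # read case: ~3K IOPS either way
```

Running the identical job file on the hypervisor (against a file on the same NVMe filesystem) and inside the guest is what isolates the virtualisation layer as the bottleneck.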
>>>> > This is the part of the VM XML definition generated by CloudStack
>>>> > (most XML tags were stripped by the mail archive; the values that
>>>> > survive are):
>>>> >
>>>> >   emulator: /usr/bin/kvm-spice
>>>> >   disk source: file='/var/lib/libvirt/images/6809dbd0-4a15-4014-9322-fe9010695934'
>>>> >   disk source: file='/var/lib/libvirt/images/ac43742c-3991-4be1-bff1-7617bf4fc6ef'
>>>> >   I/O limits (apparently the <iotune> read/write bytes and IOPS caps):
>>>> >     1048576000, 524288000, 100000, 50000
>>>> >   serial: 6809dbd04a1540149322
>>>> >   (plus drive/PCI <address> elements that survive only partially)
>>>> >
>>>> > So what I see now is that it works slower than a couple of Samsung 960
>>>> > PROs, which is extremely strange.
>>>> >
>>>> > Thanks in advance.
>>>> >
>>>> >
>>>> > --
>>>> > With best regards, Ivan Kudryavtsev
>>>> > Bitworks LLC
>>>> > Cell RU: +7-923-414-1515
>>>> > Cell USA: +1-201-257-1512
>>>> > WWW: http://bitworks.software/
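Coming back to Nux's question at the top of the thread: the I/O scheduler can be inspected and switched from sysfs on both the hypervisor and the guest. A sketch follows; the device names are assumptions, and on 4.15 blk-mq kernels the deadline variant is called mq-deadline.

```shell
# Helper: pull the active (bracketed) scheduler out of a sysfs scheduler line.
active_scheduler() { sed -n 's/.*\[\([^]]*\)\].*/\1/p'; }

# On the hypervisor (device name nvme0n1 assumed):
#   cat /sys/block/nvme0n1/queue/scheduler      # e.g. "[none] mq-deadline kyber"
#   echo mq-deadline | sudo tee /sys/block/nvme0n1/queue/scheduler
# Inside the guest, do the same for its virtual disk (sda for virtio-scsi,
# vda for plain virtio).

# The helper itself just parses the sysfs bracket format:
echo '[none] mq-deadline kyber' | active_scheduler   # prints: none
```

The setting is per-device and not persistent across reboots; a udev rule or kernel command line option is the usual way to make it stick.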