From: Todd Lipcon <todd@cloudera.com>
Date: Sat, 4 Dec 2010 15:04:29 -0800
Subject: Re: Local sockets
To: dev@hbase.apache.org

On Sat, Dec 4, 2010 at 2:57 PM, Vladimir Rodionov wrote:

> From my own experiments, the performance difference is huge even on
> sequential R/W operations (up to 300%) when you do local file I/O vs. HDFS
> file I/O.
>
> The overhead of HDFS I/O is substantial, to say the least.

Much of this is from checksumming, though - turn off checksums and you
should see about a 2x improvement at least.

-Todd

> Best regards,
> Vladimir Rodionov
> Principal Platform Engineer
> Carrier IQ, www.carrieriq.com
> e-mail: vrodionov@carrieriq.com
>
> ________________________________________
> From: Todd Lipcon [todd@cloudera.com]
> Sent: Saturday, December 04, 2010 12:30 PM
> To: dev@hbase.apache.org
> Subject: Re: Local sockets
>
> Hi Leen,
>
> Check out HDFS-347 for more info on this. I hope to pick this back up in
> 2011 - in 2010 we mostly focused on stability over performance in HBase's
> interactions with HDFS.
>
> Thanks
> -Todd
>
> On Sat, Dec 4, 2010 at 12:28 PM, Leen Toelen wrote:
>
> > Hi,
> >
> > Has anyone tested the performance impact (when an HDFS datanode and an
> > HBase region server are on the same machine) of using Unix domain socket
> > communication or shared-memory IPC via NIO? I guess this should make a
> > difference on reads?
> >
> > Regards,
> > Leen
>
> --
> Todd Lipcon
> Software Engineer, Cloudera

--
Todd Lipcon
Software Engineer, Cloudera
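
For context, a minimal sketch of the checksum toggle Todd refers to, using the
stock Hadoop FileSystem client API (FileSystem.setVerifyChecksum(false)). The
class name, file path argument, and buffer size below are illustrative
assumptions, not details from the thread:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NoChecksumReadSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Skip client-side checksum verification on subsequent reads.
        // This trades end-to-end corruption detection for lower CPU cost,
        // which is the overhead Todd attributes much of the gap to.
        fs.setVerifyChecksum(false);

        Path path = new Path(args[0]);   // any large HDFS file (hypothetical input)
        byte[] buf = new byte[64 * 1024];
        long bytes = 0;
        long start = System.currentTimeMillis();

        FSDataInputStream in = fs.open(path);
        try {
          int n;
          while ((n = in.read(buf)) > 0) {
            bytes += n;
          }
        } finally {
          in.close();
        }

        long millis = System.currentTimeMillis() - start;
        System.out.println("Read " + bytes + " bytes in " + millis + " ms");
      }
    }

Running this against a file hosted on a co-located datanode, with and without
the setVerifyChecksum(false) call, gives a rough way to reproduce the
comparison; the "about 2x" figure above is Todd's estimate, not a guarantee.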