Mailing-List: contact issues-help@hbase.apache.org; run by ezmlm
Message-ID: <3685373.32371277354809363.JavaMail.jira@thor>
Date: Thu, 24 Jun 2010 00:46:49 -0400 (EDT)
From: "stack (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] Resolved: (HBASE-2774) Spin in ReadWriteConsistencyControl eating CPU (load > 40) and no progress running YCSB on clean cluster startup
In-Reply-To: <17284048.10171277273809358.JavaMail.jira@thor>

     [ https://issues.apache.org/jira/browse/HBASE-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-2774.
--------------------------

    Hadoop Flags: [Reviewed]
   Fix Version/s: 0.21.0
      Resolution: Fixed

Committed to development branch and to trunk.

> Spin in ReadWriteConsistencyControl eating CPU (load > 40) and no progress running YCSB on clean cluster startup
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-2774
>                 URL: https://issues.apache.org/jira/browse/HBASE-2774
>             Project: HBase
>          Issue Type: Bug
>            Reporter: stack
>             Fix For: 0.21.0
>
>         Attachments: 2774-v4.txt, 2774-v5.txt, sync-wait3.txt
>
>
> When I try to do a YCSB load, RSs will spin up massive load but make no progress. It seems to happen to each RS in turn until they do their first flush. They stay in the high-load mode for maybe 5-10 minutes or so and then fall out of the bad condition.
> Here is my ugly YCSB command (haven't gotten around to tidying it up yet):
> {code}
> $ java -cp build/ycsb.jar:/home/hadoop/current/conf/:/home/hadoop/current/hbase-0.21.0-SNAPSHOT.jar:/home/hadoop/current/lib/hadoop-core-0.20.3-append-r956776.jar:/home/hadoop/current/lib/zookeeper-3.3.1.jar:/home/hadoop/current/lib/commons-logging-1.1.1.jar:/home/hadoop/current/lib/log4j-1.2.15.jar com.yahoo.ycsb.Client -load -db com.yahoo.ycsb.db.HBaseClient -P workloads/5050 -p columnfamily=values -s -threads 100 -p recordcount=10000000
> {code}
> The cluster is 5 regionservers NOT running hadoop-core-0.20.3-append-r956776, but rather an old head of branch-0.20 hadoop.
> It seems that it's easy to repro if you start fresh. It might happen later in loading, but it seems as though after the first flush we're OK.
> It comes on pretty immediately. The server that is taking on the upload has its load start to climb gradually up into the 40s and then stays there. Later it falls when the condition clears.
> Here is the content of my yahoo workload file:
> {code}
> recordcount=100000000
> operationcount=100000000
> workload=com.yahoo.ycsb.workloads.CoreWorkload
> readallfields=true
> readproportion=0.5
> updateproportion=0.5
> scanproportion=0
> insertproportion=0
> requestdistribution=zipfian
> {code}
> Here is my hbase-site.xml:
> {code}
> <property>
>   <name>hbase.regions.slop</name>
>   <value>0.01</value>
>   <description>Rebalance if regionserver has average + (average * slop) regions.
>   Default is 30% slop.
>   </description>
> </property>
> <property>
>   <name>hbase.zookeeper.quorum</name>
>   <value>XXXXXXXXX</value>
> </property>
> <property>
>   <name>hbase.regionserver.hlog.blocksize</name>
>   <value>67108864</value>
>   <description>Block size for HLog files. To minimize potential data loss,
>   the size should be (avg key length) * (avg value length) * flushlogentries.
>   Default 1MB.
>   </description>
> </property>
> <property>
>   <name>hbase.hstore.blockingStoreFiles</name>
>   <value>25</value>
> </property>
> <property>
>   <name>hbase.rootdir</name>
>   <value>hdfs://svXXXXXX:9000/hbase</value>
>   <description>The directory shared by region servers.</description>
> </property>
> <property>
>   <name>hbase.cluster.distributed</name>
>   <value>true</value>
> </property>
> <property>
>   <name>zookeeper.znode.parent</name>
>   <value>/stack</value>
>   <description>the path in zookeeper for this cluster</description>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.2</value>
>   <description>The size of the block cache used by HFile/StoreFile. Set to 0 to disable.</description>
> </property>
> <property>
>   <name>hbase.hregion.memstore.block.multiplier</name>
>   <value>8</value>
>   <description>Block updates if memcache has hbase.hregion.block.memcache
>   time hbase.hregion.flush.size bytes. Useful preventing
>   runaway memcache during spikes in update traffic. Without an
>   upper-bound, memcache fills such that when it flushes the
>   resultant flush files take a long time to compact or split, or
>   worse, we OOME.
>   </description>
> </property>
> <property>
>   <name>zookeeper.session.timeout</name>
>   <value>60000</value>
> </property>
> <property>
>   <name>hbase.regionserver.handler.count</name>
>   <value>60</value>
>   <description>Count of RPC Server instances spun up on RegionServers.
>   Same property is used by the HMaster for count of master handlers.
>   Default is 10.
>   </description>
> </property>
> <property>
>   <name>hbase.regions.percheckin</name>
>   <value>20</value>
> </property>
> <property>
>   <name>hbase.regionserver.maxlogs</name>
>   <value>128</value>
> </property>
> <property>
>   <name>hbase.regionserver.logroll.multiplier</name>
>   <value>2.95</value>
> </property>
> {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
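For readers of the archive: the failure mode in the issue title, handler threads spinning inside ReadWriteConsistencyControl while waiting for the memstore read point to catch up, can be illustrated with a minimal sketch. The class, field, and method names below are invented for illustration only; this is not the patch in 2774-v4.txt/2774-v5.txt nor the code committed to trunk. It simply contrasts a busy-spin wait, which can drive machine load the way described above when many of the 60 configured RPC handlers block at once, with a monitor-based wait/notify.

{code}
// Illustrative sketch only: hypothetical names, not HBase's actual
// ReadWriteConsistencyControl implementation.
public class ReadPointWaitSketch {

  // Read point that writers advance and readers wait on.
  private volatile long readPoint = 0;
  private final Object readWaiters = new Object();

  // Busy-spin wait: every thread parked here keeps a core at 100%,
  // so with many handlers waiting, load climbs while no work completes.
  public void waitForReadPointSpinning(long writeNumber) {
    while (readPoint < writeNumber) {
      // spin
    }
  }

  // Blocking wait: threads sleep on a monitor and are woken when the
  // read point advances, so waiting costs essentially no CPU.
  public void waitForReadPointBlocking(long writeNumber) throws InterruptedException {
    synchronized (readWaiters) {
      while (readPoint < writeNumber) {
        readWaiters.wait();
      }
    }
  }

  // Writer side: advance the read point and wake any blocked readers.
  public void advanceReadPoint(long newPoint) {
    synchronized (readWaiters) {
      if (newPoint > readPoint) {
        readPoint = newPoint;
      }
      readWaiters.notifyAll();
    }
  }
}
{code}

For the actual change, consult the attachments listed on the issue (2774-v4.txt, 2774-v5.txt, sync-wait3.txt); the sketch only shows why a spinning wait turns blocked handlers into runaway load while a blocking wait does not.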