Subject: Re: How to adjust hbase settings when too many store files?
From: ramkrishna vasudevan <ramkrishna.s.vasudevan@gmail.com>
To: user@hbase.apache.org
Date: Tue, 30 Oct 2012 12:16:00 +0530

Hi

Can you check whether your HTable instances are shared across different
threads? That could be the reason for your NullPointerException.

Regards
Ram

On Tue, Oct 30, 2012 at 7:22 AM, xkwang bruce wrote:

> Hi, Su.
>
> You may need to pre-split your HTable when the load is heavy, or there may
> be some problem in your client code.
> Just a suggestion.
>
> bruce
>
>
> 2012/10/29 Su
>
> > Hi, everyone.
> >
> > I changed the max store file size of HBase and added more region servers.
> > The former exception doesn't happen again.
> > But there is another exception at the client side.
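bruce's pre-split suggestion above can be made concrete. The row keys in this thread (visible in region names like `statistic_visit_detail,20120804|72495|854956,...` quoted further down) begin with a `yyyyMMdd` date, so split points could be generated per month and handed to `HBaseAdmin.createTable(descriptor, splitKeys)` before loading. A hypothetical sketch of just the split-key computation — the month range is made up, and only the key logic is shown so it runs without a cluster:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitKeys {
    // Build one split point per month, matching row keys shaped like
    // "yyyyMMdd|visitorId|pageId" as seen in the region names in this thread.
    static List<String> monthlySplits(int year, int fromMonth, int toMonth) {
        List<String> splits = new ArrayList<>();
        for (int m = fromMonth; m <= toMonth; m++) {
            splits.add(String.format("%04d%02d01", year, m));
        }
        return splits;
    }

    public static void main(String[] args) {
        // In a real client these strings, converted to byte[], would be
        // passed to HBaseAdmin.createTable(descriptor, splitKeys)
        // before the bulk load starts.
        for (String s : monthlySplits(2012, 8, 11)) {
            System.out.println(s);
        }
    }
}
```

Pre-splitting this way spreads the initial write load across region servers instead of hammering one region until it splits under pressure.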
> > 2012-10-29 19:06:27:758 WARN [pool-2-thread-2]
> > org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> > | Failed all from
> > region=statistic_visit_detail1,,1351508069797.3272dd30817191d9d393d1d6e1b99d1b.,
> > hostname=hadoop02, port=60020
> > java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NullPointerException
> >     at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> >     at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1557)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
> >     at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:900)
> >     at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:773)
> >     at org.apache.hadoop.hbase.client.HTable.put(HTable.java:760)
> >     at com.lietou.datawarehouse.imp.HBaseImporter$ActualHBaseImporter$1.process(HBaseImporter.java:150)
> >     at com.lietou.datawarehouse.imp.HBaseImporter$ActualHBaseImporter$1.process(HBaseImporter.java:133)
> >     at com.lietou.datawarehouse.common.range.Repeater.rangeRepeat(Repeater.java:48)
> >     at com.lietou.datawarehouse.common.range.Repeater.rangeRepeat(Repeater.java:30)
> >     at com.lietou.datawarehouse.imp.HBaseImporter$ActualHBaseImporter.run(HBaseImporter.java:162)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >     at java.lang.Thread.run(Thread.java:662)
> > Caused by: java.lang.RuntimeException: java.lang.NullPointerException
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1371)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1383)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1381)
> >     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >     ... 3 more
> > Caused by: java.lang.NullPointerException
> >     at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:158)
> >     at $Proxy10.multi(Unknown Source)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1386)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1384)
> >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1365)
> >     ... 7 more
> >
> > This error happens quite often.
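Ram's diagnosis at the top of the thread fits this trace: the NPE surfaces under `HTable.put()`/`flushCommits()` from multiple pool threads, and `HTable` in this era of the client is not thread-safe, so a single shared instance can corrupt its internal write buffer. A minimal runnable sketch of the one-instance-per-thread fix, using a stand-in class in place of the real `HTable` so it runs without a cluster:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PerThreadTableDemo {
    // Stand-in for org.apache.hadoop.hbase.client.HTable, which is NOT
    // thread-safe: each worker thread must own its own instance.
    static class FakeTable {
        private final long ownerThread = Thread.currentThread().getId();
        void put(String row) {
            // A shared real HTable can fail unpredictably under concurrent
            // put(); this stand-in just asserts single-thread ownership.
            if (ownerThread != Thread.currentThread().getId()) {
                throw new IllegalStateException("HTable shared across threads");
            }
        }
    }

    // One table per worker thread, created lazily on first use.
    private static final ThreadLocal<FakeTable> TABLE =
            ThreadLocal.withInitial(FakeTable::new);

    static boolean runDemo(int puts) {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (int i = 0; i < puts; i++) {
            final int n = i;
            pool.submit(() -> TABLE.get().put("row-" + n));
        }
        pool.shutdown();
        try {
            return pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(runDemo(100) ? "all puts done" : "timed out");
    }
}
```

With the real client, the same isolation comes from creating one `HTable` per thread over a shared configuration (or using the era's `HTablePool`) rather than passing one instance to every worker.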
And on the server side, here are some
> > warnings:
> >
> > 2012-10-29 19:50:39,748 WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
> > {"processingtimems":17476,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@62aaeb8e), rpc version=1, client version=29, methodsFingerPrint=54742778","client":"192.168.1.70:3237","starttimems":1351511422270,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
> >
> > I have only 5 threads executing put actions at the same time.
> > The machine load is not very high:
> >
> > top - 19:55:02 up 7:56, 4 users, load average: 1.62, 1.36, 1.11
> >
> > Has anyone met this error before? Please help me.
> >
> > Thanks.
> >
> >
> > -----Original Message-----
> > From: Su [mailto:sucheng@lietou.com]
> > Sent: Monday, October 29, 2012 3:53 PM
> > To: user@hbase.apache.org
> > Subject: Re: How to adjust hbase settings when too many store files?
> > I checked the region server log again, and I found something below:
> >
> > 2012-10-28 06:24:24,811 ERROR org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> > Compaction failed regionName=statistic_visit_detail,20120922|13984|451728,1351376659451.9b2bfae5d77109693a153eb16fcb7793.,
> > storeName=cf1, fileCount=7, fileSize=1.1g (681.7m, 168.6m, 139.9m, 36.2m, 53.0m, 26.5m, 5.9m),
> > priority=0, time=469259302083252
> > java.io.IOException: java.io.IOException: File
> > /hbase/statistic_visit_detail/9b2bfae5d77109693a153eb16fcb7793/.tmp/3a3e6ee88a524659b9f9716e5ca21a74
> > could only be replicated to 0 nodes, instead of 1
> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1531)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:685)
> >     at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> >     at java.lang.reflect.Method.invoke(Unknown Source)
> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> >
> > It seems that the fileSize already exceeds the max size, which is 1G by default.
> >
> > So I added more region servers and enlarged the max file size, to see if it works.
> >
> > Thanks a lot.
> >
> > -----Original Message-----
> > From: Ramkrishna.S.Vasudevan [mailto:ramkrishna.vasudevan@huawei.com]
> > Sent: Monday, October 29, 2012 2:06 PM
> > To: user@hbase.apache.org
> > Subject: RE: How to adjust hbase settings when too many store files?
> >
> > Also, what is the heap size of your RS?
> > When you say hTable.put(), how many such threads are there?
> >
> > What is your region size?
Are your regions splitting continuously due to
> > heavy load?
> >
> > Regards
> > Ram
> >
> > > -----Original Message-----
> > > From: yuzhihong@gmail.com [mailto:yuzhihong@gmail.com]
> > > Sent: Monday, October 29, 2012 10:01 AM
> > > To: user@hbase.apache.org
> > > Subject: Re: How to adjust hbase settings when too many store files?
> > >
> > > What version of hbase were you using?
> > > Did you pre-split the table before loading?
> > >
> > > Thanks
> > >
> > >
> > > On Oct 28, 2012, at 8:33 PM, Su wrote:
> > >
> > > > Hello. I encountered a region server error when I tried to put bulk data
> > > > from a Java client.
> > > >
> > > > The Java client extracts data from a relational database and puts that data
> > > > into HBase.
> > > >
> > > > The error happens when I try to extract data from a large table (say, 1
> > > > billion records).
> > > >
> > > > The region server's log says:
> > > >
> > > >> 2012-10-28 00:00:02,169 WARN org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> > > >> Region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8.
> > > >> has too many store files; delaying flush up to 90000ms
> > > >
> > > >> 2012-10-28 00:00:02,791 DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> > > >> Flush thread woke up because memory above low water=347.1m
> > > >
> > > >> 2012-10-28 00:00:02,791 DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> > > >> Under global heap pressure: Region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8.
> > > >> has too many store files, but is 141.5m vs best flushable region's 46.8m. Choosing the bigger.
> > > >> 2012-10-28 00:00:02,791 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> > > >> Flush of region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8.
> > > >> due to global heap pressure
> > > >
> > > > ...
> > > >
> > > > And finally,
> > > >
> > > >> 2012-10-28 00:00:43,511 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> > > >> compaction interrupted by user
> > > >> java.io.InterruptedIOException: Aborting compaction of store cf1 in region
> > > >> statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. because user requested stop.
> > > >>     at org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1275)
> > > >>     at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:765)
> > > >>     at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1023)
> > > >>     at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:177)
> > > >>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> > > >>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> > > >>     at java.lang.Thread.run(Thread.java:662)
> > > >
> > > > Then the region server shuts down.
> > > >
> > > > It seems that too many store files (due to too many records coming from the
> > > > relational DB) consumed too much memory, if I'm right.
> > > >
> > > > I'm new to HBase; what settings should I adjust? Or should I even add more
> > > > region servers?
> > > >
> > > > I'm going to do some research by myself, and any advice will be appreciated.
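One way to act on Ram's heap-size question: the "memory above low water=347.1m" line in the log above lets you back out the region server's heap, assuming the stock `hbase.regionserver.global.memstore.lowerLimit` of 0.35 (an assumption — check hbase-site.xml for an override):

```java
public class HeapEstimate {
    // Invert: lowWater = heap * lowerLimit, so heap = lowWater / lowerLimit.
    static long estimateHeapMb(double lowWaterMb, double lowerLimit) {
        return Math.round(lowWaterMb / lowerLimit);
    }

    public static void main(String[] args) {
        double lowWaterMb = 347.1; // from the MemStoreFlusher DEBUG line above
        double lowerLimit = 0.35;  // assumed default global memstore lower limit
        // ~992 MB, i.e. roughly the stock 1 GB heap -- small for a bulk load
        System.out.println("estimated RS heap ~ "
                + estimateHeapMb(lowWaterMb, lowerLimit) + " MB");
    }
}
```

If the estimate is right, the region server is running near the default 1 GB heap, which would explain the constant "global heap pressure" flushes during a billion-record import.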
> > > > Best regards,
> > > >
> > > > Su
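Pulling the thread's advice together: the settings implicated by these logs live in hbase-site.xml. The property names below are the real keys for this era of HBase, but the values are illustrative assumptions to tune against your own load, not tested recommendations:

```xml
<!-- Illustrative hbase-site.xml fragment; values are assumptions. -->
<property>
  <!-- Flushes are delayed once a store reaches this many files; the
       "too many store files; delaying flush up to 90000ms" warning
       comes from hitting this limit. Default is 7. -->
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>15</value>
</property>
<property>
  <!-- The 90000 ms delay seen in the log: how long a flush waits on
       the blocking-store-files condition before proceeding anyway. -->
  <name>hbase.hstore.blockingWaitTime</name>
  <value>90000</value>
</property>
<property>
  <!-- Max store file size before a region splits; the "1G default"
       Su mentions. Larger values mean fewer splits during bulk load. -->
  <name>hbase.hregion.max.filesize</name>
  <value>4294967296</value>
</property>
```

Note also that "could only be replicated to 0 nodes" is raised by the HDFS NameNode, so it is worth confirming datanode health and free disk space alongside any HBase tuning.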