Subject: Re: Storing images in Hbase
From: Varun Sharma <varun@pinterest.com>
To: user@hbase.apache.org, magnito@gmail.com
Date: Mon, 21 Jan 2013 17:12:47 -0800

On Mon, Jan 21, 2013 at 5:10 PM, Varun Sharma wrote:

> Thanks for the useful information. I wonder why you use only 5G heap when
> you have an 8G machine? Is there a reason not to use all of it (the
> DataNode typically takes about 1G of RAM)?
>
> On Sun, Jan 20, 2013 at 11:49 AM, Jack Levin wrote:
>
>> I forgot to mention that I also have this setup:
>>
>>   <property>
>>     <name>hbase.hregion.memstore.flush.size</name>
>>     <value>33554432</value>
>>     <description>Flush more often. Default: 67108864</description>
>>   </property>
>>
>> This parameter works per region, so if any of my 400 (currently) regions
>> on a regionserver has 30MB+ in its memstore, HBase will flush that
>> region to disk.
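The cluster-wide hbase.hregion.memstore.flush.size above can also be scoped to a single table at creation time. Below is a rough sketch against the 0.94-era Java admin API; the "images" table and "d" column family are invented names for illustration, not something from this setup.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateImageTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Per-table override of the memstore flush threshold (~32 MB),
    // instead of changing hbase-site.xml for the whole cluster.
    HTableDescriptor desc = new HTableDescriptor("images");
    desc.setMemStoreFlushSize(32L * 1024 * 1024);
    desc.addFamily(new HColumnDescriptor("d"));

    admin.createTable(desc);
    admin.close();
  }
}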
>> Here are some metrics from a regionserver:
>>
>> requests=2, regions=370, stores=370, storefiles=1390,
>> storefileIndexSize=304, memstoreSize=2233, compactionQueueSize=0,
>> flushQueueSize=0, usedHeap=3516, maxHeap=4987,
>> blockCacheSize=790656256, blockCacheFree=255245888,
>> blockCacheCount=2436, blockCacheHitCount=218015828,
>> blockCacheMissCount=13514652, blockCacheEvictedCount=2561516,
>> blockCacheHitRatio=94, blockCacheHitCachingRatio=98
>>
>> Note that the memstore is only 2G; this particular regionserver's heap
>> is set to 5G.
>>
>> And last but not least, it's very important to have a good GC setup:
>>
>> export HBASE_OPTS="$HBASE_OPTS -verbose:gc -Xms5000m \
>>   -XX:CMSInitiatingOccupancyFraction=70 -XX:+PrintGCDetails \
>>   -XX:+PrintGCDateStamps -XX:+HeapDumpOnOutOfMemoryError \
>>   -Xloggc:$HBASE_HOME/logs/gc-hbase.log \
>>   -XX:MaxTenuringThreshold=15 -XX:SurvivorRatio=8 \
>>   -XX:+UseParNewGC \
>>   -XX:NewSize=128m -XX:MaxNewSize=128m \
>>   -XX:-UseAdaptiveSizePolicy \
>>   -XX:+CMSParallelRemarkEnabled \
>>   -XX:-TraceClassUnloading"
>>
>> -Jack
>>
>> On Thu, Jan 17, 2013 at 3:29 PM, Varun Sharma wrote:
>>
>> > Hey Jack,
>> >
>> > Thanks for the useful information. By flush size being 15%, do you mean
>> > the memstore flush size? 15% would mean close to 1G; have you seen any
>> > issues with flushes taking too long?
>> >
>> > Thanks
>> > Varun
>> >
>> > On Sun, Jan 13, 2013 at 8:17 AM, Jack Levin wrote:
>> >
>> >> That's right, the memstore size, not the flush size, is increased.
>> >> Filesize is 10G. Overall write cache is 60% of heap and read cache is
>> >> 20%. Flush size is 15%. 64 maxlogs at 128MB. One namenode server, one
>> >> secondary that can be promoted. On the way to HBase, images are
>> >> written to a queue, so that we can take HBase down for maintenance
>> >> and still do inserts later. ImageShack has ‘perma cache’ servers that
>> >> allow writes and serving of data even when HBase is down for hours;
>> >> consider it a 4th replica 😉 outside of Hadoop.
>> >>
>> >> Jack
>> >>
>> >> *From:* Mohit Anchlia
>> >> *Sent:* January 13, 2013 7:48 AM
>> >> *To:* user@hbase.apache.org
>> >> *Subject:* Re: Storing images in Hbase
>> >>
>> >> Thanks Jack for sharing this information. This definitely makes sense
>> >> when using that type of caching layer. You mentioned increasing the
>> >> write cache; I am assuming you had to increase the following
>> >> parameters in addition to increasing the memstore size:
>> >>
>> >> hbase.hregion.max.filesize
>> >> hbase.hregion.memstore.flush.size
>> >>
>> >> On Fri, Jan 11, 2013 at 9:47 AM, Jack Levin wrote:
>> >>
>> >> > We buffer all accesses to HBase with a Varnish SSD-based caching
>> >> > layer, so the impact for reads is negligible. We have a 70-node
>> >> > cluster, 8 GB of RAM per node, relatively weak nodes (Intel Core 2
>> >> > Duo), with 10-12TB of disk per server. We insert 600,000 images per
>> >> > day. We have relatively little compaction activity because we made
>> >> > our write cache much larger than the read cache, so we don't
>> >> > experience region file fragmentation as much.
>> >> >
>> >> > -Jack
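The parameters Mohit asks about, together with the ratios Jack describes (10G regions, roughly 60% of heap for the write cache, 20% for the block cache, 64 WALs at 128MB), map onto a handful of 0.94-era property names. The sketch below only names those properties; the values are a guess at the description in this thread, not the actual hbase-site.xml from this cluster, and they would normally be set in hbase-site.xml on the servers rather than in client code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WriteHeavyTuning {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // ~60% of heap for memstores (write cache), ~20% for the block cache (reads).
    conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.60f);
    conf.setFloat("hfile.block.cache.size", 0.20f);
    // 10 GB regions, per-region flushes at 32 MB.
    conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
    conf.setLong("hbase.hregion.memstore.flush.size", 32L * 1024 * 1024);
    // 64 write-ahead logs with a 128 MB block size.
    conf.setInt("hbase.regionserver.maxlogs", 64);
    conf.setLong("hbase.regionserver.hlog.blocksize", 128L * 1024 * 1024);
    return conf;
  }
}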
>> >> > On Fri, Jan 11, 2013 at 9:40 AM, Mohit Anchlia
>> >> > <mohitanchlia@gmail.com> wrote:
>> >> > > I think it really depends on the volume of traffic, the data
>> >> > > distribution per region, how and when file compactions occur, and
>> >> > > the number of nodes in the cluster. In my experience, when it
>> >> > > comes to blob data where you are serving tens of thousands of
>> >> > > requests/sec of writes and reads, it's very difficult to manage
>> >> > > HBase without very hard operations and maintenance in play. Jack
>> >> > > earlier mentioned they have 1 billion images; it would be
>> >> > > interesting to know what they see in terms of compactions and
>> >> > > number of requests per sec. I'd be surprised if a high-volume
>> >> > > site could do this without any caching layer on top to alleviate
>> >> > > the IO spikes that occur because of GC and compactions.
>> >> > >
>> >> > > On Fri, Jan 11, 2013 at 7:27 AM, Mohammad Tariq
>> >> > > <dontariq@gmail.com> wrote:
>> >> > >
>> >> > >> IMHO, if the image files are not too huge, HBase can efficiently
>> >> > >> serve the purpose. You can store some additional info along with
>> >> > >> the file, depending upon your search criteria, to make the search
>> >> > >> faster. Say, if you want to fetch images by type, you can store
>> >> > >> the image in one column and its extension in another column
>> >> > >> (jpg, tiff etc.).
>> >> > >>
>> >> > >> BTW, what exactly is the problem you are facing? You have written
>> >> > >> "But I still cant do it".
>> >> > >>
>> >> > >> Warm Regards,
>> >> > >> Tariq
>> >> > >> https://mtariq.jux.com/
>> >> > >>
>> >> > >> On Fri, Jan 11, 2013 at 8:30 PM, Michael Segel
>> >> > >> <michael_segel@hotmail.com> wrote:
>> >> > >>
>> >> > >> > That's a viable option.
>> >> > >> > HDFS reads are faster than HBase, but it would require first
>> >> > >> > hitting the index in HBase, which points to the file, and then
>> >> > >> > fetching the file. It could be faster... we found storing
>> >> > >> > binary data in a sequence file indexed by HBase to be faster
>> >> > >> > than HBase alone; however, YMMV, and HBase has been improved
>> >> > >> > since we did that project....
>> >> > >> >
>> >> > >> > On Jan 10, 2013, at 10:56 PM, shashwat shriparv
>> >> > >> > <dwivedishashwat@gmail.com> wrote:
>> >> > >> >
>> >> > >> > > Hi Kavish,
>> >> > >> > >
>> >> > >> > > I have a better idea for you: copy your image files into a
>> >> > >> > > single file on HDFS, and if a new image comes, append it to
>> >> > >> > > the existing file, and keep and update the metadata and the
>> >> > >> > > offset in HBase. Because if you put bigger images in HBase,
>> >> > >> > > it will lead to some issues.
>> >> > >> > >
>> >> > >> > > ∞
>> >> > >> > > Shashwat Shriparv
>> >> > >> > >
>> >> > >> > > On Fri, Jan 11, 2013 at 9:21 AM, lars hofhansl
>> >> > >> > > <larsh@apache.org> wrote:
>> >> > >> > >
>> >> > >> > >> Interesting. That's close to a PB if my math is correct.
>> >> > >> > >> Is there a write-up about this somewhere? Something that we
>> >> > >> > >> could link from the HBase homepage?
>> >> > >> > >>
>> >> > >> > >> -- Lars
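The "blobs in HDFS, index in HBase" approach that Michael and Shashwat describe above could look roughly like the sketch below. It assumes a 0.94-era HBase client, a filesystem with append support, and a single writer; the blob file path, the "image_index" table and the "m" family are invented for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HdfsImageStore {
  // Append one image to a shared HDFS blob file and record its location in HBase.
  public static void store(Configuration conf, String imageId, byte[] image) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    Path blobFile = new Path("/images/blobs-0001.dat");

    // Current end of file becomes this image's offset (single writer assumed).
    long offset = fs.exists(blobFile) ? fs.getFileStatus(blobFile).getLen() : 0L;
    FSDataOutputStream out = fs.exists(blobFile) ? fs.append(blobFile) : fs.create(blobFile);
    out.write(image);
    out.close();

    // Only the small metadata row goes into HBase: file, offset, length.
    HTable index = new HTable(HBaseConfiguration.create(conf), "image_index");
    Put put = new Put(Bytes.toBytes(imageId));
    put.add(Bytes.toBytes("m"), Bytes.toBytes("file"), Bytes.toBytes(blobFile.toString()));
    put.add(Bytes.toBytes("m"), Bytes.toBytes("offset"), Bytes.toBytes(offset));
    put.add(Bytes.toBytes("m"), Bytes.toBytes("length"), Bytes.toBytes((long) image.length));
    index.put(put);
    index.close();
  }
}

A read then does the reverse: fetch the metadata row, and issue a positioned read against the blob file with FSDataInputStream.read(offset, buffer, 0, length).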
>> >> > >> > >>
>> >> > >> > >> ----- Original Message -----
>> >> > >> > >> From: Jack Levin
>> >> > >> > >> To: user@hbase.apache.org
>> >> > >> > >> Cc: Andrew Purtell
>> >> > >> > >> Sent: Thursday, January 10, 2013 9:24 AM
>> >> > >> > >> Subject: Re: Storing images in Hbase
>> >> > >> > >>
>> >> > >> > >> We stored about 1 billion images in HBase, with file sizes
>> >> > >> > >> up to 10MB. It's been running for close to 2 years without
>> >> > >> > >> issues and serves delivery of images for Yfrog and
>> >> > >> > >> ImageShack. If you have any questions about the setup, I
>> >> > >> > >> would be glad to answer them.
>> >> > >> > >>
>> >> > >> > >> -Jack
>> >> > >> > >>
>> >> > >> > >> On Sun, Jan 6, 2013 at 1:09 PM, Mohit Anchlia
>> >> > >> > >> <mohitanchlia@gmail.com> wrote:
>> >> > >> > >>> I have done extensive testing and have found that blobs
>> >> > >> > >>> don't belong in databases but are rather best left out on
>> >> > >> > >>> the file system. Andrew outlined the issues you'll face,
>> >> > >> > >>> not to mention the IO issues when compaction occurs over
>> >> > >> > >>> large files.
>> >> > >> > >>>
>> >> > >> > >>> On Sun, Jan 6, 2013 at 12:52 PM, Andrew Purtell
>> >> > >> > >>> <apurtell@apache.org> wrote:
>> >> > >> > >>>
>> >> > >> > >>>> I meant this to say "a few really large values".
>> >> > >> > >>>>
>> >> > >> > >>>> On Sun, Jan 6, 2013 at 12:49 PM, Andrew Purtell
>> >> > >> > >>>> <apurtell@apache.org> wrote:
>> >> > >> > >>>>
>> >> > >> > >>>>> Consider if the split threshold is 2 GB but your one row
>> >> > >> > >>>>> contains 10 GB as a really large value.
>> >> > >> > >>>>
>> >> > >> > >>>> --
>> >> > >> > >>>> Best regards,
>> >> > >> > >>>>
>> >> > >> > >>>>    - Andy
>> >> > >> > >>>>
>> >> > >> > >>>> Problems worthy of attack prove their worth by hitting
>> >> > >> > >>>> back. - Piet Hein (via Tom White)
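For the direct approach Jack describes (images stored as cell values), the client side is just an ordinary Put and Get. Below is a minimal sketch against the 0.94-era client API; the "images" table, "d" family and "img" qualifier are invented names. As Andrew points out above, the thing to watch is that a single row stays well below the region split threshold.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ImageTableClient {
  private static final byte[] FAMILY = Bytes.toBytes("d");
  private static final byte[] QUAL = Bytes.toBytes("img");

  // Store one image (a few MB at most) directly as a cell value.
  public static void write(Configuration conf, String imageId, byte[] imageBytes) throws Exception {
    HTable table = new HTable(conf, "images");
    try {
      Put put = new Put(Bytes.toBytes(imageId));
      put.add(FAMILY, QUAL, imageBytes);
      table.put(put);
    } finally {
      table.close();
    }
  }

  // Read it back by row key.
  public static byte[] read(Configuration conf, String imageId) throws Exception {
    HTable table = new HTable(conf, "images");
    try {
      Result r = table.get(new Get(Bytes.toBytes(imageId)));
      return r.getValue(FAMILY, QUAL);
    } finally {
      table.close();
    }
  }
}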