Subject: Re: Storing images in Hbase
From: Jack Levin <magnito@gmail.com>
To: user@hbase.apache.org
Date: Sat, 26 Jan 2013 18:56:01 -0800

AFAIK, the namenode would not like tracking 20 billion small files :)

-jack

On Sat, Jan 26, 2013 at 6:00 PM, S Ahmed wrote:

That's pretty amazing.

What I am confused about is: why did you go with HBase and not just straight into HDFS?

On Fri, Jan 25, 2013 at 2:41 AM, Jack Levin wrote:

Two people, including myself; it's fairly hands-off. It took about 3 months to tune it right, but we did have multiple years of experience with datanodes and Hadoop in general, so that was a good boost.

We have 4 HBase clusters today, the image store being the largest.

On Jan 24, 2013 2:14 PM, "S Ahmed" wrote:

Jack, out of curiosity, how many people manage the HBase-related servers?

Does it require constant monitoring, or is it fairly hands-off now? (Or a bit of both: the early days were about getting things right and learning, and now it's purring along.)

On Wed, Jan 23, 2013 at 11:53 PM, Jack Levin wrote:

It's best to keep some RAM for filesystem caching; besides, we also run the datanode, which takes heap as well. Now, please keep in mind that even if you specify a heap of, say, 5 GB, if your server opens threads to communicate with other systems via RPC (which HBase does a lot), you will in fact use HEAP + Nthreads * per-thread stack size. There is a good Sun Microsystems document about it (I don't have the link handy).

-Jack
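To put that HEAP-plus-thread-stacks point in numbers, here is a minimal sketch; the thread count and per-thread stack size below are illustrative assumptions, not figures from the thread:

// Back-of-envelope estimate of JVM process footprint beyond -Xmx.
// The thread count and stack size are assumptions for illustration only.
public class HeapPlusStacks {
    public static void main(String[] args) {
        long heapBytes  = 5L * 1024 * 1024 * 1024; // -Xmx5g, as in Jack's example
        long stackBytes = 1L * 1024 * 1024;        // -Xss1m (assumed per-thread stack)
        int  rpcThreads = 500;                     // assumed RPC handler/connection threads

        long estimate = heapBytes + (long) rpcThreads * stackBytes;
        System.out.printf("Approximate process memory: %.2f GB%n",
                estimate / (1024.0 * 1024 * 1024)); // ~5.5 GB before other native overhead
    }
}

That per-thread overhead, plus the datanode and the OS page cache, is one reason to leave headroom on an 8 GB node rather than handing it all to -Xmx.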
On Mon, Jan 21, 2013 at 5:10 PM, Varun Sharma wrote:

Thanks for the useful information. I wonder why you use only a 5G heap when you have an 8G machine? Is there a reason not to use all of it (the DataNode typically takes about 1G of RAM)?

On Sun, Jan 20, 2013 at 11:49 AM, Jack Levin wrote:

I forgot to mention that I also have this setup:

<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>33554432</value>
  <description>Flush more often. Default: 67108864</description>
</property>

This parameter works on a per-region amount, so if any of my 400 (currently) regions on a regionserver has 30 MB+ in its memstore, HBase will flush it to disk.

Here are some metrics from a regionserver:

requests=2, regions=370, stores=370, storefiles=1390,
storefileIndexSize=304, memstoreSize=2233, compactionQueueSize=0,
flushQueueSize=0, usedHeap=3516, maxHeap=4987,
blockCacheSize=790656256, blockCacheFree=255245888,
blockCacheCount=2436, blockCacheHitCount=218015828,
blockCacheMissCount=13514652, blockCacheEvictedCount=2561516,
blockCacheHitRatio=94, blockCacheHitCachingRatio=98

Note that the memstore is only 2G; this particular regionserver's heap is set to 5G.

And last but not least, it's very important to have a good GC setup:

export HBASE_OPTS="$HBASE_OPTS -verbose:gc -Xms5000m
-XX:CMSInitiatingOccupancyFraction=70 -XX:+PrintGCDetails
-XX:+PrintGCDateStamps -XX:+HeapDumpOnOutOfMemoryError
-Xloggc:$HBASE_HOME/logs/gc-hbase.log \
-XX:MaxTenuringThreshold=15 -XX:SurvivorRatio=8 \
-XX:+UseParNewGC \
-XX:NewSize=128m -XX:MaxNewSize=128m \
-XX:-UseAdaptiveSizePolicy \
-XX:+CMSParallelRemarkEnabled \
-XX:-TraceClassUnloading
"

-Jack

On Thu, Jan 17, 2013 at 3:29 PM, Varun Sharma wrote:

Hey Jack,

Thanks for the useful information. By the flush size being 15%, do you mean the memstore flush size? 15% would mean close to 1G; have you seen any issues with flushes taking too long?

Thanks
Varun

On Sun, Jan 13, 2013 at 8:17 AM, Jack Levin wrote:

That's right, the memstore size, not the flush size, is increased. Filesize is 10G. Overall, the write cache is 60% of heap and the read cache is 20%; the flush size is 15%. 64 maxlogs at 128 MB. One namenode server, plus one secondary that can be promoted.

On the way to HBase, images are written to a queue, so that we can take HBase down for maintenance and still do inserts later. ImageShack has 'perma cache' servers that allow writes and serving of data even when HBase is down for hours; consider it a 4th replica 😉 outside of Hadoop.

Jack
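Jack doesn't name the properties behind those percentages; the sketch below is one plausible mapping onto HBase 0.9x-era settings. The property names are an assumption, not something confirmed in the thread, and on a real cluster these would live in hbase-site.xml rather than in client code:

// One plausible mapping of Jack's sizing onto 0.9x-era HBase settings.
// Property names are assumptions; shown via the Configuration API for brevity.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CacheSizingSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // "write cache is 60% of heap" -> global memstore limit
        conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.6f);

        // "read cache is 20%" -> block cache share of heap
        conf.setFloat("hfile.block.cache.size", 0.2f);

        // "Filesize is 10G" -> region split threshold
        conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);

        // "64 maxlogs at 128MB" -> WAL count and WAL block size
        conf.setInt("hbase.regionserver.maxlogs", 64);
        conf.setLong("hbase.regionserver.hlog.blocksize", 128L * 1024 * 1024);

        System.out.println("memstore upper limit = "
                + conf.getFloat("hbase.regionserver.global.memstore.upperLimit", 0f));
    }
}

Mohit's question just below asks exactly which of these knobs were raised.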
On Sun, Jan 13, 2013 at 7:48 AM, Mohit Anchlia wrote:

Thanks, Jack, for sharing this information. This definitely makes sense when using that type of caching layer. You mentioned increasing the write cache; I am assuming that, in addition to increasing the memstore size, you had to increase the following parameters:

hbase.hregion.max.filesize
hbase.hregion.memstore.flush.size

On Fri, Jan 11, 2013 at 9:47 AM, Jack Levin wrote:

We buffer all accesses to HBase with a Varnish SSD-based caching layer, so the impact on reads is negligible. We have a 70-node cluster, 8 GB of RAM per node, relatively weak nodes (Intel Core 2 Duo), with 10-12 TB of disk per server, inserting 600,000 images per day. We have relatively little compaction activity because we made our write cache much larger than the read cache, so we don't experience region file fragmentation as much.

-Jack

On Fri, Jan 11, 2013 at 9:40 AM, Mohit Anchlia wrote:

I think it really depends on the volume of traffic, the data distribution per region, how and when file compaction occurs, and the number of nodes in the cluster. In my experience, when it comes to blob data where you are serving tens of thousands of read and write requests per second, it's very difficult to manage HBase without very heavy operations and maintenance in play. Jack mentioned earlier that they have 1 billion images; it would be interesting to know what they see in terms of compaction and requests per second. I'd be surprised if a high-volume site could do this without a caching layer on top to alleviate the IO spikes that occur because of GC and compactions.

On Fri, Jan 11, 2013 at 7:27 AM, Mohammad Tariq wrote:

IMHO, if the image files are not too huge, HBase can serve the purpose efficiently. You can store some additional info along with the file, depending on your search criteria, to make searches faster. Say you want to fetch images by type: you can store the image in one column and its extension (jpg, tiff, etc.) in another column.

BTW, what exactly is the problem you are facing? You have written "But I still can't do it"?

Warm Regards,
Tariq
https://mtariq.jux.com/
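A minimal sketch of the layout Tariq describes, using the HBase client API of that era; the table, column family, and qualifier names are assumptions, not anything from the thread:

// "Image bytes in one column, type/extension in another", 0.9x-era client API.
// Table and column names below are assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ImageTableSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "images");           // assumed table name
        byte[] cf = Bytes.toBytes("d");                       // assumed column family

        byte[] rowKey = Bytes.toBytes("img-0001");            // e.g. a hashed image id
        byte[] imageBytes = new byte[]{ /* ... jpeg bytes ... */ };

        Put put = new Put(rowKey);
        put.add(cf, Bytes.toBytes("data"), imageBytes);            // the image itself
        put.add(cf, Bytes.toBytes("ext"), Bytes.toBytes("jpg"));   // its type/extension
        table.put(put);

        Get get = new Get(rowKey);
        Result result = table.get(get);
        byte[] ext = result.getValue(cf, Bytes.toBytes("ext"));
        System.out.println("stored image with extension " + Bytes.toString(ext));

        table.close();
    }
}

Keeping the extension in its own qualifier means a lookup by type can read just that small cell (for example via a filtered scan or a small index table) instead of pulling the image bytes.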
On Fri, Jan 11, 2013 at 8:30 PM, Michael Segel wrote:

That's a viable option. HDFS reads are faster than HBase, but it would require first hitting the index in HBase, which points to the file, and then fetching the file. It could be faster... we found storing binary data in a sequence file indexed in HBase to be faster than HBase alone; however, YMMV, and HBase has been improved since we did that project.

On Jan 10, 2013, at 10:56 PM, shashwat shriparv wrote:

Hi Kavish,

I have a better idea for you: copy your image files into a single file on HDFS, and when a new image comes in, append it to that existing file, then keep and update the metadata and the offset in HBase. Because if you put bigger images in HBase, it will lead to some issues.

∞
Shashwat Shriparv
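A minimal sketch of the pattern Shashwat and Michael are describing: blob bytes appended to one large HDFS file, with HBase holding only a pointer (path, offset, length). The paths, table, and column names are assumptions, and HDFS append support was limited in clusters of that era, so treat this as an outline rather than a recipe:

// "Blob in HDFS, pointer in HBase": append the image to a container file and
// record (path, offset, length) in an index table. Names are assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HdfsBlobIndexSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        Path blobFile = new Path("/images/blobs-0001.dat");    // assumed container file

        byte[] imageBytes = new byte[]{ /* ... jpeg bytes ... */ };

        // Append the blob and remember where it starts.
        FSDataOutputStream out = fs.exists(blobFile)
                ? fs.append(blobFile)                           // needs append support enabled
                : fs.create(blobFile);
        long offset = out.getPos();
        out.write(imageBytes);
        out.close();

        // Store only the pointer in HBase; the bytes stay in HDFS.
        HTable index = new HTable(conf, "image_index");         // assumed table name
        byte[] cf = Bytes.toBytes("p");
        Put put = new Put(Bytes.toBytes("img-0001"));
        put.add(cf, Bytes.toBytes("file"), Bytes.toBytes(blobFile.toString()));
        put.add(cf, Bytes.toBytes("offset"), Bytes.toBytes(offset));
        put.add(cf, Bytes.toBytes("length"), Bytes.toBytes((long) imageBytes.length));
        index.put(put);
        index.close();

        // Reading it back: positioned read of 'length' bytes at 'offset'.
        FSDataInputStream in = fs.open(blobFile);
        byte[] buf = new byte[imageBytes.length];
        in.readFully(offset, buf);
        in.close();
        System.out.println("read back " + buf.length + " bytes");
    }
}

Michael's variant swaps the raw container file for a Hadoop SequenceFile, which adds sync markers and optional compression; the HBase index row stays the same shape.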
On Fri, Jan 11, 2013 at 9:21 AM, lars hofhansl wrote:

Interesting. That's close to a PB, if my math is correct.

Is there a write-up about this somewhere? Something that we could link from the HBase homepage?

-- Lars

On Thu, Jan 10, 2013 at 9:24 AM, Jack Levin wrote:

We stored about 1 billion images in HBase, with file sizes up to 10 MB. It has been running for close to 2 years without issues and serves delivery of images for Yfrog and ImageShack. If you have any questions about the setup, I would be glad to answer them.

-Jack

On Sun, Jan 6, 2013 at 1:09 PM, Mohit Anchlia wrote:

I have done extensive testing and have found that blobs don't belong in databases; they are best left out on the file system. Andrew outlined the issues you'll face, not to mention the IO issues when compaction occurs over large files.

On Sun, Jan 6, 2013 at 12:52 PM, Andrew Purtell wrote:

I meant this to say "a few really large values".

On Sun, Jan 6, 2013 at 12:49 PM, Andrew Purtell wrote:

Consider if the split threshold is 2 GB but your one row contains 10 GB as a really large value.

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)