Subject: Re: Storing images in Hbase
From: Adrien Mogenet <adrien.mogenet@gmail.com>
To: user@hbase.apache.org
Date: Mon, 28 Jan 2013 11:01:00 +0100

Could HCatalog be an option?

On Jan 26, 2013 at 21:56, "Jack Levin" wrote:

AFAIK, the namenode would not like tracking 20 billion small files :)

-jack
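For a rough sense of the scale behind Jack's concern, here is a hedged back-of-the-envelope; the ~150 bytes of namenode heap per file/block object is a commonly cited rule of thumb, an assumption here rather than a number from this thread:

public class NamenodeHeapEstimate {
    public static void main(String[] args) {
        long files = 20_000_000_000L;   // 20 billion small images
        long objectsPerFile = 2;        // roughly one inode plus one block each
        long bytesPerObject = 150;      // assumed rule of thumb per namenode object
        double heapTb = files * objectsPerFile * bytesPerObject / 1e12;
        System.out.printf("Estimated namenode heap: ~%.0f TB%n", heapTb);
    }
}

At several terabytes of metadata held in a single JVM heap, raw HDFS files are simply not workable at this scale, which is the motivation for packing images into HBase (or into large container files) instead.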
On Sat, Jan 26, 2013 at 6:00 PM, S Ahmed wrote:

That's pretty amazing.

What I am confused about is: why did you go with HBase rather than straight into HDFS?

On Fri, Jan 25, 2013 at 2:41 AM, Jack Levin wrote:

Two people, including myself; it's fairly hands-off. It took about 3 months to tune it right, but we did have multiple years of experience with datanodes and Hadoop in general, so that was a good boost.

We have 4 HBase clusters today, the image store being the largest.

On Jan 24, 2013 2:14 PM, "S Ahmed" wrote:

Jack, out of curiosity, how many people manage the HBase-related servers? Does it require constant monitoring, or is it fairly hands-off now? (Or a bit of both: the early days were spent getting things right and learning, and now it's purring along.)

On Wed, Jan 23, 2013 at 11:53 PM, Jack Levin wrote:

It's best to keep some RAM for filesystem caching; besides, we also run the datanode, which takes heap as well. Please keep in mind that even if you specify a heap of, say, 5 GB, if your server opens threads to communicate with other systems via RPC (which HBase does a lot), you will actually use HEAP + Nthreads * per-thread stack size. There is a good Sun Microsystems document about it. (I don't have the link handy.)

-Jack

On Mon, Jan 21, 2013 at 5:10 PM, Varun Sharma wrote:

Thanks for the useful information. I wonder why you use only a 5 GB heap when you have an 8 GB machine? Is there a reason not to use all of it? (The DataNode typically takes 1 GB of RAM.)

On Sun, Jan 20, 2013 at 11:49 AM, Jack Levin wrote:

I forgot to mention that I also have this setup:

<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>33554432</value>
  <description>Flush more often. Default: 67108864</description>
</property>

This parameter works per region, so if any of my 400 (currently) regions on a regionserver accumulates 30 MB+ in its memstore, HBase will flush it to disk.

Here are some metrics from a regionserver:

requests=2, regions=370, stores=370, storefiles=1390,
storefileIndexSize=304, memstoreSize=2233, compactionQueueSize=0,
flushQueueSize=0, usedHeap=3516, maxHeap=4987,
blockCacheSize=790656256, blockCacheFree=255245888,
blockCacheCount=2436, blockCacheHitCount=218015828,
blockCacheMissCount=13514652, blockCacheEvictedCount=2561516,
blockCacheHitRatio=94, blockCacheHitCachingRatio=98

Note that the memstore holds only about 2 GB; this particular regionserver's heap is set to 5 GB.

And last but not least, it's very important to have a good GC setup:

export HBASE_OPTS="$HBASE_OPTS -verbose:gc -Xms5000m \
-XX:CMSInitiatingOccupancyFraction=70 -XX:+PrintGCDetails \
-XX:+PrintGCDateStamps -XX:+HeapDumpOnOutOfMemoryError \
-Xloggc:$HBASE_HOME/logs/gc-hbase.log \
-XX:MaxTenuringThreshold=15 -XX:SurvivorRatio=8 \
-XX:+UseParNewGC \
-XX:NewSize=128m -XX:MaxNewSize=128m \
-XX:-UseAdaptiveSizePolicy \
-XX:+CMSParallelRemarkEnabled \
-XX:-TraceClassUnloading"

-Jack
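A hedged aside on Jack's RPC-thread point above: each Java thread gets its stack outside the Java heap, so the process footprint is roughly heap plus Nthreads times the -Xss stack size. A toy estimate, with the thread count and stack size below assumed for illustration:

public class RegionServerFootprint {
    public static void main(String[] args) {
        long heapMb = 5000;      // Jack's regionserver heap (maxHeap=4987 above)
        long rpcThreads = 1000;  // assumed: RPC handlers plus client connections
        long stackKb = 1024;     // assumed -Xss; 1 MB is a common 64-bit default
        long stacksMb = rpcThreads * stackKb / 1024;
        System.out.println("Heap " + heapMb + " MB + thread stacks ~" + stacksMb
                + " MB = ~" + (heapMb + stacksMb) + " MB before other JVM overhead");
    }
}

Under those assumptions a busy regionserver can need an extra gigabyte beyond its heap setting, which is one reason to leave headroom on an 8 GB machine.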
On Thu, Jan 17, 2013 at 3:29 PM, Varun Sharma <varun@pinterest.com> wrote:

Hey Jack,

Thanks for the useful information. By the flush size being 15%, do you mean the memstore flush size? 15% would mean close to 1 GB; have you seen any issues with flushes taking too long?

Thanks,
Varun

On Sun, Jan 13, 2013 at 8:17 AM, Jack Levin wrote:

That's right: the memstore size, not the flush size, is increased. The filesize is 10 GB. The overall write cache is 60% of the heap and the read cache is 20%; the flush size is 15%. We use 64 maxlogs at 128 MB each. There is one namenode server, plus one secondary that can be promoted. On the way to HBase, images are written to a queue, so that we can take HBase down for maintenance and still do inserts later. ImageShack has 'perma cache' servers that allow writing and serving of data even when HBase is down for hours; consider it a 4th replica 😉 outside of Hadoop.

Jack

From: Mohit Anchlia
Sent: January 13, 2013 7:48 AM
To: user@hbase.apache.org
Subject: Re: Storing images in Hbase

Thanks, Jack, for sharing this information. This definitely makes sense when using that type of caching layer. You mentioned increasing the write cache; I am assuming you had to increase the following parameters in addition to the memstore size:

hbase.hregion.max.filesize
hbase.hregion.memstore.flush.size

On Fri, Jan 11, 2013 at 9:47 AM, Jack Levin <magnito@gmail.com> wrote:

We buffer all accesses to HBase with a Varnish SSD-based caching layer, so the impact on reads is negligible. We have a 70-node cluster, 8 GB of RAM per node, relatively weak nodes (Intel Core 2 Duo), with 10-12 TB of disk per server. We insert 600,000 images per day. We have relatively little compaction activity, as we made our write cache much larger than our read cache, so we don't suffer region file fragmentation as much.

-Jack

On Fri, Jan 11, 2013 at 9:40 AM, Mohit Anchlia <mohitanchlia@gmail.com> wrote:

I think it really depends on the volume of the traffic, the data distribution per region, how and when file compaction occurs, and the number of nodes in the cluster. In my experience, when it comes to blob data where you are serving tens of thousands of write and read requests per second, it's very difficult to manage HBase without very hard operations and maintenance in play. Jack mentioned earlier that they have 1 billion images; it would be interesting to know what they see in terms of compaction and requests per second. I'd be surprised if a high-volume site could do this without any caching layer on top to alleviate the IO spikes that occur because of GC and compactions.
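For reference, Jack's 60% write cache / 20% read cache split would map onto configuration roughly as below. This is a hedged sketch using 0.90-era property names with assumed values; it is not Jack's actual hbase-site.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CacheSplitConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Global memstore ("write cache") ceiling as a fraction of heap.
        conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.60f);
        conf.setFloat("hbase.regionserver.global.memstore.lowerLimit", 0.55f);
        // Block cache ("read cache") as a fraction of heap.
        conf.setFloat("hfile.block.cache.size", 0.20f);
        // WAL settings matching "64 maxlogs at 128 MB" from the thread.
        conf.setInt("hbase.regionserver.maxlogs", 64);
        conf.setLong("hbase.regionserver.hlog.blocksize", 128L * 1024 * 1024);
        System.out.println("write cache fraction = " + conf.getFloat(
                "hbase.regionserver.global.memstore.upperLimit", 0.4f));
    }
}

The same keys could of course be set in hbase-site.xml; the point is only how the percentages in Jack's description translate into concrete settings.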
On Fri, Jan 11, 2013 at 7:27 AM, Mohammad Tariq <dontariq@gmail.com> wrote:

IMHO, if the image files are not too huge, HBase can serve the purpose efficiently. You can store some additional info along with the file, depending on your search criteria, to make searches faster. Say you want to fetch images by type: you can store the image in one column and its extension (jpg, tiff, etc.) in another column.

BTW, what exactly is the problem you are facing? You have written "But I still can't do it".

Warm Regards,
Tariq
https://mtariq.jux.com/

On Fri, Jan 11, 2013 at 8:30 PM, Michael Segel <michael_segel@hotmail.com> wrote:

That's a viable option.
HDFS reads are faster than HBase, but it would require first hitting the index in HBase, which points to the file, and then fetching the file. It could be faster... We found storing binary data in a sequence file indexed in HBase to be faster than HBase alone; however, YMMV, and HBase has been improved since we did that project...

On Jan 10, 2013, at 10:56 PM, shashwat shriparv <dwivedishashwat@gmail.com> wrote:

Hi Kavish,

I have a better idea for you: copy your image files into a single file on HDFS, and when a new image comes in, append it to that existing file, keeping and updating the metadata and the offset in HBase. If you put bigger images directly into HBase, it will lead to issues.

∞
Shashwat Shriparv

On Fri, Jan 11, 2013 at 9:21 AM, lars hofhansl <larsh@apache.org> wrote:

Interesting. That's close to a PB if my math is correct.
Is there a write-up about this somewhere? Something that we could link from the HBase homepage?

-- Lars
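Tariq's column layout above (image bytes in one column, searchable attributes such as the extension in another) might look like the following sketch against the 0.94-era client API; the table and column names are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreImage {
    public static void store(String imageId, byte[] imageBytes,
                             String extension) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "images");   // hypothetical table
        try {
            Put put = new Put(Bytes.toBytes(imageId));
            // Blob in one column family, metadata in another.
            put.add(Bytes.toBytes("img"), Bytes.toBytes("data"), imageBytes);
            put.add(Bytes.toBytes("meta"), Bytes.toBytes("ext"),
                    Bytes.toBytes(extension));
            table.put(put);
        } finally {
            table.close();
        }
    }
}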
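The HDFS-plus-index approach that Shashwat and Michael describe (append each blob to one large HDFS file, record its offset and length in HBase) could be sketched as below. The path and table names are assumptions, and a real implementation would need HDFS append support enabled and a single writer, or some coordination between writers:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HdfsBlobIndex {
    public static void append(String imageId, byte[] image) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        Path blobFile = new Path("/images/blobstore.dat");  // hypothetical
        FSDataOutputStream out = fs.exists(blobFile)
                ? fs.append(blobFile) : fs.create(blobFile);
        long offset = out.getPos();   // where this image will start
        out.write(image);
        out.close();

        HTable index = new HTable(conf, "image_index");     // hypothetical
        try {
            Put put = new Put(Bytes.toBytes(imageId));
            put.add(Bytes.toBytes("loc"), Bytes.toBytes("offset"),
                    Bytes.toBytes(offset));
            put.add(Bytes.toBytes("loc"), Bytes.toBytes("length"),
                    Bytes.toBytes((long) image.length));
            index.put(put);
        } finally {
            index.close();
        }
    }
}

A reader then does one HBase Get for the (offset, length) pair and one positioned HDFS read, which is the two-hop lookup Michael mentions.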
----- Original Message -----
From: Jack Levin
To: user@hbase.apache.org
Cc: Andrew Purtell
Sent: Thursday, January 10, 2013 9:24 AM
Subject: Re: Storing images in Hbase

We stored about 1 billion images in HBase, with file sizes up to 10 MB. It has been running for close to 2 years without issues, and it serves image delivery for Yfrog and ImageShack. If you have any questions about the setup, I would be glad to answer them.

-Jack

On Sun, Jan 6, 2013 at 1:09 PM, Mohit Anchlia <mohitanchlia@gmail.com> wrote:

I have done extensive testing and have found that blobs don't belong in databases; they are best left on the file system. Andrew outlined the issues you'll face, not to mention the IO problems when compaction occurs over large files.

On Sun, Jan 6, 2013 at 12:52 PM, Andrew Purtell <apurtell@apache.org> wrote:

I meant this to say "a few really large values".

On Sun, Jan 6, 2013 at 12:49 PM, Andrew Purtell <apurtell@apache.org> wrote:

Consider if the split threshold is 2 GB but your one row contains 10 GB as really large value.

--
Best regards,

- Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
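Andrew's caveat arises because a region can only split at a row boundary, so one oversized row can never be divided, no matter the split threshold. A common workaround, shown here as a hedged sketch with assumed chunk size, key scheme, and table layout, is to chunk a large value across several rows:

import java.util.Arrays;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ChunkedBlobWriter {
    private static final int CHUNK = 2 * 1024 * 1024;  // 2 MB per row, assumed

    public static void write(HTable table, String blobId, byte[] blob)
            throws Exception {
        for (int i = 0, part = 0; i < blob.length; i += CHUNK, part++) {
            byte[] chunk = Arrays.copyOfRange(blob, i,
                    Math.min(i + CHUNK, blob.length));
            // Keys like "blobId/00042" keep chunks adjacent and ordered,
            // and each row stays far below any region split threshold.
            Put put = new Put(Bytes.toBytes(
                    String.format("%s/%05d", blobId, part)));
            put.add(Bytes.toBytes("img"), Bytes.toBytes("data"), chunk);
            table.put(put);
        }
    }
}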