From: Jack Levin
To: user@hbase.apache.org, Mohit Anchlia
Date: Sun, 13 Jan 2013 16:17:37 +0000
Subject: RE: Storing images in Hbase

That's right, Memstore size, not flush size, is increased. Filesize is 10G.
Overall write cache is 60% of heap and read cache is 20%. Flush size is 15%.
64 maxlogs at 128MB. One namenode server, one secondary that can be promoted.

On the way to HBase, images are written to a queue, so that we can take
HBase down for maintenance and still do inserts later. ImageShack has
'perma cache' servers that allow writes and serving of data even when HBase
is down for hours; consider it a 4th replica 😉 outside of Hadoop.

Jack
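A rough sketch of how the numbers above map onto stock HBase settings (the
parameter question they answer is quoted below). The keys are the standard
parameter names of that era; the mapping to Jack's percentages and the
flush-size byte value are assumptions, not ImageShack's actual config, and in
practice these belong in hbase-site.xml on the region servers rather than in
client code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WriteHeavyTuningSketch {
        public static Configuration sketch() {
            Configuration conf = HBaseConfiguration.create();
            // "write cache is 60% of heap": total memstore share of the region server heap
            conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.60f);
            // "read cache is 20%": block cache share of the heap
            conf.setFloat("hfile.block.cache.size", 0.20f);
            // "Filesize is 10G": region split threshold
            conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
            // per-region flush size; the thread quotes a percentage, a byte value is shown here
            conf.setLong("hbase.hregion.memstore.flush.size", 256L * 1024 * 1024);
            // "64 maxlogs at 128MB": WAL count and WAL block size before forced flushes
            conf.setInt("hbase.regionserver.maxlogs", 64);
            conf.setLong("hbase.regionserver.hlog.blocksize", 128L * 1024 * 1024);
            return conf;
        }
    }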
From: Mohit Anchlia
Sent: January 13, 2013 7:48 AM
To: user@hbase.apache.org
Subject: Re: Storing images in Hbase

Thanks Jack for sharing this information. This definitely makes sense when
using that type of caching layer.

You mentioned increasing the write cache; I am assuming you had to increase
the following parameters in addition to increasing the memstore size:

hbase.hregion.max.filesize
hbase.hregion.memstore.flush.size

On Fri, Jan 11, 2013 at 9:47 AM, Jack Levin wrote:

> We buffer all accesses to HBase with a Varnish SSD-based caching layer,
> so the impact for reads is negligible. We have a 70-node cluster, 8 GB
> of RAM per node, relatively weak nodes (Intel Core 2 Duo), with 10-12 TB
> of disk per server. We insert 600,000 images per day. We have relatively
> little compaction activity, as we made our write cache much larger than
> the read cache, so we don't experience region file fragmentation as much.
>
> -Jack
>
> On Fri, Jan 11, 2013 at 9:40 AM, Mohit Anchlia wrote:
>
> > I think it really depends on the volume of traffic, data distribution
> > per region, how and when file compaction occurs, and the number of
> > nodes in the cluster. In my experience, when it comes to blob data
> > where you are serving tens of thousands of write and read requests per
> > second, it's very difficult to manage HBase without very hard
> > operations and maintenance in play. Jack earlier mentioned they have
> > 1 billion images; it would be interesting to know what they see in
> > terms of compaction and number of requests per second. I'd be
> > surprised if a high-volume site could do it without any caching layer
> > on top to alleviate the IO spikes that occur because of GC and
> > compactions.
> >
> > On Fri, Jan 11, 2013 at 7:27 AM, Mohammad Tariq wrote:
> >
> >> IMHO, if the image files are not too huge, HBase can efficiently
> >> serve the purpose. You can store some additional info along with the
> >> file, depending upon your search criteria, to make the search faster.
> >> Say you want to fetch images by type: you can store the image in one
> >> column and its extension (jpg, tiff, etc.) in another column.
> >>
> >> BTW, what exactly is the problem you are facing? You have written
> >> "But I still cant do it".
> >>
> >> Warm Regards,
> >> Tariq
> >> https://mtariq.jux.com/
> >>
> >> On Fri, Jan 11, 2013 at 8:30 PM, Michael Segel
> >> <michael_segel@hotmail.com> wrote:
> >>
> >> > That's a viable option.
> >> > HDFS reads are faster than HBase, but it would require first
> >> > hitting the index in HBase, which points to the file, and then
> >> > fetching the file. It could be faster... we found storing binary
> >> > data in a sequence file, indexed in HBase, to be faster than HBase
> >> > itself; however, YMMV, and HBase has been improved since we did
> >> > that project....
> >> >
> >> > On Jan 10, 2013, at 10:56 PM, shashwat shriparv
> >> > <dwivedishashwat@gmail.com> wrote:
> >> >
> >> > > Hi Kavish,
> >> > >
> >> > > I have a better idea for you: copy your image files into a single
> >> > > file on HDFS, and if a new image comes, append it to the existing
> >> > > file, and keep and update the metadata and the offset in HBase.
> >> > > Because if you put bigger images in HBase it will lead to some
> >> > > issues.
> >> > >
> >> > > ∞
> >> > > Shashwat Shriparv
> >> > >
> >> > > On Fri, Jan 11, 2013 at 9:21 AM, lars hofhansl wrote:
> >> > >
> >> > >> Interesting. That's close to a PB if my math is correct.
> >> > >> Is there a write-up about this somewhere? Something that we
> >> > >> could link from the HBase homepage?
> >> > >>
> >> > >> -- Lars
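A minimal sketch of the approach Michael Segel and Shashwat Shriparv describe
above: append image bytes to a single file on HDFS and keep only a locator
(file, offset, length) in an HBase index table. The path, the "img_index"
table, and the "f" column family are made-up placeholders, and the offset
handling is simplified; a production version would record SequenceFile sync
points so readers can seek reliably:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class ImageAppenderSketch {
        private static final byte[] FAM = Bytes.toBytes("f");   // placeholder column family
        private final SequenceFile.Writer writer;               // one writer kept open across images
        private final HTable index;                              // id -> (file, offset, length)
        private final Path seqPath;

        public ImageAppenderSketch(Configuration conf) throws Exception {
            FileSystem fs = FileSystem.get(conf);
            seqPath = new Path("/data/images/images-000001.seq");   // placeholder path
            writer = SequenceFile.createWriter(fs, conf, seqPath, Text.class, BytesWritable.class);
            index = new HTable(conf, "img_index");                   // placeholder table name
        }

        public void store(String imageId, byte[] imageBytes) throws Exception {
            long offset = writer.getLength();    // rough position of this record in the file
            writer.append(new Text(imageId), new BytesWritable(imageBytes));

            Put put = new Put(Bytes.toBytes(imageId));
            put.add(FAM, Bytes.toBytes("file"), Bytes.toBytes(seqPath.toString()));
            put.add(FAM, Bytes.toBytes("offset"), Bytes.toBytes(offset));
            put.add(FAM, Bytes.toBytes("length"), Bytes.toBytes((long) imageBytes.length));
            index.put(put);
        }

        public void close() throws Exception {
            writer.close();
            index.close();
        }
    }

A reader would Get the index row, open a SequenceFile.Reader on the named
file, and sync() to the stored offset, so the blob itself never passes
through HBase.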
> >> > >> ----- Original Message -----
> >> > >> From: Jack Levin
> >> > >> To: user@hbase.apache.org
> >> > >> Cc: Andrew Purtell
> >> > >> Sent: Thursday, January 10, 2013 9:24 AM
> >> > >> Subject: Re: Storing images in Hbase
> >> > >>
> >> > >> We stored about 1 billion images in HBase, with file sizes up to
> >> > >> 10MB. It's been running for close to 2 years without issues and
> >> > >> serves delivery of images for Yfrog and ImageShack. If you have
> >> > >> any questions about the setup, I would be glad to answer them.
> >> > >>
> >> > >> -Jack
> >> > >>
> >> > >> On Sun, Jan 6, 2013 at 1:09 PM, Mohit Anchlia
> >> > >> <mohitanchlia@gmail.com> wrote:
> >> > >>
> >> > >>> I have done extensive testing and have found that blobs don't
> >> > >>> belong in databases but are best left out on the file system.
> >> > >>> Andrew outlined the issues that you'll face, not to mention the
> >> > >>> IO issues when compaction occurs over large files.
> >> > >>>
> >> > >>> On Sun, Jan 6, 2013 at 12:52 PM, Andrew Purtell
> >> > >>> <apurtell@apache.org> wrote:
> >> > >>>
> >> > >>>> I meant this to say "a few really large values".
> >> > >>>>
> >> > >>>> On Sun, Jan 6, 2013 at 12:49 PM, Andrew Purtell
> >> > >>>> <apurtell@apache.org> wrote:
> >> > >>>>
> >> > >>>>> Consider if the split threshold is 2 GB but your one row
> >> > >>>>> contains 10 GB as a really large value.
> >> > >>>>
> >> > >>>> --
> >> > >>>> Best regards,
> >> > >>>>
> >> > >>>>   - Andy
> >> > >>>>
> >> > >>>> Problems worthy of attack prove their worth by hitting back.
> >> > >>>> - Piet Hein (via Tom White)
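For comparison, the direct layout Mohammad Tariq suggests earlier in the
thread (and roughly the pattern Jack describes running at scale): the image
bytes go into one column and the extension into a sibling column, so
type-based lookups never have to read the blob. A minimal sketch using the
old HTable client API; the table and column names are invented for
illustration:

    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ImageInHBaseSketch {
        private static final byte[] FAM = Bytes.toBytes("d");   // placeholder column family

        // e.g. HTable table = new HTable(HBaseConfiguration.create(), "images");
        public static void write(HTable table, String id, byte[] jpegBytes) throws Exception {
            Put put = new Put(Bytes.toBytes(id));
            put.add(FAM, Bytes.toBytes("data"), jpegBytes);           // the image itself
            put.add(FAM, Bytes.toBytes("ext"), Bytes.toBytes("jpg")); // type, kept separately for search
            table.put(put);
        }

        public static byte[] read(HTable table, String id) throws Exception {
            Result r = table.get(new Get(Bytes.toBytes(id)));
            return r.getValue(FAM, Bytes.toBytes("data"));
        }
    }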