Subject: Re: Loader for small files
From: Something Something <mailinglists19@gmail.com>
To: user@hadoop.apache.org, user@pig.apache.org
Date: Tue, 12 Feb 2013 11:32:31 -0800

No, Yong, I believe you misunderstood; David's explanation makes sense. As pointed out in my original email, everything is going to one mapper. It's not creating multiple mappers.

BTW, the code given in my original email does work as expected: it triggers multiple mappers, but it doesn't really improve performance. We believe the problem is data skew, and we are looking into writing a custom Partitioner to solve it.
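Roughly what we have in mind is something like the following (just a sketch, untested; the Text key type, the "HOT_KEY" literal, and the class name are placeholders for our real schema):

import java.util.Random;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class SkewAwarePartitioner extends Partitioner<Text, Text> {

    private final Random random = new Random();

    @Override
    public int getPartition(Text key, Text value, int numPartitions) {
        // Scatter the known hot key across all partitions so one reducer
        // doesn't receive the entire skewed group. (The scattered pieces
        // would need a second aggregation pass to be re-combined.)
        if ("HOT_KEY".equals(key.toString())) {
            return random.nextInt(numPartitions);
        }
        // Everything else hashes normally; the mask keeps the index
        // non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

If I'm reading the Pig docs right, it could then be wired in with something like
GROUP A BY key PARTITION BY com.xxx.yyy.SkewAwarePartitioner PARALLEL 10;
(the class name and parallelism are made up for illustration). Thanks.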
On Tue, Feb 12, 2013 at 7:15 AM, java8964 java8964 <java8964@hotmail.com> wrote:

> Hi, David:
>
> I am not sure I understand this suggestion. Why would a smaller block
> size help with this performance issue?
>
> From the original question, it looks like the performance problem is
> due to there being a lot of small files, each of which runs in its own
> mapper.
>
> Hadoop needs to start a lot of mappers (creating a mapper also takes
> time and resources), but each mapper only gets a small amount of data
> (maybe hundreds of KB or a few MB, much less than the block size), so
> most of the time is wasted creating task instances while each mapper
> finishes very quickly.
>
> That is the cause of the performance problem, right? Or do I
> misunderstand the problem?
>
> If so, reducing the block size won't help in this case, right? To fix
> it, we need to merge multiple files into one mapper, so that each
> mapper has enough data to process.
>
> Unless my understanding is totally wrong, I don't see how reducing the
> block size will help in this case.
>
> Thanks
>
> Yong
>
> > Subject: Re: Loader for small files
> > From: davidlabarbera@localresponse.com
> > Date: Mon, 11 Feb 2013 15:38:54 -0500
> > CC: user@hadoop.apache.org
> > To: user@pig.apache.org
> >
> > What process creates the data in HDFS? You should be able to set the
> > block size there and avoid the copy.
> >
> > I would test dfs.block.size on the copy and see if you get the mapper
> > split you want before worrying about optimizing.
> >
> > David
> >
> > On Feb 11, 2013, at 2:10 PM, Something Something <mailinglists19@gmail.com> wrote:
> >
> > > David: Your suggestion would add an additional step of copying data
> > > from one place to another. Not bad, but not ideal. Is there no way
> > > to avoid copying the data?
> > >
> > > BTW, we have tried changing the following options, to no avail :(
> > >
> > > set pig.splitCombination false;
> > >
> > > & a few other 'dfs' options given below:
> > >
> > > mapreduce.min.split.size
> > > mapreduce.max.split.size
> > >
> > > Thanks.
> > >
> > > On Mon, Feb 11, 2013 at 10:29 AM, David LaBarbera <davidlabarbera@localresponse.com> wrote:
> > >
> > >> You could store your data in smaller block sizes. Do something like
> > >>
> > >> hadoop fs HADOOP_OPTS="-Ddfs.block.size=1048576 -Dfs.local.block.size=1048576" -cp /org-input /small-block-input
> > >>
> > >> You might only need one of those parameters. You can verify the
> > >> block size with
> > >>
> > >> hadoop fsck /small-block-input
> > >>
> > >> In your pig script, you'll probably need to set
> > >> pig.maxCombinedSplitSize to something around the block size.
> > >>
> > >> David
> > >>
> > >> On Feb 11, 2013, at 1:24 PM, Something Something <mailinglists19@gmail.com> wrote:
> > >>
> > >>> Sorry.. Moving 'hbase' mailing list to BCC 'cause this is not
> > >>> related to HBase. Adding 'hadoop' user group.
> > >>>
> > >>> On Mon, Feb 11, 2013 at 10:22 AM, Something Something <mailinglists19@gmail.com> wrote:
> > >>>
> > >>>> Hello,
> > >>>>
> > >>>> We are running into performance issues with Pig/Hadoop because our
> > >>>> input files are small. Everything goes to only one mapper.
> > >>>> To get around this, we are trying to use our own Loader like this:
> > >>>>
> > >>>> 1) Extend PigStorage:
> > >>>>
> > >>>> import org.apache.hadoop.mapreduce.InputFormat;
> > >>>> import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
> > >>>> import org.apache.pig.builtin.PigStorage;
> > >>>>
> > >>>> public class SmallFileStorage extends PigStorage {
> > >>>>
> > >>>>     public SmallFileStorage(String delimiter) {
> > >>>>         super(delimiter);
> > >>>>     }
> > >>>>
> > >>>>     // Hand each mapper a fixed number of lines instead of one
> > >>>>     // split per file.
> > >>>>     @Override
> > >>>>     public InputFormat getInputFormat() {
> > >>>>         return new NLineInputFormat();
> > >>>>     }
> > >>>> }
> > >>>>
> > >>>> 2) Add a command-line argument to the Pig command as follows:
> > >>>>
> > >>>> -Dmapreduce.input.lineinputformat.linespermap=500000
> > >>>>
> > >>>> 3) Use SmallFileStorage in the Pig script as follows:
> > >>>>
> > >>>> USING com.xxx.yyy.SmallFileStorage('\t')
> > >>>>
> > >>>> But this doesn't seem to work. We still see that everything is
> > >>>> going to one mapper. Before we spend any more time on this, I am
> > >>>> wondering if this is a good approach – OR – if there's a better
> > >>>> approach? Please let me know. Thanks.
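P.S. For completeness, my reading of David's pig.maxCombinedSplitSize suggestion as a script snippet (untested sketch; the input path is a placeholder, and 134217728 is just our 128 MB block size):

set pig.splitCombination true;
set pig.maxCombinedSplitSize 134217728;
A = LOAD '/path/to/small-files' USING PigStorage('\t');

The idea is to let Pig pack many small files into each map task rather than running one file per mapper.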
