From: Thushara Wijeratna <thushw@gmail.com>
To: chukwa-dev@hadoop.apache.org
Subject: Re: Using SocketTeeWriter without SeqFileWriter
Date: Sat, 31 Oct 2009 17:31:39 -0700

Actually, there was one more change:

[~/hadoop-src/chukwa/trunk] svn diff
Index: src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java
===================================================================
--- src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java (revision 831608)
+++ src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java (working copy)
@@ -225,7 +225,9 @@

   @Override
   public CommitStatus add(List<Chunk> chunks) throws WriterException {
-    CommitStatus rv = next.add(chunks); //pass data through
+    CommitStatus rv = ChukwaWriter.COMMIT_OK;
+    if (next != null)
+      rv = next.add(chunks); //pass data through
     synchronized(tees) {
       Iterator<Tee> loop = tees.iterator();
       while(loop.hasNext()) {
@@ -240,7 +242,8 @@

   @Override
   public void close() throws WriterException {
-    next.close();
+    if (next != null)
+      next.close();
     running = false;
     listenThread.shutdown();
   }
[~/hadoop-src/chukwa/trunk]
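In case it helps anyone trying the same setup, here is a rough, untested sketch of a client that tails the tee'd stream once the pipeline is running. It assumes the default tee port 9094 (chukwaCollector.tee.port) and that RAW mode sends each matching chunk as a 4-byte length followed by the chunk's data bytes; please double-check both assumptions against the SocketTeeWriter source before relying on them.

import java.io.DataInputStream;
import java.io.OutputStream;
import java.net.Socket;

// Untested sketch: connect to SocketTeeWriter and dump chunk payloads to stdout.
public class TeeTail {
  public static void main(String[] args) throws Exception {
    String host = args.length > 0 ? args[0] : "localhost";
    int port = 9094;                            // assumed default of chukwaCollector.tee.port

    Socket sock = new Socket(host, port);
    try {
      OutputStream out = sock.getOutputStream();
      out.write("RAW all\n".getBytes("UTF-8")); // "RAW <filter>": request every chunk, payload only
      out.flush();

      DataInputStream in = new DataInputStream(sock.getInputStream());
      while (true) {
        int len = in.readInt();                 // assuming each RAW chunk is length-prefixed
        byte[] data = new byte[len];
        in.readFully(data);
        System.out.write(data);                 // print the chunk payload
        System.out.println();
      }
    } finally {
      sock.close();
    }
  }
}

Run it with the collector host as the first argument; it simply writes each chunk's payload to stdout.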
On Sat, Oct 31, 2009 at 5:11 PM, Thushara Wijeratna <thushw@gmail.com> wrote:
> I wanted to use SocketTeeWriter without going through the steps of
> writing to HDFS.
> It seems that PipelineStageWriter is designed to have any number of
> PipeLineable writers, optionally followed by a SeqFileWriter.
> So I changed my collector config:
>
>   <property>
>     <name>chukwaCollector.writerClass</name>
>     <value>org.apache.hadoop.chukwa.datacollection.writer.PipelineStageWriter</value>
>   </property>
>
>   <property>
>     <name>chukwaCollector.pipeline</name>
>     <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter</value>
>   </property>
>
> After doing one minor change to SocketTeeWriter, I could get this to
> work. The advantage is that now I do not need to set up HDFS.
>
> Please let me know if this is something we should patch; I will submit
> the patch.
>
> SocketTeeWriter changes:
>
> [~/hadoop-src/chukwa/trunk] svn diff
> Index: src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java
> ===================================================================
> --- src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java (revision 831608)
> +++ src/java/org/apache/hadoop/chukwa/datacollection/writer/SocketTeeWriter.java (working copy)
> @@ -225,7 +225,9 @@
>
>    @Override
>    public CommitStatus add(List<Chunk> chunks) throws WriterException {
> -    CommitStatus rv = next.add(chunks); //pass data through
> +    CommitStatus rv = ChukwaWriter.COMMIT_OK;
> +    if (next != null)
> +      rv = next.add(chunks); //pass data through
>      synchronized(tees) {
>        Iterator<Tee> loop = tees.iterator();
>        while(loop.hasNext()) {
> [~/hadoop-src/chukwa/trunk]
>