From: Harish Mandala <mvharish14988@gmail.com>
To: user@flume.apache.org
Date: Tue, 25 Sep 2012 15:17:51 -0400
Subject: Re: HDFS Event Sink problems

Thanks, but I understood why this is happening.

On Mon, Sep 24, 2012 at 6:01 PM, Harish Mandala <mvharish14988@gmail.com> wrote:

> Hello,
>
> I'm having some trouble with the HDFS Event Sink. I'm using the latest
> version of Flume NG, checked out today.
>
> I am using curloader to hit "MycustomSource", which essentially takes in
> HTTP messages and splits the content into two "kinds" of Flume events
> (differentiated by a header key-value pair). The first kind is sent to
> hdfs-sink1 and the second kind to hdfs-sink2 by a multiplexing selector,
> as outlined in the configuration below. There's also an hdfs-sink3, which
> can be ignored at present.
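>
> For context, the routing depends on each event carrying a "Type" header
> that the multiplexing selector matches on. The snippet below is only an
> illustrative sketch, not my actual source code (the method name "emit" is
> hypothetical), and it assumes MycustomSource extends
> org.apache.flume.source.AbstractSource:
>
>     import java.util.HashMap;
>     import java.util.Map;
>     import org.apache.flume.Event;
>     import org.apache.flume.event.EventBuilder;
>
>     // Attach a "Type" header to each event so the multiplexing
>     // channel selector can route it to ch1, ch2 or ch3.
>     private void emit(byte[] payloadBytes, String kind) {
>         Map<String, String> headers = new HashMap<String, String>();
>         headers.put("Type", kind); // "type1", "type2" or "type3"
>         Event event = EventBuilder.withBody(payloadBytes, headers);
>         getChannelProcessor().processEvent(event); // from AbstractSource
>     }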
>
> I can't really understand what's going on. It seems related to some of the
> race condition issues outlined here:
>
> https://issues.apache.org/jira/browse/FLUME-1219
>
> Please let me know if you need more information.
>
> The following is my conf file. It is followed by flume.log.
>
> #### flume.conf ####
>
> agent1.channels = ch1 ch2 ch3
> agent1.sources = mycustom-source1
> agent1.sinks = hdfs-sink1 hdfs-sink2 hdfs-sink3
>
> # Define a memory channel called ch1 on agent1
> agent1.channels.ch1.type = memory
> agent1.channels.ch1.capacity = 200000
> agent1.channels.ch1.transactionCapacity = 20000
>
> agent1.channels.ch2.type = memory
> agent1.channels.ch2.capacity = 1000000
> agent1.channels.ch2.transactionCapacity = 100000
>
> agent1.channels.ch3.type = memory
> agent1.channels.ch3.capacity = 10000
> agent1.channels.ch3.transactionCapacity = 5000
>
> #agent1.channels.ch2.type = memory
> #agent1.channels.ch3.type = memory
>
> # Define the Mycustom custom source called mycustom-source1 on agent1 and
> # tell it to bind to 127.0.0.1:1234. Connect it to channels ch1, ch2, ch3.
> agent1.sources.mycustom-source1.channels = ch1 ch2 ch3
> agent1.sources.mycustom-source1.type = org.apache.flume.source.MycustomSource
> agent1.sources.mycustom-source1.bind = 127.0.0.1
> agent1.sources.mycustom-source1.port = 1234
> agent1.sources.mycustom-source1.serialization_method = json
> #agent1.sources.mycustom-source1.schema_filepath = /home/ubuntu/Software/flume/trunk/conf/AvroEventSchema.avpr
>
> # Define the HDFS sinks
> agent1.sinks.hdfs-sink1.channel = ch1
> agent1.sinks.hdfs-sink1.type = hdfs
> agent1.sinks.hdfs-sink1.hdfs.path = hdfs://localhost:54310/user/flumeDump1
> agent1.sinks.hdfs-sink1.hdfs.filePrefix = events
> agent1.sinks.hdfs-sink1.hdfs.batchSize = 20000
> agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink1.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink1.hdfs.maxOpenFiles = 10000
> agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink1.hdfs.rollInterval = 0
> agent1.sinks.hdfs-sink1.hdfs.rollCount = 20000
> agent1.sinks.hdfs-sink1.hdfs.hdfs.threadsPoolSize = 20
>
> agent1.sinks.hdfs-sink2.channel = ch2
> agent1.sinks.hdfs-sink2.type = hdfs
> agent1.sinks.hdfs-sink2.hdfs.path = hdfs://localhost:54310/user/flumeDump2
> agent1.sinks.hdfs-sink2.hdfs.filePrefix = events
> agent1.sinks.hdfs-sink2.hdfs.batchSize = 100000
> agent1.sinks.hdfs-sink2.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink2.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink2.hdfs.maxOpenFiles = 10000
> agent1.sinks.hdfs-sink2.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink2.hdfs.rollInterval = 0
> agent1.sinks.hdfs-sink2.hdfs.rollCount = 100000
> agent1.sinks.hdfs-sink2.hdfs.hdfs.threadsPoolSize = 20
>
> agent1.sinks.hdfs-sink3.channel = ch3
> agent1.sinks.hdfs-sink3.type = hdfs
> agent1.sinks.hdfs-sink3.hdfs.path = hdfs://localhost:54310/user/flumeDump3
> agent1.sinks.hdfs-sink3.hdfs.filePrefix = events
> agent1.sinks.hdfs-sink3.hdfs.batchSize = 1000
> agent1.sinks.hdfs-sink3.hdfs.fileType = DataStream
> agent1.sinks.hdfs-sink3.hdfs.writeFormat = Text
> agent1.sinks.hdfs-sink3.hdfs.maxOpenFiles = 10000
> agent1.sinks.hdfs-sink3.hdfs.rollSize = 0
> agent1.sinks.hdfs-sink3.hdfs.rollInterval = 0
> agent1.sinks.hdfs-sink3.hdfs.rollCount = 1000
> agent1.sinks.hdfs-sink3.hdfs.hdfs.threadsPoolSize = 20
>
> agent1.sources.mycustom-source1.selector.type = multiplexing
> agent1.sources.mycustom-source1.selector.header = Type
> agent1.sources.mycustom-source1.selector.mapping.type1 = ch1
> agent1.sources.mycustom-source1.selector.mapping.type2 = ch2
> agent1.sources.mycustom-source1.selector.mapping.type3 = ch3
> agent1.sources.mycustom-source1.selector.default = ch1
>
> #### end of conf file ####
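>
> One aside on the conf above: the threadsPoolSize lines carry a doubled
> "hdfs." prefix, so I suspect they are silently ignored. Assuming the
> intent was the sink's documented I/O thread pool setting, the key would
> be spelled like this (and likewise for hdfs-sink2 and hdfs-sink3):
>
>     agent1.sinks.hdfs-sink1.hdfs.threadsPoolSize = 20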
>
> Here are the errors from flume.log.
>
> 24 Sep 2012 21:32:13,569 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor]
> (org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout:366) -
> Unexpected Exception null
> java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:257)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:119)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:339)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.flush(HDFSEventSink.java:732)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:430)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:679)
> 24 Sep 2012 21:32:13,572 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor]
> (org.apache.flume.sink.hdfs.HDFSEventSink.process:450) - process failed
> java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:257)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:119)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:339)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.flush(HDFSEventSink.java:732)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:430)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:679)
> 24 Sep 2012 21:32:13,572 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor]
> (org.apache.flume.SinkRunner$PollingRunner.run:160) - Unable to deliver
> event. Exception follows.
> org.apache.flume.EventDeliveryException: java.lang.InterruptedException
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:454)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:679)
> Caused by: java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:257)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:119)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:339)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.flush(HDFSEventSink.java:732)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:430)
>         ... 3 more
> 24 Sep 2012 21:32:16,350 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor]
> (org.apache.flume.sink.hdfs.HDFSEventSink.process:446) - HDFS IO error
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:264)
>         at org.apache.hadoop.hdfs.DFSClient.access$1200(DFSClient.java:74)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.sync(DFSClient.java:3664)
>         at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:97)
>         at org.apache.flume.sink.hdfs.HDFSDataStream.sync(HDFSDataStream.java:95)
>         at org.apache.flume.sink.hdfs.BucketWriter.doFlush(BucketWriter.java:298)
>         at org.apache.flume.sink.hdfs.BucketWriter.access$500(BucketWriter.java:50)
>         at org.apache.flume.sink.hdfs.BucketWriter$4.run(BucketWriter.java:287)
>         at org.apache.flume.sink.hdfs.BucketWriter$4.run(BucketWriter.java:284)
>         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:127)
>         at org.apache.flume.sink.hdfs.BucketWriter.flush(BucketWriter.java:284)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:735)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:732)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:679)
> 24 Sep 2012 21:32:18,573 INFO  [node-shutdownHook]
> (org.apache.flume.sink.hdfs.HDFSEventSink.stop:465) - Closing
> hdfs://localhost:54310/user/flumeDump2//events
> 24 Sep 2012 21:32:18,575 WARN  [hdfs-hdfs-sink2-call-runner-5]
> (org.apache.flume.sink.hdfs.BucketWriter.doClose:259) - failed to
> close() HDFSWriter for file
> (hdfs://localhost:54310/user/flumeDump2//events.1348522332892.tmp).
> Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:264)
>         at org.apache.hadoop.hdfs.DFSClient.access$1200(DFSClient.java:74)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.sync(DFSClient.java:3664)
>         at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:97)
>         at org.apache.flume.sink.hdfs.HDFSDataStream.close(HDFSDataStream.java:103)
>         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:256)
>         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:50)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:242)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:127)
>         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:750)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:747)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:679)
> 24 Sep 2012 21:32:18,576 WARN  [node-shutdownHook]
> (org.apache.flume.sink.hdfs.HDFSEventSink.stop:470) - Exception while
> closing hdfs://localhost:54310/user/flumeDump2//events. Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:264)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:873)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:513)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:375)
>         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:275)
>         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:50)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:242)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:127)
>         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:750)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:747)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:679)
> 24 Sep 2012 21:32:18,589 INFO  [node-shutdownHook]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:87) -
> Component type: SINK, name: hdfs-sink2 stopped
> 24 Sep 2012 21:32:18,590 INFO  [node-shutdownHook]
> (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:82)
> - Stopping Sink hdfs-sink1
> 24 Sep 2012 21:32:18,590 INFO  [lifecycleSupervisor-1-4]
> (org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run:215) -
> Component has already been stopped SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@49dc423f counterGroup:{
> name:null counters:{runner.backoffs.consecutive=4, runner.backoffs=4,
> runner.deliveryErrors=1} } }
> 24 Sep 2012 21:32:18,591 INFO  [node-shutdownHook]
> (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156) -
> Stopping component: SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@1b815bfb counterGroup:{
> name:null counters:{runner.backoffs.consecutive=5, runner.backoffs=5} } }
> 24 Sep 2012 21:32:18,592 INFO  [node-shutdownHook]
> (org.apache.flume.sink.hdfs.HDFSEventSink.stop:465) - Closing
> hdfs://localhost:54310/user/flumeDump1//events
> 24 Sep 2012 21:32:18,594 WARN  [hdfs-hdfs-sink1-call-runner-3]
> (org.apache.flume.sink.hdfs.BucketWriter.doClose:259) - failed to
> close() HDFSWriter for file
> (hdfs://localhost:54310/user/flumeDump1//events.1348522332892.tmp).
> Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:264)
>         at org.apache.hadoop.hdfs.DFSClient.access$1200(DFSClient.java:74)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.sync(DFSClient.java:3664)
>         at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:97)
>         at org.apache.flume.sink.hdfs.HDFSDataStream.close(HDFSDataStream.java:103)
>         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:256)
>         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:50)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:242)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:127)
>         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:750)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:747)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:679)
> 24 Sep 2012 21:32:18,595 WARN  [node-shutdownHook]
> (org.apache.flume.sink.hdfs.HDFSEventSink.stop:470) - Exception while
> closing hdfs://localhost:54310/user/flumeDump1//events. Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:264)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:873)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:513)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:375)
>         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:275)
>         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:50)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:242)
>         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:127)
>         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:239)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:750)
>         at org.apache.flume.sink.hdfs.HDFSEventSink$3.call(HDFSEventSink.java:747)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:679)
> 24 Sep 2012 21:32:18,600 INFO  [node-shutdownHook]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:87) -
> Component type: SINK, name: hdfs-sink1 stopped
> 24 Sep 2012 21:32:18,600 INFO  [node-shutdownHook]
> (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:92)
> - Stopping Channel ch3
> 24 Sep 2012 21:32:18,601 INFO  [node-shutdownHook]
> (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156) -
> Stopping component: org.apache.flume.channel.MemoryChannel{name: ch3}
> 24 Sep 2012 21:32:18,601 INFO  [node-shutdownHook]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:87) -
> Component type: CHANNEL, name: ch3 stopped
> 24 Sep 2012 21:32:18,601 INFO  [node-shutdownHook]
> (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:92)
> - Stopping Channel ch2
> 24 Sep 2012 21:32:18,601 INFO  [node-shutdownHook]
> (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156) -
> Stopping component: org.apache.flume.channel.MemoryChannel{name: ch2}
> 24 Sep 2012 21:32:18,601 INFO  [node-shutdownHook]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:87) -
> Component type: CHANNEL, name: ch2 stopped
> 24 Sep 2012 21:32:18,601 INFO  [node-shutdownHook]
> (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:92)
> - Stopping Channel ch1
> 24 Sep 2012 21:32:18,601 INFO  [node-shutdownHook]
> (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156) -
> Stopping component: org.apache.flume.channel.MemoryChannel{name: ch1}
> 24 Sep 2012 21:32:18,602 INFO  [node-shutdownHook]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:87) -
> Component type: CHANNEL, name: ch1 stopped
> 24 Sep 2012 21:32:18,602 INFO  [node-shutdownHook]
> (org.apache.flume.lifecycle.LifecycleSupervisor.stop:78) - Stopping
> lifecycle supervisor 8
> 24 Sep 2012 21:32:18,604 INFO  [node-shutdownHook]
> (org.apache.flume.conf.file.AbstractFileConfigurationProvider.stop:91) -
> Configuration provider stopping
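>
> In case it helps anyone who hits the same "java.io.IOException: Filesystem
> closed" trace: one mitigation that is sometimes suggested is to stop
> Hadoop's FileSystem cache from handing every component the same shared
> instance, so that one close() cannot pull the stream out from under a
> sink that is still flushing. A sketch, assuming the agent picks up a
> standard Hadoop config file such as hdfs-site.xml:
>
>     <!-- Disable the shared FileSystem cache for hdfs:// URIs so each
>          caller gets its own instance (a known workaround, not something
>          I have verified against this exact trace) -->
>     <property>
>       <name>fs.hdfs.impl.disable.cache</name>
>       <value>true</value>
>     </property>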
>
> Thanks,
>
> Harish