From: Brock Noland
Date: Tue, 24 Jul 2012 07:23:41 -0500
Subject: Re: How to upload the SEQ data into hdfs
To: user@flume.apache.org

Hi,

Your channel is not hooked up to the source and sink. See the additions below.

agent.sources = avro-AppSrv-source
agent.sinks = hdfs-Cluster1-sink
agent.channels = mem-channel-1

# set channel for sources, sinks

# properties of avro-AppSrv-source
agent.sources.avro-AppSrv-source.type = SEQ
agent.sources.avro-AppSrv-source.bind = localhost
agent.sources.avro-AppSrv-source.port = 10000
agent.sources.avro-AppSrv-source.channels = mem-channel-1

# properties of mem-channel-1
agent.channels.mem-channel-1.type = memory
agent.channels.mem-channel-1.capacity = 1000
agent.channels.mem-channel-1.transactionCapacity = 100

# properties of hdfs-Cluster1-sink
agent.sinks.hdfs-Cluster1-sink.type = hdfs
agent.sinks.hdfs-Cluster1-sink.channel = mem-channel-1
agent.sinks.hdfs-Cluster1-sink.hdfs.path = hdfs://134.83.35.24/user/mukhtaj/flume/

Also, it seems we should give a better error message here:
https://issues.apache.org/jira/browse/FLUME-1271

Brock

On Tue, Jul 24, 2012 at 6:58 AM, mardan Khan wrote:
> Hi Will,
>
> I did change the configuration file as per your suggestion
> (agent.sources.avro-AppSrv-source.type = SEQ) but I am still getting the
> same error.
>
> The configuration file is:
>
> agent.sources = avro-AppSrv-source
> agent.sinks = hdfs-Cluster1-sink
> agent.channels = mem-channel-1
>
> # set channel for sources, sinks
>
> # properties of avro-AppSrv-source
> agent.sources.avro-AppSrv-source.type = SEQ
> agent.sources.avro-AppSrv-source.bind = localhost
> agent.sources.avro-AppSrv-source.port = 10000
>
> # properties of mem-channel-1
> agent.channels.mem-channel-1.type = memory
> agent.channels.mem-channel-1.capacity = 1000
> agent.channels.mem-channel-1.transactionCapacity = 100
>
> # properties of hdfs-Cluster1-sink
> agent.sinks.hdfs-Cluster1-sink.type = hdfs
> agent.sinks.hdfs-Cluster1-sink.hdfs.path =
> hdfs://134.83.35.24/user/mukhtaj/flume/
>
> The error is:
>
> 12/07/24 12:52:33 ERROR properties.PropertiesFileConfigurationProvider:
> Failed to load configuration data. Exception follows.
>
> java.lang.NullPointerException
>     at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSources(PropertiesFileConfigurationProvider.java:324)
>     at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:222)
>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>     at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>
> Why am I getting this error? I have been struggling with this problem for a
> few days; running any command gives this error.
>
> Any suggestions, please.
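The NullPointerException above is thrown from loadSources when a declared source has no channel wired to it, which is exactly what Brock points out at the top of the thread. As a rough illustration only, here is a small sketch in plain Python (not Flume code; the helper names are invented) of that wiring check:

```python
# Sketch of the source-to-channel wiring check that the Flume config
# loader effectively performs. Plain Python for illustration; these
# function names are made up, not Flume APIs.

def parse_props(text):
    """Parse flume-style 'key = value' lines into a dict."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def unwired_sources(props, agent="agent"):
    """Return declared sources that have no .channels binding."""
    declared = props.get(agent + ".sources", "").split()
    return [s for s in declared
            if not props.get(agent + ".sources." + s + ".channels")]

# The failing configuration from the thread: the source is declared
# but never bound to mem-channel-1.
broken = """
agent.sources = avro-AppSrv-source
agent.channels = mem-channel-1
agent.sources.avro-AppSrv-source.type = SEQ
agent.channels.mem-channel-1.type = memory
"""

# Adding the single missing line fixes the wiring.
fixed = broken + "agent.sources.avro-AppSrv-source.channels = mem-channel-1\n"

print(unwired_sources(parse_props(broken)))  # ['avro-AppSrv-source']
print(unwired_sources(parse_props(fixed)))   # []
```

A check like this surfaces the mistake before the agent starts, which is roughly what FLUME-1271 is about: reporting a clear error message instead of a bare NullPointerException.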
>
> Thanks
>
> On Tue, Jul 24, 2012 at 3:46 AM, Will McQueen wrote:
>>
>> Or, as Brock said, you can refer to the link he posted and use the example
>> from the user guide instead; then you'll need to include this:
>>
>> agent.sources = avro-AppSrv-source
>> agent.sinks = hdfs-Cluster1-sink
>> agent.channels = mem-channel-1
>>
>> ... but that example uses an Avro source, so you'll likely need to start an
>> avro-client to test (or use the Flume SDK). Or just change the source type
>> to SEQ.
>>
>> Cheers,
>> Will
>>
>> On Mon, Jul 23, 2012 at 6:07 PM, mardan Khan wrote:
>>>
>>> Thanks Brock,
>>>
>>> I have just gone through the posted link, copy-pasted one of the
>>> configuration files, and changed the hdfs path as below:
>>>
>>> # properties of avro-AppSrv-source
>>> agent.sources.avro-AppSrv-source.type = avro
>>> agent.sources.avro-AppSrv-source.bind = localhost
>>> agent.sources.avro-AppSrv-source.port = 10000
>>>
>>> # properties of mem-channel-1
>>> agent.channels.mem-channel-1.type = memory
>>> agent.channels.mem-channel-1.capacity = 1000
>>> agent.channels.mem-channel-1.transactionCapacity = 100
>>>
>>> # properties of hdfs-Cluster1-sink
>>> agent.sinks.hdfs-Cluster1-sink.type = hdfs
>>> agent.sinks.hdfs-Cluster1-sink.hdfs.path =
>>> hdfs://134.83.35.24/user/mardan/flume/
>>>
>>> Then I ran the following command:
>>>
>>> $ /usr/bin/flume-ng agent -n agent -c conf -f /usr/lib/flume-ng/conf/flume.conf
>>>
>>> and got the following error (I get this error most of the time):
>>>
>>> 12/07/24 01:54:43 ERROR properties.PropertiesFileConfigurationProvider:
>>> Failed to load configuration data. Exception follows.
>>> java.lang.NullPointerException
>>>     at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSources(PropertiesFileConfigurationProvider.java:324)
>>>     at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:222)
>>>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
>>>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
>>>     at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>     at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>>>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>>>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>>>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
>>>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>     at java.lang.Thread.run(Thread.java:662)
>>>
>>> I think something is wrong in the configuration file. I am using the
>>> Flume 1.x version, installed in /usr/lib/flume-ng/.
>>>
>>> Could you please check the command and the configuration file?
>>>
>>> Thanks
>>>
>>> On Tue, Jul 24, 2012 at 1:33 AM, Brock Noland wrote:
>>>>
>>>> Yes, you can do that. In fact, that is the most common case.
>>>> The documents which should help you do so are here:
>>>>
>>>> https://cwiki.apache.org/confluence/display/FLUME/Flume+1.x+Documentation
>>>>
>>>> Brock
>>>>
>>>> On Mon, Jul 23, 2012 at 7:26 PM, mardan Khan wrote:
>>>> > Hi,
>>>> >
>>>> > I am just doing testing. I am generating the sequence and want to
>>>> > upload it into hdfs. My configuration file is:
>>>> >
>>>> > agent2.channels = c1
>>>> > agent2.sources = r1
>>>> > agent2.sinks = k1
>>>> >
>>>> > agent2.channels.c1.type = MEMORY
>>>> >
>>>> > agent2.sources.r1.channels = c1
>>>> > agent2.sources.r1.type = SEQ
>>>> >
>>>> > agent2.sinks.k1.channel = c1
>>>> > agent2.sinks.k1.type = LOGGER
>>>> >
>>>> > Is it possible to upload into hdfs? If so, how can I make the
>>>> > changes in the configuration file?
>>>> >
>>>> > Many thanks
>>>>
>>>> --
>>>> Apache MRUnit - Unit testing MapReduce -
>>>> http://incubator.apache.org/mrunit/

--
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/
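To close the loop on the question at the bottom of the thread: keeping the SEQ source and memory channel, only the sink stanza has to change from LOGGER to hdfs. A small sketch in plain Python that renders both variants as flume-style properties (the HDFS path is the one used elsewhere in this thread; adjust it to your own NameNode):

```python
# Build the two sink variants from the thread as key/value pairs and
# render them as flume-style properties. Plain Python for illustration.

base = {
    "agent2.channels": "c1",
    "agent2.sources": "r1",
    "agent2.sinks": "k1",
    "agent2.channels.c1.type": "MEMORY",
    "agent2.sources.r1.channels": "c1",
    "agent2.sources.r1.type": "SEQ",
    "agent2.sinks.k1.channel": "c1",
}

# Original: events only go to the log.
logger_sink = dict(base, **{"agent2.sinks.k1.type": "LOGGER"})

# Changed: events land in HDFS (path taken from the thread).
hdfs_sink = dict(base, **{
    "agent2.sinks.k1.type": "hdfs",
    "agent2.sinks.k1.hdfs.path": "hdfs://134.83.35.24/user/mardan/flume/",
})

def render(props):
    """Render a dict as flume-style 'key = value' lines."""
    return "\n".join("{} = {}".format(k, v) for k, v in props.items())

print(render(hdfs_sink))
```

Note that, unlike the failing configurations earlier in the thread, this base already wires the source to the channel (agent2.sources.r1.channels = c1), so only the sink lines differ between the two variants.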