flume-user mailing list archives

From Justin Ryan <jur...@ziprealty.com>
Subject Re: writing flume to hdfs failed
Date Fri, 11 Mar 2016 18:45:00 GMT
It still sounds like you have a version mismatch between your HDFS nodes and the
Hadoop libraries on your Flume agent's classpath, as Gonzalo pointed out.
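
A quick way to confirm that is to compare what the cluster is running with what the
agent actually loads. A minimal sketch, assuming the Flume install path mentioned in
this thread and a Hadoop install under /usr/local/hadoop (both paths are guesses;
adjust them to your environment):

# Version the cluster-side Hadoop tools report
$ hadoop version
# Every Hadoop jar the agent could pick up (Flume's own lib/ plus the local Hadoop install)
$ find /home/csi/apache-flume-1.6.0-bin/lib /usr/local/hadoop -name 'hadoop-*.jar' 2>/dev/null | sort

If more than one version of hadoop-common or hadoop-hdfs shows up, that mismatch would
explain the NoSuchMethodError below.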

From:  2402 楊建中 joeyang <joeyang@pershing.com.tw>
Reply-To:  <user@flume.apache.org>
Date:  Friday, March 11, 2016 at 1:00 AM
To:  "user@flume.apache.org" <user@flume.apache.org>
Subject:  RE: writing flume to hdfs failed

This link did fix part of my previous error:
http://stackoverflow.com/questions/35173503/hdfs-io-error-org-apache-hadoop-ipc-remoteexception-server-ipc-version-9-cannot
 
but we still get stuck at the same place:
Creating hdfs://192.168.112.172:9000/user/flume/syslogtcp/Syslog.1457666187655.tmp
2016-03-11 11:16:29,243 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
java.lang.NoSuchMethodError: org.apache.hadoop.util.StringUtils.toLowerCase(Ljava/lang/String;)Ljava/lang/String;
        at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:248)
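
One thing worth checking at this point: StringUtils.toLowerCase(String) only exists in
more recent hadoop-common releases, so this error suggests an older hadoop-common jar is
shadowing the 2.7.2 one on the agent's classpath. A quick check, assuming a hadoop-common
jar sits somewhere Flume can load it (the path and jar name below are illustrative, not
from this setup):

$ ls /home/csi/apache-flume-1.6.0-bin/lib/hadoop-common-*.jar
# Does that jar contain the method the HDFS classes need?
$ javap -classpath /home/csi/apache-flume-1.6.0-bin/lib/hadoop-common-2.7.2.jar \
    org.apache.hadoop.util.StringUtils | grep toLowerCase

If grep prints nothing, that jar predates the method and should be replaced with the
build that matches the cluster.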
 
From: Gonzalo Herreros [mailto:gherreros@gmail.com]
Sent: Friday, March 11, 2016 4:47 PM
To: user <user@flume.apache.org>
Subject: Re: writing flume to hdfs failed
 

Looks like the HDFS sink needs to be updated to support the latest Hadoop.

In the meantime I would use an older client, which will probably work against a
newer server. Alternatively, you can use the Flume branch that Hortonworks compiles
for 2.7.1.
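
Whichever client version you settle on, the agent needs to see one self-consistent set
of Hadoop jars. A rough sketch of how that is usually wired up (the install path is an
assumption; the exact layout depends on your distribution):

# Let the flume-ng launcher pick up `hadoop classpath` from the client install you
# want to use (an older 2.6.x client, or the cluster's own 2.7.2 build)
$ export HADOOP_HOME=/usr/local/hadoop        # hypothetical path
$ export PATH="$HADOOP_HOME/bin:$PATH"

# Or pin the jars explicitly in conf/flume-env.sh, after removing any stray
# hadoop-*.jar copies of a different version from Flume's lib/ directory:
# FLUME_CLASSPATH="$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*"

Either way, the goal is that no mismatched hadoop-common jar is left anywhere on the
agent's classpath.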

 

Gonzalo

 

On 11 March 2016 at 03:37, 2402 楊建中 joeyang <joeyang@pershing.com.tw> wrote:
> 
> Hi, 
>  
> We’ve been hit by the following error while writing from Flume to HDFS; any
> advice would be appreciated:
>  
> Hadoop: 2.7.2
> Flume: 1.6
>  
> $vi /home/csi/apache-flume-1.6.0-bin/conf/hdfs_sink.conf
> a1.sources = r1
> a1.sinks = k1
> a1.channels = c1
> # Describe/configure the source
> a1.sources.r1.type = syslogtcp
> a1.sources.r1.port = 5140
> a1.sources.r1.host = localhost
> a1.sources.r1.channels = c1
> # Describe the sink
> a1.sinks.k1.type = hdfs
> a1.sinks.k1.channel = c1
> a1.sinks.k1.hdfs.path = hdfs://m1:9000/user/flume/syslogtcp
> a1.sinks.k1.hdfs.filePrefix = Syslog
> a1.sinks.k1.hdfs.round = true
> a1.sinks.k1.hdfs.roundValue = 10
> a1.sinks.k1.hdfs.roundUnit = minute
> # Use a channel which buffers events in memory
> a1.channels.c1.type = memory
> a1.channels.c1.capacity = 1000
> a1.channels.c1.transactionCapacity = 100
> # Bind the source and sink to the channel
> a1.sources.r1.channels = c1
> a1.sinks.k1.channel = c1
> $flume-ng agent -c . -f /home/csi/apache-flume-1.6.0-bin/conf/hdfs_sink.conf -n a1 -Dflume.root.logger=INFO,console
> $echo "hello idoall flume -> hadoop testing one" | nc localhost 5140
> -----------------------------------------------------------------------------------------------
>  
> 2016-03-11 11:16:28,222 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://192.168.112.172:9000/user/flume/syslogtcp/Syslog.1457666187655.tmp
> 2016-03-11 11:16:29,243 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
> java.lang.NoSuchMethodError: org.apache.hadoop.util.StringUtils.toLowerCase(Ljava/lang/String;)Ljava/lang/String;
>         at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:248)
>         at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>         at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:243)
>         at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
>         at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
>         at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
>         at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchMethodError: org.apache.hadoop.util.StringUtils.toLowerCase(Ljava/lang/String;)Ljava/lang/String;
>         at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:248)
>         at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>         at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:243)
>         at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
>         at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
>         at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
>         at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>  
>  
> TEL: 02 2658-1910 ext 2402  FAX: 02 2658-1920
> Mobile: 0986-711896  Email: joeyang@pershing.com.tw
> ADD: 2nd Floor, No.18, Wenhu Street, Neihu District, Taipei City 114
>  
 

