flink-user mailing list archives

From "Yun Gao" <yungao...@aliyun.com>
Subject Re: Performance issue when writing to HDFS
Date Fri, 22 May 2020 06:05:56 GMT
Hi Kong,

     Sorry, I'm not an expert on Hadoop, but from the logs and some searching, this looks more
likely to be a problem on the HDFS side [1], such as long GC pauses in a DataNode.

     I also found a similar issue in the mailing list history [2], and the conclusion there
should be similar.

 Best,
 Yun


   [1] https://community.cloudera.com/t5/Support-Questions/Solution-for-quot-slow-readprocessor-quot-warnings/td-p/122046
   [2] http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/quot-Slow-ReadProcessor-quot-warnings-when-using-BucketSink-td9427.html



 ------------------Original Message------------------
From: Mu Kong <kong.mu.biz@gmail.com>
Sent: Fri May 22 11:16:32 2020
To: user <user@flink.apache.org>
Subject: Performance issue when writing to HDFS

Hi all,

I have a Flink application consuming from Kafka and writing the data to HDFS, bucketed by event
time with BucketingSink.
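
For reference, the sink setup is roughly along the lines of the sketch below. This is a
simplified illustration, not the exact production code: the event type, base path, and bucket
path format are placeholders.

import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.fs.Clock;
import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.hadoop.fs.Path;

public class EventTimeBucketingSketch {

    /** Hypothetical event type that carries its own event-time timestamp. */
    public static class MyEvent {
        public long eventTimeMillis;
        public String payload;
    }

    /** Buckets records by the hour of their event time instead of the wall clock. */
    public static class EventTimeBucketer implements Bucketer<MyEvent> {
        @Override
        public Path getBucketPath(Clock clock, Path basePath, MyEvent element) {
            String hour = new SimpleDateFormat("yyyy-MM-dd--HH")
                    .format(new Date(element.eventTimeMillis));
            return new Path(basePath, hour);
        }
    }

    public static void attachSink(DataStream<MyEvent> stream) {
        // Base path is a placeholder, not the real output location.
        BucketingSink<MyEvent> sink = new BucketingSink<>("hdfs:///data/events");
        sink.setBucketer(new EventTimeBucketer());
        stream.addSink(sink);
    }
}
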
Sometimes the traffic gets high, and the Prometheus metrics show that the write throughput is
not stable.

(as seen from flink_taskmanager_job_task_operator_numRecordsOutPerSecond)

The output data on HDFS is also getting delayed. (The records for a certain hour bucket are
written to HDFS 50 minutes after that hour)

I looked into the logs and found warnings regarding the DataNode ack, which might be related:

DFSClient exception:
2020-05-21 10:43:10,432 INFO  org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as <IP address here>:1004
        at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1478)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1380)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:558)

Slow ReadProcessor read fields warning:
2020-05-21 10:42:30,509 WARN  org.apache.hadoop.hdfs.DFSClient - Slow ReadProcessor read fields took 30230ms (threshold=30000ms);
ack: seqno: 126 reply: SUCCESS reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 372753456
flag: 0 flag: 0 flag: 0, targets: [DatanodeInfoWithStorage[<IP address here>:1004,DS-833b175e-9848-453d-a222-abf5c05d643e,DISK],
DatanodeInfoWithStorage[<IP address here>:1004,DS-f998208a-df7b-4c63-9dde-26453ba69559,DISK],
DatanodeInfoWithStorage[<IP address here>:1004,DS-4baa6ba6-3951-46f7-a843-62a13e3a62f7,DISK]]


We haven't done any tuning of the Flink job with regard to writing to HDFS. Is there any config
or optimization we can try to avoid the delay and these warnings?
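
For concreteness, the only sink-side knobs we are aware of are the BucketingSink batch,
rollover, and inactivity settings, roughly like the sketch below. The values are arbitrary
placeholders, we have not verified whether they help, and depending on the Flink version some
of these setters may not be available.

import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

public class SinkTuningSketch {
    public static BucketingSink<String> buildSink() {
        // All values below are placeholder numbers, not recommendations.
        BucketingSink<String> sink = new BucketingSink<>("hdfs:///data/events");
        sink.setBatchSize(128L * 1024 * 1024);             // roll part files at ~128 MB
        sink.setBatchRolloverInterval(15 * 60 * 1000L);    // also roll on a time interval
        sink.setInactiveBucketThreshold(5 * 60 * 1000L);   // close buckets that stop receiving data
        sink.setInactiveBucketCheckInterval(60 * 1000L);   // how often to check for inactive buckets
        return sink;
    }
}
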

Thanks in advance!!

Best regards,
Mu