Hi Diego,

Could you prefix each stream with a different string so that the paths do not collide? If I understand your use case correctly, this should work.

Cheers,
Kostas
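A minimal sketch of the per-path idea, assuming the job writes with the RollingSink from Flink 1.1's filesystem connector (the part-file name in the exception hints at that, but it is an assumption, as are the three placeholder sources and directory names below):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.RollingSink;

public class SeparateSinkPaths {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical sources standing in for the three real streams.
        DataStream<String> streamA = env.fromElements("a1", "a2");
        DataStream<String> streamB = env.fromElements("b1", "b2");
        DataStream<String> streamC = env.fromElements("c1", "c2");

        // One base path per stream: each sink rolls its part files in
        // its own directory, so no two sink operators ever ask HDFS
        // for a lease on the same part-0-0 file.
        streamA.addSink(new RollingSink<String>("/user/biguardian/events/streamA"));
        streamB.addSink(new RollingSink<String>("/user/biguardian/events/streamB"));
        streamC.addSink(new RollingSink<String>("/user/biguardian/events/streamC"));

        env.execute("separate sink paths");
    }
}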
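For the alternative Diego raises further down the thread (merging the streams before the sink instead of going through Kafka), DataStream.union would keep everything in one job; a sketch under the same placeholder assumptions as above:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.RollingSink;

public class UnionBeforeSink {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> streamA = env.fromElements("a1");
        DataStream<String> streamB = env.fromElements("b1");
        DataStream<String> streamC = env.fromElements("c1");

        // union merges streams of the same type into one stream, so a
        // single sink operator owns the bucket directory.
        streamA.union(streamB, streamC)
               .addSink(new RollingSink<String>("/user/biguardian/events"));

        env.execute("union before sink");
    }
}

A single sink operator's parallel subtasks already write distinct part-{subtaskIndex}-{counter} files, so once the three streams are unioned into one, they can share the original base path without colliding.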
On Nov 29, 2016, at 10:04 AM, Diego Fustes Villadóniga <email@example.com> wrote:

Hi Kostas,

Thanks for your reply.

The problem is at the initialization of the job. The cause was that I was using the same HDFS path as the sink for 3 different streams, which is something that I would like to keep. I can fix it by using a different path for each stream.

Maybe there is a way to achieve this in a different manner by joining the streams somehow before sinking… maybe through Kafka?

Kind regards,
Diego

Hi Diego,

The message shows that two tasks are concurrently trying to touch the same file.

Is this message thrown upon recovery after a failure, or at the initialization of the job? Could you please check the logs for other exceptions before this one? Can this be related to this issue?

Thanks,
Kostas

On Nov 28, 2016, at 5:37 PM, Diego Fustes Villadóniga <firstname.lastname@example.org> wrote:

Hi colleagues,

I am experiencing problems when trying to write events from a stream to HDFS. I get the following exception:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /user/biguardian/events/2016-11-28--15/flinkpart-0-0.text for DFSClient_NONMAPREDUCE_1634980080_43 for client 172.21.40.75 because current leaseholder is trying to recreate file.

My Flink version is 1.1.3 and I am running it directly from a JAR (not on YARN) with java -jar.

Do you know the reason for this error?

Kind regards,
Diego