flink-dev mailing list archives

From Gyula Fóra <gyf...@apache.org>
Subject Strange segfaults in Flink 1.2 (probably not RocksDB related)
Date Sun, 22 Jan 2017 18:51:57 GMT
Hey All,

I am trying to port some of my streaming jobs from Flink 1.1.4 to 1.2 and I
have noticed some very strange segfaults. (I am running in test
environments, with the MiniCluster.)
It is a fairly complex job so I won't go into details, but the interesting
part is that adding/removing a simple filter in the wrong place in the
topology (such as (e -> true), or anything actually) seems to cause
frequent segfaults during execution.

Basically the key part looks something like:

...
DataStream stream = source.map().setParallelism(1).uid("AssignFieldIds")
    .name("AssignFieldIds").startNewChain();
DataStream filtered = input1.filter(t -> true).setParallelism(1);
IterativeStream itStream = filtered.iterate(...);
...

Some notes before the actual error: replacing the filter with a map or
other chained transformation also leads to this problem. If the filter is
not chained (or if I remove it), there is no error.

The error I get looks like this:
https://gist.github.com/gyfora/da713b99b1e85a681ff1a4182f001242

I wonder if anyone has seen something like this before, or has some ideas
on how to debug it. The simple workaround is to not chain the filter, but
it's still very strange.

Regards,
Gyula
