flink-dev mailing list archives

From "Stephan Ewen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-941) Possible deadlock after increasing my data set size
Date Tue, 17 Jun 2014 23:56:01 GMT

    [ https://issues.apache.org/jira/browse/FLINK-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14034621#comment-14034621 ]

Stephan Ewen commented on FLINK-941:

When I change line 271 as you said, I do not get the deadlock.

The deadlock may depend on how many buffers are available for intermediate results.
What DOPs have you used, and what is your maximum heap size? I have tried it with DOP 1 and
DOP 4 on -Xmx192m and it worked.
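
For illustration, a minimal sketch of fixing the DOP in code (not the actual repro code),
assuming the current org.apache.flink Java API; older pre-apache releases name the setter
setDegreeOfParallelism instead of setParallelism, and the heap limit is a JVM flag, not set
in the program:

    // Sketch: run a trivial job with a fixed degree of parallelism.
    // The heap is capped on the JVM side, e.g.  java -Xmx192m -cp ... ReproSettings
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class ReproSettings {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            env.setParallelism(4); // values tried in the comment above: 1 and 4
            env.fromElements(1, 2, 3).print(); // print() executes this toy pipeline
        }
    }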

[~uce] How did you reproduce it?

> Possible deadlock after increasing my data set size
> ---------------------------------------------------
>                 Key: FLINK-941
>                 URL: https://issues.apache.org/jira/browse/FLINK-941
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: pre-apache-0.5.1
>            Reporter: Bastian Köcher
>            Assignee: Stephan Ewen
>         Attachments: IMPRO-3.SS14.G03.zip
> If I increase my data set, my algorithm stops at some point and does not continue anymore.
> I already waited quite some time, but nothing happens. The Linux process explorer also shows
> that the process is sleeping and waiting for something to happen, which could be a deadlock.
> I attached the source of my program; the class HAC_2 is the actual algorithm.
> Changing line 271 from "if(Integer.parseInt(tokens[0]) > 282)" to "if(Integer.parseInt(tokens[0]) > 283)"
> on my PC "enables" the bug. 282 and 283 are document numbers in my test data, and this line
> skips all documents with an id greater than that.
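
For illustration, a minimal sketch of the guard on line 271 described above. The method name,
the tab-separated line format, and the surrounding class are assumptions made for this sketch;
only the 282/283 threshold and the HAC_2 line number come from the report.

    // Hypothetical reconstruction of the document filter: keep only documents whose
    // numeric id (first token of the input line) is at or below the threshold.
    public class DocumentFilterSketch {
        static boolean keepDocument(String line, int maxId) {
            String[] tokens = line.split("\t");          // assumed tab-separated input
            return Integer.parseInt(tokens[0]) <= maxId; // 282 runs fine, 283 triggers the hang
        }
    }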

This message was sent by Atlassian JIRA