flink-issues mailing list archives

From "Pankaj (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-9009) Error| You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
Date Fri, 16 Mar 2018 18:34:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16402323#comment-16402323 ]

Pankaj commented on FLINK-9009:
-------------------------------

No, it is not related to Kafka. I have already tried that and checked: the problem only occurs when
we introduce more parallelism and Flink writes to Cassandra with two clusters. Let's say
in my case I introduced parallelism = 10 because I have 10 partitions in the Kafka topic.

I do not face any problem in the same scenario when Flink is not writing to Cassandra.

The problem can be replicated with the steps I shared in the description; a minimal sketch of that setup is included after the quoted description below.

I'm not sure whether Flink's Cassandra connector includes the fixes from the two tickets below:

https://issues.apache.org/jira/browse/CASSANDRA-11243

https://issues.apache.org/jira/browse/CASSANDRA-10837

 

> Error| You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-9009
>                 URL: https://issues.apache.org/jira/browse/FLINK-9009
>             Project: Flink
>          Issue Type: Bug
>         Environment: PaaS platform: OpenShift
>            Reporter: Pankaj
>            Priority: Blocker
>
> Steps to reproduce:
> 1- Flink with Kafka as a consumer -> writing the stream to Cassandra using the Flink Cassandra sink.
> 2- In-memory job manager and task manager with checkpointing every 5000 ms.
> 3- env.setParallelism(10) -> as the Kafka topic has 10 partitions.
> 4- There are around 13 unique streams in a single Flink runtime environment, each reading from Kafka -> processing and writing to Cassandra.
> Hardware: CPU: 200 millicores. It is deployed on a PaaS platform on one node.
> Memory: 526 MB.
>  
> When I start the server, it starts Flink and then all of a sudden stops with the above error. It also shows an out-of-memory error.
>  
> It would be nice if anybody could suggest whether something is wrong.
>  
> Maven:
> flink-connector-cassandra_2.11: 1.3.2
> flink-streaming-java_2.11: 1.4.0
> flink-connector-kafka-0.11_2.11:1.4.0
>  
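
For reference, a minimal sketch of a job wired up the way the description says. This is not the reporter's actual code: the topic name "events", the keyspace/table "ks.events", the broker and Cassandra contact points, and the Tuple2 mapping are made-up placeholders; only the parallelism of 10, the 5000 ms checkpointing, and the Kafka -> Cassandra shape come from the report.

{code:java}
import java.util.Properties;
import java.util.UUID;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

import com.datastax.driver.core.Cluster;

public class KafkaToCassandraJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5000);   // checkpointing every 5000 ms, as in the report
        env.setParallelism(10);          // matches the 10 Kafka partitions

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");  // assumed broker address
        props.setProperty("group.id", "flink-consumer");       // assumed consumer group

        // One of the ~13 streams described in the report (the "events" topic is assumed).
        DataStream<String> lines = env.addSource(
                new FlinkKafkaConsumer011<>("events", new SimpleStringSchema(), props));

        // Map each record to an (id, payload) tuple for the Cassandra tuple sink.
        DataStream<Tuple2<String, String>> rows = lines.map(
                new MapFunction<String, Tuple2<String, String>>() {
                    @Override
                    public Tuple2<String, String> map(String value) {
                        return new Tuple2<>(UUID.randomUUID().toString(), value);
                    }
                });

        // Cassandra sink: each parallel sink instance builds its own Cluster via this
        // ClusterBuilder when it opens, and each Cluster brings its own driver-side
        // resources (timers included), which is presumably where the warning comes from.
        CassandraSink.addSink(rows)
                .setQuery("INSERT INTO ks.events (id, payload) VALUES (?, ?);")
                .setClusterBuilder(new ClusterBuilder() {
                    @Override
                    protected Cluster buildCluster(Cluster.Builder builder) {
                        return builder.addContactPoint("cassandra-host").build();  // assumed host
                    }
                })
                .build();

        env.execute("kafka-to-cassandra");
    }
}
{code}

The report mentions around 13 streams and two Cassandra clusters; each additional CassandraSink would get its own ClusterBuilder the same way, so the number of Cluster objects (and their timers) grows with parallelism times the number of sinks, which would presumably be what triggers the HashedWheelTimer warning.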



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
