atlas-dev mailing list archives

From "Jiaqi Shan (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (ATLAS-2863) Some configurations for Kafka notification producer are invalid
Date Sun, 09 Sep 2018 06:29:00 GMT

     [ https://issues.apache.org/jira/browse/ATLAS-2863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiaqi Shan updated ATLAS-2863:
------------------------------
    Description: 
We encountered a problem when using the Hive hook in production: the hook times out after
180 seconds when the Kafka server is down. Setting the properties zookeeper.connection.timeout.ms
and zookeeper.session.timeout.ms does not solve the problem.

We found warnings in hive.log indicating that some configurations supplied to the Kafka
notification producer are not valid producer configs:
{code:java}
The configuration 'zookeeper.connection.timeout.ms' was supplied but isn't a known config.
The configuration 'zookeeper.session.timeout.ms' was supplied but isn't a known config
{code}
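The warnings arise because properties under the atlas.kafka prefix are passed through to the Kafka producer with the prefix stripped, and the producer only recognizes its own configuration keys; zookeeper.* settings are not producer configs. The following is a minimal sketch of that mechanism, not Atlas's actual code: the class name, helper method, and the known-config set (a small illustrative subset of real producer configs) are all assumptions for illustration.

```java
import java.util.*;

public class PrefixPassthroughDemo {
    // Illustrative subset of keys the Kafka producer actually recognizes;
    // zookeeper.* settings belong to the broker/old consumer, not the producer.
    static final Set<String> KNOWN_PRODUCER_CONFIGS = new HashSet<>(Arrays.asList(
            "bootstrap.servers", "acks", "retries", "max.block.ms"));

    // Hypothetical helper mirroring the pass-through: strip the atlas.kafka.
    // prefix and collect keys the producer would warn about as unknown.
    static List<String> unknownConfigs(Properties atlasProps) {
        List<String> unknown = new ArrayList<>();
        for (String name : atlasProps.stringPropertyNames()) {
            if (name.startsWith("atlas.kafka.")) {
                String kafkaKey = name.substring("atlas.kafka.".length());
                if (!KNOWN_PRODUCER_CONFIGS.contains(kafkaKey)) {
                    unknown.add(kafkaKey); // producer logs "... isn't a known config"
                }
            }
        }
        Collections.sort(unknown); // deterministic order for display
        return unknown;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("atlas.kafka.bootstrap.servers", "localhost:9092");
        p.setProperty("atlas.kafka.zookeeper.connection.timeout.ms", "200");
        p.setProperty("atlas.kafka.zookeeper.session.timeout.ms", "400");
        System.out.println(unknownConfigs(p));
        // prints [zookeeper.connection.timeout.ms, zookeeper.session.timeout.ms]
    }
}
```

This matches the two warnings quoted above: the ZooKeeper timeouts reach the producer but are silently ignored, which is why setting them cannot shorten the hook's blocking time.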
When the Kafka server is closed unexpectedly, the Atlas hook throws a TimeoutException
caused by a failure to update metadata:
{code:java}
org.apache.atlas.notification.NotificationException: java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
        at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:220)
        at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:182)
        at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:89)
        at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:133)
        at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:118)
        at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:171)
        at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:156)
        at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:52)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1804)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException:
Failed to update metadata after 60000 ms.
        at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1124)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:823)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:648)
        at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:197)
        ... 23 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after
60000 ms.
{code}
 We propose adding the property atlas.kafka.max.block.ms to atlas-application.properties to
control how long {{KafkaProducer}} blocks when it fails to update metadata.
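The proposed setting might look like the following in atlas-application.properties; the values shown (the broker address and the 15000 ms cap) are purely illustrative, and the max.block.ms key is the standard Kafka producer config that bounds how long send() waits for metadata:

```
# atlas-application.properties (sketch; values are illustrative)
atlas.kafka.bootstrap.servers=localhost:9092
# Proposed: cap how long the producer's send() blocks waiting for metadata,
# so a Hive query is not stalled for the default 60000 ms when Kafka is down.
atlas.kafka.max.block.ms=15000
```

With the atlas.kafka prefix stripped, this reaches the producer as max.block.ms, a key it does recognize, unlike the zookeeper.* settings above.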

 

  was:
We encountered a problem when using the Hive hook in production: the hook times out after
180 seconds when the Kafka server is down. Setting the properties zookeeper.connection.timeout.ms
and zookeeper.session.timeout.ms does not solve the problem.

We found warnings in hive.log indicating that some configurations supplied to the Kafka
notification producer are not valid producer configs:
{code:java}
The configuration 'zookeeper.connection.timeout.ms' was supplied but isn't a known config.
The configuration 'zookeeper.session.timeout.ms' was supplied but isn't a known config
{code}
The Atlas hook throws a TimeoutException:
{code:java}
org.apache.atlas.notification.NotificationException: java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
        at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:220)
        at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:182)
        at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:89)
        at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:133)
        at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:118)
        at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:171)
        at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:156)
        at org.apache.atlas.hive.hook.HiveHook.run(HiveHook.java:52)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1804)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:775)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException:
Failed to update metadata after 60000 ms.
        at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1124)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:823)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:648)
        at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:197)
        ... 23 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after
60000 ms.
{code}
 We propose adding the property atlas.kafka.max.block.ms to atlas-application.properties to
control how long {{KafkaProducer.send()}} blocks.

 


> Some configurations for Kafka notification producer are invalid
> ---------------------------------------------------------------
>
>                 Key: ATLAS-2863
>                 URL: https://issues.apache.org/jira/browse/ATLAS-2863
>             Project: Atlas
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: Jiaqi Shan
>            Priority: Major
>             Fix For: 1.0.0
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
