atlas-dev mailing list archives

From "Hemanth Yamijala (JIRA)" <>
Subject [jira] [Updated] (ATLAS-629) Kafka messages in ATLAS_HOOK might be lost in HA mode at the instant of failover.
Date Fri, 13 May 2016 04:17:12 GMT


Hemanth Yamijala updated ATLAS-629:
    Attachment: ATLAS-629-3.patch

I made just one change to the last patch: when initializing the autoCommitEnabled variable
for the Kafka consumer, I default the value to true to maintain backwards compatibility for
existing consumers such as Ranger. It can of course be overridden in configuration as usual.

-+                Boolean.valueOf(properties.getProperty("auto.commit.enable", "false")));
++                Boolean.valueOf(properties.getProperty("auto.commit.enable", "true")));
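The one-line change above can be sketched in isolation as follows. This is a minimal
illustration, not the actual Atlas consumer code: the class and method names are
hypothetical, and only the property lookup with its new default mirrors the patch.

```java
import java.util.Properties;

public class AutoCommitDefault {

    // Reads the auto-commit flag from consumer configuration.
    // Defaults to true when unset, preserving the pre-patch behaviour
    // for existing consumers (e.g. Ranger); an explicit setting wins.
    static boolean isAutoCommitEnabled(Properties properties) {
        return Boolean.valueOf(properties.getProperty("auto.commit.enable", "true"));
    }

    public static void main(String[] args) {
        // Unset: falls back to the backwards-compatible default of true.
        Properties defaults = new Properties();
        System.out.println(isAutoCommitEnabled(defaults)); // true

        // Explicitly overridden in configuration, as the HA-aware
        // Atlas server consumer would do to disable auto-commit.
        Properties overridden = new Properties();
        overridden.setProperty("auto.commit.enable", "false");
        System.out.println(isAutoCommitEnabled(overridden)); // false
    }
}
```

Defaulting to true here means only deployments that explicitly opt out (such as the Atlas
server itself, which must not auto-commit offsets around a failover) change behaviour.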

> Kafka messages in ATLAS_HOOK might be lost in HA mode at the instant of failover.
> ---------------------------------------------------------------------------------
>                 Key: ATLAS-629
>                 URL:
>             Project: Atlas
>          Issue Type: Bug
>    Affects Versions: 0.7-incubating
>            Reporter: Hemanth Yamijala
>            Assignee: Hemanth Yamijala
>            Priority: Critical
>             Fix For: 0.7-incubating
>         Attachments: ATLAS-629-1.patch, ATLAS-629-2.patch, ATLAS-629-3.patch, ATLAS-629.patch
> Write data to Kafka continuously from the Hive hook - this can be done by writing a script
> that constantly creates tables. Bring down the Active instance with kill -9. Ensure writes
> continue after the passive instance becomes active. The expectation is that the number of
> tables created and the number of tables in Atlas match.
> In one test, 180 tables were written and the instances were switched over 6 times. 1 table
> of the lot was lost, i.e. 179 tables appeared in Atlas and 1 did not get in.

This message was sent by Atlassian JIRA
