impala-user mailing list archives

From Jeszy <>
Subject Re: invalidate metadata behaviour
Date Wed, 29 Nov 2017 13:13:38 GMT

On 29 November 2017 at 11:12, Antoni Ivanov <> wrote:
> Thanks,
> I hope you don't mind a few more questions:
>> Node2 would also eventually consider these invalidated
> - How exactly does this work? E.g. when I issue INVALIDATE METADATA, does it tell the
catalogd to invalidate the metadata, or is this information broadcast through the statestored?

That's right, it's invalidated on the catalogd first, then propagated
to the daemons through statestore.
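For intuition, here is a minimal Python sketch of that push-based fan-out: the catalogd publishes a delta to the statestore, which broadcasts it to every subscribed impalad. All class and method names here are illustrative, not Impala's actual source:

```python
# Conceptual sketch of catalog-change propagation (illustrative names only):
# catalogd -> statestore topic -> every subscribed impalad's local cache.

class Statestore:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, daemon):
        self.subscribers.append(daemon)

    def publish(self, delta):
        # Push-based: the statestore broadcasts the delta to all daemons.
        for daemon in self.subscribers:
            daemon.apply(delta)

class Impalad:
    def __init__(self):
        self.catalog_cache = {}

    def apply(self, delta):
        table, state = delta
        self.catalog_cache[table] = state

statestore = Statestore()
daemons = [Impalad() for _ in range(3)]
for d in daemons:
    statestore.subscribe(d)

# The catalogd marks a table invalidated; the statestore fans it out.
statestore.publish(("db.tbl", "INVALIDATED"))
print(all(d.catalog_cache["db.tbl"] == "INVALIDATED" for d in daemons))  # prints True
```

This is why node2 "eventually" sees the invalidation: it doesn't poll the catalogd itself, it just receives the next topic update pushed by the statestore.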

>> stored in the catalog daemon centrally
> - Oh, so metadata is stored in the catalogd. I thought it was stored only in the statestore
(and cached in each impalad), and that the catalogd facilitated fetching metadata from the Hive
Metastore and block information from the HDFS NameNode.
> Where was I wrong?

Statestore is only responsible for pushing the catalog changes to the
impala daemons; catalogd is the central store. I'm not 100% sure about
how the catalog deltas are stored in the statestore, so I'll let others
chime in on that.

> - Does INVALIDATE METADATA have any impact on the Hive Metastore? I don't believe so,
right? E.g. instead of running INVALIDATE METADATA (say, after an HDFS rebalance), I can restart
Impala to clear caches (including the statestore catalog topic) so that new data is loaded lazily.

A global 'invalidate metadata' (i.e. not a table-specific one) will
have the same impact on the HMS as a catalog / service restart would.
Impala will have to fetch the list of tables in both cases, so there is
some work on the HMS side, but it's minuscule. It matters when the HMS
is unreachable, for example.

One addition: I just noticed that earlier you said that most of the
2000 tables have far fewer partitions than 5000, so YMMV. Partition
and file count directly impact metadata size; the fewer you have,
the better (metadata-wise, at least).
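To make the memory concern concrete, here is a rough back-of-envelope in Python using the numbers from this thread. The per-partition and per-block byte costs are assumptions for illustration, not measured Impala figures:

```python
# Back-of-envelope estimate of catalog cache size from the thread's numbers.
# PER_PARTITION_BYTES and PER_BLOCK_BYTES are assumed costs, not measured ones.

PER_PARTITION_BYTES = 2 * 1024   # assumed metadata overhead per partition
PER_BLOCK_BYTES = 500            # assumed overhead per block's location info

tables_big = 30
parts_big = 10_000               # big tables: over 10,000 partitions each
tables_small = 2_000
parts_small = 5_000              # upper bound; most are far smaller
blocks_per_partition = 3         # at least one file with 3 blocks, per the thread

partitions = tables_big * parts_big + tables_small * parts_small
blocks = partitions * blocks_per_partition

total_bytes = partitions * PER_PARTITION_BYTES + blocks * PER_BLOCK_BYTES
print(f"{partitions:,} partitions, ~{total_bytes / 2**30:.1f} GiB of metadata")
```

Even with these modest per-object assumptions, the worst-case total lands well above a 32 GB node's memory, which is why the actual (much lower) partition counts matter so much.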


> -Antoni
> -----Original Message-----
> From: Jeszy []
> Sent: Wednesday, November 29, 2017 9:56 AM
> To:
> Cc:
> Subject: Re: invalidate metadata behaviour
> Hey Antoni,
> On 29 November 2017 at 07:42, Antoni Ivanov <> wrote:
>> Hi,
>> I am wondering if I run INVALIDATE METADATA for the whole database on
>> node1
>> Then I ran a query on node2 – would the query on node2 used the cached
>> metadata for the tables or it would know it’s invalidated?
> Node2 would also eventually consider these invalidated.
>> And second, how safe is it to run for a database with many tables: say
>> 30 tables with over 10,000 partitions, and 2000 more with under 5000
>> partitions (most of them under 100)?
>> And each Impala Daemon node has little memory (32G, below the Cloudera
>> recommendation).
> These numbers influence the size of the catalog cache, which is stored in the catalog
daemon centrally and then replicated on each impalad, or on each coordinator in more recent
versions. The metadata you mention (2000 tables * 5000 partitions each, plus the big tables)
is in the 10-million-partition range. Each of those will have at least one file with 3 blocks,
probably more, so all this adds up to a sizeable amount of metadata. The cached version will
require a large amount of memory (on the catalog as well as on the daemons/coordinators), which
could easily lead to even small queries running out of memory with only 32 GB.
>> Thanks,
>> Antoni
