impala-reviews mailing list archives

From "Joe McDonnell (Code Review)" <>
Subject [Impala-ASF-CR] IMPALA-4623: Enable file handle cache
Date Sat, 08 Apr 2017 01:18:34 GMT
Joe McDonnell has uploaded a new patch set (#3).

Change subject: IMPALA-4623: Enable file handle cache

IMPALA-4623: Enable file handle cache

Currently, every scan range maintains a file handle, even
when multiple scan ranges are accessing the same file.
Opening these file handles causes load on the
NameNode, which can lead to scaling issues.

There are two parts to this change:
1. Enable file handle caching by default
2. Share the file handle between scan ranges from the same
file
The scan range no longer maintains its own Hdfs file
handle. On each read, the io thread will get the Hdfs file
handle from the cache (opening it if necessary) and use
that for the read. This allows multiple scan ranges on the
same file to use the same file handle. Since the file
offsets are no longer consistent for an individual scan
range, all Hdfs reads need to either use hdfsPread or do
a seek before reading. Additionally, since Hdfs read
statistics are maintained on the file handle, the read
statistics must be retrieved and cleared after each read.
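The positional-read requirement above can be sketched with
POSIX pread (standing in here for hdfsPread): because several
scan ranges share one handle, no reader may rely on the
shared file offset, so each read names its own offset.

```python
# Sketch: why a shared file handle needs positional reads.
# os.pread is used here as a stand-in for hdfsPread; the file
# and offsets are illustrative, not from the patch.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")

# Scan range A reads bytes [2, 5); scan range B reads bytes
# [7, 10). pread supplies the offset on every call and never
# advances the shared file offset, so neither reader disturbs
# the other.
a = os.pread(fd, 3, 2)
b = os.pread(fd, 3, 7)
assert a == b"234"
assert b == b"789"

os.close(fd)
os.unlink(path)
```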

Scan ranges that are accessing data cached by Hdfs
will get a file handle from the cache, but the file
handle will be kept on the scan range for the time
that the scan range is in use. This prevents the
cache from closing the file handle while the data
buffer is in use.
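The pinning behavior described above can be sketched as a
cache whose eviction pass skips entries that are currently
held by a scan range (class and method names here are
hypothetical, not Impala's):

```python
# Sketch (hypothetical names): eviction must not close a
# handle while a scan range still holds it, so held entries
# are marked pinned and skipped during eviction.
from collections import OrderedDict

class PinningCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> [handle, pinned]

    def put(self, key, handle):
        self.entries[key] = [handle, False]
        self._evict()

    def pin(self, key):
        self.entries[key][1] = True
        return self.entries[key][0]

    def unpin(self, key):
        self.entries[key][1] = False

    def _evict(self):
        # Walk from least recently inserted, skipping pinned
        # entries, until we are back under capacity.
        for key in list(self.entries):
            if len(self.entries) <= self.capacity:
                break
            if not self.entries[key][1]:
                del self.entries[key]

cache = PinningCache(capacity=1)
cache.put("cached_file", "h1")
cache.pin("cached_file")          # scan range holds the handle
cache.put("other_file", "h2")     # over capacity now
# The pinned handle survives eviction pressure.
assert "cached_file" in cache.entries
```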

To manage contention, the file handle cache is now
partitioned by a hash of the key into independent
caches with independent locks. The allowed capacity
of the file handle cache is split evenly among the
partitions. File handles are evicted independently
for each partition.
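The partitioning scheme above can be sketched as follows
(the class is a hypothetical illustration, not the patch's
C++ implementation): the key hashes to one partition, which
has its own lock and an even share of the total capacity,
and which evicts LRU entries without touching its siblings.

```python
# Sketch (hypothetical class): a cache partitioned by a hash
# of the key, with an independent lock and capacity share
# per partition.
import threading
from collections import OrderedDict

class PartitionedLruCache:
    def __init__(self, num_partitions, total_capacity):
        # Capacity is split evenly among the partitions.
        self.partitions = [
            {"lock": threading.Lock(),
             "capacity": total_capacity // num_partitions,
             "entries": OrderedDict()}
            for _ in range(num_partitions)
        ]

    def _partition(self, key):
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key, handle):
        p = self._partition(key)
        with p["lock"]:
            p["entries"].pop(key, None)
            p["entries"][key] = handle  # most recent at the end
            while len(p["entries"]) > p["capacity"]:
                # Evict the LRU entry of this partition only.
                p["entries"].popitem(last=False)

    def get(self, key):
        p = self._partition(key)
        with p["lock"]:
            if key not in p["entries"]:
                return None
            p["entries"].move_to_end(key)  # refresh recency
            return p["entries"][key]

cache = PartitionedLruCache(num_partitions=4, total_capacity=8)
cache.put("/data/part-0001", "handle_1")
assert cache.get("/data/part-0001") == "handle_1"
assert cache.get("/data/missing") is None
```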

If max_cached_file_handles is set to 0, file handle
caching is off and the previous behavior applies.

Tests: a new test copies the files from an existing
table into a new directory and uses that to create an
external table. It queries the external table, then
uses the hdfs command line to manipulate the hdfs file
(delete, move, etc.). It queries again to make sure we
don't crash. Then it runs "invalidate metadata". In
the delete case, we expect zero rows; in the move case,
we expect the same number of rows.

1. Determine appropriate defaults.
2. Other tests
  a. File overwrite
  b. Any others?
3. For scan ranges that use Hdfs caching, should there
be some sharing at the scanner level?

Change-Id: Ibe5ff60971dd653c3b6a0e13928cfa9fc59d078d
M be/src/exec/
M be/src/exec/hdfs-scan-node-base.h
M be/src/runtime/buffered-block-mgr.h
M be/src/runtime/disk-io-mgr-internal.h
M be/src/runtime/
M be/src/runtime/
M be/src/runtime/
M be/src/runtime/disk-io-mgr.h
M be/src/util/
M be/src/util/lru-cache.h
M be/src/util/lru-cache.inline.h
M tests/query_test/
12 files changed, 498 insertions(+), 166 deletions(-)

  git pull ssh:// refs/changes/78/6478/3

Gerrit-MessageType: newpatchset
Gerrit-Change-Id: Ibe5ff60971dd653c3b6a0e13928cfa9fc59d078d
Gerrit-PatchSet: 3
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: Joe McDonnell <>
Gerrit-Reviewer: Dan Hecht <>
Gerrit-Reviewer: Joe McDonnell <>
Gerrit-Reviewer: Marcel Kornacker <>
Gerrit-Reviewer: Tim Armstrong <>
