Thanks Grant

I have upgraded the Kudu client (including kudu-spark, so I tried client 1.9.0 against a Kudu 1.8.0 cluster) but still hit this issue.
It is especially noticeable on large tables (around 10,000,000,000 records).
The Apache Spark job fails after 4 retries with a "scanner not found" error, so we cannot process such tables in Kudu.

Regards, Dmitry Pavlov

Friday, March 29, 2019, 18:36 +03:00, from Grant Henke <>:

Kudu 1.8.0 has a bug (KUDU-2710): scanner keep-alive calls to the server can overload it with table lookups and cause
scanners to time out. I would recommend upgrading to the 1.9.0 kudu-client and kudu-spark jars as a first step.
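
For reference, upgrading both jars together in the build might look like the sketch below (sbt coordinates; the 1.9.0 version comes from this thread, while the Spark 2.x / Scala 2.11 artifact name is an assumption — adjust it to your build):

```scala
// build.sbt — a minimal sketch: keep kudu-client and kudu-spark on the same version
// so the client-side KUDU-2710 fix is actually picked up by the Spark job.
libraryDependencies ++= Seq(
  "org.apache.kudu" % "kudu-client"      % "1.9.0",
  "org.apache.kudu" % "kudu-spark2_2.11" % "1.9.0"  // artifact name assumed for Spark 2.x
)
```

Note that the 1.9.0 client is wire-compatible with a 1.8.0 cluster, so only the job's dependencies need to change for this first step.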

Dmitry Pavlov