impala-user mailing list archives

From Jeszy <jes...@gmail.com>
Subject Re: Estimate peak memory VS used peak memory
Date Fri, 23 Feb 2018 11:47:43 GMT
Again, the 8TB estimate would not be relevant if the query had a mem_limit set.
I think all that we discussed is covered in the docs, but if you feel
like specific parts need clarification, please file a jira.
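To put a mem_limit on the query yourself, you can set it from impala-shell
before running it (the 10g figure below is only an illustration, not a tuned
value):

    set mem_limit=10g;
    -- admission control and the executors now use this 10g per-node cap
    -- instead of the planner's 8TB estimate
    select ...;

The same effect per pool comes from the pool's default query memory limit;
see the config sketch at the bottom of this mail.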

On 23 February 2018 at 11:51, Fawze Abujaber <fawzeaj@gmail.com> wrote:
> Sorry for asking so many questions, but your answers are closing gaps that I
> cannot find addressed in the documentation.
>
> So how can we explain that there was an estimate of 8TB per node and Impala
> still decided to admit this query?
>
> My goal is for every query that goes beyond the actual per-node limit to fail
> (which is what I set up via the default memory per node for each pool), while
> all other queries get queued rather than killed. From what I understand, that
> means I need to set the maximum number of queued queries to unlimited and the
> queue timeout to several hours.
>
> And to get there, I need to set the default memory per node for each pool,
> plus either the max concurrency or the max memory per pool, which determines
> how many queries can run concurrently in a specific pool.
>
> I think reaching this goal will close all my gaps.
>
>
>
> On Fri, Feb 23, 2018 at 11:49 AM, Jeszy <jeszyb@gmail.com> wrote:
>>
>> > Whether a query gets queued or not is based on the prediction, which is
>> > based on the estimate, and of course on the concurrency allowed in the pool.
>>
>> Yes, it is.
>>
>> > If I have a memory limit per pool and a memory limit per node for a pool,
>> > which can be used to estimate the number of queries that can run
>> > concurrently, is this also based on the prediction and not on actual use?
>>
>> Also on prediction.
>
>

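To make the setup described above concrete, here is a rough sketch of the
per-pool settings using the fair-scheduler.xml / llama-site.xml style of
admission control configuration. The pool name root.my_pool and all of the
numbers are made up for illustration; please check the property names and
units against the admission control docs for your Impala version.

fair-scheduler.xml (Max Memory for the pool, aggregate across the cluster):

    <queue name="my_pool">
      <maxResources>100000 mb, 0 vcores</maxResources>
    </queue>

llama-site.xml:

    <!-- default per-node mem_limit applied to every query in the pool; a
         query that actually needs more than this on some node fails with
         "memory limit exceeded" instead of dragging the whole node down -->
    <property>
      <name>impala.admission-control.pool-default-query-options.root.my_pool</name>
      <value>mem_limit=2g</value>
    </property>

    <!-- let queued queries wait for hours instead of timing out -->
    <property>
      <name>impala.admission-control.pool-queue-timeout-ms.root.my_pool</name>
      <value>14400000</value> <!-- 4 hours -->
    </property>

    <!-- a large queue so excess queries wait rather than get rejected -->
    <property>
      <name>llama.am.throttling.maximum.queued.reservations.root.my_pool</name>
      <value>1000</value>
    </property>

    <!-- alternatively (or additionally), cap concurrency directly -->
    <property>
      <name>llama.am.throttling.maximum.placed.reservations.root.my_pool</name>
      <value>5</value>
    </property>

With these made-up numbers on a 10-node cluster, admission control would let
roughly 100 GB / (10 nodes * 2 GB per node) = 5 queries run in the pool at
once; anything beyond that waits in the queue, and a query that blows past
its 2 GB per-node limit fails on its own rather than pushing other queries
over the node's memory.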