hadoop-yarn-dev mailing list archives

From Sandy Ryza <sandy.r...@cloudera.com>
Subject Re: Llama - Low Latency Application MAster
Date Thu, 26 Sep 2013 23:55:40 GMT
Link corrections:
http://cloudera.github.io/llama
https://github.com/cloudera/llama

-Sandy


On Thu, Sep 26, 2013 at 4:48 PM, Alejandro Abdelnur <tucu@cloudera.com> wrote:

> Earlier this week I posted the following comment for tomorrow's YARN
> meetup. I just realized that most folks may have missed that post, so I'm
> sending it to the list as well.
>
> We've been working on getting Impala running on YARN and wanted to share
> Llama, a system that mediates between Impala and YARN.
>
> Our design and some of the rationale behind it and the code are available
> at:
>
> Docs: http://cloudera.github.io/llama...
> Code: https://github.com/cloudera/llam...
>
> We think our approach will be applicable to similar frameworks - those with
> low latency requirements that seek to run work in processes outside of the
> typical container lifecycle.
>
> Thanks for taking a look and for any feedback!
>
> -Alex, Eli, Henry, Karthik, Sandy, and Tucu
>
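To make the "outside the typical container lifecycle" point concrete: the pattern described above is a mediator that acquires and releases cluster resource grants on behalf of a long-lived engine (such as Impala's daemons), so the grants are decoupled from any per-task container process. The following is a toy sketch of that idea only — it is not Llama's actual code or API, and every name in it is hypothetical:

```python
# Toy illustration of the "mediator" pattern (hypothetical names, not
# Llama's real API): a broker that hands out resource grants against a
# fixed capacity, while the actual work runs in external, long-lived
# processes rather than in per-grant containers.
import itertools

class ResourceBroker:
    """Tracks resource grants against a fixed vcore capacity."""

    def __init__(self, total_vcores):
        self.total_vcores = total_vcores
        self.granted = {}                 # reservation id -> vcores held
        self._ids = itertools.count(1)

    def available(self):
        return self.total_vcores - sum(self.granted.values())

    def reserve(self, vcores):
        """Grant vcores immediately if capacity allows; the caller runs
        its work in its own processes. Returns a reservation id, or None
        (a real system would queue or back off instead)."""
        if vcores > self.available():
            return None
        rid = next(self._ids)
        self.granted[rid] = vcores
        return rid

    def release(self, rid):
        """Return a grant to the pool, independent of any process exit."""
        self.granted.pop(rid, None)

broker = ResourceBroker(total_vcores=8)
r1 = broker.reserve(5)                    # held while external work runs
print(broker.available())                 # 3
print(broker.reserve(4))                  # None: insufficient capacity
broker.release(r1)
print(broker.available())                 # 8
```

The key design point the email alludes to is the last step: `release` is an explicit broker call, not a side effect of a container exiting, which is what lets low-latency engines keep their processes warm between grants.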
