hadoop-common-user mailing list archives

From Mark Kerzner <mark.kerz...@shmsoft.com>
Subject Re: Best way to collect Hadoop logs across cluster
Date Fri, 19 Apr 2013 05:01:30 GMT
So you are saying the problem is very simple: just before destroying the
cluster, collect the logs to S3. I only need the logs after a specific
computation has completed anyway, so I have no special real-time
requirements.
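A teardown hook of the sort described above can be sketched in a few lines of shell. This is a minimal sketch, assuming the AWS CLI is available on each node; the bucket name, log directory, and key prefix are placeholder assumptions, not anything from this thread:

```shell
#!/bin/sh
# Hypothetical pre-teardown hook: push this node's Hadoop logs to S3
# before the EC2 instance is destroyed. Adjust paths for your install.
LOG_DIR=${HADOOP_LOG_DIR:-/var/log/hadoop}   # where the daemons write logs
BUCKET=s3://my-hadoop-logs                   # hypothetical bucket name
PREFIX=$(hostname)/$(date +%Y%m%d-%H%M%S)    # one prefix per node per run

# `aws s3 sync` uploads only files missing from (or newer than) the
# destination, so the same hook can also run periodically as a batch job.
if command -v aws >/dev/null 2>&1; then
    aws s3 sync "$LOG_DIR" "$BUCKET/$PREFIX"
else
    echo "aws CLI not found; skipping upload of $LOG_DIR" >&2
fi
```

Run on every node (for example via a cluster-wide ssh loop) just before terminating the instances, this leaves one S3 prefix per host that can be fetched later.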

For regular, permanent clusters, is there a way to view all the logs in
one place?

Thank you,
Mark


On Thu, Apr 18, 2013 at 11:51 PM, Marcos Luis Ortiz Valmaseda <
marcosluis2186@gmail.com> wrote:

> When you destroy an EC2 instance, the correct behavior is to erase all
> data.
> Why don't you create a service that collects the logs directly to an S3
> bucket, either in real time or in 5-minute batches?
>
>
> 2013/4/18 Mark Kerzner <mark.kerzner@shmsoft.com>
>
>> Hi,
>>
>> my clusters are on EC2, and they disappear after the cluster's instances
>> are destroyed. What is the best practice to collect the logs for later
>> storage?
>>
>> AWS does exactly that with EMR; how do they do it?
>>
>> Thank you,
>> Mark
>>
>
>
>
> --
> Marcos Ortiz Valmaseda,
> *Data-Driven Product Manager* at PDVSA
> *Blog*: http://dataddict.wordpress.com/
> *LinkedIn: *http://www.linkedin.com/in/marcosluis2186
> *Twitter*: @marcosluis2186 <http://twitter.com/marcosluis2186>
>
