hbase-dev mailing list archives

From: Ted Yu <yuzhih...@gmail.com>
Subject: Re: Is there a better way to handle too much log
Date: Tue, 18 Mar 2014 23:49:08 GMT
Here is a related post:
http://stackoverflow.com/questions/13864899/log4j-dailyrollingfileappender-are-rolled-files-deleted-after-some-amount-of


On Tue, Mar 18, 2014 at 2:25 PM, Enis Söztutar <enis.soz@gmail.com> wrote:

> DRFA already deletes old logs; you do not necessarily have to have a cron
> job.
>
> You can use RollingFileAppender to limit the max file size and the number of
> log files to keep around.
>
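> For example, a RollingFileAppender entry in conf/log4j.properties could look
> roughly like this (a sketch, not the shipped defaults; the appender name,
> file path, size limit, and backup count are placeholders to adjust):
>
>   # Cap each file at 256MB and keep at most 20 rotated files,
>   # so total log usage is bounded at roughly 256MB * 21 per process.
>   log4j.rootLogger=INFO, RFA
>   log4j.appender.RFA=org.apache.log4j.RollingFileAppender
>   log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
>   log4j.appender.RFA.MaxFileSize=256MB
>   log4j.appender.RFA.MaxBackupIndex=20
>   log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
>   log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
>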
> Check out conf/log4j.properties.
> Enis
>
>
> On Tue, Mar 18, 2014 at 7:22 AM, haosdent <haosdent@gmail.com> wrote:
>
> > Yep, I use INFO level. Let me think about this later. If I find a better
> > way, I will open an issue and record it. Thanks for your great help.
> > @tedyu
> >
> >
> > On Tue, Mar 18, 2014 at 9:49 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> >
> > > If the log grows so fast that disk space is about to be exhausted, the
> > > verbosity should be lowered.
> > >
> > > Do you have DEBUG logging turned on?
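> > >
> > > Lowering verbosity is usually a single logger line in
> > > conf/log4j.properties, e.g. (a sketch; the package names below are the
> > > usual Hadoop/HBase ones, adjust to whatever is noisy in your logs):
> > >
> > >   # Keep Hadoop and HBase classes at INFO instead of DEBUG
> > >   log4j.logger.org.apache.hadoop=INFO
> > >   log4j.logger.org.apache.hadoop.hbase=INFO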
> > >
> > > Cheers
> > >
> > > On Mar 18, 2014, at 6:08 AM, haosdent <haosdent@gmail.com> wrote:
> > >
> > > > Thanks for your reply. DailyRollingFileAppender and a cron job could work
> > > > in the normal scenario. But sometimes the logs grow too fast, or the disk
> > > > space may be used by other applications. Is there a way to make logging
> > > > more "smart" and choose a policy according to the current disk space?
> > > >
> > > >
> > > > On Tue, Mar 18, 2014 at 8:49 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> > > >
> > > >> Can you utilize
> > > >> http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html ?
> > > >>
> > > >> And have a cron job clean up old logs?
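> > > >>
> > > >> For example, a DailyRollingFileAppender entry in conf/log4j.properties
> > > >> along these lines (a sketch; the appender name and file path are
> > > >> placeholders, not necessarily HBase's shipped defaults):
> > > >>
> > > >>   # Roll to a new file each day; rolled files keep a date suffix
> > > >>   log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
> > > >>   log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
> > > >>   log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
> > > >>   log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
> > > >>   log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
> > > >>
> > > >> and a cron job that periodically deletes rolled files older than some
> > > >> number of days.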
> > > >>
> > > >> Cheers
> > > >>
> > > >> On Mar 18, 2014, at 5:29 AM, haosdent <haosdent@gmail.com> wrote:
> > > >>
> > > >>> Sometimes a call to Log.xxx cannot return if the disk partition holding
> > > >>> the log path is full, and HBase hangs because of this. So I wonder whether
> > > >>> there is a better way to handle too much log output. For example, through a
> > > >>> configuration item in hbase-site.xml, we could delete old logs periodically,
> > > >>> or delete old logs when the disk does not have enough space.
> > > >>>
> > > >>> I think it is unacceptable for HBase to hang when there is not enough disk
> > > >>> space. Looking forward to your ideas. Thanks in advance.
> > > >>>
> > > >>> --
> > > >>> Best Regards,
> > > >>> Haosdent Huang
> > > >
> > > >
> > > >
> > > > --
> > > > Best Regards,
> > > > Haosdent Huang
> > >
> >
> >
> >
> > --
> > Best Regards,
> > Haosdent Huang
> >
>
