logging-log4j-dev mailing list archives

From Apache <ralph.go...@dslextreme.com>
Subject Re: RollingAppenderSizeTest
Date Wed, 25 Jan 2017 01:35:37 GMT
That is in 2.8. Just don’t specify a file name. The only downside is that when the system
is shut down the current file isn’t compressed; that is because it will be reused if you
restart immediately. But if, on restart, it no longer matches the file pattern, it will never
be compressed. Of course, if you don’t compress the files you will never have this problem.
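For illustration, a configuration along these lines (the name, path, and pattern are only an
example, not taken from the test in question) uses that direct-write style: there is no
fileName attribute, so the appender writes straight to whatever file the filePattern
currently resolves to and nothing is renamed on rollover:

    <Configuration status="warn">
      <Appenders>
        <!-- No fileName attribute: with 2.8+ the appender writes directly to the file
             named by filePattern, so rollover just switches to the next file. -->
        <RollingFile name="Rolling" filePattern="logs/app-%d{yyyy-MM-dd}.log.gz">
          <PatternLayout pattern="%d %-5p [%t] %c - %m%n"/>
          <TimeBasedTriggeringPolicy/>
        </RollingFile>
      </Appenders>
      <Loggers>
        <Root level="info">
          <AppenderRef ref="Rolling"/>
        </Root>
      </Loggers>
    </Configuration>

The same idea should also work with a SizeBasedTriggeringPolicy and a %i counter in the
filePattern. The caveat above still applies: with a .gz pattern, the active file is only
compressed once the appender has rolled past it, not on shutdown.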

FWIW, the RollingFileAppender in both Log4j 1.x and Logback has always been among the most
problematic components, just by the nature of what it is trying to do and how it works.

Ralph

> On Jan 24, 2017, at 3:24 PM, Gary Gregory <garydgregory@gmail.com> wrote:
> 
> It feels like we've had so many issues with the rolling file appender for so long. For
> me, I'd rather have a "rolling" file appender be more like a "time window" file appender,
> where NO files are ever renamed. For example, with a time window/rollover of 1 day you
> start with a file called 2017-01-01.txt and on "rollover" we start a new file,
> 2017-01-02.txt. The "rolling" is that we "roll" logging over to the new file. The old file
> can still be compressed as before, and old files can still be deleted based on the file
> pattern.
> 
> When I first started using the RFA, I was surprised by its behavior of always writing to
> the same file name and then copying that file on rollover. Diff'rent strokes, I guess ;-)
> 
> Gary
> 
> On Tue, Jan 24, 2017 at 12:03 PM, Apache <ralph.goers@dslextreme.com> wrote:
> I am looking at the failures that are occurring in the RollingAppenderSizeTest in the
> Jenkins build. I am working on modifying the code so that the asynchronous tasks will
> complete before shutdown is allowed to complete. But I am running into an issue I am not
> sure how to resolve. The test is configured to have the log file roll over after 500 bytes
> have been written. It seems that some of the compression algorithms take longer than it
> takes to write 500 bytes, so after the maximum number of files is reached the system rolls
> over on top of files that are still in the process of being archived, and we get
> rename/move and delete problems.
> 
> Any ideas?
> 
> Ralph
> 
