jackrabbit-users mailing list archives

From Julien Poffet <julienpof...@gmail.com>
Subject Re: tmp files filling up tomcat
Date Thu, 26 Nov 2009 09:44:59 GMT
Ok, thanks for your advice, I'll give it a try...

Regards,
Julien

On Thu, Nov 26, 2009 at 10:36 AM, Martijn Hendriks <mhndrks@gmail.com> wrote:

> Hi Julien,
>
> You can try setting the CacheManager cache sizes to 0 and the resize
> interval to, say, 10 ms. That way your caches are kept small very
> aggressively.
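>
> Roughly (untested sketch; it reuses the setters from your earlier
> snippet and assumes "repository" is the RepositoryImpl instance):
>
> CacheManager cm = repository.getCacheManager(); // repository is a RepositoryImpl
> cm.setMaxMemory(0);
> cm.setMaxMemoryPerCache(0);
> cm.setMinMemoryPerCache(0);
> cm.setMinResizeInterval(10); // milliseconds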
>
> Deleting the files older than 30 minutes will give broken properties
> in Jackrabbit. But if you only read them once directly after retrieval
> then this might just work in your situation.
>
> Best regards,
> Martijn
>
> On Thu, Nov 26, 2009 at 8:21 AM, Julien Poffet <julienpoffet@gmail.com>
> wrote:
> > Hi,
> >
> > Sorry to bother you with that, but I really need to fix this issue
> > ASAP... My importation process is supposed to run the whole weekend,
> > but now the server crashes every time after two hours. Moving the
> > temporary directory to a fs with a lot of storage isn't a good
> > solution for my situation...
> >
> > What if I delete the files on disk which are older than 30 minutes,
> > for instance? Would it work, or would I still risk getting broken
> > properties?
> >
> > What are the minimal values for the cache manager?
> >
> > Best regards,
> > Julien
> >
> > On Wed, Nov 25, 2009 at 11:25 AM, Thomas Müller <thomas.mueller@day.com> wrote:
> >
> >> Hi,
> >>
> >> Could you provide more details as described in
> >> http://wiki.apache.org/jackrabbit/QuestionsAndAnswers "Reporting
> >> Problems", especially:
> >>
> >> * The configuration (repository.xml and all workspace.xml files).
> >> * The versions of the Jackrabbit jar files you use (the list of all
> >> jar file names).
> >>
> >> What would also help a lot is a simple, standalone test case that
> >> reproduces the problem.
> >>
> >> Regards,
> >> Thomas
> >>
> >>
> >>
> >>
> >> On Wed, Nov 25, 2009 at 11:09 AM, Julien Poffet <julienpoffet@gmail.com> wrote:
> >> > Hi Martijn,
> >> >
> >> > There are many tiny files. The biggest files are about 230K.
> >> >
> >> > The weird thing is that the size grows and decreases when I start
> >> > parsing the WebDav. It goes up to ~30 MB and then down again to
> >> > ~2 MB. So this behavior suggests that the cache manager deletes the
> >> > files which are no longer used... But after a while the size
> >> > increases indefinitely.
> >> >
> >> > Cheers,
> >> > Julien
> >> >
> >> > On Wed, Nov 25, 2009 at 10:01 AM, Martijn Hendriks <mhndrks@gmail.com> wrote:
> >> >
> >> >> Hi Julien,
> >> >>
> >> >> Deleting the files on disk will not work. Then you get broken
> >> >> properties in the Jackrabbit caches. Are there many files in your
> >> >> temp dir, or just a couple of big ones?
> >> >>
> >> >> Best regards,
> >> >> Martijn
> >> >>
> >> >> On Wed, Nov 25, 2009 at 9:41 AM, Julien Poffet <julienpoffet@gmail.com> wrote:
> >> >> > Hi Martijn,
> >> >> > I tried to set up minimal values for the cache manager:
> >> >> > CacheManager cm = repository.getCacheManager();
> >> >> > cm.setMaxMemory(16 * 1024);
> >> >> > cm.setMaxMemoryPerCache(8 * 1024);
> >> >> > cm.setMinMemoryPerCache(1024);
> >> >> > cm.setMinResizeInterval(500);
> >> >> > Even with these settings my temp directory quickly grows up to
> >> >> > 1 GB... Another question: why does Jackrabbit not recreate these
> >> >> > cache files if they are deleted? I tried to remove them, but then
> >> >> > the WebDav can't render the files any more. I was assuming that if
> >> >> > the cache file is missing, it would be created again?
> >> >> > Thanks for the JIRA,
> >> >> > Best Regards,
> >> >> > Julien
> >> >> > On Wed, Nov 25, 2009 at 8:42 AM, Martijn Hendriks <mhndrks@gmail.com> wrote:
> >> >> >>
> >> >> >> Hi Julien,
> >> >> >>
> >> >> >> Ok, I see why you chose another approach. I created an issue for
> >> >> >> this: https://issues.apache.org/jira/browse/JCR-2407
> >> >> >>
> >> >> >> Best regards,
> >> >> >>
> >> >> >> Martijn
> >> >> >>
> >> >> >> On Tue, Nov 24, 2009 at 10:04 AM, Julien Poffet <julienpoffet@gmail.com> wrote:
> >> >> >> > Hi Martijn,
> >> >> >> > Thanks for the reply.
> >> >> >> > Yes, the files look like bin1965159231182123515.tmp.
> >> >> >> > Ok, I'll try to configure smaller cache sizes.
> >> >> >> > As far as I know the import/export API uses XML. My source
> >> >> >> > database is about 60 GB, so I don't believe it will work out of
> >> >> >> > the box...
> >> >> >> > Best regards,
> >> >> >> > Julien
> >> >> >> > On Mon, Nov 23, 2009 at 4:33 PM, Martijn Hendriks <mhndrks@gmail.com> wrote:
> >> >> >> >>
> >> >> >> >> Hi Julien,
> >> >> >> >>
> >> >> >> >> Do these files look like bin1965159231182123515.tmp? If so,
> >> >> >> >> these are the contents of binary properties which are cached by
> >> >> >> >> Jackrabbit, and I know of no way to avoid them. These files
> >> >> >> >> should be deleted automatically when the associated properties
> >> >> >> >> are garbage collected. If you have a lot of big binary
> >> >> >> >> properties, the contents on disk can indeed grow very fast. I
> >> >> >> >> know of two workarounds: (i) point java.io.tmpdir to a fs with a
> >> >> >> >> lot of space, and (ii) configure smaller cache sizes in
> >> >> >> >> org.apache.jackrabbit.core.state.CacheManager (available through
> >> >> >> >> an org.apache.jackrabbit.core.RepositoryImpl instance).
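> >> >> >> >>
> >> >> >> >> For (i), something like this when starting Tomcat (the path is
> >> >> >> >> just a placeholder):
> >> >> >> >>
> >> >> >> >>   -Djava.io.tmpdir=/path/with/lots/of/space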
> >> >> >> >>
> >> >> >> >> Btw, have you tried to use the import/export API for migrating
> >> >> >> >> your content?
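> >> >> >> >>
> >> >> >> >> Roughly (untested sketch; the paths are placeholders, oldSession
> >> >> >> >> and newSession are assumed to be open JCR sessions on the two
> >> >> >> >> repositories, and you'd add the usual exception handling):
> >> >> >> >>
> >> >> >> >> // export a subtree from the old repository as system view XML
> >> >> >> >> OutputStream out = new FileOutputStream("/tmp/export.xml");
> >> >> >> >> oldSession.exportSystemView("/content", out, false, false);
> >> >> >> >> out.close();
> >> >> >> >>
> >> >> >> >> // import it into the new repository
> >> >> >> >> InputStream in = new FileInputStream("/tmp/export.xml");
> >> >> >> >> newSession.getWorkspace().importXML("/", in,
> >> >> >> >>     ImportUUIDBehavior.IMPORT_UUID_CREATE_NEW);
> >> >> >> >> in.close();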
> >> >> >> >>
> >> >> >> >> Best regards,
> >> >> >> >> Martijn
> >> >> >> >>
> >> >> >> >> On Mon, Nov 23, 2009 at 4:17 PM, Julien Poffet <julienpoffet@gmail.com> wrote:
> >> >> >> >> > Here is my situation,
> >> >> >> >> >
> >> >> >> >> > I was using jackrabbit with a non-datastore config. So all the
> >> >> >> >> > content of jackrabbit was stored in my database. Now I just
> >> >> >> >> > migrated to a cluster/datastore config with a brand new
> >> >> >> >> > database prefix.
> >> >> >> >> >
> >> >> >> >> > At this point I'm trying to import the content of the old
> >> >> >> >> > repository into the new one. I have set up the
> >> >> >> >> > SimpleWebDavServlet to expose the content of the old repository
> >> >> >> >> > through WebDav. By doing this I can parse the WebDav and get
> >> >> >> >> > the files to import them into the new repository. So far it's a
> >> >> >> >> > little bit slow but it works fine. My problem is that when the
> >> >> >> >> > source WebDav is parsed, a lot of binary files (which I assume
> >> >> >> >> > are a kind of BLOB cache) are created in my tomcat temp dir.
> >> >> >> >> > These temporary files are never deleted and my server runs out
> >> >> >> >> > of space very quickly.
> >> >> >> >> >
> >> >> >> >> > Is there a way to avoid these temporary files?
> >> >> >> >> >
> >> >> >> >> > Cheers,
> >> >> >> >> > Julien
> >> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >
> >> >> >
> >> >>
> >> >
> >>
> >
>
