incubator-kato-dev mailing list archives

From Lukasz <flo...@intercel.com.pl>
Subject Re: Processing huge heap dump.
Date Sun, 10 Jan 2010 17:01:25 GMT
Hi Steve,

Thanks for the answer.

Following is the code I used.
I created the dump by running GenHeap and invoking dumpHeap(..) from jconsole.
I wrote earlier that my dump was around ~370MB, but today when I regenerated 
it, it came to ~440MB (maybe last time I took the dump before all the 
instances were created).

Yesterday I tried to take a look at the parser code, but I don't yet have a 
good picture of what (and why) is going on there.

Regards,
Lukasz



--------------------------- Heap generator ------------------------------------------
import java.util.concurrent.ConcurrentLinkedQueue;

public class GenHeap {

    private ConcurrentLinkedQueue<Data> queue;

    public GenHeap(int size) {
        queue = new ConcurrentLinkedQueue<Data>();
        for (int i=0; i<size; ++i) {
            queue.add(new Data());
        }
    }

    // Static root keeps the queue (and all Data instances) reachable for the dump.
    private static GenHeap root;

    public static void main(String[] args) throws Exception {
        root = new GenHeap(Integer.parseInt(args[0]));
        System.out.println("Ready for taking a dump.");
        Thread.sleep(1000000); // keep the JVM alive long enough to take the dump
        System.out.println("root: "+root);
    }
}

public class Data {
    private static int counter = 0;
    public int myid = ++counter; // unique per-instance id
}
---------------------------------------------------------------------------------------------------
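
For reference, GenHeap could also trigger the same HPROF dump itself, without 
jconsole, through the HotSpot diagnostic MXBean. This is a minimal sketch 
assuming a HotSpot JVM; the class name and output path are illustrative, not 
from my original setup:

--------------------------- Programmatic dump trigger (sketch) -----------------------
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class DumpSelf {

    public static void main(String[] args) throws Exception {
        // Proxy to the HotSpotDiagnostic bean -- the same bean whose dumpHeap
        // operation jconsole exposes.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Second argument: true = dump only live (reachable) objects.
        bean.dumpHeap("/tmp/kato.hprof", true);
    }
}
---------------------------------------------------------------------------------------------------

Running this inside the target JVM writes the dump of that JVM's own heap.
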
------------------------ Analyzer ----------------------------------------------------------
package mykato;

import java.io.File;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import javax.tools.diagnostics.FactoryRegistry;
import javax.tools.diagnostics.image.Image;
import org.apache.kato.hprof.datalayer.HProfFactory;
import org.apache.kato.hprof.datalayer.HProfFile;
import org.apache.kato.hprof.datalayer.IHProfRecord;
import org.apache.kato.hprof.datalayer.IHeapDumpHProfRecord;

public class App {

    public static void main(String[] args) throws Exception {
        System.out.println("Hello World!");
        String pathToDump = "/tmp/kato.hprof";
        File dump = new File(pathToDump);

//        defaultApproach(dump);
        hprofReaderApproach(dump);

        System.out.println("TaDa!");
    }

    private static void defaultApproach(File dump) throws Exception {
        // Open the dump through the generic Image API (the Image itself is unused here).
        Image image = FactoryRegistry.getDefaultRegistry().getImage(dump);
    }

    private static void hprofReaderApproach(File dump) throws Exception {
        HProfFile hprofFile = HProfFactory.createReader(dump);
//        HProfFile hprofFile = new HProfFile(new RandomAccesDataProvider(dump));
        hprofFile.open();

        // Walk the top-level HPROF records until getRecord() runs off the end.
        int r = 0;
        IHProfRecord record;
        while ((record = hprofFile.getRecord(r)) != null) {
            if (record instanceof IHeapDumpHProfRecord) {
                processHeapDump((IHeapDumpHProfRecord)record);
            }
            ++r;
        }

    }

    private static void processHeapDump(IHeapDumpHProfRecord heapDump) {
        System.out.println("heapDump: "+heapDump);

        int r = 0;
        long lastTimestamp = System.currentTimeMillis();
        IHProfRecord record;
        // Visit every sub-record of the heap dump, reporting progress every 100000.
        while ((record = heapDump.getSubRecord(r)) != null) {
            if (++r % 100000 == 0) {
                long timestamp = System.currentTimeMillis();
                System.out.println("HeapSubRecord: " + r + " (" + (timestamp - lastTimestamp) + "ms, " + getMemoryUsed() + "kB)");
                lastTimestamp = timestamp;
            }
        }
    }

    // Find the old-generation pool so progress lines can report tenured-space usage.
    // (Assumes a generational collector whose old-gen pool name contains "old".)
    private static MemoryPoolMXBean oldGenPool;
    static {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().toLowerCase().contains("old")) {
                oldGenPool = pool;
                break;
            }
        }
        System.out.println("MemoryPool: " + oldGenPool.getName());
    }

    private static long getMemoryUsed() {
        return oldGenPool.getUsage().getUsed() / 1024;
    }
}
--------------------------------------------------------------------------------------------------------------




Steve Poole wrote:
> Thanks for this Lukasz - you are probably the first person to use this
> code other than the developers and it's great to get some feedback. Can you
> share the code you used to create the dump and to visit the HPROF records?
> Stuart has made some performance adjustments to the hprof code and we'll see
> if we can do better.
>
> On the spec list we're discussing the basics of a "snapshot" dump concept
> where only what you need gets dumped. I wonder if the same idea could be
> applied to opening a dump. It would be great to know when reading a dump
> that certain information is not required - that should improve
> performance.
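
To make that skip-reading idea concrete: the HPROF format prefixes every 
top-level record with a fixed header (u1 tag, u4 time offset, u4 body length), 
so a reader that knows it does not need a record can seek straight past its 
body. A rough sketch below, using plain java.io rather than the Kato API (the 
class name is mine, and real heap dump records may of course still need 
parsing once you want their contents):

------------------------ Record-skipping sketch ----------------------------------------
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

public class SkipScan {

    private static final int HEAP_DUMP = 0x0c;         // HPROF "HEAP DUMP" tag
    private static final int HEAP_DUMP_SEGMENT = 0x1c; // "HEAP DUMP SEGMENT" (1.0.2)

    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        while (in.readByte() != 0) { /* skip the NUL-terminated format string */ }
        in.readInt();  // identifier size
        in.readLong(); // dump timestamp
        try {
            while (true) {
                int tag = in.readUnsignedByte();
                in.readInt(); // time offset, not needed here
                long length = in.readInt() & 0xffffffffL; // unsigned body length
                if (tag == HEAP_DUMP || tag == HEAP_DUMP_SEGMENT) {
                    System.out.println("heap dump record: " + length + " bytes");
                }
                // Skip the record body without parsing it -- the whole point.
                long remaining = length;
                while (remaining > 0) {
                    long skipped = in.skip(remaining);
                    if (skipped <= 0) throw new EOFException();
                    remaining -= skipped;
                }
            }
        } catch (EOFException endOfFile) {
            // ran off the end of the file: done
        }
        in.close();
    }
}
---------------------------------------------------------------------------------------------------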
