harmony-commits mailing list archives

From "Gregory Shimansky (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HARMONY-3581) [drlvm][shutdown][interpreter] stress.Sync crashes on shutdown on interpreter on Linux
Date Thu, 04 Oct 2007 14:59:50 GMT

     [ https://issues.apache.org/jira/browse/HARMONY-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Gregory Shimansky updated HARMONY-3581:
---------------------------------------

    Environment: Linux / x86_64  (was: Linux@x86_64)

> [drlvm][shutdown][interpreter] stress.Sync crashes on shutdown on interpreter on Linux
> --------------------------------------------------------------------------------------
>
>                 Key: HARMONY-3581
>                 URL: https://issues.apache.org/jira/browse/HARMONY-3581
>             Project: Harmony
>          Issue Type: Bug
>          Components: DRLVM
>         Environment: Linux / x86_64
>            Reporter: Gregory Shimansky
>            Priority: Minor
>
> When running the stress.Sync test on the interpreter, it quite often crashes on shutdown,
> after printing the PASSES message. The test spawns many daemon threads, and when it exits
> these daemon threads have to be stopped. The VM uses hythread_cancel (which ultimately calls
> pthread_cancel) to stop them and then continues with the shutdown process. But pthread_cancel
> doesn't kill daemon threads immediately, especially when there are many of them, so some of
> them are still alive while the VM shuts down. For reasons not yet understood, the pthread
> cancellation handler may crash under these conditions on SLES9. When this happens, the crash
> handler may crash in turn, because the interpreter library is already unloaded and there is
> no way to get a stack trace from the interpreter. The stack looks like this (see the crash
> in uw_frame_state_for):
> #1  0x0000002a96aae955 in walk_native_stack_interpreter (pregs=0x41492d70, 
>     pthread=0xf550d8, max_depth=-1, frame_array=0x0)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/vmcore/src/util/native_stack.cpp:395
> #2  0x0000002a96aae224 in walk_native_stack_registers (pregs=0x41492d70, 
>     pthread=0xf550d8, max_depth=-1, frame_array=0x0)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/vmcore/src/util/native_stack.cpp:123
> #3  0x0000002a96ac0f6d in st_print_stack (regs=0x41492d70)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/vmcore/src/stack/stack_dump.cpp:393
> #4  0x0000002a96abfcdf in null_java_reference_handler (signum=11, 
>     info=0x41492f70, context=0x41492e40)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/vmcore/src/util/linux/signals_em64t.cpp:383
> #5  <signal handler called>
> #6  0x0000002a9771ecb7 in uw_frame_state_for () from /lib64/libgcc_s.so.1
> #7  0x0000002a9771f0b5 in _Unwind_ForcedUnwind_Phase2 ()
>    from /lib64/libgcc_s.so.1
> #8  0x0000002a9771fb2c in _Unwind_ForcedUnwind () from /lib64/libgcc_s.so.1
> #9  0x0000002a96711aa3 in _Unwind_ForcedUnwind ()
>    from /lib64/tls/libpthread.so.0
> #10 0x0000002a9670f9b0 in __pthread_unwind () from /lib64/tls/libpthread.so.0
> #11 0x0000002a9670a8bb in sigcancel_handler () from /lib64/tls/libpthread.so.0
> #12 <signal handler called>
> #13 0x0000002a9670d89d in pthread_cond_wait@@GLIBC_2.3.2 ()
>    from /lib64/tls/libpthread.so.0
> #14 0x0000002a96243089 in pthread_cond_wait@@GLIBC_2.3.2 ()
>    from /lib64/tls/libc.so.6
> #15 0x0000002a9567351a in os_cond_timedwait (cond=0xf69c18, mutex=0xf69bf0, 
>     ms=0, nano=0)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/thread/src/linux/os_condvar.c:41
> #16 0x0000002a95675a1a in condvar_wait_impl (cond=0xf69c18, mutex=0xf69bf0, 
>     ms=0, nano=0, interruptable=1)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/thread/src/thread_native_condvar.c:55
> #17 0x0000002a95675fe8 in monitor_wait_impl (mon_ptr=0xf69bf0, ms=0, nano=0, 
>     interruptable=1)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/thread/src/thread_native_fat_monitor.c:189
> #18 0x0000002a956788ce in thin_monitor_wait_impl (lockword_ptr=0x2a99491cdc, 
>     ms=0, nano=0, interruptable=1)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/thread/src/thread_native_thin_monitor.c:439
> #19 0x0000002a95678954 in hythread_thin_monitor_wait_interruptable (
>     lockword_ptr=0x2a99491cdc, ms=0, nano=0)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/thread/src/thread_native_thin_monitor.c:491
> #20 0x0000002a96bb7f7c in jthread_monitor_timed_wait (monitor=0x2aa9e01660, 
>     millis=0, nanos=0)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/thread/src/thread_java_monitors.c:336
> #21 0x0000002a96a3f6e0 in Java_java_lang_VMThreadManager_wait (env=0xf55430, 
>     clazz=0x416806e0, monitor=0x2aa9e01660, millis=0, nanos=0)
>     at /nfs/ims/proj/drl/mrt1/users/gregory/em64t/trunk/working_vm/vm/vmcore/src/kernel_classes/native/java_lang_VMThreadManager.cpp:202

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

