ignite-user mailing list archives

From Anil <anilk...@gmail.com>
Subject Re: Node failure
Date Fri, 24 Feb 2017 12:40:37 GMT
Hi Andrey,

I have attached the log.

The compute job ran without issues yesterday, without setLocal(true) on the
scan query and without the ORDER BY on detailsCache in the given code.

I am not sure whether adding these two caused the issue, but as per the
ignite-examples, setLocal(true) is required when the compute task is broadcast.
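If the ORDER BY on the off-heap SQL query turns out to be the trigger, one possible workaround (a sketch only, not verified against this crash) is to drop the ORDER BY from the SQL and sort the fetched rows in plain Java instead. `PersonDetail` below is a hypothetical, simplified stand-in for the real value class:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for the real PersonDetail value class.
record PersonDetail(String detailId, long endDate) {}

public class SortWorkaround {
    // Sort the fetched details by endDate descending on the client side,
    // instead of using "order by enddate desc" in the SQL text.
    static List<PersonDetail> sortByEndDateDesc(List<PersonDetail> details) {
        return details.stream()
                .sorted(Comparator.comparingLong(PersonDetail::endDate).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<PersonDetail> rows = List.of(
                new PersonDetail("a", 100L),
                new PersonDetail("b", 300L),
                new PersonDetail("c", 200L));
        List<PersonDetail> sorted = sortByEndDateDesc(rows);
        System.out.println(sorted.get(0).detailId()); // prints "b"
    }
}
```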

Thanks.


On 24 February 2017 at 18:03, Andrey Gura <agura@apache.org> wrote:

> Hi, Anil
>
> Could you please provide crash dump? In your case it is
> /opt/ignite-manager/api/hs_err_pid18543.log file.
>
> On Fri, Feb 24, 2017 at 9:05 AM, Anil <anilklce@gmail.com> wrote:
> > Hi ,
> >
> > I see the node is down with following error while running compute task
> >
> >
> > # A fatal error has been detected by the Java Runtime Environment:
> > #
> > #  SIGSEGV (0xb) at pc=0x00007facd5cae561, pid=18543, tid=0x00007fab8a9ea700
> > #
> > # JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> > # Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-amd64 compressed oops)
> > # Problematic frame:
> > # J 8676 C2 org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOffheap.getOffheapValue(I)Lorg/h2/value/Value; (290 bytes) @ 0x00007facd5cae561 [0x00007facd5cae180+0x3e1]
> > #
> > # Failed to write core dump. Core dumps have been disabled. To enable core
> > # dumping, try "ulimit -c unlimited" before starting Java again
> > #
> > # An error report file with more information is saved as:
> > # /opt/ignite-manager/api/hs_err_pid18543.log
> > #
> > # If you would like to submit a bug report, please visit:
> > #   http://bugreport.java.com/bugreport/crash.jsp
> > #
> >
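Since the report says core dumps are disabled, enabling them before restarting the node can help diagnosis. A minimal sketch, assuming the node is launched from the same shell via `ignite.sh` (the launcher path is an assumption):

```shell
# Raise the core-dump size limit for this shell and its children.
ulimit -c unlimited

# Confirm the limit took effect, then start the node from this shell.
ulimit -c
# ./bin/ignite.sh config/default-config.xml   # launcher path is an assumption
```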
> >
> > I have 2 caches on a 4-node cluster; each cache is configured with 10 GB
> > of off-heap memory.
> >
> > The ComputeTask performs the following logic and is broadcast to all
> > nodes.
> >
> >     for (Integer part : parts) {
> >         ScanQuery<String, Person> scanQuery = new ScanQuery<String, Person>();
> >         scanQuery.setLocal(true);
> >         scanQuery.setPartition(part);
> >
> >         Iterator<Cache.Entry<String, Person>> iterator =
> >             cache.query(scanQuery).iterator();
> >
> >         while (iterator.hasNext()) {
> >             Cache.Entry<String, Person> row = iterator.next();
> >             String eqId = row.getValue().getEqId();
> >             try {
> >                 QueryCursor<Entry<AffinityKey<String>, PersonDetail>> pdCursor =
> >                     detailsCache.query(new SqlQuery<AffinityKey<String>, PersonDetail>(
> >                         PersonDetail.class,
> >                         "select * from DETAIL_CACHE.PersonDetail where eqId = ?"
> >                             + " order by enddate desc")
> >                         .setLocal(true).setArgs(eqId));
> >
> >                 for (Entry<AffinityKey<String>, PersonDetail> d : pdCursor) {
> >                     // populate person info into person detail
> >                     dataStreamer.addData(new AffinityKey<String>(detaildId, eqId), d);
> >                 }
> >                 pdCursor.close();
> >             } catch (Exception ex) {
> >                 // intentionally swallowed; the exception should at least be logged
> >             }
> >         }
> >     }
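For reference, the `parts` array iterated in the quoted code is typically the set of partitions the local node owns as primary, so that a broadcast job scans only its own data. A hedged sketch of obtaining it (the cache name `PERSON_CACHE` is an assumption, and this requires an Ignite node already running in the JVM, so it is not runnable standalone):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class LocalPartitions {
    public static void main(String[] args) {
        // Assumes an Ignite node has already been started in this JVM.
        Ignite ignite = Ignition.ignite();

        // Primary partitions owned by the local node for the (assumed) cache.
        int[] parts = ignite.affinity("PERSON_CACHE")
                .primaryPartitions(ignite.cluster().localNode());

        // These are the partition ids a broadcast job would then scan locally
        // with ScanQuery.setLocal(true).setPartition(part).
        for (int part : parts)
            System.out.println(part);
    }
}
```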
> >
> >
> > Please let me know if you see any issues with this approach or the
> > configuration.
> >
> > Thanks.
> >
>
