mesos-issues mailing list archives

From "Kevin Klues (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (MESOS-2017) Segfault with "Pure virtual method called" when tests fail
Date Tue, 09 Feb 2016 19:19:18 GMT

    [ https://issues.apache.org/jira/browse/MESOS-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15139458#comment-15139458 ]

Kevin Klues edited comment on MESOS-2017 at 2/9/16 7:19 PM:
------------------------------------------------------------

I propose to mark this ticket as closed for the following reasons.  If you feel this is in
error, please reopen it with specific details and a way to reproduce the bug.  Thanks!

1) Unfortunately, we don't have archives of the Jenkins builds before April 2015, so I don't
have any easy way of seeing the exact last time this error occurred.  However, we do know
it has not cropped up on the Jenkins builds since at least December 2015:
http://www.mail-archive.com/search?q=pure+virtual&l=builds%40mesos.apache.org

2) As suggested above, I reset my Mesos tree to:
{noformat}
commit 5d5b65e5722a21868315d80eb05428b11b4d87d0
Author: Adam B <adam@mesosphere.io>
Date:   Mon Oct 13 21:33:11 2014 -0700

    Enabled whitespace/semicolon rule for cpplint.
{noformat}

and ran the following command, trying to trigger the "RAW: Pure virtual method called" error
for the two tests mentioned in this thread, as well as the test mentioned in the related issue
(MESOS-2007):

{noformat}
GTEST_FILTER="DRFAllocatorTest.DRFAllocatorProcess:ReconciliationTest.TaskStateMismatch:AllocatorTest/0.SlaveReregistersFirst" \
  run-one-until-failure src/mesos-tests --verbose
{noformat}

I ran these tests continuously overnight on two different setups without error:
-- My native Mac OS X setup
-- A Docker container based on the support/docker_build.sh script from newer Mesos versions.
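
For readers without the run-one package installed, run-one-until-failure simply reruns its
command until it exits non-zero. A minimal stand-alone sketch of that loop (TEST_CMD is a
placeholder so the sketch runs on its own; in the reproduction above it would be
src/mesos-tests --verbose with GTEST_FILTER exported):

```shell
# Sketch of what run-one-until-failure does: rerun a command until it
# fails, counting successful runs. TEST_CMD defaults to `false` here so
# the sketch is self-contained and terminates immediately.
TEST_CMD=${TEST_CMD:-false}
runs=0
while $TEST_CMD; do
  runs=$((runs + 1))
done
echo "command failed after $runs successful runs"
```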

I set up the Docker container with the following configuration:
{noformat}
COMPILER=clang, CONFIGURATION=--verbose, ENVIRONMENT=GLOG_v=1, MESOS_VERBOSE=1, OS=ubuntu:14.04,
label_exp=docker||Hadoop/1638/changes
{noformat}
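
Assuming the env-var interface of support/docker_build.sh (the names mirror the Jenkins
matrix labels above; this invocation is a sketch, not taken from the thread), the equivalent
local run would look roughly like:

```shell
# Hypothetical local invocation of support/docker_build.sh with the
# configuration listed above; exact variable names may differ between
# Mesos checkouts.
OS=ubuntu:14.04 \
COMPILER=clang \
CONFIGURATION="--verbose" \
ENVIRONMENT="GLOG_v=1 MESOS_VERBOSE=1" \
./support/docker_build.sh
```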

3) Recently a new ticket with a similar error has cropped up (MESOS-4604), but we have already
diagnosed that problem and it is unrelated to the errors seen in this test (details are in the
ticket itself).



> Segfault with "Pure virtual method called" when tests fail
> ----------------------------------------------------------
>
>                 Key: MESOS-2017
>                 URL: https://issues.apache.org/jira/browse/MESOS-2017
>             Project: Mesos
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.21.0
>            Reporter: Yan Xu
>            Assignee: Kevin Klues
>              Labels: mesosphere, tests
>
> The most recent one:
> {noformat:title=DRFAllocatorTest.DRFAllocatorProcess}
> [ RUN      ] DRFAllocatorTest.DRFAllocatorProcess
> Using temporary directory '/tmp/DRFAllocatorTest_DRFAllocatorProcess_BI905j'
> I1030 05:55:06.934813 24459 leveldb.cpp:176] Opened db in 3.175202ms
> I1030 05:55:06.935925 24459 leveldb.cpp:183] Compacted db in 1.077924ms
> I1030 05:55:06.935976 24459 leveldb.cpp:198] Created db iterator in 16460ns
> I1030 05:55:06.935995 24459 leveldb.cpp:204] Seeked to beginning of db in 2018ns
> I1030 05:55:06.936005 24459 leveldb.cpp:273] Iterated through 0 keys in the db in 335ns
> I1030 05:55:06.936039 24459 replica.cpp:741] Replica recovered with log positions 0 ->
0 with 1 holes and 0 unlearned
> I1030 05:55:06.936705 24480 recover.cpp:437] Starting replica recovery
> I1030 05:55:06.937023 24480 recover.cpp:463] Replica is in EMPTY status
> I1030 05:55:06.938158 24475 replica.cpp:638] Replica in EMPTY status received a broadcasted
recover request
> I1030 05:55:06.938859 24482 recover.cpp:188] Received a recover response from a replica
in EMPTY status
> I1030 05:55:06.939486 24474 recover.cpp:554] Updating replica status to STARTING
> I1030 05:55:06.940249 24489 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb
took 591981ns
> I1030 05:55:06.940274 24489 replica.cpp:320] Persisted replica status to STARTING
> I1030 05:55:06.940752 24481 recover.cpp:463] Replica is in STARTING status
> I1030 05:55:06.940820 24489 master.cpp:312] Master 20141030-055506-3142697795-40429-24459
(pomona.apache.org) started on 67.195.81.187:40429
> I1030 05:55:06.940871 24489 master.cpp:358] Master only allowing authenticated frameworks
to register
> I1030 05:55:06.940891 24489 master.cpp:363] Master only allowing authenticated slaves
to register
> I1030 05:55:06.940908 24489 credentials.hpp:36] Loading credentials for authentication
from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_BI905j/credentials'
> I1030 05:55:06.941215 24489 master.cpp:392] Authorization enabled
> I1030 05:55:06.941751 24475 master.cpp:120] No whitelist given. Advertising offers for
all slaves
> I1030 05:55:06.942227 24474 replica.cpp:638] Replica in STARTING status received a broadcasted
recover request
> I1030 05:55:06.942401 24476 hierarchical_allocator_process.hpp:299] Initializing hierarchical
allocator process with master : master@67.195.81.187:40429
> I1030 05:55:06.942895 24483 recover.cpp:188] Received a recover response from a replica
in STARTING status
> I1030 05:55:06.943035 24474 master.cpp:1242] The newly elected leader is master@67.195.81.187:40429
with id 20141030-055506-3142697795-40429-24459
> I1030 05:55:06.943063 24474 master.cpp:1255] Elected as the leading master!
> I1030 05:55:06.943079 24474 master.cpp:1073] Recovering from registrar
> I1030 05:55:06.943313 24480 registrar.cpp:313] Recovering registrar
> I1030 05:55:06.943455 24475 recover.cpp:554] Updating replica status to VOTING
> I1030 05:55:06.944144 24474 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb
took 536365ns
> I1030 05:55:06.944172 24474 replica.cpp:320] Persisted replica status to VOTING
> I1030 05:55:06.944355 24489 recover.cpp:568] Successfully joined the Paxos group
> I1030 05:55:06.944576 24489 recover.cpp:452] Recover process terminated
> I1030 05:55:06.945155 24486 log.cpp:656] Attempting to start the writer
> I1030 05:55:06.947013 24473 replica.cpp:474] Replica received implicit promise request
with proposal 1
> I1030 05:55:06.947854 24473 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb
took 806463ns
> I1030 05:55:06.947883 24473 replica.cpp:342] Persisted promised to 1
> I1030 05:55:06.948547 24481 coordinator.cpp:230] Coordinator attemping to fill missing
position
> I1030 05:55:06.950269 24479 replica.cpp:375] Replica received explicit promise request
for position 0 with proposal 2
> I1030 05:55:06.950933 24479 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took
603843ns
> I1030 05:55:06.950961 24479 replica.cpp:676] Persisted action at 0
> I1030 05:55:06.952180 24476 replica.cpp:508] Replica received write request for position
0
> I1030 05:55:06.952239 24476 leveldb.cpp:438] Reading position from leveldb took 28437ns
> I1030 05:55:06.952896 24476 leveldb.cpp:343] Persisting action (14 bytes) to leveldb
took 623980ns
> I1030 05:55:06.952926 24476 replica.cpp:676] Persisted action at 0
> I1030 05:55:06.953543 24485 replica.cpp:655] Replica received learned notice for position
0
> I1030 05:55:06.954082 24485 leveldb.cpp:343] Persisting action (16 bytes) to leveldb
took 511807ns
> I1030 05:55:06.954107 24485 replica.cpp:676] Persisted action at 0
> I1030 05:55:06.954128 24485 replica.cpp:661] Replica learned NOP action at position 0
> I1030 05:55:06.954710 24473 log.cpp:672] Writer started with ending position 0
> I1030 05:55:06.956215 24478 leveldb.cpp:438] Reading position from leveldb took 33085ns
> I1030 05:55:06.959481 24475 registrar.cpp:346] Successfully fetched the registry (0B)
in 16.11904ms
> I1030 05:55:06.959616 24475 registrar.cpp:445] Applied 1 operations in 28239ns; attempting
to update the 'registry'
> I1030 05:55:06.962514 24487 log.cpp:680] Attempting to append 139 bytes to the log
> I1030 05:55:06.962646 24474 coordinator.cpp:340] Coordinator attempting to write APPEND
action at position 1
> I1030 05:55:06.964146 24486 replica.cpp:508] Replica received write request for position
1
> I1030 05:55:06.964962 24486 leveldb.cpp:343] Persisting action (158 bytes) to leveldb
took 743389ns
> I1030 05:55:06.964993 24486 replica.cpp:676] Persisted action at 1
> I1030 05:55:06.965895 24473 replica.cpp:655] Replica received learned notice for position
1
> I1030 05:55:06.966531 24473 leveldb.cpp:343] Persisting action (160 bytes) to leveldb
took 607242ns
> I1030 05:55:06.966555 24473 replica.cpp:676] Persisted action at 1
> I1030 05:55:06.966578 24473 replica.cpp:661] Replica learned APPEND action at position
1
> I1030 05:55:06.967706 24481 registrar.cpp:490] Successfully updated the 'registry' in
8.036096ms
> I1030 05:55:06.967895 24481 registrar.cpp:376] Successfully recovered registrar
> I1030 05:55:06.967993 24482 log.cpp:699] Attempting to truncate the log to 1
> I1030 05:55:06.968258 24479 coordinator.cpp:340] Coordinator attempting to write TRUNCATE
action at position 2
> I1030 05:55:06.968268 24475 master.cpp:1100] Recovered 0 slaves from the Registry (101B)
; allowing 10mins for slaves to re-register
> I1030 05:55:06.969156 24476 replica.cpp:508] Replica received write request for position
2
> I1030 05:55:06.969678 24476 leveldb.cpp:343] Persisting action (16 bytes) to leveldb
took 491913ns
> I1030 05:55:06.969703 24476 replica.cpp:676] Persisted action at 2
> I1030 05:55:06.970459 24478 replica.cpp:655] Replica received learned notice for position
2
> I1030 05:55:06.971060 24478 leveldb.cpp:343] Persisting action (18 bytes) to leveldb
took 573076ns
> I1030 05:55:06.971124 24478 leveldb.cpp:401] Deleting ~1 keys from leveldb took 35339ns
> I1030 05:55:06.971145 24478 replica.cpp:676] Persisted action at 2
> I1030 05:55:06.971168 24478 replica.cpp:661] Replica learned TRUNCATE action at position
2
> I1030 05:55:06.980211 24459 containerizer.cpp:100] Using isolation: posix/cpu,posix/mem
> I1030 05:55:06.984153 24473 slave.cpp:169] Slave started on 203)@67.195.81.187:40429
> I1030 05:55:07.055308 24473 credentials.hpp:84] Loading credential for authentication
from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_wULx31/credential'
> I1030 05:55:06.988750 24459 sched.cpp:137] Version: 0.21.0
> I1030 05:55:07.055521 24473 slave.cpp:276] Slave using credential for: test-principal
> I1030 05:55:07.055726 24473 slave.cpp:289] Slave resources: cpus(*):2; mem(*):1024; disk(*):0;
ports(*):[31000-32000]
> I1030 05:55:07.055865 24473 slave.cpp:318] Slave hostname: pomona.apache.org
> I1030 05:55:07.055881 24473 slave.cpp:319] Slave checkpoint: false
> W1030 05:55:07.055889 24473 slave.cpp:321] Disabling checkpointing is deprecated and
the --checkpoint flag will be removed in a future release. Please avoid using this flag
> I1030 05:55:07.056172 24485 sched.cpp:233] New master detected at master@67.195.81.187:40429
> I1030 05:55:07.056222 24485 sched.cpp:283] Authenticating with master master@67.195.81.187:40429
> I1030 05:55:07.056717 24485 state.cpp:33] Recovering state from '/tmp/DRFAllocatorTest_DRFAllocatorProcess_wULx31/meta'
> I1030 05:55:07.056851 24475 authenticatee.hpp:133] Creating new client SASL connection
> I1030 05:55:07.057003 24473 status_update_manager.cpp:197] Recovering status update manager
> I1030 05:55:07.057252 24488 master.cpp:3853] Authenticating scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> I1030 05:55:07.057502 24489 containerizer.cpp:281] Recovering containerizer
> I1030 05:55:07.057524 24475 authenticator.hpp:161] Creating new server SASL connection
> I1030 05:55:07.057688 24475 authenticatee.hpp:224] Received SASL authentication mechanisms:
CRAM-MD5
> I1030 05:55:07.057719 24475 authenticatee.hpp:250] Attempting to authenticate with mechanism
'CRAM-MD5'
> I1030 05:55:07.057919 24481 authenticator.hpp:267] Received SASL authentication start
> I1030 05:55:07.057968 24481 authenticator.hpp:389] Authentication requires more steps
> I1030 05:55:07.058070 24473 authenticatee.hpp:270] Received SASL authentication step
> I1030 05:55:07.058199 24485 authenticator.hpp:295] Received SASL authentication step
> I1030 05:55:07.058223 24485 auxprop.cpp:81] Request to lookup properties for user: 'test-principal'
realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> I1030 05:55:07.058233 24485 auxprop.cpp:153] Looking up auxiliary property '*userPassword'
> I1030 05:55:07.058259 24485 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I1030 05:55:07.058290 24485 auxprop.cpp:81] Request to lookup properties for user: 'test-principal'
realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> I1030 05:55:07.058302 24485 auxprop.cpp:103] Skipping auxiliary property '*userPassword'
since SASL_AUXPROP_AUTHZID == true
> I1030 05:55:07.058307 24485 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5'
since SASL_AUXPROP_AUTHZID == true
> I1030 05:55:07.058320 24485 authenticator.hpp:381] Authentication success
> I1030 05:55:07.058467 24480 master.cpp:3893] Successfully authenticated principal 'test-principal'
at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> I1030 05:55:07.058493 24485 slave.cpp:3456] Finished recovery
> I1030 05:55:07.058593 24478 authenticatee.hpp:310] Authentication success
> I1030 05:55:07.058838 24478 sched.cpp:357] Successfully authenticated with master master@67.195.81.187:40429
> I1030 05:55:07.058861 24478 sched.cpp:476] Sending registration request to master@67.195.81.187:40429
> I1030 05:55:07.058969 24475 slave.cpp:602] New master detected at master@67.195.81.187:40429
> I1030 05:55:07.058969 24487 status_update_manager.cpp:171] Pausing sending status updates
> I1030 05:55:07.059026 24475 slave.cpp:665] Authenticating with master master@67.195.81.187:40429
> I1030 05:55:07.059061 24481 master.cpp:1362] Received registration request for framework
'framework1' at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> I1030 05:55:07.059131 24481 master.cpp:1321] Authorizing framework principal 'test-principal'
to receive offers for role 'role1'
> I1030 05:55:07.059171 24475 slave.cpp:638] Detecting new master
> I1030 05:55:07.059214 24482 authenticatee.hpp:133] Creating new client SASL connection
> I1030 05:55:07.059550 24481 master.cpp:3853] Authenticating slave(203)@67.195.81.187:40429
> I1030 05:55:07.059787 24487 authenticator.hpp:161] Creating new server SASL connection
> I1030 05:55:07.059922 24481 master.cpp:1426] Registering framework 20141030-055506-3142697795-40429-24459-0000
(framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> I1030 05:55:07.059996 24474 authenticatee.hpp:224] Received SASL authentication mechanisms:
CRAM-MD5
> I1030 05:55:07.060034 24474 authenticatee.hpp:250] Attempting to authenticate with mechanism
'CRAM-MD5'
> I1030 05:55:07.060117 24474 authenticator.hpp:267] Received SASL authentication start
> I1030 05:55:07.060165 24474 authenticator.hpp:389] Authentication requires more steps
> I1030 05:55:07.060377 24476 hierarchical_allocator_process.hpp:329] Added framework 20141030-055506-3142697795-40429-24459-0000
> I1030 05:55:07.060394 24488 sched.cpp:407] Framework registered with 20141030-055506-3142697795-40429-24459-0000
> I1030 05:55:07.060403 24476 hierarchical_allocator_process.hpp:697] No resources available
to allocate!
> I1030 05:55:07.060431 24476 hierarchical_allocator_process.hpp:659] Performed allocation
for 0 slaves in 29857ns
> I1030 05:55:07.060443 24488 sched.cpp:421] Scheduler::registered took 19407ns
> I1030 05:55:07.060545 24478 authenticatee.hpp:270] Received SASL authentication step
> I1030 05:55:07.060645 24478 authenticator.hpp:295] Received SASL authentication step
> I1030 05:55:07.060673 24478 auxprop.cpp:81] Request to lookup properties for user: 'test-principal'
realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: false 
> I1030 05:55:07.060685 24478 auxprop.cpp:153] Looking up auxiliary property '*userPassword'
> I1030 05:55:07.060714 24478 auxprop.cpp:153] Looking up auxiliary property '*cmusaslsecretCRAM-MD5'
> I1030 05:55:07.060740 24478 auxprop.cpp:81] Request to lookup properties for user: 'test-principal'
realm: 'pomona.apache.org' server FQDN: 'pomona.apache.org' SASL_AUXPROP_VERIFY_AGAINST_HASH:
false SASL_AUXPROP_OVERRIDE: false SASL_AUXPROP_AUTHZID: true 
> I1030 05:55:07.060760 24478 auxprop.cpp:103] Skipping auxiliary property '*userPassword'
since SASL_AUXPROP_AUTHZID == true
> I1030 05:55:07.060770 24478 auxprop.cpp:103] Skipping auxiliary property '*cmusaslsecretCRAM-MD5'
since SASL_AUXPROP_AUTHZID == true
> I1030 05:55:07.060788 24478 authenticator.hpp:381] Authentication success
> I1030 05:55:07.060920 24474 authenticatee.hpp:310] Authentication success
> I1030 05:55:07.060945 24485 master.cpp:3893] Successfully authenticated principal 'test-principal'
at slave(203)@67.195.81.187:40429
> I1030 05:55:07.061388 24489 slave.cpp:722] Successfully authenticated with master master@67.195.81.187:40429
> I1030 05:55:07.061504 24489 slave.cpp:1050] Will retry registration in 4.778336ms if
necessary
> I1030 05:55:07.061718 24480 master.cpp:3032] Registering slave at slave(203)@67.195.81.187:40429
(pomona.apache.org) with id 20141030-055506-3142697795-40429-24459-S0
> I1030 05:55:07.062119 24489 registrar.cpp:445] Applied 1 operations in 53691ns; attempting
to update the 'registry'
> I1030 05:55:07.065182 24479 log.cpp:680] Attempting to append 316 bytes to the log
> I1030 05:55:07.065337 24487 coordinator.cpp:340] Coordinator attempting to write APPEND
action at position 3
> I1030 05:55:07.066359 24474 replica.cpp:508] Replica received write request for position
3
> I1030 05:55:07.066643 24474 leveldb.cpp:343] Persisting action (335 bytes) to leveldb
took 249579ns
> I1030 05:55:07.066671 24474 replica.cpp:676] Persisted action at 3
> I../../src/tests/allocator_tests.cpp:120: Failure
> Failed to wait 10secs for offers1
> 1030 05:55:07.067101 24477 slave.cpp:1050] Will retry registration in 24.08243ms if necessary
> I1030 05:55:07.067140 24473 master.cpp:3020] Ignoring register slave message from slave(203)@67.195.81.187:40429
(pomona.apache.org) as admission is already in progress
> I1030 05:55:07.067395 24488 replica.cpp:655] Replica received learned notice for position
3
> I1030 05:55:07.943416 24478 hierarchical_allocator_process.hpp:697] No resources available
to allocate!
> I1030 05:55:19.804687 24478 hierarchical_allocator_process.hpp:659] Performed allocation
for 0 slaves in 11.861261123secs
> I1030 05:55:11.942713 24474 master.cpp:120] No whitelist given. Advertising offers for
all slaves
> I1030 05:55:19.805850 24488 leveldb.cpp:343] Persisting action (337 bytes) to leveldb
took 1.067224ms
> I1030 05:55:19.806012 24488 replica.cpp:676] Persisted action at 3
> ../../src/tests/allocator_tests.cpp:115: Failure
> Actual function call count doesn't match EXPECT_CALL(sched1, resourceOffers(_, _))...
>          Expected: to be called once
>            Actual: never called - unsatisfied and active
> I1030 05:55:19.806144 24488 replica.cpp:661] Replica learned APPEND action at position
3
> I1030 05:55:19.806695 24473 master.cpp:768] Framework 20141030-055506-3142697795-40429-24459-0000
(framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 disconnected
> I1030 05:55:19.806726 24473 master.cpp:1731] Disconnecting framework 20141030-055506-3142697795-40429-24459-0000
(framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> I1030 05:55:19.806751 24473 master.cpp:1747] Deactivating framework 20141030-055506-3142697795-40429-24459-0000
(framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> I1030 05:55:19.806967 24473 master.cpp:790] Giving framework 20141030-055506-3142697795-40429-24459-0000
(framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429 0ns to
failover
> ../../src/tests/allocator_tests.cpp:94: Failure
> Actual function call count doesn't match EXPECT_CALL(allocator, slaveAdded(_, _, _))...
>          Expected: to be called once
>            Actual: never called - unsatisfied and active
> F1030 05:55:19.806967 24480 logging.cpp:57] RAW: Pure virtual method called
> I1030 05:55:19.807348 24488 master.cpp:3665] Framework failover timeout, removing framework
20141030-055506-3142697795-40429-24459-0000 (framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> I1030 05:55:19.807370 24488 master.cpp:4201] Removing framework 20141030-055506-3142697795-40429-24459-0000
(framework1) at scheduler-c98e7aac-d03f-464a-aa75-61208600e196@67.195.81.187:40429
> *** Aborted at 1414648519 (unix time) try "date -d @1414648519" if you are using GNU
date ***
> PC: @           0x91bc86 process::PID<>::PID()
> *** SIGSEGV (@0x0) received by PID 24459 (TID 0x2b86c919a700) from PID 0; stack trace:
***
> I1030 05:55:19.808631 24489 registrar.cpp:490] Successfully updated the 'registry' in
12.746377984secs
>     @     0x2b86c55fc340 (unknown)
> I1030 05:55:19.808938 24473 log.cpp:699] Attempting to truncate the log to 3
>     @     0x2b86c3327174  google::LogMessage::Fail()
> I1030 05:55:19.809084 24481 coordinator.cpp:340] Coordinator attempting to write TRUNCATE
action at position 4
>     @           0x91bc86 process::PID<>::PID()
>     @     0x2b86c332c868  google::RawLog__()
> I1030 05:55:19.810191 24479 replica.cpp:508] Replica received write request for position
4
> I1030 05:55:19.810899 24479 leveldb.cpp:343] Persisting action (16 bytes) to leveldb
took 678090ns
> I1030 05:55:19.810919 24479 replica.cpp:676] Persisted action at 4
>     @           0x91bf24 process::Process<>::self()
> I1030 05:55:19.811635 24485 replica.cpp:655] Replica received learned notice for position
4
> I1030 05:55:19.812180 24485 leveldb.cpp:343] Persisting action (18 bytes) to leveldb
took 523927ns
> I1030 05:55:19.812228 24485 leveldb.cpp:401] Deleting ~2 keys from leveldb took 29523ns
> I1030 05:55:19.812242 24485 replica.cpp:676] Persisted action at 4
> I    @     0x2b86c29d2a36  __cxa_pure_virtual
> 1030 05:55:19.812258 24485 replica.cpp:661] Replica learned TRUNCATE action at position
4
>     @          0x1046936  testing::internal::UntypedFunctionMockerBase::UntypedInvokeWith()
> I1030 05:55:19.829655 24474 slave.cpp:1050] Will retry registration in 31.785967ms if
necessary
>     @           0x9c0633  testing::internal::FunctionMockerBase<>::InvokeWith()
>     @           0x9b6152  testing::internal::FunctionMocker<>::Invoke()
>     @           0x9abdeb  mesos::internal::tests::MockAllocatorProcess<>::frameworkDeactivated()
>     @           0x91c78f  _ZZN7process8dispatchIN5mesos8internal6master9allocator16AllocatorProcessERKNS1_11FrameworkIDES6_EEvRKNS_3PIDIT_EEMSA_FvT0_ET1_ENKUlPNS_11ProcessBaseEE_clESJ_
>     @           0x959ad7  _ZNSt17_Function_handlerIFvPN7process11ProcessBaseEEZNS0_8dispatchIN5mesos8internal6master9allocator16AllocatorProcessERKNS5_11FrameworkIDESA_EEvRKNS0_3PIDIT_EEMSE_FvT0_ET1_EUlS2_E_E9_M_invokeERKSt9_Any_dataS2_
>     @     0x2b86c32d174f  std::function<>::operator()()
>     @     0x2b86c32b2a17  process::ProcessBase::visit()
>     @     0x2b86c32bd34c  process::DispatchEvent::visit()
>     @           0x8e0812  process::ProcessBase::serve()
>     @     0x2b86c32aec8c  process::ProcessManager::resume()
> I1030 05:55:22.050081 24478 slave.cpp:1050] Will retry registration in 25.327301ms if
necessary
>     @     0x2b86c32a5351  process::schedule()
>     @     0x2b86c55f4182  start_thread
>     @     0x2b86c5904fbd  (unknown)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
