From: datcm@apache.org
To: commits@lucene.apache.org
Date: Tue, 01 Aug 2017 07:50:13 -0000
Subject: [2/2] lucene-solr:branch_7_0: SOLR-9321: Remove deprecated methods of ClusterState

SOLR-9321: Remove deprecated methods of ClusterState

Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/8f6546e5
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/8f6546e5
Diff:
http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/8f6546e5

Branch: refs/heads/branch_7_0
Commit: 8f6546e54046756fb4c0f7d4ed3e8f38e10ba7f5
Parents: 4d8c25d
Author: Cao Manh Dat
Authored: Tue Aug 1 14:49:57 2017 +0700
Committer: Cao Manh Dat
Committed: Tue Aug 1 14:49:57 2017 +0700

----------------------------------------------------------------------
 solr/CHANGES.txt | 5 +
 .../java/org/apache/solr/cloud/CloudUtil.java | 7 +-
 .../org/apache/solr/cloud/DeleteShardCmd.java | 14 +--
 .../org/apache/solr/cloud/ElectionContext.java | 22 ++--
 .../cloud/OverseerCollectionMessageHandler.java | 7 +-
 .../org/apache/solr/cloud/RecoveryStrategy.java | 4 +-
 .../org/apache/solr/cloud/ZkController.java | 16 +--
 .../apache/solr/handler/CdcrRequestHandler.java | 4 +-
 .../apache/solr/handler/SolrConfigHandler.java | 6 +-
 .../solr/handler/admin/CollectionsHandler.java | 6 +-
 .../handler/component/HttpShardHandler.java | 2 +-
 .../apache/solr/schema/ManagedIndexSchema.java | 6 +-
 .../search/join/ScoreJoinQParserPlugin.java | 2 +-
 .../org/apache/solr/servlet/HttpSolrCall.java | 3 +-
 .../processor/DistributedUpdateProcessor.java | 10 +-
 .../DocExpirationUpdateProcessorFactory.java | 2 +-
 .../src/java/org/apache/solr/util/SolrCLI.java | 6 +-
 .../solr/cloud/BasicDistributedZkTest.java | 7 +-
 .../solr/cloud/ChaosMonkeyShardSplitTest.java | 2 +-
 .../org/apache/solr/cloud/ClusterStateTest.java | 4 +-
 .../solr/cloud/ClusterStateUpdateTest.java | 4 +-
 .../CollectionsAPIAsyncDistributedZkTest.java | 4 +-
 .../org/apache/solr/cloud/ForceLeaderTest.java | 20 ++--
 .../apache/solr/cloud/HttpPartitionTest.java | 10 +-
 .../org/apache/solr/cloud/OverseerTest.java | 40 +++----
 .../solr/cloud/ReplicaPropertiesBase.java | 6 +-
 .../org/apache/solr/cloud/ShardSplitTest.java | 14 +--
 .../cloud/SharedFSAutoReplicaFailoverTest.java | 4 +-
 .../org/apache/solr/cloud/SliceStateTest.java | 2 +-
 .../solr/cloud/TestCloudDeleteByQuery.java | 2 +-
 .../apache/solr/cloud/TestCollectionAPI.java | 6 +-
 .../TestLeaderElectionWithEmptyReplica.java | 2 +-
 .../TestLeaderInitiatedRecoveryThread.java | 6 +-
 .../solr/cloud/TestMiniSolrCloudCluster.java | 7 +-
 .../cloud/TestRandomRequestDistribution.java | 2 +-
 .../solr/cloud/TestReplicaProperties.java | 2 +-
 .../solr/cloud/TestShortCircuitedRequests.java | 2 +-
 .../cloud/TestTolerantUpdateProcessorCloud.java | 2 +-
 .../solr/cloud/UnloadDistributedZkTest.java | 4 +-
 .../apache/solr/cloud/hdfs/StressHdfsTest.java | 7 +-
 .../TestCloudManagedSchemaConcurrent.java | 4 +-
 .../apache/solr/schema/TestCloudSchemaless.java | 2 +-
 .../apache/solr/common/cloud/ClusterState.java | 107 +------------------
 .../solr/common/cloud/ClusterStateUtil.java | 2 +-
 .../apache/solr/common/cloud/ZkStateReader.java | 7 +-
 .../solr/cloud/AbstractDistribZkTestBase.java | 15 ++-
 .../cloud/AbstractFullDistribZkTestBase.java | 26 +++--
 .../java/org/apache/solr/cloud/ChaosMonkey.java | 2 +-
 48 files changed, 197 insertions(+), 249 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/CHANGES.txt
----------------------------------------------------------------------
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index 4280135..8ade3cf 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -149,6 +149,9 @@ Upgrading from Solr 6.x
   replaced with 'sourceNode' and 'targetNode' instead. The old names will
   continue to work for back-compatibility but they will be removed in 8.0.
   See SOLR-11068 for more details.
+* All deprecated methods of ClusterState (except getZkClusterStateVersion())
+  have been removed. Use DocCollection methods instead.
+
 New Features
 ----------------------
 * SOLR-9857, SOLR-9858: Collect aggregated metrics from nodes and shard leaders
   in overseer. (ab)
@@ -492,6 +495,8 @@ Other Changes
 * SOLR-10033: Provide a clear exception when attempting to facet with
   facet.mincount=0 over points fields.
   (Steve Rowe)

+* SOLR-9321: Remove deprecated methods of ClusterState. (Jason Gerlowski, ishan, Cao Manh Dat)
+
 ==================  6.7.0 ==================

 Consult the LUCENE_CHANGES.txt file for additional, low level, changes in this release.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java b/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java
index c05072d..81de5cd 100644
--- a/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java
+++ b/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java
@@ -27,6 +27,7 @@ import java.util.Map;
 import org.apache.commons.io.FileUtils;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrException.ErrorCode;
+import org.apache.solr.common.cloud.DocCollection;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.Slice;
 import org.apache.solr.common.cloud.SolrZkClient;
@@ -57,9 +58,9 @@ public class CloudUtil {
     log.debug("checkSharedFSFailoverReplaced running for coreNodeName={} baseUrl={}", thisCnn, thisBaseUrl);

     // if we see our core node name on a different base url, unload
-    Map slicesMap = zkController.getClusterState().getSlicesMap(desc.getCloudDescriptor().getCollectionName());
-
-    if (slicesMap != null) {
+    final DocCollection docCollection = zkController.getClusterState().getCollectionOrNull(desc.getCloudDescriptor().getCollectionName());
+    if (docCollection != null && docCollection.getSlicesMap() != null) {
+      Map slicesMap = docCollection.getSlicesMap();
       for (Slice slice : slicesMap.values()) {
         for (Replica replica : slice.getReplicas()) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/cloud/DeleteShardCmd.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/cloud/DeleteShardCmd.java b/solr/core/src/java/org/apache/solr/cloud/DeleteShardCmd.java
index 71d9c46..43bd6bd 100644
--- a/solr/core/src/java/org/apache/solr/cloud/DeleteShardCmd.java
+++ b/solr/core/src/java/org/apache/solr/cloud/DeleteShardCmd.java
@@ -65,16 +65,10 @@ public class DeleteShardCmd implements Cmd {
     String sliceId = message.getStr(ZkStateReader.SHARD_ID_PROP);

     log.info("Delete shard invoked");
-    Slice slice = clusterState.getSlice(collectionName, sliceId);
-
-    if (slice == null) {
-      if (clusterState.hasCollection(collectionName)) {
-        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
-            "No shard with name " + sliceId + " exists for collection " + collectionName);
-      } else {
-        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "No collection with the specified name exists: " + collectionName);
-      }
-    }
+    Slice slice = clusterState.getCollection(collectionName).getSlice(sliceId);
+    if (slice == null) throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
+        "No shard with name " + sliceId + " exists for collection " + collectionName);
+
     // For now, only allow for deletions of Inactive slices or custom hashes (range==null).
// TODO: Add check for range gaps on Slice deletion final Slice.State state = slice.getState(); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java b/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java index 588262d..491ae00 100644 --- a/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java +++ b/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java @@ -31,6 +31,7 @@ import org.apache.solr.cloud.overseer.OverseerAction; import org.apache.solr.common.SolrException; import org.apache.solr.common.SolrException.ErrorCode; import org.apache.solr.common.cloud.ClusterState; +import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.cloud.SolrZkClient; @@ -498,8 +499,7 @@ final class ShardLeaderElectionContext extends ShardLeaderElectionContextBase { ZkStateReader zkStateReader = zkController.getZkStateReader(); zkStateReader.forceUpdateCollection(collection); ClusterState clusterState = zkStateReader.getClusterState(); - Replica rep = (clusterState == null) ? 
null - : clusterState.getReplica(collection, leaderProps.getStr(ZkStateReader.CORE_NODE_NAME_PROP)); + Replica rep = getReplica(clusterState, collection, leaderProps.getStr(ZkStateReader.CORE_NODE_NAME_PROP)); if (rep != null && rep.getState() != Replica.State.ACTIVE && rep.getState() != Replica.State.RECOVERING) { log.debug("We have become the leader after core registration but are not in an ACTIVE state - publishing ACTIVE"); @@ -507,6 +507,13 @@ final class ShardLeaderElectionContext extends ShardLeaderElectionContextBase { } } } + + private Replica getReplica(ClusterState clusterState, String collectionName, String replicaName) { + if (clusterState == null) return null; + final DocCollection docCollection = clusterState.getCollectionOrNull(collectionName); + if (docCollection == null) return null; + return docCollection.getReplica(replicaName); + } public void checkLIR(String coreName, boolean allReplicasInLine) throws InterruptedException, KeeperException, IOException { @@ -604,7 +611,8 @@ final class ShardLeaderElectionContext extends ShardLeaderElectionContextBase { long timeoutAt = System.nanoTime() + TimeUnit.NANOSECONDS.convert(timeoutms, TimeUnit.MILLISECONDS); final String shardsElectZkPath = electionPath + LeaderElector.ELECTION_NODE; - Slice slices = zkController.getClusterState().getSlice(collection, shardId); + DocCollection docCollection = zkController.getClusterState().getCollectionOrNull(collection); + Slice slices = (docCollection == null) ? null : docCollection.getSlice(shardId); int cnt = 0; while (!isClosed && !cc.isShutDown()) { // wait for everyone to be up @@ -649,7 +657,8 @@ final class ShardLeaderElectionContext extends ShardLeaderElectionContextBase { } Thread.sleep(500); - slices = zkController.getClusterState().getSlice(collection, shardId); + docCollection = zkController.getClusterState().getCollectionOrNull(collection); + slices = (docCollection == null) ? 
null : docCollection.getSlice(shardId); cnt++; } return false; @@ -658,9 +667,10 @@ final class ShardLeaderElectionContext extends ShardLeaderElectionContextBase { // returns true if all replicas are found to be up, false if not private boolean areAllReplicasParticipating() throws InterruptedException { final String shardsElectZkPath = electionPath + LeaderElector.ELECTION_NODE; - Slice slices = zkController.getClusterState().getSlice(collection, shardId); + final DocCollection docCollection = zkController.getClusterState().getCollectionOrNull(collection); - if (slices != null) { + if (docCollection != null && docCollection.getSlice(shardId) != null) { + final Slice slices = docCollection.getSlice(shardId); int found = 0; try { found = zkClient.getChildren(shardsElectZkPath, null, true).size(); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java b/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java index 2c55f3c..095578f 100644 --- a/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java +++ b/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java @@ -522,10 +522,9 @@ public class OverseerCollectionMessageHandler implements OverseerMessageHandler String waitForCoreNodeName(String collectionName, String msgNodeName, String msgCore) { int retryCount = 320; while (retryCount-- > 0) { - Map slicesMap = zkStateReader.getClusterState() - .getSlicesMap(collectionName); - if (slicesMap != null) { - + final DocCollection docCollection = zkStateReader.getClusterState().getCollectionOrNull(collectionName); + if (docCollection != null && docCollection.getSlicesMap() != null) { + Map slicesMap = docCollection.getSlicesMap(); for (Slice slice : 
slicesMap.values()) { for (Replica replica : slice.getReplicas()) { // TODO: for really large clusters, we could 'index' on this http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java b/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java index 563cccf..8a6b99b 100644 --- a/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java +++ b/solr/core/src/java/org/apache/solr/cloud/RecoveryStrategy.java @@ -545,8 +545,8 @@ public class RecoveryStrategy implements Runnable, Closeable { zkController.publish(core.getCoreDescriptor(), Replica.State.RECOVERING); - final Slice slice = zkStateReader.getClusterState().getSlice(cloudDesc.getCollectionName(), - cloudDesc.getShardId()); + final Slice slice = zkStateReader.getClusterState().getCollection(cloudDesc.getCollectionName()) + .getSlice(cloudDesc.getShardId()); try { prevSendPreRecoveryHttpUriRequest.abort(); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/cloud/ZkController.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/cloud/ZkController.java b/solr/core/src/java/org/apache/solr/cloud/ZkController.java index dee833f..9a12a5b 100644 --- a/solr/core/src/java/org/apache/solr/cloud/ZkController.java +++ b/solr/core/src/java/org/apache/solr/cloud/ZkController.java @@ -938,7 +938,8 @@ public class ZkController { try { // If we're a preferred leader, insert ourselves at the head of the queue boolean joinAtHead = false; - Replica replica = zkStateReader.getClusterState().getReplica(collection, coreZkNodeName); + final DocCollection docCollection = zkStateReader.getClusterState().getCollectionOrNull(collection); + Replica replica = (docCollection == null) ? 
null : docCollection.getReplica(coreZkNodeName); if (replica != null) { joinAtHead = replica.getBool(SliceMutator.PREFERRED_LEADER_PROP, false); } @@ -990,7 +991,7 @@ public class ZkController { // we will call register again after zk expiration and on reload if (!afterExpiration && !core.isReloaded() && ulog != null && !isTlogReplicaAndNotLeader) { // disable recovery in case shard is in construction state (for shard splits) - Slice slice = getClusterState().getSlice(collection, shardId); + Slice slice = getClusterState().getCollection(collection).getSlice(shardId); if (slice.getState() != Slice.State.CONSTRUCTION || !isLeader) { Future recoveryFuture = core.getUpdateHandler().getUpdateLog().recoverFromLog(); if (recoveryFuture != null) { @@ -1347,7 +1348,8 @@ public class ZkController { assert false : "No collection was specified [" + collection + "]"; return; } - Replica replica = zkStateReader.getClusterState().getReplica(collection, coreNodeName); + final DocCollection docCollection = zkStateReader.getClusterState().getCollectionOrNull(collection); + Replica replica = (docCollection == null) ? 
null : docCollection.getReplica(coreNodeName); if (replica == null || replica.getType() != Type.PULL) { ElectionContext context = electionContexts.remove(new ContextKey(collection, coreNodeName)); @@ -1401,10 +1403,10 @@ public class ZkController { int retryCount = 320; log.debug("look for our core node name"); while (retryCount-- > 0) { - Map slicesMap = zkStateReader.getClusterState() - .getSlicesMap(descriptor.getCloudDescriptor().getCollectionName()); - if (slicesMap != null) { - + final DocCollection docCollection = zkStateReader.getClusterState() + .getCollectionOrNull(descriptor.getCloudDescriptor().getCollectionName()); + if (docCollection != null && docCollection.getSlicesMap() != null) { + final Map slicesMap = docCollection.getSlicesMap(); for (Slice slice : slicesMap.values()) { for (Replica replica : slice.getReplicas()) { // TODO: for really large clusters, we could 'index' on this http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java b/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java index de86164..38d3866 100644 --- a/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java +++ b/solr/core/src/java/org/apache/solr/handler/CdcrRequestHandler.java @@ -42,6 +42,7 @@ import org.apache.solr.client.solrj.request.UpdateRequest; import org.apache.solr.cloud.ZkController; import org.apache.solr.common.SolrException; import org.apache.solr.common.cloud.ClusterState; +import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.cloud.ZkCoreNodeProps; import org.apache.solr.common.cloud.ZkNodeProps; @@ -397,7 +398,8 @@ public class CdcrRequestHandler extends RequestHandlerBase implements SolrCoreAw log.warn("Error when updating cluster state", e); 
} ClusterState cstate = zkController.getClusterState(); - Collection shards = cstate.getActiveSlices(collection); + DocCollection docCollection = cstate.getCollectionOrNull(collection); + Collection shards = docCollection == null? null : docCollection.getActiveSlices(); ExecutorService parallelExecutor = ExecutorUtil.newMDCAwareCachedThreadPool(new DefaultSolrThreadFactory("parallelCdcrExecutor")); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java b/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java index 92a773a..8345b3c 100644 --- a/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java +++ b/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java @@ -47,6 +47,7 @@ import org.apache.solr.cloud.ZkController; import org.apache.solr.cloud.ZkSolrResourceLoader; import org.apache.solr.common.SolrException; import org.apache.solr.common.cloud.ClusterState; +import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.params.CommonParams; @@ -789,8 +790,9 @@ public class SolrConfigHandler extends RequestHandlerBase implements SolrCoreAwa List activeReplicaCoreUrls = new ArrayList<>(); ClusterState clusterState = zkController.getZkStateReader().getClusterState(); Set liveNodes = clusterState.getLiveNodes(); - Collection activeSlices = clusterState.getActiveSlices(collection); - if (activeSlices != null && activeSlices.size() > 0) { + final DocCollection docCollection = clusterState.getCollectionOrNull(collection); + if (docCollection != null && docCollection.getActiveSlices() != null && docCollection.getActiveSlices().size() > 0) { + final Collection activeSlices = docCollection.getActiveSlices(); for 
(Slice next : activeSlices) { Map replicasMap = next.getReplicasMap(); if (replicasMap != null) { http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java index 3afad2f..e2880a8 100644 --- a/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java +++ b/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java @@ -1036,8 +1036,10 @@ public class CollectionsHandler extends RequestHandlerBase implements Permission for (int i = 0; i < numRetries; i++) { ClusterState clusterState = zkStateReader.getClusterState(); - Collection shards = clusterState.getSlices(collectionName); - if (shards != null) { + final DocCollection docCollection = clusterState.getCollectionOrNull(collectionName); + + if (docCollection != null && docCollection.getSlices() != null) { + Collection shards = docCollection.getSlices(); replicaNotAlive = null; for (Slice shard : shards) { Collection replicas; http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java b/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java index 1c98f58..33b1642 100644 --- a/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java +++ b/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java @@ -387,7 +387,7 @@ public class HttpShardHandler extends ShardHandler { } else { if (clusterState == null) { clusterState = zkController.getClusterState(); - slices = 
clusterState.getSlicesMap(cloudDescriptor.getCollectionName()); + slices = clusterState.getCollection(cloudDescriptor.getCollectionName()).getSlicesMap(); } String sliceName = rb.slices[i]; http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java b/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java index 078edfd..c141e26 100644 --- a/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java +++ b/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java @@ -53,6 +53,7 @@ import org.apache.solr.cloud.ZkSolrResourceLoader; import org.apache.solr.common.SolrException; import org.apache.solr.common.SolrException.ErrorCode; import org.apache.solr.common.cloud.ClusterState; +import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.cloud.SolrZkClient; @@ -287,8 +288,9 @@ public final class ManagedIndexSchema extends IndexSchema { ZkStateReader zkStateReader = zkController.getZkStateReader(); ClusterState clusterState = zkStateReader.getClusterState(); Set liveNodes = clusterState.getLiveNodes(); - Collection activeSlices = clusterState.getActiveSlices(collection); - if (activeSlices != null && activeSlices.size() > 0) { + final DocCollection docCollection = clusterState.getCollectionOrNull(collection); + if (docCollection != null && docCollection.getActiveSlices() != null && docCollection.getActiveSlices().size() > 0) { + final Collection activeSlices = docCollection.getActiveSlices(); for (Slice next : activeSlices) { Map replicasMap = next.getReplicasMap(); if (replicasMap != null) { 
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java b/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java index a49195c..6715fd8 100644 --- a/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java +++ b/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java @@ -310,7 +310,7 @@ public class ScoreJoinQParserPlugin extends QParserPlugin { String fromReplica = null; String nodeName = zkController.getNodeName(); - for (Slice slice : zkController.getClusterState().getActiveSlices(fromIndex)) { + for (Slice slice : zkController.getClusterState().getCollection(fromIndex).getActiveSlices()) { if (fromReplica != null) throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "SolrCloud join: multiple shards not yet supported " + fromIndex); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java b/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java index a548c05..0a6e62c 100644 --- a/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java +++ b/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java @@ -898,7 +898,8 @@ public class HttpSolrCall { private String getRemotCoreUrl(String collectionName, String origCorename) { ClusterState clusterState = cores.getZkController().getClusterState(); - Collection slices = clusterState.getActiveSlices(collectionName); + final DocCollection docCollection = clusterState.getCollectionOrNull(collectionName); + Collection slices = (docCollection != null) ? 
docCollection.getActiveSlices() : null; boolean byCoreName = false; if (slices == null) { http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java ---------------------------------------------------------------------- diff --git a/solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java b/solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java index 5269ecb..45f6ea2 100644 --- a/solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java +++ b/solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java @@ -551,8 +551,9 @@ public class DistributedUpdateProcessor extends UpdateRequestProcessor { if (id == null) { for (Entry entry : routingRules.entrySet()) { String targetCollectionName = entry.getValue().getTargetCollectionName(); - Collection activeSlices = cstate.getActiveSlices(targetCollectionName); - if (activeSlices != null && !activeSlices.isEmpty()) { + final DocCollection docCollection = cstate.getCollectionOrNull(targetCollectionName); + if (docCollection != null && docCollection.getActiveSlices() != null && !docCollection.getActiveSlices().isEmpty()) { + final Collection activeSlices = docCollection.getActiveSlices(); Slice any = activeSlices.iterator().next(); if (nodes == null) nodes = new ArrayList<>(); nodes.add(new StdNode(new ZkCoreNodeProps(any.getLeader()))); @@ -1973,11 +1974,12 @@ public class DistributedUpdateProcessor extends UpdateRequestProcessor { private List getCollectionUrls(SolrQueryRequest req, String collection, EnumSet types) { ClusterState clusterState = req.getCore() .getCoreContainer().getZkController().getClusterState(); - Map slices = clusterState.getSlicesMap(collection); - if (slices == null) { + final DocCollection docCollection = clusterState.getCollectionOrNull(collection); + if (docCollection == null || docCollection.getSlicesMap() == 
null) {
       throw new ZooKeeperException(ErrorCode.BAD_REQUEST, "Could not find collection in zk: " + clusterState);
     }
+    Map slices = docCollection.getSlicesMap();
     final List urls = new ArrayList<>(slices.size());
     for (Map.Entry sliceEntry : slices.entrySet()) {
       Slice replicas = slices.get(sliceEntry.getKey());

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/update/processor/DocExpirationUpdateProcessorFactory.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/update/processor/DocExpirationUpdateProcessorFactory.java b/solr/core/src/java/org/apache/solr/update/processor/DocExpirationUpdateProcessorFactory.java
index 9c2d08d..4cbea31 100644
--- a/solr/core/src/java/org/apache/solr/update/processor/DocExpirationUpdateProcessorFactory.java
+++ b/solr/core/src/java/org/apache/solr/update/processor/DocExpirationUpdateProcessorFactory.java
@@ -469,7 +469,7 @@ public final class DocExpirationUpdateProcessorFactory
     CloudDescriptor desc = core.getCoreDescriptor().getCloudDescriptor();
     String col = desc.getCollectionName();
-    List slices = new ArrayList(zk.getClusterState().getActiveSlices(col));
+    List slices = new ArrayList(zk.getClusterState().getCollection(col).getActiveSlices());
     Collections.sort(slices, COMPARE_SLICES_BY_NAME);
     if (slices.isEmpty()) {
       log.error("Collection {} has no active Slices?", col);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/java/org/apache/solr/util/SolrCLI.java
----------------------------------------------------------------------
diff --git a/solr/core/src/java/org/apache/solr/util/SolrCLI.java b/solr/core/src/java/org/apache/solr/util/SolrCLI.java
index 657b402..da53aff 100644
--- a/solr/core/src/java/org/apache/solr/util/SolrCLI.java
+++ b/solr/core/src/java/org/apache/solr/util/SolrCLI.java
@@ -106,6 +106,7 @@ import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
 import org.apache.solr.client.solrj.response.QueryResponse;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.cloud.ClusterState;
+import org.apache.solr.common.cloud.DocCollection;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.Slice;
 import org.apache.solr.common.cloud.SolrZkClient;
@@ -1170,10 +1171,11 @@ public class SolrCLI {
     ClusterState clusterState = zkStateReader.getClusterState();
     Set liveNodes = clusterState.getLiveNodes();
-    Collection slices = clusterState.getSlices(collection);
-    if (slices == null)
+    final DocCollection docCollection = clusterState.getCollectionOrNull(collection);
+    if (docCollection == null || docCollection.getSlices() == null)
       throw new IllegalArgumentException("Collection "+collection+" not found!");
+    Collection slices = docCollection.getSlices();
     // Test http code using a HEAD request first, fail fast if authentication failure
     String urlForColl = zkStateReader.getLeaderUrl(collection, slices.stream().findFirst().get().getName(), 1000);
     attemptHttpHead(urlForColl, cloudSolrClient.getHttpClient());

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java b/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java
index c095c25..66b7866 100644
--- a/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java
@@ -649,7 +649,7 @@ public class BasicDistributedZkTest extends AbstractFullDistribZkTestBase {
   protected ZkCoreNodeProps getLeaderUrlFromZk(String collection, String slice) {
     ClusterState clusterState = getCommonCloudSolrClient().getZkStateReader().getClusterState();
-    ZkNodeProps leader = clusterState.getLeader(collection, slice);
+    ZkNodeProps leader = clusterState.getCollection(collection).getLeader(slice);
     if (leader == null) {
       throw new RuntimeException("Could not find leader:" + collection + " " + slice);
     }
@@ -850,10 +850,11 @@ public class BasicDistributedZkTest extends AbstractFullDistribZkTestBase {
     // we added a role of none on these creates - check for it
     ZkStateReader zkStateReader = getCommonCloudSolrClient().getZkStateReader();
     zkStateReader.forceUpdateCollection(oneInstanceCollection2);
-    Map slices = zkStateReader.getClusterState().getSlicesMap(oneInstanceCollection2);
+    Map slices = zkStateReader.getClusterState().getCollection(oneInstanceCollection2).getSlicesMap();
     assertNotNull(slices);
-    ZkCoreNodeProps props = new ZkCoreNodeProps(getCommonCloudSolrClient().getZkStateReader().getClusterState().getLeader(oneInstanceCollection2, "shard1"));
+    ZkCoreNodeProps props = new ZkCoreNodeProps(getCommonCloudSolrClient().getZkStateReader().getClusterState()
+        .getCollection(oneInstanceCollection2).getLeader("shard1"));
     // now test that unloading a core gets us a new leader
     try (HttpSolrClient unloadClient = getHttpSolrClient(jettys.get(0).getBaseUrl().toString(), 15000, 60000)) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java b/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java
index 7e840da..1a01386 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java
@@ -60,7 +60,7 @@ public class ChaosMonkeyShardSplitTest extends ShardSplitTest {
     ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
     final DocRouter router = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getRouter();
-    Slice shard1 = clusterState.getSlice(AbstractDistribZkTestBase.DEFAULT_COLLECTION, SHARD1);
+    Slice shard1 = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getSlice(SHARD1);
     DocRouter.Range shard1Range = shard1.getRange() != null ? shard1.getRange() : router.fullRange();
     final List ranges = router.partitionRange(2, shard1Range);
     final int[] docCounts = new int[ranges.size()];

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/ClusterStateTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ClusterStateTest.java b/solr/core/src/test/org/apache/solr/cloud/ClusterStateTest.java
index b38d459..58e0a60 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ClusterStateTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ClusterStateTest.java
@@ -62,8 +62,8 @@ public class ClusterStateTest extends SolrTestCaseJ4 {
     assertEquals("Provided liveNodes not used properly", 2, loadedClusterState
         .getLiveNodes().size());
     assertEquals("No collections found", 2, loadedClusterState.getCollectionsMap().size());
-    assertEquals("Properties not copied properly", replica.getStr("prop1"), loadedClusterState.getSlice("collection1", "shard1").getReplicasMap().get("node1").getStr("prop1"));
-    assertEquals("Properties not copied properly", replica.getStr("prop2"), loadedClusterState.getSlice("collection1", "shard1").getReplicasMap().get("node1").getStr("prop2"));
+    assertEquals("Properties not copied properly", replica.getStr("prop1"), loadedClusterState.getCollection("collection1").getSlice("shard1").getReplicasMap().get("node1").getStr("prop1"));
+    assertEquals("Properties not copied properly", replica.getStr("prop2"), loadedClusterState.getCollection("collection1").getSlice("shard1").getReplicasMap().get("node1").getStr("prop2"));

     loadedClusterState = ClusterState.load(-1, new byte[0], liveNodes);
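Every hunk in this commit applies the same mechanical rewrite: the per-collection convenience methods removed from ClusterState by SOLR-9321 (getSlice, getSlicesMap, getActiveSlices, getLeader, getReplica) become a two-step lookup through DocCollection. A minimal, self-contained sketch of the before/after shape — note these are hypothetical stub classes standing in for the real Solr types, not the Solr API itself:

```java
import java.util.Map;

// Hypothetical stubs mirroring only the shape of the SolrCloud API touched by this commit.
class Slice {
    final String name;
    Slice(String name) { this.name = name; }
}

class DocCollection {
    private final Map<String, Slice> slices;
    DocCollection(Map<String, Slice> slices) { this.slices = slices; }
    Slice getSlice(String shard) { return slices.get(shard); }
    Map<String, Slice> getSlicesMap() { return slices; }
}

class ClusterState {
    private final Map<String, DocCollection> collections;
    ClusterState(Map<String, DocCollection> collections) { this.collections = collections; }

    // Old style removed by SOLR-9321: the collection name was passed to ClusterState directly.
    @Deprecated
    Slice getSlice(String collection, String shard) {
        DocCollection dc = collections.get(collection);
        return dc == null ? null : dc.getSlice(shard);
    }

    // Surviving style: resolve the DocCollection first, then query it.
    // (Exception type here is a stand-in; the real method's behavior may differ.)
    DocCollection getCollection(String collection) {
        DocCollection dc = collections.get(collection);
        if (dc == null) throw new IllegalStateException("Could not find collection: " + collection);
        return dc;
    }
}

public class MigrationSketch {
    public static void main(String[] args) {
        ClusterState cs = new ClusterState(
            Map.of("collection1", new DocCollection(Map.of("shard1", new Slice("shard1")))));
        // Before: cs.getSlice("collection1", "shard1")
        // After:
        Slice s = cs.getCollection("collection1").getSlice("shard1");
        System.out.println(s.name);
    }
}
```

One behavioral difference is visible throughout the diff: the removed helpers tolerated an unknown collection, whereas getCollection does not, which is why several call sites switch to getCollectionOrNull and guard the result.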
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/ClusterStateUpdateTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ClusterStateUpdateTest.java b/solr/core/src/test/org/apache/solr/cloud/ClusterStateUpdateTest.java
index 0c331ae..2e1c01f 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ClusterStateUpdateTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ClusterStateUpdateTest.java
@@ -24,6 +24,7 @@ import java.util.Set;
 import org.apache.lucene.util.LuceneTestCase.Slow;
 import org.apache.solr.client.solrj.request.CollectionAdminRequest;
 import org.apache.solr.common.cloud.ClusterState;
+import org.apache.solr.common.cloud.DocCollection;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.Slice;
 import org.apache.solr.common.cloud.ZkStateReader;
@@ -75,7 +76,8 @@ public class ClusterStateUpdateTest extends SolrCloudTestCase {
     Map slices = null;
     for (int i = 75; i > 0; i--) {
       clusterState2 = zkController2.getClusterState();
-      slices = clusterState2.getSlicesMap("testcore");
+      DocCollection docCollection = clusterState2.getCollectionOrNull("testcore");
+      slices = docCollection == null ? null : docCollection.getSlicesMap();
       if (slices != null && slices.containsKey("shard1")
           && slices.get("shard1").getReplicasMap().size() > 0) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/CollectionsAPIAsyncDistributedZkTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/CollectionsAPIAsyncDistributedZkTest.java b/solr/core/src/test/org/apache/solr/cloud/CollectionsAPIAsyncDistributedZkTest.java
index 1474b5c..c3dc44b 100644
--- a/solr/core/src/test/org/apache/solr/cloud/CollectionsAPIAsyncDistributedZkTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/CollectionsAPIAsyncDistributedZkTest.java
@@ -125,7 +125,7 @@ public class CollectionsAPIAsyncDistributedZkTest extends SolrCloudTestCase {
     assertSame("AddReplica did not complete", RequestStatusState.COMPLETED, state);
     //cloudClient watch might take a couple of seconds to reflect it
-    Slice shard1 = client.getZkStateReader().getClusterState().getSlice(collection, "shard1");
+    Slice shard1 = client.getZkStateReader().getClusterState().getCollection(collection).getSlice("shard1");
     int count = 0;
     while (shard1.getReplicas().size() != 2) {
       if (count++ > 1000) {
@@ -163,7 +163,7 @@ public class CollectionsAPIAsyncDistributedZkTest extends SolrCloudTestCase {
       }
     }
-    shard1 = client.getZkStateReader().getClusterState().getSlice(collection, "shard1");
+    shard1 = client.getZkStateReader().getClusterState().getCollection(collection).getSlice("shard1");
     String replicaName = shard1.getReplicas().iterator().next().getName();
     state = CollectionAdminRequest.deleteReplica(collection, "shard1", replicaName)
         .processAndWait(client, MAX_TIMEOUT_SECONDS);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java b/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
index b60609f..749abdf 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
@@ -102,7 +102,7 @@ public class ForceLeaderTest extends HttpPartitionTest {
         "; clusterState: " + printClusterStateInfo(), 0, numActiveReplicas);
     int numReplicasOnLiveNodes = 0;
-    for (Replica rep : clusterState.getSlice(testCollectionName, SHARD1).getReplicas()) {
+    for (Replica rep : clusterState.getCollection(testCollectionName).getSlice(SHARD1).getReplicas()) {
       if (clusterState.getLiveNodes().contains(rep.getNodeName())) {
         numReplicasOnLiveNodes++;
       }
@@ -110,8 +110,8 @@
     assertEquals(2, numReplicasOnLiveNodes);
     log.info("Before forcing leader: " + printClusterStateInfo());
     // Assert there is no leader yet
-    assertNull("Expected no leader right now. State: " + clusterState.getSlice(testCollectionName, SHARD1),
-        clusterState.getSlice(testCollectionName, SHARD1).getLeader());
+    assertNull("Expected no leader right now. State: " + clusterState.getCollection(testCollectionName).getSlice(SHARD1),
+        clusterState.getCollection(testCollectionName).getSlice(SHARD1).getLeader());
     assertSendDocFails(3);
@@ -122,9 +122,9 @@
     cloudClient.getZkStateReader().forceUpdateCollection(testCollectionName);
     clusterState = cloudClient.getZkStateReader().getClusterState();
-    log.info("After forcing leader: " + clusterState.getSlice(testCollectionName, SHARD1));
+    log.info("After forcing leader: " + clusterState.getCollection(testCollectionName).getSlice(SHARD1));
     // we have a leader
-    Replica newLeader = clusterState.getSlice(testCollectionName, SHARD1).getLeader();
+    Replica newLeader = clusterState.getCollectionOrNull(testCollectionName).getSlice(SHARD1).getLeader();
     assertNotNull(newLeader);
     // leader is active
     assertEquals(State.ACTIVE, newLeader.getState());
@@ -216,7 +216,7 @@
     boolean transition = false;
     for (int counter = 10; counter > 0; counter--) {
       clusterState = zkStateReader.getClusterState();
-      Replica newLeader = clusterState.getSlice(collection, slice).getLeader();
+      Replica newLeader = clusterState.getCollection(collection).getSlice(slice).getLeader();
       if (newLeader == null) {
         transition = true;
         break;
@@ -250,7 +250,7 @@
     Replica.State replicaState = null;
     for (int counter = 10; counter > 0; counter--) {
       ClusterState clusterState = zkStateReader.getClusterState();
-      replicaState = clusterState.getSlice(collection, slice).getReplica(replica.getName()).getState();
+      replicaState = clusterState.getCollection(collection).getSlice(slice).getReplica(replica.getName()).getState();
       if (replicaState == state) {
         transition = true;
         break;
@@ -349,7 +349,7 @@
       for (State lirState : lirStates)
         if (Replica.State.DOWN.equals(lirState) == false)
           allDown = false;
-      if (allDown &&
clusterState.getSlice(collectionName, shard).getLeader() == null) {
+      if (allDown && clusterState.getCollection(collectionName).getSlice(shard).getLeader() == null) {
         break;
       }
       log.warn("Attempt " + i + ", waiting on for 1 sec to settle down in the steady state. State: " +
@@ -381,7 +381,7 @@
     waitForRecoveriesToFinish(collection, cloudClient.getZkStateReader(), true);
     cloudClient.getZkStateReader().forceUpdateCollection(collection);
     ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
-    log.info("After bringing back leader: " + clusterState.getSlice(collection, SHARD1));
+    log.info("After bringing back leader: " + clusterState.getCollection(collection).getSlice(SHARD1));
     int numActiveReplicas = getNumberOfActiveReplicas(clusterState, collection, SHARD1);
     assertEquals(1+notLeaders.size(), numActiveReplicas);
     log.info("Sending doc "+docid+"...");
@@ -423,7 +423,7 @@
   protected int getNumberOfActiveReplicas(ClusterState clusterState, String collection, String sliceId) {
     int numActiveReplicas = 0;
     // Assert all replicas are active
-    for (Replica rep : clusterState.getSlice(collection, sliceId).getReplicas()) {
+    for (Replica rep : clusterState.getCollection(collection).getSlice(sliceId).getReplicas()) {
       if (rep.getState().equals(State.ACTIVE)) {
         numActiveReplicas++;
       }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java b/solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java
index aeaa7e9..0a56e76 100644
--- a/solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/HttpPartitionTest.java
@@ -42,6 +42,7 @@
 import org.apache.solr.client.solrj.request.UpdateRequest;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.SolrInputDocument;
 import org.apache.solr.common.cloud.ClusterState;
+import org.apache.solr.common.cloud.DocCollection;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.cloud.Slice;
 import org.apache.solr.common.cloud.SolrZkClient;
@@ -233,7 +234,7 @@ public class HttpPartitionTest extends AbstractFullDistribZkTestBase {
     ZkStateReader zkr = cloudClient.getZkStateReader();
     zkr.forceUpdateCollection(testCollectionName);; // force the state to be fresh
     ClusterState cs = zkr.getClusterState();
-    Collection slices = cs.getActiveSlices(testCollectionName);
+    Collection slices = cs.getCollection(testCollectionName).getActiveSlices();
     Slice slice = slices.iterator().next();
     Replica partitionedReplica = slice.getReplica(notLeaders.get(0).getName());
     assertEquals("The partitioned replica did not get marked down",
@@ -522,7 +523,7 @@
     ZkStateReader zkr = cloudClient.getZkStateReader();
     ClusterState cs = zkr.getClusterState();
     assertNotNull(cs);
-    for (Slice shard : cs.getActiveSlices(testCollectionName)) {
+    for (Slice shard : cs.getCollection(testCollectionName).getActiveSlices()) {
       if (shard.getName().equals(shardId)) {
         for (Replica replica : shard.getReplicas()) {
           final Replica.State state = replica.getState();
@@ -629,14 +630,15 @@
     ZkStateReader zkr = cloudClient.getZkStateReader();
     zkr.forceUpdateCollection(testCollectionName);
     ClusterState cs = zkr.getClusterState();
-    Collection slices = cs.getActiveSlices(testCollectionName);
     boolean allReplicasUp = false;
     long waitMs = 0L;
     long maxWaitMs = maxWaitSecs * 1000L;
     while (waitMs < maxWaitMs && !allReplicasUp) {
       cs = cloudClient.getZkStateReader().getClusterState();
       assertNotNull(cs);
-      Slice shard = cs.getSlice(testCollectionName, shardId);
+      final DocCollection docCollection = cs.getCollectionOrNull(testCollectionName);
+      assertNotNull(docCollection);
+      Slice shard = docCollection.getSlice(shardId);
       assertNotNull("No Slice for "+shardId, shard);
       allReplicasUp = true; // assume true

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java b/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
index f6abb54..1fbf98c 100644
--- a/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java
@@ -205,7 +205,9 @@ public class OverseerTest extends SolrTestCaseJ4 {
     }
     private String getShardId(String collection, String coreNodeName) {
-      Map slices = zkStateReader.getClusterState().getSlicesMap(collection);
+      DocCollection dc = zkStateReader.getClusterState().getCollectionOrNull(collection);
+      if (dc == null) return null;
+      Map slices = dc.getSlicesMap();
       if (slices != null) {
         for (Slice slice : slices.values()) {
           for (Replica replica : slice.getReplicas()) {
@@ -291,10 +293,10 @@
       for (int i = 0; i < numShards; i++) {
         assertNotNull("shard got no id?", zkController.publishState(COLLECTION, "core" + (i+1), "node" + (i+1), "shard"+((i%3)+1), Replica.State.ACTIVE, 3));
       }
-      final Map rmap = reader.getClusterState().getSlice(COLLECTION, "shard1").getReplicasMap();
+      final Map rmap = reader.getClusterState().getCollection(COLLECTION).getSlice("shard1").getReplicasMap();
       assertEquals(rmap.toString(), 2, rmap.size());
-      assertEquals(rmap.toString(), 2, reader.getClusterState().getSlice(COLLECTION, "shard2").getReplicasMap().size());
-      assertEquals(rmap.toString(), 2, reader.getClusterState().getSlice(COLLECTION, "shard3").getReplicasMap().size());
+      assertEquals(rmap.toString(), 2, reader.getClusterState().getCollection(COLLECTION).getSlice("shard2").getReplicasMap().size());
+      assertEquals(rmap.toString(), 2, reader.getClusterState().getCollection(COLLECTION).getSlice("shard3").getReplicasMap().size());

       //make sure leaders are in cloud state
       assertNotNull(reader.getLeaderUrl(COLLECTION, "shard1", 15000));
@@ -343,9 +345,9 @@
             "node" + (i+1), "shard"+((i%3)+1) , Replica.State.ACTIVE, 3));
       }
-      assertEquals(1, reader.getClusterState().getSlice(COLLECTION, "shard1").getReplicasMap().size());
-      assertEquals(1, reader.getClusterState().getSlice(COLLECTION, "shard2").getReplicasMap().size());
-      assertEquals(1, reader.getClusterState().getSlice(COLLECTION, "shard3").getReplicasMap().size());
+      assertEquals(1, reader.getClusterState().getCollection(COLLECTION).getSlice("shard1").getReplicasMap().size());
+      assertEquals(1, reader.getClusterState().getCollection(COLLECTION).getSlice("shard2").getReplicasMap().size());
+      assertEquals(1, reader.getClusterState().getCollection(COLLECTION).getSlice("shard3").getReplicasMap().size());

       //make sure leaders are in cloud state
       assertNotNull(reader.getLeaderUrl(COLLECTION, "shard1", 15000));
@@ -364,9 +366,9 @@
             "core" + (i + 1), "node" + (i + 1),"shard"+((i%3)+1), Replica.State.ACTIVE, 3));
       }
-      assertEquals(1, reader.getClusterState().getSlice("collection2", "shard1").getReplicasMap().size());
-      assertEquals(1, reader.getClusterState().getSlice("collection2", "shard2").getReplicasMap().size());
-      assertEquals(1, reader.getClusterState().getSlice("collection2", "shard3").getReplicasMap().size());
+      assertEquals(1, reader.getClusterState().getCollection("collection2").getSlice("shard1").getReplicasMap().size());
+      assertEquals(1, reader.getClusterState().getCollection("collection2").getSlice("shard2").getReplicasMap().size());
+      assertEquals(1,
reader.getClusterState().getCollection("collection2").getSlice("shard3").getReplicasMap().size());

       //make sure leaders are in cloud state
       assertNotNull(reader.getLeaderUrl("collection2", "shard1", 15000));
@@ -474,7 +476,7 @@ public class OverseerTest extends SolrTestCaseJ4 {
   private void verifyShardLeader(ZkStateReader reader, String collection, String shard, String expectedCore) throws InterruptedException, KeeperException {
     int maxIterations = 200;
     while(maxIterations-->0) {
-      ZkNodeProps props = reader.getClusterState().getLeader(collection, shard);
+      ZkNodeProps props = reader.getClusterState().getCollection(collection).getLeader(shard);
       if(props!=null) {
         if(expectedCore.equals(props.getStr(ZkStateReader.CORE_NAME_PROP))) {
           return;
@@ -482,8 +484,9 @@
       }
       Thread.sleep(200);
     }
-
-    assertEquals("Unexpected shard leader coll:" + collection + " shard:" + shard, expectedCore, (reader.getClusterState().getLeader(collection, shard)!=null)?reader.getClusterState().getLeader(collection, shard).getStr(ZkStateReader.CORE_NAME_PROP):null);
+    DocCollection docCollection = reader.getClusterState().getCollection(collection);
+    assertEquals("Unexpected shard leader coll:" + collection + " shard:" + shard, expectedCore,
+        (docCollection.getLeader(shard)!=null)?docCollection.getLeader(shard).getStr(ZkStateReader.CORE_NAME_PROP):null);
   }

   @Test
@@ -553,7 +556,7 @@
         assertEquals("Live nodes count does not match", 1, reader
             .getClusterState().getLiveNodes().size());
         assertEquals(shard+" replica count does not match", 1, reader.getClusterState()
-            .getSlice(COLLECTION, shard).getReplicasMap().size());
+            .getCollection(COLLECTION).getSlice(shard).getReplicasMap().size());
         version = getClusterStateVersion(zkClient);
         mockController.publishState(COLLECTION, core, core_node, "shard1", null, numShards);
         while (version == getClusterStateVersion(zkClient));
@@ -1004,12 +1007,13 @@ public class OverseerTest extends SolrTestCaseJ4 {
       queue.offer(Utils.toJSON(m));

       for(int i=0;i<100;i++) {
-        Slice s = reader.getClusterState().getSlice(COLLECTION, "shard1");
+        DocCollection dc = reader.getClusterState().getCollectionOrNull(COLLECTION);
+        Slice s = dc == null? null : dc.getSlice("shard1");
         if(s!=null && s.getReplicasMap().size()==3) break;
         Thread.sleep(100);
       }
-      assertNotNull(reader.getClusterState().getSlice(COLLECTION, "shard1"));
-      assertEquals(3, reader.getClusterState().getSlice(COLLECTION, "shard1").getReplicasMap().size());
+      assertNotNull(reader.getClusterState().getCollection(COLLECTION).getSlice("shard1"));
+      assertEquals(3, reader.getClusterState().getCollection(COLLECTION).getSlice("shard1").getReplicasMap().size());
     } finally {
       close(overseerClient);
       close(zkClient);
@@ -1278,7 +1282,7 @@
     {
       int iterationsLeft = 100;
       while (iterationsLeft-- > 0) {
-        final Slice slice = zkStateReader.getClusterState().getSlice(COLLECTION, "shard"+ss);
+        final Slice slice = zkStateReader.getClusterState().getCollection(COLLECTION).getSlice("shard"+ss);
         if (null == slice || null == slice.getReplicasMap().get("core_node"+N)) {
           break;
         }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/ReplicaPropertiesBase.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ReplicaPropertiesBase.java b/solr/core/src/test/org/apache/solr/cloud/ReplicaPropertiesBase.java
index fe83a84..0cb3f8f 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ReplicaPropertiesBase.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ReplicaPropertiesBase.java
@@ -57,7 +57,8 @@ public abstract class ReplicaPropertiesBase extends AbstractFullDistribZkTestBas
     Replica replica = null;
     for (int idx = 0; idx < 300; ++idx) {
       clusterState = client.getZkStateReader().getClusterState();
-      replica = clusterState.getReplica(collectionName, replicaName);
+      final DocCollection docCollection = clusterState.getCollectionOrNull(collectionName);
+      replica = (docCollection == null) ? null : docCollection.getReplica(replicaName);
       if (replica == null) {
         fail("Could not find collection/replica pair! " + collectionName + "/" + replicaName);
       }
@@ -82,7 +83,8 @@
     for (int idx = 0; idx < 300; ++idx) { // Keep trying while Overseer writes the ZK state for up to 30 seconds.
       clusterState = client.getZkStateReader().getClusterState();
-      replica = clusterState.getReplica(collectionName, replicaName);
+      final DocCollection docCollection = clusterState.getCollectionOrNull(collectionName);
+      replica = (docCollection == null) ? null : docCollection.getReplica(replicaName);
       if (replica == null) {
         fail("Could not find collection/replica pair! " + collectionName + "/" + replicaName);
       }

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/ShardSplitTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/ShardSplitTest.java b/solr/core/src/test/org/apache/solr/cloud/ShardSplitTest.java
index 21f5b3c..1593e78 100644
--- a/solr/core/src/test/org/apache/solr/cloud/ShardSplitTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/ShardSplitTest.java
@@ -537,7 +537,7 @@
   private void incompleteOrOverlappingCustomRangeTest() throws Exception {
     ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
     final DocRouter router = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getRouter();
-    Slice shard1 = clusterState.getSlice(AbstractDistribZkTestBase.DEFAULT_COLLECTION, SHARD1);
+    Slice shard1 = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getSlice(SHARD1);
     DocRouter.Range shard1Range = shard1.getRange() != null ? shard1.getRange() : router.fullRange();

     List subRanges = new ArrayList<>();
@@ -581,7 +581,7 @@
   private void splitByUniqueKeyTest() throws Exception {
     ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
     final DocRouter router = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getRouter();
-    Slice shard1 = clusterState.getSlice(AbstractDistribZkTestBase.DEFAULT_COLLECTION, SHARD1);
+    Slice shard1 = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getSlice(SHARD1);
     DocRouter.Range shard1Range = shard1.getRange() != null ? shard1.getRange() : router.fullRange();
     List subRanges = new ArrayList<>();
     if (usually()) {
@@ -696,7 +696,7 @@
     ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
     final DocRouter router = clusterState.getCollection(collectionName).getRouter();
-    Slice shard1 = clusterState.getSlice(collectionName, SHARD1);
+    Slice shard1 = clusterState.getCollection(collectionName).getSlice(SHARD1);
     DocRouter.Range shard1Range = shard1.getRange() != null ? shard1.getRange() : router.fullRange();
     final List ranges = router.partitionRange(2, shard1Range);
     final int[] docCounts = new int[ranges.size()];
@@ -772,7 +772,7 @@
     ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
     final DocRouter router = clusterState.getCollection(collectionName).getRouter();
-    Slice shard1 = clusterState.getSlice(collectionName, SHARD1);
+    Slice shard1 = clusterState.getCollection(collectionName).getSlice(SHARD1);
     DocRouter.Range shard1Range = shard1.getRange() != null ? shard1.getRange() : router.fullRange();
     final List ranges = ((CompositeIdRouter) router).partitionRangeByKey(splitKey, shard1Range);
     final int[] docCounts = new int[ranges.size()];
@@ -835,8 +835,8 @@
     for (i = 0; i < 10; i++) {
       ZkStateReader zkStateReader = cloudClient.getZkStateReader();
       clusterState = zkStateReader.getClusterState();
-      slice1_0 = clusterState.getSlice(AbstractDistribZkTestBase.DEFAULT_COLLECTION, "shard1_0");
-      slice1_1 = clusterState.getSlice(AbstractDistribZkTestBase.DEFAULT_COLLECTION, "shard1_1");
+      slice1_0 = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getSlice("shard1_0");
+      slice1_1 = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getSlice("shard1_1");
       if (slice1_0.getState() == Slice.State.ACTIVE && slice1_1.getState() == Slice.State.ACTIVE) {
         break;
       }
@@ -887,7 +887,7 @@
     query.set("distrib", false);

     ClusterState clusterState = cloudClient.getZkStateReader().getClusterState();
-    Slice slice = clusterState.getSlice(AbstractDistribZkTestBase.DEFAULT_COLLECTION, shard);
+    Slice slice = clusterState.getCollection(AbstractDistribZkTestBase.DEFAULT_COLLECTION).getSlice(shard);
     long[] numFound = new long[slice.getReplicasMap().size()];
     int c = 0;
     for (Replica replica : slice.getReplicas()) {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java b/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java
index 9c839f6..e2174de 100644
--- a/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java
@@ -388,7 +388,7 @@ public class SharedFSAutoReplicaFailoverTest extends AbstractFullDistribZkTestBa
   private void assertSingleReplicationAndShardSize(String collection, int numSlices) {
     Collection slices;
-    slices = cloudClient.getZkStateReader().getClusterState().getActiveSlices(collection);
+    slices = cloudClient.getZkStateReader().getClusterState().getCollection(collection).getActiveSlices();
     assertEquals(numSlices, slices.size());
     for (Slice slice : slices) {
       assertEquals(1, slice.getReplicas().size());
@@ -397,7 +397,7 @@
   private void assertSliceAndReplicaCount(String collection) {
     Collection slices;
-    slices = cloudClient.getZkStateReader().getClusterState().getActiveSlices(collection);
+    slices = cloudClient.getZkStateReader().getClusterState().getCollection(collection).getActiveSlices();
     assertEquals(2, slices.size());
     for (Slice slice : slices) {
       assertEquals(2, slice.getReplicas().size());

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/SliceStateTest.java
----------------------------------------------------------------------
diff --git a/solr/core/src/test/org/apache/solr/cloud/SliceStateTest.java b/solr/core/src/test/org/apache/solr/cloud/SliceStateTest.java
index 6a633fa..f235643 100644
--- a/solr/core/src/test/org/apache/solr/cloud/SliceStateTest.java
+++ b/solr/core/src/test/org/apache/solr/cloud/SliceStateTest.java
@@ -53,6 +53,6 @@ public class SliceStateTest extends SolrTestCaseJ4 {
     byte[] bytes = Utils.toJSON(clusterState);
     ClusterState loadedClusterState = ClusterState.load(-1, bytes, liveNodes);
-    assertSame("Default state not set to active", Slice.State.ACTIVE, loadedClusterState.getSlice("collection1", "shard1").getState());
+    assertSame("Default state not set to active", Slice.State.ACTIVE, loadedClusterState.getCollection("collection1").getSlice("shard1").getState());
   }
 }
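Where a call site previously relied on the old helpers returning null for an unknown collection (ReplicaPropertiesBase, OverseerTest, ClusterStateUpdateTest above), the migration uses getCollectionOrNull and guards the result before dereferencing. A self-contained sketch of that null-safe shape, again with hypothetical stand-in types rather than the real Solr classes:

```java
import java.util.Map;

// Hypothetical stand-ins for DocCollection/ClusterState, reduced to the lookup being migrated.
class Coll {
    private final Map<String, String> replicas; // replica name -> state
    Coll(Map<String, String> replicas) { this.replicas = replicas; }
    String getReplica(String name) { return replicas.get(name); }
}

class State {
    private final Map<String, Coll> colls;
    State(Map<String, Coll> colls) { this.colls = colls; }
    // Analogue of ClusterState.getCollectionOrNull: an absent collection yields null, not an exception.
    Coll getCollectionOrNull(String name) { return colls.get(name); }
}

public class NullSafeLookup {
    // Mirrors the rewritten call sites: fetch the collection first,
    // and only dereference it when it is actually present.
    static String replicaOrNull(State cs, String collection, String replica) {
        Coll dc = cs.getCollectionOrNull(collection);
        return (dc == null) ? null : dc.getReplica(replica);
    }

    public static void main(String[] args) {
        State cs = new State(Map.of("c1", new Coll(Map.of("core_node1", "active"))));
        System.out.println(replicaOrNull(cs, "c1", "core_node1"));
        System.out.println(replicaOrNull(cs, "missing", "core_node1")); // no NPE for unknown collections
    }
}
```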
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestCloudDeleteByQuery.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestCloudDeleteByQuery.java b/solr/core/src/test/org/apache/solr/cloud/TestCloudDeleteByQuery.java index ec2dac6..d85b139 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestCloudDeleteByQuery.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestCloudDeleteByQuery.java @@ -126,7 +126,7 @@ public class TestCloudDeleteByQuery extends SolrCloudTestCase { urlMap.put(nodeKey, jettyURL.toString()); } ClusterState clusterState = zkStateReader.getClusterState(); - for (Slice slice : clusterState.getSlices(COLLECTION_NAME)) { + for (Slice slice : clusterState.getCollection(COLLECTION_NAME).getSlices()) { String shardName = slice.getName(); Replica leader = slice.getLeader(); assertNotNull("slice has null leader: " + slice.toString(), leader); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestCollectionAPI.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestCollectionAPI.java b/solr/core/src/test/org/apache/solr/cloud/TestCollectionAPI.java index abe4ed3..037a3e6 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestCollectionAPI.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestCollectionAPI.java @@ -37,6 +37,7 @@ import org.apache.solr.client.solrj.response.QueryResponse; import org.apache.solr.common.SolrException; import org.apache.solr.common.SolrInputDocument; import org.apache.solr.common.cloud.ClusterState; +import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.params.CollectionParams; @@ -768,10 +769,11 @@ public class TestCollectionAPI 
extends ReplicaPropertiesBase { client.getZkStateReader().forceUpdateCollection(collectionName); ClusterState clusterState = client.getZkStateReader().getClusterState(); - Replica replica = clusterState.getReplica(collectionName, replicaName); - if (replica == null) { + final DocCollection docCollection = clusterState.getCollectionOrNull(collectionName); + if (docCollection == null || docCollection.getReplica(replicaName) == null) { fail("Could not find collection/replica pair! " + collectionName + "/" + replicaName); } + Replica replica = docCollection.getReplica(replicaName); Map<String, String> propMap = new HashMap<>(); for (String prop : props) { propMap.put(prop, replica.getStr(prop)); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestLeaderElectionWithEmptyReplica.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestLeaderElectionWithEmptyReplica.java b/solr/core/src/test/org/apache/solr/cloud/TestLeaderElectionWithEmptyReplica.java index 84b3901..5221e81 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestLeaderElectionWithEmptyReplica.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestLeaderElectionWithEmptyReplica.java @@ -98,7 +98,7 @@ public class TestLeaderElectionWithEmptyReplica extends SolrCloudTestCase { (n, c) -> DocCollection.isFullyActive(n, c, 1, 2)); // now query each replica and check for consistency - assertConsistentReplicas(solrClient, solrClient.getZkStateReader().getClusterState().getSlice(COLLECTION_NAME, "shard1")); + assertConsistentReplicas(solrClient, solrClient.getZkStateReader().getClusterState().getCollection(COLLECTION_NAME).getSlice("shard1")); // sanity check that documents still exist QueryResponse response = solrClient.query(new SolrQuery("*:*")); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestLeaderInitiatedRecoveryThread.java
---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestLeaderInitiatedRecoveryThread.java b/solr/core/src/test/org/apache/solr/cloud/TestLeaderInitiatedRecoveryThread.java index 11858f8..b6efa53 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestLeaderInitiatedRecoveryThread.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestLeaderInitiatedRecoveryThread.java @@ -57,7 +57,7 @@ public class TestLeaderInitiatedRecoveryThread extends AbstractFullDistribZkTest } } assertNotNull(notLeader); - Replica replica = cloudClient.getZkStateReader().getClusterState().getReplica(DEFAULT_COLLECTION, notLeader.coreNodeName); + Replica replica = cloudClient.getZkStateReader().getClusterState().getCollection(DEFAULT_COLLECTION).getReplica(notLeader.coreNodeName); ZkCoreNodeProps replicaCoreNodeProps = new ZkCoreNodeProps(replica); MockCoreDescriptor cd = new MockCoreDescriptor() { @@ -175,7 +175,7 @@ public class TestLeaderInitiatedRecoveryThread extends AbstractFullDistribZkTest timeOut = new TimeOut(30, TimeUnit.SECONDS); while (!timeOut.hasTimedOut()) { - Replica r = cloudClient.getZkStateReader().getClusterState().getReplica(DEFAULT_COLLECTION, replica.getName()); + Replica r = cloudClient.getZkStateReader().getClusterState().getCollection(DEFAULT_COLLECTION).getReplica(replica.getName()); if (r.getState() == Replica.State.DOWN) { break; } @@ -183,7 +183,7 @@ public class TestLeaderInitiatedRecoveryThread extends AbstractFullDistribZkTest } assertNull(zkController.getLeaderInitiatedRecoveryState(DEFAULT_COLLECTION, SHARD1, replica.getName())); - assertEquals(Replica.State.DOWN, cloudClient.getZkStateReader().getClusterState().getReplica(DEFAULT_COLLECTION, replica.getName()).getState()); + assertEquals(Replica.State.DOWN, cloudClient.getZkStateReader().getClusterState().getCollection(DEFAULT_COLLECTION).getReplica(replica.getName()).getState()); /* 6. 
Test that non-leader cannot set LIR nodes http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java b/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java index dd4d13c..d4a131b 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java @@ -186,7 +186,7 @@ public class TestMiniSolrCloudCluster extends LuceneTestCase { String key = jetty.getBaseUrl().toString().substring((jetty.getBaseUrl().getProtocol() + "://").length()); jettyMap.put(key, jetty); } - Collection<Slice> slices = clusterState.getSlices(collectionName); + Collection<Slice> slices = clusterState.getCollection(collectionName).getSlices(); // track the servers not hosting replicas for (Slice slice : slices) { jettyMap.remove(slice.getLeader().getNodeName().replace("_solr", "/solr")); @@ -256,7 +256,8 @@ public class TestMiniSolrCloudCluster extends LuceneTestCase { // check the collection's corelessness { int coreCount = 0; - for (Map.Entry<String, Slice> entry : zkStateReader.getClusterState().getSlicesMap(collectionName).entrySet()) { + for (Map.Entry<String, Slice> entry : zkStateReader.getClusterState() + .getCollection(collectionName).getSlicesMap().entrySet()) { coreCount += entry.getValue().getReplicasMap().entrySet().size(); } assertEquals(0, coreCount); @@ -318,7 +319,7 @@ public class TestMiniSolrCloudCluster extends LuceneTestCase { final HashSet followerIndices = new HashSet(); { final HashMap shardLeaderMap = new HashMap(); - for (final Slice slice : clusterState.getSlices(collectionName)) { + for (final Slice slice : clusterState.getCollection(collectionName).getSlices()) { for (final Replica replica : slice.getReplicas()) { shardLeaderMap.put(replica.getNodeName().replace("_solr",
"/solr"), Boolean.FALSE); } http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java b/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java index d3fc679..415f80f 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java @@ -159,7 +159,7 @@ public class TestRandomRequestDistribution extends AbstractFullDistribZkTestBase Replica leader = null; Replica notLeader = null; - Collection<Replica> replicas = cloudClient.getZkStateReader().getClusterState().getSlice("football", "shard1").getReplicas(); + Collection<Replica> replicas = cloudClient.getZkStateReader().getClusterState().getCollection("football").getSlice("shard1").getReplicas(); for (Replica replica : replicas) { if (replica.getStr(ZkStateReader.LEADER_PROP) != null) { leader = replica; http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestReplicaProperties.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestReplicaProperties.java b/solr/core/src/test/org/apache/solr/cloud/TestReplicaProperties.java index fc2a7e2..9a9af97 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestReplicaProperties.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestReplicaProperties.java @@ -193,7 +193,7 @@ public class TestReplicaProperties extends ReplicaPropertiesBase { for (int idx = 0; idx < 300; ++idx) { // Keep trying while Overseer writes the ZK state for up to 30 seconds.
lastFailMsg = ""; ClusterState clusterState = client.getZkStateReader().getClusterState(); - for (Slice slice : clusterState.getSlices(collectionName)) { + for (Slice slice : clusterState.getCollection(collectionName).getSlices()) { Boolean foundLeader = false; Boolean foundPreferred = false; for (Replica replica : slice.getReplicas()) { http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestShortCircuitedRequests.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestShortCircuitedRequests.java b/solr/core/src/test/org/apache/solr/cloud/TestShortCircuitedRequests.java index 4233e9d..08e0eb5 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestShortCircuitedRequests.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestShortCircuitedRequests.java @@ -45,7 +45,7 @@ public class TestShortCircuitedRequests extends AbstractFullDistribZkTestBase { doQuery("a!doc1", "q", "*:*", ShardParams._ROUTE_, "a!"); // can go to any random node // query shard3 directly with _route_=a! 
so that we trigger the short circuited request path - Replica shard3 = cloudClient.getZkStateReader().getClusterState().getLeader(DEFAULT_COLLECTION, "shard3"); + Replica shard3 = cloudClient.getZkStateReader().getClusterState().getCollection(DEFAULT_COLLECTION).getLeader("shard3"); String nodeName = shard3.getNodeName(); SolrClient shard3Client = getClient(nodeName); QueryResponse response = shard3Client.query(new SolrQuery("*:*").add(ShardParams._ROUTE_, "a!").add(ShardParams.SHARDS_INFO, "true")); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/TestTolerantUpdateProcessorCloud.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/TestTolerantUpdateProcessorCloud.java b/solr/core/src/test/org/apache/solr/cloud/TestTolerantUpdateProcessorCloud.java index 8641720..8a84724 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestTolerantUpdateProcessorCloud.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestTolerantUpdateProcessorCloud.java @@ -134,7 +134,7 @@ public class TestTolerantUpdateProcessorCloud extends SolrCloudTestCase { } zkStateReader.forceUpdateCollection(COLLECTION_NAME); ClusterState clusterState = zkStateReader.getClusterState(); - for (Slice slice : clusterState.getSlices(COLLECTION_NAME)) { + for (Slice slice : clusterState.getCollection(COLLECTION_NAME).getSlices()) { String shardName = slice.getName(); Replica leader = slice.getLeader(); assertNotNull("slice has null leader: " + slice.toString(), leader); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java b/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java index 45f8b81..28a0a4e 100644 --- 
a/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java +++ b/solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java @@ -25,6 +25,7 @@ import org.apache.solr.client.solrj.impl.HttpSolrClient; import org.apache.solr.client.solrj.request.CollectionAdminRequest; import org.apache.solr.client.solrj.request.CoreAdminRequest.Unload; import org.apache.solr.common.SolrInputDocument; +import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.cloud.ZkCoreNodeProps; @@ -75,7 +76,8 @@ public class UnloadDistributedZkTest extends BasicDistributedZkTest { final TimeOut timeout = new TimeOut(45, TimeUnit.SECONDS); Boolean isPresent = null; // null meaning "don't know" while (null == isPresent || shouldBePresent != isPresent.booleanValue()) { - final Collection<Slice> slices = getCommonCloudSolrClient().getZkStateReader().getClusterState().getSlices(collectionName); + final DocCollection docCollection = getCommonCloudSolrClient().getZkStateReader().getClusterState().getCollectionOrNull(collectionName); + final Collection<Slice> slices = (docCollection != null) ?
docCollection.getSlices() : null; if (timeout.hasTimedOut()) { printLayout(); fail("checkCoreNamePresenceAndSliceCount failed:" http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/cloud/hdfs/StressHdfsTest.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/cloud/hdfs/StressHdfsTest.java b/solr/core/src/test/org/apache/solr/cloud/hdfs/StressHdfsTest.java index 62e3f5f..329de79 100644 --- a/solr/core/src/test/org/apache/solr/cloud/hdfs/StressHdfsTest.java +++ b/solr/core/src/test/org/apache/solr/cloud/hdfs/StressHdfsTest.java @@ -33,6 +33,7 @@ import org.apache.solr.client.solrj.request.QueryRequest; import org.apache.solr.cloud.BasicDistributedZkTest; import org.apache.solr.cloud.ChaosMonkey; import org.apache.solr.common.cloud.ClusterState; +import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.params.CollectionParams.CollectionAction; @@ -160,8 +161,10 @@ public class StressHdfsTest extends BasicDistributedZkTest { // data dirs should be in zk, SOLR-8913 ClusterState clusterState = cloudClient.getZkStateReader().getClusterState(); - Slice slice = clusterState.getSlice(DELETE_DATA_DIR_COLLECTION, "shard1"); - assertNotNull(clusterState.getSlices(DELETE_DATA_DIR_COLLECTION).toString(), slice); + final DocCollection docCollection = clusterState.getCollectionOrNull(DELETE_DATA_DIR_COLLECTION); + assertNotNull("Could not find :"+DELETE_DATA_DIR_COLLECTION, docCollection); + Slice slice = docCollection.getSlice("shard1"); + assertNotNull(docCollection.getSlices().toString(), slice); Collection<Replica> replicas = slice.getReplicas(); for (Replica replica : replicas) { assertNotNull(replica.getProperties().toString(), replica.get("dataDir"));
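The UnloadDistributedZkTest and StressHdfsTest hunks above use the null-tolerant variant: getCollectionOrNull(...) returns null for an absent collection, which suits retry loops and explicit asserts, whereas getCollection(...) throws. A minimal sketch of that idiom, again with illustrative stand-in classes rather than Solr's real ones:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Map;

// Stand-in for Slice (illustrative only).
class Slice {
    final String name;
    Slice(String name) { this.name = name; }
}

// Stand-in for DocCollection (illustrative only).
class DocCollection {
    private final Map<String, Slice> slices;
    DocCollection(Map<String, Slice> slices) { this.slices = slices; }
    Collection<Slice> getSlices() { return slices.values(); }
}

// Stand-in for ClusterState's null-tolerant accessor.
class ClusterState {
    private final Map<String, DocCollection> collections;
    ClusterState(Map<String, DocCollection> collections) { this.collections = collections; }
    DocCollection getCollectionOrNull(String name) { return collections.get(name); }
}

public class NullSafeSlicesSketch {
    // Mirrors the updated test code: null slices when the collection is absent,
    // so a polling loop can keep waiting instead of catching an exception.
    static Collection<Slice> slicesOrNull(ClusterState state, String collectionName) {
        DocCollection docCollection = state.getCollectionOrNull(collectionName);
        return (docCollection != null) ? docCollection.getSlices() : null;
    }

    public static void main(String[] args) {
        ClusterState state = new ClusterState(Collections.singletonMap(
            "collection1",
            new DocCollection(Collections.singletonMap("shard1", new Slice("shard1")))));
        System.out.println(slicesOrNull(state, "collection1").size()); // 1
        System.out.println(slicesOrNull(state, "missing"));            // null
    }
}
```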
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/schema/TestCloudManagedSchemaConcurrent.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/schema/TestCloudManagedSchemaConcurrent.java b/solr/core/src/test/org/apache/solr/schema/TestCloudManagedSchemaConcurrent.java index 703b42b..f21df6c 100644 --- a/solr/core/src/test/org/apache/solr/schema/TestCloudManagedSchemaConcurrent.java +++ b/solr/core/src/test/org/apache/solr/schema/TestCloudManagedSchemaConcurrent.java @@ -342,7 +342,7 @@ public class TestCloudManagedSchemaConcurrent extends AbstractFullDistribZkTestB String testCollectionName = "collection1"; ClusterState clusterState = cloudClient.getZkStateReader().getClusterState(); - Replica shard1Leader = clusterState.getLeader(testCollectionName, "shard1"); + Replica shard1Leader = clusterState.getCollection(testCollectionName).getLeader("shard1"); final String coreUrl = (new ZkCoreNodeProps(shard1Leader)).getCoreUrl(); assertNotNull(coreUrl); @@ -362,7 +362,7 @@ public class TestCloudManagedSchemaConcurrent extends AbstractFullDistribZkTestB // now loop over all replicas and verify each has the same schema version Replica randomReplicaNotLeader = null; - for (Slice slice : clusterState.getActiveSlices(testCollectionName)) { + for (Slice slice : clusterState.getCollection(testCollectionName).getActiveSlices()) { for (Replica replica : slice.getReplicas()) { validateZkVersion(replica, schemaZkVersion, 0, false); http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8f6546e5/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java ---------------------------------------------------------------------- diff --git a/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java b/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java index de774f7..b479024 100644 --- 
a/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java +++ b/solr/core/src/test/org/apache/solr/schema/TestCloudSchemaless.java @@ -105,7 +105,7 @@ public class TestCloudSchemaless extends AbstractFullDistribZkTestBase { // This tests that the replicas properly handle schema additions. int slices = getCommonCloudSolrClient().getZkStateReader().getClusterState() - .getActiveSlices("collection1").size(); + .getCollection("collection1").getActiveSlices().size(); int trials = 50; // generate enough docs so that we can expect at least a doc per slice int numDocsPerTrial = (int)(slices * (Math.log(slices) + 1));