From: Mahmoud Almokadem
Date: Thu, 10 Aug 2017 12:54:11 +0200
Subject: Re: Move index directory to another partition
To: solr-user@lucene.apache.org

Thanks all for your comments. I followed Shawn's steps (rsync), since
everything (ZooKeeper, the Solr home, and the data) was on that volume, and
everything went great.

Thanks again,
Mahmoud

On Sun, Aug 6, 2017 at 12:47 AM, Erick Erickson wrote:

> bq: I was envisioning a scenario where the entire solr home is on the old
> volume that's going away. If I were setting up a Solr install where the
> large/fast storage was a separate filesystem, I would put the solr home
> (or possibly even the entire install) under that mount point. It would
> be a lot easier than setting dataDir in core.properties for every core,
> especially in a cloud install.
>
> Agreed. Nothing in what I said precludes this. If you don't specify
> dataDir, the index for a new replica goes in the default place, i.e.
> usually under your install directory; in your case, under your new mount
> point. I usually don't recommend trying to take control of where dataDir
> points; just let it default. I only mentioned it so you'd be aware it
> exists. So if your new install is associated with a bigger/better EBS
> volume, it's all automatic.
>
> bq: If the dataDir property is already in use to relocate index data, then
> ADDREPLICA and DELETEREPLICA would be a great way to go. I would not
> expect most SolrCloud users to use that method.
>
> I really don't understand this. Each Solr replica has an associated
> dataDir whether you specified it or not (the default is relative to
> the core.properties file). ADDREPLICA creates a new replica in a new
> place; initially, the data directory and index are empty.
> The new replica goes into recovery and uses the standard replication
> process to copy the index via HTTP from a healthy replica and write it
> to its data directory. Once that's done, the replica becomes live.
> There's nothing about dataDir already being in use here at all.
>
> When you start Solr, there's a default place where Solr expects to find
> the replicas. This is not necessarily where Solr is executing from; see
> the "-s" option of "bin/solr start".
>
> If you're talking about using dataDir to point to an existing index,
> yes, that would be a problem, and not something I meant to imply at all.
>
> Why wouldn't most SolrCloud users use ADDREPLICA/DELETEREPLICA? It's
> commonly used to move replicas around a cluster.
>
> Best,
> Erick
>
> On Fri, Aug 4, 2017 at 11:15 AM, Shawn Heisey wrote:
> > On 8/2/2017 9:17 AM, Erick Erickson wrote:
> >> Not entirely sure about AWS intricacies, but getting a new replica to
> >> use a particular index directory in the general case is just a matter
> >> of specifying dataDir=some_directory on the ADDREPLICA command. The
> >> index just needs an HTTP connection (it uses the old replication
> >> process), so nothing huge there. Then DELETEREPLICA for the old one.
> >> There's nothing that ZK has to know about to make this work; it's all
> >> local to the Solr instance.
> >
> > I was envisioning a scenario where the entire solr home is on the old
> > volume that's going away. If I were setting up a Solr install where
> > the large/fast storage was a separate filesystem, I would put the solr
> > home (or possibly even the entire install) under that mount point. It
> > would be a lot easier than setting dataDir in core.properties for
> > every core, especially in a cloud install.
> >
> > If the dataDir property is already in use to relocate index data, then
> > ADDREPLICA and DELETEREPLICA would be a great way to go. I would not
> > expect most SolrCloud users to use that method.
> >
> > Thanks,
> > Shawn
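
[Archive note] For reference, the ADDREPLICA/DELETEREPLICA approach Erick
describes comes down to two Collections API calls. The sketch below only
builds and prints the URLs so you can inspect them before running them with
curl; the collection, shard, replica name, and mount-point path are all
made up for illustration, and the real replica name should come from
CLUSTERSTATUS:

```shell
# Sketch of moving a replica's index onto a new volume via the Collections
# API. All names below (mycollection, shard1, core_node1, /mnt/newvolume)
# are hypothetical placeholders.
SOLR="http://localhost:8983/solr"
COLLECTION="mycollection"
SHARD="shard1"
NEW_DATA_DIR="/mnt/newvolume/${COLLECTION}_${SHARD}"

# Step 1: add a replica whose index lands on the new volume. During
# recovery Solr copies the index over HTTP from a healthy replica.
ADD_URL="$SOLR/admin/collections?action=ADDREPLICA&collection=$COLLECTION&shard=$SHARD&dataDir=$NEW_DATA_DIR"

# Step 2: only after the new replica reports "active" in CLUSTERSTATUS,
# drop the old one (replica name taken from the CLUSTERSTATUS output).
DEL_URL="$SOLR/admin/collections?action=DELETEREPLICA&collection=$COLLECTION&shard=$SHARD&replica=core_node1"

echo "$ADD_URL"
echo "$DEL_URL"
```

Run each step as, e.g., `curl "$ADD_URL"`, and wait for the new replica to
become active between the two calls. If Solr itself was started with `-s`
pointing at the new volume, the dataDir parameter can be omitted and the
default location is used.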