geode-commits mailing list archives

From kmil...@apache.org
Subject [49/76] [abbrv] [partial] incubator-geode git commit: GEODE-1952 Consolidated docs under a single geode-docs directory
Date Wed, 12 Oct 2016 17:12:09 GMT
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/basic_config/the_cache/intro_cache_management.html.md.erb
----------------------------------------------------------------------
diff --git a/basic_config/the_cache/intro_cache_management.html.md.erb b/basic_config/the_cache/intro_cache_management.html.md.erb
deleted file mode 100644
index 2d21a60..0000000
--- a/basic_config/the_cache/intro_cache_management.html.md.erb
+++ /dev/null
@@ -1,79 +0,0 @@
----
-title:  Introduction to Cache Management
----
-
-The cache provides in-memory storage and management for your data.
-
-<a id="concept_F8BA7F2D3B5A40D78461E78BC5FB31FA__section_B364B076EB5843DAAC28EE2805686453"></a>
-You organize your data in the cache into *data regions*, each with its own configurable behavior. You store your data into your regions in key/value pairs called *data entries*. The cache also provides features like transactions, data querying, disk storage management, and logging. See the Javadocs for `org.apache.geode.cache.Cache`.
-
-You generally configure caches using the `gfsh` command-line utility or a combination of XML declarations and API calls. Geode loads and processes your XML declarations when you first create the cache.
-
-Geode has one cache type for managing server and peer caches and one for managing client caches. The cache server process automatically creates its server cache at startup. In your application process, the cache creation returns an instance of the server/peer or client cache. From that point on, you manage the cache through API calls in your application.
-
-## <a id="concept_F8BA7F2D3B5A40D78461E78BC5FB31FA__section_20973C59F1C94E35A02CE6582503205A" class="no-quick-link"></a>The Caching APIs
-
-Geode's caching APIs provide specialized behavior for different system member types and security settings.
-
--   **`org.apache.geode.cache.RegionService`**. Generally, you use the `RegionService` functionality through instances of `Cache` and `ClientCache`. You only specifically use instances of `RegionService` for limited-access users in secure client applications that service many users. The `RegionService` API provides access to existing cache data regions and to the standard query service for the cache. For client caches, queries are sent to the server tier. For server and peer caches, queries are run in the current cache and any available peers. `RegionService` is implemented by `GemFireCache`.
--   **`org.apache.geode.cache.GemFireCache`**. You do not specifically use instances of `GemFireCache`, but you use `GemFireCache` functionality in your instances of `Cache` and `ClientCache`. `GemFireCache` extends `RegionService` and adds general caching features like region attributes, disk stores for region persistence and overflow, and access to the underlying distributed system. `GemFireCache` is implemented by `Cache` and `ClientCache`.
--   **`org.apache.geode.cache.Cache`**. Use the `Cache` interface to manage server and peer caches. You have one `Cache` per server or peer process. The `Cache` extends `GemFireCache` and adds server/peer caching features like communication within the distributed system, region creation, transactions and querying, and cache server functionality.
--   **`org.apache.geode.cache.ClientCache`**. Use the `ClientCache` interface to manage the cache in your clients. You have one `ClientCache` per client process. The `ClientCache` extends `GemFireCache` and adds client-specific caching features like client region creation, subscription keep-alive management for durable clients, querying on server and client tiers, and RegionService creation for secure access by multiple users within the client.
-
-## <a id="concept_F8BA7F2D3B5A40D78461E78BC5FB31FA__section_6486BDAF06EC4B91A548872066F3EC8C" class="no-quick-link"></a>The Cache XML
-
-Your `cache.xml` must be formatted according to the product XML schema definition `cache-1.0.xsd`. The schema definition file is available in the product distribution at `$GEMFIRE/schemas/geode.apache.org/schema/cache/cache-1.0.xsd`.
-
-You use one format for peer and server caches and another for client caches.
-
-`cache.xml` for Peer/Server:
-
-``` pre
-<?xml version="1.0" encoding="UTF-8"?>
-<cache xmlns="http://geode.incubator.apache.org/schema/cache"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-    xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
-    version="1.0">
-...
-</cache>
-```
-
-`cache.xml` for Client:
-
-``` pre
-<?xml version="1.0" encoding="UTF-8"?>
-<client-cache
-    xmlns="http://geode.incubator.apache.org/schema/cache"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-    xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
-    version="1.0">
-...
-</client-cache>
-```
-
-For more information on the `cache.xml` file, see [cache.xml](../../reference/topics/chapter_overview_cache_xml.html#cache_xml).
-
-## <a id="concept_F8BA7F2D3B5A40D78461E78BC5FB31FA__section_B113BC6921DA434C947D4326DDB4526E" class="no-quick-link"></a>Create and Close a Cache
-
-Your system configuration and cache configuration are initialized when you start your member processes and create each member’s Geode cache. If you are using the cluster configuration service, member processes can pick up their cache configurations from the cluster or group's current configuration. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html).
-
-The steps in this section use `gemfire.properties` and `cache.xml` file examples, except where API is required. You can configure your distributed system properties and cache through the API as well, and you can use a combination of file configuration and API configuration.
-
-The XML examples may not include the full `cache.xml` file listing. All of your declarative cache configuration must conform to the cache XSD in the product installation `$GEMFIRE/schemas/geode.apache.org/schema/cache/cache-1.0.xsd`.
-
-For all of your Geode applications:
-
-1.  Create your `Cache`, for peer/server applications, or `ClientCache`, for client applications. This connects to the Geode system you have configured and initializes any configured data regions. Use your cache instance to access your regions and perform your application work.
-2.  Close your cache when you are done. This frees up resources and disconnects your application from the distributed system in an orderly manner.
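-
-The following is a minimal sketch of this create/work/close lifecycle for a peer or server member. The region name `exampleRegion` is only an assumption for illustration; substitute a region configured in your own `cache.xml` or cluster configuration.
-
-``` pre
-// Create the cache; this connects to the distributed system
-// and initializes any configured data regions.
-Cache cache = new CacheFactory().create();
-
-// Do your application work through the cache and its regions.
-Region<String, String> region = cache.getRegion("exampleRegion");
-region.put("key1", "value1");
-String value = region.get("key1");
-
-// Close the cache when you are done.
-cache.close();
-```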
-
-Follow the instructions in the subtopics under [Cache Management](chapter_overview.html#the_cache) to customize your cache creation and closure for your application needs. You may need to combine more than one of the sets of instructions. For example, to create a client cache in a system with security, you would follow the instructions for creating and closing a client cache and for creating and closing a cache in a secure system.
-
-## <a id="concept_F8BA7F2D3B5A40D78461E78BC5FB31FA__section_E8781B263D434F6A9104194AE7BE1647" class="no-quick-link"></a>Export and Import a Cache Snapshot
-
-To aid in the administration of cache data and speed the setup of new environments, you can export a snapshot of the entire cache (all regions) and then import the snapshot into a new cache. For example, you could take a snapshot of the production environment cache in order to import the cache's data into a testing environment.
-
-For more details on exporting and importing snapshots of a cache, see [Cache and Region Snapshots](../../managing/cache_snapshots/chapter_overview.html#concept_E6AC3E25404D4D7788F2D52D83EE3071).
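-
-As a sketch of the programmatic approach described in that topic, the cache-wide snapshot service can write and read snapshot files for all regions. Here `cache` and `otherCache` are assumed to be open `Cache` instances, and the directory path is only illustrative:
-
-``` pre
-// Export a snapshot of every region in the cache to a directory.
-CacheSnapshotService snapshotService = cache.getSnapshotService();
-snapshotService.save(new File("/export/snapshots"), SnapshotFormat.GEMFIRE);
-
-// Later, import the snapshot files into another cache.
-otherCache.getSnapshotService().load(new File("/export/snapshots"), SnapshotFormat.GEMFIRE);
-```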
-
-## Cache Management with gfsh and the Cluster Configuration Service
-
-You can use gfsh commands to manage a server cache. There are gfsh commands to create regions, start servers, and create queues and other objects. As you issue these commands, the Cluster Configuration Service saves cache.xml and gemfire.properties files on the locators and distributes those configurations to any new members that join the cluster. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html).

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/basic_config/the_cache/managing_a_client_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/basic_config/the_cache/managing_a_client_cache.html.md.erb b/basic_config/the_cache/managing_a_client_cache.html.md.erb
deleted file mode 100644
index 94099aa..0000000
--- a/basic_config/the_cache/managing_a_client_cache.html.md.erb
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title:  Managing a Client Cache
----
-
-You have several options for client cache configuration. Start your client cache using a combination of XML declarations and API calls. Close the client cache when you are done.
-
-<a id="managing_a_client_cache__section_566044C44C434926A7A9FBAB2BF463BF"></a>
-Geode clients are processes that send most or all of their data requests and updates to a Geode server system. Clients run as standalone processes, without peers of their own.
-
-**Note:**
-Geode automatically configures the distributed system for your `ClientCache` as standalone, which means the client has no peers. Do not try to set the `gemfire.properties` `mcast-port` or `locators` for a client application or the system will throw an exception.
-
-1.  Create your client cache:
-    1.  In your `cache.xml`, configure your cache inside a top-level `<client-cache>` element. Configure your server connection pool and your regions as needed. Example:
-
-        ``` pre
-        <?xml version="1.0" encoding="UTF-8"?>
-        <client-cache
-            xmlns="http://geode.incubator.apache.org/schema/cache"
-            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-            xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
-            version="1.0">
-            <pool name="serverPool">
-                <locator host="host1" port="44444"/>
-            </pool>
-            <region name="exampleRegion" refid="PROXY"/>
-        </client-cache>
-        ```
-
-        **Note:**
-        Applications that use a `client-cache` may want to set `concurrency-checks-enabled` to false for a region in order to see all events for that region. Geode server members can continue using concurrency checks, but they will pass all events to the client cache. This configuration ensures that the client sees all region events, but it does not prevent the client cache region from becoming out-of-sync with the server cache. See [Consistency for Region Updates](../../developing/distributed_regions/region_entry_versions.html#topic_CF2798D3E12647F182C2CEC4A46E2045).
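-
-        As a sketch of that configuration (the region and pool names here are only illustrative), the setting goes on the client region's `region-attributes`:
-
-        ``` pre
-        <region name="exampleRegion">
-            <region-attributes refid="PROXY" pool-name="serverPool"
-                concurrency-checks-enabled="false"/>
-        </region>
-        ```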
-
-    2.  If you use multiple server pools, configure the pool name explicitly for each client region. Example:
-
-        ``` pre
-        <pool name="svrPool1">
-            <locator host="host1" port="40404"/>
-        </pool>
-        <pool name="svrPool2">
-            <locator host="host2" port="40404"/>
-        </pool>
-        <region name="clientR1" refid="PROXY" pool-name="svrPool1"/>  
-        <region name="clientR2" refid="PROXY" pool-name="svrPool2"/>
-        <region name="clientsPrivateR" refid="LOCAL"/>
-        ```
-
-    3.  In your Java client application, create the cache using the `ClientCacheFactory` `create` method. Example:
-
-        ``` pre
-        ClientCache clientCache = new ClientCacheFactory().create();
-        ```
-
-        This creates the server connections and initializes the client’s cache according to your `gemfire.properties` and `cache.xml` specifications.
-
-2.  Close your cache when you are done, using the `close` method of your `ClientCache` instance:
-
-    ``` pre
-    cache.close();
-    ```
-
-    If your client is durable and you want to maintain your durable queues while the client cache is closed, use:
-
-    ``` pre
-    clientCache.close(true);
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/basic_config/the_cache/managing_a_multiuser_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/basic_config/the_cache/managing_a_multiuser_cache.html.md.erb b/basic_config/the_cache/managing_a_multiuser_cache.html.md.erb
deleted file mode 100644
index 76dc590..0000000
--- a/basic_config/the_cache/managing_a_multiuser_cache.html.md.erb
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title:  Managing RegionServices for Multiple Secure Users
----
-
-In a secure system, you can create clients with multiple, secure connections to the servers from each client. The most common use case is a Geode client embedded in an application server that supports data requests from many users. Each user may be authorized to access a subset of data on the servers. For example, customer users may be allowed to see and update only their own orders and shipments.
-
-<a id="managing_a_multiuser_cache__section_A2A0F835DF35450E8E4B5304F4BC07E2"></a>
-
-In a single client, multiple authenticated users can all access the same `ClientCache` through instances of the `RegionService` interface. Because there are multiple users with varying authorization levels, access to cached data is done entirely through the servers, where each user’s authorization can be managed.
-Follow these steps in addition to the steps in [Managing a Cache in a Secure System](managing_a_secure_cache.html#managing_a_secure_cache).
-
-1.  Create your cache and `RegionService` instances:
-    1.  Configure your client’s server pool for multiple secure user authentication. Example:
-
-        ``` pre
-        <pool name="serverPool" multiuser-authentication="true">
-            <locator host="host1" port="44444"/>
-        </pool>
-        ```
-
-        This enables access through the pool for the `RegionService` instances and disables it for the `ClientCache` instance.
-
-    2.  After you create your `ClientCache`, from your `ClientCache` instance, for each user call the `createAuthenticatedView` method, providing the user’s particular credentials. These are create method calls for two users:
-
-        ``` pre
-        Properties properties = new Properties();
-        properties.setProperty("security-username", cust1Name);
-        properties.setProperty("security-password", cust1Pwd);
-        RegionService regionService1 = 
-            clientCache.createAuthenticatedView(properties);
-
-        properties = new Properties();
-        properties.setProperty("security-username", cust2Name);
-        properties.setProperty("security-password", cust2Pwd);
-        RegionService regionService2 =  
-            clientCache.createAuthenticatedView(properties);
-        ```
-
-    For each user, do all of your caching and region work through the assigned `RegionService` instance (see the sketch following this list). Access to the server cache is governed by the server’s configured authorization rules for each individual user.
-2.  Close your cache by closing the `ClientCache` instance only. Do not close the `RegionService` instances first. This is especially important for durable clients.
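-
-The following is a minimal sketch of working through the per-user views created above. The region name `exampleRegion` is only an assumption for illustration; each operation is authorized on the servers against the credentials of the user that owns the view:
-
-``` pre
-// Each user works only through that user's RegionService view.
-Region<String, String> region1 = regionService1.getRegion("exampleRegion");
-region1.put("order1", "data for user 1");   // checked against cust1's permissions
-
-Region<String, String> region2 = regionService2.getRegion("exampleRegion");
-String value = region2.get("order2");       // checked against cust2's permissions
-
-// At shutdown, close the ClientCache only; do not close the RegionService views first.
-clientCache.close();
-```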
-
-## <a id="managing_a_multiuser_cache__section_692D9961E8224739903E483BF8AB4F84" class="no-quick-link"></a>Requirements and Caveats for RegionService
-
-Once each region is created, you can perform operations on it through the `ClientCache` instance or the `RegionService` instances, but not both.
-
-**Note:**
-You can use the `ClientCache` to create a region that uses a pool configured for multi-user authentication, then access and do work on the region using your `RegionService` instances.
-
-To use `RegionService`, regions must be configured as `EMPTY`. Depending on your data access requirements, this configuration might affect performance, because the client goes to the server for every get.
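-
-As a sketch (reusing the `serverPool` from the earlier example, with an illustrative region name), an empty client region declares no local storage, so every operation is forwarded to the servers:
-
-``` pre
-<region name="exampleRegion">
-    <region-attributes data-policy="empty" pool-name="serverPool"/>
-</region>
-```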

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/basic_config/the_cache/managing_a_peer_server_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/basic_config/the_cache/managing_a_peer_server_cache.html.md.erb b/basic_config/the_cache/managing_a_peer_server_cache.html.md.erb
deleted file mode 100644
index 89ad024..0000000
--- a/basic_config/the_cache/managing_a_peer_server_cache.html.md.erb
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title:  Managing a Peer or Server Cache
----
-
-You start your peer or server cache using a combination of XML declarations and API calls. Close the cache when you are done.
-
-<a id="creating_and_closing_a_peer_cache__section_1633A80F0DB04794BB6C3A7F05EED97E"></a>
-Geode peers are members of a Geode distributed system that do not act as clients to another Geode distributed system. Geode servers are peers that also listen for and process client requests.
-
-1.  Create your cache:
-    1.  Start up a cluster and the cluster configuration service:
-        1.  Start a locator with `--enable-cluster-configuration` set to true. (It is set to true by default.)
-
-            ``` pre
-            gfsh>start locator --name=locator1
-            ```
-
-        2.  Start up member processes that use the cluster configuration service (enabled by default):
-
-            ``` pre
-            gfsh>start server --name=server1 --server-port=40404
-            ```
-
-        3.  Create regions:
-
-            ``` pre
-            gfsh>create region --name=customerRegion --type=REPLICATE
-
-            gfsh>create region --name=ordersRegion --type=PARTITION
-            ```
-
-    2.  Or, if you are not using the cluster configuration service, configure cache.xml directly in each member of your cluster. In your `cache.xml`, configure your cache inside a top-level `<cache>` element. Example:
-
-        ``` pre
-        <?xml version="1.0" encoding="UTF-8"?>
-        <cache
-            xmlns="http://geode.incubator.apache.org/schema/cache"
-            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-            xsi:schemaLocation="http://geode.incubator.apache.org/schema/cache http://geode.incubator.apache.org/schema/cache/cache-1.0.xsd"
-            version="1.0">
-            <!-- Use the cache-server element only for server processes -->
-            <cache-server port="40404"/>
-            <region name="customerRegion" refid="REPLICATE" />
-            <region name="ordersRegion" refid="PARTITION" />
-        </cache>
-        ```
-
-    3.  To programmatically create the `Cache` instance:
-        -   In your Java application, use the `CacheFactory` create method:
-
-            ``` pre
-            Cache cache = new CacheFactory().create();
-            ```
-        -   If you are running a server using the Geode `cacheserver` process, it automatically creates the cache and connection at startup and closes both when it exits.
-
-        The system creates the distributed system connection and initializes the cache according to your `gemfire.properties` and `cache.xml` specifications.
-
-2.  Close your cache when you are done, using the inherited `close` method of the `Cache` instance:
-
-    ``` pre
-    cache.close();
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/basic_config/the_cache/managing_a_secure_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/basic_config/the_cache/managing_a_secure_cache.html.md.erb b/basic_config/the_cache/managing_a_secure_cache.html.md.erb
deleted file mode 100644
index 6bd109c..0000000
--- a/basic_config/the_cache/managing_a_secure_cache.html.md.erb
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title:  Managing a Cache in a Secure System
----
-
-To create a cache in a secured system, you must supply credentials
-for authentication when the connection is made.
-Authorization then permits operations as configured.
-
-<a id="managing_a_secure_cache__section_11BF0F3F64504B74B39CD4C1CF58E6FC"></a>
-These steps demonstrate a programmatic cache creation.
-
-1.  To create the cache:
-    1.  Add the security properties required by your particular security implementation to the `gemfire.properties` or `gfsecurity.properties` file. Examples:
-
-        ``` pre
-        security-client-auth-init=mySecurity.UserPasswordAuthInit.create
-        ```
-
-        ``` pre
-        security-peer-auth-init=myAuthPkg.myAuthInitImpl.create
-        ```
-
-    2.  When you create your cache, pass any properties required by your security implementation to the cache factory create call by using one of these methods:
-        -   `ClientCacheFactory` or `CacheFactory` `set` methods. Example:
-
-            ``` pre
-            ClientCache clientCache = new ClientCacheFactory()
-                .set("security-username", username)
-                .set("security-password", password)
-                .create();
-            ```
-        -   Properties object passed to the `ClientCacheFactory` or `CacheFactory` `create` method. These are usually properties of a sensitive nature that you do not want to put inside the `gfsecurity.properties` file. Example:
-
-            ``` pre
-            Properties properties = new Properties();
-            properties.setProperty("security-username", username);
-            properties.setProperty("security-password", password);
-            Cache cache = new CacheFactory(properties).create();
-            ```
-
-            **Note:**
-            Properties passed to a cache creation method override any settings in either the `gemfire.properties` file or the `gfsecurity.properties` file.
-
-2.  Close your cache when you are done, using the `close` method of the `ClientCache` instance or the inherited `close` method of the `Cache` instance. Example:
-
-    ``` pre
-    cache.close();
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/basic_config/the_cache/setting_cache_initializer.html.md.erb
----------------------------------------------------------------------
diff --git a/basic_config/the_cache/setting_cache_initializer.html.md.erb b/basic_config/the_cache/setting_cache_initializer.html.md.erb
deleted file mode 100644
index 20cc2c6..0000000
--- a/basic_config/the_cache/setting_cache_initializer.html.md.erb
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title:  Launching an Application after Initializing the Cache
----
-
-You can specify a callback application that is launched after the cache initialization.
-
-By specifying an `<initializer>` element in your cache.xml file, you can trigger a callback application, which is run after the cache has been initialized. Applications that use the cacheserver script to start up a server can also use this feature to hook into a callback application. To use this feature, you need to specify the callback class within the `<initializer>` element. This element should be added to the end of your `cache.xml` file.
-
-You can specify the `<initializer>` element for either server caches or client caches.
-
-The callback class must implement the `Declarable` interface. When the callback class is loaded, its `init` method is called, and any parameters defined in the `<initializer>` element are passed as properties.
-
-The following is an example specification.
-
-In cache.xml:
-
-``` pre
-<initializer>
-   <class-name>MyInitializer</class-name>
-   <parameter name="members">
-      <string>2</string>
-   </parameter>
-</initializer>
-```
-
-Here's the corresponding class definition:
-
-``` pre
-import java.util.Properties;
-
-import org.apache.geode.cache.Declarable;
-
-public class MyInitializer implements Declarable {
-   public void init(Properties properties) {
-      System.out.println(properties.getProperty("members"));
-   }
-}
-```
-
-The following are some additional real-world usage scenarios:
-
-1.  Start a SystemMembershipListener
-
-    ``` pre
-    <initializer>
-       <class-name>TestSystemMembershipListener</class-name>
-    </initializer>
-    ```
-
-2.  Write a custom tool that monitors cache resources
-
-    ``` pre
-    <initializer>
-       <class-name>ResourceMonitorCacheXmlLoader</class-name>
-    </initializer>
-    ```
-
-You can use the `<initializer>` element to instantiate and start any singleton, timer task, or thread.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/basic_config/the_cache/setting_cache_properties.html.md.erb
----------------------------------------------------------------------
diff --git a/basic_config/the_cache/setting_cache_properties.html.md.erb b/basic_config/the_cache/setting_cache_properties.html.md.erb
deleted file mode 100644
index 76d5066..0000000
--- a/basic_config/the_cache/setting_cache_properties.html.md.erb
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title:  Options for Configuring the Cache and Data Regions
----
-
-To populate your Apache Geode cache and fine-tune its storage and distribution behavior, you need to define cached data regions and provide custom configuration for the cache and regions.
-
-<a id="setting_cache_properties__section_FB536C90C219432D93E872CBD49D66B1"></a>
-Cache configuration properties define:
-
--   Cache-wide settings such as disk stores, communication timeouts, and settings designating the member as a server
--   Cache data regions
-
-Configure the cache and its data regions through one or more of these methods:
-
--   Through a persistent configuration that you define by issuing commands with the `gfsh` command-line utility. `gfsh` supports the administration, debugging, and deployment of Apache Geode processes and applications. You can use gfsh to configure regions, locators, servers, disk stores, event queues, and other objects.
-
-    As you issue commands, gfsh saves a set of configurations that apply to the entire cluster and also saves configurations that only apply to defined groups of members within the cluster. You can re-use these configurations to create a distributed system. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html).
-
--   Through declarations in the XML file named in the `cache-xml-file` `gemfire.properties` setting. This file is generally referred to as the `cache.xml` file, but it can have any name. See [cache.xml](../../reference/topics/chapter_overview_cache_xml.html#cache_xml).
--   Through application calls to the `org.apache.geode.cache.CacheFactory`, `org.apache.geode.cache.Cache`, and `org.apache.geode.cache.Region` APIs. A brief sketch of this approach follows.
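-
-The following is a minimal sketch of the API approach; the region name and shortcut are only assumptions for illustration:
-
-``` pre
-// Create the cache from gemfire.properties (and any cache-xml-file setting),
-// then define a region programmatically.
-Cache cache = new CacheFactory().create();
-Region<String, String> region = cache
-    .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
-    .create("exampleRegion");
-```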
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/chapter_overview.html.md.erb b/configuring/chapter_overview.html.md.erb
deleted file mode 100644
index 8026e72..0000000
--- a/configuring/chapter_overview.html.md.erb
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title:  Configuring and Running a Cluster
----
-
-You use the `gfsh` command-line utility to configure your Apache Geode cluster (also called a "distributed system"). The cluster configuration service persists the cluster configurations and distributes the configurations to members of the cluster. There are also several additional ways to configure a cluster.
-
-You use `gfsh` to configure regions, disk stores, members, and other Geode objects. You also use `gfsh` to start and stop locators, servers, and Geode monitoring tools. As you execute these commands, the cluster configuration service persists the configuration. When new members join the cluster, the service distributes the configuration to the new members.
-
-`gfsh` is the recommended means of configuring and managing your Apache Geode cluster; however, you can still configure many aspects of a cluster using the older methods of the cache.xml and gemfire.properties files. See [cache.xml](../reference/topics/chapter_overview_cache_xml.html#cache_xml) and the [Reference](../reference/book_intro.html#reference) for configuration parameters. You can also configure some aspects of a cluster using a Java API. See [Managing Apache Geode](../managing/book_intro.html#managing_gemfire_intro).
-
--   **[Overview of the Cluster Configuration Service](../configuring/cluster_config/gfsh_persist.html)**
-
-    The Apache Geode cluster configuration service persists cluster configurations created by `gfsh` commands to the locators in a cluster and distributes the configurations to members of the cluster.
-
--   **[Tutorial—Creating and Using a Cluster Configuration](../configuring/cluster_config/persisting_configurations.html)**
-
-    A short walk-through that uses a single computer to demonstrate how to use `gfsh` to create a cluster configuration for a Geode cluster.
-
--   **[Deploying Application JARs to Apache Geode Members](../configuring/cluster_config/deploying_application_jars.html)**
-
-    You can dynamically deploy your application JAR files to specific members or to all members in your distributed system. Geode automatically keeps track of JAR file versions; autoloads the deployed JAR files to the CLASSPATH; and auto-registers any functions that the JAR contains.
-
--   **[Using Member Groups](../configuring/cluster_config/using_member_groups.html)**
-
-    Apache Geode allows you to organize your distributed system members into logical member groups.
-
--   **[Exporting and Importing Cluster Configurations](../configuring/cluster_config/export-import.html)**
-
-    The cluster configuration service exports and imports configurations created using `gfsh` for an entire Apache Geode cluster.
-
--   **[Cluster Configuration Files and Troubleshooting](../configuring/cluster_config/gfsh_config_troubleshooting.html)**
-
-    When you use the cluster configuration service in Geode, you can examine the generated configuration files in the `cluster_config` directory on the locator. `gfsh` saves configuration files at the cluster-level and at the individual group-level.
-
--   **[Loading Existing Configuration Files into Cluster Configuration](../configuring/cluster_config/gfsh_load_from_shared_dir.html)**
-
-    To load an existing cache.xml or gemfire.properties configuration file into a new cluster, use the `--load-cluster-configuration-from-dir` parameter when starting up the locator.
-
--   **[Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS](../configuring/cluster_config/gfsh_remote.html)**
-
-    You can connect `gfsh` via HTTP or HTTPS to a remote cluster and manage the cluster using `gfsh` commands.
-
--   **[Deploying Configuration Files without the Cluster Configuration Service](../configuring/running/deploying_config_files.html)**
-
-    You can deploy your Apache Geode configuration files in your system directory structure or in jar files. You determine how you want to deploy your configuration files and set them up accordingly.
-
--   **[Starting Up and Shutting Down Your System](../configuring/running/starting_up_shutting_down.html)**
-
-    Determine the proper startup and shutdown procedures, and write your startup and shutdown scripts.
-
--   **[Running Geode Locator Processes](../configuring/running/running_the_locator.html)**
-
-    The locator is a Geode process that tells new, connecting members where running members are located and provides load balancing for server use.
-
--   **[Running Geode Server Processes](../configuring/running/running_the_cacheserver.html)**
-
-    A Geode server is a process that runs as a long-lived, configurable member of a client/server system.
-
--   **[Managing System Output Files](../configuring/running/managing_output_files.html)**
-
-    Geode output files are optional and can become quite large. Work with your system administrator to determine where to place them to avoid interfering with other system activities.
-
--   **[Firewall Considerations](../configuring/running/firewall_ports_config.html)**
-
-    You can configure and limit port usage for situations that involve firewalls, for example, between client-server or server-server connections.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/deploying_application_jars.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/deploying_application_jars.html.md.erb b/configuring/cluster_config/deploying_application_jars.html.md.erb
deleted file mode 100644
index 08eb1d5..0000000
--- a/configuring/cluster_config/deploying_application_jars.html.md.erb
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title:  Deploying Application JARs to Apache Geode Members
----
-
-You can dynamically deploy your application JAR files to specific members or to all members in your distributed system. Geode automatically keeps track of JAR file versions; autoloads the deployed JAR files to the CLASSPATH; and auto-registers any functions that the JAR contains.
-
-To deploy and undeploy application JAR files in Apache Geode, use the `gfsh` `deploy` or `undeploy` command. You can deploy a single JAR or multiple JARs (by either specifying the JAR filenames or by specifying a directory that contains the JAR files), and you can also target the deployment to a member group or to multiple member groups. For example, after connecting to the distributed system where you want to deploy the JARs, you could type at the `gfsh` prompt:
-
-``` pre
-gfsh> deploy --jar=group1_functions.jar
-```
-
-This command deploys the `group1_functions.jar` file to all members in the distributed system.
-
-To deploy the JAR file to a subset of members, use the `--group` argument. For example:
-
-``` pre
-gfsh> deploy --jar=group1_functions.jar --group=MemberGroup1
-```
-
-The example assumes that you have already defined the member group that you want to use when starting up your members. See [Configuring and Running a Cluster](../chapter_overview.html#concept_lrh_gyq_s4) for more information on how to define member groups and add a member to a group; a brief sketch of assigning a group at server startup follows.
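-
-As a sketch (the member and group names here are only illustrative), a server joins a member group at startup, and subsequent deployments can target that group:
-
-``` pre
-gfsh>start server --name=server1 --server-port=40404 --group=MemberGroup1
-gfsh>deploy --jar=group1_functions.jar --group=MemberGroup1
-```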
-
-To deploy all the JAR files that are located in a specific directory to all members:
-
-``` pre
-gfsh> deploy --dir=libs/group1-libs
-```
-
-You can either provide a JAR file name or a directory of JARs for deployment, but you cannot specify both at once.
-
-To undeploy all previously deployed JAR files throughout the distributed system:
-
-``` pre
-gfsh> undeploy
-```
-
-To undeploy a specific JAR file:
-
-``` pre
-gfsh> undeploy --jar=group1_functions.jar
-```
-
-To target a specific member group when undeploying all JAR files:
-
-``` pre
-gfsh> undeploy --group=MemberGroup1
-```
-
-Only JAR files that have been previously deployed on members in the MemberGroup1 group will be undeployed.
-
-To see a list of all deployed JARs in your distributed system:
-
-``` pre
-gfsh> list deployed
-```
-
-To see a list of all deployed JARs in a specific member group:
-
-``` pre
-gfsh> list deployed --group=MemberGroup1
-```
-
-Sample output:
-
-``` pre
- 
- Member   |     Deployed JAR     |                JAR Location            
---------- | -------------------- | ---------------------------------------------------
-datanode1 | group1_functions.jar | /usr/local/gemfire/deploy/vf.gf#group1_functions.jar#1
-datanode2 | group1_functions.jar | /usr/local/gemfire/deploy/vf.gf#group1_functions.jar#1
-```
-
-For more information on `gfsh` usage, see [gfsh (Geode SHell)](../../tools_modules/gfsh/chapter_overview.html).
-
-## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_D36E345C6E254D27B0F4B0C8711F5E6A" class="no-quick-link"></a>Deployment Location for JAR Files
-
-The system location where JAR files are written on each member is determined by the `deploy-working-dir` Geode property configured for that member. For example, you could have the following configured in the `gemfire.properties` file for your member:
-
-``` pre
-#gemfire.properties
-deploy-working-dir=/usr/local/gemfire/deploy
-```
-
-This deployment location can be local or a shared network resource (such as a mount location) used by multiple members in order to reduce disk space usage. If you use a shared directory, you still need to deploy the JAR file on every member that you want to have access to the application, because deployment updates the CLASSPATH and auto-registers functions.
-
-## About Deploying JAR Files and the Cluster Configuration Service
-
-By default, the cluster configuration service distributes deployed JAR files to all locators in the distributed system. When you start a new server using `gfsh`, the locator supplies configuration files and deployed jar files to the member and writes them to the server's directory.
-
-See [Overview of the Cluster Configuration Service](gfsh_persist.html).
-
-## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_D9219C5EEED64672930200677C2118C9" class="no-quick-link"></a>Versioning of JAR Files
-
-When you deploy JAR files to a distributed system or member group, the JAR file is modified to indicate version information in its name. Each JAR filename is prefixed with `vf.gf#` and contains a version number at the end of the filename. For example, if you deploy `MyClasses.jar` five times, the filename is displayed as `vf.gf#MyClasses.jar#5` when you list all deployed jars.
-
-When you deploy a new JAR file, the member receiving the deployment checks whether the JAR file is a duplicate, either because the JAR file has already been deployed on that member or because the JAR file has already been deployed to a shared deployment working directory that other members are also using. If another member has already deployed this JAR file to the shared directory (determined by doing a byte-for-byte compare to the latest version in its directory), the member receiving the latest deployment does not write the file to disk. Instead, the member updates the ClassPathLoader to use the already deployed JAR file. If a newer version of the JAR file is detected on disk and is already in use, the deployment is canceled.
-
-When a member begins using a JAR file, the member obtains a shared lock on the file. If the member receives a newer version by deployment, the member releases the shared lock and tries to delete the existing JAR file in favor of the newer version. If no other member has a shared lock on the existing JAR, the existing, older version JAR is deleted.
-
-## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_F8AC59EEC8C5434FBC6F38A12A7371CE" class="no-quick-link"></a>Automatic Class Path Loading
-
-When a cache is started, the new cache requests that the latest versions of each JAR file in the current working directory be added to the ClassPathLoader. If a JAR file has already been deployed to the ClassPathLoader, the ClassPathLoader updates its loaded version if a newer version is found; otherwise, there is no change. If detected, older versions of the JAR files are deleted if no other member has a shared lock on them.
-
-Undeploying a JAR file does not automatically unload the classes that were loaded during deployment. You need to restart your members to unload those classes.
-
-When a cache is closed, it requests that all currently deployed JAR files be removed from the ClassPathLoader.
-
-If you are using a shared deployment working directory, all members sharing the directory should belong to the same member group. Upon restart, all members that share the same deployment working directory will deploy and autoload their CLASSPATH with any JARs found in the current working directory. This means that some members may load the JARs even though they are not part of the member group that received the original deployment.
-
-## <a id="concept_4436C021FB934EC4A330D27BD026602C__section_C1ECA5A66C27403A9A18D0E04EFCC66D" class="no-quick-link"></a>Automatic Function Registration
-
-When you deploy a JAR file that contains a function (in other words, contains a class that implements the Function interface), the function is automatically registered through the `FunctionService.registerFunction` method. If another JAR file is deployed (either with the same JAR filename or another filename) with the same function, the new implementation of the function is registered, overwriting the old one. If a JAR file is undeployed, any functions that were auto-registered at the time of deployment are unregistered. Because deploying a JAR file that has the same name multiple times results in the JAR being un-deployed and re-deployed, functions in the JAR are unregistered and re-registered each time this occurs. If a function with the same ID is registered from multiple differently named JAR files, the function is unregistered if any of those JAR files are re-deployed or un-deployed.
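-
-The following is a minimal sketch of such a function; the class name and ID are only illustrative. Packaging a class like this in a deployed JAR is what triggers the automatic registration described above:
-
-``` pre
-import org.apache.geode.cache.execute.Function;
-import org.apache.geode.cache.execute.FunctionContext;
-
-public class Group1Function implements Function {
-
-   // Invoked on each member where the function executes.
-   public void execute(FunctionContext context) {
-      context.getResultSender().lastResult("done");
-   }
-
-   // The ID under which the function is auto-registered at deploy time.
-   public String getId() {
-      return "Group1Function";
-   }
-
-   public boolean hasResult() {
-      return true;
-   }
-
-   public boolean optimizeForWrite() {
-      return false;
-   }
-
-   public boolean isHA() {
-      return true;
-   }
-}
-```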
-
-During `cache.xml` load, the parameters for any declarables are saved. If functions found in a JAR file are also declarable, and have the same class name as the declarables whose parameters were saved after loading cache.xml, then function instances are created using those parameters and are also registered. Therefore, if the same function is declared multiple times in the `cache.xml` with different sets of parameters, when the JAR is deployed a function is instantiated for each set of parameters. If any functions are registered using parameters from a `cache.xml` load, the default, no-argument function is not registered.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/export-import.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/export-import.html.md.erb b/configuring/cluster_config/export-import.html.md.erb
deleted file mode 100644
index e730c5b..0000000
--- a/configuring/cluster_config/export-import.html.md.erb
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title:  Exporting and Importing Cluster Configurations
----
-
-The cluster configuration service exports and imports configurations created using `gfsh` for an entire Apache Geode cluster.
-
-The cluster configuration service saves the cluster configuration as you create regions, disk stores, and other objects using `gfsh` commands. You can export this configuration, as well as any jar files that contain application files, to a zip file and then import this configuration to create a new cluster.
-
-## Exporting a Cluster Configuration
-
-You issue the `gfsh` `export cluster-configuration` command to save the configuration data for your cluster in a zip file. This zip file contains subdirectories for cluster-level configurations and a directory for each group specified in the cluster. The contents of these directories are described in [Cluster Configuration Files and Troubleshooting](gfsh_config_troubleshooting.html#concept_ylt_2cb_y4).
-
-To export a cluster configuration, run the `gfsh` `export cluster-configuration` command while connected to a Geode cluster. For example:
-
-``` pre
-export cluster-configuration --zip-file-name=myClusterConfig.zip --dir=/home/username/configs
-```
-
-See [export cluster-configuration](../../tools_modules/gfsh/command-pages/export.html#topic_mdv_jgz_ck).
-
-**Note:**
-`gfsh` only saves cluster configuration values for configurations specified using `gfsh`. Configurations created by the management API are not saved with the cluster configurations.
-
-## Importing a Cluster Configuration
-
-You can import a cluster configuration to a running locator. After importing the configuration, any servers you start receive this cluster configuration.
-
-To import a cluster configuration, start one or more locators and then run the `gfsh` `import cluster-configuration` command. For example:
-
-``` pre
-import cluster-configuration --zip-file-name=/home/username/configs/myClusterConfig.zip
-```
-
-See [import cluster-configuration](../../tools_modules/gfsh/command-pages/import.html#topic_vnv_grz_ck).
-
-**Note:**
-You cannot import a cluster configuration to a cluster where cache servers are already running.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb b/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb
deleted file mode 100644
index 51f89b0..0000000
--- a/configuring/cluster_config/gfsh_config_troubleshooting.html.md.erb
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title:  Cluster Configuration Files and Troubleshooting
----
-
-When you use the cluster configuration service in Geode, you can examine the generated configuration files in the `cluster_config` directory on the locator. `gfsh` saves configuration files at the cluster-level and at the individual group-level.
-
-The following directories and configuration files are available on the locator running the cluster configuration service:
-
-**Cluster-level configuration**  
-For configurations that apply to all members of a cluster, the locator creates a `cluster` subdirectory within the `cluster_config` directory (or within the cluster configuration directory specified with the `--cluster-config-dir=value` parameter when starting up the locator). All servers receive this configuration when they are started using `gfsh`. This directory contains:
-
--   `cluster.xml` -- A Geode `cache.xml` file containing configuration common to all members
--   `cluster.properties` -- A Geode `gemfire.properties` file containing properties common to all members
--   Jar files that are intended for deployment to all members
-
-<!-- -->
-
-**Group-level configuration**  
-When you specify the `--group` parameter in a `gfsh` command, (for example, `start server` or `create region`) the locator writes the configurations for each group in a subdirectory with the same name as the group. When you start a server that specifies one or more group names, the server receives both the cluster-level configurations and the configurations from all groups specified. This subdirectory contains:
-
--   `<group-name>.xml` -- A Geode `cache.xml` file containing configurations common to all members of the group
--   `<group-name>.properties` -- A Geode `gemfire.properties` file containing properties common to all members of the group
--   Jar files that are intended for deployment to all members of the group
-
-<img src="../../images_svg/cluster-group-config.svg" id="concept_ylt_2cb_y4__image_bs1_mcb_y4" class="image" />
-
-You can export a zip file that contains all artifacts of a cluster configuration. The zip file contains all of the files in the `cluster_config` (or otherwise specified) subdirectory of a locator. You can import this configuration to a new cluster. See [Exporting and Importing Cluster Configurations](export-import.html#concept_wft_dkq_34).
-
-## Individual Configuration Files and Cluster Configuration Files
-
-Geode applies the cluster-wide configuration files first and group-level configurations next. If a member has its own configuration files defined (cache.xml and gemfire.properties files), those configurations are applied last. Whenever possible, use the member group-level configuration files in the cluster configuration service to apply non-cluster-wide configurations on individual members.
-
-## Troubleshooting Tips
-
--   When you start a locator using `gfsh`, you should see the following message:
-
-    ``` pre
-    Cluster configuration service is up and running.
-    ```
-
-    If you do not see this message, there may be a problem with the cluster configuration service. Use the `status cluster-configuration-service` command to check the status of the cluster configuration.
-
-    -   If the command returns RUNNING, the cluster configuration is running normally.
-    -   If the command returns WAITING, run the `status locator` command. The output of this command returns the cause of the WAITING status.
--   If a server start fails with the following exception: `ClusterConfigurationNotAvailableException`, the cluster configuration service may not be in the RUNNING state. Because the server requests the cluster configuration from the locator, which is not available, the `start server` command fails.
--   You can determine what configurations a server received from a locator by examining the server's log file. See [Logging](../../managing/logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865).
--   If a `start server` command specifies a cache.xml file that conflicts with the existing cluster configuration, the server startup may fail.
--   If a `gfsh` command fails because the cluster configuration cannot be saved, the following message displays:
-
-    ``` pre
-    Failed to persist the configuration changes due to this command, 
-    Revert the command to maintain consistency. Please use "status cluster-config-service" 
-    to determine whether Cluster configuration service is RUNNING."
-    ```
-
--   There are some types of configurations that cannot be made using `gfsh`. See [gfsh Limitations](gfsh_persist.html#concept_r22_hyw_bl__section_bn3_23p_y4).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb b/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb
deleted file mode 100644
index b9e9a5d..0000000
--- a/configuring/cluster_config/gfsh_load_from_shared_dir.html.md.erb
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title:  Loading Existing Configuration Files into Cluster Configuration
----
-
-To load an existing cache.xml or gemfire.properties configuration file into a new cluster, use the `--load-cluster-configuration-from-dir` parameter when starting up the locator.
-
-You can use this technique to migrate a single server's configuration into the cluster configuration service. To load an existing cache.xml file or cluster configuration into a cluster, perform the following steps:
-
-1.  Make sure the locator is not currently running.
-2.  Within the locator's working directory, create a `cluster_config/cluster` directory if the directory does not already exist.
-3.  Copy the desired configuration files (cache.xml or gemfire.properties, or both) into the `cluster_config/cluster` directory.
-4.  Rename the configuration files as follows:
-    -   Rename `cache.xml` to `cluster.xml`
-    -   Rename `gemfire.properties` to `cluster.properties`
-
-5.  Start the locator in `gfsh` as follows:
-
-    ``` pre
-    gfsh>start locator --name=<locator_name> --enable-cluster-configuration=true --load-cluster-configuration-from-dir=true
-    ```
-
-    After successful startup, the locator should report that the "Cluster configuration service is up and running." Any servers that join this cluster and have `--use-cluster-configuration` set to true will pick up these configuration files.
-
-**Note:**
-If you make any manual modifications to the cluster.xml or cluster.properties (or group\_name.xml or group\_name.properties) files, you must stop the locator and then restart the locator using the `--load-cluster-configuration-from-dir` parameter. Direct file modifications are not picked up by the cluster configuration service without a locator restart.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/gfsh_persist.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/gfsh_persist.html.md.erb b/configuring/cluster_config/gfsh_persist.html.md.erb
deleted file mode 100644
index 85be33c..0000000
--- a/configuring/cluster_config/gfsh_persist.html.md.erb
+++ /dev/null
@@ -1,108 +0,0 @@
----
-title:  Overview of the Cluster Configuration Service
----
-
-The Apache Geode cluster configuration service persists cluster configurations created by `gfsh` commands to the locators in a cluster and distributes the configurations to members of the cluster.
-
-## Why Use the Cluster Configuration Service
-
-We highly recommend that you use the `gfsh` command line and the cluster configuration service as the primary mechanism to manage your distributed system configuration. Using a common cluster configuration reduces the amount of time you spend configuring individual members and enforces consistent configurations when bringing up new members in your cluster. You no longer need to reconfigure each new member that you add to the cluster. You no longer need to worry about validating your cache.xml file. It also becomes easier to propagate configuration changes across your cluster and deploy your configuration changes to different environments.
-
-You can use the cluster configuration service to:
-
--   Save the configuration for an entire Apache Geode cluster.
--   Restart members using a previously-saved configuration.
--   Export a configuration from a development environment and migrate that configuration to create a testing or production system.
--   Start additional servers without having to configure each server separately.
--   Configure some servers to host certain regions and other servers to host different regions, and configure all servers to host a set of common regions.
-
-## Using the Cluster Configuration Service
-
-To use the cluster configuration service in Geode, you must use dedicated, standalone locators in your deployment. You cannot use the cluster configuration service with co-located locators (locators running in another process such as a server) or in multicast environments.
-
-The standalone locators distribute configuration to all locators in a cluster. Every locator in the cluster with `--enable-cluster-configuration` set to true keeps a record of all cluster-level and group-level configuration settings.
-
-**Note:**
-The default behavior for `gfsh` is to create and save cluster configurations. You can disable the cluster configuration service by using the `--enable-cluster-configuration=false` option when starting locators.
-
-Subsequently, any servers that you start with `gfsh` that have `--use-cluster-configuration` set to `true` will pick up the cluster configuration from the locator as well as any appropriate group-level configurations (for member groups they belong to). To disable the cluster configuration service on a server, you must start the server with the `--use-cluster-configuration` parameter set to `false`. By default, the parameter is set to true.
-
-You can also load existing configuration files into the cluster configuration service by starting up a standalone locator with the parameter `--load-cluster-configuration-from-dir` set to true. See [Loading Existing Configuration Files into Cluster Configuration](gfsh_load_from_shared_dir.html).
-
-## How the Cluster Configuration Service Works
-
-When you use `gfsh` commands to create Apache Geode regions, disk-stores, and other objects, the cluster configuration service saves the configurations on each locator in the cluster (also called a Geode distributed system). If you specify a group when issuing these commands, a separate configuration is saved containing only configurations that apply to the group.
-
-When you use `gfsh` to start new Apache Geode servers, the locator distributes the persisted configurations to the new server. If you specify a group when starting the server, the server receives the group-level configuration in addition to the cluster-level configuration. Group-level configurations are applied after cluster-wide configurations; therefore you can use group-level configurations to override cluster-level settings.
-
-<img src="../../images_svg/cluster_config_overview.svg" id="concept_r22_hyw_bl__image_jjc_vhb_y4" class="image" />
-
-## gfsh Commands that Create Cluster Configurations
-
-The following `gfsh` commands cause the configuration to be written to all locators in the cluster (the locators write the configuration to disk):
-
--   `configure pdx`\*
--   `create region`
--   `alter region`
--   `alter runtime`
--   `destroy region`
--   `create index`
--   `destroy index`
--   `create disk-store`
--   `destroy disk-store`
--   `create async-event-queue`
--   `deploy jar`
--   `undeploy jar`
-
-**\*** Note that the `configure pdx` command must be executed *before* starting your data members. This command does not affect any currently running members in the system. Data members (with cluster configuration enabled) that are started after running this command will pick up the new PDX configuration.
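-
-As a sketch of that ordering (member and region names are only illustrative), `configure pdx` is issued after the locator starts but before any data members:
-
-``` pre
-gfsh>start locator --name=locator1
-gfsh>configure pdx --read-serialized=true
-gfsh>start server --name=server1
-gfsh>create region --name=exampleRegion --type=PARTITION
-```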
-
-The following gateway-related commands use the cluster configuration service, and their configuration is saved by locators:
-
--   `create gateway-sender`
--   `create gateway-receiver`
-
-## <a id="concept_r22_hyw_bl__section_bn3_23p_y4" class="no-quick-link"></a>gfsh Limitations
-
-There are some configurations that you cannot create using `gfsh`, and that you must configure using cache.xml or the API:
-
--   Client cache configuration
--   You cannot specify parameters and values for Java classes for the following objects (a cache.xml sketch illustrating this follows the list):
-    -   `function`
-    -   `custom-load-probe`
-    -   `cache-listener`
-    -   `cache-loader`
-    -   `cache-writer`
-    -   `compressor`
-    -   `serializer`
-    -   `instantiator`
-    -   `pdx-serializer`
-    
-        **Note:**
-        The `configure pdx` command always specifies the `org.apache.geode.pdx.ReflectionBasedAutoSerializer` class. You cannot specify a custom PDX serializer in gfsh.
-
-    -   `custom-expiry`
-    -   `initializer`
-    -   `declarable`
-    -   `lru-heap-percentage`
-    -   `lru-memory-size`
-    -   `partition-resolver`
-    -   `partition-listener`
-    -   `transaction-listener`
-    -   `transaction-writer`
--   Adding or removing a TransactionListener
--   Adding JNDI bindings
--   Deleting an AsyncEventQueue
-
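-As an illustration of the kind of declaration that requires cache.xml, the following sketch registers a cache listener with an initialization parameter; the listener class and parameter are hypothetical:
-
-``` pre
-<region name="exampleRegion" refid="REPLICATE">
-  <region-attributes>
-    <cache-listener>
-      <class-name>com.example.MyCacheListener</class-name>
-      <parameter name="threshold">
-        <string>100</string>
-      </parameter>
-    </cache-listener>
-  </region-attributes>
-</region>
-```
-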
-In addition, there are some limitations on configuring gateways using `gfsh`. You must use cache.xml or the Java APIs to configure the following:
-
--   Configuring a GatewayConflictResolver
--   You cannot specify parameters and values for Java classes for the following:
-    -   `gateway-listener`
-    -   `gateway-conflict-resolver`
-    -   `gateway-event-filter`
-    -   `gateway-transport-filter`
-    -   `gateway-event-substitution-filter`
-
-## <a id="concept_r22_hyw_bl__section_fh1_c3p_y4" class="no-quick-link"></a>Disabling the Cluster Configuration Service
-
-If you do not want to use the cluster configuration service, start up your locator with the `--enable-cluster-configuration` parameter set to false or do not use standalone locators. You will then need to configure the cache (via cache.xml or API) separately on all your distributed system members.
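-
-For example:
-
-``` pre
-gfsh>start locator --name=locator1 --enable-cluster-configuration=false
-```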

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/gfsh_remote.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/gfsh_remote.html.md.erb b/configuring/cluster_config/gfsh_remote.html.md.erb
deleted file mode 100644
index 9132e44..0000000
--- a/configuring/cluster_config/gfsh_remote.html.md.erb
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title:  Using gfsh to Manage a Remote Cluster Over HTTP or HTTPS
----
-
-You can connect `gfsh` via HTTP or HTTPS to a remote cluster and manage the cluster using `gfsh` commands.
-
-To connect `gfsh` using the HTTP protocol to a remote GemFire cluster:
-
-1.  Launch `gfsh`. See [Starting gfsh](../../tools_modules/gfsh/starting_gfsh.html#concept_DB959734350B488BBFF91A120890FE61).
-2.  When starting the remote cluster on the remote host, you can optionally specify the `http-service-bind-address` and `http-service-port` GemFire properties when starting up your JMX manager (server or locator). These properties can then be used in the URL when connecting from your local system to the HTTP service in the remote cluster. For example:
-
-    ``` pre
-    gfsh>start server --name=server1 --J=-Dgemfire.jmx-manager=true \
-    --J=-Dgemfire.jmx-manager-start=true --J=-Dgemfire.http-service-port=8080 \
-    --J=-Dgemfire.http-service-bind-address=myremotecluster.example.com
-    ```
-
-    This command must be executed directly on the host machine that will ultimately act as the remote GemFire server that hosts the HTTP service for remote administration. (You cannot launch a GemFire server remotely.)
-
-3.  On your local system, run the `gfsh` `connect` command to connect to the remote system. Include the `--use-http` and `--url` parameters. For example:
-
-    ``` pre
-    gfsh>connect --use-http=true --url="http://myremotecluster.example.com:8080/gemfire/v1"
-
-    Successfully connected to: GemFire Manager's HTTP service @ http://myremotecluster.example.com:8080/gemfire/v1
-    ```
-
-    See [connect](../../tools_modules/gfsh/command-pages/connect.html).
-
-    `gfsh` is now connected to the remote system. Most `gfsh` commands will now execute on the remote system; however, there are exceptions. The following commands are executed locally, on the machine where `gfsh` is running:
-      -   `alter disk-store`
-      -   `compact offline-disk-store`
-      -   `describe offline-disk-store`
-      -   `help`
-      -   `hint`
-      -   `sh` (for executing OS commands)
-      -   `sleep`
-      -   `start jconsole` (however, you can connect JConsole to a remote cluster when gfsh is connected to the cluster via JMX)
-      -   `start jvisualvm`
-      -   `start locator`
-      -   `start server`
-      -   `start vsd`
-      -   `status locator` `*`
-      -   `status server` `*`
-      -   `stop locator` `*`
-      -   `stop server` `*`
-      -   `run` (for executing gfsh scripts)
-      -   `validate disk-store`
-      -   `version`
-
-    `*`You can stop and obtain the status of *remote locators and servers* when `gfsh` is connected to the cluster via JMX or HTTP/S by using the `--name` option for these `stop`/`status` commands. If you are using the `--pid` or `--dir` option for these commands, then the `stop`/`status` commands are executed only locally.
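-
-    For example, while connected over HTTP, a sketch of checking and stopping a remote server by member name (the name is illustrative):
-
-    ``` pre
-    gfsh>status server --name=server1
-    gfsh>stop server --name=server1
-    ```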
-
-To configure SSL for the remote connection (HTTPS), enable SSL for the `http` component in
-<span class="ph filepath">gemfire.properties</span> or <span class="ph filepath">gfsecurity.properties</span>,
-or upon server startup. See [SSL](../../managing/security/ssl_overview.html) for details on configuring
-SSL parameters. These SSL parameters also apply to all HTTP services hosted on the configured JMX Manager,
-which can include the following:
-
--   Developer REST API service
--   Pulse monitoring tool
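-
-A hypothetical <span class="ph filepath">gemfire.properties</span> sketch of these SSL settings (property names as documented in the SSL reference; paths and passwords are placeholders):
-
-``` pre
-ssl-enabled-components=http
-ssl-keystore=/path/to/keystore.jks
-ssl-keystore-password=changeit
-ssl-truststore=/path/to/truststore.jks
-ssl-truststore-password=changeit
-```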

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/persisting_configurations.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/persisting_configurations.html.md.erb b/configuring/cluster_config/persisting_configurations.html.md.erb
deleted file mode 100644
index e18bb30..0000000
--- a/configuring/cluster_config/persisting_configurations.html.md.erb
+++ /dev/null
@@ -1,320 +0,0 @@
----
-title:  Tutorial—Creating and Using a Cluster Configuration
----
-
-A short walk-through that uses a single computer to demonstrate how to use `gfsh` to create a cluster configuration for a Geode cluster.
-
-The `gfsh` command-line tool allows you to configure and start a Geode cluster. The cluster configuration service uses Apache Geode locators to store the configuration at the group and cluster levels and serves these configurations to new members as they are started. The locators store the configurations in a hidden region that is available to all locators and also write the configuration data to disk as XML files. Configuration data is updated as `gfsh` commands are executed.
-
-This section provides a walk-through example of configuring a simple Apache Geode cluster and then re-using that configuration in a new context.
-
-1.  Create a working directory (for example, `/home/username/my_gemfire`) and switch to the new directory. This directory will contain the configurations for your cluster.
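-
-    For example:
-
-    ``` pre
-    $ mkdir /home/username/my_gemfire
-    $ cd /home/username/my_gemfire
-    ```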
-
-2.  Start the `gfsh` command-line tool. For example:
-
-    ``` pre
-    $ gfsh
-    ```
-
-    The `gfsh` command prompt displays.
-
-    ``` pre
-        _________________________     __
-       / _____/ ______/ ______/ /____/ /
-      / /  __/ /___  /_____  / _____  /
-     / /__/ / ____/  _____/ / /    / /
-    /______/_/      /______/_/    /_/    1.0.0
-
-    Monitor and Manage Apache Geode
-    gfsh>
-
-    ```
-
-3.  Start a locator using the command in the following example:
-
-    ``` pre
-    gfsh>start locator --name=locator1
-    Starting a GemFire Locator in /Users/username/my_gemfire/locator1...
-    .............................
-    Locator in /Users/username/my_gemfire/locator1 on 192.0.2.0[10334] as locator1 is currently online.
-    Process ID: 5203
-    Uptime: 15 seconds
-    GemFire Version: 8.1.0
-    Java Version: 1.7.0_71
-    Log File: /Users/username/my_gemfire/locator1/locator1.log
-    JVM Arguments: -Dgemfire.enable-cluster-configuration=true
-    -Dgemfire.load-cluster-configuration-from-dir=false
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
-    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/locator-dependencies.jar
-
-    Successfully connected to: [host=192.0.2.0, port=1099]
-
-    Cluster configuration service is up and running.
-    ```
-
-    Note that `gfsh` responds with a message indicating that the cluster configuration service is up and running. If you see a message indicating a problem, review the locator log file for possible errors. The path to the log file is displayed in the output from `gfsh`.
-
-4.  Start Apache Geode servers using the commands in the following example:
-
-    ``` pre
-    gfsh>start server --name=server1 --group=group1
-    Starting a GemFire Server in /Users/username/my_gemfire/server1...
-    .....
-    Server in /Users/username/my_gemfire/server1 on 192.0.2.0[40404] as server1 is currently online.
-    Process ID: 5627
-    Uptime: 2 seconds
-    GemFire Version: 8.1.0
-    Java Version: 1.7.0_71
-    Log File: /Users/username/my_gemfire/server1/server1.log
-    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334] -Dgemfire.groups=group1
-    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
-    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
-
-    gfsh>start server --name=server2 --group=group1 --server-port=40405
-    Starting a GemFire Server in /Users/username/my_gemfire/server2...
-    .....
-    Server in /Users/username/my_gemfire/server2 on 192.0.2.0[40405] as server2 is currently online.
-    Process ID: 5634
-    Uptime: 2 seconds
-    GemFire Version: 8.1.0
-    Java Version: 1.7.0_71
-    Log File: /Users/username/my_gemfire/server2/server2.log
-    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334] -Dgemfire.groups=group1
-    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
-    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
-
-    gfsh>start server --name=server3 --server-port=40406
-    Starting a GemFire Server in /Users/username/my_gemfire/server3...
-    .....
-    Server in /Users/username/my_gemfire/server3 on 192.0.2.0[40406] as server3 is currently online.
-    Process ID: 5637
-    Uptime: 2 seconds
-    GemFire Version: 8.1.0
-    Java Version: 1.7.0_71
-    Log File: /Users/username/my_gemfire/server3/server3.log
-    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334]
-    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
-    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
-    ```
-
-    Note that the `gfsh` commands you used to start `server1` and `server2` specify a group named `group1` while the command for `server3` did not specify a group name.
-
-5.  Create some regions using the commands in the following example:
-
-    ``` pre
-    gfsh>create region --name=region1 --group=group1 --type=REPLICATE
-    Member  | Status
-    ------- | --------------------------------------
-    server2 | Region "/region1" created on "server2"
-    server1 | Region "/region1" created on "server1"
-
-    gfsh>create region --name=region2 --type=REPLICATE
-    Member  | Status
-    ------- | --------------------------------------
-    server1 | Region "/region2" created on "server1"
-    server2 | Region "/region2" created on "server2"
-    server3 | Region "/region2" created on "server3"
-    ```
-
-    Note that `region1` is created on all cache servers that specified the group named `group1` when starting the cache server (`server1` and `server2`, in this example). `region2` is created on all members because no group was specified.
-
-6.  Deploy jar files. Use the `gfsh deploy` command to deploy application jar files to all members or to a specified group of members. The following example deploys the `mail.jar` and `mx4j.jar` files from the distribution. (Note: This is only an example; you do not need to deploy these files to use the Cluster Configuration Service. Alternatively, you can use any two jar files for this demonstration.)
-
-    ``` pre
-    gfsh>deploy --group=group1 --jar=${SYS_GEMFIRE_DIR}/lib/mail.jar
-    Post substitution: deploy --group=group1 --jar=/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/mail.jar
-    Member  | Deployed JAR | Deployed JAR Location
-    ------- | ------------ | -------------------------------------------------
-    server1 | mail.jar     | /Users/username/my_gemfire/server1/vf.gf#mail.jar#1
-    server2 | mail.jar     | /Users/username/my_gemfire/server2/vf.gf#mail.jar#1
-
-    gfsh>deploy --jar=${SYS_GEMFIRE_DIR}/lib/mx4j.jar
-    Post substitution: deploy --jar=/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/mx4j.jar
-    Member  | Deployed JAR | Deployed JAR Location
-    ------- | ------------ | -------------------------------------------------
-    server1 | mx4j.jar     | /Users/username/my_gemfire/server1/vf.gf#mx4j.jar#1
-    server2 | mx4j.jar     | /Users/username/my_gemfire/server2/vf.gf#mx4j.jar#1
-    server3 | mx4j.jar     | /Users/username/my_gemfire/server3/vf.gf#mx4j.jar#1
-    ```
-
-    Note that the `mail.jar` file was deployed only to the members of `group1` and the `mx4j.jar` was deployed to all members.
-
-7.  Export the cluster configuration.
-    You can use the `gfsh export cluster-configuration` command to create a zip file that contains the cluster's persisted configuration. The zip file contains a copy of the contents of the `cluster_config` directory. For example:
-
-    ``` pre
-    gfsh>export cluster-configuration --zip-file-name=myClusterConfig.zip --dir=/Users/username
-    ```
-
-    Apache Geode writes the cluster configuration to the specified zip file.
-
-    ``` pre
-    Downloading cluster configuration : /Users/username/myClusterConfig.zip
-    ```
-
-    The remaining steps demonstrate how to use the cluster configuration you just created.
-
-8.  Shut down the cluster using the following commands:
-
-    ``` pre
-    gfsh>shutdown --include-locators=true
-    As a lot of data in memory will be lost, including possibly events in queues, do you
-    really want to shutdown the entire distributed system? (Y/n): Y
-    Shutdown is triggered
-
-    gfsh>
-    No longer connected to 192.0.2.0[1099].
-    gfsh>
-    ```
-
-9.  Exit the `gfsh` command shell:
-
-    ``` pre
-    gfsh>quit
-    Exiting...
-    ```
-
-10. Create a new working directory (for example: `new_gemfire`) and switch to the new directory.
-11. Start the `gfsh` command shell:
-
-    ``` pre
-    $ gfsh
-    ```
-
-12. Start a new locator. For example:
-
-    ``` pre
-    gfsh>start locator --name=locator2 --port=10335
-    Starting a GemFire Locator in /Users/username/new_gemfire/locator2...
-    .............................
-    Locator in /Users/username/new_gemfire/locator2 on 192.0.2.0[10335] as locator2 is currently online.
-    Process ID: 5749
-    Uptime: 15 seconds
-    GemFire Version: 8.1.0
-    Java Version: 1.7.0_71
-    Log File: /Users/username/new_gemfire/locator2/locator2.log
-    JVM Arguments: -Dgemfire.enable-cluster-configuration=true
-    -Dgemfire.load-cluster-configuration-from-dir=false
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
-    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/locator-dependencies.jar
-
-    Successfully connected to: [host=192.0.2.0, port=1099]
-
-    Cluster configuration service is up and running.
-    ```
-
-13. Import the cluster configuration using the `import cluster-configuration` command. For example:
-
-    ``` pre
-    gfsh>import cluster-configuration --zip-file-name=/Users/username/myClusterConfig.zip
-    Cluster configuration successfully imported
-    ```
-
-    Note that the `locator2` directory now contains a `cluster_config` subdirectory.
-
-14. Start a server that does not reference a group:
-
-    ``` pre
-    gfsh>start server --name=server4 --server-port=40414
-    Starting a GemFire Server in /Users/username/new_gemfire/server4...
-    ........
-    Server in /Users/username/new_gemfire/server4 on 192.0.2.0[40414] as server4 is currently online.
-    Process ID: 5813
-    Uptime: 4 seconds
-    GemFire Version: 8.1.0
-    Java Version: 1.7.0_71
-    Log File: /Users/username/new_gemfire/server4/server4.log
-    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10335]
-    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
-    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
-    ```
-
-15. Start another server that references `group1`:
-
-    ``` pre
-    gfsh>start server --name=server5 --group=group1 --server-port=40415
-    Starting a GemFire Server in /Users/username/new_gemfire/server5...
-    .....
-    Server in /Users/username/new_gemfire/server5 on 192.0.2.0[40415] as server5 is currently online.
-    Process ID: 5954
-    Uptime: 2 seconds
-    GemFire Version: 8.1.0
-    Java Version: 1.7.0_71
-    Log File: /Users/username/new_gemfire/server5/server5.log
-    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10335] -Dgemfire.groups=group1
-    -Dgemfire.use-cluster-configuration=true -XX:OnOutOfMemoryError=kill -KILL %p
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: /Users/username/Pivotal_GemFire_810_b50582_Linux/lib/gemfire.jar
-    :/Users/username/Pivotal_GemFire_810_b50582_Linux/lib/server-dependencies.jar
-    ```
-
-16. Use the `list regions` command to display the configured regions. Note that `region1` and `region2`, which were configured in the original cluster, are both available.
-
-    ``` pre
-    gfsh>list regions
-    List of regions
-    ---------------
-    region1
-    region2
-    ```
-
-17. Use the `describe region` command to see which members host each region. Note that `region1` is hosted only by `server5`, because `server5` was started with the `group1` configuration. `region2` is hosted on both `server4` and `server5` because it was created without a specified group.
-
-    ``` pre
-    gfsh>describe region --name=region1
-    ..........................................................
-    Name            : region1
-    Data Policy     : replicate
-    Hosting Members : server5
-
-    Non-Default Attributes Shared By Hosting Members
-
-     Type  | Name | Value
-    ------ | ---- | -----
-    Region | size | 0
-
-
-    gfsh>describe region --name=region2
-    ..........................................................
-    Name            : region2
-    Data Policy     : replicate
-    Hosting Members : server5
-                      server4
-
-    Non-Default Attributes Shared By Hosting Members
-
-     Type  | Name | Value
-    ------ | ---- | -----
-    Region | size | 0
-    ```
-
-    This new cluster uses the same configuration as the original system. You can start any number of servers using this cluster configuration. All servers will receive the cluster-level configuration. Servers that specify `group1` also receive the `group1` configuration.
-
-18. Shut down your cluster using the following commands:
-
-    ``` pre
-    gfsh>shutdown --include-locators=true
-    As a lot of data in memory will be lost, including possibly events in queues,
-      do you really want to shutdown the entire distributed system? (Y/n): Y
-    Shutdown is triggered
-
-    gfsh>
-    No longer connected to 192.0.2.0[1099].
-    ```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/cluster_config/using_member_groups.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/cluster_config/using_member_groups.html.md.erb b/configuring/cluster_config/using_member_groups.html.md.erb
deleted file mode 100644
index 524d787..0000000
--- a/configuring/cluster_config/using_member_groups.html.md.erb
+++ /dev/null
@@ -1,27 +0,0 @@
----
-title:  Using Member Groups
----
-
-Apache Geode allows you to organize your distributed system members into logical member groups.
-
-The use of member groups in Apache Geode is optional. The benefit of using member groups is the ability to coordinate certain operations on members based on logical group membership. For example, by defining and using member groups you can:
-
--   Alter a subset of configuration properties for a specific member or members. See [alter runtime](../../tools_modules/gfsh/command-pages/alter.html#topic_7E6B7E1B972D4F418CB45354D1089C2B) in `gfsh`.
--   Perform certain disk operations like disk-store compaction across a member group. See [Disk Store Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA) for a list of commands.
--   Manage specific indexes or regions across all members of a group.
--   Start and stop multi-site (WAN) services such as gateway senders and gateway receivers across a member group.
--   Deploy or undeploy JAR applications on all members in a group.
--   Execute functions on all members of a specific group.
-
-You define group names in the `groups` property of your member's `gemfire.properties` file or upon member startup in `gfsh`.
-
-**Note:**
-Any roles defined in the currently existing `roles` property will now be considered a group. If you wish to add membership roles to your distributed system, you should add them as member groups in the `groups` property. The `roles` property has been deprecated in favor of using the `groups` property.
-
-To add a member to a group, add the name of a member group to the member's `gemfire.properties` file before startup, or start the member in `gfsh` and pass the `--group` argument at startup time.
-
-A single member can belong to more than one group.
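-
-For example, a sketch of both approaches (the group names are illustrative). In the member's `gemfire.properties` file:
-
-``` pre
-groups=group1,group2
-```
-
-Or at startup in `gfsh`:
-
-``` pre
-gfsh>start server --name=server1 --group=group1
-```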
-
-Member groups can also be used to organize members from either a client's perspective or from a peer member's perspective. See [Organizing Peers into Logical Member Groups](../../topologies_and_comm/p2p_configuration/configuring_peer_member_groups.html) and [Organizing Servers Into Logical Member Groups](../../topologies_and_comm/cs_configuration/configure_servers_into_logical_groups.html) for more information. On the client side, you can supply the member group name when configuring the client's connection pool: use the `server-group` attribute of the &lt;pool&gt; element in the client's cache.xml.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/running/change_file_spec.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/running/change_file_spec.html.md.erb b/configuring/running/change_file_spec.html.md.erb
deleted file mode 100644
index 8edb68b..0000000
--- a/configuring/running/change_file_spec.html.md.erb
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title:  Changing the File Specifications
----
-
-You can change all file specifications in the `gemfire.properties` file and at the command line.
-
-**Note:**
-Geode applications can use the API to pass `java.lang.System` properties to the distributed system connection. Doing so changes file specifications made at the command line and in `gemfire.properties`. You can verify an application's property settings in the configuration information logged at application startup. The configuration is listed when the `gemfire.properties` `log-level` is set to `config` or lower.
-
-This invocation of the application, `testApplication.TestApp1`, provides non-default specifications for both the `cache.xml` and `gemfire.properties`:
-
-``` pre
-java -Dgemfire.cache-xml-file=/gemfireSamples/examples/dist/cacheRunner/queryPortfolios.xml \
-     -DgemfirePropertyFile=defaultConfigs/gemfire.properties \
-     testApplication.TestApp1
-```
-
-The `gfsh` `start server` command can use the same specifications:
-
-``` pre
-gfsh>start server \
---J=-Dgemfire.cache-xml-file=/gemfireSamples/examples/dist/cacheRunner/queryPortfolios.xml \
---J=-DgemfirePropertyFile=defaultConfigs/gemfire.properties
-```
-
-You can also change the specifications for the `cache.xml` file inside the `gemfire.properties` file.
-
-**Note:**
-Specifications in `gemfire.properties` files cannot use environment variables.
-
-Example `gemfire.properties` file with non-default `cache.xml` specification:
-
-``` pre
-#Tue May 09 17:53:54 PDT 2006
-mcast-address=192.0.2.0
-mcast-port=10333
-locators=
-cache-xml-file=/gemfireSamples/examples/dist/cacheRunner/queryPortfolios.xml
-```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/running/default_file_specs.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/running/default_file_specs.html.md.erb b/configuring/running/default_file_specs.html.md.erb
deleted file mode 100644
index 37f9ee3..0000000
--- a/configuring/running/default_file_specs.html.md.erb
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title:  Default File Specifications and Search Locations
----
-
-Each file has a default name, a set of file search locations, and a system property you can use to override the defaults.
-
-To use the default specifications, place the file at the top level of its directory or jar file. The system properties are standard file specifications that can have absolute or relative pathnames and filenames.
-
-**Note:**
-If you do not specify an absolute file path and name, the search examines all search locations for the file.
-
-<table>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="33%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>Default File Specification</th>
-<th>Search Locations for Relative File Specifications</th>
-<th>Available Property for File Specification</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><code class="ph codeph">gemfire.properties</code></td>
-<td><ol>
-<li>current directory</li>
-<li>home directory</li>
-<li>CLASSPATH</li>
-</ol></td>
-<td>As a Java system property, use <code class="ph codeph">gemfirePropertyFile</code></td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">cache.xml</code></td>
-<td><ol>
-<li>current directory</li>
-<li>CLASSPATH</li>
-</ol></td>
-<td>In <code class="ph codeph">gemfire.properties</code>, use the <code class="ph codeph">cache-xml-file</code> property</td>
-</tr>
-</tbody>
-</table>
-
-Examples of valid `gemfirePropertyFile` specifications:
-
--   `/zippy/users/jpearson/gemfiretest/gemfire.properties`
--   `c:\gemfiretest\gemfire.prp`
--   `myGF.properties`
--   `test1/gfprops`
-
-For the `test1/gfprops` specification, if you launch your Geode system member from `/testDir` in a Unix file system, Geode looks for the file in this order until it finds the file or exhausts all locations:
-
-1.  `/testDir/test1/gfprops`
-2.  `<yourHomeDir>/test1/gfprops`
-3.  under every location in your `CLASSPATH` for `test1/gfprops`
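-
-For example, a hypothetical invocation that supplies this relative specification (the application class name is illustrative):
-
-``` pre
-java -DgemfirePropertyFile=test1/gfprops MyGeodeApp
-```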
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/running/deploy_config_files_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/running/deploy_config_files_intro.html.md.erb b/configuring/running/deploy_config_files_intro.html.md.erb
deleted file mode 100644
index 758b25a..0000000
--- a/configuring/running/deploy_config_files_intro.html.md.erb
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title:  Main Steps to Deploying Configuration Files
----
-
-These are the basic steps for deploying configuration files, with related detail in sections that follow.
-
-1.  Determine which configuration files you need for your installation.
-2.  Place the files in your directories or jar files.
-3.  For any file with a non-default name or location, provide the file specification in the system properties file and/or in the member `CLASSPATH`.
-
-## <a id="concept_337B365782E44951B73F33E1E17AB07B__section_53C98F9DB1584E3BABFA315CDF254A92" class="no-quick-link"></a>Geode Configuration Files
-
--   **`gemfire.properties`**. Contains the settings required by members of a distributed system. These settings include licensing, system member discovery, communication parameters, logging, and statistics. See the [Reference](../../reference/book_intro.html#reference).
--   **`gfsecurity.properties`**. An optional separate file that contains security-related (`security-*`) settings that are otherwise defined in `gemfire.properties`. Placing these member properties into a separate file allows you to restrict user access to those specific settings. See the [Reference](../../reference/book_intro.html#reference).
--   **`cache.xml`**. Declarative cache configuration file. This file contains XML declarations for cache, region, and region entry configuration. You also use it to configure disk stores, database login credentials, server and remote site location information, and socket information. See [cache.xml](../../reference/topics/chapter_overview_cache_xml.html#cache_xml).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/configuring/running/deploying_config_files.html.md.erb
----------------------------------------------------------------------
diff --git a/configuring/running/deploying_config_files.html.md.erb b/configuring/running/deploying_config_files.html.md.erb
deleted file mode 100644
index 76c036a..0000000
--- a/configuring/running/deploying_config_files.html.md.erb
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title:  Deploying Configuration Files without the Cluster Configuration Service
----
-
-You can deploy your Apache Geode configuration files in your system directory structure or in jar files. You determine how you want to deploy your configuration files and set them up accordingly.
-
-**Note:**
-If you use the cluster configuration service to create and manage your Apache Geode cluster configuration, the procedures described in this section are not needed because Geode automatically manages the distribution of the configuration files and jar files to members of the cluster. See [Overview of the Cluster Configuration Service](../cluster_config/gfsh_persist.html).
-
-You can use the procedures described in this section to distribute configurations that are member-specific, or for situations where you do not want to use the cluster configuration service.
-
--   **[Main Steps to Deploying Configuration Files](../../configuring/running/deploy_config_files_intro.html)**
-
-    These are the basic steps for deploying configuration files, with related detail in sections that follow.
-
--   **[Default File Specifications and Search Locations](../../configuring/running/default_file_specs.html)**
-
-    Each file has a default name, a set of file search locations, and a system property you can use to override the defaults.
-
--   **[Changing the File Specifications](../../configuring/running/change_file_spec.html)**
-
-    You can change all file specifications in the `gemfire.properties` file and at the command line.
-
--   **[Deploying Configuration Files in JAR Files](../../configuring/running/deploying_config_jar_files.html)**
-
-    This section provides a procedure and an example for deploying configuration files in JAR files.
-
-


