drill-commits mailing list archives

From bridg...@apache.org
Subject [09/15] drill git commit: reorg and partial update odbc
Date Mon, 08 Jun 2015 23:06:58 GMT
http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md
new file mode 100644
index 0000000..a69dd7a
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md
@@ -0,0 +1,251 @@
+---
+title: "Tableau Examples"
+parent: "Using Drill with BI Tools"
+---
+You can generate reports in Tableau using ODBC connections to Drill data
+sources on Windows. Each example in this section takes you through the steps to create a
+DSN to a Drill data source and then access the data in Tableau 8.1.
+
+This section includes the following examples:
+
+  * Connecting to a Hive table  
+  * Using a view to connect to HBase table data  
+  * Using custom SQL to connect to data in a Parquet file
+
+The steps and results of these examples assume pre-configured schemas and
+source data. You configure schemas as storage plugin instances on the Storage
+tab of the [Drill Web UI]({{ site.baseurl }}/docs/getting-to-know-the-drill-sandbox#storage-plugin-overview). Also, the examples assume you [enabled the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) in Drill.  
+
+## Example: Connect to a Hive Table in Tableau
+
+To access Hive tables in Tableau 8.1, connect to the Hive schema using a DSN
+and then visualize the data in Tableau.  
+{% include startnote.html %}This example assumes that there is a schema named hive.default which contains a table named student_hive.{% include endnote.html %}
+
+----------
+
+## Step 1: Create a DSN to a Hive Table
+
+In this step, we will create a DSN that accesses a Hive table.
+
+  1. To launch the ODBC Administrator, click **Start > All Programs > MapR Drill ODBC Driver 1.0 (32|64-bit) > (32|64-bit) ODBC Administrator**.
+     The *ODBC Data Source Administrator* window appears.
+  2. On the **System DSN** tab, click **Add**.
+  3. Select **MapR Drill ODBC Driver** and click **Finish**.  
+     The *MapR Drill ODBC Driver DSN Setup* window appears.
+  4. Enter a name for the data source.
+  5. Specify the connection type based on your requirements. The connection type provides the DSN access to Drill data sources.  
+     In this example, we are connecting to a ZooKeeper quorum.
+  6. In the **Schema** field, select the Hive schema.
+     In this example, the Hive schema is named hive.default.
+     ![]({{ site.baseurl }}/docs/img/Hive_DSN.png)
+  7. Click **OK** to create the DSN and return to the ODBC Data Source Administrator window.
+  8. Click **OK** to close the ODBC Data Source Administrator.
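+
+Before you switch to Tableau, you can sanity-check the DSN's schema from Drill's SQL shell. This is only a sketch, assuming the hive.default schema and student_hive table used in this example; note that `default` is a reserved word in Drill and must be enclosed in backticks:
+
+    SELECT * FROM hive.`default`.student_hive LIMIT 10;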
+
+----------
+
+## Step 2: Connect to Hive Tables in Tableau
+
+Now, we can connect to Hive tables.
+
+  1. In Tableau, click **Data > Connect to Data**.
+  2. In the *On a server* section, click **Other Databases (ODBC)**.  
+     The *Generic ODBC Connection* dialog appears.
+  3. In the *Connect Using* section, select the DSN that connects to the Hive table.   
+     -or-  
+     To create a connection without an existing DSN, select the **Driver** option, select the **MapR Drill ODBC Driver** from the list, and click **Connect**. Then, configure the connection to the Hive table and click **OK**.
+  4. In the **Schema** field, select the Hive schema.  
+     In this example, the Hive schema is named hive.default.
+  5. In the *Table* section, verify that **Single Table** is selected and then click the Search icon.  
+     A list of tables appears.
+  6. Select the table from the list and click **Select**.   
+     In this example, the table name is student_hive.
+  7. Click **OK** to complete the connection.  
+     ![]({{ site.baseurl }}/docs/img/ODBC_HiveConnection.png)
+  8. In the *Data Connection* dialog, click **Connect Live**.
+
+----------
+
+## Step 3: Visualize the Data in Tableau
+
+Once you connect to the data, the columns appear in the Data window. To
+visualize the data, drag fields from the Data window to the workspace view.
+
+For example, you can visualize the data in this way:
+
+![]({{ site.baseurl }}/docs/img/student_hive.png)
+
+## Example: Connect to Self-Describing Data in Tableau
+
+You can connect to self-describing data in Tableau in the following ways:
+
+  1. Use Drill Explorer to explore the self-describing data sources, create a Drill view, and then use ODBC to access the view in Tableau as if it were a table. 
+  2. Use Tableau’s Custom SQL to query the self-describing data directly. 
+
+### Option 1. Using a View to Connect to Self-Describing Data
+
+The following example describes how to create a view of an HBase table and
+connect to that view in Tableau 8.1. You can also use these steps to access
+data for other sources such as Hive, Parquet, JSON, TSV, and CSV.
+
+{% include startnote.html %}This example assumes that there is a schema named hbase that contains a table named s_voters and a schema named dfs.default that points to a writable location.{% include endnote.html %}
+
+#### Step 1. Create a View and a DSN
+
+In this step, we will use the ODBC Administrator to access the Drill Explorer
+where we can create a view of an HBase table. Then, we will use the ODBC
+Administrator to create a DSN that connects to the view.
+
+  1. To launch the ODBC Administrator, click **Start > All Programs > MapR Drill ODBC Driver 1.0 (32|64-bit) > (32|64-bit) ODBC Administrator**.  
+     The *ODBC Data Source Administrator* window appears.
+  2. On the System DSN tab, click **Add**.
+  3. Select **MapR Drill ODBC Driver** and click **Finish**.
+     The *MapR Drill ODBC Driver DSN Setup* window appears.
+  4. Specify the Connection Type based on your requirements.
+     The connection type provides the DSN access to a Drillbit. For more
+information, see [Connection Type]({{ site.baseurl }}/docs/configuring-connections-on-windows/#connection-type).
+  5. Click **Drill Explorer** to start exploring the data.
+     The *Drill Explorer* dialog appears. You can use the Browse tab to visually
+explore the metadata and data available from Drill data sources. Advanced
+users can use the SQL tab to type SQL manually to explore the data and save the
+query as a view.
+  6. Select the schema that you want to create a view for.
+      ![]({{ site.baseurl }}/docs/img/Hbase_Browse.png)        
+     Drill Explorer displays the metadata and column families for the selected
+HBase table.
+  7. To create a view of the HBase table, click the **SQL** tab.  
+     By default, the View Definition SQL field contains: `SELECT * FROM
+<schema>.<table>`
+  8. To create the view, enter SQL in the _View Definition SQL_ section and then click **Preview** to verify that the results are as expected.   
+      ![]({{ site.baseurl }}/docs/img/ODBC_HbasePreview2.png)
+     In this example, the following SQL was entered:
+       
+        SELECT cast(row_key as integer) voter_id, convert_from(voter.onecf.name,
+        'UTF8') name, cast(voter.twocf.age as integer) age,
+        cast(voter.twocf.registration as varchar(20)) registration,
+        cast(voter.threecf.contributions as decimal(6,2)) contributions,
+        cast(voter.threecf.voterzone as integer)
+        voterzone,cast(voter.fourcf.create_date as timestamp) create_time FROM
+        hbase.voter
+
+     HBase does not contain type information, so you need to cast the data in Drill
+Explorer. For information about SQL query support, see the [SQL Reference]({{ site.baseurl }}/docs/sql-reference).
+  9. To save the view, click **Create As**.
+  10. Specify the schema where you want to save the view, enter a name for the view, and click **Save**.  
+
+       ![]({{ site.baseurl }}/docs/img/HbaseViewCreation0.png)
+
+  11. Close the Drill Explorer to return to the _MapR Drill ODBC Driver DSN Setup_ window.  
+      Now that we have created the view, we can create a DSN that can access the
+view.
+  12. Enter a data source name and select the schema where you saved the view.  
+      In this example, we saved the view to dfs.default.        
+       ![]({{ site.baseurl }}/docs/img/HbaseViewDSN.png)
+  13. Click **OK** to create the DSN and return to the _ODBC Data Source Administrator_ window.
+  14. Click **OK** to close the ODBC Data Source Administrator.
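+
+Saving a view in Drill Explorer corresponds to a CREATE VIEW statement in Drill. As a hedged sketch (the view name hbase_s_voter and the query are taken from this example; the exact SQL that Drill Explorer issues may differ), the saved view is roughly:
+
+    CREATE VIEW dfs.`default`.hbase_s_voter AS
+    SELECT cast(row_key as integer) voter_id,
+           convert_from(voter.onecf.name, 'UTF8') name,
+           cast(voter.twocf.age as integer) age,
+           cast(voter.twocf.registration as varchar(20)) registration,
+           cast(voter.threecf.contributions as decimal(6,2)) contributions,
+           cast(voter.threecf.voterzone as integer) voterzone,
+           cast(voter.fourcf.create_date as timestamp) create_time
+    FROM hbase.voter;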
+
+#### Step 2. Connect to the View from Tableau
+
+Now, we can connect to the view in Tableau.
+
+  1. In Tableau, click **Data > Connect to Data**.
+  2. In the *On a server* section, click **Other Databases (ODBC)**.  
+     The *Generic ODBC Connection* dialog appears.
+  3. In the *Connect Using* section, select the DSN that connects to the schema that contains the view that you created.   
+     -or-  
+     To create a connection without an existing DSN, select the **Driver** option, select the **MapR Drill ODBC Driver** from the list, and click **Connect**. Then, configure the connection using the steps in Step 1 and click **OK**.  
+     In this example, we created SQLView-DrillDataSource to access the view.
+  4. In the **Schema** field, select the schema that contains the views that you created in Drill Explorer.  
+     In this example, we saved the view to the dfs_default schema.
+  5. In the *Table* section, verify that **Single Table** is selected and then click the Search icon.  
+     A list of views appears.
+  6. Select the view from the list and click **Select**.   
+     In this example, we need to select hbase_s_voter.  
+      ![]({{ site.baseurl }}/docs/img/SelectHbaseView.png)
+  7. Click **OK** to complete the connection.   
+      ![]({{ site.baseurl }}/docs/img/ODBC_HbaseView.png)
+  8. In the *Data Connection* dialog, click **Connect Live**.
+
+#### Step 3. Visualize the Data in Tableau
+
+Once you connect to the data in Tableau, the columns appear in the Data
+window. To visualize the data, drag fields from the Data window to the
+workspace view.
+
+For example, you can visualize the data in this way:
+
+![]({{ site.baseurl }}/docs/img/VoterContributions_hbaseview.png)
+
+### Option 2. Using Custom SQL to Access Self-Describing Data
+
+The following example describes how to use custom SQL to connect to a Parquet
+file and then visualize the data in Tableau 8.1. You can use the same steps to
+access data from other sources such as Hive, HBase, JSON, TSV, and CSV.
+
+{% include startnote.html %}This example assumes that there is a schema named dfs.default that contains a Parquet file named region.parquet.{% include endnote.html %}
+
+#### Step 1. Create a DSN to the Parquet File and Preview the Data
+
+In this step, we will create a DSN that accesses files on the DFS. We will
+also use Drill Explorer to preview the SQL that we want to use to connect to
+the data in Tableau.
+
+  1. To launch the ODBC Administrator, click **Start > All Programs > MapR Drill ODBC Driver 1.0 (32|64-bit) > (32|64-bit) ODBC Administrator**.   
+     The *ODBC Data Source Administrator* window appears.
+  2. On the **System DSN** tab, click **Add**.
+  3. Select **MapR Drill ODBC Driver** and click **Finish**.  
+     The *MapR Drill ODBC Driver DSN Setup* window appears.
+  4. Enter a data source name.
+  5. Specify the connection type based on your requirements. See [Connection Type]({{ site.baseurl }}/docs/configuring-connections-on-windows/#connection-type) for more information.  
+     The connection type provides the DSN access to a Drillbit.  
+     In this example, we will connect to a ZooKeeper quorum.
+  6. In the *Schema* section, select the schema associated with the data source that contains the Parquet file that you want to access. Then, click **OK**.  
+     In this example, the Parquet file is available in the dfs.default schema.  
+      ![]({{ site.baseurl }}/docs/img/Parquet_DSN.png)  
+     You can use this DSN to access multiple files from the same schema.  
+     In this example, we plan to use the Custom SQL option to connect to data in Tableau. You can use Drill Explorer to preview the results of custom SQL before you enter the SQL in Tableau.
+  7. If you want to preview the results of a query, click **Drill Explorer**.
+    1. On the **Browse** tab, navigate to the file that you want. 
+    2. Click the **SQL** tab.  
+       The SQL tab will include a default query to the file you selected on the Browse tab. You can use the SQL tab to preview the results of various queries until you achieve the expected result.
+    3. Enter the query that you want to preview and then click **Preview**.  
+       ![]({{ site.baseurl }}/docs/img/Parquet_Preview.png)  
+       You can copy this query to a file so that you can use it in Tableau.
+    4. Close the Drill Explorer window. 
+  8. Click **OK** to create the DSN and return to the _ODBC Data Source Administrator_ window.
+  9. Click **OK** to close the ODBC Data Source Administrator.
+
+#### Step 2. Connect to a Parquet File in Tableau using Custom SQL
+
+Now, we can create a connection to the Parquet file using the custom SQL.
+
+  1. In Tableau, click **Data > Connect to Data**.
+  2. In the *On a server* section, click **Other Databases (ODBC)**.
+     The *Generic ODBC Connection* dialog appears.
+  3. In the *Connect Using* section, select the DSN that connects to the data source.  
+     In this example, Files-DrillDataSources was selected.
+  4. In the *Schema* section, select the schema associated with the data source.  
+     In this example, dfs.default was selected.
+  5. In the *Table* section, select **Custom SQL**.
+  6. Enter the SQL query.  
+     In this example, the following SQL query was entered: 
+     
+         SELECT CAST(R_NAME as varchar(20))Country,
+         CAST(R_COMMENT as varchar(200))Comments, R_RegionKey 
+         FROM `dfs`.`default`.`./opt/mapr/drill/drill-1.0.0.BETA1/sample-data/region.parquet`  
+
+     {% include startnote.html %}The path to the file depends on its location in your file system.{% include endnote.html %} 
+
+  7. Click **OK** to complete the connection.  
+     ![]({{ site.baseurl }}/docs/img/ODBC_CustomSQL.png)
+  8. In the *Data Connection* dialog, click **Connect Live**.
+
+#### Step 3. Visualize the Data in Tableau
+
+Once you connect to the data, the fields appear in the Data window. To
+visualize the data, drag fields from the Data window to the workspace view.
+
+For example, you can visualize the data in this way:
+![]({{ site.baseurl }}/docs/img/RegionParquet_table.png)
+

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/030-using-microstrategy-analytics-with-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/030-using-microstrategy-analytics-with-apache-drill.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/030-using-microstrategy-analytics-with-apache-drill.md
new file mode 100755
index 0000000..1725953
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/030-using-microstrategy-analytics-with-apache-drill.md
@@ -0,0 +1,150 @@
+---
+title: "Using MicroStrategy Analytics with Apache Drill"
+parent: "Using Drill with BI Tools"
+---
+Apache Drill is certified with the MicroStrategy Analytics Enterprise Platform™. You can connect MicroStrategy Analytics Enterprise to Apache Drill and explore multiple data formats instantly on Hadoop. Use the combined power of these tools to get direct access to semi-structured data without having to rely on IT teams for schema creation.
+
+Complete the following steps to use Apache Drill with MicroStrategy Analytics Enterprise:
+ 
+1.  Install the Drill ODBC driver from MapR.
+2.  Configure the MicroStrategy Drill Object.
+3.  Create the MicroStrategy database connection for Drill.
+4.  Query and analyze the data.
+
+----------
+
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. 
+
+Complete the following steps to install and configure the driver:
+
+1.	Download the driver from the following location: 
+
+    http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/
+
+    {% include startnote.html %}Use the 32-bit Windows driver for MicroStrategy 9.4.1.{% include endnote.html %}
+
+2.	Complete steps 2-8 under *Installing the Driver* on the following page: 
+
+    https://cwiki.apache.org/confluence/display/DRILL/Using+the+MapR+ODBC+Driver+on+Windows
+3.	Complete the steps on the following page to configure the driver:
+
+    https://cwiki.apache.org/confluence/display/DRILL/Step+2.+Configure+ODBC+Connections+to+Drill+Data+Sources 
+
+    {% include startnote.html %}Verify that you are using the 32-bit driver since both drivers can coexist on the same machine.{% include endnote.html %} 
+
+    a.  Verify the version number of the driver.
+
+    b.  Click **Test** to verify that the ODBC configuration works before using it with MicroStrategy.
+
+    ![]({{ site.baseurl }}/docs/img/image_2.png)
+
+----------
+
+
+### Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise 
+The steps listed in this section are based on the MicroStrategy Technote for installing DBMS objects, which you can reference at: 
+
+http://community.microstrategy.com/t5/Database/TN43537-How-to-install-DBMS-objects-provided-by-MicroStrategy/ta-p/193352
+
+
+Complete the following steps to install the Drill Object on MicroStrategy Analytics Enterprise:
+
+1. Obtain the Drill Object from MicroStrategy Technical Support. The Drill Object is contained in a file named `MapR_Drill.PDS`. When you get this file, store it locally in your Windows file system.
+2. Open MicroStrategy Developer. 
+3. Expand Administration, and open Configuration Manager.
+4. Select **Database Instances**.
+   ![]({{ site.baseurl }}/docs/img/image_3.png)
+5. Right-click in the area where the current database instances display. 
+   ![]({{ site.baseurl }}/docs/img/image_4.png)
+6. Select **New – Database Instance**. 
+7. Once the Database Instances window opens, select **Upgrade**.
+   ![]({{ site.baseurl }}/docs/img/image_5.png)
+8. Enter the path and file name for the Drill Object file in the DB types script file field. Alternatively, you can use the browse button next to the field to search for the file. 
+   ![]({{ site.baseurl }}/docs/img/image_6.png)
+9.  Click **Load**. 
+10. Once loaded, select the MapR Drill database type in the left column.
+11. Click **>** to load MapR Drill into **Existing database types**. 
+12. Click **OK** to save the database type.
+13. Restart MicroStrategy Intelligence Server if it is used for the project source.
+   ![]({{ site.baseurl }}/docs/img/image_7.png)
+
+MicroStrategy Analytics Enterprise can now access Apache Drill.
+
+
+----------
+
+### Step 3: Create the MicroStrategy database connection for Apache Drill
+Complete the following steps to use the Database Instance Wizard to create the MicroStrategy database connection for Apache Drill:
+
+1. In MicroStrategy  Developer, select **Administration > Database Instance Wizard**.
+   ![]({{ site.baseurl }}/docs/img/image_8.png)
+2. Enter a name for the database, and select **MapR Drill** as the Database type from the drop-down menu.
+   ![]({{ site.baseurl }}/docs/img/image_9.png)
+3. Click **Next**. 
+4. Select the ODBC DSN that you configured with the ODBC Administrator.
+   ![]({{ site.baseurl }}/docs/img/image_10.png)
+5. Provide the login information for the connection and then click **Finish**.
+
+You can now use MicroStrategy Analytics Enterprise to access Drill as a database instance. 
+
+----------
+
+
+### Step 4: Query and Analyze the Data
+This step includes an example scenario that shows you how to use MicroStrategy, with Drill as the database instance, to analyze Twitter data stored as complex JSON documents. 
+
+#### Scenario
+The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps, nested maps, and arrays from the Twitter JSON documents. Refer to [Query Data](/docs/query-data-introduction/) for more information about how to configure and use Drill to work with complex data.
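+
+As an illustration only (the view name and field names below are assumptions about a typical Twitter JSON capture, not taken from the scenario's actual data), such a view might pull nested map fields up into flat columns:
+
+    CREATE VIEW dfs.tmp.tweets_view AS
+    SELECT t.id_str AS tweet_id,
+           t.`user`.screen_name AS screen_name,
+           t.source AS tweet_source,
+           t.created_at AS created_at
+    FROM dfs.twitter.`/twitter` t;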
+
+#### Part 1: Create a Project
+Complete the following steps to create a project:
+
+1. In MicroStrategy Developer, use the Project Creation Assistant to create a new project.
+   ![]({{ site.baseurl }}/docs/img/image_11.png)
+2.  Once the Assistant starts, click **Create Project**, and enter a name for the new project. 
+3.  Click **OK**. 
+4.  Click **Select tables from the Warehouse Catalog**. 
+5.  Select the Drill database instance connection from the drop-down list, and click **OK**.  
+    MicroStrategy queries Drill and displays all of the available tables and views.
+   ![]({{ site.baseurl }}/docs/img/image_12.png)
+6.  Select the two views created for the Twitter data.
+7.  Use **>** to move the views to **Tables being used in the project**. 
+8.  Click **Save and Close**.
+9.  Click **OK**. The new project is created in MicroStrategy Developer. 
+
+#### Part 2: Create a Freeform Report to Analyze Data
+Complete the following steps to create a Freeform Report and analyze data:
+
+1.	In Developer, open the Project and then open Public Objects.
+2.	Click **Reports**.
+3.	Right-click in the pane on the right, and select **New > Report**.
+   ![]({{ site.baseurl }}/docs/img/image_13.png)
+4.  Click the **Freeform Sources** tab, and select the Drill data source.
+   ![]({{ site.baseurl }}/docs/img/image_14.png)
+5.	Verify that **Create Freeform SQL Report** is selected, and click **OK**. This allows you to enter a quick query to gather data. The Freeform SQL Editor window appears.
+   ![]({{ site.baseurl }}/docs/img/image_15.png)
+6.  Enter a SQL query in the field provided. The attributes that you specify are displayed.  
+    In this scenario, a simple query that selects and groups the tweet source and counts the number of times the same source appeared in a day was entered. The tweet source was added as a text attribute and the count as a number. 
+7.	Click **Data/Run Report** to run the query. A bar chart displays the output.
+   ![]({{ site.baseurl }}/docs/img/image_16.png)
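+
+A hedged sketch of the kind of Freeform SQL described in this step, assuming the Drill view exposes the tweet source and a date column (all names here are illustrative, not from the actual scenario):
+
+    SELECT tweet_source, tweet_date, COUNT(*) AS tweet_count
+    FROM dfs.tmp.tweets_view
+    GROUP BY tweet_source, tweet_date
+    ORDER BY tweet_count DESC;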
+
+The chart shows three major sources for the captured tweets. You can switch to a tabular view and apply a filter to confirm that iPhone, Android, and Web Client are the three major sources of tweets for this specific data set.
+![]({{ site.baseurl }}/docs/img/image_17.png)
+
+In this scenario, you learned how to configure MicroStrategy Analytics Enterprise to work with Apache Drill. 
+
+----------
+
+### Certification Links
+
+* MicroStrategy certifies its analytics platform with Apache Drill: http://ir.microstrategy.com/releasedetail.cfm?releaseid=902795
+
+* http://community.microstrategy.com/t5/Database/TN225724-Post-Certification-of-MapR-Drill-0-6-and-0-7-with/ta-p/225724
+
+* http://community.microstrategy.com/t5/Release-Notes/TN231092-Certified-Database-and-ODBC-configurations-for/ta-p/231092
+
+* http://community.microstrategy.com/t5/Release-Notes/TN231094-Certified-Database-and-ODBC-configurations-for/ta-p/231094   
+

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/040-using-tibco-spotfire-with-drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/040-using-tibco-spotfire-with-drill.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/040-using-tibco-spotfire-with-drill.md
new file mode 100755
index 0000000..49772e7
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/040-using-tibco-spotfire-with-drill.md
@@ -0,0 +1,50 @@
+---
+title: "Using Tibco Spotfire with Drill"
+parent: "Using Drill with BI Tools"
+---
+Tibco Spotfire Desktop is a powerful analytic tool that lets you enter SQL statements when connecting to data sources. Spotfire Desktop can utilize the powerful query capabilities of Apache Drill to query complex data structures. Use the MapR Drill ODBC Driver to configure Tibco Spotfire Desktop with Apache Drill.
+
+To use Spotfire Desktop with Apache Drill, complete the following steps:
+
+1.  Install the Drill ODBC Driver from MapR.
+2.	Configure the Spotfire Desktop data connection for Drill.
+
+----------
+
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. 
+
+Complete the following steps to install and configure the driver:
+
+1.    Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
+**Note:** Spotfire Desktop 6.5.1 utilizes the 64-bit ODBC driver.
+2.    Complete steps 2-8 on the following page to install the driver:<br> 
+[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
+3.    Complete the steps on the following page to configure the driver:<br>
+[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
+
+----------
+
+
+### Step 2: Configure the Spotfire Desktop Data Connection for Drill 
+Complete the following steps to configure a Drill data connection: 
+
+1. Select the **Add Data Connection** option, or click the **Add Data Connection** button in the menu bar, as shown in the image below: ![](http://i.imgur.com/p3LNNBs.png)
+2. When the dialog window appears, click the **Add** button, and select **Other/Database** from the drop-down list. ![](http://i.imgur.com/u1g9kaT.png)
+3. In the Open Database window that appears, select **Odbc Data Provider** and then click **Configure**. ![](http://i.imgur.com/8Gu0GAZ.png)
+4. In the Configure Data Source Connection window that appears, select the Drill DSN that you configured in the ODBC administrator, and enter the relevant credentials for Drill.<br> ![](http://i.imgur.com/Yd6BKls.png) 
+5. Click **OK** to continue. The Spotfire Desktop queries the Drill metadata for available schemas, tables, and views. You can navigate the schemas in the left-hand column. After you select a specific view or table, the relevant SQL displays in the right-hand column. 
+![](http://i.imgur.com/wNBDs5q.png)
+6. Optionally, you can modify the SQL to work best with Drill. Change the schema.table.* notation in the SELECT statement to * or to the relevant column names. 
+Note that Drill has certain reserved keywords that you must enclose in backticks ( ` ) when needed. See [Drill Reserved Keywords](http://drill.apache.org/docs/reserved-keywords/).
+7. Once the SQL is complete, provide a name for the Data Source and click **OK**. Spotfire Desktop queries Drill and retrieves the data for analysis. You can use the functionality of Spotfire Desktop to work with the data.
+![](http://i.imgur.com/j0MWorh.png)
+
+**NOTE:** You can use the SQL statement column to query data and complex structures that do not display in the left-hand schema column. A good example is JSON files in the file system.
+
+**SQL Example:**
+
+    SELECT t.trans_id, t.`date`, t.user_info.cust_id AS cust_id,
+           t.user_info.device AS device
+    FROM dfs.clicks.`/clicks/clicks.campaign.json` t
+
+----------

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/050-configuring-spotfire-server.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/050-configuring-spotfire-server.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/050-configuring-spotfire-server.md
new file mode 100644
index 0000000..436776e
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/050-configuring-spotfire-server.md
@@ -0,0 +1 @@
+---
title: "Configuring Tibco Spotfire Server with Drill"
parent: "Using Drill with BI Tools"
---
This document describes how to configure Tibco Spotfire Server (TSS) to integrate with Apache Drill and explore multiple data formats instantly on Hadoop. Users can combine these powerful platforms to rapidly gain analytical access to a wide variety of data types. 

Complete the following steps to configure and use Apache Drill with TSS: 

1. Install the Drill JDBC driver with TSS.
2. Configure the Drill Data Source Template in TSS with the TSS configuration tool.
3. Configure Drill data sources with Tibco Spotfire Desktop and Information Designer.
4. Query and analyze various data formats with Tibco Spotfire and Drill.


----------


### Step 1: Install and Configure the Drill JDBC Driver 


Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which includes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.

For general instructions to install the Drill JDBC driver, see [Using JDBC](http://drill.apache.org/docs/using-jdbc/).
Complete the following steps to install and configure the JDBC driver for TSS:

1. Locate the JDBC driver in the Drill installation directory:  
   `<drill-home>/jars/jdbc-driver/drill-jdbc-all-<drill-version>.jar`  
   For example, on a MapR cluster:  
   `/opt/mapr/drill/drill-1.0.0/jars/jdbc-driver/drill-jdbc-all-1.0.0-SNAPSHOT.jar`

2. Locate the TSS library directory and copy the JDBC driver file to that directory: 
   `<TSS-home-directory>/tomcat/lib`  
   For example, on a Linux server:  
   `/usr/local/bin/tibco/tss/6.0.3/tomcat/lib`  
   For example, on a Windows server:  
   `C:\Program Files\apache-tomcat\lib`

3. Restart TSS to load the JDBC driver.
4. Verify that the TSS system can resolve the hostnames of the ZooKeeper nodes for the Drill cluster. You can do this by validating that DNS is properly configured for the TSS system and all the ZooKeeper nodes. Alternatively, you can add the hostnames and IP addresses of the ZooKeeper nodes to the TSS system hosts file.  
   For Linux systems, the hosts file is located here: 
   `/etc/hosts`  
   For Windows systems, the hosts file is located here: 
   `%WINDIR%\system32\drivers\etc\hosts`

----------

### Step 2: Configure the Drill Data Source Template in TSS

The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.

1. Click **Start > All Programs > TIBCO Spotfire Server > Configure TIBCO Spotfire Server**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-start.png)
2. Enter the Configuration Tool password that was specified when TSS was initially installed.
3. Once the Configuration Tool has connected to TSS, click the **Configuration** tab, then **Data Source Templates**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-configtab.png)
4. In the Data Source Templates window, click the **New** button at the bottom of the window. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-new.png)
5. Provide a name for the data source template, then copy the following XML template into the **Data Source Template** box. When complete, click **OK**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-template.png)
6. The new entry will now be available in the data source template. Check the box next to the new entry, then click **Save Configuration**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-saveconfig.png)
7. Select **Database** as the destination and click **Next**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-saveconfig2.png) 
8. Add a comment to the updated configuration and click **Finish**. 
9. A response window is displayed to state that the configuration was successfully uploaded to TSS. Click **OK**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-importconfig.png)
10. Restart TSS to enable it to use the Drill data source template.
   
#### XML Template

Make sure that you enter the correct ZooKeeper node name instead of `<zk-node>`, as well as the correct Drill cluster name instead of `<drill-cluster-name>` in the example below. This is just a template that will appear whenever a data source is configured. The hostnames of ZooKeeper nodes and the Drill cluster name can be found in the `$DRILL_HOME/conf/drill-override.conf` file on any of the Drill nodes in the cluster.
     
    <jdbc-type-settings>
    <type-name>drill</type-name>
    <driver>org.apache.drill.jdbc.Driver</driver> 
    <connection-url-pattern>jdbc:drill:zk=<zk-node>:5181/drill/<drill-cluster-name>-drillbits</connection-url-pattern> 
    <ping-command>SELECT 1 FROM sys.version</ping-command>
    <supports-catalogs>true</supports-catalogs>
    <supports-schemas>true</supports-schemas>
    <supports-procedures>false</supports-procedures>
    <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
 
    <column-name-pattern>`$$name$$`</column-name-pattern>
    <table-name-pattern>`$$name$$`</table-name-pattern>
    <schema-name-pattern>`$$name$$`</schema-name-pattern>
    <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
    <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
    <column-alias-pattern>`$$name$$`</column-alias-pattern>

    <java-to-sql-type-conversions>
     <type-mapping>
      <from max-length="32672">String</from>
      <to>VARCHAR($$value$$)</to>
     </type-mapping>
     <type-mapping>
      <from>String</from>
      <to>VARCHAR(32672)</to>
     </type-mapping>
     <type-mapping>
      <from>Integer</from>
      <to>INTEGER</to>
     </type-mapping>
    </java-to-sql-type-conversions>
    </jdbc-type-settings>
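
For reference, the placeholder substitution in the `connection-url-pattern` can be sketched as follows. The ZooKeeper hostname and cluster name shown are hypothetical; read the real values from `drill-override.conf` as described above:

```shell
# Hypothetical values; read the real ZooKeeper hostname and cluster name
# from $DRILL_HOME/conf/drill-override.conf on any Drill node.
ZK_NODE=zk1.example.com
DRILL_CLUSTER=drillcluster1

# Substituting the placeholders in the template's connection-url-pattern
# yields the JDBC URL that TSS uses to reach the Drill cluster:
URL="jdbc:drill:zk=${ZK_NODE}:5181/drill/${DRILL_CLUSTER}-drillbits"
echo "$URL"
```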


----------

### Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop 

To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.

1. Open Tibco Spotfire Desktop. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-client.png)
2. Log into TSS. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-tss.png)
3. Select the deployment area in TSS to be used. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-deployment.png)
4. Click **Tools > Information Designer**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-infodesigner.png)
5. In the Information Designer, click **New > Data Source**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-infodesigner2.png)
6. In the Data Source window, enter the name for the data source. Select the Drill Data Source template created in Step 2 as the type. Update the connection URL with the correct hostname of the ZooKeeper node(s) and the Drill cluster name. Note: The Zookeeper node(s) hostname(s) and Drill cluster name can be found in the `$DRILL_HOME/conf/drill-override.conf` file on any of the Drill nodes in the cluster. Enter the username and password used to connect to Drill. When completed, click **Save**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-connectionURL.png)
7. In the Save As window, verify the name and the folder where you want to save the new data source in TSS. Click **Save** when done. TSS will now validate the information and save the new data source in TSS.
8. When the data source is saved, it will appear in the **Data Sources** tab, and you will be able to navigate the schema. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-datasources-tab.png)


----------

### Step 4: Query and Analyze the Data

After the Drill data source has been configured in the Information Designer, the information elements can be defined. 

1.	In this example, all the columns of a Hive table have been defined using the Drill data source and added to an information link. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-infolink.png)
2.	Click the **SQL** button to validate the SQL syntax used to retrieve the data. Many other operations, including joins and filters, can be performed in the Information Link; see the Tibco Spotfire documentation for details.
3.	You can now import the data of this table into TSS by clicking the **Open Data** button. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-hiveorders.png)
The data is now available in Tibco Spotfire Desktop to create and share reports and tables as needed. For more information about creating charts, tables, and reports, see the Tibco Spotfire documentation.


...
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/060-using-apache-drill-with-tableau-9-desktop.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/060-using-apache-drill-with-tableau-9-desktop.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/060-using-apache-drill-with-tableau-9-desktop.md
new file mode 100644
index 0000000..39e59b4
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/060-using-apache-drill-with-tableau-9-desktop.md
@@ -0,0 +1,99 @@
+---
+title: "Using Apache Drill with Tableau 9 Desktop"
+parent: "Using Drill with BI Tools"
+---
+This document describes how to connect Tableau 9 Desktop to Apache Drill and explore multiple data formats instantly on Hadoop. Use the combined power of these tools to get direct access to semi-structured data, without having to rely on IT teams for schema creation.
+
+To use Apache Drill with Tableau 9 Desktop, complete the following steps: 
+
+1. Install the Drill ODBC driver from MapR.
+2. Install the Tableau Data-connection Customization (TDC) file.
+3. Connect Tableau to Drill Using ODBC.
+4. Query and analyze various data formats with Tableau and Drill.
+
+----------
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience use the latest release of Apache Drill. For Tableau 9.0 Desktop, Drill Version 0.9 or higher is recommended.
+
+Complete the following steps to install and configure the driver:
+
+1. Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
+**Note:** Tableau 9.0 Desktop 64 bit can use either the 32-bit driver or the 64-bit driver.
+2. Complete steps 2-8 on the following page to install the driver:<br> 
+[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
+3. Complete the steps on the following page to configure the driver:<br>
+[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
+4. If Drill authentication is enabled, select **Basic Authentication** as the authentication type. Enter a valid user and password. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup.png)
+
+Note: If you select **ZooKeeper Quorum** as the ODBC connection type, the client system must be able to resolve the hostnames of the ZooKeeper nodes. The simplest way is to add the hostnames and IP addresses for the ZooKeeper nodes to the `%WINDIR%\system32\drivers\etc\hosts` file. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup-2.png)
+
+Also make sure to test the ODBC connection to Drill before using it with Tableau.
+
+
+----------
+
+### Step 2: Install the Tableau Data-connection Customization (TDC) File
+
+The MapR Drill ODBC Driver includes a file named `MapRDrillODBC.TDC`. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.
+
+The MapR Drill ODBC Driver installer automatically installs the TDC file if the installer can find the Tableau installation. If you installed the MapR Drill ODBC Driver first and then installed Tableau, the TDC file is not installed automatically, and you need to install it manually. 
+
+**To install the MapRDrillODBC.TDC file manually:**
+
+  1. Click **Start > All Programs > MapR Drill ODBC Driver <version> (32|64-bit) > Install Tableau TDC File**. ![drill query flow]({{ site.baseurl }}/docs/img/install-tableau-tdc.png)
+  2. When the installation completes, press any key to continue.   
+For example, you can press the SPACEBAR key.
+
+If the installation of the TDC file fails, this is likely because your Tableau repository is in a location other than the default one. In this case, move or copy My Tableau Repository to the default location: `C:\Users\<user>\Documents\My Tableau Repository`. Then repeat the procedure to install the `MapRDrillODBC.TDC` file manually.
+
+
+----------
+
+
+### Step 3: Connect Tableau to Drill via ODBC
+Complete the following steps to connect Tableau to Drill through an ODBC data connection:
+
+1.	In a Tableau Workbook click **Data > New Data Source**.![drill query flow]({{ site.baseurl }}/docs/img/new-data-source.png)
+2.	In the **Connect** list, select **Other Databases (ODBC)**. ![drill query flow]({{ site.baseurl }}/docs/img/connect-list.png)
+3.	On the Server Connection window, select the DSN configured in Step 1 from the drop-down list of ODBC data sources. Then click **Connect**. Note: You will be prompted to enter a username and password; these entries will be passed to the Server Connection window. 
+![drill query flow]({{ site.baseurl }}/docs/img/other-dbs.png) ![drill query flow]({{ site.baseurl }}/docs/img/other-dbs-2.png)
+Tableau is now connected to Drill, and you can select various tables and views. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-schemas.png)
+4.	Click the search icon in the **Schema** drop-down list to display all available Drill schemas, and choose a schema. Then click the search icon in the Table dialog box to display all available tables and views in that schema. 
+
+You can select tables and views to build a Tableau Visualization. You can also use custom SQL by clicking the **New Custom SQL** option. 
+
+Tableau can natively work with Hive tables and Drill views. For other Drill data sources, such as directly accessible file systems or HBase/MapR-DB tables, you can either use the Custom SQL option or create a view in Drill to represent the complex data to Tableau. For more information, see the following links:
+http://drill.apache.org/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool/
+http://drill.apache.org/docs/tableau-examples/
+
+Note: If Drill authentication and impersonation are enabled, only the views that the user has access to are displayed in the Table dialog box. Also, if custom SQL is used to access data sources that the user cannot access, an error message is displayed. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-error.png)
+
+----------
+
+### Step 4: Query and Analyze the Data 
+
+Tableau Desktop can now use Drill to query various data sources and visualize the information.
+
+Assume that a retailer has weblog data stored as JSON files in a nested directory structure and product information stored in a Hive table. Using Drill and Tableau, users would like to understand the volume of product sold by state and product category.
+
+1.	Using the New Custom SQL function in Tableau, write a Drill query to read the JSON files without the need for ETL. Casting data types is recommended when you are working directly with files. 
+For example: ![drill query flow]({{ site.baseurl }}/docs/img/edit-custom-sql.png)
+
+2.	Next select the Hive products table: ![drill query flow]({{ site.baseurl }}/docs/img/custom-sql-query.png)
+
+3.	Verify that Tableau is joining the two data sources (JSON files and Hive table) on the prod_id key:![drill query flow]({{ site.baseurl }}/docs/img/tableau-join-key.png)
+The data sources are now configured and ready to be used in the visualization.
+4.	Drag State from the Custom SQL (JSON files) to **Columns**, and drag Category from the Hive products table to **Rows**. 
+5.	Create a calculated field called `Total Number of Products`, defined as `count(prod_id)` on the `prod_id` field from the weblog files (Custom SQL), and drag it to **Rows** next to Category. The visualization now shows the total products by category and state. 
+6.	To filter out weblog data where products were not bought, drag the `purch_flag` field from the weblog files to **Filters**. Select only data where the `purch_flag` was true.
+7.	Finally, order the data from the state with the most products sold to the one with the least.
+8.	Add a grand total row by clicking **Analysis > Totals > Show Column Grand Totals**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-desktop-query.png)
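+
+The casts in step 1 can be sketched with a query like the following. The file path is hypothetical and the types are illustrative; the `state`, `prod_id`, and `purch_flag` columns come from the weblog data described above:
+
+    SELECT CAST(t.state AS VARCHAR(20)) AS state,
+           CAST(t.prod_id AS INTEGER) AS prod_id,
+           CAST(t.purch_flag AS VARCHAR(5)) AS purch_flag
+    FROM dfs.`/data/weblogs` t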
+
+----------
+
+In this quick tutorial, you saw how you can configure Tableau Desktop 9.0 to work with Apache Drill. 
+

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/070-using-apache-drill-with-tableau-9-server.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/070-using-apache-drill-with-tableau-9-server.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/070-using-apache-drill-with-tableau-9-server.md
new file mode 100644
index 0000000..79ac35c
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/070-using-apache-drill-with-tableau-9-server.md
@@ -0,0 +1,97 @@
+---
+title: "Using Apache Drill with Tableau 9 Server"
+parent: "Using Drill with BI Tools"
+---
+
+This document describes how to connect Tableau 9 Server to Apache Drill and explore multiple data formats instantly on Hadoop, as well as share all the Tableau visualizations in a collaborative environment. Use the combined power of these tools to get direct access to semi-structured data, without having to rely on IT teams for schema creation and data manipulation. 
+
+To use Apache Drill with Tableau 9 Server, complete the following steps: 
+
+1.	Install the Drill ODBC driver from MapR on the Tableau Server system and configure ODBC data sources.
+2.	Install the Tableau Data-connection Customization (TDC) file.
+3.	Publish Tableau visualizations and data sources from Tableau Desktop to Tableau Server for collaboration.
+
+----------
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience use the latest release of Apache Drill. For Tableau 9.0 Server, Drill Version 0.9 or higher is recommended.
+
+Complete the following steps to install and configure the driver:
+
+1. Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
+**Note:** Tableau 9.0 Server works with the 64-bit ODBC driver.
+2. Complete steps 2-8 on the following page to install the driver:<br> 
+[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
+3. Complete the steps on the following page to configure the driver:<br>
+[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
+4. If Drill authentication is enabled, select **Basic Authentication** as the authentication type. Enter a valid user and password. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup.png)
+
+Note: If you select **ZooKeeper Quorum** as the ODBC connection type, the client system must be able to resolve the hostnames of the ZooKeeper nodes. The simplest way is to add the hostnames and IP addresses for the ZooKeeper nodes to the `%WINDIR%\system32\drivers\etc\hosts` file. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup-2.png)
+
+Also make sure to test the ODBC connection to Drill before using it with Tableau.
+
+
+----------
+
+### Step 2: Install the Tableau Data-connection Customization (TDC) File
+
+The MapR Drill ODBC Driver includes a file named `MapRDrillODBC.TDC`. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.
+
+For Tableau Server, you need to manually copy this file to the Server Datasources folder:
+1.	Locate the `MapRDrillODBC.tdc` file in the `~\Program Files\MapR Drill ODBC Driver\Resources` folder.
+2.	Copy the file to the `~\ProgramData\Tableau\Tableau Server\data\tabsvc\vizqlserver\Datasources` folder.
+3.	Restart Tableau Server.
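+
+The copy in steps 1 and 2 can be sketched as follows from an elevated command prompt (hypothetical drive layout; adjust the paths to your installation):
+
+    copy "C:\Program Files\MapR Drill ODBC Driver\Resources\MapRDrillODBC.tdc" "C:\ProgramData\Tableau\Tableau Server\data\tabsvc\vizqlserver\Datasources\"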
+
+For more information about Tableau TDC configuration, see [Customizing and Tuning ODBC Connections](http://kb.tableau.com/articles/knowledgebase/customizing-odbc-connections)
+
+----------
+
+
+### Step 3: Publish Tableau Visualizations and Data Sources
+
+For collaboration purposes, you can now use Tableau Desktop to publish data sources and visualizations on Tableau Server.
+
+#### Publishing Visualizations
+
+To publish a visualization from Tableau Desktop to Tableau Server:
+
+1. Configure Tableau Desktop to use the Drill ODBC driver, as described in Using Apache Drill with Tableau 9 Desktop.
+
+2. For best results, verify that the ODBC configuration and DSNs (data source names) are the same for both Tableau Desktop and Tableau Server.
+
+3. Create visualizations in Tableau Desktop using Drill as the data source.
+
+4. Connect to Tableau Server from Tableau Desktop. Select **Server > Sign In**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-signin1.png)
+
+5. Sign into Tableau Server using the server hostname or IP address, username, and password. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-signin2.png)
+
+6. You can now publish a workbook to Tableau Server. Select **Server > Publish Workbook**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish1.png)
+
+7. Select the project from the drop-down list. Enter a name for the visualization to be published and provide a description and tags as needed. Assign permissions and views to be shared. Then click **Authentication**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish2.png)
+
+8. In the Authentication window, select **Embedded Password**, then click **OK**. Then click **Publish** in the Publish Workbook window to publish the visualization to Tableau Server. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-authentication.png)
+
+#### Publishing Data Sources
+
+If you only want to publish data sources to Tableau Server, follow these steps:
+1.	Open data source(s) in Tableau Desktop.
+2.	In the Workbook, select **Data > Data Source Name > Publish to Server**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish-datasource.png)
+
+3.	If you are not already signed in, sign into Tableau Server.
+4.	Select the project from the drop-down list and enter a name for the data source (or keep the same name that is used in the Desktop workbook). ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish-datasource2.png)
+
+5.	In the **Authentication** drop-down list, select **Embedded Password**. Select permissions as needed, then click **Publish**. The data source will now be published on the Tableau Server and is available for building visualizations. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish-datasource3.png)
+
+----------
+
+In this quick tutorial, you saw how you can configure Tableau Server 9.0 to work with Tableau Desktop and Apache Drill. 
+

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/005-odbc-on-linux-and-mac-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/005-odbc-on-linux-and-mac-introduction.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/005-odbc-on-linux-and-mac-introduction.md
deleted file mode 100644
index f304ce5..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/005-odbc-on-linux-and-mac-introduction.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "ODBC on Linux and Mac Introduction"
-parent: "Using ODBC on Linux and Mac OS X"
----
-The MapR Drill ODBC driver provides BI tools access to Drill’s flexible query
-capabilities so you can quickly explore various data sources. Once you install
-the MapR Drill ODBC Driver, you can configure ODBC connections through which BI
-tools can access Drill. Drill can connect to data with well-defined schemas,
-such as Hive. Drill can also connect directly to data that is
-self-describing, such as HBase, Parquet, JSON, CSV, and TSV.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/010-installing-the-driver-on-linux.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/010-installing-the-driver-on-linux.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/010-installing-the-driver-on-linux.md
deleted file mode 100755
index 59361d4..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/010-installing-the-driver-on-linux.md
+++ /dev/null
@@ -1,109 +0,0 @@
----
-title: "Installing the Driver on Linux"
-parent: "Using ODBC on Linux and Mac OS X"
----
-Install the MapR Drill ODBC Driver on the machine from which you connect to
-the Drill service. You can install the 32- or 64-bit driver on Linux. Install
-the version of the driver that matches the architecture of the client
-application that you use to access Drill. The 64-bit editions of Linux support
-32- and 64-bit applications.
-
-To install the MapR Drill ODBC Driver, complete the following steps:
-
-  * Step 1: Downloading the MapR Drill ODBC Driver 
-  * Step 2: Installing the MapR Drill ODBC Driver
-  * Step 3: Setting the LD_LIBRARY_PATH Environment Variable
-
-After you complete the installation steps, complete the steps listed in
-[Configuring ODBC Connections on Linux and Mac OS X]({{ site.baseurl }}/docs/configuring-connections-on-linux-and-mac-os-x).
-
-Verify that your system meets the system requirements before you start.
-
-**System Requirements**
-
-  * One of the following distributions (32- and 64-bit editions are supported):
-    * Red Hat® Enterprise Linux® (RHEL) 5.0/6.0
-    * CentOS 5.0/6.0
-    * SUSE Linux Enterprise Server (SLES) 11
-  * 90 MB of available disk space.
-  * An installed ODBC driver manager:
-    * iODBC 3.52.7 or above  
-      OR 
-    * unixODBC 2.2.12 or above
-  * The client must be able to resolve the hostname(s) of the Drill node(s) to the corresponding IP address(es). Verify that a DNS entry was created on the client machine for the Drill node(s).   
-If not, create the following entry in `/etc/hosts` for the Drill node(s):  
-
-    `<drill-machine-IP> <drill-machine-hostname>`  
-    Example: `127.0.0.1 localhost`
-
-To install the driver, you need Administrator privileges on the computer.
-
-## Step 1: Downloading the MapR Drill ODBC Driver
-
-Click on a link below to download the driver:
-
-  * [MapR Drill ODBC Driver (32-bit)](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.0.0.1001/MapRDrillODBC-32bit-1.0.0.i686.rpm)
-  * [MapR Drill ODBC Driver (64-bit)](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.0.0.1001/MapRDrillODBC-1.0.0.x86_64.rpm)
-
-## Step 2: Installing the MapR Drill ODBC Driver
-
-To install the driver, complete the following steps:
-
-  1. Log in as the root user.
-  2. Navigate to the folder that contains the driver RPM packages to install.
-  3. Enter the following command where _RPMFileName_ is the file name of the RPM package containing the version of the driver that you want to install: 
-
-     **RedHat/CentOS**
-     
-     `yum localinstall --nogpgcheck RPMFileName`
-
-     **SUSE**
-     
-     `zypper install RPMFileName`
-
-{% include startnote.html %}The MapR Drill ODBC Driver dependencies need to be resolved.{% include endnote.html %}
-
-The MapR Drill ODBC Driver depends on the following resources:
-
-  * `cyrus-sasl-2.1.22-7` or above
-  * `cyrus-sasl-gssapi-2.1.22-7` or above
-  * `cyrus-sasl-plain-2.1.22-7` or above
-
-If the package manager in your Linux distribution cannot resolve the
-dependencies automatically when installing the driver, download and manually
-install the packages.
-
-The following table provides a list of the MapR Drill ODBC Driver file
-locations and descriptions:
-
-File| Description  
----|---  
-`/opt/mapr/drillodbc/ErrorMessages `| Error messages files directory.  
-`/opt/mapr/drillodbc/Setup`| Sample configuration files directory.  
-`/opt/mapr/drillodbc/lib/32 `| 32-bit shared libraries directory.  
-`/opt/mapr/drillodbc/lib/64`| 64-bit shared libraries directory.  
-  
-## Step 3: Setting the LD_LIBRARY_PATH Environment Variable
-
-The `LD_LIBRARY_PATH` environment variable must include the paths to the
-following:
-
-  * Installed ODBC driver manager libraries
-  * Installed MapR ODBC Driver for Apache Drill shared libraries
-
-You can have both 32- and 64-bit versions of the driver installed at the same time on the same computer. 
-{% include startimportant.html %}Do not include the paths to both 32- and 64-bit shared libraries in LD_LIBRARY_PATH at the same time.{% include endimportant.html %}
-Only include the path to the shared libraries corresponding to the driver matching the bitness of the client application used.
-
-For example, if you are using a 64-bit client application and ODBC driver
-manager libraries are installed in `/usr/local/lib`, then set
-`LD_LIBRARY_PATH` as follows:  
-
-`export LD_LIBRARY_PATH=/usr/local/lib:/opt/simba/drillodbc/lib/64`  
-
-
-#### Next Step
-
-Complete the steps listed in [Configuring ODBC Connections for Linux and Mac
-OS X]({{ site.baseurl }}/docs/configuring-connections-on-linux-and-mac-os-x).
-

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/020-installing-the-driver-on-mac-os-x.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/020-installing-the-driver-on-mac-os-x.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/020-installing-the-driver-on-mac-os-x.md
deleted file mode 100755
index 39442b7..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/020-installing-the-driver-on-mac-os-x.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: "Installing the Driver on Mac OS X"
-parent: "Using ODBC on Linux and Mac OS X"
----
-Install the MapR Drill ODBC Driver on the machine from which you connect to
-the Drill service.
-
-To install the MapR Drill ODBC Driver, complete the following steps:
-
-  * Step 1: Downloading the MapR Drill ODBC Driver 
-  * Step 2: Installing the MapR Drill ODBC Driver 
-  * Step 3: Updating the DYLD_LIBRARY_PATH Environment Variable
-
-After you complete the installation steps, complete the steps listed in
-[Configuring ODBC Connections on Linux and Mac OS X]({{ site.baseurl }}/docs/configuring-connections-on-linux-and-mac-os-x)
-.
-
-Verify that your system meets the following prerequisites before you start.
-
-**System Requirements**
-
-  * Mac OS X version 10.6.8 or later
-  * 100 MB of available disk space
-  * iODBC 3.52.7 or later
-  * The client must be able to resolve the hostname(s) of the Drill node(s) to the corresponding IP address(es). Verify that a DNS entry was created on the client machine for the Drill node(s).   
-If not, create the following entry in `/etc/hosts` for the Drill node(s):  
-`<drill-machine-IP> <drill-machine-hostname>`  
-Example: `127.0.0.1 localhost`
-
-To install the driver, you need Administrator privileges on the computer.
-
-----------
-
-## Step 1: Downloading the MapR Drill ODBC Driver
-
-Click the following link to download the driver:  
-
-[MapR Drill ODBC Driver for Mac](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.0.0.1001/MapRDrillODBC.dmg)
-
-----------
-
-## Step 2: Installing the MapR Drill ODBC Driver
-
-To install the driver, complete the following steps:
-
-  1. Double-click `MapRDrillODBC.dmg` to mount the disk image.
-  2. Double-click `MapRDrillODBC.pkg` to run the Installer.
-  3. Follow the instructions in the Installer to complete the installation process.
-  4. When the installation completes, click **Close.**
-
-{% include startnote.html %}MapR Drill ODBC Driver files install in the following locations:{% include endnote.html %}
-
-  * `/opt/mapr/drillodbc/ErrorMessages` – Error messages files directory
-  * `/opt/mapr/drillodbc/Setup` – Sample configuration files directory
-  * `/opt/mapr/drillodbc/lib/universal` – Binaries directory
-
-----------
-
-## Step 3: Updating the DYLD_LIBRARY_PATH Environment Variable
-
-The Dynamic Link editor library path environment variable DYLD_LIBRARY_PATH must include paths to the following
-libraries:
-
-  * Installed iODBC driver manager libraries: libiodbc.dylib and libiodbcinst.dylib
-  * Installed MapR Drill ODBC Driver for Drill shared libraries
-
-If you installed the iODBC driver manager using the DMG, the libraries are installed in
-`/usr/local/iODBC/lib`. Set `DYLD_LIBRARY_PATH` to include that path and the path to the MapR Drill ODBC Driver. For example, use the following command to set the environment variable to both paths:
-
-`export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/local/iODBC/lib:/opt/mapr/drillodbc/lib/universal`
-
-#### Next Step
-
-Complete the steps listed in [Configuring ODBC Connections for Linux and Mac
-OS X]({{ site.baseurl }}/docs/configuring-connections-on-linux-and-mac-os-x).

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/030-configuring-connections-on-linux-and-mac-os-x.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/030-configuring-connections-on-linux-and-mac-os-x.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/030-configuring-connections-on-linux-and-mac-os-x.md
deleted file mode 100644
index 0f39dbf..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/030-configuring-connections-on-linux-and-mac-os-x.md
+++ /dev/null
@@ -1,141 +0,0 @@
----
-title: "Configuring Connections on Linux and Mac OS X"
-parent: "Using ODBC on Linux and Mac OS X"
----
-ODBC driver managers use configuration files to define and configure ODBC data
-sources and drivers. To configure an ODBC connection for Linux or Mac OS X, complete the following
-steps:
-
-* Step 1: Set Environment Variables (Linux only)
-* Step 2: Define the ODBC Data Sources in odbc.ini
-* Step 3: (Optional) Define the ODBC Driver in odbcinst.ini
-* Step 4: Configure the MapR Drill ODBC Driver
-
-## Sample Configuration Files
-
-Before you connect to Drill through an ODBC client tool
-on Linux or Mac OS X, copy the following configuration files in `/opt/mapr/drillodbc/Setup` to your home directory, unless the files already exist there:
-
-* `mapr.drillodbc.ini`
-* `odbc.ini`
-* `odbcinst.ini`
-
-In your home directory, use sudo to rename the files as hidden files:
-
-* .mapr.drillodbc.ini
-* .odbc.ini
-* .odbcinst.ini
-
-If the configuration files already exist in your home directory, you can use the sample configuration files as a guide for modifying the existing configuration files as described in Steps 2-4.
-
-----------
-
-## Step 1: Set Environment Variables (Linux only)
-
-Set the following environment variables to point to the `odbc.ini`
-and `mapr.drillodbc.ini` configuration files, respectively:
-
-  * `ODBCINI` (point to `odbc.ini`)
-  * `MAPRDRILLINI` (point to `mapr.drillodbc.ini`)
-
-For example, if you copied the configuration files to your home directory as hidden
-files, point `ODBCINI` to `~/.odbc.ini` and `MAPRDRILLINI` to `~/.mapr.drillodbc.ini`.
-
-{% include startnote.html %}You do not need to set these variables for the Mac OS X version of the driver.{% include endnote.html %}
-
-----------
-
-## Step 2: Define the ODBC Data Sources in .odbc.ini
-
-Define the ODBC data sources in the `.odbc.ini` configuration file for your environment. The following sample shows a possible configuration for using Drill in embedded mode. Get the ZKQuorum and ZKClusterID values from the `drill-override.conf` file.
-
-**Example**
-          
-    [ODBC]
-    # Specify any global ODBC configuration here such as ODBC tracing.
-  
-    [ODBC Data Sources]
-    My MapR Drill DSN=MapR Drill ODBC Driver
-  
-    [Sample MapR Drill DSN]
-    # Description: DSN Description.
-    # This key is not necessary and is only to give a description of the data source.
-    Description=My MapR Drill ODBC Driver DSN
-    # Driver: The location where the ODBC driver is installed to.
-    Driver=/opt/mapr/drillodbc/lib/universal/libmaprdrillodbc.dylib
-  
-    # Values for ConnectionType, AdvancedProperties, Catalog, Schema should be set here.
-    # If ConnectionType is Direct, include Host and Port. If ConnectionType is ZooKeeper, include ZKQuorum and ZKClusterID
-    # They can also be specified in the connection string.
-    ConnectionType=Zookeeper
-    HOST=localhost
-    PORT=31010
-    ZKQuorum=localhost:2181
-    ZKClusterID=drillbits1
-    AdvancedProperties={HandshakeTimeout=5;QueryTimeout=180;TimestampTZDisplayTimezone=utc;ExcludedSchemas=sys,INFORMATION_SCHEMA}
-    Catalog=DRILL
-    Schema=
-
-[Driver Configuration Options]({{ site.baseurl }}/docs/driver-configuration-options) describes the configuration options available for controlling the
-behavior of DSNs that use the MapR Drill ODBC Driver.
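-The DSN properties above follow standard INI syntax, so any INI-aware tool can read them the same way the driver manager does. The following sketch parses a DSN with Python's `configparser`; the section and key names mirror the sample above, and the script itself is illustrative, not part of the driver.

```python
import configparser

# A trimmed copy of the sample .odbc.ini DSN shown above.
SAMPLE = """
[ODBC Data Sources]
My MapR Drill DSN=MapR Drill ODBC Driver

[Sample MapR Drill DSN]
Description=My MapR Drill ODBC Driver DSN
Driver=/opt/mapr/drillodbc/lib/universal/libmaprdrillodbc.dylib
ConnectionType=Zookeeper
ZKQuorum=localhost:2181
ZKClusterID=drillbits1
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

# Read the connection properties for one DSN section.
dsn = parser["Sample MapR Drill DSN"]
print(dsn["ConnectionType"], dsn["ZKQuorum"], dsn["ZKClusterID"])
# Zookeeper localhost:2181 drillbits1
```

-Note that `configparser` treats option names case-insensitively, which matches how INI keys such as `ZKQuorum` are typically looked up.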
-
-----------
-
-## Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini
-
-The `.odbcinst.ini` file is an optional configuration file that defines the ODBC
-drivers. It is optional because you can specify drivers
-directly in the `.odbc.ini` configuration file. The following sample shows a possible configuration.
-  
-**Example**
-
-    [ODBC Drivers]
-    MapR Drill ODBC Driver=Installed
-   
-    [MapR Drill ODBC Driver]
-    Description=MapR Drill ODBC Driver
-    Driver=/opt/mapr/drillodbc/lib/universal/libmaprdrillodbc.dylib
-
-----------
-
-## Step 4: Configure the MapR Drill ODBC Driver
-
-Configure the MapR Drill ODBC Driver for your environment by modifying the `.mapr.drillodbc.ini` configuration
-file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration.
-
-**Example**
-
-    [Driver]
-    ## - Note that this default DriverManagerEncoding of UTF-32 is for iODBC.
-    DriverManagerEncoding=UTF-32
-    ErrorMessagesPath=/opt/mapr/drillodbc/ErrorMessages
-
-    LogLevel=0
-    LogPath=
-    SwapFilePath=/tmp
-
-    #   iODBC
-    ODBCInstLib=libiodbcinst.dylib
-
-### Configuring .mapr.drillodbc.ini
-
-To configure the MapR Drill ODBC Driver in the `mapr.drillodbc.ini` configuration file, complete the following steps:
-
-  1. Open the `mapr.drillodbc.ini` configuration file in a text editor.
-  2. Edit the `DriverManagerEncoding` setting if necessary. The value is typically UTF-16 or UTF-32, depending on the driver manager: iODBC uses UTF-32 and unixODBC uses UTF-16. Review your ODBC driver manager documentation for the correct setting.
-  3. Edit the `ODBCInstLib` setting. The value is the name of the `ODBCInst` shared library for the ODBC driver manager that you use. The configuration file defaults to the shared library for `iODBC`. In Linux, the shared library name for iODBC is `libiodbcinst.so`. In Mac OS X, the shared library name for `iODBC` is `libiodbcinst.dylib`.
-     {% include startnote.html %}Review your ODBC Driver Manager documentation for the correct
-setting.{% include endnote.html %} 
-     Specify an absolute or relative filename for the library. If you use
-the relative file name, include the path to the library in the library path
-environment variable. In Linux, the library path environment variable is named
-`LD_LIBRARY_PATH`. In Mac OS X, the library path environment variable is
-named `DYLD_LIBRARY_PATH`.
-  4. Save the `mapr.drillodbc.ini` configuration file.
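-The platform-dependent choice in step 3 can be sketched as a small helper. The library names come from the text above; the function itself is illustrative, not part of the driver.

```python
import sys

def odbcinst_lib(platform: str) -> str:
    """Return the iODBC ODBCInstLib shared-library name for a platform,
    as described in step 3 above (illustrative helper, not driver code)."""
    # Mac OS X (sys.platform == "darwin") uses .dylib; Linux uses .so.
    return "libiodbcinst.dylib" if platform == "darwin" else "libiodbcinst.so"

# Name appropriate for the machine running this script.
print(odbcinst_lib(sys.platform))
```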
-
-### Next Step
-
-Refer to [Testing the ODBC Connection on Linux and Mac OS X]({{ site.baseurl }}/docs/testing-the-odbc-connection).
-

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/040-driver-configuration-options.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/040-driver-configuration-options.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/040-driver-configuration-options.md
deleted file mode 100644
index db1c7d2..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/040-driver-configuration-options.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: "Driver Configuration Options"
-parent: "Using ODBC on Linux and Mac OS X"
----
-You can use various configuration options to control the behavior of the MapR
-Drill ODBC Driver on Linux and Mac OS X. You can set these options in a connection string or in the
-`odbc.ini` configuration file for the Mac OS X version of the driver.
-
-{% include startnote.html %}If you use a connection string to connect to your data source, then you can set these configuration properties in the connection string instead of the `odbc.ini` file.{% include endnote.html %}
-
-The following table provides a list of the configuration options with their
-descriptions:
-
-| Property Name      | Description                                                                                                                                                                                                                                                                                               |
-|--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| AdvancedProperties | Separate advanced properties using a semi-colon (;) and then surround all advanced properties in a connection string using braces { and }. For details on supported advanced properties, see Appendix C: Advanced Properties.                                                                             |
-| Catalog            | The name of the synthetic catalog under which all of the schemas/databases are organized: DRILL                                                                                                                                                                                                           |
-| ConnectionType     | The following values are possible: Direct and Zookeeper. The Direct connection type connects to a Drill server using Host and Port properties in the connection string. The Zookeeper connection type connects to a Zookeeper cluster using ZKQuorum and ZKClusterID properties in the connection string. |
-| DRIVER             | (Required) The name of the installed driver: MapR Drill ODBC Driver                                                                                                                                                                                                                                       |
-| Host               | If the ConnectionType property is set to Direct, then indicate the IP address or hostname of the Drill server using the Host property.                                                                                                                                                                    |
-| Port               | If the ConnectionType property is set to Direct, then indicate the port on which the Drill server is listening using the Port property.                                                                                                                                                                   |
-| Schema             | The name of the database schema to use when a schema is not explicitly specified in a query. Note: Queries on other schemas can still be issued by explicitly specifying the schema in the query.                                                                                                        |
-| ZKClusterID        | If the ConnectionType property is set to ZooKeeper, then use ZKClusterID to indicate the name of the Drillbit cluster to use.                                                                                                                                                                             |
-| ZKQuorum           | If the ConnectionType property is set to ZooKeeper, then use ZKQuorum to indicate the server(s) in your ZooKeeper cluster. Separate multiple servers using a comma (,).                                                                                                                                   |
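-The properties in the table combine into a semicolon-separated connection string. A minimal sketch of assembling one, using property names from the table (the helper function itself is hypothetical):

```python
def build_connection_string(props: dict) -> str:
    """Join ODBC properties into a semicolon-separated connection string.
    Property names come from the table above; the helper is illustrative."""
    return ";".join(f"{key}={value}" for key, value in props.items())

# A Direct connection needs DRIVER, ConnectionType, Host, and Port.
direct = build_connection_string({
    "DRIVER": "MapR Drill ODBC Driver",
    "ConnectionType": "Direct",
    "Host": "localhost",
    "Port": "31010",
})
print(direct)
# DRIVER=MapR Drill ODBC Driver;ConnectionType=Direct;Host=localhost;Port=31010
```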
-

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/050-using-a-connection-string.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/050-using-a-connection-string.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/050-using-a-connection-string.md
deleted file mode 100644
index 39c920d..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/050-using-a-connection-string.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Using a Connection String"
-parent: "Using ODBC on Linux and Mac OS X"
----
-You can use a connection string to connect to your data source using ODBC on Linux and Mac OS X. For a list of
-all the properties that you can use in connection strings, see [Driver
-Configuration
-Options]({{ site.baseurl }}/docs/driver-configuration-options).
-
-The following example shows a connection string for connecting directly to a
-Drillbit:
-
-**Example**
-
-    DRIVER=MapR Drill ODBC Driver;AdvancedProperties={HandshakeTimeout=0;QueryTimeout=0;TimestampTZDisplayTimezone=utc;ExcludedSchemas=sys,INFORMATION_SCHEMA};Catalog=DRILL;Schema=hivestg;ConnectionType=Direct;Host=192.168.202.147;Port=31010
-
-The following example shows a connection string for connecting to a ZooKeeper
-cluster:
-
-**Example**
-
-    DRIVER=MapR Drill ODBC Driver;AdvancedProperties={HandshakeTimeout=0;QueryTimeout=0;TimestampTZDisplayTimezone=utc;ExcludedSchemas=sys,INFORMATION_SCHEMA};Catalog=DRILL;Schema=;ConnectionType=ZooKeeper;ZKQuorum=192.168.39.43:5181;ZKClusterID=drillbits1
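-Reading such a string back requires keeping the braced AdvancedProperties block intact, because it contains its own semicolons. One way to do this, as an illustrative sketch (the parser is not part of the driver):

```python
import re

def parse_connection_string(s: str) -> dict:
    """Split an ODBC connection string into key/value pairs, preserving
    the braced AdvancedProperties value. Illustrative helper only."""
    # Split on ';' only when no closing brace lies ahead without an
    # intervening opening brace, so semicolons inside {...} are kept.
    parts = re.split(r";(?![^{]*\})", s)
    return dict(part.split("=", 1) for part in parts if part)

cs = ("DRIVER=MapR Drill ODBC Driver;"
      "AdvancedProperties={HandshakeTimeout=0;QueryTimeout=0};"
      "ConnectionType=Direct;Host=192.168.202.147;Port=31010")
pairs = parse_connection_string(cs)
print(pairs["ConnectionType"], pairs["Host"])
# Direct 192.168.202.147
```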
-

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/060-advanced-properties.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/060-advanced-properties.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/060-advanced-properties.md
deleted file mode 100644
index 75b82ac..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/060-advanced-properties.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: "Advanced Properties"
-parent: "Using ODBC on Linux and Mac OS X"
----
-When you use advanced properties on Linux and Mac OS X, you must separate them using a semi-colon
-(;).
-
-For example, the following Advanced Properties string excludes the schemas
-named `test` and `abc`, sets both timeouts to 30 seconds, and sets the time zone
-to Coordinated Universal Time:
-
-`HandshakeTimeout=30;QueryTimeout=30;TimestampTZDisplayTimezone=utc;ExcludedSchemas=test,abc`
-
-The following table lists and describes the advanced properties that you can
-set when using the MapR Drill ODBC Driver on Linux and Mac OS X.
-
-| Property Name              | Default Value           | Description                                                                                                                                                                                                |
-|----------------------------|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| HandshakeTimeout           | 5                       | An integer value representing the number of seconds that the driver waits before aborting an attempt to connect to a data source. When set to a value of 0, the driver does not abort connection attempts. |
-| QueryTimeout               | 180                     | An integer value representing the number of seconds for the driver to wait before automatically stopping a query. When set to a value of 0, the driver does not stop queries automatically.                |
-| TimestampTZDisplayTimezone | local                   | Two values are possible: local (timestamps depend on the time zone of the user) and utc (timestamps appear in Coordinated Universal Time).                                                                 |
-| ExcludedSchemas            | sys, INFORMATION_SCHEMA | The value of ExcludedSchemas is a list of schemas that do not appear in client applications such as Drill Explorer, Tableau, and Excel. Separate schemas in the list using a comma (,).                    |
-| CastAnyToVarchar           | true                    | Casts the “ANY” and “(VARCHAR(1), ANY) MAP” data types returned from SQL column calls into type “VARCHAR”.                                                                                                 |
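-Stripping the braces and splitting on semicolons recovers the individual properties. A minimal sketch using the values from the example above (the helper function is illustrative):

```python
def parse_advanced(s: str) -> dict:
    """Parse a braced, semicolon-separated AdvancedProperties string
    into a dict. Illustrative helper, not part of the driver."""
    s = s.strip().strip("{}")  # drop surrounding braces if present
    return dict(pair.split("=", 1) for pair in s.split(";") if pair)

props = parse_advanced("{HandshakeTimeout=30;QueryTimeout=30;"
                       "TimestampTZDisplayTimezone=utc;ExcludedSchemas=test,abc}")
print(props["ExcludedSchemas"])
# test,abc
```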
-

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/070-testing-the-odbc-connection.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/070-testing-the-odbc-connection.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/070-testing-the-odbc-connection.md
deleted file mode 100644
index b6bca05..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-linux-and-mac-os-x/070-testing-the-odbc-connection.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "Testing the ODBC Connection"
-parent: "Using ODBC on Linux and Mac OS X"
----
-To test the ODBC connection on Linux and Mac OS X, you can use an ODBC-enabled client application. For a
-basic connection test, you can also use the test utilities that are packaged
-with your driver manager installation.
-
-For example, the iODBC driver manager includes simple utilities called
-`iodbctest` and `iodbctestw`. You can use either one of these utilities to
-establish a test connection with your driver and your DSN. Use `iodbctest` to
-test how your driver works with an ANSI application. Use `iodbctestw` to test
-how your driver works with a Unicode application.
-
-**Note:** There are 32-bit and 64-bit installations of the iODBC driver manager available. If you have only one or the other installed, then the appropriate version of iodbctest (or iodbctestw) is available. However, if you have both 32- and 64-bit versions installed, then you need to be careful that you are running the version from the correct installation directory.
-
-Visit [http://www.iodbc.org](http://www.iodbc.org/) for further details on
-using the iODBC driver manager.
-
-## Testing the ODBC Connection
-
-Complete the following steps to test your connection using the iODBC driver
-manager:
-
-  1. Run `iodbctest` or `iodbctestw`. The program prompts you for an ODBC connection string.
-  2. If you do not remember the DSN name, type a question mark (?) to see a list of DSNs.
-  3. If you are connecting directly to a Drillbit, type an ODBC connection string using the following format:
-
-     `DRIVER=MapR Drill ODBC Driver;ConnectionType=Direct;Host=HostName;Port=PortNumber`
-     
-     OR
-     
-     If you are connecting to a ZooKeeper cluster, type an ODBC connection string
-using the following format:
-
-     `DRIVER=MapR Drill ODBC Driver;ConnectionType=ZooKeeper;ZKQuorum=Server1:Port1,Server2:Port2;ZKClusterID=DrillbitName`
-
-     If the connection is successful, the `SQL>` prompt appears.
-

http://git-wip-us.apache.org/repos/asf/drill/blob/8f1c9a42/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/005-odbc-on-windows-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/005-odbc-on-windows-introduction.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/005-odbc-on-windows-introduction.md
deleted file mode 100644
index 827ef44..0000000
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/005-odbc-on-windows-introduction.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
----
-title: "ODBC on Windows Introduction"
-parent: "Using ODBC on Windows"
----
-The MapR Drill ODBC driver provides BI tools access to Drill’s flexible query
-capabilities so that users can quickly explore various data sources. The MapR
-Drill ODBC driver includes Drill Explorer, which is a simple user interface
-that enables users to examine the content of data sources and create views
-before visualizing the data in a BI tool.
-
-Once you install the MapR Drill ODBC Driver, you can create ODBC DSNs to Drill
-data sources using the ODBC Administrator tool and then use the DSNs to access
-the data from BI tools that work with ODBC. Drill can connect to data with
-well-defined schemas, such as Hive. Drill can also connect directly to data
-that is self-describing, such as HBase, Parquet, JSON, CSV, and TSV.
-
-Complete the following steps to connect to a Drill data source from a BI tool
-using ODBC:
-
-  * [Step 1. Install the MapR Drill ODBC Driver]({{ site.baseurl }}/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows)
-  * [Step 2. Configure ODBC Connections to Drill Data Sources]({{ site.baseurl }}/docs/step-2-configure-odbc-connections-to-drill-data-sources)
-  * [Step 3. Connect to Drill Data Sources from a BI Tool]({{ site.baseurl }}/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool)
-
-For examples of how you can use the MapR Drill ODBC Driver to connect to Drill
-Data Sources from BI tools, see [Step 3. Connect to Drill Data Sources from a
-BI Tool]({{ site.baseurl }}/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool). While the documentation includes examples for Tableau, you can use
-this driver with any BI tool that works with ODBC, such as Excel,
-MicroStrategy, and Toad.
-
-The following figure shows how a BI tool on Windows uses an ODBC connection to
-access data from a Hive table:
-
-![BI to Drill Interface]({{ site.baseurl }}/docs/img/BI_to_Drill_2.png)
-
-The following components provide applications access to Drill data sources:
-
-<table ><tbody><tr><th >Component</th><th >Role</th></tr><tr><td valign="top">Drillbit</td><td valign="top">Accepts queries from clients, executes queries against Drill data sources, and returns the query results.</td></tr><tr><td valign="top">ODBC Data Source Administrator</td><td valign="top">The ODBC Data Source Administrator enables the creation of DSNs to Apache Drill data sources.<br /> In the figure above, the ODBC Data Source Administrator was used to create <code>Hive-DrillDataSources</code>.</td></tr><tr><td valign="top">ODBC DSN</td><td valign="top"><p>Provides applications information about how to connect to the Drill source.</p>In the figure above, <code>Hive-DrillDataSources</code> is a DSN that provides connection information to the Hive tables.</td></tr><tr><td colspan="1" valign="top">BI Tool</td><td colspan="1" valign="top"><p>Accesses Drill data sources using the connection information from the ODBC DSN.</p>In the figure above, the BI tool uses <code>Hive-DrillDataSources</code> to access the <code>hive_student</code> table.</td></tr></tbody></table>
\ No newline at end of file

