eagle-commits mailing list archives

From h..@apache.org
Subject [50/84] [partial] eagle git commit: Clean repo for eagle site
Date Mon, 03 Apr 2017 11:54:58 GMT
http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/CONTRIBUTING.md
----------------------------------------------------------------------
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100755
index 6722d8a..0000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,30 +0,0 @@
-<!--
-{% comment %}
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to you under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-{% endcomment %}
--->
-
-## Contributing to Eagle
-
-Contributions via GitHub pull requests are gladly accepted from their original
-author. Along with any pull requests, please state that the contribution is
-your original work and that you license the work to the project under the
-project's open source license. Whether or not you state this explicitly, by
-submitting any copyrighted material via pull request, email, or other means
-you agree to license the material under the project's open source license and
-warrant that you have the legal authority to do so.
-
-Learn more from [https://cwiki.apache.org/confluence/display/EAG/Contributing+to+Eagle](https://cwiki.apache.org/confluence/display/EAG/Contributing+to+Eagle)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/DISCLAIMER
----------------------------------------------------------------------
diff --git a/DISCLAIMER b/DISCLAIMER
deleted file mode 100644
index a899a17..0000000
--- a/DISCLAIMER
+++ /dev/null
@@ -1,11 +0,0 @@
-Apache Eagle is an effort undergoing incubation at the Apache Software
-Foundation (ASF), sponsored by the Apache Incubator PMC.
-
-Incubation is required of all newly accepted projects until a further
-review indicates that the infrastructure, communications, and decision
-making process have stabilized in a manner consistent with other
-successful ASF projects.
-
-While incubation status is not necessarily a reflection of the
-completeness or stability of the code, it does indicate that the
-project has yet to be fully endorsed by the ASF.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/KEYS
----------------------------------------------------------------------
diff --git a/KEYS b/KEYS
deleted file mode 100644
index 6da8330..0000000
--- a/KEYS
+++ /dev/null
@@ -1,144 +0,0 @@
-This file contains the PGP keys of various developers.
-Please don't use them for email unless you have to. Their main
-purpose is code signing.
-
-Examples of importing this file in your keystore:
- gpg --import KEYS
- (need pgp and other examples here)
-
-Examples of adding your key to this file:
- pgp -kxa <your name> and append it to this file.
- (pgpk -ll <your name> && pgpk -xa <your name>) >> this file.
- (gpg --list-sigs <your name> && gpg --armor --export <your name>) >> this file.
-
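-An example of verifying a downloaded release against these keys (a minimal sketch;
-the artifact and signature file names below are placeholders for whichever release
-you downloaded):
- gpg --import KEYS
- gpg --verify apache-eagle-<version>-src.tar.gz.asc apache-eagle-<version>-src.tar.gz
-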
------------------------------------------------------------------------------------
-
-pub   4096R/855D61B1 2015-11-21
-uid                  Hao Chen <hao@apache.org>
-sig 3        855D61B1 2015-11-21  Hao Chen <hao@apache.org>
-sub   4096R/E8A36316 2015-11-21
-sig          855D61B1 2015-11-21  Hao Chen <hao@apache.org>
-
------BEGIN PGP PUBLIC KEY BLOCK-----
-
-mQINBFZQNn8BEADSDOUw1kLBxI/P3npkqvkE5TcZjcowDx9xeSQQIZ4UhChRslc3
-rrgTzcbpzEsybRV7VHMBYBSxRS0pWKPN6N6EZdsLxz+WL4GENzMTNHN0RhJuXw2t
-bThsEqKawXeP/qdH/NgXdLiy0C0rtY1W5T8AKDsNEiVC+snbsEm6EKdRA+72grPw
-FW5CqJfJPPOjE85O11T2s++qZdgU0Cn/F566rODBRQKgrtk1FiKUC0Rko+zmGfxl
-HInhxFS+YlIhC5iCsy6kG3EhZHluAs7rlsF2qOqPDXGedw9XdL97DA9j3KL6i+e0
-5ggV58J4C4UImIseLTofTQlKD8y7O7meuhQzZXoT1prIOBr1gvCgLzKJI1XQEFvC
-2PFGNUbiJBff05i3ogYzl1Q4XriTS+CyIeGX50nWVOXLsJpiu94/xz/T0pmz1vPG
-Zs1bITSbbscIufcmv5BoS/volwiuQqMEoB+QsYJRo4psKGZz3NDzKjl+Hyt88k+L
-dPGBuwt/433w2RDWmmEIO6V01ROjW7AH+raXfzujHQgj/q0aN7zNThQ/ATqkBYir
-6Dlnf/8219p+1+rCWk+UxrWh2lQuhuS5SNidrm63QkdcNEvm5TrHUdo8MDS3wtQb
-Nasbiy7eGcD7j7WkJTeRWo6a71YAFOaxnWV3o7ZCKUD4RDOqxY78lshm/wARAQAB
-tBlIYW8gQ2hlbiA8aGFvQGFwYWNoZS5vcmc+iQI4BBMBAgAiBQJWUDZ/AhsDBgsJ
-CAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRCFTl4ehV1hsb2pEACth7FD/aD4p77m
-K+w186eL5bH0oAyDH05OiVe+Iv+1PrlOHobBM6LOXp8amVTJNXOMAkfWUM+YEVYh
-uNUTYBoYnbiBHT+5fPun/7J65wiHNQpkYvAq7P3PI6kSeEWrT/yB7zxApkkydRQV
-WTMWinYxKwfCFv5uQ8l11lqsDTYU1xoU4rYJ6F0pjasU9GWanoeUsw6Q8Ye+L8Rd
-WMcNt+WcEtEd/1itpUs429gZbGf4U8w47prnr5BO8nJS1vl0ghloasWZK3jEDpAc
-Uxe3+v8HEE3nGrSWenRisv9Jmy4VFq7vRSfizP3UiMmowabsAmc4lS8ed1BygflR
-3mk38dd95zvi5QxfMfK1etDjVTxr0jr0SnLfRL/JOfYg3rnPNM6czHhNXe8k8jd2
-5majf+b3MZUQ62OXcOIgt/Yht9iAJSguGXQ4GYobUQRJ0LbuVyEJnnTT3h1ViiTL
-Lj91cCbd4oPBk0zTFRWb/E6LMu1OzZrfJ2+6Tg8kW+asEw70KWx1kSNBqNUCk6IW
-kbjhv7REyWPJlTj+jgC5tZQstA6OkA/hxJkRVqGEvKzV8gmdYt57VrDsu99nyNYB
-4GEGUQ1+3VobLaDuNd0EcWKhiVSkbCJHuhvKaCak0M/71UrkH34LAUjjqxxORX6q
-MrTE42ZzxwxCfBL5mLgcbFoCayqzpLkCDQRWUDZ/ARAA1Nc8VG9GhecrgcXhgc/2
-OFCcEKLKS97cNt9CJe4qg7ZBg1WGjSb65VmJMxLAP2mZchXqmqLriNS4VTiiDAT+
-6qOKkVsFhtGu9zoxzF6JnBSkmZ616D0oKBv05AIGaaLm5h3U2wMxwluhP1dv80zK
-OYY0hjwdMaN7q16wklTe/DlusRV1CFCGqBQV560j+ml4zRvqe6kV4RA5eCNLMEIO
-D6ghps5xDk7X+1cZC7tlfSwDI19SIOrX/2/EfpVvninHzHPvw9RNjOIgAjXaqGfV
-AI769e67FDXMDdyR04U9OHVpB1Bx46YFXwfMjLu+zxw9ZO2zgDtCk10nwp1kl1lu
-0QVqZYYUoqyEdBYhd+MOEdRGYejkx9xOIaPYaA3Qbrny8ZPfptxj3fnSaNfuo4ox
-F1Q6V27kCXlA2Y8FbVq3gus5M/VQddEmX7UElIw2XouSh2e8eLOu7foGywWlLjgf
-JDqgQxq6YzhqkGsJpc3NbhLi9MKEtZ3O98iJGcceK7S7ytW/mGnxEX/oU/W/iU9L
-sjHmLXG01IMgKFBxENvRRGXmMhjbc244duEXMh2/V98mfqRuAFKt+cYGSkxw9sL7
-OpcQ1J1cBcj+hcozTWzXjIXgM05/xz2m3b1G76C/TCciAy5XXWuBJz6oQ4InWGRE
-fGV39jScRk+EVFoVLrnLFLEAEQEAAYkCHwQYAQIACQUCVlA2fwIbDAAKCRCFTl4e
-hV1hsfOVEACOou7e22jXhP5DjqNeSl8Qr6r9/SPRANhhgSZoF9xSjZjOfcly2dtQ
-sgS/ndYu67/KTtPSMYugi1kaogXqohm3yVOZfwe3Nsye9jSpU1kHSIelRYQs366Q
-9tzAheqG/BlmN8EkleUTFFN9PHh8IqjJMS7SyhnIZ7Tma6uUrUGRGQ5noacLn2cX
-fcVUySl8z3Pv2K7T9HhxrfLX5ZWLUcM45yaiDbFtMNTgEuH+bLs1FwQwWx8Xy8L1
-wcu3GdNsoZ/+PEaWhxIeUrRTkmLQMXMA7d6HJgLj/yunJ8rDI1+InTch37Fbi4Ob
-M5UFIesrZb/puc3t43oPVf0CGvK5P+3mLJ65zm1v0NKH3wbzvGCWBwNvXsUT4HFy
-pqdkb7jarP0iORGdQyaXLQjOxTc8AEg2XuFHBceJzKHMhNXvWAwl0tREs6OjQ1R9
-zok2NC4dwMjXcLiIfCVacjYZGln4+DWbZ1uANJXxgQSEm56S7i4TIWIfQojbk28c
-E9CJ3Ero4bwJYhDyfwIE0gR4qSjAhc1okfxJed0fWsGYx7fw/CreB6WaTC6idRFE
-8PxC9obIyg8BCUaY4w7YX4ECzYsmCZs29QZsFXEIQOOAbw4O0A1wu6mT3dz5sP9s
-J/cbsuof93/Z0ET6S1QbbM5ayFB5vtmUVhA5xfBeHJgnH93l1rpccA==
-=KI+v
------END PGP PUBLIC KEY BLOCK-----
-
-
-
-pub   rsa4096/F1E50006 2016-06-29 [SC]
-uid         [ultimate] Michael Wu (CODE SIGNING KEY) <mw@apache.org>
-sig 3        F1E50006 2016-06-29  Michael Wu (CODE SIGNING KEY) <mw@apache.org>
-sig 3        BB7D5724 2016-07-12  Hemanth Dendukuri (CODE SIGNING KEY) <hdendukuri@apache.org>
-sub   rsa4096/63945BF5 2016-06-29 [E]
-sig          F1E50006 2016-06-29  Michael Wu (CODE SIGNING KEY) <mw@apache.org>
-
------BEGIN PGP PUBLIC KEY BLOCK-----
-Version: GnuPG v2
-
-mQINBFdzfg0BEACuEFvW6CAJVT5Z4qGjO54GySV5ySUSLpiJMyb3yB5ggQCDmLRp
-Lfq7ABdBchWy1d0PCZcqZNN+hfAGC5wgbwmf0is1LP4pYjCtuw9ES4eokypHw+bj
-Njn5GB/MfRUqYchela7wLjoMXSkdsXC8PtzBUVaf3MQR4CITU9Dvsx0qia9jijnT
-oW6ykQkQmgEXfDVms28vkshwpiE8nj6b5n24WGwUdgNwYr+ddp2SYiFY/5lv1eKE
-PyzpxpHtPYZ+h1CpnLSBffdmoDcJnCYT3jiMaQSUQK3SMOJHzWNXdRnKXrhyeMxf
-6q9pn99qb7jQ47AQwO552Vd351OcS0duH7Zo4vFryxNulgmqF4TAbiw8900ws7XV
-6yFPbRi/+/8pVTceXP3G4mt3EUWhidvx2tXof+IIjz7yTvL5WLfhblNV9C0wX9iX
-8noQcE5pUQ/5zxH4RhZZHVGrX2wGsCA7TEtnOUaZmldjkLJ3Tt/31TK4fspoyxHv
-vsUN+oH0ssRBtVhq7bY1m1/ExSvBnmqTZguK2hQKv88B9mNKIP5U3ulVh+WoA1vt
-Mt7bPJ2BzZIfeUHhKMjPzNrqRN//rtGvkE3KlxCQ7q93FUkPbLQC+Muqb1dy30rS
-HFREeOx6H/vGnYCXApKMO6RcgIOksPEc+AcqddVrrSeDITiIKpAF9AcRrwARAQAB
-tC1NaWNoYWVsIFd1IChDT0RFIFNJR05JTkcgS0VZKSA8bXdAYXBhY2hlLm9yZz6J
-AjcEEwEKACEFAldzfg0CGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQItL0
-W/HlAAZ8Cw//au2nEYqioVWu+GN/xZmsRPVoTmU4Yl0CvJsv61662BGApZfMXJKA
-0zfzJdJ8LtgvyBi9UxlkOuLK4nlP7I799pB4sDkC0twQm81ZVt2jOQ0BSaakClON
-Ia7yrr3s7BgNFAIR22EVQq8m2tI15O5uHXYGjqqhn14RD5M9a+dg17IXlmyYma+2
-yRO106Y4lsHY/4GULjU3ing4GgUVbU3yeS9za65k6ojn85ZS/tGIitrZbdy5LIUw
-kSlRV/WwbS6am3xSqmt2LJwmcCJN6khaVTJK9EKTF/bpQh8ddAIIJzZKk7i6IMOB
-Tnx/+JrmWRc4O+Re13AEHqyLuFKPIoLG1j8uaMzruV+R17FYDwnktzVRJvDqNdWE
-3qZZiKADYF7u+PDtGcoDVxGuauKTR2wIRcxbXruOHHGfHICsu5AC4CEkqfcNldl5
-c6hGicQiifqWvAl+YZZu085GzqW9GLXyQbz7SaxNhOdJmaSj/rCgBiVG3ymNHjaI
-KylNxYg6aFNDf8jmjII9bUdAO+KeGg4XdtQDj7dMIWIMOkDZ1SwEo+PomkjOt0W9
-utsjZM0Gf4nifYVCp8Mqq327igECyjwoKtImBhYeZbmcIkUrn7j3qU2BtnuCbIef
-dmh/6/0dK44sY+G4elNlidsxIlFXEEU+MkMtG5AAWdiFgd5fva7acwKJAiIEEwEK
-AAwFAleEXZgFgweGH4AACgkQ4pPgh7t9VyT91Q//d7xi37PBnVkOrE2B27F6kWpt
-ce/4Y426t2g9obXPf0TxXB2/aRRGQgwpNgn7de3q9eoTxjxgHiCEn6wZ4bIM/TkB
-iQ+OiFvtKoaOcuod+ZaYbrBnFxShoHKLCGB3g1uHalJ7Y9Kpkqt+tl0tiTZRz72w
-sSyDuGkKZdoMhh5aW73Hf0y/hsEauEUW/Q4Dmk90W6pGHzD2DtFFbALJk+ehlVgj
-ab2uotFSaPTEQ/NV1g2K9vhBaLIvW9wtYo5VvRAzRGnEp8v/oHFhr93v2QVtA6Li
-U1tdpt/cfPBGgsmph0++gWGbBsYN0WszgFKbi6F1+omQMMeeGP+0hweEtJpoLux9
-JPpto1xhuKOdm5nKPihBOQBpwQxbxukaWx3UtOSE5tpb+kwY5i8gxG30zo+J5gAe
-lFyMdvOHO5EsdGSp3OR6N1SWQRdym/ng/YTnLoAG5C6o3k+qAf1dKhIAsMySGRee
-bv+BkteN/2pK/DtFmMU/JmiJYdou0AZ4I8BTrL1ibn8Q4jNITJkegyQ0csDr5pEn
-ON3OdJmr3ykY1Vh7yQOGSLyhL19+Uk799mODeqtvWRnxNiGaNZkoKB5U7z2EFo1n
-vPltEG5sT+OJa8Bt738uf8UeIFab9TCjV5WZ3OB4pHKuGOKV1hRgBKbCU0CQT652
-b/6vyvMp5hegXN4cL6u5Ag0EV3N+DQEQANpSqMYMnLLH4E0fn57LgRwq6O4ZbroV
-j8QLCmSizW2U6NA5aBngaRrCNFOjspamw44Dj+xtoGY7VbD7hU0hG3NIMtNTf96F
-ygl1cYiXpeUwiFwgmvJ+HMjxk2AYhnPBqU1vS7CCmEuiBh5lUyeMwUG5ixHix4Jl
-6pwZgM51V8VhdV64LcZ+31s7H5e9mpRFLVP3XF48YXO2rzhp1lykcVwiAZ/lIY8y
-mCTUmEEZ6DuVIaMMCd37+h81KGfN6ysAGbIvmy+N5LjOBmNrTTkQ5vLtuajE/jUO
-N5vWM79N0CGh/qJdjN2/zwNGBezHoDzgnpih2ABQ5q2g6ETBRswEcpiiyZs7pXd4
-rw3OiYmumEy+wRX6GqZajUiEtgXqwxHt75RC7TkHEi7SIVV3SxF0U2DxHWJegM23
-cz/JACxHDv9/MIPhrdHYrL3RPXh3oDER9swU+JddJ4RxsLoGf3idPVSVi24tPnsS
-xqwleTKrx1mX88CAS+/ffxluFqhHfz2JfrP/vfDNMow1/QXh25ILaR8+wNTSxAi+
-/GfKAD/K+k8hhYzJFKP3Q+k0GI96kZ+WdzeJsJpxdKwa0Ss4IKDBD1Vsdai3uLZi
-5PnVLKwLWDGxi9mr462fZ/WuAL9MdPrmiJ6qo4+4Yl2DMTrS6AkGv8xZydsQKY4j
-Fo7RDXR15YXNABEBAAGJAh8EGAEKAAkFAldzfg0CGwwACgkQItL0W/HlAAYOPA//
-csTRJPfk0Mi4Qg9DArkpWK+HtCsmxExQx9AdZND8Bq9+OiSuegPgtC15N9KrP8LO
-Ep0WenWp7zvc/xY84HjPPUIZlCB17e/7rnRJl1axM48vqArMOIP0bMb58RWqJnW8
-VfbSnzogt6vsnboPQDGW3nJxrjSC7hO1RbIxXQBtropS2/AYyWhQdOk7kkcDg9bb
-XLBk9RT49aSaxjl7fUQ1Co/J/Beh7smvi37sQWnBqWq+Srl1q5vo+iYDl5LFnf68
-VCq7MRC0XG97/ubou6mou8AT82CdXMXqNujL9fne8+/0tAjebXqtGodi9hfE74m/
-4ENLiLGij76F9GKOnVMOV4Zz6IhkBmeHBnqvM3ovBSWtfuKn5Ks6TgstYi/bhSt8
-TGf0BhXhkEgqJY30e0PIzAKdd3+jgWQCvtN3ajLZ3EGv+W5WtejtVJhEgevn87DJ
-ZGMyzvdq2WEJjt4M9S/WvYe4T2WIyVrvsEDqlaSyzbQg+k0DqBxJVf7QCA1/vRXM
-+ZbFKwHB1lBXbgqaUL3wr7dSYc5xTXcHjKguOucrh/JjTp60gParmniFi432VkKK
-9pNowayzpXGRZPZ2Jed9nVGOI2NPAhqC4+uvnnp2UUerDDJ3J2dBqggvL0PgTwIm
-0d1p8cCKD7HXYPJUZmSBfBpgnKN8Y6P/wmONoFMzGG0=
-=jq5M
------END PGP PUBLIC KEY BLOCK-----

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/LICENSE
----------------------------------------------------------------------
diff --git a/LICENSE b/LICENSE
deleted file mode 100755
index 39aac98..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,237 +0,0 @@
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "{}"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright {yyyy} {name of copyright owner}
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
-
-==============================================================================
-Apache Eagle Subcomponents:
-
-The Apache Eagle project contains subcomponents with separate copyright
-notices and license terms. Your use of the source code for these
-subcomponents is subject to the terms and conditions of the following
-licenses.
-
-==============================================================================
-For "six": eagle-external/hadoop_jmx_collector/lib/six/
-==============================================================================
-This product bundles "six: a Python 2 and 3 compatibility library", which is available under
-
-"
-Copyright (c) 2010-2016 Benjamin Peterson
-
-Permission is hereby granted, free of charge, to any person obtaining a copy of
-this software and associated documentation files (the "Software"), to deal in
-the Software without restriction, including without limitation the rights to
-use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
-the Software, and to permit persons to whom the Software is furnished to do so,
-subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
-FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
-COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
-IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-"
-
-For details, see https://bitbucket.org/gutworth/six/raw/a9b120c9c49734c1bd7a95e7f371fd3bf308f107/LICENSE.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/NOTICE
----------------------------------------------------------------------
diff --git a/NOTICE b/NOTICE
deleted file mode 100644
index a0510f1..0000000
--- a/NOTICE
+++ /dev/null
@@ -1,5 +0,0 @@
-Apache Eagle
-Copyright 2015-2017 The Apache Software Foundation
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
deleted file mode 100755
index 7b541b5..0000000
--- a/README.md
+++ /dev/null
@@ -1,98 +0,0 @@
-<!--
-{% comment %}
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to you under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-{% endcomment %}
--->
-
-# Apache Eagle
-
->  The intelligent monitoring and alerting solution instantly analyzes big data platforms for security and performance
-
-Apache® Eagle™ is an open source analytics solution for instantly identifying security and performance issues on big data platforms such as Apache Hadoop, Apache Spark, and NoSQL stores. It analyzes data activities, YARN applications, JMX metrics, and daemon logs, and provides a state-of-the-art alert engine to identify security breaches and performance issues and surface insights.
-
-For more details, please visit [https://eagle.apache.org](https://eagle.apache.org)
-
-[![Build Status](https://builds.apache.org/buildStatus/icon?job=incubator-eagle-main)](https://builds.apache.org/job/incubator-eagle-main/) 
-[![Coverage Status](https://coveralls.io/repos/github/apache/incubator-eagle/badge.svg)](https://coveralls.io/github/apache/incubator-eagle)
-
-## Documentation
-
-You can find the latest Eagle documentation on [https://eagle.apache.org](https://eagle.apache.org/docs). This [README](README.md) file only contains basic setup instructions.
-
-## Downloads
-
-* Development Version: [eagle-0.5-SNAPSHOT](https://github.com/apache/eagle/archive/master.zip) (Under development)
-* Latest Release
-    * [eagle-0.4.0-incubating](http://eagle.apache.org/docs/download-latest.html)
-* Archived Releases
-    * [eagle-0.3.0-incubating](http://eagle.apache.org/docs/download.html#0.3.0-incubating)
-    * [More releases](http://eagle.apache.org/docs/download.html)
-
-## Getting Started
-
-### Prerequisites
-
-* [JDK 8](https://jdk8.java.net/): Java Environment `Version 1.8`
-* [Apache Maven](https://maven.apache.org/): Project management and comprehension tool `Version 3.x`
-* [NPM](https://www.npmjs.com/): Node package management tool `Version 3.x`
-
-### Building Eagle 
-
-> Since version 0.5, Eagle is only built on JDK 8.
-
-Eagle is built with [Apache Maven](https://maven.apache.org/). NPM must also be installed (on macOS, try `brew install node`). To build Eagle, run:
-    
-    mvn clean package -DskipTests 
-
-After a successful build, you will find the Eagle binary tarball at:
-    
-    eagle-assembly/target/eagle-${VERSION}-bin.tar.gz
-
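-To try the binary package, unpack it somewhere convenient (a minimal sketch; the exact file name depends on the version you built, and the target directory is only an example):
-
-    tar -xzf eagle-assembly/target/eagle-*-bin.tar.gz -C /opt
-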
-### Testing Eagle 
-
-    mvn clean test
-
-### Developing Eagle
-
-* (Optional) Install and start the [HDP Sandbox](http://hortonworks.com/products/sandbox/), which provides an all-in-one virtual machine with most dependency services (ZooKeeper, Kafka, HBase, etc.) and the monitored Hadoop components.
-* Import Eagle as a Maven project into an IDE such as [IntelliJ IDEA](https://www.jetbrains.com/idea/)
-* Start **Eagle Server** in `debug` mode by running (default HTTP port: `9090`, default SMTP port: `5025`)
-
-        org.apache.eagle.server.ServerDebug
-  
-  This will start some helpful services for convenient development (a quick check is sketched after this list):
-  * Local Eagle Service on [`http://localhost:9090`](http://localhost:9090)
-  * Local SMTP Service on `localhost:5025` with a REST API at [`http://localhost:9090/rest/mail`](http://localhost:9090/rest/mail)
-* Start **Eagle Apps** with Eagle Web UI in `LOCAL MODE`.
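-
-Once the local server is up, a quick sanity check (a minimal sketch; it simply lists the available application providers through the REST API):
-
-    curl -XGET http://localhost:9090/rest/apps/providers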
-
-## Getting Help
-
-* **Mail**: The fastest way to get a response from the Eagle community is to send an email to the mailing list [dev@eagle.apache.org](mailto:dev@eagle.apache.org),
-and remember to subscribe to the mailing list via [dev-subscribe@eagle.apache.org](mailto:dev-subscribe@eagle.apache.org)
-* **Slack**: Join the Eagle community on Slack via [https://apacheeagle.slack.com](https://apacheeagle.slack.com)
-* **JIRA**: Report requirements, problems, or bugs through the Apache JIRA system at [https://issues.apache.org/jira/browse/EAGLE](https://issues.apache.org/jira/browse/EAGLE)
-
-## FAQ
-
-[https://cwiki.apache.org/confluence/display/EAG/FAQ](https://cwiki.apache.org/confluence/display/EAG/FAQ)
-
-## Contributing
-
-Please review the [Contribution to Eagle Guide](https://cwiki.apache.org/confluence/display/EAG/Contributing+to+Eagle) for information on how to get started contributing to the project.
-
-## License
-
-Licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). For more details, please refer to the [LICENSE](LICENSE) file.

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/docs/README.md
----------------------------------------------------------------------
diff --git a/docs/README.md b/docs/README.md
deleted file mode 100644
index 0487b61..0000000
--- a/docs/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-# eagle-doc
-Temporarily holding new eagle documentation made by MkDocs.
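-
-To preview these docs locally (a minimal sketch, assuming Python and pip are available; the helper scripts under docs/bin wrap the same `mkdocs serve` command):
-
-    pip install mkdocs
-    cd docs && mkdocs serve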

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/docs/bin/demo-service.sh
----------------------------------------------------------------------
diff --git a/docs/bin/demo-service.sh b/docs/bin/demo-service.sh
deleted file mode 100755
index b62ccfa..0000000
--- a/docs/bin/demo-service.sh
+++ /dev/null
@@ -1,127 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-function print_help() {
-	echo "Usage: $0 {start | stop | restart | status}"
-	exit 1
-}
-
-if [ $# != 1 ]
-then
-	print_help
-fi
-
-BASE_DIR="$(dirname $0)"
-ROOT_DIR=$(cd "${BASE_DIR}/../"; pwd)
-BASE_NAME="$(basename $0)"
-SHELL_NAME="${BASE_NAME%.*}"
-CONF_FILE="${BASE_DIR}/doc-env.conf"
-
-if [ ! -f $CONF_FILE ]; then
-	echo "file missing: ${CONF_FILE}"
-	# abort instead of sourcing a missing configuration file
-	exit 1
-fi
-
-source ${CONF_FILE}
-
-LOG_DIR="log"
-TEMP_DIR="temp"
-FULL_NAME="${PROGRAM}-${SHELL_NAME}-${PORT}"
-LOG_FILE="${ROOT_DIR}/${LOG_DIR}/${FULL_NAME}.out"
-PID_FILE="${ROOT_DIR}/${TEMP_DIR}/${FULL_NAME}-pid"
-
-CURR_USER="$(whoami)"
-echo -n "[sudo] password for ${CURR_USER}: "
-# read the sudo password into a dedicated variable; do not clobber the shell's PWD
-read -s SUDO_PASS
-echo
-
-if [ ! -e ${ROOT_DIR}/${LOG_DIR} ]; then
-	# -S makes sudo read the password from stdin
-	echo ${SUDO_PASS} | sudo -S mkdir -p ${ROOT_DIR}/${LOG_DIR}
-	echo ${SUDO_PASS} | sudo -S chown -R ${USER}:${GROUP} ${ROOT_DIR}/${LOG_DIR}
-	echo ${SUDO_PASS} | sudo -S chmod -R ${FILE_MOD} ${ROOT_DIR}/${LOG_DIR}
-fi
-
-if [ ! -e ${ROOT_DIR}/${TEMP_DIR} ]; then
-	echo ${SUDO_PASS} | sudo -S mkdir -p ${ROOT_DIR}/${TEMP_DIR}
-	echo ${SUDO_PASS} | sudo -S chown -R ${USER}:${GROUP} ${ROOT_DIR}/${TEMP_DIR}
-	echo ${SUDO_PASS} | sudo -S chmod -R ${FILE_MOD} ${ROOT_DIR}/${TEMP_DIR}
-fi
-
-cd ${ROOT_DIR}
-
-start() {
-	echo "Starting ${FULL_NAME} ..."
-	nohup ${COMMAND} 1> ${LOG_FILE} & echo $! > $PID_FILE
-	if [ $? != 0 ];then
-		echo "Error: failed starting"
-		exit 1
-	fi
-	echo "Started successfully"
-}
-
-stop() {
-    echo "Stopping ${FULL_NAME} ..."
-	if [[ ! -f ${PID_FILE} ]];then
-	    echo "No ${PROGRAM} running"
-    	exit 1
-    fi
-
-    PID=`cat ${PID_FILE}`
-	kill ${PID}
-	if [ $? != 0 ];then
-		echo "Error: failed stopping"
-		rm -rf ${PID_FILE}
-		exit 1
-	fi
-
-	rm ${PID_FILE}
-	echo "Stopped successfully"
-}
-
-case $1 in
-"start")
-    start;
-	;;
-"stop")
-    stop;
-	;;
-"restart")
-	echo "Restarting ${FULL_NAME} ..."
-    stop; sleep 1; start;
-	echo "Restarting completed"
-	;;
-"status")
-	echo "Checking ${FULL_NAME} status ..."
-	if [[ -e ${PID_FILE} ]]; then
-	    PID=`cat $PID_FILE`
-	fi
-	if [[ -z ${PID} ]];then
-	    echo "Error: ${FULL_NAME} is not running (missing PID)"
-	    exit 0
-	elif ps -p ${PID} > /dev/null; then
-	    echo "${FULL_NAME} is running with PID: ${PID}"
-	    exit 0
-    else
-        echo "${FULL_NAME} is not running (tested PID: ${PID})"
-        exit 0
-    fi
-	;;
-*)
-	print_help
-	;;
-esac
-
-exit 0

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/docs/bin/doc-env.conf
----------------------------------------------------------------------
diff --git a/docs/bin/doc-env.conf b/docs/bin/doc-env.conf
deleted file mode 100755
index e1b0caa..0000000
--- a/docs/bin/doc-env.conf
+++ /dev/null
@@ -1,7 +0,0 @@
-export GROUP=jenkins
-export USER=jenkins
-export FILE_MOD=770
-export PROGRAM=mkdocs
-export ADDRESS=0.0.0.0
-export PORT=8000
-export COMMAND="${PROGRAM} serve -a ${ADDRESS}:${PORT}"
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/docs/docs/applications.md
----------------------------------------------------------------------
diff --git a/docs/docs/applications.md b/docs/docs/applications.md
deleted file mode 100644
index 74efcc6..0000000
--- a/docs/docs/applications.md
+++ /dev/null
@@ -1,378 +0,0 @@
-# HDFS Data Activity Monitoring
-
-## Monitor Requirements
-
-This application monitors user activities on HDFS via the HDFS audit log. Once an abnormal user activity is detected, an alert is sent within seconds. The pipeline of this application is:
-
-* Kafka ingest: this application consumes data from Kafka. In other words, users have to stream the log into Kafka first.
-
-* Data re-processing, which includes the raw log parser, IP zone joiner, and sensitivity information joiner.
-
-* Kafka sink: parsed data flows into Kafka again, where it is consumed by the alert engine.
-
-* Policy evaluation: the alert engine (hosted in the Alert Engine app) evaluates each data event to check whether it violates a user-defined policy. An alert is generated if the data matches the policy.
-
-![HDFSAUDITLOG](include/images/hdfs_audit_log.png)
-
-
-## Setup & Installation
-
-* Choose a site to install this application, for example 'sandbox'.
-
-* Install "Hdfs Audit Log Monitor" app step by step
-
-    ![Install Step 2](include/images/hdfs_install_1.png)
-
-    ![Install Step 3](include/images/hdfs_install_2.png)
-
-    ![Install Step 4](include/images/hdfs_install_3.png)
-
-
-## How to collect the log
-
-To collect the raw audit log on namenode servers, a log collector is needed. Users can choose any tool they like. Some common options are [logstash](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html), [filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html), and log4j appenders.
-
-For detailed instructions, refer to: [How to stream audit log into Kafka](using-eagle/#how-to-stream-audit-log-into-kafka)
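-
-A minimal sketch of one such approach, piping the audit log into Kafka with the console producer (the log path, Kafka installation directory, broker address, and topic name below are assumptions; adjust them to your environment):
-
-```
-# tail the HDFS audit log and publish each line to a Kafka topic
-tail -F /var/log/hadoop/hdfs/hdfs-audit.log | \
-  /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
-    --broker-list localhost:6667 --topic hdfs_audit_log
-```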
-
-## Sample policies
-
-### 1. monitor file/folder operations 
-
-Detect deletion of a file/folder on HDFS.
-
-```
-from HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX[str:contains(src,'/tmp/test/subtest') and ((cmd=='rename' and str:contains(dst, '.Trash')) or cmd=='delete')] select * group by user insert into hdfs_audit_log_enriched_stream_out
-```
-
-HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX is the input stream name, hdfs_audit_log_enriched_stream_out is the output stream name, and the content between [] defines the monitoring conditions. `cmd`, `src` and `dst` are fields of the HDFS audit log.
-
-   ![Policy 1](include/images/hdfs_policy_1.png)
-
-### 2. classify the file/folder on HDFS
-
-Users may want to mark some folders/files on HDFS as sensitive content. For example, by marking '/sys/soj' as "SOJ", users can monitor any operations they care about on '/sys/soj' and its subfolders/files.
-
-```
-from HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX[sensitivityType=='SOJ' and cmd=='delete'] select * group by user insert into hdfs_audit_log_enriched_stream_out
-```
-The example policy monitors the 'delete' operation on files/subfolders under /sys/soj. 
-
-### 3. Classify the IP Zone 
-
-In some cases, IPs are classified into different zones, and some zones may require higher secrecy. Eagle provides ways to monitor user activities at the IP level.
-
-```
-from HDFS_AUDIT_LOG_ENRICHED_STREAM_SANDBOX[securityZone=='SECURITY' and cmd=='delete'] select * group by user insert into hdfs_audit_log_enriched_stream_out
-```
-
-The example policy monitors the 'delete' operation on hosts in 'SECURITY' zone. 
-
-## Questions on this application
-
----
-
-# JMX Monitoring
-
-* Application "**HADOOP_JMX_METRIC_MONITOR**" provide embedded collector script to ingest hadoop/hbase jmx metric as eagle stream and provide ability to define alert policy and detect anomaly in real-time from metric.
-
-    |   Fields   ||
-    | :---: | :---: |
-    | **Type**    | *HADOOP_JMX_METRIC_MONITOR* |
-    | **Version** | *0.5.0-version* |
-    | **Description** | *Collect JMX Metric and monitor in real-time* |
-    | **Streams** | *HADOOP_JMX_METRIC_STREAM* |
-    | **Configuration** | *JMX Metric Kafka Topic (default: hadoop_jmx_metric_{SITE_ID})*<br/><br/>*Kafka Broker List (default: localhost:6667)* |
-
-## Setup & Installation
-
-* Make sure a site has already been set up (here we use a demo site named "sandbox").
-
-* Install "Hadoop JMX Monitor" app in eagle server.
-
-    ![Install Step 2](include/images/install_jmx_2.png)
-
-* Configure Application settings.
-
-    ![Install Step 3](include/images/install_jmx_3.png)
-
-* Ensure a Kafka topic named hadoop_jmx_metric_{SITE_ID} exists (in the current guide, hadoop_jmx_metric_sandbox); a sketch for creating it appears after this list.
-
-* Set up the metric collector for the monitored Hadoop/HBase components using hadoop_jmx_collector and modify its configuration.
-
-    * Collector scripts: [hadoop_jmx_collector](https://github.com/apache/incubator-eagle/tree/master/eagle-external/hadoop_jmx_collector)
-
-    * Rename config-sample.json to config.json: [config-sample.json](https://github.com/apache/incubator-eagle/blob/master/eagle-external/hadoop_jmx_collector/config-sample.json)
-
-            {
-                env: {
-                    site: "sandbox",
-                    name_node: {
-                        hosts: [
-                            "sandbox.hortonworks.com"
-                        ],
-                        port: 50070,
-                        https: false
-                    },
-                    resource_manager: {
-                        hosts: [
-                            "sandbox.hortonworks.com"
-                        ],
-                        port: 50030,
-                        https: false
-                    }
-                },
-                inputs: [{
-                    component: "namenode",
-                    host: "server.eagle.apache.org",
-                    port: "50070",
-                    https: false,
-                    kafka_topic: "nn_jmx_metric_sandbox"
-                }, {
-                    component: "resourcemanager",
-                    host: "server.eagle.apache.org",
-                    port: "8088",
-                    https: false,
-                    kafka_topic: "rm_jmx_metric_sandbox"
-                }, {
-                    component: "datanode",
-                    host: "server.eagle.apache.org",
-                    port: "50075",
-                    https: false,
-                    kafka_topic: "dn_jmx_metric_sandbox"
-                }],
-                filter: {
-                    monitoring.group.selected: [
-                        "hadoop",
-                        "java.lang"
-                    ]
-                },
-                output: {
-                    kafka: {
-                        brokerList: [
-                            "localhost:9092"
-                        ]
-                    }
-                }
-            }
-
-
-* Click "Install" button then you will see the HADOOP_JMX_METRIC_STREAM_{SITE_ID} in Streams.
-
-    ![Install Step 6](include/images/install_jmx_6.png)
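-
-If the metric topic does not exist yet, it can be created up front (a minimal sketch, assuming an HDP-style Kafka installation under /usr/hdp/current/kafka-broker and ZooKeeper at localhost:2181; adjust the paths, replication settings, and the topic suffix to your site id):
-
-```
-/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
-    --zookeeper localhost:2181 --replication-factor 1 --partitions 1 \
-    --topic hadoop_jmx_metric_sandbox
-```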
-
-## Define JMX Alert Policy
-
-1. Go to "Define Policy".
-
-2. Select HADOOP_JMX_METRIC_MONITOR related streams.
-
-3. Define SQL-Like policy, for example
-
-        from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric=="cpu.usage" and value > 0.9]
-        select site,host,component,value
-        insert into HADOOP_CPU_USAGE_GT_90_ALERT;
-
-    As seen in the screenshot below:
-
-![Define JMX Alert Policy](include/images/define_jmx_alert_policy.png)
-
-## Stream Schema
-
-* Schema
-
-    | Stream Name | Stream Schema | Time Series |
-    | :---------: | :-----------: | :---------: |
-    | HADOOP_JMX_METRIC_MONITOR | **host**: STRING<br/><br/>**timestamp**: LONG<br/><br/>**metric**: STRING<br/><br/>**component**: STRING<br/><br/>**site**: STRING<br/><br/>**value**: DOUBLE | True |
-
-## Metrics List
-
-* Please refer to the [Hadoop JMX Metrics List](hadoop-jmx-metrics-list.txt) and see which metrics you're interested in.
-
----
-
-# Job Performance Monitoring
-
-## Monitor Requirements
-
-* Finished/Running Job Details
-* Job Metrics (Job Counter/Statistics) Aggregation
-* Alerts (Job failure/Job slow)
-
-## Applications
-
-* Application Table
-
-    | application | responsibility |
-    | :---: | :---: |
-    | Map Reduce History Job Monitoring | parse mr history job logs from hdfs |
-    | Map Reduce Running Job Monitoring | get mr running job details from resource manager |
-    | Map Reduce Metrics Aggregation | aggregate metrics generated by applications above |
-
-## Data Ingestion And Process
-
-* We build a Storm topology to fulfill the requirements of each application.
-
-    ![topology figures](include/images/jpm.jpg)
-
-* Map Reduce History Job Monitoring (Figure 1)
-    * **Read Spout**
-        * read/parse history job logs from HDFS and flush them to the Eagle service (storage is HBase)
-    * **Sink Bolt**
-        * convert parsed jobs to streams and write to data sink
-* Map Reduce Running Job Monitoring (Figure 2)
-    * **Read Spout**
-        * fetch running job list from resource manager and emit to Parse Bolt
-    * **Parse Bolt**
-        * for each running job, fetch the job detail/job counters/job configuration/tasks from the resource manager
-* Map Reduce Metrics Aggregation (Figure 3)
-    * **Divide Spout**
-        * divide the time period (to be aggregated) into small pieces and emit them to the Aggregate Bolt
-    * **Aggregate Bolt**
-        * aggregate metrics for a given time period received from the Divide Spout
-
-## Setup & Installation
-* Make sure a site has already been set up (here we use a demo site named "sandbox").
-
-* Install "Map Reduce History Job" app in eagle server(Take this application as an example).
-
-* Configure Application settings
-
-    ![application configures](include/images/jpm_configure.png)
-
-* Ensure a Kafka topic named {SITE_ID}_map_reduce_failed_job (in the current guide, sandbox_map_reduce_failed_job) will be created.
-
-* Click "Install" button then you will see the MAP_REDUCE_FAILED_JOB_STREAM_{SITE_ID} in Alert->Streams.
-    ![application configures](include/images/jpm_streams.png)
-  This application will write stream data to kafka topic(created by last step)
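-
-A minimal sketch of watching the failed-job stream with the Kafka console consumer (topic name from the step above; the Kafka installation path, ZooKeeper address, and CLI flags assume an older HDP-style Kafka and should be adapted to your environment):
-
-```
-/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
-    --zookeeper localhost:2181 --topic sandbox_map_reduce_failed_job
-```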
-  
-## Integration With Alert Engine
-
-To integrate applications with the alert engine and send alerts, follow the steps below (taking the Map Reduce History Job application as an example):
-
-* **define stream and configure data sink**
-    * define the stream in resources/META-INF/providers/xxxProviders.xml,
-      for example MAP_REDUCE_FAILED_JOB_STREAM_{SITE_ID}
-    * configure the data sink,
-      for example create the Kafka topic {SITE_ID}_map_reduce_failed_job
-
-* **define policy**
-
-For example, if you want to receive map reduce job failure alerts, you can define policies (SiddhiQL) as follows:
-```sql
-from map_reduce_failed_job_stream[site=="sandbox" and currentState=="FAILED"]
-select site, queue, user, jobType, jobId, submissionTime, trackingUrl, startTime, endTime
-group by jobId insert into map_reduce_failed_job_stream_out
-```
-    
-   ![define policy](include/images/jpm_define_policy.png)
-   
-* **view alerts**
-
-You can view alerts on the Alert->Alerts page.
-
-## Stream Schema
-All columns above are predefined in the stream map_reduce_failed_job_stream, which is defined in
-
-    eagle-jpm/eagle-jpm-mr-history/src/main/resources/META-INF/providers/org.apache.eagle.jpm.mr.history.MRHistoryJobApplicationProvider.xml
-
-Then, enable the policy in the web UI after it is created. Eagle will schedule it automatically.
-
----
-
-# Topology Health Check
-
-* Application "TOPOLOGY HEALTH CHECK" aims to monior those servies with a master-slave structured topology and provide metrics at host level.
-
-    |   Fields   ||
-    | :---: | :---: |
-    | **Type**    | *TOPOLOGY_HEALTH_CHECK* |
-    | **Version** | *0.5.0-version* |
-    | **Description** | *Collect MR,HBASE,HDFS node status and cluster ratio* |
-    | **Streams** | *TOPOLOGY_HEALTH_CHECK_STREAM* |
-    | **Configuration** | *Topology Health Check Topic (default: topology_health_check)*<br/><br/>*Kafka Broker List (default: sandbox.hortonworks.com:6667)* |
-
-## Setup & Installation
-
-* Make sure a site has already been set up (here we use a demo site named "sandbox").
-
-* Install "Topology Health Check" app in eagle server.
-
-    ![Health Check Installation](include/images/health_check_installation.png)
-
-* Configure Application settings.
-
-    ![Health Check Settings](include/images/health_check_settings.png)
-
-* Ensure the existence of a Kafka topic named topology_health_check.
-
-* Click "Install" button then you will see the TOPOLOGY_HEALTH_CHECK_STREAM_{SITE_ID} on "Streams" page (Streams could be navigated in left-nav).
-
-    ![Health Check Stream](include/images/health_check_stream.png)
-
-## Define Health Check Alert Policy
-
-* Go to "Define Policy".
-
-* Select TOPOLOGY_HEALTH_CHECK related streams.
-
-* Define SQL-Like policy, for example
-
-        from TOPOLOGY_HEALTH_CHECK_STREAM_SANDBOX[status=='dead'] select * insert into topology_health_check_stream_out;
-
-    ![Health Check Policy](include/images/health_check_policy.png)
-
----
-
-# Hadoop Queue Monitoring
-
-* This application collects metrics from the Resource Manager in the following aspects (a quick curl check is sketched after this list):
-
-    * Scheduler Info of the cluster: http://{RM_HTTP_ADDRESS}:{PORT}/ws/v1/cluster/scheduler
-
-    * Applications of the cluster: http://{RM_HTTP_ADDRESS}:{PORT}/ws/v1/cluster/apps
-
-    * Overall metrics of the cluster: http://{RM_HTTP_ADDRESS}:{PORT}/ws/v1/cluster/metrics
-
-            As of version 0.5-incubating, this mainly focuses on the metrics
-             - `appsPending`
-             - `allocatedMB`
-             - `totalMB`
-             - `availableMB`
-             - `reservedMB`
-             - `allocatedVirtualCores`.
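-
-A quick way to confirm a Resource Manager endpoint is reachable before installing the app (a minimal sketch; replace the host and port with your RM HTTP address):
-
-```
-curl -s http://sandbox.hortonworks.com:8088/ws/v1/cluster/metrics
-```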
-
-## Setup & Installation
-
-* Make sure a site has already been set up (here we use a demo site named "sandbox").
-
-* From the left-nav list, navigate to the application management page via "**Integration**" > "**Sites**", and click the "**sandbox**" link on the right.
-
-    ![Navigate to app mgmt](include/images/hadoop_queue_monitor_1.png)
-
-* Install "Hadoop Queue Monitor" by clicking "install" button of the application.
-
-    ![Install Hadoop Queue Monitor App](include/images/hadoop_queue_monitor_2.png)
-
-* In the pop-up dialog, select the running mode as `Local` or `Cluster`.
-
-    ![Select Running Mode](include/images/hadoop_queue_monitor_3.png)
-
-* Set the target jar of Eagle's topology assembly that exists on the Eagle server, indicating the absolute path to it, as in the following screenshot:
-
-    ![Set Jar Path](include/images/hadoop_queue_monitor_4.png)
-
-* Set the Resource Manager endpoint URLs field; separate values with commas if there is more than one URL (e.g. a secondary node for HA).
-
-    ![Set RM Endpoint](include/images/hadoop_queue_monitor_5.png)
-
-* Set fields "**Storm Worker Number**", "**Parallel Tasks Per Bolt**", and "**Fetching Metric Interval in Seconds**", or leave them as default if they fit your needs.
-
-    ![Set Advanced Fields](include/images/hadoop_queue_monitor_6.png)
-
-* Finally, hit the "**Install**" button to complete the installation.
-
-## Use of the application
-
-* There is no need to define policies for this application to work. It can be integrated with the "**Job Performance Monitoring Web**" application and consequently shown on the cluster dashboard, as long as the latter application is installed too. See an example in the following screenshot:
-
-    ![In Dashboard](include/images/hadoop_queue_monitor_7.png)

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/docs/docs/developing-application.md
----------------------------------------------------------------------
diff --git a/docs/docs/developing-application.md b/docs/docs/developing-application.md
deleted file mode 100644
index ec6833e..0000000
--- a/docs/docs/developing-application.md
+++ /dev/null
@@ -1,285 +0,0 @@
-# Introduction
-
-[Applications](applications) in Eagle include a process component and a view component. The process component normally refers to a Storm topology or Spark streaming job which processes incoming data, while the view component normally refers to the GUI hosted in the Eagle UI.
-
-The [Application Framework](getting-started/#eagle-framework) aims to solve the problem of managing the application lifecycle and presenting uniform views to end users.
- 
-The Eagle application framework is designed for the end-to-end lifecycle of applications, including:
-
-* **Development**: application development and framework development
-
-* **Testing**.
-
-* **Installation**: package management with SPI/Providers.xml
-
-* **Management**: manage applications through REST API
-
----
-
-# Quick Start
-
-* Fork and clone the Eagle source code repository using git.
-
-        git clone https://github.com/apache/incubator-eagle.git
-
-* Run Eagle Server: execute "org.apache.eagle.server.ServerDebug" under eagle-server in an IDE or from the Maven command line.
-
-        org.apache.eagle.server.ServerDebug
-
-* Access current available applications through API.
-
-        curl -XGET  http://localhost:9090/rest/apps/providers
-
-* Create Site through API.
-
-        curl -H "Content-Type: application/json" -X POST  http://localhost:9090/rest/sites --data '{
-             "siteId":"test_site",
-             "siteName":"Test Site",
-             "description":"This is a sample site for test",
-             "context":{
-                  "type":"FAKE_CLUSTER",
-                  "url":"http://localhost:9090",
-                  "version":"2.6.4",
-                  "additional_attr":"Some information about the fake cluster site"
-             }
-        }'
-
-* Install Application through API.
-
-        curl -H "Content-Type: application/json" -X POST http://localhost:9090/rest/apps/install --data '{
-             "siteId":"test_site",
-             "appType":"EXAMPLE_APPLICATION",
-             "mode":"LOCAL"
-        }'
-
-* Start Application  (uuid means installed application uuid).
-
-        curl -H "Content-Type: application/json" –X POST http://localhost:9090/rest/apps/start --data '{
-             "uuid":"9acf6792-60e8-46ea-93a6-160fb6ef0b3f"
-        }'
-
-* Stop Application (uuid means installed application uuid).
-
-        curl -H "Content-Type: application/json" -X POST http://localhost:9090/rest/apps/stop --data '{
-         "uuid": "9acf6792-60e8-46ea-93a6-160fb6ef0b3f"
-        }'
-
-* Uninstall Application (uuid means installed application uuid).
-
-        curl -H "Content-Type: application/json" -X DELETE http://localhost:9090/rest/apps/uninstall --data '{
-         "uuid": "9acf6792-60e8-46ea-93a6-160fb6ef0b3f"
-        }'
-
----
-
-# Create Application
-
-Each application should be developed as an independent module (including backend code and front-end code).
-
-Here is the typical code structure of a new application:
-
-```
-eagle-app-example/
-├── pom.xml
-├── src
-│   ├── main
-│   │   ├── java
-│   │   │   └── org
-│   │   │       └── apache
-│   │   │           └── eagle
-│   │   │               └── app
-│   │   │                   └── example
-│   │   │                       ├── ExampleApplicationProvider.java
-│   │   │                       ├── ExampleStormApplication.java
-│   │   ├── resources
-│   │   │   └── META-INF
-│   │   │       ├── providers
-│   │   │       │   └── org.apache.eagle.app.example.ExampleApplicationProvider.xml
-│   │   │       └── services
-│   │   │           └── org.apache.eagle.app.spi.ApplicationProvider
-│   │   └── webapp
-│   │       ├── app
-│   │       │   └── apps
-│   │       │       └── example
-│   │       │           └── index.html
-│   │       └── package.json
-│   └── test
-│       ├── java
-│       │   └── org
-│       │       └── apache
-│       │           └── eagle
-│       │               └── app
-│       │                   ├── example
-│       │                   │   ├── ExampleApplicationProviderTest.java
-│       │                   │   └── ExampleApplicationTest.java
-│       └── resources
-│           └── application.conf
-```
-
-**Eagle Example Application** - [eagle-app-example](https://github.com/haoch/incubator-eagle/tree/master/eagle-examples/eagle-app-example)
-
-**Description** - A typical Eagle application mainly consists of:
-
-* **Application**: defines the core execution logic, inheriting from org.apache.eagle.app.Application; it also implements ApplicationTool so that the Application can run as a standalone process (such as a Storm topology) from the command line.
-
-* **ApplicationProvider**: the interface for packaging an application with descriptor metadata; also used as the application SPI to dynamically load new application types.
-
-* **META-INF/providers/${APP_PROVIDER_CLASS_NAME}.xml**: describes the application's descriptor in declarative XML, for example:
-
-        <application>
-           <type>EXAMPLE_APPLICATION</type>
-           <name>Example Monitoring Application</name>
-           <version>0.5.0-incubating</version>
-           <configuration>
-               <property>
-                   <name>message</name>
-                   <displayName>Message</displayName>
-                   <value>Hello, example application!</value>
-                   <description>Just an sample configuration property</description>
-               </property>
-           </configuration>
-           <streams>
-               <stream>
-                   <streamId>SAMPLE_STREAM_1</streamId>
-                   <description>Sample output stream #1</description>
-                   <validate>true</validate>
-                   <timeseries>true</timeseries>
-                   <columns>
-                       <column>
-                           <name>metric</name>
-                           <type>string</type>
-                       </column>
-                       <column>
-                           <name>source</name>
-                           <type>string</type>
-                       </column>
-                       <column>
-                           <name>value</name>
-                           <type>double</type>
-                           <defaultValue>0.0</defaultValue>
-                       </column>
-                   </columns>
-               </stream>
-               <stream>
-                   <streamId>SAMPLE_STREAM_2</streamId>
-                   <description>Sample output stream #2</description>
-                   <validate>true</validate>
-                   <timeseries>true</timeseries>
-                   <columns>
-                       <column>
-                           <name>metric</name>
-                           <type>string</type>
-                       </column>
-                       <column>
-                           <name>source</name>
-                           <type>string</type>
-                       </column>
-                       <column>
-                           <name>value</name>
-                           <type>double</type>
-                           <defaultValue>0.0</defaultValue>
-                       </column>
-                   </columns>
-               </stream>
-           </streams>
-        </application>
-
-* **META-INF/services/org.apache.eagle.app.spi.ApplicationProvider**: enables dynamic scanning for and loading of extensible application providers via the Java service provider mechanism.
-
-* **webapp/app/apps/${APP_TYPE}**: if the application has a web portal, add the front-end code under this directory and make sure it is built by configuring pom.xml as follows:
-
-        <build>
-           <resources>
-               <resource>
-                   <directory>src/main/webapp/app</directory>
-                   <targetPath>assets/</targetPath>
-               </resource>
-               <resource>
-                   <directory>src/main/resources</directory>
-               </resource>
-           </resources>
-           <testResources>
-               <testResource>
-                   <directory>src/test/resources</directory>
-               </testResource>
-           </testResources>
-        </build>
-
----
-
-# Test Application
-
-* Extend **org.apache.eagle.app.test.ApplicationTestBase** and initialize the injector context.
-
-* Access shared services with **@Inject**.
-
-* Test the application lifecycle through the related web resources:
-
-        @Inject private SiteResource siteResource;
-        @Inject private ApplicationResource applicationResource;
-
-        // Create local site
-        SiteEntity siteEntity = new SiteEntity();
-        siteEntity.setSiteId("test_site");
-        siteEntity.setSiteName("Test Site");
-        siteEntity.setDescription("Test Site for ExampleApplicationProviderTest");
-        siteResource.createSite(siteEntity);
-        Assert.assertNotNull(siteEntity.getUuid());
-
-        ApplicationOperations.InstallOperation installOperation = new ApplicationOperations.InstallOperation(
-        	"test_site", 
-        	"EXAMPLE_APPLICATION", 
-        	ApplicationEntity.Mode.LOCAL);
-        installOperation.setConfiguration(getConf());
-        // Install application
-        ApplicationEntity applicationEntity = applicationResource
-            .installApplication(installOperation)
-            .getData();
-        // Start application
-        applicationResource.startApplication(new ApplicationOperations.StartOperation(applicationEntity.getUuid()));
-        // Stop application
-        applicationResource.stopApplication(new ApplicationOperations.StopOperation(applicationEntity.getUuid()));
-        // Uninstall application
-        applicationResource.uninstallApplication(
-        	new ApplicationOperations.UninstallOperation(applicationEntity.getUuid()));
-        try {
-           applicationResource.getApplicationEntityByUUID(applicationEntity.getUuid());
-           Assert.fail("Application instance (UUID: " + applicationEntity.getUuid() + ") should have been uninstalled");
-        } catch (Exception ex) {
-           // Expected exception
-        }
-
----
-
-# Management & REST API
-
-## ApplicationProviderSPILoader
-
-Default behavior: application providers are loaded automatically from the class path using SPI.
-
-* By default, Eagle loads application providers from the current class loader.
-
-* If `application.provider.dir` is defined, providers are loaded from the class loaders of the external JARs (see the sketch below).
-
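-The following is only an illustrative sketch of that fallback behavior, not the actual `ApplicationProviderSPILoader` implementation; reading `application.provider.dir` from a system property is an assumption made purely for demonstration:
-
-        File providerDir = new File(System.getProperty("application.provider.dir", ""));
-        ClassLoader loader = Thread.currentThread().getContextClassLoader();
-        if (providerDir.isDirectory()) {
-            // Build a class loader over the external provider jars.
-            File[] jars = providerDir.listFiles((dir, name) -> name.endsWith(".jar"));
-            URL[] urls = new URL[jars.length];
-            for (int i = 0; i < jars.length; i++) {
-                urls[i] = jars[i].toURI().toURL();
-            }
-            loader = new URLClassLoader(urls, loader);
-        }
-        // Standard Java SPI scan against the chosen class loader.
-        for (ApplicationProvider provider : ServiceLoader.load(ApplicationProvider.class, loader)) {
-            System.out.println("Found provider: " + provider.getClass().getName());
-        }
-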
-## Application REST API
-
-* API Table
-
-    | Type       | URI + Class |
-    | :--------: | :---------- |
-    | **DELETE** | /rest/sites (org.apache.eagle.metadata.resource.SiteResource) |
-    | **DELETE** | /rest/sites/{siteId} (org.apache.eagle.metadata.resource.SiteResource) |
-    | **GET**    | /rest/sites (org.apache.eagle.metadata.resource.SiteResource) |
-    | **GET**    | /rest/sites/{siteId} (org.apache.eagle.metadata.resource.SiteResource) |
-    | **POST**   | /rest/sites (org.apache.eagle.metadata.resource.SiteResource) |
-    | **PUT**    | /rest/sites (org.apache.eagle.metadata.resource.SiteResource) |
-    | **PUT**    | /rest/sites/{siteId} (org.apache.eagle.metadata.resource.SiteResource) |
-    | **DELETE** | /rest/apps/uninstall (org.apache.eagle.app.resource.ApplicationResource) |
-    | **GET**    | /rest/apps (org.apache.eagle.app.resource.ApplicationResource) |
-    | **GET**    | /rest/apps/providers (org.apache.eagle.app.resource.ApplicationResource) |
-    | **GET**    | /rest/apps/providers/{type} (org.apache.eagle.app.resource.ApplicationResource) |
-    | **GET**    | /rest/apps/{appUuid} (org.apache.eagle.app.resource.ApplicationResource) |
-    | **POST**   | /rest/apps/install (org.apache.eagle.app.resource.ApplicationResource) |
-    | **POST**   | /rest/apps/start (org.apache.eagle.app.resource.ApplicationResource) |
-    | **POST**   | /rest/apps/stop (org.apache.eagle.app.resource.ApplicationResource) |
-    | **PUT**    | /rest/apps/providers/reload (org.apache.eagle.app.resource.ApplicationResource) |
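-
-As a minimal illustration of invoking one of these endpoints from Java (assuming the server runs at localhost:9090 as in the curl examples above), using only the standard HttpURLConnection API:
-
-        URL url = new URL("http://localhost:9090/rest/apps/start");
-        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-        conn.setRequestMethod("POST");
-        conn.setRequestProperty("Content-Type", "application/json");
-        conn.setDoOutput(true);
-        try (OutputStream out = conn.getOutputStream()) {
-            // Same payload shape as the /rest/apps/start curl example.
-            out.write("{\"uuid\":\"9acf6792-60e8-46ea-93a6-160fb6ef0b3f\"}".getBytes(StandardCharsets.UTF_8));
-        }
-        System.out.println("HTTP " + conn.getResponseCode());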

http://git-wip-us.apache.org/repos/asf/eagle/blob/6fd95d5c/docs/docs/getting-started.md
----------------------------------------------------------------------
diff --git a/docs/docs/getting-started.md b/docs/docs/getting-started.md
deleted file mode 100644
index 0799934..0000000
--- a/docs/docs/getting-started.md
+++ /dev/null
@@ -1,233 +0,0 @@
-# Architecture
-
-![Eagle 0.5.0 Architecture](include/images/eagle_arch_v0.5.0.png)
-
-### Eagle Apps
-
-* Security
-* Hadoop
-* Operational Intelligence
-
-For more applications, see [Applications](applications).
-
-### Eagle Interface
-
-* REST Service
-* Management UI
-* Customizable Analytics Visualization
-
-### Eagle Integration
-
-* [Apache Ambari](https://ambari.apache.org)
-* [Docker](https://www.docker.com)
-* [Apache Ranger](http://ranger.apache.org)
-* [Dataguise](https://www.dataguise.com)
-
-### Eagle Framework
-
-Eagle has multiple distributed real-time frameworks for efficiently developing highly scalable monitoring applications.
-      	
-#### Alert Engine
-
-![Eagle Alert Engine](include/images/alert_engine.png)
-
-* Real-time: Apache Storm (Execution Engine) + Kafka (Message Bus)
-* Declarative Policy: SQL (CEP) on Streaming
-
-        from hadoopJmxMetricEventStream
-        [metric == "hadoop.namenode.fsnamesystemstate.capacityused" and value > 0.9]
-        select metric, host, value, timestamp, component, site
-        insert into alertStream;
-
-* Dynamical onboarding & correlation
-* No downtime migration and upgrading
-
-#### Storage Engine
-
-![Eagle Storage Engine](include/images/storage_engine.png)
-
-
-* Light-weight ORM Framework for HBase/RDBMS
-
-        @Table("HbaseTableName")
-        @ColumnFamily("ColumnFamily")
-        @Prefix("RowkeyPrefix")
-        @Service("UniqueEntityServiceName")
-        @JsonIgnoreProperties(ignoreUnknown = true)
-        @TimeSeries(false)
-        @Indexes({
-            @Index(name="Index_1_alertExecutorId", columns = { "alertExecutorID" }, unique = true)})
-        public class AlertDefinitionAPIEntity extends TaggedLogAPIEntity {
-            @Column("a")
-            private String desc;
-            // ... remaining @Column fields and their getters/setters omitted
-        }
-
-* Full-function SQL-Like REST Query
-
-        Query=UniqueEntityServiceName[@site="sandbox"]{*}
-
-* Optimized row key design for time-series data, tailored to different storage types such as metric, entity, and log
-	
-		Rowkey ::= Prefix | Partition Keys | timestamp | tagName | tagValue | …  
-	
-
-* Secondary Index Support
-
-        @Indexes({@Index(name="INDEX_NAME", columns = { "SECONDARY_INDEX_COLUMN_NAME" }, unique = true/false)})
-		
-* Native HBase Coprocessor
-
-        org.apache.eagle.storage.hbase.query.coprocessor.AggregateProtocolEndPoint
-
-
-#### UI Framework
-
-The Eagle UI consists of the following parts:
-
-* Eagle Main UI
-* Eagle App Portal/Dashboard/Widgets
-* Eagle Customized Dashboard 
-
-#### Application Framework
-
-##### Application
-
-An "Application" or "App" is composed of data integration, policies and insights for one data source.
-
-##### Application Descriptor 
-
-An "Application Descriptor" is statically packaged metadata consisting of basic information such as type, name, version, description, application process, configuration, streams, docs, policies, and so on.
-
-Here is an example ApplicationDesc for `JPM_WEB_APP`:
-
-        {
-            type: "JPM_WEB_APP",
-            name: "Job Performance Monitoring Web ",
-            version: "0.5.0-incubating",
-            description: null,
-            appClass: "org.apache.eagle.app.StaticApplication",
-            jarPath: "/opt/eagle/0.5.0-incubating-SNAPSHOT-build-20161103T0332/eagle-0.5.0-incubating-SNAPSHOT/lib/eagle-topology-0.5.0-incubating-SNAPSHOT-hadoop-2.4.1-11-assembly.jar",
-            viewPath: "/apps/jpm",
-            providerClass: "org.apache.eagle.app.jpm.JPMWebApplicationProvider",
-            configuration: {
-                properties: [{
-                    name: "service.host",
-                    displayName: "Eagle Service Host",
-                    value: "localhost",
-                    description: "Eagle Service Host, default: localhost",
-                    required: false
-                }, {
-                    name: "service.port",
-                    displayName: "Eagle Service Port",
-                    value: "8080",
-                    description: "Eagle Service Port, default: 8080",
-                    required: false
-                }]
-            },
-            streams: null,
-            docs: null,
-            executable: false,
-            dependencies: [{
-                type: "MR_RUNNING_JOB_APP",
-                version: "0.5.0-incubating",
-                required: true
-            }, {
-                type: "MR_HISTORY_JOB_APP",
-                version: "0.5.0-incubating",
-                required: true
-            }]
-        }
-
-##### Application Provider
-
-An Application Provider is a package management and loading mechanism leveraging [Java SPI](https://docs.oracle.com/javase/tutorial/ext/basics/spi.html).
-	
-For example, in file `META-INF/services/org.apache.eagle.app.spi.ApplicationProvider`, place the full class name of an application provider:
-
-	org.apache.eagle.app.jpm.JPMWebApplicationProvider
-
-
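-At runtime such entries are discovered with the standard Java `ServiceLoader`; a minimal sketch, assuming the `org.apache.eagle.app.spi.ApplicationProvider` interface named in the service file above:
-
-	for (ApplicationProvider provider : ServiceLoader.load(ApplicationProvider.class)) {
-		System.out.println("Discovered application provider: " + provider.getClass().getName());
-	}
-
-Because discovery is driven by these service files, a new provider jar on the class path can be picked up without changing existing code.
-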
----
-
-# Concepts
-
-* Here are some terms used in Apache Eagle (incubating; referred to as Eagle below), provided for your reference. They cover the basic concepts of Eagle and will help you understand it better.
-
-## Site
-
-* A site can be considered a physical data center. A big data platform, e.g. Hadoop, may be deployed across multiple data centers in an enterprise.
-
-## Application
-
-* An "Application" or "App" is composed of data integration, policies and insights for one data source.
-
-## Policy
-
-* A "Policy" defines the rule to alert on. A policy can be a simple filter expression or a complex window-based aggregation rule, etc.
-
-## Alerts
-
-* An "Alert" is a real-time event detected by a certain alert policy or correlation logic, with different severity levels such as INFO/WARNING/DANGER.
-
-## Data Source
-
-* A "Data Source" is the data from a monitoring target. Eagle supports many data sources, e.g. HDFS audit logs, Hive2 queries, MapReduce jobs, etc.
-
-## Stream
-
-* A "Stream" is the streaming data from a data source. Each data source has its own stream.
-
----
-
-# Quick Start
-
-## Deployment
-
-### Prerequisites
-
-Eagle requires the following dependencies:
-
-* For streaming platform dependencies
-    * Storm: 0.9.3 or later
-    * Hadoop: 2.6.x or later
-    * HBase: 0.98.x or later
-    * Kafka: 0.8.x or later
-    * Zookeeper: 3.4.6 or later
-    * Java: 1.8.x
-* For metadata database dependencies (choose one of them)
-    * MongoDB 3.2.2 or later
-        * Installation is required
-    * MySQL 5.1.x or later
-        * Installation is required
-
-Notice:
->     Storm 0.9.x does NOT support JDK 8. You can replace asm-4.0.jar with asm-all-5.0.jar in the Storm lib directory,
->     then restart the other services (nimbus/ui/supervisor).
-
-
-### Installation
-
-##### Build Eagle
-
-* Download the latest version of Eagle source code.
-
-        git clone https://github.com/apache/incubator-eagle.git
-        
-* Build the source code, and a tar.gz package will be generated under eagle-server-assembly/target
-
-        mvn clean install -DskipTests
-        
-##### Deploy Eagle
-* Copy the binary package to your server machine. In the package, you should find:
-    * __bin/__: scripts used to start the Eagle server
-    * __conf/__: default configurations for Eagle server setup
-    * __lib/__: all bundled software packages for the Eagle server
-* Change configurations under `conf/`
-    * __eagle.conf__
-    * __server.yml__
-* Run eagle-server.sh
-
-        ./bin/eagle-server.sh start
-
-* Check the Eagle server
-    * Visit http://host:port/ in your web browser.
-
-## Setup Your Monitoring Case
-`Placeholder for topic: Setup Your Monitoring Case`
\ No newline at end of file

