hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "FrontPage" by firesleeve
Date Sat, 01 Oct 2011 13:41:10 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "FrontPage" page has been changed by firesleeve:
http://wiki.apache.org/hadoop/FrontPage?action=diff&rev1=276&rev2=277

Comment:
Firesleeve (Silicone Coated Fibreglass Sleeve)

- = Apache Hadoop =
- [[http://hadoop.apache.org/|Apache Hadoop]] is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications with both reliability and data motion. Hadoop implements a computational paradigm named [[HadoopMapReduce|Map/Reduce]], where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system ([[DFS|HDFS]]) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the distributed file system are designed so that node failures are automatically handled by the framework.
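The Map/Reduce paradigm described in the paragraph above (many small fragments of work, each runnable or re-runnable on any node, with intermediate results regrouped before a reduce step) can be sketched in miniature in plain Python. This is an illustrative toy, not the Hadoop API; all function names here are invented:

```python
from collections import defaultdict

def map_phase(documents):
    """Each small 'fragment of work' emits (key, value) pairs -- here (word, 1)."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Combine each key's values; each key's group can be reduced on any node."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["hadoop stores data", "hadoop processes data"]
print(reduce_phase(shuffle(map_phase(docs))))
```

In the real framework, the map and reduce steps run in parallel across the cluster, and a failed fragment is simply re-executed elsewhere because each fragment depends only on its own input.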
+ Firesleeve (Silicone Coated Fibreglass Sleeve)
+ 1) Silicone rubber coated fiberglass sleeve (sleeving) - protects hoses, cables and wires
+   from molten metal splash, high-heat hazards and occasional exposure to flame, in steel plants,
+   glass plants, foundries, cutting and welding shops, and wherever hoses, cables and wires may be
+   exposed to high heat or occasional flame. Fireproof sleeve may also be used to insulate a race car's plumbing system.
+ 2) Continuous protection to 500°F; short-term exposure up to 2200°F.
+   Extremely flexible and conformable throughout the entire size range at temperatures from -65°F to 500°F.
+ 3) Impedes heat radiation from flame
+ 4) Protects the operator from burns from hot pipes
+ 5) Reduces heat loss and saves energy
+ 6) Moisture-proof, water-proof, resistant to oil and contamination
+ 7) Color: mainly red and blue.
+ 8) Bore diameter (mm): 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 90, 100, 110, 120, 130 (sizes from 1/4" I.D. to 6" I.D.)
  
- == General Information ==
-  * [[http://hadoop.apache.org/|Official Apache Hadoop Website]]: download, bug-tracking,
mailing-lists, etc.
+ Yancheng Hengsheng Insulation Co., Ltd.
+ Web: http://www.hsinsulation.com
+ Email: hsinsulation@yahoo.com.cn
+ Tel: +86-139-61986280    Fax: +86-515-88430696
  
-  * [[ProjectDescription|Overview]] of Apache Hadoop
- 
-  * [[FAQ]] FAQ
- 
-  * [[HadoopIsNot|What Hadoop is not]]
- 
-  * [[Distributions and Commercial Support]] for Hadoop (RPMs, Debs, AMIs, etc)
- 
-  * [[HadoopPresentations|Presentations]], [[Books|books]], [[HadoopArticles|articles]] and
[[Papers|papers]] about Hadoop
- 
-  * PoweredBy, a list of sites and applications powered by Apache Hadoop
- 
-  * Support
-   * [[Help|Getting help from the hadoop community]].
- 
-   * [[Support|People and companies for hire]].
- 
-  * [[Conferences|Hadoop Community Events and Conferences]]
-   * HadoopUserGroups (HUGs)
- 
-   * HadoopSummit
- 
-   * HadoopWorld
- 
-  * [[http://developer.yahoo.com/hadoop/tutorial/|Yahoo! Hadoop Tutorial]]: A thorough tutorial
covering Hadoop setup, HDFS, and [[HadoopMapReduce|MapReduce]]
- 
-  * [[http://www.cloudera.com/hadoop-training-basic|Cloudera Online Hadoop Training]]: Video
lectures, exercises and a pre-configured [[http://www.cloudera.com/hadoop-training-virtual-machine|virtual
machine]] to follow along. Sessions cover [[http://www.cloudera.com/hadoop-training-programming-with-hadoop|Hadoop]],
[[http://www.cloudera.com/hadoop-training-mapreduce-algorithms|MapReduce]], [[http://www.cloudera.com/hadoop-training-hive-introduction|Hive]],
[[http://www.cloudera.com/hadoop-training-pig-introduction|Pig]] and more.
- 
-  * [[http://marakana.com/training/java/hadoop.html|Marakana Hadoop Training]]: 3-day training program in San Francisco with [[http://marakana.com/expert/srisatish_ambati,10809.html|Srisatish Ambati]]. The program is geared to give developers hands-on working knowledge for harnessing the power of Hadoop in their organizations.
- 
- == User Documentation ==
-  * [[HadoopJavaVersions|Available Java Runtime Environments for Hadoop]]
- 
-  * ImportantConcepts
- 
-  * GettingStartedWithHadoop (lots of details and explanation)
- 
-  * QuickStart (for those who just want it to work ''now'')
- 
-  * [[http://hadoop.apache.org/core/docs/current/commands_manual.html|Command Line Options]]
for hadoop shell script.
- 
-  * [[HadoopOverview|Hadoop Code Overview]]
- 
-  * [[TroubleShooting|Troubleshooting]]: what to do when things go wrong
- 
-  * [[Setup|Setting up a Hadoop Cluster]]
-   * [[Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)]] (tutorial on installing, configuring
and running Hadoop on a single machine)
- 
-   * [[Running_Hadoop_On_OS_X_10.5_64-bit_(Single-Node_Cluster)]]
- 
-   * HowToConfigure Hadoop software
- 
-   * [[WebApp_URLs|WebApps for monitoring your system]]
- 
-   * [[NameNodeFailover|How to handle name node failure]]
- 
-   * [[GangliaMetrics|How to get metrics into ganglia]]
- 
-   * [[LargeClusterTips|Tips for managing a large cluster]]
- 
-   * [[VirtualCluster|How to bring up a cluster of Virtual Machines]]
- 
-   * [[DiskSetup|Disk Setup: some suggestions]]
- 
-   * [[PerformanceTuning|Performance:]] getting extra throughput
- 
-   * [[http://v-lad.org/Tutorials/Hadoop/00%20-%20Intro.html|Hadoop Windows/Eclipse Tutorial]]:
Tutorial on how to setup and configure Hadoop development cluster for Windows and Eclipse.
- 
-   * [[topology_rack_awareness_scripts|Topology Scripts / Rack Awareness]]
- 
-  * Map/Reduce
-   * HadoopMapReduce
- 
-   * HadoopMapRedClasses
- 
-   * HowManyMapsAndReduces
- 
-   * TaskExecutionEnvironment
- 
-   * HowToDebugMapReducePrograms
- 
-  * Examples
-   * WordCount
- 
-   * [[PythonWordCount|Python Word Count]]
- 
-   * [[C++WordCount|C/C++ Word Count]]
- 
-   * [[Grep]]
- 
-   * [[Sort]]
- 
-   * RandomWriter
- 
-   * [[HadoopDfsReadWriteExample|How to read from and write to HDFS]]
- 
-  * Amazon
-   * Running Hadoop on [[AmazonEC2]]
- 
-   * Running Hadoop with AmazonS3
- 
-  * Benchmarks
-   * [[HardwareBenchmarks|Hardware benchmarks]]
- 
-   * [[DataProcessingBenchmarks|Data processing benchmarks]]
- 
-  * Sub-Projects
-   * [[HBase]], a Bigtable-like structured storage system for Hadoop HDFS
- 
-   * [[http://wiki.apache.org/pig/|Apache Pig]] is a high-level data-flow language and execution
framework for parallel computation. It is built on top of Hadoop Core.
- 
-   * [[Hive]], a data warehouse infrastructure that allows SQL-like ad-hoc querying of data (in any format) stored in Hadoop
- 
-   * ZooKeeper is a high-performance coordination service for distributed applications.
- 
-  * Contrib
-   * HadoopStreaming (Useful for using Hadoop with other programming languages)
- 
-   * DistributedLucene, a Proposal for a distributed Lucene index in Hadoop
- 
-   * [[MountableHDFS]], Fuse-DFS & other Tools to mount HDFS as a standard filesystem
on Linux (and some other Unix OSs)
- 
-   * [[HDFS-APIs]] in perl, python, php, etc
- 
-   * [[Chukwa]] a data collection, storage, and analysis framework
- 
-   * [[EclipsePlugIn|The Apache Hadoop Plugin for Eclipse]] (An Eclipse plug-in that simplifies
the creation and deployment of MapReduce programs with an HDFS Administrative feature)
- 
-   * [[HDFS-RAID]] Erasure Coding in HDFS
- 
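HadoopStreaming, listed in the Contrib section above, runs mappers and reducers written in any language by piping records through stdin/stdout: the mapper emits tab-separated key/value lines, the framework sorts them by key, and the reducer receives each key's lines adjacently. A self-contained Python sketch of the canonical word-count pair (the simulation at the bottom stands in for the framework; a real job is launched via the hadoop-streaming jar):

```python
from itertools import groupby

def mapper(lines):
    """Streaming mapper: emit one 'word<TAB>1' line per word of input."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    """Streaming reducer: input arrives sorted by key, so lines for the
    same word are adjacent and can be grouped and summed."""
    parsed = (line.split("\t") for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# Simulate the framework locally: map, then sort (the shuffle), then reduce.
# In a real job, each half reads stdin and writes stdout in its own process.
pairs = sorted(mapper(["hadoop stores data", "hadoop data"]))
for line in reducer(pairs):
    print(line)
```

The same two scripts, unchanged, work under the streaming jar precisely because they only touch stdin/stdout; that is the whole contract streaming imposes.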
- == Developer Documentation ==
-  * [[Roadmap]], listing release plans.
- 
-  * HowToContribute
- 
-  * HowToDevelopUnitTests
- 
-  * HowToUseInjectionFramework
- 
-  * HowToUseSystemTestFramework
- 
-  * HowToSetupYourDevelopmentEnvironment
- 
-  * HowToUseConcurrencyAnalysisTools
- 
-  * [[HowToUseJCarder]]
- 
-  * [[CodeReviewChecklist|HowToCodeReview]]
- 
-  * [[Jira]] usage guidelines
- 
-  * HowToCommit
- 
-  * HowToRelease
- 
-  * HudsonBuildServer
- 
-  * HowToSetupUbuntuBuildMachine
- 
-  * DevelopmentHints
- 
-  * ProjectSuggestions
- 
-  * [[HadoopUnderIDEA|Building/Testing under IntelliJ IDEA]]
- 
-  * [[GitAndHadoop|Git And Hadoop]]
- 
-  * ProjectSplit
- 
- == Related Resources ==
-  * [[http://wiki.apache.org/nutch/NutchHadoopTutorial|Nutch Hadoop Tutorial]] (Useful for
understanding Hadoop in an application context)
- 
-  * [[http://www.alphaworks.ibm.com/tech/mapreducetools|IBM MapReduce Tools for Eclipse]] (out of date; use the Eclipse plugin in MapReduce/Contrib instead)
- 
-  * Hadoop IRC channel is #hadoop at irc.freenode.net.
- 
-  * [[http://www.tom-doehler.de/wordpress/index.php/2007/12/19/spring-and-hadoop/|Using Spring
and Hadoop]] (Discussion of possibilities to use Hadoop and Dependency Injection with Spring)
- 
-  * [[http://wiki.apache.org/hama|Hama]], a Pregel-like distributed computing framework based on the BSP (Bulk Synchronous Parallel) model, aimed at massive scientific computations.
- 
-  * [[http://lucene.apache.org/mahout|Mahout]], scalable Machine Learning algorithms using
Hadoop
- 
-  * [[http://opensolaris.org/os/project/livehadoop/|Live Hadoop]] A three-node, distributed
Hadoop cluster running on an !OpenSolaris live CD
- 
-  * [[http://wikis.sun.com/display/gridengine62u5/Configuring+and+Using+the+Hadoop+Integration|Grid Engine integration]]: Oracle Grid Engine product documentation on the built-in Hadoop integration
- 
-  * [[https://rc.usf.edu/trac/hadoop/wiki/SGEIntegration|SGE Integration]] A guide on tight-integration
of Hadoop on Sun Grid Engine
- 
-  * [[http://www.wheregridenginelives.com/content/big-data-big-compute-grid-engine-and-hadoop-0|Univa
Grid Engine Integration]] A blog post about the integration of Hadoop with the Grid Engine
successor Univa Grid Engine
- 
-  * [[http://philippeadjiman.com/blog/the-hadoop-tutorial-series/|Hadoop Tutorial Series]]
Learning progressively important core Hadoop concepts with hands-on experiments using the
Cloudera Virtual Machine
- 
-  * [[http://pydoop.sourceforge.net|Pydoop]] A Python MapReduce and HDFS API for Hadoop.
- 
-  * [[https://github.com/klbostee/dumbo/wiki|Dumbo]], a project that allows you to easily write and run Hadoop programs in Python.
- 
-  * [[http://www.asterdata.com/news/091001-Aster-Hadoop-connector.php|Aster Data Hadoop connector]]: enables fast transfer of data between Hadoop and Aster Data's MPP data warehouse.
- 
-  * [[CUDA On Hadoop|Hadoop + CUDA]]
- 
-  * [[http://kazman.shidler.hawaii.edu/ArchDoc.html|HDFS Architecture Documentation]] An
overview of the HDFS architecture, intended for contributors.
- 
- ----
- CategoryHomepage
- 
