incubator-bigtop-commits mailing list archives

From r..@apache.org
Subject svn commit: r1328532 - in /incubator/bigtop/trunk: ./ bigtop-packages/src/deb/flume/ bigtop-packages/src/deb/hadoop/ bigtop-packages/src/deb/hbase/ bigtop-packages/src/deb/hive/ bigtop-packages/src/deb/oozie/ bigtop-packages/src/deb/pig/ bigtop-package...
Date Fri, 20 Apr 2012 22:31:12 GMT
Author: rvs
Date: Fri Apr 20 22:31:11 2012
New Revision: 1328532

URL: http://svn.apache.org/viewvc?rev=1328532&view=rev
Log:
BIGTOP-553. Metadata for packages needs to be harmonized between RPM and Debian
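
The change is easiest to read with the two formats side by side: the first line of a
Debian control Description field plays the role of the RPM Summary tag, and the
indented continuation lines play the role of the %description body. A minimal sketch
of the correspondence, using the YARN resource manager entries from this commit:

    # Debian: bigtop-packages/src/deb/hadoop/control
    Description: YARN Resource Manager
     The resource manager manages the global assignment of compute resources to applications

    # RPM: bigtop-packages/src/rpm/hadoop/SPECS/hadoop.spec
    Summary: YARN Resource Manager
    %description yarn-resourcemanager
    The resource manager manages the global assignment of compute resources to applications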

Modified:
    incubator/bigtop/trunk/bigtop-packages/src/deb/flume/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/hadoop/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/hbase/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/hive/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/oozie/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/pig/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/sqoop/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/whirr/control
    incubator/bigtop/trunk/bigtop-packages/src/deb/zookeeper/control
    incubator/bigtop/trunk/bigtop-packages/src/rpm/hadoop/SPECS/hadoop.spec
    incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/apt/package_data.xml
    incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/package_data.xml
    incubator/bigtop/trunk/bigtop.mk

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/flume/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/flume/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/flume/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/flume/control Fri Apr 20 22:31:11 2012
@@ -24,19 +24,27 @@ Homepage: http://incubator.apache.org/pr
 Package: flume
 Architecture: all
 Depends: adduser, hadoop-hdfs, bigtop-utils
-Description: reliable, scalable, and manageable distributed data collection application
- Flume is a reliable, scalable, and manageable distributed data collection
- application for collecting data such as logs and delivering it to data stores
- such as Hadoop's HDFS.  It can efficiently collect, aggregate, and move large
- amounts of log data.  It has a simple, but flexible, architecture based on
- streaming data flows.  It is robust and fault tolerant with tunable reliability
- mechanisms and many failover and recovery mechanisms.  The system is centrally
- managed and allows for intelligent dynamic management.  It uses a simple
- extensible data model that allows for online analytic applications.
+Description: Flume is a reliable, scalable, and manageable distributed log collection application for collecting data such as logs and delivering it to data stores such as Hadoop's HDFS.
+ Flume is a reliable, scalable, and manageable distributed data 
+ collection application for collecting data such as logs and delivering it 
+ to data stores such as Hadoop's HDFS.  It can efficiently collect, 
+ aggregate, and move large amounts of log data.  It has a simple, 
+ but flexible, architecture based on streaming data flows.  It is 
+ robust and fault tolerant with tunable reliability mechanisms and many 
+ failover and recovery mechanisms.  The system is centrally managed and 
+ allows for intelligent dynamic management. It uses a simple extensible 
+ data model that allows for online analytic applications.
 
 Package: flume-node
 Architecture: all
 Depends: flume (= ${source:Version})
-Description: core element of Flume's data path that collects and delivers data
- The Flume node daemon is a core element of flume's data path and is
- responsible for generating, processing, and delivering data.
+Description: The flume node daemon is a core element of flume's data path and is responsible for generating, processing, and delivering data.
+ Flume is a reliable, scalable, and manageable distributed data collection 
+ application for collecting data such as logs and delivering it to data 
+ stores such as Hadoop's HDFS.  It can efficiently collect, aggregate,
+ and move large amounts of log data.  It has a simple, but flexible, 
+ architecture based on streaming data flows.  It is robust and fault 
+ tolerant with tunable reliability mechanisms and many failover and recovery 
+ mechanisms.  The system is centrally managed and allows for intelligent 
+ dynamic management. It uses a simple extensible data model that allows 
+ for online analytic applications.
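
For readers following the reflowed text above: Debian Policy fixes the Description
field syntax, which is why the whitespace churn matters. The text on the Description:
line itself is the one-line synopsis, every line of the extended description must
start with a single space, and a line holding just " ." renders as a paragraph break.
A schematic stanza (package name hypothetical):

    Package: example-pkg
    Description: one-line synopsis shown by package listings
     First paragraph of the extended description; every
     continuation line carries a leading space.
     .
     Second paragraph, separated by a lone "." line.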

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/hadoop/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/hadoop/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/hadoop/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/hadoop/control Fri Apr 20 22:31:11 2012
@@ -25,20 +25,20 @@ Package: hadoop
 Provides: hadoop
 Architecture: all
 Depends: ${shlibs:Depends}, ${misc:Depends}, adduser, bigtop-utils, zookeeper (>= 3.4.0)
-Description: A software platform for processing vast amounts of data
+Description: Hadoop is a software platform for processing vast amounts of data
  Hadoop is a software platform that lets one easily write and
  run applications that process vast amounts of data.
  .
  Here's what makes Hadoop especially useful:
-  * Scalable: Hadoop can reliably store and process petabytes.
-  * Economical: It distributes the data and processing across clusters
-                of commonly available computers. These clusters can number
-                into the thousands of nodes.
-  * Efficient: By distributing the data, Hadoop can process it in parallel
-               on the nodes where the data is located. This makes it
-               extremely rapid.
-  * Reliable: Hadoop automatically maintains multiple copies of data and
-              automatically redeploys computing tasks based on failures.
+ * Scalable: Hadoop can reliably store and process petabytes.
+ * Economical: It distributes the data and processing across clusters
+               of commonly available computers. These clusters can number
+               into the thousands of nodes.
+ * Efficient: By distributing the data, Hadoop can process it in parallel
+              on the nodes where the data is located. This makes it
+              extremely rapid.
+ * Reliable: Hadoop automatically maintains multiple copies of data and
+             automatically redeploys computing tasks based on failures.
  .
  Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS).
  MapReduce divides applications into many small blocks of work. HDFS creates
@@ -84,20 +84,15 @@ Package: hadoop-hdfs-fuse
 Architecture: i386 amd64
 Depends: ${shlibs:Depends}, hadoop-hdfs (= ${source:Version}), hadoop-client (= ${source:Version}), libfuse2, fuse-utils
 Enhances: hadoop
-Description: HDFS exposed over a Filesystem in Userspace
- These projects (enumerated below) allow HDFS to be mounted (on most flavors 
- of Unix) as a standard file system using the mount command. Once mounted, the
-  user can operate on an instance of hdfs using standard Unix utilities such 
- as 'ls', 'cd', 'cp', 'mkdir', 'find', 'grep', or use standard Posix libraries 
- like open, write, read, close from C, C++, Python, Ruby, Perl, Java, bash, etc.
+Description: Mountable HDFS
+ These projects (enumerated below) allow HDFS to be mounted (on most flavors of Unix) as a standard file system using the mount command.
 
 Package: hadoop-doc
 Provides: hadoop-doc
 Architecture: all
 Section: doc
-Description: Documentation for Hadoop
- This package contains the Java Documentation for Hadoop and its relevant
- APIs.
+Description: Hadoop Documentation
+ Documentation for Hadoop
 
 Package: hadoop-conf-pseudo
 Provides: hadoop-conf-pseudo
@@ -116,14 +111,13 @@ Provides: hadoop-mapreduce-historyserver
 Architecture: all
 Depends: hadoop-mapreduce (= ${source:Version})
 Description: MapReduce History Server
- The History server keeps records of the different activities being performed 
- on a Apache Hadoop cluster.
+ The History server keeps records of the different activities being performed on an Apache Hadoop cluster.
 
 Package: hadoop-yarn-nodemanager
 Provides: hadoop-yarn-nodemanager
 Architecture: all
 Depends: hadoop-yarn (= ${source:Version})
-Description: Node manager for Hadoop
+Description: YARN Node Manager
  The NodeManager is the per-machine framework agent who is responsible for
  containers, monitoring their resource usage (cpu, memory, disk, network) and
  reporting the same to the ResourceManager/Scheduler.
@@ -132,21 +126,21 @@ Package: hadoop-yarn-resourcemanager
 Provides: hadoop-yarn-resourcemanager
 Architecture: all
 Depends: hadoop-yarn (= ${source:Version})
-Description: Resource manager for Hadoop
- The resource manager manages the global assignment of compute resources to applications.
+Description: YARN Resource Manager
+ The resource manager manages the global assignment of compute resources to applications
 
 Package: hadoop-yarn-proxyserver
 Provides: hadoop-yarn-proxyserver
 Architecture: all
 Depends: hadoop-yarn (= ${source:Version})
-Description: Web proxy for YARN
+Description: YARN Web Proxy
  The web proxy server sits in front of the YARN application master web UI.
 
 Package: hadoop-hdfs-namenode
 Provides: hadoop-hdfs-namenode
 Architecture: all
 Depends: hadoop-hdfs (= ${source:Version})
-Description: Name Node for Hadoop
+Description: The Hadoop namenode manages the block locations of HDFS files
  The Hadoop Distributed Filesystem (HDFS) requires one unique server, the
  namenode, which manages the block locations of files on the filesystem.
 
@@ -154,10 +148,10 @@ Package: hadoop-hdfs-secondarynamenode
 Provides: hadoop-hdfs-secondarynamenode
 Architecture: all
 Depends: hadoop-hdfs (= ${source:Version})
-Description: Secondary Name Node for Hadoop
- The Secondary Name Node is responsible for checkpointing file system images.
- It is _not_ a failover pair for the namenode, and may safely be run on the
- same machine.
+Description: Hadoop Secondary namenode
+ The Secondary Name Node periodically compacts the Name Node EditLog
+ into a checkpoint.  This compaction ensures that Name Node restarts
+ do not incur unnecessary downtime.
 
 Package: hadoop-hdfs-zkfc
 Provides: hadoop-hdfs-zkfc
@@ -174,7 +168,7 @@ Package: hadoop-hdfs-datanode
 Provides: hadoop-hdfs-datanode
 Architecture: all
 Depends: hadoop-hdfs (= ${source:Version})
-Description: Data Node for Hadoop
+Description: Hadoop Data Node
  The Data Nodes in the Hadoop Cluster are responsible for serving up
  blocks of data over the network to Hadoop Distributed Filesystem
  (HDFS) clients.
@@ -182,8 +176,8 @@ Description: Data Node for Hadoop
 Package: libhdfs0
 Architecture: any
 Depends: hadoop (= ${source:Version}), ${shlibs:Depends}
-Description: JNI Bindings to access Hadoop HDFS from C
- See http://wiki.apache.org/hadoop/LibHDFS
+Description: Hadoop Filesystem Library
+ Hadoop Filesystem Library
 
 Package: libhdfs0-dev
 Architecture: any
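
Once the hadoop debs are rebuilt, the reworked fields can be spot-checked straight
from the artifacts; a quick sketch (file name patterns are illustrative):

    # Synopsis and extended description straight from a .deb
    dpkg-deb --info hadoop-yarn-resourcemanager_*.deb

    # Or from a configured APT repository
    apt-cache show hadoop-yarn-resourcemanager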

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/hbase/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/hbase/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/hbase/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/hbase/control Fri Apr 20 22:31:11 2012
@@ -24,29 +24,42 @@ Package: hbase
 Architecture: all
 Depends: adduser, zookeeper (>= 3.3.1), hadoop-hdfs, bigtop-utils
 Recommends: ntp
-Description: HBase is the Hadoop database
- Use it when you need random, realtime read/write access to your Big Data. 
- This project's goal is the hosting of very large tables -- billions of rows  
- X millions of columns -- atop clusters of commodity hardware.
+Description: HBase is the Hadoop database. Use it when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
+ HBase is an open-source, distributed, column-oriented store modeled after
+ Google's Bigtable: A Distributed Storage System for Structured Data by
+ Chang et al. Just as Bigtable leverages the distributed data storage 
+ provided by the Google File System, HBase provides Bigtable-like capabilities
+ on top of Hadoop. HBase includes:
+ .
+    * Convenient base classes for backing Hadoop MapReduce jobs with HBase tables
+    * Query predicate push down via server side scan and get filters
+    * Optimizations for real time queries
+    * A high performance Thrift gateway
+    * A REST-ful Web service gateway that supports XML, Protobuf, and binary data encoding options
+    * Cascading source and sink modules
+    * Extensible jruby-based (JIRB) shell
+    * Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
 
 Package: hbase-doc
 Architecture: all
 Section: doc
-Description: Documentation for HBase
- This package contains the HBase manual and JavaDoc.
+Description: HBase Documentation
+ Documentation for HBase
 
 Package: hbase-master
 Architecture: all
 Depends: hbase (= ${source:Version}) 
-Description: HMaster is the "master server" for a HBase
+Description: The Hadoop HBase master server.
+ HMaster is the "master server" for HBase.
  There is only one HMaster for a single HBase deployment.
 
 Package: hbase-regionserver
 Architecture: all
 Depends: hbase (= ${source:Version}) 
-Description: HRegionServer makes a set of HRegions available to clients
- It checks in with the HMaster. There are many HRegionServers in a single 
- HBase deployment.
+Description: The Hadoop HBase RegionServer server.
+ HRegionServer makes a set of HRegions available to clients.
+ It checks in with the HMaster. There are many HRegionServers
+ in a single HBase deployment.
 
 Package: hbase-rest
 Architecture: all
@@ -57,6 +70,14 @@ Description: The Apache HBase REST gatew
 Package: hbase-thrift
 Architecture: all
 Depends: hbase (= ${source:Version}) 
-Description: Provides an HBase Thrift service
- This package provides a Thrift service interface to the HBase distributed
- database.
+Description: The Hadoop HBase Thrift Interface
+ ThriftServer - this class starts up a Thrift server which 
+ implements the Hbase API specified in the Hbase.thrift IDL file.
+ "Thrift is a software framework for scalable cross-language 
+ services development. It combines a powerful software stack with 
+ a code generation engine to build services that work efficiently 
+ and seamlessly between C++, Java, Python, PHP, and Ruby. Thrift 
+ was developed at Facebook, and we are now releasing it as open 
+ source." For additional information, see 
+    http://developers.facebook.com/thrift/. 
+ Facebook has announced their intent to migrate Thrift into Apache Incubator.

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/hive/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/hive/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/hive/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/hive/control Fri Apr 20 22:31:11 2012
@@ -24,35 +24,24 @@ Homepage: http://hive.apache.org/
 Package: hive
 Architecture: all
 Depends: adduser, hadoop-client, bigtop-utils
-Description: A data warehouse infrastructure built on top of Hadoop
- Hive is a data warehouse infrastructure built on top of Hadoop that
- provides tools to enable easy data summarization, adhoc querying and
- analysis of large datasets data stored in Hadoop files. It provides a
- mechanism to put structure on this data and it also provides a simple
- query language called Hive QL which is based on SQL and which enables
- users familiar with SQL to query this data. At the same time, this
- language also allows traditional map/reduce programmers to be able to
- plug in their custom mappers and reducers to do more sophisticated
- analysis which may not be supported by the built-in capabilities of
- the language.
-
-Package: python-hive
-Architecture: all
-Section: python
-Depends: ${python:Depends}
-Provides: ${python:Provides}
-XS-Python-Version: >= 2.4
-Description: Python client library to talk to the Hive Metastore
- This is a generated Thrift client to talk to the Hive Metastore.
+Description: Hive is a data warehouse infrastructure built on top of Hadoop
+ Hive is a data warehouse infrastructure built on top of Hadoop that provides
+ tools to enable easy data summarization, ad hoc querying and analysis of large
+ datasets stored in Hadoop files. It provides a mechanism to put structure
+ on this data and it also provides a simple query language called Hive QL which
+ is based on SQL and which enables users familiar with SQL to query this data.
+ At the same time, this language also allows traditional map/reduce programmers
+ to be able to plug in their custom mappers and reducers to do more sophisticated 
+ analysis which may not be supported by the built-in capabilities of the language.
 
 Package: hive-server
 Architecture: all
 Depends: hive (= ${source:Version})
-Description: Provides a Hive Thrift service
+Description: Provides a Hive Thrift service.
  This optional package hosts a Thrift server for Hive clients across a network to use.
 
 Package: hive-metastore
 Architecture: all
 Depends: hive (= ${source:Version})
-Description: Shared metadata repository for Hive
+Description: Shared metadata repository for Hive.
  This optional package hosts a metadata server for Hive clients across a network to use.

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/oozie/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/oozie/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/oozie/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/oozie/control Fri Apr 20 22:31:11 2012
@@ -24,16 +24,20 @@ Homepage: http://incubator.apache.org/oo
 Package: oozie-client
 Architecture: all
 Depends: bigtop-utils
-Description: Command line utility that allows
- remote access and operation of oozie. Using this utility, the
- user can deploy workflows and perform other administrative and
- monitoring tasks such as start, stop, kill, resume workflows
- and coordinator jobs.
+Description: Client for Oozie Workflow Engine
+ Oozie client is a command line utility that allows remote
+ administration and monitoring of workflows. Using this client utility
+ you can submit workflows, start/suspend/resume/kill workflows and
+ find out their status at any instance. Apart from such operations,
+ you can also change the status of the entire system and get version
+ information. This client utility also allows you to validate
+ any workflows before they are deployed to the Oozie server.
 
 Package: oozie
 Architecture: all
 Depends: oozie-client (= ${source:Version}), hadoop-client, bigtop-tomcat
-Description: A workflow and coordinator sytem for Hadoop jobs.
+Description: Oozie is a system that runs workflows of Hadoop jobs.
+ Oozie is a system that runs workflows of Hadoop jobs.
  Oozie workflows are actions arranged in a control dependency DAG (Direct
  Acyclic Graph).
  .
@@ -57,8 +61,8 @@ Description: A workflow and coordinator 
  JAR files for Map/Reduce jobs, shells for streaming Map/Reduce jobs, native
  libraries, Pig scripts, and other resource files.
  .
- Running workflow jobs is done via command line tools, a WebServices API or
- a Java API.
+ Running workflow jobs is done via command line tools, a WebServices API
+ or a Java API.
  .
  Monitoring the system and workflow jobs can be done via a web console, the
  command line tools, the WebServices API and the Java API.
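
The operations enumerated in the new oozie-client description correspond to
subcommands of the oozie CLI; a hedged sketch of typical invocations (server URL,
file names, and job id are illustrative):

    export OOZIE_URL=http://localhost:11000/oozie
    oozie job -config job.properties -run      # submit and start a workflow
    oozie job -suspend 0000001-120420-oozie-W  # suspend a running workflow
    oozie admin -version                       # server version information
    oozie validate workflow.xml                # validate before deployment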

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/pig/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/pig/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/pig/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/pig/control Fri Apr 20 22:31:11 2012
@@ -24,7 +24,7 @@ Homepage: http://pig.apache.org/
 Package: pig
 Architecture: all
 Depends: hadoop-client, bigtop-utils
-Description: A platform for analyzing large data sets using Hadoop
+Description: Pig is a platform for analyzing large data sets
  Pig is a platform for analyzing large data sets that consists of a high-level language
  for expressing data analysis programs, coupled with infrastructure for evaluating these
  programs. The salient property of Pig programs is that their structure is amenable
@@ -34,7 +34,7 @@ Description: A platform for analyzing la
  sequences of Map-Reduce programs, for which large-scale parallel implementations already
  exist (e.g., the Hadoop subproject). Pig's language layer currently consists of a textual
  language called Pig Latin, which has the following key properties:
- .
+ . 
  * Ease of programming
     It is trivial to achieve parallel execution of simple, "embarrassingly parallel" data
     analysis tasks. Complex tasks comprised of multiple interrelated data transformations
@@ -44,4 +44,4 @@ Description: A platform for analyzing la
     The way in which tasks are encoded permits the system to optimize their execution
     automatically, allowing the user to focus on semantics rather than efficiency.
  * Extensibility
-    Users can create their own functions to do special-purpose processing. 
+    Users can create their own functions to do special-purpose processing.

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/sqoop/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/sqoop/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/sqoop/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/sqoop/control Fri Apr 20 22:31:11 2012
@@ -23,13 +23,13 @@ Homepage: http://incubator.apache.org/sq
 Package:  sqoop
 Architecture: all
 Depends: hadoop-client, bigtop-utils
-Description: Tool for easy imports and exports of data sets between databases and HDFS
- Sqoop is a tool that provides the ability to import and export data sets between
- the Hadoop Distributed File System (HDFS) and relational databases.
+Description: Sqoop allows easy imports and exports of data sets between databases and the Hadoop Distributed File System (HDFS).
+ Sqoop allows easy imports and exports of data sets between databases and the Hadoop Distributed File System (HDFS).
 
 Package: sqoop-metastore
 Architecture: all
 Depends: sqoop (= ${source:Version}), adduser
-Description: Shared metadata repository for Sqoop.
- This optional package hosts a metadata server for Sqoop clients across a network to use.
+Description: Shared metadata repository for Sqoop. 
+ Shared metadata repository for Sqoop. This optional package hosts a metadata
+ server for Sqoop clients across a network to use.
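
As a concrete illustration of the imports and exports the description refers to, a
typical Sqoop session looks roughly like this (connection string, credentials, and
paths are hypothetical):

    sqoop import --connect jdbc:mysql://db.example.com/sales \
      --username reporter --table orders \
      --target-dir /user/reporter/orders
    sqoop export --connect jdbc:mysql://db.example.com/sales \
      --table order_summaries --export-dir /user/reporter/summaries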
 

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/whirr/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/whirr/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/whirr/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/whirr/control Fri Apr 20 22:31:11 2012
@@ -23,12 +23,12 @@ Homepage: http://whirr.apache.org/
 Package:  whirr
 Architecture: all
 Depends: bigtop-utils
-Description: Scripts and libraries for running software services on cloud infrastructure
+Description: Scripts and libraries for running software services on cloud infrastructure.
  Whirr provides
  .
-  * A cloud-neutral way to run services. You don't have to worry about the
-    idiosyncrasies of each provider.
-  * A common service API. The details of provisioning are particular to the
-    service.
-  * Smart defaults for services. You can get a properly configured system
-    running quickly, while still being able to override settings as needed.
+ * A cloud-neutral way to run services. You don't have to worry about the
+   idiosyncrasies of each provider.
+ * A common service API. The details of provisioning are particular to the
+   service.
+ * Smart defaults for services. You can get a properly configured system
+   running quickly, while still being able to override settings as needed.
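
For context on what those scripts do in practice, driving Whirr comes down to a small
CLI; roughly (properties file name hypothetical):

    whirr launch-cluster --config hadoop.properties
    whirr destroy-cluster --config hadoop.properties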

Modified: incubator/bigtop/trunk/bigtop-packages/src/deb/zookeeper/control
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/deb/zookeeper/control?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/deb/zookeeper/control (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/deb/zookeeper/control Fri Apr 20 22:31:11 2012
@@ -29,4 +29,5 @@ Description: A high-performance coordina
 Package: zookeeper-server
 Architecture: all
 Depends: zookeeper (= ${source:Version})
-Description: This runs the zookeeper server on startup.
+Description: The Hadoop Zookeeper server
+ This package starts the zookeeper server on startup
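
Since zookeeper-server is the package that installs the startup machinery, the daemon
should be manageable through the usual init interface once installed; a hedged sketch
(assuming the init script is named after the package, as Bigtop packages typically do):

    sudo service zookeeper-server start
    sudo service zookeeper-server status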

Modified: incubator/bigtop/trunk/bigtop-packages/src/rpm/hadoop/SPECS/hadoop.spec
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-packages/src/rpm/hadoop/SPECS/hadoop.spec?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-packages/src/rpm/hadoop/SPECS/hadoop.spec (original)
+++ incubator/bigtop/trunk/bigtop-packages/src/rpm/hadoop/SPECS/hadoop.spec Fri Apr 20 22:31:11 2012
@@ -310,7 +310,7 @@ The server providing HTTP REST API suppo
 interface in HDFS.
 
 %package yarn-resourcemanager
-Summary: Yarn Resource Manager
+Summary: YARN Resource Manager
 Group: System/Daemons
 Requires: %{name}-yarn = %{version}-%{release}
 Requires(pre): %{name} = %{version}-%{release}
@@ -319,7 +319,7 @@ Requires(pre): %{name} = %{version}-%{re
 The resource manager manages the global assignment of compute resources to applications
 
 %package yarn-nodemanager
-Summary: Yarn Node Manager
+Summary: YARN Node Manager
 Group: System/Daemons
 Requires: %{name}-yarn = %{version}-%{release}
 Requires(pre): %{name} = %{version}-%{release}
@@ -330,7 +330,7 @@ containers, monitoring their resource us
 reporting the same to the ResourceManager/Scheduler.
 
 %package yarn-proxyserver
-Summary: Yarn Web Proxy
+Summary: YARN Web Proxy
 Group: System/Daemons
 Requires: %{name}-yarn = %{version}-%{release}
 Requires(pre): %{name} = %{version}-%{release}
@@ -359,7 +359,7 @@ Requires: %{name}-mapreduce = %{version}
 Installation of this package will provide you with all the dependencies for Hadoop clients.
 
 %package conf-pseudo
-Summary: Hadoop installation in pseudo-distributed mode
+Summary: Pseudo-distributed Hadoop configuration
 Group: System/Daemons
 Requires: %{name} = %{version}-%{release}
 Requires: %{name}-hdfs-namenode = %{version}-%{release}
@@ -370,8 +370,9 @@ Requires: %{name}-yarn-nodemanager = %{v
 Requires: %{name}-mapreduce-historyserver = %{version}-%{release}
 
 %description conf-pseudo
-Installation of this RPM will setup your machine to run in pseudo-distributed mode
-where each Hadoop daemon runs in a separate Java process.
+Contains configuration files for a "pseudo-distributed" Hadoop deployment.
+In this mode, each of the hadoop components runs as a separate Java process,
+but all on the same machine.
 
 %package doc
 Summary: Hadoop Documentation
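
As on the Debian side, the revised RPM metadata can be verified directly against the
built packages; a sketch (file names illustrative):

    rpm -qp --queryformat '%{SUMMARY}\n' hadoop-yarn-resourcemanager-*.rpm
    rpm -qpi hadoop-conf-pseudo-*.rpm    # full header, including the description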

Modified: incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/apt/package_data.xml
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/apt/package_data.xml?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/apt/package_data.xml (original)
+++ incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/apt/package_data.xml Fri Apr 20 22:31:11 2012
@@ -21,7 +21,7 @@
     <flume>
       <home>/var/run/flume</home>
       <descr>Flume User</descr>
-      <shell>/sbin/nologin</shell>
+      <shell>/bin/false</shell>
     </flume>
   </users>
   <deps>
@@ -33,7 +33,7 @@
     <sqoop>
       <home>/var/lib/sqoop</home>
       <descr>Sqoop User</descr>
-      <shell>/sbin/nologin</shell>
+      <shell>/bin/false</shell>
     </sqoop>
   </users>
   <deps>
@@ -76,7 +76,7 @@
     <hive>
       <home>/var/lib/hive</home>
       <descr>Hive User</descr>
-      <shell>/sbin/nologin</shell>
+      <shell>/bin/false</shell>
     </hive>
   </users>
 </hive-metastore>
@@ -85,7 +85,7 @@
     <hive>
       <home>/var/lib/hive</home>
       <descr>Hive User</descr>
-      <shell>/sbin/nologin</shell>
+      <shell>/bin/false</shell>
     </hive>
   </users>
 </hive-server>
@@ -156,9 +156,8 @@
 </hadoop-httpfs>
 <libhdfs0>
   <metadata>
-    <summary>Mountable HDFS</summary>
-    <description>JNI Bindings to access Hadoop HDFS from C
- See http://wiki.apache.org/hadoop/LibHDFS</description>
+    <summary>Hadoop Filesystem Library</summary>
+    <description>Hadoop Filesystem Library</description>
     <url>http://hadoop.apache.org/core/</url>
   </metadata>
   <deps>
@@ -168,9 +167,8 @@
 </libhdfs0>
 <libhdfs0-dev>
   <metadata>
-    <summary>Mountable HDFS</summary>
-    <description>JNI Bindings to access Hadoop HDFS from C
- See http://wiki.apache.org/hadoop/LibHDFS</description>
+    <summary>Development support for libhdfs0</summary>
+    <description>Includes examples and header files for accessing HDFS from C</description>
     <url>http://hadoop.apache.org/core/</url>
   </metadata>
   <deps>
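
The /sbin/nologin to /bin/false switch in the expected-user metadata reflects a
platform difference: /sbin/nologin is the customary no-login shell on Red Hat style
systems, while Debian ships /bin/false in the base system. A hedged sketch of how such
a service account would be created on the Debian side (values taken from this file):

    adduser --system --home /var/lib/sqoop \
      --shell /bin/false --gecos "Sqoop User" sqoop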

Modified: incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/package_data.xml
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/package_data.xml?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/package_data.xml (original)
+++ incubator/bigtop/trunk/bigtop-tests/test-artifacts/package/src/main/resources/package_data.xml Fri Apr 20 22:31:11 2012
@@ -726,7 +726,7 @@ blocks of data over the network to Hadoo
 </hadoop-hdfs-datanode>
 <hadoop-yarn-resourcemanager>
   <metadata>
-    <summary>Yarn Resource Manager</summary>
+    <summary>YARN Resource Manager</summary>
    <description>The resource manager manages the global assignment of compute resources to applications</description>
     <url>http://hadoop.apache.org/core/</url>
   </metadata>
@@ -743,7 +743,7 @@ blocks of data over the network to Hadoo
 </hadoop-yarn-resourcemanager>
 <hadoop-yarn-nodemanager>
   <metadata>
-    <summary>Yarn Node Manager</summary>
+    <summary>YARN Node Manager</summary>
    <description>The NodeManager is the per-machine framework agent who is responsible for
 containers, monitoring their resource usage (cpu, memory, disk, network) and
 reporting the same to the ResourceManager/Scheduler.</description>
@@ -762,7 +762,7 @@ reporting the same to the ResourceManage
 </hadoop-yarn-nodemanager>
 <hadoop-yarn-proxyserver>
   <metadata>
-    <summary>Yarn Web Proxy</summary>
+    <summary>YARN Web Proxy</summary>
    <description>The web proxy server sits in front of the YARN application master web UI.</description>
     <url>http://hadoop.apache.org/core/</url>
   </metadata>
@@ -796,11 +796,10 @@ reporting the same to the ResourceManage
 </hadoop-mapreduce-historyserver>
 <hadoop-conf-pseudo>
   <metadata>
-    <summary>Hadoop installation in pseudo-distributed mode</summary>
-    <description>Pseudo-distributed Hadoop configuration
- Contains configuration files for a "pseudo-distributed" Hadoop deployment.
- In this mode, each of the hadoop components runs as a separate Java process,
- but all on the same machine.</description>
+    <summary>Pseudo-distributed Hadoop configuration</summary>
+    <description>Contains configuration files for a "pseudo-distributed" Hadoop deployment.
+In this mode, each of the hadoop components runs as a separate Java process,
+but all on the same machine.</description>
     <url>http://hadoop.apache.org/core/</url>
   </metadata>
   <deps>

Modified: incubator/bigtop/trunk/bigtop.mk
URL: http://svn.apache.org/viewvc/incubator/bigtop/trunk/bigtop.mk?rev=1328532&r1=1328531&r2=1328532&view=diff
==============================================================================
--- incubator/bigtop/trunk/bigtop.mk (original)
+++ incubator/bigtop/trunk/bigtop.mk Fri Apr 20 22:31:11 2012
@@ -25,7 +25,7 @@ HADOOP_TARBALL_DST=$(HADOOP_NAME)-$(HADO
 #HADOOP_DOWNLOAD_PATH=/hadoop/common/$(HADOOP_NAME)-$(HADOOP_BASE_VERSION)
 #HADOOP_SITE=$(APACHE_MIRROR)$(HADOOP_DOWNLOAD_PATH)
 #HADOOP_ARCHIVE=$(APACHE_ARCHIVE)$(HADOOP_DOWNLOAD_PATH)
-HADOOP_TARBALL_SRC=8c0466d
+HADOOP_TARBALL_SRC=7ed45d0
 HADOOP_SITE=https://github.com/apache/hadoop-common/tarball
 HADOOP_ARCHIVE=$(HADOOP_SITE)
 $(eval $(call PACKAGE,hadoop,HADOOP))
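
HADOOP_TARBALL_SRC is a git ref handed to the GitHub tarball endpoint named by
HADOOP_SITE, so this bump simply pins the build to a newer snapshot; fetching it by
hand would look roughly like (output file name illustrative):

    curl -L https://github.com/apache/hadoop-common/tarball/7ed45d0 \
      -o hadoop-common-7ed45d0.tar.gz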


