hadoop-hdfs-user mailing list archives

From Uma Maheswara Rao G 72686 <mahesw...@huawei.com>
Subject Re: set reduced block size for a specific file
Date Sun, 28 Aug 2011 12:03:05 GMT
Hi Ben,

I just verified this on trunk; support for the -D option is already there in Hadoop.

 /**
   * Print the usage message for generic command-line options supported.
   * 
   * @param out stream to print the usage message to.
   */
  public static void printGenericCommandUsage(PrintStream out) {
    
    out.println("Generic options supported are");
    out.println("-conf <configuration file>     specify an application configuration file");
    out.println("-D <property=value>            use value for given property");
    out.println("-fs <local|namenode:port>      specify a namenode");
    out.println("-jt <local|jobtracker:port>    specify a job tracker");
    out.println("-files <comma separated list of files>    " + 
      "specify comma separated files to be copied to the map reduce cluster");
    out.println("-libjars <comma separated list of jars>    " +
      "specify comma separated jar files to include in the classpath.");
    out.println("-archives <comma separated list of archives>    " +
                "specify comma separated archives to be unarchived" +
                " on the compute machines.\n");
    out.println("The general command line syntax is");
    out.println("bin/hadoop command [genericOptions] [commandOptions]\n");
  }
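So, for instance, you should be able to override the block size for a single file at upload time by passing -D before the command arguments. A sketch of what that would look like (assuming the pre-0.21 property name dfs.block.size; newer releases spell it dfs.blocksize, and the value must be a multiple of io.bytes.per.checksum):

```shell
# Upload one file with a 1 MB block size instead of the cluster default.
# Generic options (-D, -conf, -fs, ...) must come before the subcommand's own arguments.
hadoop fs -D dfs.block.size=1048576 -put largefile /user/ben/largefile
```

This only affects the file being written; existing files keep the block size they were created with.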

Which version of Hadoop are you running?

As part of the JIRA below, I will post the tests; you can have a look.
 
Regards,
Uma

> On Sun, Aug 28, 2011 at 4:53 AM, Aaron T. Myers <atm@cloudera.com> 
> wrote:
> > Hey Ben,
> >
> > I just filed this JIRA to add this feature:
> > https://issues.apache.org/jira/browse/HDFS-2293
> >
> > If anyone would like to implement this, I would be happy to 
> review it.
> >
> > Thanks a lot,
> > Aaron
> >
> > --
> > Aaron T. Myers
> > Software Engineer, Cloudera
> >
> >
> >
> > On Sat, Aug 27, 2011 at 4:08 PM, Ben Clay <rbclay@ncsu.edu> wrote:
> >
> >> I didn't even think of overriding the config dir.  Thanks for 
> the tip!
> >>
> >> -Ben
> >>
> >>
> >> -----Original Message-----
> >> From: Allen Wittenauer [mailto:aw@apache.org]
> >> Sent: Saturday, August 27, 2011 6:42 PM
> >> To: hdfs-user@hadoop.apache.org
> >> Cc: rbclay@ncsu.edu
> >> Subject: Re: set reduced block size for a specific file
> >>
> >>
> >> On Aug 27, 2011, at 12:42 PM, Ted Dunning wrote:
> >>
> >> > There is no way to do this for standard Apache Hadoop.
> >>
> >>        Sure there is.
> >>
> >>        You can build a custom conf dir and point it to that.  
> You *always*
> >> have that option for client settable options as a work around 
> for lack of
> >> features/bugs.
> >>
> >>        1. Copy $HADOOP_CONF_DIR or $HADOOP_HOME/conf to a dir
> >>        2. modify the hdfs-site.xml to have your new block size
> >>        3. Run the following:
> >>
> >> HADOOP_CONF_DIR=mycustomconf hadoop dfs -put file dir
> >>
> >>        Convenient?  No.  Doable? Definitely.
> >>
