accumulo-notifications mailing list archives

From "Josh Elser (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (ACCUMULO-3764) Improve RandomWalk scripts
Date Thu, 30 Apr 2015 21:03:05 GMT

     [ https://issues.apache.org/jira/browse/ACCUMULO-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Elser updated ACCUMULO-3764:
---------------------------------
    Description: 
In {{test/system/randomwalk/bin/copy-config.sh}}:

{code}
"$HADOOP_PREFIX/bin/hadoop" fs -rmr randomwalk 2>/dev/null
{code}

RandomWalk should not try to write to the root of HDFS. This is likely to be disallowed by
HDFS permissions.
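
One possible shape of the fix (a sketch only; the {{RW_BASE}} variable and the explicit {{/user/...}} path are assumptions on my part, not taken from the script) is to scope all of the staging files under a directory the calling user owns:

{code}
# Sketch: keep RandomWalk staging files under the calling user's HDFS home
# directory instead of a path at (or relative to) the filesystem root.
RW_BASE="/user/$(whoami)/randomwalk"

# Remove any previous staging directory; ignore the error if it does not exist.
"$HADOOP_PREFIX/bin/hadoop" fs -rm -r "$RW_BASE" 2>/dev/null
"$HADOOP_PREFIX/bin/hadoop" fs -mkdir -p "$RW_BASE"
{code}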

{code}
"$HADOOP_PREFIX/bin/hadoop" fs -setrep 3 randomwalk/config.tgz
{code}

It would be nice to use some cluster context to avoid setrep'ing to a value higher than the allowed maximum or the number of datanodes (e.g. there is no need to setrep at all if there is only one datanode). {{hdfs dfsadmin -report}} might have some info that could be scraped?
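
For example, a minimal bash sketch of the datanode-count idea (assuming {{hdfs dfsadmin -report}} is runnable by the invoking user and that each live datanode section starts with a {{Name:}} line; the exact output format varies between Hadoop versions):

{code}
# Sketch: cap the replication factor at the number of live datanodes and skip
# the setrep entirely on a single-node cluster.
live_nodes=$("$HADOOP_PREFIX/bin/hdfs" dfsadmin -report 2>/dev/null | grep -c '^Name:')
desired_rep=3

if [ "$live_nodes" -gt 1 ]; then
  # Use the smaller of the desired replication and the live datanode count.
  rep=$(( live_nodes < desired_rep ? live_nodes : desired_rep ))
  "$HADOOP_PREFIX/bin/hadoop" fs -setrep "$rep" randomwalk/config.tgz
fi
{code}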

  was:
{code}
"$HADOOP_PREFIX/bin/hadoop" fs -rmr randomwalk 2>/dev/null
{code}

RandomWalk should not try to write to the root of HDFS. This is likely to be disallowed by
HDFS permissions.

{code}
"$HADOOP_PREFIX/bin/hadoop" fs -setrep 3 randomwalk/config.tgz
{code}

It would be nice to use some context to avoid setrep'ing to a value higher than the allowed
max or the number of datanodes (e.g. don't need to setrep at all if we only have one datanode).
{{hdfs dfsadmin -report}} might have some info that could be scraped?


> Improve RandomWalk scripts
> --------------------------
>
>                 Key: ACCUMULO-3764
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3764
>             Project: Accumulo
>          Issue Type: Bug
>          Components: test
>            Reporter: Josh Elser
>            Priority: Trivial
>              Labels: newbie
>             Fix For: 1.8.0
>
>
> in {{test/system/randomwalk/bin/copy-config.sh}}
> {code}
> "$HADOOP_PREFIX/bin/hadoop" fs -rmr randomwalk 2>/dev/null
> {code}
> RandomWalk should not try to write to the root of HDFS. This is likely to be disallowed by HDFS permissions.
> {code}
> "$HADOOP_PREFIX/bin/hadoop" fs -setrep 3 randomwalk/config.tgz
> {code}
> It would be nice to use some context to avoid setrep'ing to a value higher than the allowed max or the number of datanodes (e.g. don't need to setrep at all if we only have one datanode). {{hdfs dfsadmin -report}} might have some info that could be scraped?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
