Can anyone enlighten me? Why is defaulting dfs.*.dir to /tmp a good idea? I'd
rather have, in order of preference, the following behaviours when dfs.*.dir
is undefined:
1. Daemons log errors and fail to start at all,
2. Daemons start but default to /var/db/hadoop (or any persistent
location), meanwhile logging in huge screaming all-caps letters that it's
picked a default which may not be optimal,
3. Daemons start a botnet and DDoS random government websites, wait 36
hours, then phone the FBI and blame the administrator for it*,
4. Daemons write "persistent" data into /tmp without any great fanfare,
lulling their victims into complacency, only to report at some random time
in the future that everything is corrupted beyond repair, i.e. the current
behaviour.
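For reference, the fix for #4 is to set the directories explicitly in hdfs-site.xml. The property names below are the ones I believe apply (they vary by Hadoop version — newer releases use dfs.namenode.name.dir / dfs.datanode.data.dir), and the paths are just examples:

```xml
<!-- hdfs-site.xml: pin DFS storage to persistent locations instead of /tmp.
     Property names vary by version: older releases use dfs.name.dir /
     dfs.data.dir; newer ones dfs.namenode.name.dir / dfs.datanode.data.dir.
     Paths shown are illustrative. -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/db/hadoop/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/var/db/hadoop/data</value>
</property>
```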
I submitted a JIRA (which appears to have been resolved, yay!) to at least
add verbiage to the WARNING letting you know why you've irreversibly
corrupted your cluster, but that still feels somewhat dissatisfying, since by
the time you see the WARNING your cluster is already useless/dead.
> It's not quite what you're asking for, but your NameNode's web interface
> should provide a merged dump of all the relevant config settings,
> including comments indicating the name of the config file where the
> setting was defined, at the /conf path.
>
Cool, though it looks like that's just the NameNode's config, right? Not the
DataNode's config, which is the component that's actually corrupting data
due to this default?
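For what it's worth, I believe each daemon serves its own /conf dump from its HTTP port (version permitting — the servlet isn't in every release), so the DataNode's effective config should be retrievable directly. The hostnames below are made up, and the ports are the classic defaults:

```
# NameNode's merged config (default HTTP port 50070)
curl http://namenode.example.com:50070/conf

# A DataNode's own merged config (default HTTP port 50075)
curl http://datanode1.example.com:50075/conf
```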
--
Tim Ellis
Riot Games
* Hello, FBI, #3 was a joke. I wish #4 was a joke, too.