<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Hi, Arindam<br>
<br>
Currently, Savanna's vanilla plugin pushes two configs directly into
hdfs-site.xml for all DataNodes and the NameNode: <br>
dfs.name.dir = /lib/hadoop/hdfs/namenode<br>
dfs.data.dir = /lib/hadoop/hdfs/datanode<br>
<a class="moz-txt-link-freetext" href="https://github.com/stackforge/savanna/blob/master/savanna/plugins/vanilla/config_helper.py#L178-L181">https://github.com/stackforge/savanna/blob/master/savanna/plugins/vanilla/config_helper.py#L178-L181</a><br>
Both of these paths are prefixed with the /mnt directory, which is the
root location for mounted ephemeral drives.<br>
These configs control where HDFS data is placed. In particular,
/mnt/lib/hadoop/hdfs/namenode must exist before the NameNode is
formatted.<br>
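For reference, once the /mnt prefix has been applied, the effective
entries in hdfs-site.xml on the instances should end up looking
roughly like this (paths taken from the values above):<br>
<pre wrap="">&lt;!-- effective hdfs-site.xml entries after the /mnt prefix is applied --&gt;
&lt;property&gt;
  &lt;name&gt;dfs.name.dir&lt;/name&gt;
  &lt;value&gt;/mnt/lib/hadoop/hdfs/namenode&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.data.dir&lt;/name&gt;
  &lt;value&gt;/mnt/lib/hadoop/hdfs/datanode&lt;/value&gt;
&lt;/property&gt;</pre>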
I'm not sure about the exact behaviour of the Hadoop 0.20.203.0
release you are using in your plugin, but in the 1.1.2 version
supported by the vanilla plugin, /mnt/lib/hadoop/hdfs/namenode is
created automatically when the NameNode is formatted.<br>
Maybe this is not implemented in 0.20.203.0. I'd recommend checking
it with a manual cluster deployment, without Savanna cluster
provisioning. <br>
If that is the case, then your plugin should create these directories
itself before starting the Hadoop services.<br>
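For example, here is a minimal sketch of what that could look like in
the plugin. The execute_command helper, the hadoop user, and the
assumption that the Hadoop scripts are on PATH are placeholders for
whatever your image and plugin actually provide, not Savanna's exact
API:<br>
<pre wrap=""># Rough sketch only: "remote" stands for whatever remote-execution handle
# your plugin already has for an instance (Savanna's plugins expose an
# execute_command-style helper; the exact name here is an assumption).
HDFS_DIRS = ['/mnt/lib/hadoop/hdfs/namenode', '/mnt/lib/hadoop/hdfs/datanode']


def prepare_hdfs_dirs(remote):
    """Create the HDFS storage directories and hand them to the hadoop user."""
    for path in HDFS_DIRS:
        remote.execute_command('sudo mkdir -p %s' % path)
        remote.execute_command('sudo chown -R hadoop:hadoop %s' % path)


def format_and_start_namenode(remote):
    # On 0.20.203.0 the format step appears to need the namenode directory
    # to exist already; afterwards the daemon can be started as usual.
    prepare_hdfs_dirs(remote)
    remote.execute_command('sudo su -c "hadoop namenode -format" hadoop')
    remote.execute_command('sudo su -c "hadoop-daemon.sh start namenode" hadoop')</pre>
Run the same directory preparation on the DataNodes before starting
their datanode daemons.<br>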
<br>
Regards,<br>
Alexander Ignatov<br>
On 9/16/2013 6:11 PM, Arindam Choudhury wrote:<br>
</div>
<blockquote cite="mid:DUB116-W5466ADA9F2AF33B7225557DF260@phx.gbl"
type="cite">
<div dir="ltr">Hi,<br>
<br>
I am trying to write a custom plugin to provision Hadoop 0.20.203.0
with JDK 1.6u45. So I created a custom pre-installed image by
tweaking savanna-image-elements and a new plugin called mango.<br>
I am getting this error on the NameNode:<br>
<br>
2013-09-16 13:34:27,463 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: <br>
/************************************************************<br>
STARTUP_MSG: Starting NameNode<br>
STARTUP_MSG: host = test-master-starfish-001/192.168.32.2<br>
STARTUP_MSG: args = []<br>
STARTUP_MSG: version = 0.20.203.0<br>
STARTUP_MSG: build =
<a class="moz-txt-link-freetext" href="http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203">http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203</a>
-r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011<br>
************************************************************/<br>
2013-09-16 13:34:27,784 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
from hadoop-metrics2.properties<br>
2013-09-16 13:34:27,797 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source MetricsSystem,sub=Stats registered.<br>
2013-09-16 13:34:27,799 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
snapshot period at 10 second(s).<br>
2013-09-16 13:34:27,799 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode
metrics system started<br>
2013-09-16 13:34:27,964 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source ugi registered.<br>
2013-09-16 13:34:27,966 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
ugi already exists!<br>
2013-09-16 13:34:27,976 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source jvm registered.<br>
2013-09-16 13:34:27,976 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source NameNode registered.<br>
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet:
VM type = 64-bit<br>
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet:
2% max memory = 17.77875 MB<br>
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet:
capacity = 2^21 = 2097152 entries<br>
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=2097152, actual=2097152<br>
2013-09-16 13:34:28,047 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
fsOwner=hadoop<br>
2013-09-16 13:34:28,047 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
supergroup=supergroup<br>
2013-09-16 13:34:28,047 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true<br>
2013-09-16 13:34:28,060 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100<br>
2013-09-16 13:34:28,060 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
accessTokenLifetime=0 min(s)<br>
2013-09-16 13:34:28,306 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean<br>
2013-09-16 13:34:28,326 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file
names occuring more than 10 times <br>
2013-09-16 13:34:28,329 INFO
org.apache.hadoop.hdfs.server.common.Storage: Storage directory
/mnt/lib/hadoop/hdfs/namenode does not exist.<br>
2013-09-16 13:34:28,330 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
FSNamesystem initialization failed.<br>
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
Directory /mnt/lib/hadoop/hdfs/namenode is in an inconsistent
state: storage directory does not exist or is not accessible.<br>
at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)<br>
2013-09-16 13:34:28,330 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
Directory /mnt/lib/hadoop/hdfs/namenode is in an inconsistent
state: storage directory does not exist or is not accessible.<br>
at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)<br>
<br>
2013-09-16 13:34:28,331 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: <br>
/************************************************************<br>
SHUTDOWN_MSG: Shutting down NameNode at
test-master-starfish-001/192.168.32.2<br>
************************************************************/<br>
<br>
<br>
and when I create the NameNode directory beforehand:<br>
<br>
2013-09-16 13:56:29,269 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: <br>
/************************************************************<br>
STARTUP_MSG: Starting NameNode<br>
STARTUP_MSG: host = test-master-starfish-001/192.168.32.2<br>
STARTUP_MSG: args = []<br>
STARTUP_MSG: version = 0.20.203.0<br>
STARTUP_MSG: build =
<a class="moz-txt-link-freetext" href="http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203">http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203</a>
-r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011<br>
************************************************************/<br>
2013-09-16 13:56:29,570 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
from hadoop-metrics2.properties<br>
2013-09-16 13:56:29,587 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source MetricsSystem,sub=Stats registered.<br>
2013-09-16 13:56:29,588 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
snapshot period at 10 second(s).<br>
2013-09-16 13:56:29,588 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode
metrics system started<br>
2013-09-16 13:56:29,775 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source ugi registered.<br>
2013-09-16 13:56:29,779 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
ugi already exists!<br>
2013-09-16 13:56:29,786 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source jvm registered.<br>
2013-09-16 13:56:29,787 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
source NameNode registered.<br>
2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet:
VM type = 64-bit<br>
2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet:
2% max memory = 17.77875 MB<br>
2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet:
capacity = 2^21 = 2097152 entries<br>
2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=2097152, actual=2097152<br>
2013-09-16 13:56:29,901 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
fsOwner=hadoop<br>
2013-09-16 13:56:29,901 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
supergroup=supergroup<br>
2013-09-16 13:56:29,901 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true<br>
2013-09-16 13:56:29,904 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100<br>
2013-09-16 13:56:29,904 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
accessTokenLifetime=0 min(s)<br>
2013-09-16 13:56:30,162 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean<br>
2013-09-16 13:56:30,200 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file
names occuring more than 10 times <br>
2013-09-16 13:56:30,224 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
FSNamesystem initialization failed.<br>
java.io.IOException: NameNode is not formatted.<br>
at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)<br>
2013-09-16 13:56:30,224 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.io.IOException: NameNode is not formatted.<br>
at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)<br>
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)<br>
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)<br>
<br>
2013-09-16 13:56:30,225 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: <br>
/************************************************************<br>
SHUTDOWN_MSG: Shutting down NameNode at
test-master-starfish-001/192.168.32.2<br>
************************************************************/<br>
<br>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
OpenStack-dev mailing list
<a class="moz-txt-link-abbreviated" href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a>
<a class="moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a>
</pre>
</blockquote>
<br>
</body>
</html>