[openstack-dev] [Savanna] problem starting namenode
Alexander Ignatov
aignatov at mirantis.com
Mon Sep 16 16:35:18 UTC 2013
Hi Arindam,
Savanna's vanilla plugin currently pushes two configs directly into
hdfs-site.xml for all DataNodes and the NameNode:
dfs.name.dir = /lib/hadoop/hdfs/namenode
dfs.data.dir = /lib/hadoop/hdfs/datanode
https://github.com/stackforge/savanna/blob/master/savanna/plugins/vanilla/config_helper.py#L178-L181
All these paths are joined with the /mnt directory, which is the root
location for mounted ephemeral drives.
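For illustration only, here is a minimal Python sketch of that joining
step (my own approximation, not the actual config_helper.py code):

    import os

    MNT_ROOT = '/mnt'  # mount point for the ephemeral drives

    def to_mounted_path(configured_path):
        # os.path.join discards MNT_ROOT when the second argument is
        # absolute, so strip the leading slash first.
        return os.path.join(MNT_ROOT, configured_path.lstrip('/'))

    print(to_mounted_path('/lib/hadoop/hdfs/namenode'))
    # -> /mnt/lib/hadoop/hdfs/namenode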
These configs control the placement of HDFS data. In particular,
/mnt/lib/hadoop/hdfs/namenode must be created before the NameNode is formatted.
I'm not sure about the exact behaviour of Hadoop 0.20.203.0, which you
are using in your plugin, but in version 1.1.2, the one supported by the
vanilla plugin, /mnt/lib/hadoop/hdfs/namenode is created automatically
when the NameNode is formatted.
Maybe in 0.20.203.0 this is not implemented. I'd recommend checking it
with a manual cluster deployment, without Savanna cluster provisioning.
If that is the case, then your plugin should create these directories
before starting the Hadoop services, for example as sketched below.
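Something like the following (a hypothetical pre-start hook; the remote
object and execute_command call follow Savanna's remote-execution style,
but the helper itself is illustrative, not existing plugin code):

    def create_hdfs_dirs(remote, paths, hadoop_user='hadoop'):
        # Ensure each HDFS storage dir exists and is owned by the
        # hadoop user before the NameNode is formatted.
        for path in paths:
            remote.execute_command(
                'sudo mkdir -p %(p)s && sudo chown -R %(u)s:%(u)s %(p)s'
                % {'p': path, 'u': hadoop_user})

    # e.g., before running "sudo -u hadoop hadoop namenode -format":
    # create_hdfs_dirs(remote, ['/mnt/lib/hadoop/hdfs/namenode',
    #                           '/mnt/lib/hadoop/hdfs/datanode'])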
Regards,
Alexander Ignatov
On 9/16/2013 6:11 PM, Arindam Choudhury wrote:
> Hi,
>
> I am trying to write a custom plugin to provision Hadoop 0.20.203.0 with
> jdk1.6u45. So I created a custom pre-installed image by tweaking
> savanna-image-elements, and a new plugin called mango.
> I am getting this error on the namenode:
>
> 2013-09-16 13:34:27,463 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG: host = test-master-starfish-001/192.168.32.2
> STARTUP_MSG: args = []
> STARTUP_MSG: version = 0.20.203.0
> STARTUP_MSG: build =
> http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203
> -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
> ************************************************************/
> 2013-09-16 13:34:27,784 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2013-09-16 13:34:27,797 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 2013-09-16 13:34:27,799 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2013-09-16 13:34:27,799 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> system started
> 2013-09-16 13:34:27,964 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> ugi registered.
> 2013-09-16 13:34:27,966 WARN
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
> already exists!
> 2013-09-16 13:34:27,976 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> jvm registered.
> 2013-09-16 13:34:27,976 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> NameNode registered.
> 2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: VM
> type = 64-bit
> 2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> memory = 17.77875 MB
> 2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet:
> capacity = 2^21 = 2097152 entries
> 2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet:
> recommended=2097152, actual=2097152
> 2013-09-16 13:34:28,047 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
> 2013-09-16 13:34:28,047 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2013-09-16 13:34:28,047 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2013-09-16 13:34:28,060 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.block.invalidate.limit=100
> 2013-09-16 13:34:28,060 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> accessTokenLifetime=0 min(s)
> 2013-09-16 13:34:28,306 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStateMBean and NameNodeMXBean
> 2013-09-16 13:34:28,326 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2013-09-16 13:34:28,329 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Storage directory
> /mnt/lib/hadoop/hdfs/namenode does not exist.
> 2013-09-16 13:34:28,330 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /mnt/lib/hadoop/hdfs/namenode is in an inconsistent state:
> storage directory does not exist or is not accessible.
> at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
> 2013-09-16 13:34:28,330 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /mnt/lib/hadoop/hdfs/namenode is in an inconsistent state:
> storage directory does not exist or is not accessible.
> at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
>
> 2013-09-16 13:34:28,331 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at
> test-master-starfish-001/192.168.32.2
> ************************************************************/
>
>
> and when I create the namenode folder in advance:
>
> 2013-09-16 13:56:29,269 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG: host = test-master-starfish-001/192.168.32.2
> STARTUP_MSG: args = []
> STARTUP_MSG: version = 0.20.203.0
> STARTUP_MSG: build =
> http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203
> -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
> ************************************************************/
> 2013-09-16 13:56:29,570 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2013-09-16 13:56:29,587 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 2013-09-16 13:56:29,588 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2013-09-16 13:56:29,588 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> system started
> 2013-09-16 13:56:29,775 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> ugi registered.
> 2013-09-16 13:56:29,779 WARN
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
> already exists!
> 2013-09-16 13:56:29,786 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> jvm registered.
> 2013-09-16 13:56:29,787 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> NameNode registered.
> 2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet: VM
> type = 64-bit
> 2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> memory = 17.77875 MB
> 2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet:
> capacity = 2^21 = 2097152 entries
> 2013-09-16 13:56:29,815 INFO org.apache.hadoop.hdfs.util.GSet:
> recommended=2097152, actual=2097152
> 2013-09-16 13:56:29,901 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
> 2013-09-16 13:56:29,901 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2013-09-16 13:56:29,901 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2013-09-16 13:56:29,904 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.block.invalidate.limit=100
> 2013-09-16 13:56:29,904 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> accessTokenLifetime=0 min(s)
> 2013-09-16 13:56:30,162 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStateMBean and NameNodeMXBean
> 2013-09-16 13:56:30,200 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2013-09-16 13:56:30,224 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException: NameNode is not formatted.
> at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
> 2013-09-16 13:56:30,224 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> NameNode is not formatted.
> at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
> at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
>
> 2013-09-16 13:56:30,225 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at
> test-master-starfish-001/192.168.32.2
> ************************************************************/