
[hadoop-1.0] Starting Hadoop fails with "java.io.IOException: NameNode is not formatted." in the log


1. Start Hadoop

    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
    starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
    localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
    localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
    starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
    localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out

2. Accessing localhost:50070 fails, which indicates that the NameNode did not start (a quick process check is sketched below).
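
A quick way to confirm which daemons are actually running is jps, which ships with the JDK. This is only a sketch; the exact list of daemons depends on the setup, but in a pseudo-distributed Hadoop 1.0.4 install all five should appear, and here NameNode would be missing.

    # jps lists the running Java daemons; a healthy pseudo-distributed setup shows
    # NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker.
    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ jps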
3. Check the NameNode startup log

    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ cd ../logs
    ubuntu@ubuntu:~/hadoop-1.0.4/logs$ view hadoop-ubuntu-namenode-ubuntu.log
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = ubuntu/127.0.1.1
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 1.0.4
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
    ************************************************************/
    2013-01-24 07:05:46,936 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
    2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2013-01-24 07:05:46,945 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
    2013-01-24 07:05:47,053 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
    2013-01-24 07:05:47,058 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
    2013-01-24 07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
    2013-01-24 07:05:47,064 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
    2013-01-24 07:05:47,092 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
    2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
    2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2013-01-24 07:05:47,140 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
    2013-01-24 07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
    2013-01-24 07:05:47,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    2013-01-24 07:05:47,154 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
    2013-01-24 07:05:47,169 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
    2013-01-24 07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
    java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
    2013-01-24 07:05:47,175 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

其中"2013-01-24 07:05:47,174 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted."一行显示,namenode未初始化。
4. Format the NameNode with bin/hadoop namenode -format. It asks whether to re-format the filesystem, so I answered yes at the prompt.

    ubuntu@ubuntu:~/hadoop-1.0.4$ bin/hadoop namenode -format
    13/01/24 07:05:08 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = ubuntu/127.0.1.1
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.0.4
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
    ************************************************************/
    Re-format filesystem in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name ? (Y or N) y
    Format aborted in /home/ubuntu/hadoop-1.0.4/tmp/dfs/name
    13/01/24 07:05:12 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
    ************************************************************/
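
Note the "Format aborted" line above: in Hadoop 1.x the re-format prompt accepts only an uppercase "Y", so the lowercase "y" entered here left the NameNode unformatted, which explains why the next startup still fails. If the prompt keeps getting in the way, the answer can be piped in; this is just a sketch, assuming the prompt reads its answer from standard input as Hadoop 1.x does.

    # Feed an uppercase Y to the case-sensitive re-format prompt (use with care:
    # this wipes the existing HDFS metadata in the name directory).
    ubuntu@ubuntu:~/hadoop-1.0.4$ echo Y | bin/hadoop namenode -format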

5. After this, restart Hadoop; localhost:50070 is still unreachable, and the NameNode startup log again reports that the NameNode is not formatted.
6. So I deleted everything under the tmp directory configured in core-site.xml, stopped all Hadoop services, re-formatted the NameNode, and started Hadoop again. Accessing localhost:50070 now succeeds.

 

    ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ rm -rf *
    ubuntu@ubuntu:~/hadoop-1.0.4/tmp$ cd ../bin
    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./stop-all.sh
    stopping jobtracker
    localhost: stopping tasktracker
    no namenode to stop
    localhost: stopping datanode
    localhost: stopping secondarynamenode
    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ hadoop namenode -format
    13/01/24 07:10:45 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = ubuntu/127.0.1.1
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.0.4
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
    ************************************************************/
    13/01/24 07:10:46 INFO util.GSet: VM type       = 32-bit
    13/01/24 07:10:46 INFO util.GSet: 2% max memory = 19.33375 MB
    13/01/24 07:10:46 INFO util.GSet: capacity      = 2^22 = 4194304 entries
    13/01/24 07:10:46 INFO util.GSet: recommended=4194304, actual=4194304
    13/01/24 07:10:46 INFO namenode.FSNamesystem: fsOwner=ubuntu
    13/01/24 07:10:46 INFO namenode.FSNamesystem: supergroup=supergroup
    13/01/24 07:10:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
    13/01/24 07:10:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    13/01/24 07:10:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    13/01/24 07:10:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
    13/01/24 07:10:46 INFO common.Storage: Image file of size 112 saved in 0 seconds.
    13/01/24 07:10:46 INFO common.Storage: Storage directory /home/ubuntu/hadoop-1.0.4/tmp/dfs/name has been successfully formatted.
    13/01/24 07:10:46 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
    ************************************************************/
    ubuntu@ubuntu:~/hadoop-1.0.4/bin$ ./start-all.sh
    starting namenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-namenode-ubuntu.out
    localhost: starting datanode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-datanode-ubuntu.out
    localhost: starting secondarynamenode, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-secondarynamenode-ubuntu.out
    starting jobtracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-jobtracker-ubuntu.out
    localhost: starting tasktracker, logging to /home/ubuntu/hadoop-1.0.4/libexec/../logs/hadoop-ubuntu-tasktracker-ubuntu.out

7. Root cause: after the tmp directory in the configuration file was changed, HDFS was never formatted under the new location, so starting Hadoop reported that the NameNode was not formatted.
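
A quick way to verify this situation is sketched below: check where hadoop.tmp.dir points and whether a formatted NameNode image exists under it. The conf path and directory layout are assumptions based on a default Hadoop 1.0.4 install, using the paths from this post.

    # Show the configured tmp directory (hadoop.tmp.dir in core-site.xml).
    grep -A 1 "hadoop.tmp.dir" ~/hadoop-1.0.4/conf/core-site.xml
    # A formatted NameNode keeps fsimage, edits and VERSION under <tmp>/dfs/name/current.
    # If this directory is missing or empty, run bin/hadoop namenode -format first.
    ls ~/hadoop-1.0.4/tmp/dfs/name/current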
