Hadoop Error Summary: Fixing DataNode Startup Failures

When starting Hadoop today, I ran into the following errors:

1. ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs

After dynamically adding a Hadoop slave node, the following problem appeared:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
java.io.IOException: Incompatible namespaceIDs in
/var/lib/hadoop-0.20/cache/hdfs/dfs/data: namenode
namespaceID = 240012870; datanode namespaceID = 1462711424

[root@hadoop current]# hadoop-daemon.sh start datanode
starting datanode, logging to
/usr/local/hadoop1.1/libexec/../logs/hadoop-root-datanode-hadoop.out

  namenode namespaceID = 1691922584; datanode namespaceID = 614022826


[root@hadoop ~]# jps

      The on-disk data format does not match; a hadoop namenode -format is needed.

# find <directory> -type f -name "*.c" | xargs grep "<strings>"

<directory> is the directory to search; it can be omitted for the current directory.
-type f means only match regular files.
-name "*.c" restricts the match to C source files so binaries are skipped; omit it to search all files.
<strings> is the string you are looking for.
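
For example, applied to the Hadoop install used later in this post (path illustrative):

# which config file under the install mentions dfs.datanode.address?
find /usr/local/hadoop1.1/conf -type f -name "*.xml" | xargs grep "dfs.datanode.address"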


Stopping secondary namenodes [bigdata-server-02]
Last login: Thu Dec 21 17:18:39 CST 2017 on pts/0
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Stopping nodemanagers
Last login: Thu Dec 21 17:18:42 CST 2017 on pts/0
Stopping resourcemanager
Last login: Thu Dec 21 17:18:46 CST 2017 on pts/0
[root@bigdata-server-02 hadoop]# vim etc/hadoop/hadoop-env.sh 


[root@bigdata-server-02 hadoop]# find . -type f | xargs grep HADOOP_WORKER
./sbin/workers.sh:#   HADOOP_WORKERS    File naming remote hosts.
./sbin/workers.sh:#   HADOOP_WORKER_SLEEP Seconds to sleep between spawning remote commands.
grep: ./share/hadoop/yarn/webapps/ui2/assetstatables/Sorting: No such file or directory
grep: icons.psd: No such file or directory
./share/doc/hadoop/hadoop-project-dist/hadoop-common/UnixShellAPI.html:<p>Connect to ${HADOOP_WORKERS} or ${HADOOP_WORKER_NAMES} and execute command.</p>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/UnixShellAPI.html:<p>Connect to ${HADOOP_WORKER_NAMES} and execute command under the environment which does not support pdsh.</p>
./bin/hadoop:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/yarn:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/mapred:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/hdfs:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./etc/hadoop/hadoop-env.sh:#export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
./etc/hadoop/hadoop-user-functions.sh.example:#  tmpslvnames=$(echo "${HADOOP_WORKER_NAMES}" | tr ' ' '\n' )
./libexec/hadoop-config.cmd:  set HADOOP_WORKERS=%HADOOP_CONF_DIR%\%2
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVES HADOOP_WORKERS
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVE_NAMES HADOOP_WORKER_NAMES
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVE_SLEEP HADOOP_WORKER_SLEEP
./libexec/yarn-config.sh:  hadoop_deprecate_envvar YARN_SLAVES HADOOP_WORKERS
./libexec/hadoop-functions.sh:    HADOOP_WORKERS="${workersfile}"
./libexec/hadoop-functions.sh:    HADOOP_WORKERS="${HADOOP_CONF_DIR}/${workersfile}"
./libexec/hadoop-functions.sh:## @description  Connect to ${HADOOP_WORKERS} or ${HADOOP_WORKER_NAMES}
./libexec/hadoop-functions.sh:  if [[ -n "${HADOOP_WORKERS}" && -n "${HADOOP_WORKER_NAMES}" ]] ; then
./libexec/hadoop-functions.sh:    hadoop_error "ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting."
./libexec/hadoop-functions.sh:  elif [[ -z "${HADOOP_WORKER_NAMES}" ]]; then
./libexec/hadoop-functions.sh:    if [[ -n "${HADOOP_WORKERS}" ]]; then
./libexec/hadoop-functions.sh:      worker_file=${HADOOP_WORKERS}
./libexec/hadoop-functions.sh:    if [[ -z "${HADOOP_WORKER_NAMES}" ]] ; then
./libexec/hadoop-functions.sh:      tmpslvnames=$(echo ${HADOOP_WORKER_NAMES} | tr -s ' ' ,)
./libexec/hadoop-functions.sh:    if [[ -z "${HADOOP_WORKER_NAMES}" ]]; then
./libexec/hadoop-functions.sh:      HADOOP_WORKER_NAMES=$(sed 's/#.*$//;/^$/d' "${worker_file}")
./libexec/hadoop-functions.sh:## @description  Connect to ${HADOOP_WORKER_NAMES} and execute command
./libexec/hadoop-functions.sh:  local workers=(${HADOOP_WORKER_NAMES})
./libexec/hadoop-functions.sh:        HADOOP_WORKER_NAMES="$1"
./libexec/hadoop-functions.sh:        HADOOP_WORKER_MODE=true
[root@bigdata-server-02 hadoop]# 
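
As the grep output above shows, hadoop-functions.sh aborts whenever both variables are set, so only one of the two should be defined. A hedged fix sketch, assuming the extra definition was added in hadoop-env.sh or in the shell environment:

# see where both variables end up defined
grep -nE 'HADOOP_WORKERS|HADOOP_WORKER_NAMES' etc/hadoop/hadoop-env.sh ~/.bashrc
env | grep -E 'HADOOP_WORKERS|HADOOP_WORKER_NAMES'
# keep only one of them (unset one, or comment out its export), then rerun the stop/start script
unset HADOOP_WORKER_NAMES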

 

The jps command showed that no DataNode was running, so I went to the path given in the message and checked hadoop-root-datanode-hadoop.out, but it was blank.

2. 2011-05-1 14:30:41,855 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException:
Problem binding to /0.0.0.0:50010 : Address already in use
  at org.apache.hadoop.ipc.Server.bind(Server.java:190)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:309)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)

[root@hadoop3 logs]# cat hadoop-root-namenode-hadoop3.log

2.org.apache.hadoop.security.AccessControlException: Permission denied:
user=xxj

Later, under that path, I found the file /usr/local/hadoop1.1/logs/hadoop-root-datanode-hadoop.log.

  Port 50010 is occupied; change it to another port number in hdfs-site.xml.
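
Before picking a new port, it is worth checking what is actually holding 50010 (a hedged check; assumes lsof or net-tools is installed):

lsof -i :50010
netstat -tnlp | grep ':50010'
# if a stale DataNode from an earlier start holds the port, kill it instead of changing the config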

 

Add the following to hdfs-site.xml:

 <property>
   <name>dfs.permissions</name>
   <value>false</value>
 </property>

 

 

 

  3. Invalid Hadoop Runtime specified; please click 'Configure Hadoop
     install directory' or fill in the library location input field

Check the log file:


2017-12-29 15:06:50,183 INFO org.apache.hadoop.http.HttpServer2:
addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2017-12-29 15:06:50,190 INFO org.apache.hadoop.http.HttpServer2:
HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:9870
at
org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1133)
at
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1155)
at
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1214)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1069)
at
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:173)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:888)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:724)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:317)
at
org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1120)
at
org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1151)
… 9 more
2017-12-29 15:06:50,192 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
metrics system…

In Eclipse: Window -> Preferences -> Map/Reduce, and select the Hadoop root directory.

[root@hadoop current]# vim
/usr/local/hadoop1.1/logs/hadoop-root-datanode-hadoop.log

STARTUP_MSG:  version = 1.1.2
STARTUP_MSG:  build = -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
2014-10-31 19:24:28,543 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2014-10-31 19:24:28,565 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2014-10-31 19:24:28,566 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2014-10-31 19:24:28,566 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
system started
2014-10-31 19:24:28,728 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
ugi registered.
2014-10-31 19:24:29,221 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
java.io.IOException: Incompatible namespaceIDs in
/usr/local/hadoop/tmp/dfs/data: namenode namespaceID = 942590743;
datanode namespaceID = 463031076

        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:399)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:309)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)

The default comes from hdfs-default.xml:

<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
  <description>The address where the datanode server will listen to.
  If the port is 0 then the server will start on a free port.
  </description>
</property>

But this port was never configured explicitly; 50010 is simply the default.

4. Eclipse error: failure to login

2014-10-31 19:24:29,229 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop/192.168.0.100
************************************************************/


 

Add the following jars to the Eclipse Hadoop plugin's lib/ directory:

lib/hadoop-core.jar,

lib/commons-cli-1.2.jar,

lib/commons-configuration-1.6.jar,

lib/commons-httpclient-3.0.1.jar,

lib/commons-lang-2.4.jar,

lib/jackson-core-asl-1.0.1.jar,

lib/jackson-mapper-asl-1.0.1.jar

Modify META-INF/MANIFEST.MF:

Bundle-ClassPath: classes/,lib/hadoop-core.jar,lib/commons-cli-1.2.jar,lib/commons-configuration-1.6.jar,lib/commons-httpclient-3.0.1.jar,lib/commons-lang-2.4.jar,lib/jackson-core-asl-1.0.1.jar,lib/jackson-mapper-asl-1.0.1.jar
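
After copying the jars and editing the manifest, the plugin jar has to be repackaged. A hedged sketch, assuming the plugin's META-INF/MANIFEST.MF and lib/ sit in the current working directory; the jar and version names are illustrative:

cp $HADOOP_HOME/hadoop-core-*.jar lib/hadoop-core.jar
cp $HADOOP_HOME/lib/commons-cli-1.2.jar lib/
# merge the edited manifest and the updated lib/ directory back into the plugin jar
jar umf META-INF/MANIFEST.MF hadoop-eclipse-plugin-1.1.2.jar lib/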

 


Using find to search for a string

5. Hadoop 1.0.0: the TaskTracker fails to start when Hadoop is started
ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker
because java.io.IOException: Failed to set permissions of path:
\tmp\hadoop-admin

Reading the log file:


[root@hadoop3 hadoop]# find . -type f | xargs grep 9870
grep: ./share/hadoop/yarn/webapps/ui2/assetstatables/Sorting: No such file or directory
grep: icons.psd: No such file or directory
./share/doc/hadoop/hadoop-yarn/hadoop-yarn-registry/apidocs/org/apache/hadoop/registry/client/types/AddressTypes.html:
[“namenode.example.org”, “9870”]
./share/doc/hadoop/api/org/apache/hadoop/registry/client/types/AddressTypes.html:
[“namenode.example.org”, “9870”]
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml:
<value>0.0.0.0:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html:<p>NameNode
and DataNode each run an internal web server in order to display basic
information about the current status of the cluster. With the default
configuration, the NameNode front page is at
<tt>;. It lists the DataNodes
in the cluster and basic statistics of the cluster. The web interface
can also be used to browse the file system (using “Browse the
file system” link on the NameNode front
page).</p></div>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html:
<value>machine1.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html:
<value>machine2.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html:
<value>machine3.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html:
<value>machine1.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html:
<value>machine2.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html:
<value>machine3.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/3.0.0-alpha1/CHANGES.3.0.0-alpha1.html:<td
align=”left”> <a class=”externalLink”
href=”;
</td>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/3.0.0-alpha1/RELEASENOTES.3.0.0-alpha1.html:<p>The
patch updates the HDFS default HTTP/RPC ports to non-ephemeral ports.
The changes are listed below: Namenode ports: 50470 --> 9871,
50070 --> 9870, 8020 --> 9820 Secondary NN ports:
50091 --> 9869, 50090 --> 9868 Datanode ports: 50020
--> 9867, 50010 --> 9866, 50475 --> 9865,
50075 --> 9864</p><hr />
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/2.8.0/CHANGES.2.8.0.html:<td
align=”left”> <a class=”externalLink”
href=”;
</td>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/CommandsManual.html:<pre
class=”source”>$ bin/hadoop daemonlog -setlevel 127.0.0.1:9870
org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG
./share/doc/hadoop/hadoop-project-dist/hadoop-common/SingleCluster.html:<li>NameNode

\mapred\local\ttprivate to 0700
 at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:682)
 at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:655)
 at
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
 at
org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
 at
org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
 at
org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:719)
 at
org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1436)
 at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3694)

Look first at the word Incompatible in the ERROR message; it means "not compatible", so we can tell that the datanode's namespaceID is what went wrong.

    ./share/doc/hadoop/hadoop-project-dist/hadoop-common/ClusterSetup.html:<td
    align=”left”> Default HTTP port is 9870. </td></tr>
    ./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:06:50,085 INFO
    org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at:

    ./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in
    use: 0.0.0.0:9870
    ./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in
    use: 0.0.0.0:9870
    ./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:06:50,193 INFO
    org.apache.hadoop.util.ExitUtil: Exiting with status 1:
    java.net.BindException: Port in use: 0.0.0.0:9870
    ./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:23:48,931 INFO
    org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at:

    ./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in
    use: 0.0.0.0:9870
    ./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in
    use: 0.0.0.0:9870
    ./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:23:49,035 INFO
    org.apache.hadoop.util.ExitUtil: Exiting with status 1:
    java.net.BindException: Port in use: 0.0.0.0:9870
    [root@hadoop3 hadoop]# xlc
    Stopping namenodes on [hadoop3]

Running a job from Eclipse: Failed to set permissions of path:
\tmp\hadoop-admin\mapred\staging\Administrator-1506477061\.staging
to 0700

On Windows the Hadoop TaskTracker cannot start properly;
this affects versions 0.20.204, 0.20.205 and 1.0.0.

Solutions found online vary widely; some suggest using a version below 0.20.204, and so on.

I solved it by modifying the code of the checkReturnValue method in the FileUtil class, recompiling,
and replacing the original hadoop-core-1.0.0.jar.

Download link for the modified hadoop-core-1.0.0.jar:


So in the end the DataNode shut down.


 

 

 

  Similarly, port 50030 can also be occupied:

9870 is the default.

6. Bad connection to FS. command aborted. exception: Call to
dp01-154954/192.168.13.134:9000 failed on connection exception:
java.net.ConnectException: Connection refused: no further information

Troubleshooting approach:

  2011-05-1 14:30:43,931 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open()
is -1. Opening the listener on 50030

./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml:
<value>0.0.0.0:9870</value>

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
Directory D:\tmp\hadoop-SYSTEM\dfs\name is in an inconsistent

(1) First, go to the hdfs-site.xml configuration file under the Hadoop directory and take a look:

  2011-05-1 14:30:43,933 FATAL org.apache.hadoop.mapred.JobTracker:
java.net.BindException: Address already in use

<property>
<name>dfs.namenode.http-address</name>
<value>0.0.0.0:9870</value>
<description>
The address and the base port where the dfs namenode web ui will listen
on.
</description>
</property>
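
Before changing this address, it is usually quicker to find whatever already holds port 9870 (a hedged check; assumes net-tools or iproute2 is installed):

netstat -tnlp | grep ':9870'        # or: ss -tnlp | grep ':9870'
# if it is a NameNode left over from an earlier start, stop it cleanly (Hadoop 3.x syntax):
bin/hdfs --daemon stop namenode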

state: storage directory does not exist or is not accessible.

[root@hadoop current]# vim
/usr/local/hadoop1.1/conf/hdfs-site.xml

     at sun.nio.ch.Net.bind(Native Method)

 

Re-format: bin/hadoop namenode -format  (be careful not to mistype it)

…………………………………………………………………………………………………………………….

     at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119) 

Search for 9870 in vim:

 

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  Modify the port number in mapred-default.xml:

<property>
  <name>mapred.job.tracker.http.address</name>
  <value>0.0.0.0:50030</value>
  <description>The job tracker http server address and port
  the server will listen on. If the port is 0 then the server
  will start on a free port.
  </description>
</property>

:/9870

7.org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot
delete /tmp/hadoop-SYSTEM/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0.9412 has not reached the threshold
0.9990. Safe mode will be turned off automatically.
 at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1992)
 at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1972)
 at
org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:792)
 at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
 at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

<!-- Put site-specific property overrides in this file. -->

 

:?9870

  bin/hadoop dfsadmin -safemode leave  (leave safe mode)

 safemode options:

enter – enter safe mode

leave – force the NameNode to leave safe mode

get  –  report whether safe mode is currently on

wait – wait until safe mode ends
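
For example (older hadoop dfsadmin syntax, matching the commands above):

bin/hadoop dfsadmin -safemode get     # report whether safe mode is on
bin/hadoop dfsadmin -safemode wait    # block until it switches off on its own
bin/hadoop dfsadmin -safemode leave   # force the NameNode out of safe mode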

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>${hadoop.tmp.dir}/dfs/name</value>
        <description>If this is a comma-delimited list of directories
          then the name table is replicated in all of the directories,
          for redundancy.
        </description>
    </property>
</configuration>


 

 

…………………………………………………………………………………………………………………….


 

8. HBase

There is no datanode-related configuration in it. If instead you have something like the following:


INFO org.apache.hadoop.hbase.util.FSUtils: Waiting for dfs to exit safe
mode…

<property>
  <name>dfs.data.dir</name>
  <value>/data/hdfs/data</value>
</property>


bin/hadoop dfsadmin -safemode leave  (leave safe mode)

then your datanode data directory is not at the default path but at a path you configured yourself.


9. Unexpected version of storage directory

 

2013-05-19 11:29:57,447 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.hdfs.server.common.IncorrectVersionException:  
Unexpected version of storage directory /home/hadoop/usr/local/hadoop/data1. Reported: -32. Expecting = -18.  
    at org.apache.hadoop.hdfs.server.common.Storage.getFields(Storage.java:647)  
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.getFields(DataStorage.java:178)  
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:227)  
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:216)  
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:228)  
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)  
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)  
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)  
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)  
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)  
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)  
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)  

2013-05-19 11:29:57,447 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 

(2) Go into the current directory under the datanode's dfs.data.dir and edit the VERSION file there.


The namenode's and datanode's namespaceIDs are inconsistent. You can edit each datanode's namespaceID, located in dfs/data/current/VERSION, so that it matches the namenode's namespaceID in dfs/name/current/VERSION.

Since I am using the defaults, the path is /usr/local/hadoop/tmp/dfs/data/current/VERSION.


10. SSH cannot be started under Windows 7. Error: ssh: connect to host localhost port 22:
Connection refused

This varies between versions, and the path may differ too, so it is best to look for it yourself.


  Enter the Windows login username




[root@hadoop current]# vim
/usr/local/hadoop/tmp/dfs/data/current/VERSION

…………………………………………………………………………………………………………………….

#Thu Oct 30 04:52:01 PDT 2014
namespaceID=463031076
storageID=DS-1787154912-192.168.0.100-50010-1413940826285
cTime=0
storageType=DATA_NODE
layoutVersion=-32

…………………………………………………………………………………………………………………….

Note the namespaceID=463031076 inside: it matches the datanode namespaceID = 463031076 in hadoop-root-datanode-hadoop.log, which confirms that the DataNode reads this file, so we are looking at the right one.

 

(3) Edit this VERSION file so that the namespaceID matches the namenode namespaceID = 942590743 from hadoop-root-datanode-hadoop.log.

 

PS: you can probably guess where the namenode namespaceID comes from:

[root@hadoop current]# vim
/usr/local/hadoop/tmp/dfs/name/current/VERSION

…………………………………………………………………………………………………………………….

#Fri Oct 31 19:23:44 PDT 2014
namespaceID=942590743
cTime=0
storageType=NAME_NODE
layoutVersion=-32

…………………………………………………………………………………………………………………….

Isn't this ID exactly the namenode namespaceID = 942590743 from hadoop-root-datanode-hadoop.log?
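
Putting the two VERSION files together, the same edit can be scripted; a hedged sketch using the default paths above:

# copy the namenode's namespaceID into the datanode's VERSION file
NN_ID=$(grep '^namespaceID=' /usr/local/hadoop/tmp/dfs/name/current/VERSION | cut -d= -f2)
sed -i "s/^namespaceID=.*/namespaceID=${NN_ID}/" /usr/local/hadoop/tmp/dfs/data/current/VERSION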

 

(4) After the change, start the datanode again:

[root@hadoop current]# hadoop-daemon.sh start datanode

[root@hadoop current]# jps

8581 DataNode

Seeing DataNode in the output means it is up and running.
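
As an extra check (Hadoop 1.x command), confirm that the DataNode has registered with the NameNode:

bin/hadoop dfsadmin -report    # the restarted DataNode should appear in the live node list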


