Not able to access a directory created in HDFS after all the Hadoop daemons are stopped and restarted
I am new to Hadoop and have a couple of problems I have not been able to find a solution for. The issue is described below:
**Created a directory on HDFS using the command below:**

```
bin/hadoop fs -mkdir /user/abhijit/apple_poc
```

**Checked that the directory was created:**

```
bin/hadoop fs -ls
drwxr-xr-x   - abhijit supergroup          0 2013-07-11 11:09 /user/abhijit/apple_poc
```

**Stopped the Hadoop daemons:**

```
bin/stop-all.sh
```

**Restarted the daemons:**

```
bin/start-all.sh
```

**Checked again whether the directory created above is still present:**

```
bin/hadoop fs -ls
2013-07-11 11:37:57.304 java[3457:1903] Unable to load realm info from SCDynamicStore
13/07/11 11:37:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
... (tries 1 through 8 repeat the same message) ...
13/07/11 11:38:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
```
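"Connection refused" on localhost:9000 means nothing is listening at the NameNode's RPC address, i.e. the NameNode did not come back up after the restart. A quick way to check, assuming a Hadoop 1.x tarball install with logs under `$HADOOP_HOME/logs` (the paths are assumptions; adjust to your setup):

```
# List running Java daemons; a healthy pseudo-distributed Hadoop 1.x
# setup shows NameNode, DataNode, SecondaryNameNode, JobTracker
# and TaskTracker.
jps

# If NameNode is missing, its log usually says why it failed to start.
# The exact file name (user and hostname parts) differs per machine.
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log

# Check whether anything is listening on the NameNode RPC port.
lsof -i :9000
```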
Please clarify:
1. I am not sure what I am doing wrong, or whether some change is needed in a property file.
2. The default HDFS storage directory is /user/&lt;username&gt;/; if I change the default directory, would the problem be solved?
3. Every time, I have to format the NameNode to get out of this problem, and after formatting, the directory created above is lost (see the configuration sketch after this list).
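On point 3, one likely explanation (an assumption based on the symptoms, not something the output above proves): in Hadoop 1.x, the NameNode's metadata directory defaults to a path under `hadoop.tmp.dir`, which itself defaults to `/tmp/hadoop-${user.name}`. If the OS cleans /tmp, the NameNode loses its metadata, fails to start (hence "Connection refused" on port 9000), and only a fresh format brings it back, wiping everything. A minimal core-site.xml sketch that moves storage to a persistent location; the path `/home/abhijit/hadoop_tmp` is a made-up example:

```
<!-- conf/core-site.xml (Hadoop 1.x layout) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <!-- Keep HDFS state out of /tmp so it survives reboots and tmp
         cleanup. This path is a hypothetical example; use any
         persistent directory writable by the hadoop user. -->
    <name>hadoop.tmp.dir</name>
    <value>/home/abhijit/hadoop_tmp</value>
  </property>
</configuration>
```

After a change like this you would need one final `bin/hadoop namenode -format` (which again wipes HDFS) and a restart; from then on, stop-all.sh/start-all.sh should no longer lose the directory.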
Please let me know what the issue behind this is. Any help is appreciated.

Thanks,
Abhijit
This error can occur for multiple reasons. I have been playing around with Hadoop and have run into this issue several times, each time with a different cause:
- The main daemons are not running -> check the logs (the jps/log check sketched above applies here).
- The proper IP is not mentioned in the hosts file (after setting a hostname, put its IP in the hosts file so other nodes can reach it; see the sketch after this list).
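For that second cause, a minimal sketch of the idea; the IP and hostname below are hypothetical, and the name must match whatever fs.default.name and the masters/slaves files use:

```
# /etc/hosts -- example entries (IP and hostname are made up)
127.0.0.1      localhost
192.168.1.10   hadoop-master    # name other nodes use to reach the NameNode
```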