Hadoop Error: java.io.IOException: Filesystem closed

Hortonworks Error Log:

2014-05-20 17:29:32,242 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lpinsight (auth:SIMPLE) cause:java.io.IOException: Filesystem closed
2014-05-20 17:29:32,243 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
    at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:589)
    at java.io.FilterInputStream.close(FilterInputStream.java:155)
    at org.apache.hadoop.util.LineReader.close(LineReader.java:149)
    at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:243)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:222)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:421)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

MapR Error Log:

2016-03-25/22:15:56.381/IST ERROR [http-bio-10181-exec-2] com.dataguise.hadoop.agent.controller.HadoopAgentController:getJobStatus Exception :
java.io.IOException: Filesystem closed
        at com.mapr.fs.MapRFileSystem.checkOpen(MapRFileSystem.java:1384)
        at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:501)
        at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:588)
        at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1230)
        at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:876)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1434)

Standalone test java file:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import java.util.Date;

/**
 * Created by shiyanghuang on 16/3/22.
 */
public class filesystemCloseTest {

    final static Configuration conf = new Configuration();

    public static void main(String[] args) {
        conf.setBoolean("fs.hdfs.impl.disable.cache", true);    // For HDFS
        conf.setBoolean("fs.maprfs.impl.disable.cache", true);  // For MapR

        Runnable runnable = new Runnable() {
            public void run() {
                new filesystemCloseTest().doSomething();
            }
        };

        // Hit the FileSystem from many threads at once to reproduce the race.
        for (int i = 0; i < 100; i++) {
            Thread thread = new Thread(runnable);
            thread.start();
        }
    }

    public void doSomething() {

        UserGroupInformation ugi = null;

        try {
            ugi = UserGroupInformation.getLoginUser();
        } catch (IOException e) {
            e.printStackTrace();
            return;
        }

        try {
            final UserGroupInformation finalUgi = ugi;
            // Thread.sleep((int) (Math.random() * 100));
            FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
                public FileSystem run() throws Exception {
                    return FileSystem.get(conf);
                }
            });

            Path path = new Path("hdfs://cdh1.dg:8020/");
            FileStatus[] fsStatus = fs.listStatus(path);
            FSDataInputStream fsi = fs.open(new Path("/amexTest/out.txt"));
            int a;
            while ((a = fsi.read()) >= 0) {  // read() returns -1 at EOF, so test >= 0
                System.out.print((char) a);
            }
            fsi.close();
            fs.close();  // closing a *cached* instance is what breaks the other threads

            Date date = new Date();
            // System.out.println(date.toString() + " Username: " + finalUgi.getUserName());
            // Thread.sleep((int) (Math.random() * 100));
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

So I started digging: why would this happen? I had just obtained the instance with final FileSystem dfs = FileSystem.get(getConf());
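The reason is that FileSystem.get(conf) does not create a new object on every call: it returns a JVM-wide cached instance, shared by every caller with the same scheme, authority, and user. As soon as any one of them calls close() on that shared instance, every other holder of the same reference fails with "Filesystem closed" on its next operation. A minimal, Hadoop-free sketch of the same pattern (the SharedResource class and its cache below are hypothetical stand-ins for FileSystem and its internal cache, for illustration only):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class CacheCloseDemo {
    // Hypothetical stand-in for FileSystem: can be closed, then refuses reads.
    static class SharedResource {
        private boolean open = true;
        void close() { open = false; }
        String read() throws IOException {
            if (!open) throw new IOException("Filesystem closed"); // same symptom
            return "data";
        }
    }

    // Simplified version of the cache: get() returns the SAME object for the same key.
    static final Map<String, SharedResource> CACHE = new HashMap<>();
    static synchronized SharedResource get(String key) {
        return CACHE.computeIfAbsent(key, k -> new SharedResource());
    }

    public static void main(String[] args) {
        SharedResource a = get("hdfs://cluster");  // caller A
        SharedResource b = get("hdfs://cluster");  // caller B gets the *same* object
        a.close();                                 // A is done and closes "its" handle
        try {
            b.read();                              // B now fails, just like the map task
        } catch (IOException e) {
            System.out.println("caller B: " + e.getMessage());
        }
    }
}
```

Running this prints "caller B: Filesystem closed", which is exactly the failure mode in the stack traces above: the thread that hits the exception is not the one that called close().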




Use the configuration below when accessing the file system. Setting fs.hdfs.impl.disable.cache to true makes FileSystem.get(conf) return a fresh instance instead of the shared cached one, so closing it no longer affects other callers (on MapR, set fs.maprfs.impl.disable.cache instead):

Configuration conf = new Configuration();
conf.setBoolean("fs.hdfs.impl.disable.cache", true);
FileSystem fileSystem = FileSystem.get(conf);
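If disabling the cache globally feels too heavy-handed, FileSystem.newInstance(conf) bypasses the cache for that one call and always returns a fresh instance, which is then safe to close without affecting anyone else. A sketch under those assumptions (the listing of "/" is illustrative; this needs Hadoop client libraries and a reachable cluster, so it is a config fragment rather than a standalone program):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NewInstanceExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // newInstance() skips the FileSystem cache entirely,
        // so close() here cannot break any other caller.
        try (FileSystem fs = FileSystem.newInstance(conf)) {
            fs.listStatus(new Path("/"));  // illustrative operation
        }
    }
}
```

The trade-off is that each newInstance() call opens its own connection state, so reuse one instance per thread rather than creating one per operation.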

