Hadoop Error: java.io.IOException: Filesystem closed

Hortonworks Error Log:

2014-05-20 17:29:32,242 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lpinsight (auth:SIMPLE) cause:java.io.IOException: Filesystem closed
2014-05-20 17:29:32,243 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
    at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:589)
    at java.io.FilterInputStream.close(FilterInputStream.java:155)
    at org.apache.hadoop.util.LineReader.close(LineReader.java:149)
    at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:243)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:222)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:421)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

MapR Error Log:

2016-03-25/22:15:56.381/IST ERROR [http-bio-10181-exec-2] com.dataguise.hadoop.agent.controller.HadoopAgentController:getJobStatus Exception :
java.io.IOException: Filesystem closed
        at com.mapr.fs.MapRFileSystem.checkOpen(MapRFileSystem.java:1384)
        at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:501)
        at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:588)
        at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1230)
        at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:876)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1434)

Standalone Java test file:

filesystemCloseTest.java


import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import java.util.Date;

/**
 * Created by shiyanghuang on 16/3/22.
 */
public class filesystemCloseTest {

    final static Configuration conf = new Configuration();

    public static void main(String[] args) {
        conf.setBoolean("fs.hdfs.impl.disable.cache", true);   // For HDFS
        conf.setBoolean("fs.maprfs.impl.disable.cache", true); // For MapR

        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                new filesystemCloseTest().doSomething();
            }
        };

        // Start 100 threads that each get, use, and close a FileSystem.
        for (int i = 0; i < 100; i++) {
            Thread thread = new Thread(runnable);
            thread.start();
        }
    }

    public void doSomething() {

        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation ugi = null;

        try {
            ugi = UserGroupInformation.getLoginUser();
        } catch (IOException e) {
            e.printStackTrace();
        }

        try {
            System.out.println(Thread.currentThread().getName());
            FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
                @Override
                public FileSystem run() throws Exception {
                    return FileSystem.get(conf);
                }
            });
            Path path = new Path("hdfs://cdh1.dg:8020/");
            FileStatus[] fsStatus = fs.listStatus(path);
            FSDataInputStream fsi = fs.open(new Path("/amexTest/out.txt"));
            int a;
            while ((a = fsi.read()) != -1) {  // read() returns -1 at EOF, not 0
                System.out.print((char) a);
            }
            fsi.close();
            fs.close();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

    }
}

While reading and writing files, Hadoop threw an exception saying the filesystem was already closed.

At first this was puzzling: the handle had just been obtained with `final FileSystem dfs = FileSystem.get(getConf());`.

It turned out the program was multithreaded. FileSystem.get(getConf()) may return a cached instance rather than creating a new one each time. That means if each thread gets its own filesystem, uses it, and then closes it, things break: the threads may all be closing the same object while other threads are still using it!
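The cache behavior can be demonstrated without a cluster. Below is a minimal, dependency-free analogue of the FileSystem cache (the class and method names here are hypothetical stand-ins, not Hadoop's real internals):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for FileSystem's internal instance cache.
class CachedFs {
    private static final Map<String, CachedFs> CACHE = new HashMap<>();
    private volatile boolean open = true;

    // Like FileSystem.get(conf): returns the cached instance for a key,
    // not a fresh one on every call.
    static synchronized CachedFs get(String key) {
        return CACHE.computeIfAbsent(key, k -> new CachedFs());
    }

    void close() { open = false; }

    int read() {
        if (!open) throw new IllegalStateException("Filesystem closed");
        return 42;
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        CachedFs a = CachedFs.get("hdfs://cluster"); // "thread A" gets a handle
        CachedFs b = CachedFs.get("hdfs://cluster"); // "thread B" gets the SAME object
        System.out.println(a == b);                  // true: one shared instance
        a.close();                                   // thread A is done and closes "its" fs
        try {
            b.read();                                // thread B now fails
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());      // Filesystem closed
        }
    }
}
```

The key point is that `get` hands out one shared object per key, so a `close()` by any holder invalidates everyone else's handle.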

So it is best to create the FileSystem object once in main, pass it between the functions that need it, and close it in main with try…finally.

In a multithreaded program, if you can guarantee that no other thread calls get between your get and your close, that also works.
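The recommended pattern — create once in main, share, close exactly once in a finally — can be sketched as follows. To keep it runnable without a Hadoop cluster, `SharedFs` is a hypothetical stand-in resource, not the real FileSystem:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for FileSystem so the sketch runs without Hadoop.
class SharedFs implements AutoCloseable {
    private volatile boolean open = true;

    String list() {
        if (!open) throw new IllegalStateException("Filesystem closed");
        return "ok";
    }

    @Override
    public void close() { open = false; }
}

public class SharedFsMain {
    public static void main(String[] args) throws InterruptedException {
        SharedFs fs = new SharedFs();             // created ONCE in main
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            for (int i = 0; i < 100; i++) {
                pool.submit(() -> fs.list());     // workers share it, never close it
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } finally {
            fs.close();                           // closed exactly once, after all users finish
        }
    }
}
```

Because only main ever calls close, no worker can pull the shared instance out from under another one.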

Alternatively, disable the FileSystem cache in the configuration used to access the file system, so each get returns a fresh instance:

Configuration conf = new Configuration();
conf.setBoolean("fs.hdfs.impl.disable.cache", true);
FileSystem fileSystem = FileSystem.get(conf);

http://os.51cto.com/art/201305/394782.htm
http://stackoverflow.com/questions/23779186/ioexception-filesystem-closed-exception-when-running-oozie-workflow
