How can I prevent Hadoop's HDFS API from creating parent directories?
I want the HDFS command to fail if the parent directory does not exist when creating a subdirectory. When I use FileSystem#mkdirs, I find that no exception is raised; instead, the missing parent directories are created:
import java.util.UUID
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
val conf = new Configuration()
conf.set("fs.defaultFS", s"hdfs://$host:$port")
val fileSystem = FileSystem.get(conf)
val cwd = fileSystem.getWorkingDirectory
// Guarantee non-existence by appending two UUIDs.
val dirToCreate = new Path(cwd, new Path(UUID.randomUUID.toString, UUID.randomUUID.toString))
fileSystem.mkdirs(dirToCreate)
How can I force HDFS to raise an exception when the parent directory does not exist, without the burden of tedious existence checks?
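For context, the manual check this question is trying to avoid might look like the following sketch (`mkdirNoParents` is a hypothetical helper, not part of the Hadoop API; note that a check-then-create sequence is also inherently racy, since another client could remove the parent between the two calls):

```scala
import java.io.FileNotFoundException
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical helper: refuse to create `dir` unless its parent already
// exists. The exists/mkdirs pair is not atomic, so this is both verbose
// and racy -- the motivation for wanting HDFS to enforce it server-side.
def mkdirNoParents(fs: FileSystem, dir: Path): Boolean = {
  val parent = dir.getParent
  if (parent != null && !fs.exists(parent))
    throw new FileNotFoundException(s"Parent directory doesn't exist: $parent")
  fs.mkdirs(dir)
}
```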
The FileSystem API does not support this type of behavior. Instead, FileContext#mkdir should be used. For example:
import java.util.UUID
import org.apache.hadoop.fs.{FileContext, Path}
import org.apache.hadoop.fs.permission.FsPermission
val files = FileContext.getFileContext()
val cwd = files.getWorkingDirectory
val permissions = new FsPermission("644")
val createParent = false
// Guarantee non-existence by appending two UUIDs.
val dirToCreate = new Path(cwd, new Path(UUID.randomUUID.toString, UUID.randomUUID.toString))
files.mkdir(dirToCreate, permissions, createParent)
The example above will throw:
java.io.FileNotFoundException: Parent directory doesn't exist: /user/erip/f425a2c9-1007-487b-8488-d73d447c6f79
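Because FileContext#mkdir signals the missing parent with an exception, callers can branch on that failure mode explicitly. A minimal sketch, reusing `files`, `permissions`, `createParent`, and `dirToCreate` from the example above and wrapping the call in scala.util.Try:

```scala
import java.io.FileNotFoundException
import scala.util.{Failure, Success, Try}

Try(files.mkdir(dirToCreate, permissions, createParent)) match {
  case Success(_) =>
    println(s"Created $dirToCreate")
  case Failure(_: FileNotFoundException) =>
    // The parent was missing -- exactly the failure the question asks for.
    println(s"Refusing to create $dirToCreate: parent does not exist")
  case Failure(other) =>
    throw other // Propagate unrelated I/O errors.
}
```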