
How To: Write a File with FileOutputStream?

What I'm doing is reading the file line by line through a BufferedOutputStream, saving it with a FileOutputStream, and I want to specify the output data.

If I have understood correctly, you want to download a file from S3 and write it to your local directory using a BufferedOutputStream.

    S3Object object = s3.getObject(new GetObjectRequest(bucketName, key));

    // Copy the S3 object content to the local file, one buffer at a time.
    // try-with-resources flushes and closes both streams, even on an exception.
    try (InputStream is = object.getObjectContent();
         BufferedOutputStream bos = new BufferedOutputStream(
                 new FileOutputStream(new File(localFilePath)))) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = is.read(buffer)) != -1) {
            bos.write(buffer, 0, read);
        }
    }
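Alternatively, since Java 7 the whole stream-to-file copy can be done in one call with `java.nio.file.Files.copy`. A minimal, self-contained sketch of that approach (a `ByteArrayInputStream` stands in for `object.getObjectContent()` here, since any `InputStream` works; `Files.readString` needs Java 11+):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the S3 object content stream.
        InputStream is = new ByteArrayInputStream(
                "hello from S3".getBytes(StandardCharsets.UTF_8));
        Path target = Files.createTempFile("s3-download", ".txt");

        // Copies the entire stream to the file; the caller still
        // closes the InputStream afterwards.
        Files.copy(is, target, StandardCopyOption.REPLACE_EXISTING);
        is.close();

        System.out.println(Files.readString(target)); // prints "hello from S3"
    }
}
```

This keeps the buffering and loop out of your code entirely, at the cost of less control over progress reporting during the download.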

