Spring Batch FlatFileItemWriter only processes data after the second run
I have a Spring Batch job that downloads several csv files via sftp in step 1 and then does some processing on them in step 2.
Everything works, but only if I start the program twice (so that the downloaded files already exist when it starts).
It seems the application cannot see files that were not present at startup.
I suppose this has to do with the @Value being read before step1() is executed.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    private static final Logger log = LoggerFactory.getLogger(BillProcessor.class);

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean
    public Job mainJob() throws IOException, InterruptedException {
        return jobBuilderFactory
                .get("mainJob")
                .incrementer(new RunIdIncrementer())
                .start(step1())
                .next(step2())
                .build();
    }

    @Bean
    protected Step step1() {
        return stepBuilderFactory
                .get("step1")
                .tasklet(sftpTransferer())
                .build();
    }

    @Bean
    public Step step2() throws IOException, InterruptedException {
        return stepBuilderFactory.get("step2").<Bill, Bill>chunk(1000)
                .reader(multiResourceItemReader())
                .writer(writer())
                .build();
    }

    @Bean
    public SftpTransferer sftpTransferer() {
        SftpTransferer tasklet = new SftpTransferer();
        return tasklet;
    }

    @Bean
    public FlatFileItemReader<Bill> reader() throws IOException, InterruptedException {
        log.info("start reader");
        ...
        return reader;
    }

    @Value("file:*.tsv")
    private Resource[] inputResources;

    @Bean
    public MultiResourceItemReader<Bill> multiResourceItemReader() throws InterruptedException, IOException {
        log.info("start MultiResourceItemReader");
        MultiResourceItemReader<Bill> resourceItemReader = new MultiResourceItemReader<Bill>();
        resourceItemReader.setResources(inputResources);
        resourceItemReader.setDelegate(reader());
        return resourceItemReader;
    }

    @Bean
    public BillProcessor processor() {
        log.info("start processor");
        return new BillProcessor();
    }

    @Bean
    public FlatFileItemWriter<Bill> writer() throws IOException {
        log.info("start writer");
        FlatFileItemWriter<Bill> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("_accounting.csv"));
        log.info("set append to true");
        writer.setAppendAllowed(true);
        HeaderWriter headerWriter = new HeaderWriter("id;path;processed;records");
        writer.setHeaderCallback(headerWriter);
        writer.setLineAggregator(new DelimitedLineAggregator<Bill>() {{
            setDelimiter(";");
            setFieldExtractor(new BeanWrapperFieldExtractor<Bill>() {{
                setNames(new String[]{"id", "path", "processed", "records"});
            }});
        }});
        return writer;
    }
}
I couldn't find anything about this online. Any help is appreciated!
I suggest making the multiResourceItemReader() bean step-scoped and having it accept inputResources as a parameter:
@Bean
public Step step2() throws IOException, InterruptedException {
    return stepBuilderFactory.get("step2").<Bill, Bill>chunk(1000)
            .reader(multiResourceItemReader(null))
            .writer(writer())
            .build();
}

...

@StepScope
@Bean
public MultiResourceItemReader<Bill> multiResourceItemReader(@Value("file:*.tsv") Resource[] inputResources) throws InterruptedException, IOException {
    log.info("start MultiResourceItemReader");
    MultiResourceItemReader<Bill> resourceItemReader = new MultiResourceItemReader<Bill>();
    resourceItemReader.setResources(inputResources);
    resourceItemReader.setDelegate(reader());
    return resourceItemReader;
}
This way, the resolution of inputResources is deferred until step2 actually runs, i.e. after step1 has executed and the files really exist. (The null passed in step2 is just a placeholder: because the bean is step-scoped, Spring creates a proxy and injects the real resolved value at step execution time.)
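To see why deferral fixes the problem, here is a minimal plain-Java sketch (no Spring; all names are illustrative, not part of the question's code). Eagerly resolving a file pattern at "startup" captures an empty result, while a lazily evaluated Supplier, standing in for a step-scoped bean, resolves the pattern only when invoked and therefore sees files created in the meantime:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class LazyResolutionDemo {

    // Resolve "*.tsv" in dir at the moment this method is called,
    // mirroring how a step-scoped bean resolves its @Value at step start.
    static List<Path> resolveTsv(Path dir) throws IOException {
        List<Path> found = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, "*.tsv")) {
            ds.forEach(found::add);
        }
        return found;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("batch-demo");

        // Eager resolution (like a singleton bean): pattern evaluated at "startup",
        // before any file has been downloaded.
        List<Path> eager = resolveTsv(dir);

        // Lazy resolution (like @StepScope): pattern evaluated only when "step 2" runs.
        Supplier<List<Path>> lazy = () -> {
            try {
                return resolveTsv(dir);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        };

        // "Step 1" downloads a file after startup.
        Files.createFile(dir.resolve("bills.tsv"));

        System.out.println("eager=" + eager.size() + " lazy=" + lazy.get().size());
        // prints: eager=0 lazy=1
    }
}
```

The eager snapshot never sees bills.tsv, which is exactly the symptom described: the job only works on the second run, when the files happen to exist before startup.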