Spring Batch - Delete or Archive Files After Read

Source: https://howtodoinjava.com/spring-batch/delete-archive-files-after-read/

In a Spring Batch job, the best way to delete the flat files after reading or processing them is to create a separate Tasklet and execute it at the end of the job, once processing is complete.

Tasklet to Delete the Files

Here is an example of such a Tasklet, which deletes all CSV files from the c:/temp/input/ location at the end of the job.

FileDeletingTasklet.java

import java.io.File;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.UnexpectedJobExecutionException;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.core.io.Resource;
import org.springframework.util.Assert;

public class FileDeletingTasklet implements Tasklet, InitializingBean {

    private Resource[] resources;

    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {

        for (Resource r : resources) {
            File file = r.getFile();
            boolean deleted = file.delete();
            if (!deleted) {
                throw new UnexpectedJobExecutionException("Could not delete file " + file.getPath());
            }
            // Produces the "Deleted file ::" lines shown in the console output below
            System.out.println("Deleted file :: " + file.getPath());
        }
        return RepeatStatus.FINISHED;
    }

    public void setResources(Resource[] resources) {
        this.resources = resources;
    }

    public void afterPropertiesSet() throws Exception {
        Assert.notNull(resources, "resources must be set");
    }
}

Archive the Files

Feel free to modify the logic inside FileDeletingTasklet to move the files to another location, or to plug in your own archival logic; it only involves simple IO operations. A sketch of such a variant follows.
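For example, a minimal archiving variant could move each matched file into an archive folder instead of deleting it. The sketch below is only an illustration: the FileArchivingTasklet class name and the c:/temp/archive/ target directory are assumptions, not part of the original example.

FileArchivingTasklet.java

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.core.io.Resource;

public class FileArchivingTasklet implements Tasklet {

    private Resource[] resources;

    // Hypothetical archive location; adjust to your environment
    private Path archiveDir = Paths.get("c:/temp/archive/");

    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        Files.createDirectories(archiveDir);
        for (Resource r : resources) {
            File file = r.getFile();
            // Move the file instead of deleting it; overwrite any previously archived copy with the same name
            Files.move(file.toPath(), archiveDir.resolve(file.getName()), StandardCopyOption.REPLACE_EXISTING);
        }
        return RepeatStatus.FINISHED;
    }

    public void setResources(Resource[] resources) {
        this.resources = resources;
    }
}

Such a Tasklet is wired into a step in exactly the same way FileDeletingTasklet is wired into step2() in the next section.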

How to Use FileDeletingTasklet

Create another step that runs after the main step, and execute FileDeletingTasklet in it.

BatchConfig.java

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.MultiResourceItemReader;
import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;

import com.howtodoinjava.demo.model.Employee;

@Configuration
@EnableBatchProcessing
public class BatchConfig
{
	@Autowired
	private JobBuilderFactory jobBuilderFactory;

	@Autowired
	private StepBuilderFactory stepBuilderFactory;

	@Value("file:c:/temp/input/inputData*.csv")
	private Resource[] inputResources;

	@Bean
	public Job readCSVFilesJob() {
		return jobBuilderFactory
				.get("readCSVFilesJob")
				.incrementer(new RunIdIncrementer())
				.start(step1())		// read the CSV files and write the records
				.next(step2())		// then delete the input files
				.build();
	}

	@Bean
	public Step step1() {
		return stepBuilderFactory.get("step1").<Employee, Employee>chunk(5)
				.reader(multiResourceItemReader())
				.writer(writer())
				.build();
	}

	@Bean
	public Step step2() {
		FileDeletingTasklet task = new FileDeletingTasklet();
		task.setResources(inputResources);
		return stepBuilderFactory.get("step2")
				.tasklet(task)
				.build();
	}

	@Bean
	public MultiResourceItemReader<Employee> multiResourceItemReader()
	{
		MultiResourceItemReader<Employee> resourceItemReader = new MultiResourceItemReader<Employee>();
		resourceItemReader.setResources(inputResources);
		resourceItemReader.setDelegate(reader());
		return resourceItemReader;
	}

	@SuppressWarnings({ "rawtypes", "unchecked" })
	@Bean
	public FlatFileItemReader<Employee> reader()
	{
		// Create reader instance
		FlatFileItemReader<Employee> reader = new FlatFileItemReader<Employee>();
		// Set number of lines to skips. Use it if file has header rows.
		reader.setLinesToSkip(1);
		// Configure how each line will be parsed and mapped to different values
		reader.setLineMapper(new DefaultLineMapper() {
			{
				// 3 columns in each row
				setLineTokenizer(new DelimitedLineTokenizer() {
					{
						setNames(new String[] { "id", "firstName", "lastName" });
					}
				});
				// Set values in Employee class
				setFieldSetMapper(new BeanWrapperFieldSetMapper<Employee>() {
					{
						setTargetType(Employee.class);
					}
				});
			}
		});
		return reader;
	}

	@Bean
	public ConsoleItemWriter<Employee> writer()
	{
		// ConsoleItemWriter is a custom ItemWriter in this project that prints each item to the console (sketched below)
		return new ConsoleItemWriter<Employee>();
	}
}
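Note that writer() returns ConsoleItemWriter, a small custom ItemWriter used in this project to print each processed item to the console; its import is not shown above and should point to wherever you keep the class. A minimal sketch, assuming Spring Batch 4.x where ItemWriter#write receives a List, might look like this:

ConsoleItemWriter.java

import java.util.List;

import org.springframework.batch.item.ItemWriter;

public class ConsoleItemWriter<T> implements ItemWriter<T> {

    @Override
    public void write(List<? extends T> items) throws Exception {
        // Print every item in the chunk; with a matching Employee.toString(), this produces
        // the "Employee [...]" lines in the console output below
        for (T item : items) {
            System.out.println(item);
        }
    }
}

Each inputData*.csv file is assumed to contain a header row followed by three comma-separated columns (id, firstName, lastName), matching the DelimitedLineTokenizer configuration above, for example:

inputData1.csv

id,firstName,lastName
1,Lokesh,Gupta
2,Amit,Mishra
3,Pankaj,Kumar
4,David,Miller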

Now run the application and check the logs.

Console

2018-07-11 12:30:00 INFO  - Job: [SimpleJob: [name=readCSVFilesJob]] launched with the following parameters: [{JobID=1531292400004}]

2018-07-11 12:30:00 INFO  - Executing step: [step1]

Employee [id=1, firstName=Lokesh, lastName=Gupta]
Employee [id=2, firstName=Amit, lastName=Mishra]
Employee [id=3, firstName=Pankaj, lastName=Kumar]
Employee [id=4, firstName=David, lastName=Miller]
Employee [id=5, firstName=Ramesh, lastName=Gupta]
Employee [id=6, firstName=Vineet, lastName=Mishra]
Employee [id=7, firstName=Amit, lastName=Kumar]
Employee [id=8, firstName=Dav, lastName=Miller]
Employee [id=9, firstName=Vikas, lastName=Kumar]
Employee [id=10, firstName=Pratek, lastName=Mishra]
Employee [id=11, firstName=Brian, lastName=Kumar]
Employee [id=12, firstName=David, lastName=Cena]

2018-07-11 12:30:00 INFO  - Executing step: [step2]

Deleted file :: c:\temp\input\inputData1.csv
Deleted file :: c:\temp\input\inputData2.csv
Deleted file :: c:\temp\input\inputData3.csv

2018-07-11 12:30:00 INFO  - Job: [SimpleJob: [name=readCSVFilesJob]] completed with the following parameters: [{JobID=1531292400004}] and the following status: [COMPLETED]

Drop your questions in the comments section.

Happy Learning!!