
What is a good practice or design to swap algorithms at runtime?

I have several data processing algorithms that can be assembled into a pipeline to transform data. The code is split into two components: A pre-processing component that does data loading-related tasks, and a processing pipeline component.

I currently have the two parts compiled and packaged into two separate jars. The idea is that the same pre-processing jar can be shipped to all customers, but the pipeline jar can be exchanged depending on customer requirements. I would like to keep the code simple and minimize configuration, so that rules out the use of OSGi or CDI frameworks.

I've gotten some hints by looking at SLF4J's implementation. That project is split into two parts: a core API, and a set of implementations that wrap different logging APIs. The core API makes calls to dummy classes (which exist in the core project simply to allow compilation) that are meant to be overridden by identically named classes in the logging projects. At build time, the compiled dummy classes are deleted from the core API before it is packaged into a jar. At run time, the core jar and one logging jar must both be on the classpath, and the class files missing from the core jar are supplied by the logging jar. This works, but it feels a little hacky to me. I'm wondering if there is a better design, or if this is the best that can be done without using a CDI framework.

Sounds like the strategy software design pattern.

https://en.wikipedia.org/wiki/Strategy_pattern
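A minimal sketch of the Strategy pattern applied to this use case. The names here (ProcessingStrategy, Pipeline, and the two concrete strategies) are illustrative, not from the original post:

```java
// Each processing algorithm is a strategy the pipeline can swap at runtime.
interface ProcessingStrategy {
    String process(String input);
}

class UpperCaseStrategy implements ProcessingStrategy {
    public String process(String input) {
        return input.toUpperCase();
    }
}

class ReverseStrategy implements ProcessingStrategy {
    public String process(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}

class Pipeline {
    private ProcessingStrategy strategy;

    Pipeline(ProcessingStrategy strategy) {
        this.strategy = strategy;
    }

    // Swap the algorithm without touching the pipeline code.
    void setStrategy(ProcessingStrategy strategy) {
        this.strategy = strategy;
    }

    String run(String input) {
        return strategy.process(input);
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Pipeline pipeline = new Pipeline(new UpperCaseStrategy());
        System.out.println(pipeline.run("data"));   // DATA
        pipeline.setStrategy(new ReverseStrategy());
        System.out.println(pipeline.run("data"));   // atad
    }
}
```

Strategy by itself solves the "swap algorithms" part; the remaining question of *which jar supplies the strategy* is what ServiceLoader (below) addresses.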

Take a look at ServiceLoader.

Example: Suppose we have a service type com.example.CodecSet which is intended to represent sets of encoder/decoder pairs for some protocol. In this case it is an abstract class with two abstract methods:

    public abstract Encoder getEncoder(String encodingName);
    public abstract Decoder getDecoder(String encodingName);

Each method returns an appropriate object or null if the provider does not support the given encoding. Typical providers support more than one encoding. If com.example.impl.StandardCodecs is an implementation of the CodecSet service then its jar file also contains a file named

 META-INF/services/com.example.CodecSet 

This file contains the single line:

  com.example.impl.StandardCodecs # Standard codecs 

The CodecSet class creates and saves a single service instance at initialization:

 private static ServiceLoader<CodecSet> codecSetLoader = ServiceLoader.load(CodecSet.class); 

To locate an encoder for a given encoding name it defines a static factory method which iterates through the known and available providers, returning only when it has located a suitable encoder or has run out of providers.

    public static Encoder getEncoder(String encodingName) {
        for (CodecSet cp : codecSetLoader) {
            Encoder enc = cp.getEncoder(encodingName);
            if (enc != null)
                return enc;
        }
        return null;
    }

A getDecoder method is defined similarly.
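For completeness, here is a sketch of that mirrored lookup. The Decoder and CodecSet stand-ins are minimal so the snippet compiles on its own, and the static method is named findDecoder here to avoid clashing with the abstract instance method of the same signature:

```java
import java.util.ServiceLoader;

// Minimal stand-in for the Decoder type from the example.
interface Decoder {}

abstract class CodecSet {
    private static final ServiceLoader<CodecSet> codecSetLoader =
            ServiceLoader.load(CodecSet.class);

    public abstract Decoder getDecoder(String encodingName);

    // Iterates the available providers and returns the first matching decoder,
    // or null if no provider supports the encoding.
    public static Decoder findDecoder(String encodingName) {
        for (CodecSet cp : codecSetLoader) {
            Decoder dec = cp.getDecoder(encodingName);
            if (dec != null)
                return dec;
        }
        return null;
    }
}

public class DecoderDemo {
    public static void main(String[] args) {
        // With no provider jar on the classpath, no decoder is found.
        System.out.println(CodecSet.findDecoder("utf-8"));
    }
}
```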

You already understand the gist of how to use it:

  • Split your project into parts (core, implementation 1, implementation 2, ...)
  • Ship the core API with the pre-processor
  • Have each implementation add the correct META-INF file to its .jar file.

The only configuration files that are necessary are the ones you package into your .jar files.
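Applied to the question's setup, a sketch might look like the following. The names (Pipeline, PipelineRunner) are illustrative: the interface ships in the core/pre-processing jar, and each customer jar contains an implementation plus a META-INF/services file listing the implementation's fully qualified class name.

```java
import java.util.ServiceLoader;

// Lives in the core jar; each customer jar provides an implementation.
interface Pipeline {
    String transform(String data);
}

public class PipelineRunner {
    // Returns the first Pipeline provider found on the classpath,
    // or null if no implementation jar is present.
    static Pipeline loadPipeline() {
        for (Pipeline p : ServiceLoader.load(Pipeline.class)) {
            return p;
        }
        return null;
    }

    public static void main(String[] args) {
        Pipeline pipeline = loadPipeline();
        if (pipeline == null) {
            System.out.println("no pipeline implementation found");
        } else {
            System.out.println(pipeline.transform("raw data"));
        }
    }
}
```

Swapping customer behavior then amounts to putting a different implementation jar on the classpath; neither the core jar nor any external configuration changes.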

You can even have them automatically generated for you with an annotation:

    package foo.bar;

    import javax.annotation.processing.Processor;
    import com.google.auto.service.AutoService;

    @AutoService(Processor.class)
    final class MyProcessor implements Processor {
        // …
    }

AutoService will generate the file

 META-INF/services/javax.annotation.processing.Processor 

in the output classes folder. The file will contain:

 foo.bar.MyProcessor 
