People depend on various data stores, some hosted by third parties and some running on premises, and they all need backups.
Since all of them have similar security and availability requirements, it makes sense to provide a common set of tools.
Each pipeline consists of one input, any number of filters, and one output.
The pipeline supports multiple collectors (inputs) for various data sources:

- File input: reads a file.
- Directory input: reads a directory and streams it to the next filter in the pipeline.
Filters can be chained:

- Demo filter for testing.
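As an illustration of what such a filter could look like in Go, here is a minimal pass-through filter that counts the bytes flowing through it; the package and type names are hypothetical, not taken from the project:

```go
package filters

import "io"

// CountingFilter is a hypothetical demo filter: it implements
// io.Reader, passes data from the previous pipeline stage through
// unchanged, and counts the bytes it has seen.
type CountingFilter struct {
	prev io.Reader // previous element in the pipeline
	N    int64     // total bytes read so far
}

func NewCountingFilter(prev io.Reader) *CountingFilter {
	return &CountingFilter{prev: prev}
}

// Read delegates to the previous stage and records the byte count.
func (f *CountingFilter) Read(p []byte) (int, error) {
	n, err := f.prev.Read(p)
	f.N += int64(n)
	return n, err
}
```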
Outputs write the data at the end of the pipeline to some location:

- File output: write to a local file.
There is a JSON-based configuration which defines the pipelines; see the examples.
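For a rough idea of the shape such a configuration could take, here is a hedged sketch; the field names and structure below are assumptions for illustration, not the project's actual schema, so consult the examples for the real format:

```json
{
  "pipelines": [
    {
      "input": { "type": "file", "path": "/var/lib/app/data.db" },
      "filters": [
        { "type": "demo" }
      ],
      "output": { "type": "file", "path": "/backups/data.db.bak" }
    }
  ]
}
```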
Byte piper is implemented in Go. The inputs implement io.Reader and read data from various locations. Filters also implement io.Reader, reading from the previous element in the pipeline. Outputs implement io.Writer. The last step is just an io.Copy from the last filter in the chain to the output.
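To make that wiring concrete, here is a minimal, self-contained sketch under those interfaces. It uses a gzip stage purely as a stand-in filter; none of the names below come from the actual codebase:

```go
package main

import (
	"compress/gzip"
	"io"
	"os"
)

// gzipFilter stands in for a pipeline filter: it implements io.Reader
// by reading from the previous stage and yielding the gzip-compressed
// bytes.
func gzipFilter(prev io.Reader) io.Reader {
	pr, pw := io.Pipe()
	go func() {
		zw := gzip.NewWriter(pw)
		_, err := io.Copy(zw, prev)
		if cerr := zw.Close(); err == nil {
			err = cerr
		}
		pw.CloseWithError(err) // a nil err closes the pipe cleanly (EOF)
	}()
	return pr
}

func main() {
	// Input: an io.Reader over some location (here, a local file).
	in, err := os.Open("data.db")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	// Filter: wraps the previous reader; more filters could wrap this one.
	filtered := gzipFilter(in)

	// Output: the destination implements io.Writer.
	out, err := os.Create("data.db.gz")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Last step: io.Copy from the final filter into the output.
	if _, err := io.Copy(out, filtered); err != nil {
		panic(err)
	}
}
```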