
Python tox: creating an RPM virtualenv as part of a CI pipeline, unsure where it fits in the workflow

I'm investigating how Python applications can use a CI pipeline, but I'm not sure how to set up the standard workflow.

Jenkins does the initial repository clone and then kicks off tox. This is the stage where Maven and/or MSBuild would fetch dependency packages and build; tox does the equivalent via pip, so all good here.

Now for the confusing part: the last stage of the pipeline is creating and uploading packages. Devs would likely upload the built packages to a local pip repository, but then possibly also create a deployment package. In this case it would need to be an RPM containing a virtualenv of the application. I have made one manually using rpmvenv, but regardless of how it's made, how would such a step be added to a tox config? In the case of rpmvenv, it creates its own virtualenv; it's a self-contained command, so to speak.

I like going with the Unix philosophy for this problem: have a tool that does one thing incredibly well, then compose other tools together. Tox is purpose-built to run your tests in a bunch of different Python environments, so using it to then build a deb / rpm / etc. for you feels like a misuse of that tool. It's probably easier to use tox just to run all your tests, and then, depending on the results, have another step in your pipeline deal with building a package for what was just tested.
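As a rough illustration of that split (the file names here are placeholders, not a prescribed layout), the CI script can call tox purely for the test matrix and only reach the packaging command when tox exits cleanly:

    #!/bin/sh
    # Hypothetical pipeline step: tox owns the test matrix, packaging is a
    # separate stage that only runs when the tests pass.
    set -e        # abort if any command fails

    tox           # run the configured test environments (pip installs deps per env)

    # Packaging happens outside tox. "myapp.json" stands in for an rpmvenv
    # config describing the virtualenv-based RPM; any packaging tool fits here.
    rpmvenv myapp.json

Whether the packaging command is rpmvenv, fpm, or something else, the point is that tox's exit status gates it rather than tox invoking it.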

Jenkins 2.x, which is fairly recent at the time of this writing, seems to be much better at building pipelines. Buildbot is also under active development and already makes it fairly easy to build a good pipeline for this.

What we've done at my work is:

  • Buildbot in AWS, which receives push notifications from GitHub on PRs
  • That kicks off a Docker container that pulls in the current code and runs tox (py.test and flake8, as well as Protractor and Jasmine tests)
  • If the tox step comes back clean, kick off a different Docker container to build a deb package
  • Push that deb package up to S3 and let Salt handle telling those machines to update (a rough sketch of these steps follows this list)
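A rough sketch of what those steps can look like, with illustrative image and script names rather than our actual configuration:

    set -e

    # Test stage: an isolated container checks out the PR's code and runs the tox matrix.
    docker run --rm -v "$PWD":/src -w /src mycompany/tox-runner tox

    # Package stage: only reached if the tox step exited cleanly; a second
    # container builds the .deb (see the fpm example further down).
    docker run --rm -v "$PWD":/src -w /src mycompany/deb-builder ./package/build-deb.sh

    # Ship the artifact to S3; Salt then tells the target machines to update.
    aws s3 cp dist/myapp_1.0.0_amd64.deb s3://my-artifact-bucket/incoming/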

That deb package is also just available as a build artifact, similar to what Jenkins 1.x would do. Once we're ready to go to staging, we just take that package and promote it to the staging Debian repo manually. Ditto for rolling it to prod.

Tools I've found useful for all this:

  • Buildbot, because it's in Python and thus easier for us to work on, but Jenkins would work just as well. Regardless, this is the controller for the entire pipeline
  • Docker, because each build should be completely isolated from every other build
  • tox, the glorious test runner, to handle all those details
  • fpm to build the package: RPM, DEB, tar.gz, whatever. Very configurable and easy to script.
  • Aptly, which makes it easy to manage Debian repositories and, in particular, push them up to S3 (see the sketch after this list)
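For the last two, a minimal example of what the packaging and publishing commands can look like (package, repo, and endpoint names are made up):

    # Build a .deb (or an RPM with -t rpm) out of an application directory.
    fpm -s dir -t deb -n myapp -v 1.0.0 /opt/myapp/venv

    # Publish it through aptly to an S3 endpoint defined in aptly's config file
    # (-skip-signing just keeps the example self-contained).
    aptly repo create myapp-staging
    aptly repo add myapp-staging myapp_1.0.0_amd64.deb
    aptly publish repo -distribution=stable -skip-signing myapp-staging s3:my-apt-endpoint: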
