If you're basing deploys on image builds, you're not "rolling back" anything; you're building a new host (or host image) based on the correct dependencies.
That process relies on your platform's own dependency-resolution system, and I hope you're using something sane such as Debian/Ubuntu, or are building from source via Gentoo. RPM distros can work but tend to be far flakier.
Start with a base install, have a package for your own source which specifies deps, including if necessary _maximum_ version numbers for deps, and build the target image. Once that's built, you can generally deploy that directly rather than re-build for each deployed host.
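As a sketch of what "maximum version numbers for deps" looks like in Debian packaging: a package may appear twice in a `Depends` line, once per constraint, and the constraints are ANDed. Package names and versions below are hypothetical:

```
Package: myapp
Depends: python3 (>= 3.9),
         python3-requests (>= 2.20), python3-requests (<< 3.0)
```

With an upper bound like `(<< 3.0)` in place, rebuilding the image after a bad upstream release simply resolves back to the last version inside the allowed range.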
Packaging and image preparation _aren't_ tasks which can be abstracted away entirely. It's this point which the containers craze founders on the reefs of reality. Yes, packaging software properly is a pain. But not packaging it properly is an even bigger pain.
I think that the "real" problem here is that Python's package management and apt/yum don't really interface well. I've built .debs for Python packages before, and it was a huge pain in the ass, even with the scripts and automations that I was able to find for it.
It's 'simple' for me to build a virtualenv in a directory with `pip install -r requirements.txt` in my source repo, but everything I've read about making those virtualenvs portable (even moving them between directories on the same server you built them on) says it's a path fraught with peril.
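The peril is easy to demonstrate: a venv bakes absolute paths in at creation time. A minimal sketch, assuming `python3` is on the PATH (`/tmp/demo-venv` is just an example location; `--without-pip` only keeps the demo offline):

```shell
# Create a venv at a fixed path.
python3 -m venv --without-pip /tmp/demo-venv

# pyvenv.cfg hard-codes where the base interpreter lives:
cat /tmp/demo-venv/pyvenv.cfg

# Console scripts that pip installs into the venv also get the venv's
# absolute path in their shebang line, so a later
#   mv /tmp/demo-venv /srv/app
# quietly breaks every one of them.
```

That hard-coded path is why "just tar up the venv and move it" fails even on the same machine.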
The 'real' problem is that you shouldn't have to install dependencies in OS-level locations for an application-level product.
In other words, the app should be able to bundle dependencies without having to use a crazy opaque container system, and those dependencies should be easily auditable.
This is the case for Java, where dependencies are 1) bundled with the application, 2) declared explicitly, 3) signed, 4) centrally managed with Maven repository software.
In Python, you get similar things with the exception of "bundled with the application" and IIRC "signed" only happens when uploading to the Python Package Index.
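For what it's worth, the stdlib's `zipapp` gets you part of the way to "bundled with the application": vendor the dependencies into the source tree, then zip the whole thing into a single runnable file. A minimal sketch (`myapp/` and the `requests` dep are placeholder names; the pip step needs network access, so it's shown commented out):

```shell
# A trivial app whose entry point is __main__.py:
mkdir -p myapp
printf 'print("hello from a bundled app")\n' > myapp/__main__.py

# Vendor third-party deps next to the code (hypothetical example dep):
# python3 -m pip install --target myapp requests

# Zip the tree into one executable archive and run it:
python3 -m zipapp myapp -o myapp.pyz
python3 myapp.pyz
```

The resulting `.pyz` is a plain zip, so the vendored dependencies stay auditable with `unzip -l` — no opaque container layer required.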