Hello,
I’m frustrated that the CI cannot run all the tests, because the OpenStack control plane (at OVH or elsewhere for that matter; the same goes for GCP or AWS) is, by design, asynchronous and prone to failure. That works well in a production environment, but it is too noisy for a CI: it is difficult to tell whether an error comes from OpenStack or from the Ansible / Python code under test.
About a year ago, discussions with @pilou convinced me it would be a good idea to be able to run Enough on a single Docker machine. It would be useful in general for people who do not want to rely on the cloud and prefer bare metal (@dam). And it would make it possible to run almost all tests in the CI, because docker-compose up, docker run, and docker build fail rarely enough that they won’t confuse the developer.
The work done on Docker has suffered regressions over the past six months because of:
- backup restoration (the backup strategy is 100% OpenStack-specific)
- upgrade tests (not sure how difficult it would be to run them on Docker)
- the VPN implementation (tox -e openvpn runs a VPN client in a Docker container, but I’m not sure what it would mean for a VPN server to run on Docker; see the sketch after this list)
- every VM now has two interfaces (one public, one private), so the Docker container would need to be attached to two networks instead of one (also sketched below)
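As a rough illustration of what the last two items involve, here is a minimal sketch of attaching a container to two networks and granting it what a VPN process would need. All names and images below (enough-public, enough-private, fake-vm, vpn-server, debian:bullseye) are illustrative stand-ins, not what Enough actually uses:

```
# two user-defined networks, mirroring the public/private interfaces
docker network create enough-public
docker network create enough-private

# docker run only accepts a single --network; the second network has
# to be attached afterwards with "docker network connect"
docker run --detach --name fake-vm --network enough-public \
    debian:bullseye sleep infinity
docker network connect enough-private fake-vm

# a VPN server (or client) in a container additionally needs the TUN
# device and the NET_ADMIN capability; the image here is only a
# placeholder for a real OpenVPN image
docker run --detach --name vpn-server \
    --cap-add NET_ADMIN --device /dev/net/tun \
    --network enough-private \
    debian:bullseye sleep infinity
```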
On the plus side:
- the tests now run from a dedicated Docker container that could be attached to a Docker network reserved for testing, so there would be no need to worry about publishing ports that are already in use on the host (see the sketch after this list)
- the ownca certificate authority is now well tested, so exposing all web services via SSL is no longer a concern
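To make the first point concrete, here is a minimal sketch of such a test network; enough-test, webservice, and the stock images are assumptions for illustration, not the actual test harness:

```
# a network reserved for testing; nothing is published on the host,
# so ports already in use on the host cannot collide
docker network create enough-test
docker run --detach --name webservice --network enough-test nginx

# the test container resolves "webservice" through Docker's embedded
# DNS and reaches it directly on the test network
docker run --rm --network enough-test alpine wget -qO- http://webservice/
```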
In conclusion, I wonder whether it would be worth resuming the work to run Enough on Docker. I got used to manually running tox -e wekan (and coping with environmental errors) before merging, but that may be too much to ask of new contributors (@nqb).
To be continued