continuous integration – Clarifying the steps in a CI/CD pipeline, namely whether unit testing should be done while building a Docker image or before

I’m building a Build and Deployment pipeline and am looking for clarification on a couple of points. In addition, I’m trying to implement Trunk-Based Development with short-lived branches.

The process I have thus far:

  1. Local development is done on the main branch.

  2. Developer, before pushing to remote, rebases on remote main branch.

  3. Developer pushes to short-lived branch: git push origin main:short_lived_branch.

  4. Developer opens PR to merge short_lived_branch into main.

  5. When the PR is submitted, it triggers the PR pipeline, which has the following stages:

    1. Builds the microservice.
    2. Unit tests the microservice.
    3. If passing, builds the Docker image with a test-latest tag and pushes it to the container registry.
    4. Integration testing with other microservices (still need to figure this out).
    5. Cross-browser testing (still need to figure this out).
  6. If the PR pipeline is successful, the PR is approved, and the commits are squashed and merged into main.

  7. The merge to main triggers the Deployment pipeline, which has the following stages:

    1. Builds the microservice.
    2. Unit tests the microservice.
    3. If passing, builds the Docker image with a release-<version> tag and pushes it to the container registry.
    4. Integration testing with other microservices (still need to figure this out).
    5. Cross-browser testing (still need to figure this out).
    6. If passing, deploy the images to Kubernetes cluster.
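The developer-side flow (steps 1–3 above) can be sketched with git commands. The sketch below uses throwaway local repositories so it is runnable anywhere; the remote and branch names are placeholders:

```shell
set -e
# Throwaway "origin" standing in for the real remote
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email dev@example.com && git config user.name Dev
echo base > app.txt && git add app.txt && git commit -qm "base"
git branch -M main
git push -qu origin main                                 # remote main now exists
echo change >> app.txt && git commit -qam "my change"    # step 1: develop on main
git fetch -q origin && git rebase -q origin/main         # step 2: rebase on remote main
git push -q origin main:short_lived_branch               # step 3: push to short-lived branch
git ls-remote --heads origin                             # shows main and short_lived_branch
```

From here the PR (step 4) is opened in the hosting platform to merge short_lived_branch into main.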

I still have a ton of research to do on the integration and cross-browser testing, as it isn’t quite clear to me how to implement it.

That being said, my questions thus far really have to do with the overall process, unit testing, and building the Docker image:

  1. Does this flow make sense, or should it be changed in any way?

  2. Regarding unit testing and building the Docker image, I’ve read some articles that suggest running the unit tests while building the Docker image, basically eliminating the first two stages in my PR and Deployment pipelines. Some reasons given:

    • With separate stages, you are testing the raw code and not the containerized code, which is what will actually run.
    • Even if unit testing passes, the image could be broken, and it will be even longer before you find out.
    • Building on that, it increases the overall build and deployment time. In my experience, the first two stages in my pipelines for a specific service take about a minute and a half. Then building and pushing the image takes another two and a half minutes, so about four minutes overall. If the unit tests were incorporated into the Docker build, it could shave a minute or more off the first three stages of my pipeline.

    Would it be bad practice to eliminate the code build and unit testing stages and just move unit testing into the Docker build stage?
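For reference, the pattern those articles usually describe is a multi-stage build with a dedicated test stage, so that a failing unit test fails `docker build` itself. A minimal sketch, assuming a hypothetical Node.js service (base images, paths, and npm scripts are placeholders for your stack):

```dockerfile
# Hypothetical multi-stage Dockerfile: unit tests run inside the image build,
# collapsing the separate build and unit-test stages into the docker build step.
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Test stage: reuses the build layers; a failing test fails the whole build
FROM build AS test
RUN npm test

# Final stage: ships only the built artifact, not the test tooling
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

One caveat: with BuildKit enabled, a stage nothing depends on is skipped, so CI would run `docker build --target test .` first and then build the final image; since the test stage's layers are cached, the second build stays fast.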

Thanks for weighing in on this while I’m sorting it out.

The "real and effective" GIT CI/CD strategy

I started at this new company a few weeks ago; this is the CTO’s CI/CD strategy:

[Diagram: current git strategy]

Current: the developer team has the repo prod/master and merges everything into master (no branching strategy).
Once the code is ready in prod/master, they ask the Infrastructure team to start the deployment process, which uses Jenkins.

The Infrastructure team executes a job in Jenkins that performs these actions:

  1. Clone the whole prod/master into build/master (so they don’t mess with the developers)
  2. Execute scripts to build the binary(ies)
  3. Generate a .txt file with the version of the build
  4. Commit and push these changes to build/master (reason: prepare the deployment)
  5. Apply environment-specific settings and push configurations, binaries, and code to distro/master

We end up with three repos for each application at the end of the day;
that means if we have 10 applications, we have 30 repositories.

The CTO’s reasons for this:

  1. prod/master: for developers and their code (no branching, only master)
  2. build/master: for the Infra team to generate versions (to prepare the deployment)
  3. distro/master: binaries + code + environment-specific configurations (for rollbacks, traceability, and to have a backup)


Downsides I see:

  • Really complex process
  • Unnecessarily large amounts of data in the repositories, and slower processing when performing deployments
  • Only works for filesystem deployments (databases are not considered in this scenario, and those kinds of changes are performed manually)
  • No instant feedback for developers
  • Complexity when patches/fixes and deployments cross
  • Developers are involved in the production deployment (quite often, in order to test and apply changes live)
  • Most of the deployments are performed directly in production


Upsides:

  • There’s a backup and the possibility to roll back
  • Easy traceability (for rollbacks, not for development)
  • Specific configurations per environment are stored in the repos with the code and binaries

And this is my approach:


  1. Developers create a JIRA ticket, which will be used as a tag for the build and to create the branch
  2. Developers will deploy and test in a QA/pre-prod environment
  3. Once the code works, it will be integrated into master
  4. Once integrated with master, the binary goes to a binary repo (Artifactory or similar)
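The tag-based traceability in step 1 can be sketched in a throwaway repo (the ticket id and paths are placeholders; the actual binary upload/download against Artifactory would be a curl or jfrog CLI step keyed by the same tag):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/app" && cd "$tmp/app"
git config user.email dev@example.com && git config user.name Dev
echo v1 > app.txt && git add app.txt && git commit -qm "JIRA-123: add feature"
# Tag the build with the ticket id so the deployed code is findable later
git tag -a JIRA-123 -m "build for JIRA-123"
# Traceability: resolve exactly which commit was deployed for that ticket
git rev-parse "JIRA-123^{commit}"
# Rollback side: the binary for this tag would be fetched from Artifactory, e.g.
#   curl -fO "$ARTIFACTORY_URL/libs-release/app/JIRA-123/app.jar"   # hypothetical URL
```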


Advantages:

  1. Traceability: the code deployed is easy to find through the tag (JIRA-XXX) for a specific build.
  2. Rollback: take the binary from the repo (Artifactory).
  3. One repository per project; 10 projects means 10 repos, not 30.
  4. Instant feedback to developers: if the deployment is not successful, they can change their code.
  5. This design handles DB scripts as hooks.
  6. Configurations per environment will be handled with Ansible + Git, generating templates with placeholders and a backup of each configuration.


What it requires:

  • Re-educating developers to work in branches
  • Forcing developers to integrate code only when it really works
  • Changing the CTO’s mindset, which will only happen through examples (working on it)
  • New infrastructure (new environments for deployments, instead of going to production directly)
  • Lots of hours automating through hooks and REST APIs
  • Implementing new technologies


Disadvantages:

  • Well, many of them.

I’d like to hear the opinions of people with expertise in these Git strategies.



git – Best GitHub branching strategy / CI/CD plan for Kafka platform development

I’m working on a Kafka platform development project. The team keeps all the code in master with no branches, then promotes it to QA, Stg, and Prod. If, between these stages, there are changes that shouldn’t move to the next stage, they revert all the unwanted changes, deploy the code, and then put the unwanted changes back into master. It’s hard to go over every commit and manually check what to revert.

Is there another way to avoid these reverts, by creating branches or some other strategy (a new CI/CD structure plan)?
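One common alternative to the revert dance is to keep master untouched and cherry-pick only the ready commits onto a per-environment release branch. A minimal sketch in a throwaway repo (repo, branch, and file names are placeholders):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/kafka-platform" && cd "$tmp/kafka-platform"
git config user.email dev@example.com && git config user.name Dev
echo base > base.txt && git add base.txt && git commit -qm "base"
git branch -M master
git branch release/qa                      # branch off the last released state
echo wanted > wanted.txt && git add wanted.txt && git commit -qm "ready change"
WANTED=$(git rev-parse HEAD)
echo unwanted > unwanted.txt && git add unwanted.txt && git commit -qm "not ready yet"
# Promote only the ready commit; master keeps everything, no reverts needed
git checkout -q release/qa
git cherry-pick -x "$WANTED"
ls    # base.txt and wanted.txt, but no unwanted.txt
```

The unready work never leaves master, so nothing has to be reverted and re-applied around a deployment.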

versioning – The meaning of “Fix Version” field in Jira, while working in agile, microservices and CI/CD processes

When working on a monolith application in a “waterfall” process, a “Fix Version” field makes a lot of sense.

Say you have the “November 2020” planned release, you can add/plan dozens of relevant bugs and features to that release. Other bugs and features may be planned for the next release of “December 2020” and so on. So the “Fix version” field can have the same value for multiple issues/tickets.

Moving to a microservices architecture, the above process is significantly changed – Each microservice has its own release cycles that are completely independent of other MS. If a development team is responsible for 10 different MS, theoretically, each one of them can and should be released without any coupling to the others.

When we add on top of that a CI/CD process, things get even more volatile:

When each and every commit to the master results in a full automation tests suite and potential (or de facto) version that can be deployed to staging/production, then every commit has its own “Fix Version”.

Taking it to the Jira world (or any other issue tracking system) the meaning of that is that each and every ticket/issue may have its own “Fix Version”. No more a common value that is shared across many tickets, but a disposable “Fix Version” value that will be used once.

Moreover, when an issue is created, you have no way to know in which build it will end up, as other tasks may finish before or after it.

In Jira, creating a “Fix Version” is manual, making the process of updating a ticket’s “Fix Version” tedious and error-prone.
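For what it’s worth, the manual part can be scripted against Jira’s REST API from the pipeline itself. A hedged sketch as a GitLab CI job (the endpoints are Jira’s standard v2 REST API; `JIRA_URL`, `JIRA_AUTH`, and `JIRA_PROJECT_ID` are assumed CI secret variables, and the issue key is assumed to appear in the commit message):

```yaml
# Hypothetical CI job: create a per-build "Fix Version" and attach it to the
# issue referenced in the commit message.
tag-fix-version:
  stage: deploy
  script:
    - ISSUE=$(git log -1 --pretty=%s | grep -oE '[A-Z]+-[0-9]+' | head -1)
    - VERSION="build-$CI_PIPELINE_IID"
    # Create the disposable version in the project
    - >
      curl -s -u "$JIRA_AUTH" -H "Content-Type: application/json"
      -X POST "$JIRA_URL/rest/api/2/version"
      -d "{\"name\":\"$VERSION\",\"projectId\":$JIRA_PROJECT_ID}"
    # Attach it to the ticket's Fix Version field
    - >
      curl -s -u "$JIRA_AUTH" -H "Content-Type: application/json"
      -X PUT "$JIRA_URL/rest/api/2/issue/$ISSUE"
      -d "{\"update\":{\"fixVersions\":[{\"add\":{\"name\":\"$VERSION\"}}]}}"
```

This doesn’t answer whether a per-commit “Fix Version” is meaningful, but it removes the tedious manual updating.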

So my questions are:

  • Are the above assumptions correct?
  • Is there a meaning to a “Fix Version” field in an Agile + Microservices + CI/CD environment?
  • How do you handle the “Fix version” challenge?
  • How do you do it in Jira?

Store OAuth 2.0 tokens for use in testing and CI/CD

I have a web application where users must authenticate with a 3rd-party OAuth 2.0 service in order to do what they need to do in the app. On initial registration/login, they will connect with the service and my backend will get their access tokens and refresh tokens and keep those fresh for the duration of their user existing in our system (unless they are revoked).

Since the majority of my application needs a valid OAuth 2.0 token with this service to function properly, I also need a valid token in my tests (both locally and in my CI/CD, currently on AWS).

What I am looking for is a “proper” way to both get and store these tokens non-interactively (since during testing, the user would otherwise have to open the browser, connect to the service, then be redirected back to my app). I see a few ways to do this; however, I do not see anything off-the-shelf that I could use immediately. I am looking for options if people are aware of any.

Possible “ways” of doing this that I can see:

  1. Use something like Selenium to automate the login and OAuth 2.0 connection flow with a real browser as a fixture that will be used by other tests in the test suite. Not sure how well this would work on something like AWS CodeBuild without using a headless browser.
  2. Build a new HTTPS application, hosted in AWS or elsewhere, where you can configure OAuth 2.0 connections with a sandbox server on the 3rd-party service. That server, much like my actual app, would keep the tokens fresh using background tasks. The idea would then be to provide ANOTHER set of credentials to this server, which I could use in my local tests and in CI/CD to fetch the OAuth 2.0 access token for a specific connected user, set up once manually in this new application.
  3. Some way of interactively connecting to the OAuth 2.0 server as part of the local test suite (I am using Python, Flask, and PyTest). I am not sure how/if this could work on AWS.
  4. Finally, take the most recent tokens I have from a local version of my app, and always be changing them out in the environment variables of both the AWS CI/CD and my local tests. Extremely non-optimal.
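A middle ground between options 2 and 4 is to store only the long-lived refresh token in CI secrets and exchange it for a fresh access token once per test session. A sketch in Python, assuming `TOKEN_URL`, `CLIENT_ID`, `CLIENT_SECRET`, and `OAUTH_REFRESH_TOKEN` environment variables (all hypothetical names); the request shape is the standard OAuth 2.0 refresh_token grant (RFC 6749, section 6):

```python
import json
import os
import urllib.parse
import urllib.request


def build_refresh_request(token_url, client_id, client_secret, refresh_token):
    """Build a form-encoded POST for the standard refresh_token grant."""
    data = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return urllib.request.Request(token_url, data=data)


def get_access_token():
    """Exchange the stored refresh token for a short-lived access token.

    In a test suite, wrap this in a session-scoped pytest fixture so the
    exchange happens once per run, not once per test.
    """
    req = build_refresh_request(
        os.environ["TOKEN_URL"],
        os.environ["CLIENT_ID"],
        os.environ["CLIENT_SECRET"],
        os.environ["OAUTH_REFRESH_TOKEN"],
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]
```

This only works for as long as the provider keeps the refresh token valid, so option 1 (a real browser flow) may still be needed occasionally to re-seed the secret.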

I thought that something would already exist to do this, given how often 3rd-party services now require an interactive OAuth 2.0 flow rather than providing a key and secret as in OAuth 1.0, where no interaction is required.

google cloud platform – securing enterprise project source code at the CI/CD stage

I’m looking for a way to involve security testing in the software development lifecycle, including the CI/CD process.
For example, GitLab provides a SAST configuration to do that.

When a developer commits a piece of code to GCP,
I’m wondering if I could run a source code vulnerability scan automatically.
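Assuming the repository is wired to Cloud Build (a Cloud Build trigger fires on each push), one hedged sketch is a scan step at the top of cloudbuild.yaml; Semgrep is just one example scanner here, and the image and tag names are assumptions:

```yaml
# Hypothetical cloudbuild.yaml: a Cloud Build trigger runs this on every push.
steps:
  # SAST step first: a non-zero exit (i.e. findings) fails the build
  # before anything is built or deployed
  - name: 'returntocorp/semgrep'
    entrypoint: 'semgrep'
    args: ['--config', 'auto', '--error', '.']
  # Normal build continues only if the scan passed
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
```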

Thank you in advance.