## continuous integration – Clarifying the steps in a CI/CD pipeline, namely whether unit testing should be done while building a Docker image or before

I’m building a Build and Deployment pipeline and looking for clarification on a couple of points. I’m also trying to implement Trunk Based Development with short-lived branches.

The process I have thus far:

1. Local development is done on the `main` branch.

2. Developer, before pushing to remote, rebases on remote `main` branch.

3. Developer pushes to short-lived branch: `git push origin main:short_lived_branch`.

4. Developer opens PR to merge `short_lived_branch` into `main`.

5. When the PR is submitted, it triggers the `PR` pipeline, which has the following stages:

   1. Builds the microservice.
   2. Unit tests the microservice.
   3. If passing, builds the Docker image with a `test-latest` tag and pushes it to the container registry.
   4. Integration testing with other microservices (still need to figure this out).
   5. Cross-browser testing (still need to figure this out).

6. If the `PR` pipeline is successful, the PR is approved, and the commits are squashed and merged into `main`.

7. The merge to `main` triggers the `Deployment` pipeline, which has the following stages:

   1. Builds the microservice.
   2. Unit tests the microservice.
   3. If passing, builds the Docker image with a `release-<version>` tag and pushes it to the container registry.
   4. Integration testing with other microservices (still need to figure this out).
   5. Cross-browser testing (still need to figure this out).
   6. If passing, deploys the images to the Kubernetes cluster.

I still have a ton of research to do on the integration and cross-browser testing, as it isn’t quite clear to me how to implement it.

That being said, my questions thus far really have to do with the overall process, unit testing, and building the Docker image:

1. Does this flow make sense or should it be changed in anyway?

2. Regarding unit testing and building the Docker image, I’ve read some articles that suggest running the unit tests during the Docker image build, basically eliminating the first two stages in my `PR` and `Deployment` pipelines. Some reasons given:

• Testing outside the image tests the code, not the containerized code, which is what will actually run.
• Even if unit testing passes, the image could be broken, and it will take even longer to find out.
• Building on that, it increases the overall build and deployment time. From my experience, the first two stages in my pipelines for a specific service take about a minute and a half. Building and pushing the image then takes another two and a half minutes, so about four minutes overall. If the unit tests were incorporated into the Docker build, it could shave a minute or more off the first three stages of my pipeline.

Would it be bad practice to eliminate the code build and unit testing stages and just move unit testing into the Docker build stage?
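One common way to do this is a multi-stage Dockerfile, where a dedicated stage runs the unit tests and the release stage reuses the same build layers. Here is a rough sketch, assuming a Python service; the stage names, paths, and module name are placeholders:

```dockerfile
# Shared build stage: install dependencies and copy the source once.
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Test stage: the build fails here if any unit test fails.
FROM build AS test
RUN pip install pytest && pytest tests/

# Release stage: same layers as build, no test tooling included.
FROM build AS release
CMD ["python", "-m", "myservice"]
```

Note that with BuildKit, `docker build --target release` skips stages the target does not depend on, so the pipeline would build the test target explicitly first (`docker build --target test .`) and then build the release target; the second build reuses the cached layers, so the overlap is cheap.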

Thanks for weighing in on this while I’m sorting it out.

## family – Visit visa refused for not clarifying the source of funds

My mother applied for a visit visa, which was refused because the bank statement doesn’t show the source of the funds. These funds are available to her in her savings account; she received the money from her pension after retiring 16 years ago, and we are not able to obtain proof of this.

## clarifying understanding of expectation of the absolute value of a random variable

Given pdf $f(x)$, is $$\mathbb{E}(|X|)=\sum_{-\infty}^{\infty}|x|f(|x|)$$? Why would it not be $$\sum_{-\infty}^{\infty}xf(|x|)$$ or $$\sum_{-\infty}^{\infty}|x||f(x)|$$?
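For context, the definition I am working from (the law of the unconscious statistician) for a continuous $X$ with density $f$ and a function $g$ is:

```latex
\mathbb{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\, f(x)\, dx,
\qquad\text{so with } g(x) = |x|:\qquad
\mathbb{E}(|X|) = \int_{-\infty}^{\infty} |x|\, f(x)\, dx
```

I am unsure where the absolute value should appear relative to $f$, and whether a sum or an integral is appropriate here.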

## json rpc – Clarifying descendants/ancestors in error cases for too-long-mempool-chain

In practice I’ve seen two instances of the `too-long-mempool-chain` error when trying to send a transaction:

When there are too many unconfirmed transactions chained together:

```
"too many descendants for tx <txid> [limit: 25]"
```

When the chain of unconfirmed transactions is too big in size:

```
"exceeds descendant size limit for tx <txid> [limit: 101000]"
```

In the code I also see two more error cases that look similar, but I can’t grasp what the difference is in these:

```
"exceeds ancestor size limit [limit: %u]"
```

and

```
"too many unconfirmed ancestors [limit: %u]"
```

The names “ancestor” and “descendant” seem backwards given that we use the reverse terminology in something like “child pays for parent”. Am I thinking about this correctly? Also, when would the latter 2 error messages get triggered?
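For reference, if I am reading the source correctly, these four errors correspond to Bitcoin Core’s mempool policy options, shown here with their defaults as they would appear in `bitcoin.conf` (the size limits are in kilo-vbytes, which matches the `101000` above):

```
limitancestorcount=25
limitancestorsize=101
limitdescendantcount=25
limitdescendantsize=101
```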

## Search Engine – Clarifying a Previous Post and Request for David Anderson and Steve Chambers

To: Steve Chambers and David Anderson,
Below is the process I used that led me to step 6. Everything worked wonderfully. I restarted my Mac Mini and held down the option key. Mac HD and EFI HD appeared. I clicked on the EFI. The Windows logo or badge appeared (the little boxes), and everything seemed fine for about a minute or so, and then the screen turned black. The Mac Mini stayed on, but the screen was blank. When I talked about "Pathways of the search engine", what I meant, and should have said, was the new process David devised to avoid the need for a flash drive and load the Windows Support folder and the Windows ISO on the 16GB partition, inside the WINSTALL volume.

(((Download the latest Windows 10 ISO file (disk image) from the Microsoft website. Currently that would be the 1909 update (September 2019).

When finished, exit Boot Camp Assistant.
Plug in the external drive. Open the Disk Utility application. In the drop-down menu in the upper left corner of the Disk Utility app, make sure Hide Sidebar is unchecked and Show All Devices is checked, as shown below.

Highlight the external drive and select the Erase button. Enter the following into the popup, then click the Erase button.

When finished erasing, click the Done button. With the external drive still highlighted, click the Partition button. Make the following changes in the order listed below.

Note: The size entered below must be large enough to create a volume that can hold Windows ISO files and Windows support software files. The 16GB value should provide more than enough space. However, a smaller value can be substituted.
Click on the + button.
Enter a size of 16 GB.
Enter the name WINSTALL.
Select the ExFAT format.

The result should appear as shown below.

Click the Apply, Partition and Done buttons in the order given. When done, exit Disk Utility.
Using the Finder application, mount the Windows 10 ISO file and copy the content to the WINSTALL volume. Then copy the contents of the WindowsSupport folder to the WINSTALL volume. In your case, the result should appear as shown below.

Restart the Mac and immediately hold down the option key until the Startup Manager icons appear. Start from the external drive by selecting the icon for the external drive labeled EFI Boot.
When a window similar to the one shown below appears, press Shift + F10.)))

I really appreciate the opportunity to chat with all of you.

Thanks for all that you do!

Sincerely,

Andrew Wilis

## web development – clarifying misconceptions about the Flask backend and client-side rendering

I am building a website and, along the way, have run into many things that I did not know. I am hoping to get some help understanding some of them.

I started building the website using Flask and the Jinja template language. This was very intuitive and easy to understand: the client makes a request, the server checks that the request is good, and then delivers the entire rendered page at once.

Then I wanted to update my interface to use ReactJS. From my research, I found that I could still go the server-side rendering route if I used a library like `python react`, but I opted for client-side rendering instead. This meant a couple of things:

• I will need a frontend server (for example, node.js) to render the frontend
• I'll have to refactor my Flask backend to behave like an API

Then, when a client makes a request for, for example, the home page, the process will look similar to the following (if I understand it correctly):

### Assuming I am right up to this point, my question is, what does the previous diagram look like in the case of submitting a form?

A form requires that a CSRF token be embedded when it is rendered; that is, the token is created on the frontend server. But then, how does the backend verify that token? Does all authentication need to happen on the frontend server? I’m not sure how the frontend and backend fit together.