## opengl – Reconstructing world-space position from depth

I am trying to reconstruct position from depth. I created a separate position map in view space, which I use as ground truth for my tests. But I cannot get the same result when I try to reconstruct the position from depth. This is how I am trying to do it:

```
#define getPosition(texCoord) texture(positionMap, texCoord).xyz

vec3 posFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(texCoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = inverse(projection) * clipSpacePosition;

    // perspective division
    viewSpacePosition /= viewSpacePosition.w;

    return viewSpacePosition.xyz;
}

void main() {
    float d = -(projection * vec4(getPosition(texCoord), 1.0)).z; // get depth
    fragColor = posFromDepth(d);
    // fragColor = getPosition(texCoord);
    fragColor = clamp(fragColor, 0.0, 1.0);
}
```
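For what it's worth, the algebra inside `posFromDepth` (inverse projection followed by a perspective divide) can be sanity-checked offline. Here is a small numpy sketch; the projection parameters and the test point are illustrative, not taken from the question:

```python
import numpy as np

def perspective(fovy, aspect, near, far):
    """Standard OpenGL perspective projection matrix (written row-major
    for numpy; equivalent to the usual column-major GLSL layout)."""
    f = 1.0 / np.tan(fovy / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

P = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)

# A view-space point in front of the camera (negative z in OpenGL).
view_pos = np.array([0.3, -0.2, -5.0, 1.0])

# Forward pass: project, then perspective-divide down to NDC in [-1, 1].
clip = P @ view_pos
ndc = clip[:3] / clip[3]

# Backward pass: the same steps as posFromDepth, starting from NDC.
clip_rec = np.append(ndc, 1.0)
view_rec = np.linalg.inv(P) @ clip_rec
view_rec /= view_rec[3]

assert np.allclose(view_rec[:3], view_pos[:3])  # round trip recovers the point
```

If this round trip holds, one thing worth double-checking in the shader is that the `d` fed into `posFromDepth` really is the [0, 1] depth-buffer value the function expects: `-(projection * vec4(pos, 1.0)).z` is a clip-space quantity taken before the perspective divide, which is not the same thing.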

Here is the correct position map:

But the reconstructed one looks like this:

What am I doing wrong?

## Depth of field effect?

Hi all,
I'm trying to create a modal that scales the background down and back up, based on Avgrund by Hakim: https://lab.hakim.se/avgrund/
I now have nested modals working, also with the help of @deathshadow on the programming forums.
Here is the current status:
https://jsfiddle.net/postcolonialboy/1jz0q3xk/19/
Now, what I'm trying to do is:
When clicking the nested modal link, everything behind it should recede and blur, simulating layers sliding back / depth of field.
I have …


## Data structures – Do height-balanced binary trees have logarithmic depth?

Let $$T_h$$ be any tree that satisfies your property and has height at least $$h$$. Let $$|T_h|$$ be its number of nodes.

For $$1 \le h \le c$$, $$|T_h| \ge 1$$.

For $$h > c \cdot i$$, with $$i \in \mathbb{N}^+$$, we have:
$$|T_h| \ge 1 + |T_{c \cdot i}| + |T_{c \cdot (i-1)}| \ge 2 |T'_{c \cdot (i-1)}|$$
where $$T_{c \cdot i}$$, $$T_{c \cdot (i-1)}$$ and $$T'_{c \cdot (i-1)}$$ are trees of height at least $$c \cdot i$$, $$c \cdot (i-1)$$ and $$c \cdot (i-1)$$, respectively, that also satisfy your property.

Let $$T$$ be such a tree of height $$H$$. From the observations above (choosing $$h = H$$) it follows that $$|T| \ge 2^{\lfloor H/c \rfloor} \ge 2^{H/c - 1}$$, or equivalently, $$H \le c \log |T| + c = O(\log |T|)$$.
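As a concrete sanity check of this bound (assuming, purely for illustration, that the property in question is AVL-style height balance, which instantiates the argument with c = 2), the minimal node count of a height-h tree satisfies a simple recurrence, and the bound H ≤ c·log|T| + c holds numerically:

```python
import math

def min_nodes(h):
    """Minimal node count of a height-h AVL tree (sibling subtree
    heights differ by at most 1); height counted in edges."""
    if h <= 1:
        return h + 1  # N(0) = 1, N(1) = 2
    return 1 + min_nodes(h - 1) + min_nodes(h - 2)

c = 2  # AVL trees instantiate the argument with c = 2
for h in range(25):
    n = min_nodes(h)
    assert n >= 2 ** (h // c)           # |T_h| >= 2^floor(h/c)
    assert h <= c * math.log2(n) + c    # hence H <= c*log|T| + c
```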

## Minimum depth of a leaf in the tree corresponding to a comparison-based sorting algorithm

The lower bound on the number of comparisons in a comparison-based sorting algorithm is $$\log_2(n!) = \Theta(n \log n)$$. However, an algorithm may sometimes need fewer comparisons.
If you take an already-sorted array, it takes $$n - 1 = O(n)$$ comparisons (with insertion sort) to sort it, comparing each pair of adjacent numbers. I do not understand, then, how $$\log_2(n!) = \Theta(n \log n)$$ can be the lower bound.

I also do not understand the decision trees corresponding to comparison-based sorting algorithms:
Each leaf in the tree corresponds to one of the permutations of the $$n$$ numbers. If the minimum number of comparisons needed is $$\log_2(n!)$$, then the depth of each leaf should be at least $$\log_2(n!)$$. However, I have seen that it is possible for a leaf to have depth $$O(n)$$.

Can there be leaves with depth even smaller than $$n - 1$$?
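The two numbers in the question can be compared directly. A small sketch: insertion sort on an already-sorted input performs exactly n − 1 comparisons, which corresponds to one shallow leaf of the decision tree, whereas ⌈log₂(n!)⌉ bounds only the depth of the deepest leaf (the worst case), so there is no contradiction:

```python
import math

def insertion_sort_comparisons(a):
    """Insertion sort; returns the number of element comparisons made."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break  # element is in place; stop scanning left
    return comparisons

n = 16
best_case = insertion_sort_comparisons(range(n))       # already-sorted input
lower_bound = math.ceil(math.log2(math.factorial(n)))  # worst-case bound

assert best_case == n - 1       # one decision-tree leaf has depth n - 1
assert best_case < lower_bound  # the log2(n!) bound is on the deepest leaf
```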

## Equipment protection: How can I protect the scanner glass without losing depth of field or introducing artifacts?

I have recently gotten into scanning, but I have not yet found a good way to protect the scanner glass from objects that might scratch or dirty it. I need to find a material that is optically neutral and thin.

I have tried acetate sheets (sold as document protectors), but they are only optically neutral for perfectly flat items, like documents. I have some transparent plastic document protectors, but they shrivel up and have undesirable reflective properties. I could use glass (I have a broken scanner), but unless the glass is incredibly thin, it will cut into my depth of field, which is important to me.

So what is the optimal material? It must be optically neutral, non-reflective, not prone to wrinkles, as thin as possible, and preferably inexpensive and/or reusable.

## Focus – How can I get a depth-of-field preview on my Canon M50?

Your camera is working as it should. "Liveview" is a term that describes the use of the rear LCD screen as a viewfinder.

In the old days, film cameras used some type of optical viewfinder.

When digital cameras came along, 35mm digital SLR cameras continued to use an optical viewfinder, while "point-and-shoot" and "mirrorless" digital cameras tended to use the rear LCD screen as a viewfinder.

When this feature was added to the DSLRs it was called "Live View".

"Exposure simulation" can show you on the rear LCD how the photo's exposure will look before the photo is taken. It can be turned on or off on some cameras, while other cameras always show a full-brightness Live View image by default. There can be advantages to both types of Live View display, so it is good to have the option to turn Exp Sim on or off. Your M50 has this option.

You need to learn about the exposure triangle. Aperture, shutter speed and ISO interact to give you the correct exposure. If you change one, you must also change one of the others to preserve the correct exposure.

In your video, the brightness of the photo does not change because you have auto ISO enabled, which automatically changes the sensor's sensitivity to compensate for the aperture changes you made.

If you want to see the effects of any change to shutter speed and aperture in your photo, you must shoot in Manual mode with manual control over aperture and shutter speed. You must also use manual ISO and set the ISO yourself. Then, if you have the Exposure Simulation option activated, you can see the brightness effects of changes you make to any of the three parts of the exposure triangle (aperture, shutter speed, ISO).
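The give-and-take in the exposure triangle can be sketched with simple stop arithmetic. The formula below is an illustrative simplification, not the camera's actual metering:

```python
import math

def exposure_value(aperture, shutter_s, iso):
    """ISO-adjusted exposure value: log2(N^2 / t) - log2(ISO / 100).

    Equal values mean equal image brightness; +1 means one stop darker.
    """
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

N = 5.6
base = exposure_value(N, 1 / 100, 100)

# Stopping down one stop (multiplying N by sqrt(2)) with the same
# shutter speed and ISO darkens the image by exactly one stop...
one_stop_down = exposure_value(N * math.sqrt(2), 1 / 100, 100)
assert math.isclose(one_stop_down - base, 1.0)

# ...which auto ISO silently compensates by doubling the sensitivity,
# so the brightness you see on screen never changes:
assert math.isclose(exposure_value(N * math.sqrt(2), 1 / 100, 200), base)
```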

The purple label in the background became sharper with a smaller aperture (higher f-number) because of depth of field. You can think of depth of field as the depth of the zone that is in focus. DOF increases with smaller apertures, so more of the background and foreground will be in focus as you stop down.

Some cameras have an option to enable "Depth of Field Preview". You should be able to enable the depth-of-field preview on your M50 using "Custom Function Settings" or "Assigning Functions to Buttons".

From the M50 user guide:

## Calculating a depth mask from varying lighting

I have a static object, a static camera, and a moving light source. How can a depth mask be calculated from this?

The idea is to calculate the height from the length of the cast shadow.
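Under the usual assumptions for this idea (flat ground plane, distant light at a known elevation angle), the basic relation is height = shadow length × tan(elevation). A minimal sketch; the function name and numbers are illustrative:

```python
import math

def height_from_shadow(shadow_length, light_elevation_deg):
    """Height of a vertical feature from the length of its cast shadow,
    assuming a flat ground plane and a distant (directional) light at
    the given elevation angle above the horizon."""
    return shadow_length * math.tan(math.radians(light_elevation_deg))

# A 10-unit shadow under a light 45 degrees above the horizon
# implies a feature 10 units tall.
assert math.isclose(height_from_shadow(10.0, 45.0), 10.0)
```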

## 2d – Bone depth in Unity

I am very new to game development and I am trying to rig a 2D character in Unity 2019 using the 2D Animation package and the IK manager.
So far, most of it is working well, but unfortunately I cannot get the bone depth to work properly for some reason.
No matter how I set the bone depth for my character, the wrong leg is always in the foreground.
Any useful advice or a solution to my problem?

Thank you very much in advance.

## Depth of field – Softness at the long end of a wide-angle zoom

I just received a copy of the Tamron AF 20-40mm f/2.7-3.5 SP Aspherical IF for my Sony a7 (used with a non-AF Sony A-to-E adapter), bought on eBay. I'm seeing what I find to be a surprising degree of softness at the long end of the lens. This is especially surprising to me, since reviews (see the link) praise this lens's sharpness. However, I have not used a wide-angle lens before, much less this specific lens, so I'm not sure whether this degree of softness is to be expected or indicates a problem with the lens. (It was listed in "excellent" condition with no optical problems on eBay.)

All of these shots are uncropped, manually focused, taken from approximately 25 feet (~7.5 m) from the fence, and handheld because I was too lazy to set up my tripod for three shots. In each case, the sharpest focus was just shy of infinity. (The lens has no focus marks between 10 ft / 3 m and infinity.) I shot these in raw, loaded them into ON1 Photo RAW and exported them as 90%-quality JPEGs without any editing. Photo RAW does not seem to have a profile for this lens, so no corrections appear to have been applied. (At least, I see no difference when toggling the lens-correction panel on and off.)

First, for reference, here is a photo at 20mm, 1/200 sec, f/10, ISO 160. At 50% zoom I can see some small differences in sharpness between the center and the edges, but that does not worry me. I consider this photo to have good sharpness for a mid-'90s lens.

Then 40mm, 1/200 sec, f/10, ISO 125. The center has good sharpness, but everything else is extremely soft, even in the small preview version. The right side is especially bad. This is the problem I am trying to understand.

Finally, just in case f/10 can be considered "wide open" on this lens, or in case there is something about depth of field with wide lenses that I don't know, I tried the smallest aperture on this lens: 40mm, 1/200 sec, f/32, ISO 1250. I think I can still see more sharpness falloff toward the edges compared to the 20mm shot, but it is somewhere between tolerable and good.

## Depth of field: is it beneficial to use a full frame for a blurred background?

From the depth-of-field and background-defocus equations, we can derive the following approximate relations for a background at infinity:

```
b = f^2 / (x_d * N)
DoF = 2 * x_d^2 * N * C / f^2
```

where `b` is the blur disc size, `DoF` is the depth of field, `f` is the focal length, `x_d` is the subject distance, `N` is the aperture f-number and `C` is the circle of confusion.

Immediately we notice that `b` can be expressed in terms of `DoF` and vice versa:

```
b = 2 * x_d * C / DoF
DoF = 2 * x_d * C / b
```

Now, since we are after a large background blur `b`, we must consider how it varies with sensor size. `x_d`, the subject distance, is the same for an equivalent image. `DoF` is also the same for an equivalent image, as long as you can achieve the `DoF` you are looking for on both the full-frame and the crop body. What differs is the circle of confusion `C`: it is divided by the crop factor on a crop sensor. Therefore, the background blur is also divided by the crop factor on a crop sensor. However, the sensor size is divided by the crop factor as well.

So, in conclusion, the amount of background blur as a fraction of the sensor size stays the same (both the sensor size and the blur disc size are divided by the crop factor). Therefore, full-frame and crop-sensor bodies produce an equivalent amount of background blur, as long as you can achieve the `DoF` you are looking for.

We must therefore examine whether it is possible to achieve the desired `DoF` on a crop-sensor body. Mounting an f/1.8 lens (such as the Canon "nifty fifty", the EF 50mm f/1.8 STM), it is possible to get a depth of field of just a few centimeters, as this example shows:

Clearly, we can see in the image that the depth of field is too shallow (image information: 50mm focal length, 1000mm subject distance, f/1.8, crop-sensor body, giving a DoF of 27mm). So it is generally possible to achieve a sufficiently shallow DoF on a crop-sensor body.
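The quoted numbers can be reproduced from the approximate formulas above, taking C ≈ 0.019 mm as an assumed circle of confusion for APS-C:

```python
def dof(f, x_d, N, C):
    """Approximate depth of field; all lengths in the same unit (mm here)."""
    return 2 * x_d**2 * N * C / f**2

def blur(f, x_d, N):
    """Approximate background blur disc size for a background at infinity."""
    return f**2 / (x_d * N)

# The example in the text: 50 mm at f/1.8, subject at 1000 mm, crop body.
# C = 0.019 mm is an assumed circle of confusion for APS-C.
d = dof(f=50, x_d=1000, N=1.8, C=0.019)
assert round(d) == 27  # matches the quoted DoF of 27 mm

# Cross-check the rearranged relation b = 2 * x_d * C / DoF.
b = blur(f=50, x_d=1000, N=1.8)
assert abs(b - 2 * 1000 * 0.019 / d) < 1e-9
```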

Now, I must admit that it is easier to achieve an extremely shallow DoF on a full-frame camera, but that is not usually what the photographer wants, as the cactus image shows. In addition, full-frame cameras require a longer focal length for the same framing, and maintaining the same f-number as focal length increases is not easy, so it is debatable whether it is really easier to achieve an extremely shallow DoF on a full-frame camera.