## focus: let's see if my vision of the moon illusion is correct

The true cause of the moon illusion?

As shown in the figure, the blue line is the lens, w is the height of the object, x is the height of the image, v is the image distance, u is the object distance, and f is the focal length. The red line is the path of the light.
The relationship between u, v, f is

1 / u + 1 / v = 1 / f

and so

f = uv / (v + u) (1)

The observer's eye is unchanged, so v is fixed. For a given object, the distance between the observer and the object fixes u as well, and once v and u are fixed, equation (1) fixes f. In particular, with v held fixed, f = uv / (v + u) decreases as u decreases.

x / w = (v - f) / f = v / f - 1

and so

x = w (v / f - 1) (2)

According to formula (2), if v and w are fixed, x will increase when f decreases.

When the observer looks at the moon on the horizon, because of foreground mountains and trees, the eye's focal length f is smaller than when looking at the moon at zenith. By formula (2), when f decreases, x increases, so the observer perceives the moon on the horizon as larger and closer than the moon at zenith.

I think this is the reason for the moon illusion.

Simple calculation

When looking at nearby trees with your eyes:

u = 200 m (assuming 200 m from the tree)

v = 0.024 m (diameter of the eyeball, assumed length of the image)

w = 10 m (assuming the tree is 10 m high)

f = uv / (v + u)
= 0.0239971 m

x = w (v / f - 1)
= 0.0012 m = 1.2 mm (height of the tree's image)

When looking at the zenith moon (without the influence of the trees on the ground) with the eyes:

u = 380000000 m (distance from the observer to the moon)

v = 0.024 m

w = 3476000 m (diameter of the moon)

f = uv / (v + u) = A (we call this focal length A)

x = w (v / f - 1)
= 0.000219537 m = 0.219537 mm

Toward the horizon, if you look at the moon while the eye keeps the focal length used for observing the tree:

f = 0.0239971 m

x = w (v / f - 1)
= 420.067 m

The image of the moon at zenith is 0.219537 mm, while the image of the moon on the horizon is 420.067 m: a huge difference between the two. So using a focal length shorter than A "magnifies" the moon.
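The three calculations above can be reproduced numerically; here is a minimal Python sketch using formulas (1) and (2) and the same assumed distances (note that carrying full precision in f gives roughly 417 m for the horizon case; the 420 m figure comes from using the rounded focal length):

```python
def focal_length(u, v):
    """Thin-lens equation 1/u + 1/v = 1/f, solved for f (formula (1))."""
    return u * v / (u + v)

def image_height(w, v, f):
    """Magnification relation x = w * (v/f - 1) (formula (2))."""
    return w * (v / f - 1)

v = 0.024  # image distance, taken as the eyeball diameter (m)

# Tree: 10 m tall, 200 m away
f_tree = focal_length(200, v)             # ~0.0239971 m
x_tree = image_height(10, v, f_tree)      # 0.0012 m = 1.2 mm

# Moon at zenith: ~3.8e8 m away, ~3.476e6 m in diameter
f_moon = focal_length(3.8e8, v)           # the focal length called A above
x_moon = image_height(3.476e6, v, f_moon) # ~0.000219537 m

# Moon viewed with the eye at the tree's focal length
x_horizon = image_height(3.476e6, v, f_tree)  # ~417 m at full precision
```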

Of course, the eye does not generally observe the moon at a focal length of 0.0239971 m, because the image may not be sharp. If the moon's image is blurry at that focal length, the eye will adjust to a focal length at which the image is clear. Because the moon is so far away, the depth of field of its image is very large, so there exists a focal length that is smaller than A yet still produces a sharp image. The moon illusion is therefore caused by a relatively short focal length. I think that is why the moon illusion occurs.

reference

https://en.wikipedia.org/wiki/Moon_illusion

## Computer vision: what is the best algorithm to detect the texture of a person's clothes in real time for a mobile robot?


## computer vision: using a rotation matrix to transform / change a pinhole camera

I have a pinhole camera model with the following extrinsic (in Earth-Centered, Earth-Fixed (ECEF) coordinates) and intrinsic parameters.

Focal length (x, y) = 55000 px, optical center = (2400,540)

camera center (x, y, z) (ECEF) = (-2322996.2171387854, -3875494.0767072071, 5183320.6008059494)

I need to change the camera so that it points to the correct position on the ground based on an ECEF (4 X 4) transformation matrix, which looks like this:

((0.99999922456661872, 0.00043965959331068635, -0.0011651461883787318, 7033.5303197340108),
(-0.00044011741039666426, 0.99999982604190574, -0.00039269946235032278, 814.02427618065849),
(0.0011649733316053631, 0.00039321195895935108, 0.99999924411047925, 4139.9400998316705), (0, 0, 0, 1))

The 3 x 3 portion formed by the first three rows and columns is the rotation component, and the first three values in the last column are the translation component. My general understanding is that I need to add the translation component to the camera center's coordinates, and multiply the camera-to-ground rotation matrix by the rotation component. Is this enough, or would I need to do something extra?
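A hedged numpy sketch of applying the 4x4 transform, using the values from the question. Note that a camera centre is a point, so under a rigid transform it gets both the rotation and the translation, not just the translation; how the camera's own rotation composes depends on the convention (world-to-camera vs. camera-to-world, and which frame T maps between), so that part is only indicated in comments:

```python
import numpy as np

# The ECEF transform from the question (4x4, rows as given)
T = np.array([
    [ 0.99999922456661872,   0.00043965959331068635, -0.0011651461883787318,  7033.5303197340108],
    [-0.00044011741039666426, 0.99999982604190574,   -0.00039269946235032278, 814.02427618065849],
    [ 0.0011649733316053631,  0.00039321195895935108,  0.99999924411047925,   4139.9400998316705],
    [ 0.0, 0.0, 0.0, 1.0],
])

# Camera centre from the question (ECEF)
C = np.array([-2322996.2171387854, -3875494.0767072071, 5183320.6008059494])

R_t, t = T[:3, :3], T[:3, 3]

# The camera centre is a point, so it is rotated AND translated:
C_new = R_t @ C + t

# The camera's orientation composes only with the rotation part.  With a
# world-to-camera rotation R_wc (not given in the question), one common
# convention would be R_wc_new = R_wc @ R_t.T, but the exact composition
# depends on the direction in which T is defined.
```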

## User interface design: What would be the best way to create a cone that represents the field of vision of an entity in Godot Engine?

I would like to create a game in which PCs and NPCs have a field of vision, shown as a cone in front of the entity (see the image for an example).

As this would be the first game I develop using Godot Engine, I wonder what the best way to implement this feature would be. Also, if possible, in later versions of the game I would like scenery to block the field of vision, like what is done in Monaco: What's Yours Is Mine.

Would you have any recommendations?
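Engine-agnostic, the core test is angular: a target is inside the cone if it is within range and within half the cone angle of the entity's facing. A minimal Python sketch of that check (occluders blocking the cone, as in Monaco, would be a separate raycast step; in Godot one would typically combine an Area2D-style shape with RayCast2D, but the math is the same):

```python
import math

def in_vision_cone(origin, facing_deg, target, fov_deg, max_dist):
    """True if `target` lies inside the 2D vision cone.

    origin, target: (x, y) tuples; facing_deg: direction the entity faces;
    fov_deg: full cone angle; max_dist: view range.
    """
    dx, dy = target[0] - origin[0], target[1] - origin[1]
    if math.hypot(dx, dy) > max_dist:
        return False
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Signed smallest difference between the two angles, in (-180, 180]
    diff = (angle_to_target - facing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# Facing east (0 degrees) with a 90-degree cone and 10 units of range:
in_vision_cone((0, 0), 0, (5, 1), 90, 10)   # True: ~11 degrees off-axis
in_vision_cone((0, 0), 0, (0, 5), 90, 10)   # False: 90 degrees off-axis
```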

## pathfinder 1e: Is Kenabres an extreme example of the view of tieflings and demons in the Worldwound?

In an answer to my question "Tieflings' situation with Worldwound defenders?", Kenabres was used as an example of how tieflings are viewed.

While creating characters for a campaign set there, one of my players and I had a discussion about this. Given how much Kenabres was mentioned in that answer, he thinks Kenabres is an absolute extreme, while I think it is more typical of cities there in its views on everything tiefling and demonic.

Now my question is:
Is Kenabres an extreme example of the view of tieflings and demons in the Worldwound?

## Is the technical debt management problem more a cultural problem or a matter of visibility?

Disclaimer: I am not objecting to technical debt in general. In this post, the technical debt problem refers to debt severe enough to have caused a negative impact.

Recently I was thinking of building a tool to automatically generate a technical debt report from the issue tracker: introduction rate versus cleanup rate over time. Besides the totals, there would also be numbers broken down by project team and by manager, so that managers can easily see the current level of technical debt without digging into the issue tracker's details (such a tool may already exist; I need to research it to avoid reinventing the wheel).
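For the report itself, the core computation is simple once debt-labelled issues are exported from the tracker. A hypothetical sketch (the issue fields, dates, and team names here are made up for illustration):

```python
from collections import defaultdict
from datetime import date

# Hypothetical export: each debt-labelled issue with its open/close dates and team
issues = [
    {"team": "payments", "opened": date(2019, 1, 10), "closed": date(2019, 3, 2)},
    {"team": "payments", "opened": date(2019, 2, 5),  "closed": None},
    {"team": "search",   "opened": date(2019, 2, 20), "closed": None},
]

def monthly_rates(issues):
    """Introduction rate vs. cleanup rate per (team, month)."""
    opened = defaultdict(int)
    closed = defaultdict(int)
    for issue in issues:
        opened[(issue["team"], issue["opened"].strftime("%Y-%m"))] += 1
        if issue["closed"] is not None:
            closed[(issue["team"], issue["closed"].strftime("%Y-%m"))] += 1
    return opened, closed

opened, closed = monthly_rates(issues)
# e.g. opened[("payments", "2019-01")] == 1, closed[("payments", "2019-03")] == 1
```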

Motivation-wise, technical debt has been increasing for years. Each time developers pad a project estimate to include technical debt cleanup, they are most often asked to remove those numbers from the estimate, so refactoring/cleanup work generally ends up postponed indefinitely. I hope a periodic report would help improve technical debt management.

However, on second thought, I wonder whether increasing the visibility of the technical debt level will really raise its priority. In general, is the technical debt problem a matter of organizational culture, or simply a lack of tools/information? I assume there is no universal answer, but I wonder what the most common cause is. What is your experience?

## computer vision: laser plane estimation for a laser-camera system?

I have to set up a laser line/plane projector and a webcam, and locate the 3D position of the laser seen in the camera image. I have read several resources, but the idea is still not very clear in my head.

So my intuition is that, since we have a laser projector and camera setup, and we want to find the position of the laser point in the image, we have to find the 'correct' laser plane that intersects with the camera/image plane. I am confused about how to find the relative pose of this plane with respect to the camera, and how to use this to find the 3D coordinates.
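Once the laser plane is expressed in the camera frame (commonly obtained by calibrating against a target at several known poses; that calibration step is assumed here, not shown), recovering a 3D point is just a ray-plane intersection. A minimal numpy sketch with made-up intrinsics:

```python
import numpy as np

def pixel_to_3d(px, py, K, n, d):
    """Intersect the viewing ray of pixel (px, py) with the laser plane.

    K: 3x3 camera intrinsics; the plane satisfies n . X = d in the
    camera frame (n a 3-vector, d a scalar).
    """
    ray = np.linalg.inv(K) @ np.array([px, py, 1.0])  # ray direction, camera frame
    t = d / (n @ ray)   # scale at which the ray meets the plane
    return t * ray      # 3D point on the laser line, camera frame

# Made-up example: simple intrinsics, laser plane z = 2 (n = (0,0,1), d = 2)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
point = pixel_to_3d(320.0, 240.0, K, np.array([0.0, 0.0, 1.0]), 2.0)
# The principal ray hits the plane at (0, 0, 2)
```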

## For the physical tabletop game, how do you restrict the vision of PCs in situations such as fog?

For situations such as Obscuring Mist, darkness, and other vision impairments, what has worked for you at the physical tabletop?

Measures of success:

• Players can move their character on a battle grid to explore the restricted viewing area
• The GM can update players when their sensory information changes without significantly interrupting the game each round
• Ideally, it also avoids constantly having to take moves back
• Some method of giving players information about what their character perceives
• Ideally, only the player who "should" have exact knowledge gets it, and the others learn only what that player can convey

Good answer(s) will follow the concept of good subjective questions and cover the key points:

• What has been your experience using the system?
• That is, feedback from the players: how well did you feel it worked?
• How did you restrict vision while also revealing information?
• Did you just tell the player? Did everyone know when Bob was next to the bad guy?

Related question for playing online with the Roll20 virtual tabletop: In Roll20, how do you restrict PCs' vision in situations like fog?

## computer vision: how to automatically detect the intrinsic parameters of the camera with a single grid?
