Does IBIS reduce the resolution of the image? How does it compare to lens-based IS?

"There is only one way in which IBIS could work without a resolution reduction: if the sensor area were significantly larger than the area exposed to light, so that there is still sensor available when it has moved out of its initial position."

Actually, it is exactly the opposite. The image circle (the result of that cone of light striking the image plane) is larger than the sensor. It has to be: first because that is the only way to cut a complete rectangle out of a circle, but also because the edges of the circle are not well defined; they fade out messily with increasing artifacts. So the circle covers more than the minimum.

It is true that for IBIS to work, that minimum needs to be a little bigger. To give a concrete example: a full-frame sensor is 36 × 24 mm, which means a diagonal of approximately 43.3 mm. So the minimum circle without moving the sensor must be at least 43.3 mm in diameter. The Pentax K-1 has sensor-shift image stabilization that allows movement of up to 1.5 mm in any direction, so the sensor can be anywhere within a space of 36 + 1.5 + 1.5 by 24 + 1.5 + 1.5, or 39 × 27 mm. That means the minimum diameter of the image circle to avoid problems is 47.4 mm: a little bigger, but not drastically so.
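If you want to double-check that arithmetic, here is a quick Python sketch (the sensor size and the 1.5 mm travel figure come from the paragraph above; the function name is my own):

```python
import math

def min_image_circle(width_mm, height_mm, travel_mm=0.0):
    """Minimum image-circle diameter covering a sensor of the given
    size, allowing +/- travel_mm of stabilization shift on each axis."""
    return math.hypot(width_mm + 2 * travel_mm, height_mm + 2 * travel_mm)

print(round(min_image_circle(36, 24), 1))       # 43.3 mm, sensor fixed
print(round(min_image_circle(36, 24, 1.5), 1))  # 47.4 mm, with 1.5 mm travel
```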

But the resolution of the sensor's crop of that circle stays the same. It has just shifted a little.

Actually, it is quite easy to find examples that demonstrate the concept of an image circle, because sometimes people use lenses designed for smaller sensors on cameras with larger sensors, which results in less than full-frame coverage. Here is an example from this site … do not pay too much attention to the image quality, as it is clearly a test shot taken through a glass window (complete with a window screen). But it illustrates the concept:

Image by Raj, from https://photo.stackexchange.com/questions/24755/why-does-my-nikkor-12-24mm-lens-vignette-on-my-nikon-d800#

You can see the round circle projected by the lens. It is cut off at the top and bottom because the sensor is wider than it is tall. This sensor measures (approximately) 36 × 24 mm, but the lens is designed for a smaller 24 × 16 mm sensor, so we get this effect.

If we take the original and draw a red box showing the size of the smaller "correct" sensor, we see:

Image by Raj, with frame drawn.

Then, if the photo had been taken with the lens on the "correct" camera, the whole image would have been the area inside the box:

Image by Raj, cropped.

You've probably heard of "crop factor." This is literally that.
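For what it's worth, the crop factor is simply the ratio of the sensor diagonals; a quick check with the two sizes above:

```python
import math

full_frame = math.hypot(36, 24)  # ~43.3 mm diagonal
smaller = math.hypot(24, 16)     # ~28.8 mm diagonal
print(round(full_frame / smaller, 2))  # 1.5, the familiar APS-C crop factor
```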

Now, if IBIS needs to move the sensor by its maximum amount (here, scaled to match the 1.5 mm travel limit of the Pentax full frame), you can see this, with the lighter red line representing the original position and the other box the new, shifted one. You can see that although the corner gets closer to the edge, it is still inside the circle:

Image by Raj, with shifted frame drawn.

resulting in this image:

Image by Raj, shifted and cropped.

Actually, if you look at the very extreme lower-right corner, there is a little vignetting that should not be there; this artificial example pushes a bit too far. In the extreme case of a lens designed right at the edge of the minimum (to save cost, weight, size, etc.), when the IBIS system needs to make its most extreme shift, it is actually possible to see increased artifacts like this in the affected corners of the image. But that is a rare case in real life.
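Here is a rough Python sketch of that corner-coverage check (the dimensions are from above; the 43.3 mm case stands in for a hypothetical lens built right at the bare minimum):

```python
import math

def corner_covered(width_mm, height_mm, shift_mm, circle_diameter_mm):
    """After a shift of shift_mm on each axis, is the farthest
    sensor corner still inside the image circle?"""
    corner = math.hypot(width_mm / 2 + shift_mm, height_mm / 2 + shift_mm)
    return corner <= circle_diameter_mm / 2

# A circle a touch bigger than the 47.4 mm minimum keeps the corner covered:
print(corner_covered(36, 24, 1.5, 47.5))  # True
# A lens built at the bare 43.3 mm minimum vignettes the shifted corner:
print(corner_covered(36, 24, 1.5, 43.3))  # False
```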

As Michael Clark points out, it is generally true that image quality falls off near the edge of a lens's image circle, and if you are going for maximum resolution (in the sense of captured detail), being away from the center can affect that. But in terms of pixels captured, the count is identical.

Aside from the centering issue, this can also affect composition: if you are trying to be very careful to include or exclude something at the edge of the frame, but are not holding still, you could end up with framing close to 5% off from what you intended. But of course, if you are not holding still, you could get that from the motion alone.
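The "close to 5%" is my rough arithmetic from the 1.5 mm travel figure, not a published spec:

```python
travel = 1.5  # mm, maximum sensor travel in any one direction (Pentax K-1)
print(round(travel / 36 * 100, 1))  # 4.2 -> ~4% of the frame width
print(round(travel / 24 * 100, 1))  # 6.2 -> ~6% of the frame height
```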

In fact, Pentax (at least) actually uses this to offer a novel feature: there is a setting that lets you intentionally shift the sensor, allowing a different composition (much like a small amount of shift from a view camera or a tilt-shift lens). This can be particularly useful for architectural photography. (See this in action in this video.)

Also: it's worth thinking about what happens during the course of the exposure. The goal is to reduce blur, right? If the camera is perfectly still (and assuming perfect focus, of course), each light source in the scene maps to one point, resulting in a perfectly sharp rendering of that source in the image. But consider a fairly long half-second shutter speed during which the camera moves. Then you get something like this:

… where the motion of the camera during the exposure has caused the points to be drawn as lines instead. To compensate for this, image stabilization, whether lens-based or sensor-shift, does not just jump to a new location. It follows (as best it can) that possibly erratic motion while the shutter is open. For video, software can correct for this by comparing the differences from one frame to the next (sketched below). For a single photographic exposure, there is no such sequence of frames, so it can't work the way the quote at the top of this answer supposes. Instead, you need a sophisticated mechanical solution.
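For the video case, here is a minimal sketch of the frame-comparison idea, using phase correlation (my choice of technique for illustration; nothing above specifies which algorithm any particular software uses):

```python
import numpy as np

def estimate_shift(prev_frame, next_frame):
    """Estimate the (dx, dy) translation of next_frame relative to
    prev_frame via phase correlation -- the core of one simple
    software-stabilization approach. Real stabilizers also handle
    rotation, rolling shutter, and so on."""
    f1 = np.fft.fft2(prev_frame)
    f2 = np.fft.fft2(next_frame)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = f2 * np.conj(f1)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak indices past the midpoint wrap around to negative shifts.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy

# A stabilizer would translate each frame by the negative of the estimated
# shift. A single long exposure has no frames to compare, which is why a
# mechanical solution is needed there.
```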