I'd take this answer in a slightly different direction: not from a debugging perspective, but from a testing one.
Usually, a pure function has a very simple contract:
Same input, same result.
That makes it very easy to define appropriate test cases: pick a set of standard test values plus some values of special interest (depending on what your function does), write your test methods, and compare what the function returns with what you expect it to return. This gives you a solid set of tests that you can run whenever you make changes.
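For example, a minimal sketch in Python (the function and values here are hypothetical, just to illustrate the idea):

```python
import unittest

def discount_price(price, quantity):
    """Pure function: the same inputs always produce the same output."""
    if quantity >= 10:
        return price * quantity * 0.9  # bulk discount
    return price * quantity

class DiscountPriceTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount_price(5.0, 2), 10.0)

    def test_discount_at_threshold(self):
        self.assertAlmostEqual(discount_price(5.0, 10), 45.0)

if __name__ == "__main__":
    unittest.main()
```

Every run of these tests gives the same verdict, because nothing in the function depends on anything but its arguments.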
Once your methods involve randomness, you run into this problem:
Same input, many possible results.
Now you need to know ALL the possible values your random function could return for a given input, and test against them. Depending on what your random function does, that is either infeasible (too many possibilities to enumerate, whether manually or programmatically) or outright impossible.
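To make the problem concrete, here is a hypothetical sketch of such a function; the hidden call to the random number generator is exactly what makes it hard to test:

```python
import random

def roll_damage(strength):
    """Hard to test: the result depends on a hidden random roll."""
    roll = random.randint(1, 6)  # randomness buried inside the algorithm
    return strength + roll
```

For `strength = 10` there are six possible results, and a test has no way of knowing which one it will get on any given run.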
Suddenly, the effort required to create a solid set of tests increases dramatically, at which point testing usually gets skipped entirely.
How would we avoid this?
To avoid this while writing your algorithms, encapsulate the randomness. Extract the piece that actually produces the random values into its own object, outside the algorithm. Now you can mock that object in your test suite, essentially fixing the values that the "random" object returns. It is not ideal, and it does not cover every scenario, but then neither does almost any test. The effort to build your test suite rises only slightly, because the "randomness" is gone and you can treat the function as a normal, pure function for testing purposes.
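A minimal sketch of that refactoring, continuing the hypothetical example above: the random source is passed in as a dependency, and the test replaces it with a mock that returns a fixed value:

```python
import random
import unittest
from unittest.mock import Mock

class DiceRoller:
    """Encapsulates the randomness so it can be replaced in tests."""
    def roll(self):
        return random.randint(1, 6)

def roll_damage(strength, roller):
    """Deterministic for a given roll: the randomness lives elsewhere."""
    return strength + roller.roll()

class RollDamageTest(unittest.TestCase):
    def test_damage_with_fixed_roll(self):
        roller = Mock()
        roller.roll.return_value = 4  # pin the "random" value
        self.assertEqual(roll_damage(10, roller), 14)

if __name__ == "__main__":
    unittest.main()
```

Production code passes in a real `DiceRoller`; the test passes in the mock, so the assertion is exact and repeatable.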
Similar reasoning applies to manually debugging a function: most of the time you do not know which random value the algorithm actually generated, so it is hard to recreate exactly what happened inside the function when it broke. That makes manual debugging very difficult.