analytic number theory: intuition about how Voronoi formulas change the lengths of sums

When reading the literature, one finds innumerable examples of Voronoi formulas, that is, formulas that take a sum over Fourier coefficients, twisted by some character and weighted by some suitable test function, and return a different sum over the same Fourier coefficients, twisted by some different characters and this time weighted by an integral transform of the test function.

The reason one wants to do this in practice is, of course, that the second sum is somehow better, which in my (admittedly limited) experience usually comes down to the length of the second sum having changed significantly for the better.

I will give an example (from Xiaoqing Li's "Bounds for GL(3) × GL(2) L-functions and GL(3) L-functions", because that is what I have in front of me).
In this case we have the GL(3) Voronoi formula
$$
\sum_{n > 0} A(m,n)\, e\Bigl(\frac{n \bar{d}}{c}\Bigr) \psi(n) \sim \sum_{n_1 \mid cm} \sum_{n_2 > 0} \frac{A(n_2, n_1)}{n_1 n_2}\, S(md, n_2; mc n_1^{-1})\, \Psi\Bigl(\frac{n_2 n_1^2}{c^3 m}\Bigr),
$$

where $\psi$ is a smooth, compactly supported test function, $\Psi$, as suggested above, is an integral transform of it, $A(m,n)$ are the Fourier coefficients of (in this case) an SL(3) Maass form, $(d, c) = 1$, and $d \bar{d} \equiv 1 \pmod{c}$.

(I have omitted many details here, but I do not think they are relevant to my question.)

Doing so essentially transforms the $n$-sum into the $n_2$-sum, where, as is evident from the formula, the $n_2$-sum has a very different argument in its test function.

What happens in practice is that, once we reach a point where applying the Voronoi formula is appropriate, we transform the sum and study the integral transform, mainly by stationary phase analysis, to find out what the length of the new $n_2$-sum is.

In the particular example at hand, this, after identifying the stationary phase and playing around, takes us from an $n$-sum over $N \leq m^2 n \leq 2N$, that is to say $n \sim \frac{N}{m^2}$, to an $n_2$-sum over
$$
\frac{2}{3} \frac{N^{1/2}}{n_1^2} \leq n_2 \leq 2 \frac{N^{1/2}}{n_1^2},
$$

that is to say, $n_2 \sim \frac{N^{1/2}}{n_1^2}$, which means that the arguments of the test function are now of size $\frac{N^{1/2}}{c^3 m}$.
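For orientation, here is the mechanism as I can reconstruct it (this is standard, I believe, but the normalizations below are only schematic). Up to lower-order terms and constants, the transform behaves like
$$
\Psi(x) \approx \int_0^\infty \psi(y)\, e\bigl(3 (xy)^{1/3}\bigr)\, dy,
$$
so if $\psi$ is non-oscillatory and supported on $y \sim Y$, repeated integration by parts (each pass gaining a factor $(xY)^{-1/3}$) makes $\Psi(x)$ negligible unless $xY \lesssim 1$, which is the familiar heuristic "dual length $\approx$ conductor / original length". If instead $\psi$ carries an oscillation of its own, say $e(\alpha y^{1/2})$ as in Li's application, the combined phase has a stationary point only when $x^{1/3} y^{-2/3} \asymp \alpha y^{-1/2}$, which pins $x$ down to essentially a single dyadic size; unwinding $x = n_2 n_1^2 / (c^3 m)$ with $y \sim N/m^2$ is what produces the $N^{1/2}/n_1^2$ range above.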

I can follow the motions of performing this stationary phase analysis and so on, but my question is this: is there any intuition for how and why these Voronoi formulas alter the lengths of the sums?

utility – Intuition for proper monotone functions

While reading papers on utility theory, I came across the definition of a proper monotone function, which is a function with $u' > 0$, $u'' < 0$, $u''' > 0$, and so on.
The first two conditions are clear: more is better than less, and marginal utility is decreasing. I am not sure how to interpret the higher-order conditions and, in general, what is "proper" about such a function?
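For concreteness, the standard textbook utilities do satisfy the whole ladder of conditions; for instance $u(x) = \ln x$ on $x > 0$ has derivatives of alternating sign of every order:
$$
u'(x) = \frac{1}{x} > 0, \qquad u''(x) = -\frac{1}{x^2} < 0, \qquad u^{(n)}(x) = \frac{(-1)^{n-1} (n-1)!}{x^n}.
$$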

Any intuition is appreciated.

Intuition behind derivative with respect to conjugate

When $f$ is a holomorphic function, we have

$$ \frac{\partial f}{\partial \bar{z}} = 0 $$

I know how to prove it, but is there any intuition behind it? Something geometric, perhaps?
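For reference, the computation I know goes through the Wirtinger derivative: writing $z = x + iy$ and $f = u + iv$,
$$
\frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\Bigl(\frac{\partial f}{\partial x} + i \frac{\partial f}{\partial y}\Bigr) = \frac{1}{2}\bigl((u_x - v_y) + i(u_y + v_x)\bigr),
$$
so $\partial f / \partial \bar{z} = 0$ is exactly the pair of Cauchy-Riemann equations $u_x = v_y$ and $u_y = -v_x$. But I am after geometry, not computation.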

pr.probability – Intuition behind the local limit theorem in hyperbolic groups

Let $\Gamma$ be a finitely generated group and let $\mu$ be a probability measure on $\Gamma$. We denote by $X_n$ the induced random walk. Finally, let $p_n = \mu^{*n}(e) = P_e(X_n = e)$. The local limit problem is to find precise asymptotics for $p_n$. In hyperbolic groups, we have the following.

Theorem. Let $\Gamma$ be a Gromov hyperbolic group and let $\mu$ be a finitely supported symmetric probability measure on $\Gamma$ whose support generates $\Gamma$. Then there exist $C \geq 0$ and $R > 1$ such that
$p_n \sim C R^{-n} n^{-3/2}$ as $n \to \infty$.

This is due to Gerl-Woess and Lalley for free groups and to Gouëzel in general.

Question. What is the intuition behind the exponent $3/2$?

Here is a heuristic explanation in the particular case of the simple random walk on the free group $F(a,b) = F_2$ with two generators $a$ and $b$, that is to say $\mu = \frac{1}{4}(\delta_a + \delta_{a^{-1}} + \delta_b + \delta_{b^{-1}})$. Then, at each step, the random walk $X_n$ has three chances out of four to move away from the identity $e$ and one chance out of four to move back closer to $e$. Therefore, intuitively, $X_n$ can be compared with the non-centered random walk $Z_n$ on $\mathbb{Z}$ driven by the measure $\frac{3}{4}\delta_1 + \frac{1}{4}\delta_{-1}$. When watching $X_n$ move away from or toward $e$, we only care about the distance between $e$ and $X_n$, so we really have to compare $X_n$ with the random variable $Z_n^+$, which is $Z_n$ conditioned to stay positive. Now, the local limit theorem for non-centered random walks on $\mathbb{Z}$ conditioned to stay positive is indeed of the form $C R^{-n} n^{-3/2}$; see here.
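On $F_2$ itself, the distance $d(e, X_n)$ is an honest birth-and-death chain with the transition probabilities just stated (from $k \geq 1$ it moves to $k+1$ with probability $3/4$ and to $k-1$ with probability $1/4$; from $0$ it moves to $1$ surely), so at least the $n^{-3/2}$ correction can be checked numerically. Here is a minimal sketch I wrote in Haskell (my own, not from any reference; it assumes only those transition probabilities and the classical value $\sqrt{3}/2$ for the spectral radius $R^{-1}$ of the simple random walk on $F_2$):

-- Evolve the distribution of the distance d(e, X_n) and check that
-- p_n * R^n * n^{3/2} flattens out to a constant.
step :: [Double] -> [Double]
step p = [ incoming k | k <- [0 .. n] ]
  where
    n = length p
    at j | j >= 0 && j < n = p !! j
         | otherwise       = 0
    incoming 0 = at 1 / 4                     -- only reachable from distance 1
    incoming 1 = at 0 + at 2 / 4              -- from 0 (surely) or down from 2
    incoming k = at (k - 1) * 3 / 4 + at (k + 1) / 4

main :: IO ()
main = do
  let dists = iterate step [1]                -- start at distance 0, i.e. at e
      pn m  = head (dists !! m)               -- p_m = P(X_m = e)
      rho   = sqrt 3 / 2                      -- spectral radius, i.e. 1/R
  -- p_m / rho^m * m^{3/2} should flatten out (even m only; p_m = 0 for odd m)
  mapM_ (\m -> putStrLn (show m ++ ": " ++ show (pn m / rho ^ m * fromIntegral m ** 1.5)))
        [50, 100, 200, 400]

If the theorem is right, the printed ratios should settle down to the constant $C$.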

However, there is an obvious problem with this intuition: when looking at the distance $d(e, X_n)$, one does not obtain independent increments, so it is not clear whether a rigorous statement comparing $X_n$ and $Z_n^+$ can be made.

Moreover, it is not clear how to give such a heuristic argument in general hyperbolic groups (except by saying, very vaguely, that the random walk should behave as it does on a tree).

Bayesian probability – Intuition with Bayes' theorem

I posed a problem to a couple of friends (one I came up with after hearing another problem wrong):

You have a bag of nine marbles. There is a 1/9 probability that 1 of them is green, a 1/9 probability that 2 are green, and so on, up to a 1/9 probability that all 9 are green.

You draw a marble and it comes out green. What is the probability that there were 4 green marbles in the bag originally, before the draw?

I vaguely remembered Bayes' formula, so I tried to use it, plugged in the values, and got a 4/45 chance that there were 4 green marbles.

A = Bag contains 4 green
B = Draw one green

P(B|A) = 4/9
P(A) = 1/9
P(B) = 45/81

But my friend said he simply took P(B|A) divided by "45/9" and arrived at the same answer as me (much faster than it took me to google Bayes). He described it as dividing the desired outcome by the "sum of all possible outcomes", also remarking that the terms in the divisor "are no longer probabilities, it's more like the integral of probabilities or something like that".

Did he skip some steps in Bayes with his intuition, and is his method sound for this case? Or is it wrong to use Bayes this way?

Maybe someone who understands his intuition can give an explanation? And is there a minimal change to the problem that would force Bayes' theorem to be used in full?
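A quick sketch (my own, in Haskell with exact rationals) confirming that both routes give the same number:

import Data.Ratio

-- Posterior P(k green | drew green) for k = 1..9, uniform prior 1/9 each.
posterior :: [Rational]
posterior = [ w k / total | k <- [1 .. 9] ]
  where
    w k   = (k % 9) * (1 % 9)            -- likelihood times prior
    total = sum [ w k | k <- [1 .. 9] ]  -- this is P(B) = 45/81

main :: IO ()
main = do
  print (posterior !! 3)                 -- the k = 4 entry: prints 4 % 45
  -- My friend's shortcut: the constant prior 1/9 cancels from numerator
  -- and denominator, leaving P(B|A) divided by the sum of likelihoods.
  print ((4 % 9) / (45 % 9))             -- also prints 4 % 45

So at least in this case his method is Bayes with the constant prior cancelled, which also suggests the minimal change that would break it: make the prior non-uniform.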

Linking the intuition of topology with its axiomatic definition

For a long time I have had an intuition about what topology is, without having studied it formally: namely, the classic example of a mug being the same, topologically, as a donut, since one can be stretched and reshaped into the other while preserving "holes".

Recently I began to study it formally, via the definition of a topology as a collection of open sets closed under arbitrary unions and finite intersections, etc. I can understand these definitions, but I cannot in any way relate them to my initial intuitive understanding of what topology is; they seem like completely separate subjects.

Could someone help me close this gap in my understanding?

Algorithms: intuition behind the minimum cut in a flow network? E.g. baseball elimination or project selection

I was wondering if someone could give me a general characterization of the minimum cut, beyond the fact that its value equals the maximum flow of the network.

For example, in the baseball elimination problem, if we want to find out whether team z is eliminated, the minimum cut represents the team(s) that will beat team z to first place, provided the edges are not fully saturated. If the edges are fully saturated, then the minimum cut is everything except t, and team z still has a chance.

For project selection, the minimum cut determines the projects you must do to maximize your return; a toy instance is sketched below.
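Here is a tiny toy instance I made up (two projects, two machines; the numbers are arbitrary). The point it illustrates is the identity profit(S) = total revenue - cut(S), so maximizing profit over project sets S is literally the same as minimizing the cut:

import Data.List (nub, subsequences)

-- Toy instance: selecting a project earns its revenue, but every machine
-- it needs must be bought.  (Min-cut graph: s -> project with capacity
-- revenue, machine -> t with capacity cost, project -> machine infinite.)
projects :: [String]
projects = ["P1", "P2"]

revenue :: String -> Int
revenue "P1" = 10
revenue "P2" = 6

cost :: String -> Int
cost "M1" = 5
cost "M2" = 4

needs :: String -> [String]
needs "P1" = ["M1"]
needs "P2" = ["M1", "M2"]

-- Value of the s-t cut that keeps the project set s (and its machines)
-- on the source side: rejected revenues plus purchased machine costs.
cutValue :: [String] -> Int
cutValue s = sum [ revenue p | p <- projects, p `notElem` s ]
           + sum [ cost m | m <- nub (concatMap needs s) ]

profit :: [String] -> Int
profit s = sum (map revenue s) - sum [ cost m | m <- nub (concatMap needs s) ]

main :: IO ()
main = mapM_ report (subsequences projects)
  where
    total    = sum (map revenue projects)
    report s = putStrLn (show s ++ ": cut = " ++ show (cutValue s)
                         ++ ", profit = " ++ show (profit s)
                         ++ " = " ++ show total ++ " - cut")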

How do people realize that min-cut can be applied to these problems? What is it about the minimum cut that gives it so much power?

Thank you!

definite integrals – Intuition for the alternating series error bound

If a question asks me to use a power series to approximate a given definite integral to six decimal places, and my power series turns out to be an alternating series, I would use the alternating series error bound to help find the answer. If $|R_n| = |(\text{actual sum of the series}) - (\text{partial sum up to the } n\text{th term})|$, would I write $|R_n| < 0.000001$ or $|R_n| \leq 0.000001$? Also, I have just memorized the fact that, for the phrase "to six decimal places", you write a decimal with 5 zeros followed by a 1 after the decimal point. But intuitively, why is the upper bound on the absolute error not $0.0000001$ or $0.00001$, with six and four zeros after the decimal point, respectively?
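As a concrete illustration (my own toy example, not from any particular exercise): approximating $1/e = \sum_{n \geq 0} (-1)^n / n!$, where the alternating series bound says the error is at most the absolute value of the first omitted term, so it suffices to stop once that term drops below the target.

main :: IO ()
main = do
  let term n         = (-1) ^ n / fromIntegral (product [1 .. n]) :: Double
      -- keep terms until the first one smaller than the target 0.000001
      (used, next:_) = span (\t -> abs t >= 1e-6) (map term [0 ..])
      approx         = sum used
  putStrLn ("approximation:      " ++ show approx)
  putStrLn ("first omitted term: " ++ show (abs next))          -- the bound
  putStrLn ("actual error:       " ++ show (abs (approx - exp (-1))))

Here the sum stops after the $1/9!$ term, the first omitted term is $1/10! \approx 2.8 \times 10^{-7}$, and the true error sits below that bound.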

linear algebra: intuition behind the alternating property

I am trying to prove the alternating property of the determinant. From what I have seen so far, everything revolves around the idea that if I have a matrix and start swapping columns/rows, the number of swaps needed to return to the original matrix determines the sign.

Example: let this matrix be $A$, the original matrix:

$$
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
$$

Now swap the columns:
$$
\begin{pmatrix}
b & a \\
d & c
\end{pmatrix}
$$

The determinant of this new matrix will be $(-1)|A|$, because one column swap is needed to get back to the original. This generalizes to $(-1)^n$, where $n$ is the number of swaps.
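Checking this directly in the $2 \times 2$ case:
$$
\det\begin{pmatrix} b & a \\ d & c \end{pmatrix} = bc - ad = -(ad - bc) = -\det\begin{pmatrix} a & b \\ c & d \end{pmatrix}.
$$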

My question is: where does this $(-1)^n$ come from? What is the logic/intuition behind it? I am not looking for an answer that says "if odd, it is negative; if even, positive". I would like to really understand it. Please give me a basic answer that I can eventually build on.

Thanks for your time

Merkle Tree – Intuition for Simplicity CheckSigHashAll

So, I downloaded Simplicity and started a REPL using cabal new-repl Simplicity. Then I enabled type applications using :set -XTypeApplications.

Consider these invocations:

> (pkwCheckSigHashAll @CommitmentRoot @() lib (Schnorr.PubKey True (read "0")) (Schnorr.Sig (read "0") (read "1")))
CommitmentRoot {commitmentRoot = Hash256 {hash256 = 
"CANr201231SO0:1183130aDC3m4u!193247eEOT194nOUS208150&2182ACK203151>"}}

> (pkwCheckSigHashAll @CommitmentRoot @() lib (Schnorr.PubKey True (read "1")) (Schnorr.Sig (read "0") (read "1")))
CommitmentRoot {commitmentRoot = Hash256 {hash256 = 
"215222oj251&134SOZ202@N161(185j2DELt156147136Nz183179EOTH166FS141F"}}

> (pkwCheckSigHashAll @CommitmentRoot @() lib (Schnorr.PubKey True (read "1")) (Schnorr.Sig (read "0") (read "0")))
CommitmentRoot {commitmentRoot = Hash256 {hash256 = 
"215222oj251&134SOZ202@N161(185j2DELt156147136Nz183179EOTH166FS141F"}}

If I understand correctly, the commitment root goes into a transaction output. Since the signature should play no role there, the Haskell output makes sense: changing the signature does not affect the commitment root. The example above seems consistent with this intuition. Are both my intuition and the sample code correct?

Now, I can call the same function with WitnessRoot:

> (pkwCheckSigHashAll @WitnessRoot @() lib (Schnorr.PubKey True (read "0")) (Schnorr.Sig (read "0") (read "0")))
WitnessRoot {witnessRoot = Hash256 {hash256 = 
"185FS176179b{Xc2216n240186205v208164NW\DLE193:ETBbMO211152*I%"}}

> (pkwCheckSigHashAll @WitnessRoot @() lib (Schnorr.PubKey True (read "1")) (Schnorr.Sig (read "0") (read "0")))
WitnessRoot {witnessRoot = Hash256 {hash256 = 
"240hNUL188z4ACK200ETX151DC1&Y253t152176P146186137NAKm!STXDC3182148193246172O"}}

> (pkwCheckSigHashAll @WitnessRoot @() lib (Schnorr.PubKey True (read "0")) (Schnorr.Sig (read "0") (read "1")))
WitnessRoot {witnessRoot = Hash256 {hash256 = 
"205DEL132Q245166!196178248136194aO243+145T200E129I#253F134173i243K154J"}}

Again, if I understand correctly, the witness root goes into the input of the spending transaction. The witness root is affected when I change either the signature or the public key, which matches my intuition. Simplicity uses BIP-Schnorr, which does not allow pubkey recovery. Again, is my intuition consistent with the code above?

I guess these hash values contain simulated transaction metadata like nLockTime, embedded in lib, right?

Now, let's say I wanted to make a 2-of-2 MAST-based multisig with Simplicity; what would that look like? There must be a way to compose two different calls to CheckSigHashAll. And the spending transaction would surely need to provide a MAST path through the script; how is that done in Simplicity?