## pr.probability – Minimizing \$\int \lambda({\rm d}y) \frac{\left|g(y) - \frac{p(y)}{c} \lambda g\right|^2}{r((i, x), y)}\$ with respect to the discrete parameter \$i\$

Let $$I \subseteq \mathbb N$$ be finite and non-empty, $$(E, \mathcal E, \lambda)$$ be a $$\sigma$$-finite measure space, $$\lambda f := \int f \, {\rm d}\lambda$$ for $$\lambda$$-integrable $$f : E \to \mathbb R$$, $$p : E \to (0, \infty)$$ be $$\mathcal E$$-measurable with $$c := \lambda p \in (0, \infty)$$, $$r : (I \times E) \times E \to (0, \infty)$$ be $$(2^I \otimes \mathcal E) \otimes \mathcal E$$-measurable with $$\lambda r((i, x), \;\cdot\;) = 1$$ for all $$(i, x) \in I \times E$$, and $$g : E \to (0, \infty)$$ be $$\mathcal E$$-measurable.

How can I determine for which $$i \in I$$ the quantity $$\sigma_i := \int \lambda({\rm d}y) \frac{\left|g(y) - \frac{p(y)}{c} \lambda g\right|^2}{r((i, x), y)}$$ is minimal (if the minimum is attained at more than one $$i$$, I want to choose the smallest one)? I am not interested in the value of $$\sigma_i$$ itself.

At the moment, I estimate each $$\sigma_i$$ using Monte Carlo integration (sampling from $$r((i, x), \;\cdot\;)$$ is possible) and take the smallest index $$i$$ whose corresponding estimate of $$\sigma_i$$ is minimal. However, this is error-prone (since the estimate of $$\sigma_i$$ might not be close enough to $$\sigma_i$$) and extremely slow. The latter is my real problem, as I need to solve this problem for $$\approx 10^5$$ different $$g$$ inside a loop of length $$\ge 10^5$$.
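Concretely, my current approach looks roughly like the following sketch. All callables (`sample_r`, `r_density`, `g`, `p`) are hypothetical stand-ins for the actual model, with $$x$$ fixed and absorbed into them; since $$Y \sim r((i,x),\,\cdot\,)$$ gives $$\mathbb E[h(Y)^2 / r((i,x),Y)^2] = \sigma_i$$ with $$h(y) = g(y) - \frac{p(y)}{c}\lambda g$$, a plain sample mean estimates each $$\sigma_i$$:

```python
import numpy as np

def estimate_sigmas(sample_r, r_density, g, p, c, lam_g, I, n=10_000, seed=0):
    """Monte Carlo estimates of sigma_i for each i in I.

    sample_r(i, n, rng) -> n samples from r((i, x), .)
    r_density(i, y)     -> density values r((i, x), y)
    lam_g               -> the constant (lambda g)
    Returns (dict of estimates, smallest index attaining the minimum).
    """
    rng = np.random.default_rng(seed)
    est = {}
    for i in I:
        y = sample_r(i, n, rng)
        h = g(y) - p(y) / c * lam_g          # integrand numerator, unsquared
        est[i] = float(np.mean(h**2 / r_density(i, y)**2))
    best = min(I, key=lambda i: (est[i], i))  # ties -> smallest index
    return est, best
```

The `(est[i], i)` key implements the tie-breaking rule (smallest minimizing index) from the question.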

So is there a smarter solution to this problem?

## ag.algebraic geometry – Obtain the equation of the line symmetric to the line l: y = x - 3 with respect to the line m: y = (-1/3)x - (1/3)

What my solution book shows me:

First, find the point of intersection of lines l and m, which is (2, -1).

Pick a point on line l (this is where I go wrong; I pick (1, -2)).

Through that point, calculate the line perpendicular to m using the point-slope formula.

Find the point of intersection between the equation of the last step and line m

Let the coordinates of point B be (X, Y) and find B from the coordinates of the last step (that intersection point is the midpoint of the chosen point and B).

Using (2, -1) and Point B, find the equation of the straight line.

The big problem is that when I use the point (1, -2), I do not get the answer y = -7x + 13. Can anyone show me how?
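For what it's worth, the book's steps do yield y = -7x + 13 starting from the point (1, -2), so the discrepancy is likely an arithmetic slip. A small exact-arithmetic check (function and variable names are mine, not from the book; it assumes m is neither horizontal nor vertical):

```python
from fractions import Fraction as F

def reflect(A, m_s, m_b):
    """Reflect point A across the line m: y = m_s*x + m_b (m_s != 0)."""
    ax, ay = A
    p_s = F(-1) / m_s                  # slope of the perpendicular through A
    p_b = ay - p_s * ax                # its intercept (point-slope formula)
    fx = (p_b - m_b) / (m_s - p_s)     # foot of the perpendicular on m
    fy = m_s * fx + m_b
    return (2 * fx - ax, 2 * fy - ay)  # foot is the midpoint of A and B

A = (F(1), F(-2))                      # chosen point on l: y = x - 3
B = reflect(A, F(-1, 3), F(-1, 3))     # m: y = -(1/3)x - 1/3
P = (F(2), F(-1))                      # intersection of l and m
slope = (B[1] - P[1]) / (B[0] - P[0])
intercept = P[1] - slope * P[0]
print(slope, intercept)                # -7 13, i.e. the line y = -7x + 13
```

Here the foot of the perpendicular comes out as (7/5, -4/5), so B = (9/5, 2/5), and the line through (2, -1) and B is y = -7x + 13.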

## time complexity – Subset of \$k\$ vectors with the shortest sum, with respect to the \$\ell_\infty\$ norm

I have a collection of $$n$$ vectors $$x_1, \dots, x_n \in \mathbb{R}_{\geq 0}^{d}$$. Given these vectors and an integer $$k$$, I want to find the subset of $$k$$ vectors whose sum is shortest in the uniform norm. That is, find the (possibly non-unique) set $$W^* \subset \{x_1, \dots, x_n\}$$ such that $$\left|W^*\right| = k$$ and

$$W^* = \arg\min\limits_{W \subset \{x_1, \dots, x_n\} \land \left|W\right| = k} \left\lVert \sum\limits_{v \in W} v \right\rVert_{\infty}$$

The brute-force solution to this problem takes $$O(dkn^k)$$ operations – there are $$\binom{n}{k} = O(n^k)$$ subsets to test, and each takes $$O(dk)$$ operations to compute the sum of the vectors and then take the uniform norm (in this case, just the maximum coordinate, since all the vectors are non-negative).
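The brute force just described can be sketched as follows (a straightforward reference implementation, not an attempt at the smarter algorithm being asked for):

```python
from itertools import combinations

def best_k_subset(vectors, k):
    """Exhaustive search over all C(n, k) subsets: O(d k n^k) time.

    vectors: list of equal-length lists of nonnegative numbers.
    Returns (best l_inf norm, index tuple of a minimizing subset).
    """
    n, d = len(vectors), len(vectors[0])
    best_norm, best = float("inf"), None
    for idx in combinations(range(n), k):
        s = [sum(vectors[i][j] for i in idx) for j in range(d)]
        norm = max(s)  # l_inf norm = max coordinate, since entries are >= 0
        if norm < best_norm:
            best_norm, best = norm, idx
    return best_norm, best
```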

My questions:

1. Is there an algorithm better than brute force? Approximation algorithms are fine.

One idea I had was to consider a convex relaxation where we assign each vector a fractional weight in $$(0, 1)$$ and require the weights to sum to $$k$$. The resulting subset of $$\mathbb{R}^d$$ spanned by all these weighted combinations is indeed convex. However, even if we can find the optimal weight vector, I am not sure how to use it to choose a subset of $$k$$ vectors. In other words, which integral rounding scheme should be used?
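On the rounding question: one simple candidate (an assumption on my part, with no approximation guarantee claimed) is to keep the $$k$$ vectors that received the largest fractional weights:

```python
def round_topk(weights, k):
    """Round a fractional solution by keeping the k largest weights.

    weights: list of fractional weights, one per vector.
    Returns the sorted indices of the k chosen vectors; ties are broken
    toward the smaller index. A heuristic sketch only.
    """
    order = sorted(range(len(weights)), key=lambda i: (-weights[i], i))
    return sorted(order[:k])
```

Whether this loses much relative to the fractional optimum is exactly the open part of the question.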

I've also thought about dynamic programming, but I'm not sure if this would end up being faster in the worst case.

2. Consider a variation where we want to find the optimal subset for each $$k$$ in $$[n]$$. Again, is there a better approach than naively solving the problem for each $$k$$? I think there should be a way to reuse information from runs on subsets of size $$k$$ for those of size $$k + 1$$, and so on.

3. Consider the variation where instead of a subset size $$k$$, one is given some target norm $$r \in \mathbb{R}$$. The task is to find the largest subset of $$\{x_1, \dots, x_n\}$$ whose sum has uniform norm $$\leq r$$. In principle, one would have to search $$O(2^n)$$ subsets of vectors. Do the algorithms change? Also, is the decision version (for example, asking whether there is a subset of size $$\geq k$$ whose sum has uniform norm $$\leq r$$) NP-hard?

## Turing machines – Time complexity of state-of-the-art SAT solvers with respect to formula length

I am learning about DPLL and CDCL SAT solvers, and I know they have time complexity exponential in the number of variables.
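For concreteness, the DPLL skeleton I am referring to looks like this (a bare-bones sketch of the textbook algorithm; real CDCL solvers add clause learning, watched literals, restarts, and much more):

```python
def dpll(clauses):
    """Bare-bones DPLL: unit propagation plus branching.

    clauses: list of lists of nonzero ints; a negative int is a negated
    variable. Returns True iff the formula is satisfiable. Worst-case
    running time is exponential in the number of variables.
    """
    while True:                          # unit propagation to a fixed point
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new = []
        for c in clauses:
            if lit in c:
                continue                 # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return False             # derived the empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return True                      # every clause satisfied
    v = abs(clauses[0][0])               # branch on the first free variable
    return dpll(clauses + [[v]]) or dpll(clauses + [[-v]])
```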

If I'm not mistaken, one of the reasons why most believe that P is not equal to NP is that no polynomial-time algorithm for SAT has been found despite the tremendous effort of mathematicians and computer scientists. However, the P vs NP problem is only concerned with time complexity with respect to the length of the formula, not the number of variables.

Is there evidence or proof that the time complexity of state-of-the-art SAT solvers is exponential in the formula size? If so, could you please cite it?

## ag.algebraic geometry – Specialization of \$Z \subset \mathbb{P}^n_k\$ with respect to a valuation ring \$R\$

Let $$R$$ be a valuation ring with an algebraically closed fraction field $$k$$. Let $$L$$ be the residue field of $$R$$. In Mumford's 'Red Book of Varieties and Schemes', Theorem 1 of Chapter II, Section 8 shows:
For every closed subset $$Z \subset \mathbb{P}^n_k$$, there is a unique closed subset
$$W \subset \mathbb{P}^n_L$$ such that
$$\rho(Z(k)) = W(L),$$
where $$\rho : \mathbb{P}^n(k) \to \mathbb{P}^n(L)$$ is the specialization map.

I am having similar problems with the proof as in this MathSE question,
and I was wondering if anyone could point me to an alternative reference for this result.
Thank you!

## Plot – Integrals of two variables: How to integrate with respect to one variable, then evaluate and plot the result with respect to the second variable?

I started using Mathematica a while ago, but now I am struggling with some more advanced calculations, and I do not know how to overcome some difficulties.
What I want is to evaluate an integral with respect to one variable and plot the result with respect to another variable. The problem I am facing is that the integral has no explicit closed form, since the integrand is a very complicated function. As a result, the code keeps running indefinitely every time I try to plot what should be the result of the integration.
My code is as follows, using Mathematica 9.

``````
x = 10^-4; y = 10^4; n2 = 2; Nf = 100; alpha = 2;
W1 = Sqrt[n2^2 - z^2];
W2 = Sqrt[z^2 - n2^2];
Pr = ((alpha - 1) (x y)^(alpha - 1))/(y^(alpha - 1) - x^(alpha - 1));
D1 = Pr Exp[-z T] (Cos[T W1]/z^2 + 1/(z W1) Sin[T W1]);
D2 = Pr Exp[-z T] (Cosh[T W2]/z^2 + 1/(z W2) Sinh[T W2]);
XX = Integrate[D1, {z, x, n2}] + Integrate[D2, {z, n2, y}]
Plot[XX, {T, 0, 10}]
``````

I would appreciate someone's help with more advanced strategies to overcome this problem.

## mp.mathematical physics – The derivative of a filter with respect to an individual output

Given two signals $$d(t)$$ and $$p(t)$$ and the corresponding filter $$w(t)$$, we have $$d(t) * w(t) = p(t)$$ (where $$*$$ denotes convolution); $$w(t)$$ can be computed in the frequency domain: $$w(t) = F^{-1}\left(\frac{F(p(t)) \overline{F(d(t))}}{F(d(t)) \overline{F(d(t))} + \epsilon}\right)$$
How can I get the derivative of the filter $$w(t)$$ with respect to $$p(t)$$:
$$\frac{\partial w}{\partial p} = \, ?$$
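One observation that may help: for fixed $$d(t)$$ and $$\epsilon$$, the map $$p \mapsto w$$ is linear, and a linear map is its own (Fréchet) derivative; in the frequency domain the Jacobian is diagonal with entries $$\overline{F(d)_k} / (|F(d)_k|^2 + \epsilon)$$. A quick numerical sanity check of this linearity on discrete signals (variable names are mine):

```python
import numpy as np

def wiener_filter(d, p, eps=1e-3):
    """w = IFFT( FFT(p) * conj(FFT(d)) / (|FFT(d)|^2 + eps) ), as in the post."""
    D, P = np.fft.fft(d), np.fft.fft(p)
    return np.fft.ifft(P * np.conj(D) / (D * np.conj(D) + eps)).real

# The map p -> w is linear in p (d and eps held fixed), so its derivative
# with respect to p is the map itself; check additivity numerically.
rng = np.random.default_rng(0)
d, p1, p2 = rng.normal(size=(3, 64))
lhs = wiener_filter(d, p1 + p2)
rhs = wiener_filter(d, p1) + wiener_filter(d, p2)
assert np.allclose(lhs, rhs)
```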

## ag.algebraic geometry – Regarding \$\mathbb{C}(s_1, s_2, s_3, y) = \mathbb{C}(x, y)\$, where \$s_1, s_2, s_3\$ are symmetric

Perhaps the following question is not at the MO level, but it has not received comments on MSE, so I am also asking it here:

Let $$\beta : \mathbb{C}(x, y) \to \mathbb{C}(x, y)$$ be the involution on $$\mathbb{C}(x, y)$$
defined by $$(x, y) \mapsto (x, -y)$$.

Let $$s_1, s_2, s_3 \in \mathbb{C}(x, y)$$ be three symmetric elements with respect to $$\beta$$.
It is not difficult to see that a symmetric element w.r.t. $$\beta$$ has the following form:
$$a_{2n} y^{2n} + a_{2n-2} y^{2n-2} + \cdots + a_2 y^2 + a_0$$, where $$a_{2j} \in \mathbb{C}(x)$$.

Assume that the following two conditions are met:

(1) Every two of $$\{s_1, s_2, s_3\}$$ are algebraically independent over $$\mathbb{C}$$.
Note that all three of $$s_1, s_2, s_3$$ are algebraically dependent over $$\mathbb{C}$$, since the transcendence degree of $$\mathbb{C}(x, y)$$ over $$\mathbb{C}$$ is two.

(2) $$\mathbb{C}(s_1, s_2, s_3, y) = \mathbb{C}(x, y)$$; this notation means the fraction fields of $$\mathbb{C}[s_1, s_2, s_3, y]$$ and $$\mathbb{C}[x, y]$$, respectively.

Example:
$$s_1 = x^2 + x^5 + A(y), \; s_2 = x^5 y^2 + B(y), \; s_3 = x^3 y^2 + C(y)$$, where $$A(y), B(y), C(y) \in \mathbb{C}(y^2)$$.

Question 1: Is it possible to find a specific 'form' for at least one of $$\{s_1, s_2, s_3\}$$?

A plausible answer could be: one of $$\{s_1, s_2, s_3\}$$ is of the form
$$\lambda x^n y^{2m} + D(y)$$ for some $$D(y) \in \mathbb{C}(y^2)$$, $$\lambda \in \mathbb{C}^{\times}$$, $$n \geq 1$$, $$m \geq 0$$. Is it possible to find a counterexample to my plausible answer?

It may be better to first consider two (easier) questions, replacing the conditions (1) and (2) by:

(1') $$\{s_1, s_2\}$$ are algebraically independent over $$\mathbb{C}$$ +
(2') $$\mathbb{C}(s_1, s_2, y) = \mathbb{C}(x, y)$$; call this Question 1'.

(1'') $$s_1 \neq 0$$ +
(2'') $$\mathbb{C}(s_1, y) = \mathbb{C}(x, y)$$; call this Question 1''.
My guess for the answer to Question 1'' is: $$s_1 = \lambda x E(y) + F(y)$$, where
$$\lambda \in \mathbb{C}^{\times}$$ and $$E(y), F(y) \in \mathbb{C}(y^2)$$.

Observations:

(i) In the previous example we already have
$$\mathbb{C}(s_2, s_3, y) = \mathbb{C}(x, y)$$ and $$\mathbb{C}(s_1, s_2, y) = \mathbb{C}(x, y)$$.

(ii) We can write $$x = \frac{u(s_1, s_2, s_3, y)}{v(s_1, s_2, s_3, y)}$$ for some $$u, v \in \mathbb{C}(X, Y, Z, W)$$. So, if I am not mistaken, taking $$y = 0$$ (if possible?) we get $$x = \frac{u(s_1(x, 0), s_2(x, 0), s_3(x, 0))}{v(s_1(x, 0), s_2(x, 0), s_3(x, 0))}$$, and thus $$\mathbb{C}(s_1(x, 0), s_2(x, 0), s_3(x, 0)) = \mathbb{C}(x)$$.

Question 2: Is there an example where all of $$s_1, s_2, s_3$$ are necessary to obtain $$\mathbb{C}(s_1, s_2, s_3, y) = \mathbb{C}(x, y)$$? That is, it is not possible to omit one of $$\{s_1, s_2, s_3\}$$ and still get $$\mathbb{C}(x, y)$$.
I guess the answer is positive.

Thank you!

## posets – Maximum antichains with respect to two different partial orders on the same set

In my recent work I encountered a problem of this type:

Given a strictly increasing sequence of non-empty sets $$(A_n)_{n \in \mathbb{N}}$$ with two partial orders $$\leq$$ and $$\preceq$$ on each set – that is, for each $$n \in \mathbb{N}$$, $$A_n \subset A_{n+1}$$, and $$(A_n, \leq)$$ and $$(A_n, \preceq)$$ are partially ordered sets (the partial orders do not depend on $$n$$).

The question now is whether there are non-trivial sets $$B \in \mathcal{P}(A)$$ which are maximum antichains with respect to both $$\leq$$ and $$\preceq$$. If so, how many?

There is also an algorithm to create all antichains of $$(A_n, \leq)$$ from all the antichains of $$(A_{n-1}, \leq)$$, and a different algorithm that does the same for $$\preceq$$, but no algorithm is known that does this for both partial orders at the same time; therefore, I think I cannot work with classical induction.
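For small examples one can at least experiment by brute force. A sketch (exponential in $$|A_n|$$ and for illustration only; it searches for inclusion-maximal common antichains – restricting further to those of maximum size is a one-line change):

```python
from itertools import combinations

def common_maximal_antichains(elems, le1, le2):
    """Subsets of elems that are maximal antichains w.r.t. BOTH orders.

    le1, le2: callables implementing the two partial order relations.
    Exhaustive over all 2^|elems| subsets; a toy for experimentation.
    """
    def is_antichain(S, le):
        # no two distinct elements of S are comparable
        return all(not (le(a, b) or le(b, a)) for a, b in combinations(S, 2))

    def is_maximal(S, le):
        # no element outside S can be added while staying an antichain
        return all(any(le(a, b) or le(b, a) for a in S)
                   for b in elems if b not in S)

    out = []
    for r in range(1, len(elems) + 1):
        for S in combinations(elems, r):
            if all(is_antichain(S, le) and is_maximal(S, le)
                   for le in (le1, le2)):
                out.append(set(S))
    return out
```

For example, on the divisors of 12 with divisibility as one order and the usual $$\leq$$ as the other, only the singletons {1} and {12} survive.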

Do you know any theorem or book that deals with different partial orders on the same set and could be useful here? Or do you have any idea how to approach this problem?

Thank you very much in advance!

## dnd 5e – How is the challenge rating (CR) calculated for a mixed group of multiple monsters with respect to the treasure table?

The section on Treasure in the DMG is a bit confusing, probably because they had to fit in a lot of tables, and I know from experience that that is hard to do. However, there is a guide on how to use the tables.

From page 133 of the DMG:

When determining the contents of a treasure hoard that belongs to a monster, use the table that corresponds to that monster's challenge rating. When rolling to determine a treasure hoard that belongs to a large group of monsters, use the challenge rating of the monster that leads the group. If the hoard does not belong to anyone, use the challenge rating of the monster that presides over the dungeon or lair where the treasure is stored. If the hoard is a gift from a benefactor, use a challenge rating equal to the party's average level.

These tables include entries where you roll on magic item table X (X = A, B, C, …) N times, where N is determined by a die roll. This is also clarified before these tables:

When you use a Treasure table to randomly determine the contents of a treasure hoard and your roll indicates the presence of one or more magic items, you can determine the specific magic items by rolling on the appropriate table(s) here.

Dungeon Master's Guide, P. 144