## Functional analysis – Relationship between two quantities: $\lVert p^{-\max}_{-\varepsilon}\rVert_{2/3}$ and $\kappa_p^{-1}(1-\varepsilon)$

Given a non-negative sequence $p = (p_i)_{i \in \mathbb{N}} \in \ell_1$ such that $\lVert p \rVert_1 = 1$, we define the following two quantities, for each $\varepsilon \in (0,1)$:

1. Assuming, without loss of generality, that $p$ is non-increasing, let $k \geq 1$ be the smallest integer such that $\sum_{i \geq k} p_i \leq \varepsilon$. Then we define
$$\Phi(\varepsilon, p) := \left( \sum_{i=2}^{k-1} p_i^{2/3} \right)^{3/2} \tag{1}$$
that is, the $2/3$-quasinorm $\lVert p^{-\max}_{-\varepsilon} \rVert_{2/3}$ of the vector $p^{-\max}_{-\varepsilon}$ obtained from $p$ by removing its largest element and its $\varepsilon$-tail.

2. Defining, for $t > 0$, the $K$-functional between $\ell_1$ and $\ell_2$,
$$\kappa_p(t) = \inf_{a+b=p} \lVert a \rVert_1 + t \lVert b \rVert_2,$$
let
$$\Psi(\varepsilon, p) := \kappa_p^{-1}(1-\varepsilon) \tag{2}$$
(the right inverse, say).

Can one then prove upper and lower bounds relating (1) and (2)? Recent works of Valiant and Valiant [1] and of Blais, Canonne, and Gur [2] imply such a relationship in a rather indirect way (for $p$'s without non-trivial point masses, i.e., say, $\lVert p \rVert_2 < 1/2$), by showing that both quantities "approximately characterize" the sample complexity of a particular hypothesis testing problem on $p$ viewed as a discrete probability distribution; but a direct proof of such a relationship is not known (at least to me), even a loose one. Is there a direct proof relating $\Phi(\cdot, p)$ and $\Psi(\cdot, p)$ (upper and lower bounds), of the form
$$\forall p \text{ s.t. } \lVert p \rVert_2 \ll 1,\ \forall x, \qquad x^\alpha \, \Phi(c x, p) \leq \Psi(c x, p) \leq x^\beta \, \Phi(C x, p)$$
?
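For concreteness, here is a small numeric sketch of both quantities (the function names `phi` and `kappa` are mine, not from the question; the one-parameter search for the $K$-functional relies on the observation, via a standard subdifferential computation, that an optimal split has the clipped form $b_i = \min(p_i, c)$ — treat it as an assumption of the sketch):

```python
import numpy as np

def phi(eps, p):
    """Phi(eps, p) from (1): drop the largest entry and the eps-tail,
    then take the 2/3-quasinorm of what remains."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]       # non-increasing
    tails = np.cumsum(p[::-1])[::-1]                    # tails[i] = sum_{j >= i} p_j
    k = int(np.argmax(tails <= eps))                    # smallest (0-indexed) k with tail <= eps
    return float(np.sum(p[1:k] ** (2.0 / 3.0)) ** 1.5)  # 1-indexed entries 2 .. k-1

def kappa(p, t, grid=10_000):
    """kappa_p(t) = inf_{a+b=p} ||a||_1 + t ||b||_2,
    searched over the clipped family b = min(p, c)."""
    p = np.asarray(p, dtype=float)
    best = float(np.linalg.norm(p, 1))                  # c = 0, i.e. b = 0
    for c in np.linspace(0.0, p.max(), grid):
        b = np.minimum(p, c)
        best = min(best, float(np.sum(p - b) + t * np.linalg.norm(b)))
    return best

p = [0.4, 0.3, 0.2, 0.1]
print(phi(0.1, p))    # 2/3-quasinorm of (0.3, 0.2), raised to the 3/2
print(kappa(p, 1.0))  # here the best split is b = p, giving t * ||p||_2
```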
[2] (following earlier work of Montgomery-Smith [3]) shows a relation between (2) and a third quantity, interpolating between the $\ell_1$ and $\ell_2$ norms:
$$T \in \mathbb{N} \mapsto \lVert p \rVert_{Q(T)} := \sup\Big\{ \sum_{j=1}^T \Big( \sum_{i \in A_j} p_i^2 \Big)^{1/2} : A_1, \dots, A_T \text{ a partition of } \mathbb{N} \Big\} \tag{3}$$
namely, for all $t > 0$ such that $t^2 \in \mathbb{N}$,
$$\lVert p \rVert_{Q(t^2)} \leq \kappa_p(t) \leq \lVert p \rVert_{Q(2t^2)}.$$

[1] Gregory Valiant and Paul Valiant. An automatic inequality prover and instance optimal identity testing. SIAM Journal on Computing, 46(1):429–455, 2017.
[2] Eric Blais, Clément Canonne, and Tom Gur. Distribution testing lower bounds via reductions from communication complexity. ACM Transactions on Computation Theory (TOCT), 11(2), 2019.
[3] Stephen J. Montgomery-Smith. The distribution of Rademacher sums. Proceedings of the American Mathematical Society, 109(2):517–522, 1990.

## dnd 5e – Is there a functional difference between having a listed climbing speed and being able to climb at full walking speed?

The normal rules for climbing state:

> Each foot of movement costs 1 extra foot (2 extra feet in difficult terrain) when you're climbing, swimming, or crawling. You ignore this extra cost if you have a climbing speed and use it to climb, or a swimming speed and use it to swim. At the DM's option, climbing a slippery vertical surface or one with few handholds requires a successful Strength (Athletics) check. Similarly, gaining any distance in rough water might require a successful Strength (Athletics) check.
(noting that the Basic Rules version of the same passage neglects to clarify that you actually have to use your climbing speed to benefit, and accidentally suggests that a listed climbing speed is irrelevant if your walking speed is faster)

The Thief roguish archetype has the Second-Story Work feature, which provides the following benefit:

> When you choose this archetype at 3rd level, you gain the ability to climb faster than normal; climbing no longer costs you extra movement.

The Athlete feat, as one of its benefits, offers the same effect:

> Climbing doesn't cost you extra movement.

As far as I can tell, a 3rd-level Thief or an Athlete functionally has a climbing speed equal to their normal walking speed, but they definitely do not *actually* have a climbing speed. Unlike in previous editions, having a climbing speed does not seem to offer any additional benefits, such as a bonus to ability checks made to climb in difficult circumstances.

Is there any mechanical difference between having a real climbing speed equal to your walking speed and having the ability to climb at your walking speed without penalty? Are there other abilities or effects a character may be subject to for which the distinction is significant?

## inequalities – a simple functional inequality

Is there any general solution to the functional inequality
$$f(x y) \leq f(x) + f(y),$$
where $x$ and $y$ are in $[0,1]$? I can find many particular solutions, but I wonder whether there is a general description of the functions $f$ satisfying such an inequality. I have the following conditions:

1) $f : [0,1] \to \mathbb{R}$;
2) $f$ is non-negative on $[0,1]$;
3) $f(1)$ is bounded;
4) $f$ is once continuously differentiable on $(0,1)$.

Thanks for any suggestions.

## Functional analysis – Convergence of regression coefficients to the probabilities

By simulation we create a vector $Y = (y_1, y_2, \dots, y_n)$, where each $y_i \in \mathbb{R}$ is drawn independently from a given non-degenerate distribution.
Next we create, by simulation, a vector $\xi = (\xi_1, \xi_2, \dots, \xi_n)$, where the $\xi_i$ are independent realizations of a random variable taking only a finite number of values $[\alpha_1, \alpha_2, \dots, \alpha_k]$ with probabilities $p_1, p_2, \dots, p_k$ respectively. The $\alpha_i$ are given. Suppose we have a function $f : \mathbb{R} \to \mathbb{R}$. We regress
$$\begin{bmatrix} f(y_1 + \xi_1) \\ f(y_2 + \xi_2) \\ \vdots \\ f(y_n + \xi_n) \end{bmatrix}$$
on
$$\begin{bmatrix} f(y_1 + \alpha_1) & f(y_1 + \alpha_2) & \dots & f(y_1 + \alpha_k) \\ f(y_2 + \alpha_1) & f(y_2 + \alpha_2) & \dots & f(y_2 + \alpha_k) \\ \vdots & \vdots & & \vdots \\ f(y_n + \alpha_1) & f(y_n + \alpha_2) & \dots & f(y_n + \alpha_k) \end{bmatrix}$$
By regression I mean we optimize over the $\beta_j$ to minimize
$$\sum_{i=1}^n \Big( f(y_i + \xi_i) - \sum_{j=1}^k \beta_j f(y_i + \alpha_j) \Big)^2.$$
Intuitively I think that as $n \to \infty$ the least squares procedure should give us the following equation:
$$f(Y + \xi) = p_1 f(Y + \alpha_1) + p_2 f(Y + \alpha_2) + \dots + p_k f(Y + \alpha_k),$$
where $f(Y + \xi)$ and the $f(Y + \alpha_i)$ are just shorthand for the column vectors above. So my guess is that as $n \to \infty$, $\beta_i \to p_i$.

My question is: what conditions should be imposed on the function $f$ to obtain the equation above? Is my intuition correct that we should normally get such an equation? Perhaps we also need to impose some conditions on the distribution of the $y_i$.

## java – Naming two map-like functions

I'm working on a library that needs two
map-like functions: one maps from the current type to some other type, the other maps from the current type to a known destination type that must be reached at some point:

```java
class Mapper {
    SomeType map(Function mapper) { ... }
    SomeOtherType mapTo(Function mapper) { ... }
}
```

Keep in mind that this is an extremely reduced example, and that this question is not about whether these two variants are necessary. I'm just looking for good names in place of `map` and `mapTo` that best convey what these functions are doing, which `map` and `mapTo` do not. I looked for other examples on the web, but alternatives like `pipe` or `process` do not describe the functionality any better. I cannot overload `map` with the two definitions because of Java type erasure. Is there a term, which I may not know of, that matches these two functions?

## Functional analysis – Hilbert spaces and linear subspaces

Let $X$ and $Y$ be linear subspaces of a Hilbert space $\mathcal{H}$. Recall that
$$X + Y = \{\, x + y : x \in X,\ y \in Y \,\}.$$
Show that
$$(X+Y)^\bot = X^\bot \cap Y^\bot.$$

I tried to solve the problem in the following way. Let $z \in (X+Y)^\bot$; then for all $x + y \in X + Y$ we have $\langle z, x+y \rangle = 0$. So if $x + y \neq 0$, this implies $z = 0$; thus $(X+Y)^\bot = \{0\}$. Again, from $\langle z, x+y \rangle = 0$ we have $\langle z, x \rangle + \langle z, y \rangle = 0$, which implies $z \in X^\bot$ and $z \in Y^\bot$. So $X^\bot \cap Y^\bot = \{0\}$. Thus $(X+Y)^\bot = X^\bot \cap Y^\bot$.

I do not know whether my work is logically correct. Your comments and suggestions would be greatly appreciated. Thank you.

## nt.number theory – What is the most direct proof of the Riemann hypothesis for Dirichlet L-functions over function fields?

Let $\mathbb{F}$ be a finite field of order $q$, let $m$ be an irreducible polynomial in the ring $\mathbb{F}[T]$, and let $\chi$ be a Dirichlet character modulo $m$. Define the function field Dirichlet
$L$-function
$$L(s, \chi) := {\sum}'_f \, \frac{\chi(f)}{|f|^s},$$
where the sum is over monic polynomials $f$ in $\mathbb{F}[T]$, and $|f| = q^{\mathrm{deg}(f)}$ is the usual valuation. The Riemann hypothesis for this $L$-function states that its non-trivial zeros all lie on the line $\mathrm{Re}(s) = \frac{1}{2}$. An equivalent form of this result is that the error term in the prime number theorem for arithmetic progressions is of square-root type in the function field setting.

The only way I know to prove this is to show that such a Dirichlet $L$-function can be multiplied with other Dirichlet $L$-functions or zeta functions to form (up to some local factors) a Dedekind zeta function over some finite extension of $\mathbb{F}[T]$, which is essentially the local zeta function of some curve over $\mathbb{F}$; at this point any of the usual proofs of RH for such curves (Weil, Bombieri–Stepanov, etc.) can be used. But to obtain the finite extension, I must either resort to some general theorem of class field theory (the existence of ray class fields, which I understand to be a difficult result) or construct the extension explicitly using Carlitz modules or something equivalent to them. (The latter is discussed, for instance, in the answer to this other MathOverflow post.)

My question is whether there is a more direct way to establish RH for Dirichlet $L$-functions over function fields, without having to locate an appropriate field extension (or whether there is some "soft" way to abstractly demonstrate the existence of such an extension without much effort). For instance, is it possible to interpret the Dirichlet $L$-function directly as the zeta function of some $\ell$-adic sheaf? Or can elementary methods of Stepanov type be adapted to apply directly to the Dirichlet $L$-function (or perhaps to the product of all the $L$-functions of the given modulus $m$)?
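One standard piece of context, implicit in the question: for nontrivial $\chi$ the sum defining $L(s,\chi)$ truncates, so RH here is a statement about finitely many algebraic numbers. Writing $u = q^{-s}$:

```latex
% Orthogonality: for nontrivial \chi mod m, the monic polynomials of degree
% n >= deg(m) equidistribute over the residues mod m, so
%     \sum_{f \text{ monic},\ \deg f = n} \chi(f) = 0 \qquad (n \ge \deg m).
% Hence the L-function is a polynomial in u = q^{-s} of degree < deg(m):
L(s,\chi) \;=\; \sum_{n=0}^{\deg m - 1} \Bigl(\, \sum_{\deg f = n} \chi(f) \Bigr) u^{n}
          \;=\; \prod_{i} \bigl(1 - \gamma_i u\bigr),
% and RH is the assertion that every inverse root satisfies |\gamma_i| <= q^{1/2},
% with the non-trivial ones having |\gamma_i| = q^{1/2} exactly.
```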
## Functional analysis – Does this ideal in $B(L_1)$ have a (bounded) right approximate identity?

I will take an indirect route to defining this ideal, because (a) this route is how my collaborators and I arrived at it, and (b) this alternative definition, rather than the standard one, may suggest a direct attack on the question asked in the title.

Notational conventions: $L_p$ is shorthand for $L_p([0,1])$ with Lebesgue measure on $[0,1]$. If $E$ is a Banach space, then $L_\infty(E)$ denotes the space of essentially bounded, strongly measurable functions $[0,1] \to E$ (modulo a.e. equivalence). In the case $E = L_1$, we will regard elements of the space $A = L_\infty(L_1)$ as functions of two variables that are $L_\infty$ in the second variable and $L_1$ in the first. This convention has the advantage that, given $f \in A$ and $\xi \in L_1$, we can define $T_f(\xi) \in L_1$ by
$$T_f(\xi)(s) = \int_0^1 f(s,y) \, \xi(y) \, dy.$$
The map $f \mapsto T_f$ is an isometric embedding of $A$ as a closed subalgebra of $B(L_1)$. Given $f, g \in A$, we can define
$$(f \star g)(s,t) = \int_0^1 f(s,x) \, g(x,t) \, dx,$$
so that $f \star g \in A$ and $T_f T_g = T_{f \star g}$.

The image of $A$ in $B(L_1)$ under this embedding, which we will denote by $J$, turns out to be a much-studied object in the theory of operators on $L_1$: it is the set of representable operators on $L_1$.

QUESTION: does $J$ (or, equivalently, $(A, \star)$) have a bounded right approximate identity? What if we drop the boundedness requirement?

Note that the naive attempt of taking simple functions in $L_\infty(L_1)$ approximating the "Dirac" measure concentrated on the diagonal of $[0,1]^2$ will not work, because such functions are sent by our embedding to elements of $K(L_1)$, which is properly contained in $J$ (see below).
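The algebra structure above is just "continuous matrix multiplication", which a crude discretization makes visible (the grid size and the test kernels below are my own choices; this only illustrates the identity $T_f T_g = T_{f \star g}$, and says nothing about approximate identities):

```python
import numpy as np

n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h              # midpoint grid on [0,1]

# two smooth test kernels f(s,y), g(s,y) on [0,1]^2
F = np.cos(np.outer(x, x))                # F[i,j] ~ f(x_i, x_j)
G = np.exp(-np.abs(x[:, None] - x[None, :]))

xi = np.sin(2 * np.pi * x)                # a test function, standing in for xi in L_1

def T(K, v):
    """Discretized kernel operator: (T_K v)(s) = int_0^1 K(s,y) v(y) dy."""
    return h * (K @ v)

# composition of operators vs. the kernel product (f*g)(s,t) = int f(s,x) g(x,t) dx
lhs = T(F, T(G, xi))                      # T_f (T_g xi)
FG = h * (F @ G)                          # discretized kernel of f*g
rhs = T(FG, xi)                           # T_{f*g} xi

print(np.max(np.abs(lhs - rhs)))          # agreement up to floating-point error
```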
Some remarks for background context, which may be relevant to a solution. It can be shown that $J$ does not have a left approximate identity (bounded or otherwise). I would like to thank W. B. Johnson for pointing out why this is the case; below is an expanded and paraphrased version of his explanation.

From some vector measure theory, we know that $J$ contains the ideal $W(L_1)$ of weakly compact operators. Moreover, $J$ is contained in the ideal $CC(L_1)$ of completely continuous operators ("completely continuous" means that weakly convergent sequences are mapped to norm convergent sequences). (The containment $J \subseteq CC(L_1)$ follows from a theorem of Lewis and Stegall, which characterizes the operators in $J$ as those that factor through $\ell_1$. This also shows that $J$ is a $2$-sided ideal in $B(L_1)$, not just a subalgebra.)

It follows that if $S \in W(L_1)$ and $T \in J$, then $TS \in K(L_1)$. Since there are weakly compact operators $S$ on $L_1$ that are not compact (for example, take any non-compact map $L_1 \to \ell_2$ and compose with an isometric embedding $\ell_2 \to L_1$), it follows that there is no net $(T_\alpha)$ in $J$ such that $\Vert T_\alpha S - S \Vert \to 0$.

## fa.functional-analysis – Projection of a function $f \in L^1(\Omega)$ onto a finite-dimensional subspace

Let $\Omega \subset \Bbb{R}^n$ be a domain with $|\Omega| < \infty$, and let $f \in L^1(\Omega)$. Let $Y = \text{span}\{g_1, \dots, g_k\}$. Is there a characterization of the set of projections of $f$ onto $Y$, that is, of the set $M$ of all $g \in Y$ such that
$$\int_\Omega |f(x) - g(x)| \, dx = \inf_{h \in Y} \int_\Omega |f(x) - h(x)| \, dx,$$
or, equivalently,
$$M = \text{argmin}\left\{ \int_\Omega |f(x) - g(x)| \, dx : g \in Y \right\}?$$
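For the simplest instance $Y = \text{span}\{1\}$, the minimizers over constants are exactly the medians of $f$ (a standard fact), which a discrete sanity check makes concrete (the sample values and the grid resolution are my own choices):

```python
import numpy as np

# treat f as a finite sample (uniform measure on the sample points)
vals = np.array([0.1, 0.2, 0.7, 1.3, 5.0])

def l1_error(c):
    # discrete analogue of  int_Omega |f(x) - c| dx
    return float(np.mean(np.abs(vals - c)))

# brute-force the best constant approximation over a grid
grid = np.linspace(vals.min(), vals.max(), 20_001)
c_best = grid[np.argmin([l1_error(c) for c in grid])]

print(c_best, np.median(vals))  # the grid minimizer sits at the median
```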
The similar problem in $L^2$ (minimizing $\int_\Omega |f(x) - g(x)|^2 \, dx$) is very easy to solve, since there we have an inner product. However, since the $L^1$ norm is not strictly convex, our set $M$ need not consist of a single element in general. The solution of the $1$-dimensional case $Y = \text{span}\{1\}$ is well known: $g = c$, where $c \in \Bbb{R}$ is any median of $f$. Is there a complete treatment of this type of problem anywhere? Thanks in advance.

## functional equations – $2f(x) = f(y) \Rightarrow 2f(tx) = f(ty)$
