## dg.differential geometry – Induced action by an involution on spinor bundle and Dirac operator

Let $$M$$ be a $$4n$$-dimensional spin manifold with a fixed Riemannian metric $$g$$. Let $$S$$ be a spinor bundle over $$M$$ and fix the Riemannian connection on it. There is a decomposition $$S=S^+\oplus S^-$$, where $$S^\pm$$ is the $$\pm 1$$-eigenbundle with respect to the Clifford multiplication by $$\omega=e_1\cdot\ldots\cdot e_{4n}$$, where $$\{e_1,\ldots,e_{4n}\}$$ is a positively oriented orthonormal basis of $$T_xM$$ at every $$x\in M$$.

Now let $$t: M \rightarrow M$$ be an orientation- and spin-structure-preserving involution which is an isometry with respect to the metric $$g$$. This involution lifts to an action $$T: S \rightarrow S$$ on the spinor bundle.

Question 1: How does one show that $$T$$ preserves the above $$\mathbb{Z}_2$$-grading and that it commutes with the Dirac operator $$D: \Gamma(S) \rightarrow \Gamma(S)$$ defined by $$D\sigma=\sum_j e_j\cdot \nabla_{e_j}\sigma$$, where $$\nabla$$ is the covariant derivative associated to the Riemannian connection?

Question 2: How does the above Dirac operator induce a Dirac operator on the quotient manifold $$M/t$$?
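For context, the skeleton of the argument I would expect for Question 1 (a hedged sketch, assuming the lift satisfies $$T(X\cdot\sigma)=dt(X)\cdot T\sigma$$):

```latex
% Hedged sketch. Since t is an orientation-preserving isometry, dt maps a
% positively oriented orthonormal frame \{e_j\} to another such frame, so
% Clifford multiplication by the volume element is preserved:
%     T(\omega\cdot\sigma) = dt(e_1)\cdots dt(e_{4n})\cdot T\sigma
%                          = \omega\cdot T\sigma,
% hence T maps the (\pm 1)-eigenbundles S^{\pm} to themselves.
% Naturality of the spin connection under isometries gives
%     T\circ\nabla_X = \nabla_{dt(X)}\circ T,
% so, summing over the orthonormal frame \{dt(e_j)\},
%     T(D\sigma) = \sum_j dt(e_j)\cdot\nabla_{dt(e_j)}(T\sigma) = D(T\sigma).
```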

## fa.functional analysis – Condition on kernel convolution operator

I am studying O'Neil's convolution inequality. It is stated for $$N$$-functions $$\Phi_1$$ and $$\Phi_2$$ satisfying $$\Phi_i(2t)\approx \Phi_i(t), \quad i=1,2,$$ for $$t\gg 1$$, where $$k \in M_+(\mathbb{R}^n)$$ is the kernel of a convolution operator.

Here $$\rho$$ is an r.i. norm on $$M_+(\mathbb{R}^n)$$ given in terms of the r.i. norm $$\bar\rho$$ on $$M_+(\mathbb{R}_+)$$ by
$$\rho(f)=\bar\rho(f^*), \quad f \in M_+(\mathbb{R}^n).$$

Denote by $$\rho_{\Phi}$$ the Orlicz gauge norms, for which
$$(\bar\rho_{\Phi})_d(h)\approx \bar\rho_{\Phi}\left(\frac{1}{t}\int_0^t h\right).$$

It is stated that
$$\rho_{\Phi_1}(k\star f)\leq C \rho_{\Phi_2}(f)$$
if
$$(i) \quad \bar\rho_{\Phi_1}\left(\frac 1t \int_0^t k^*(s)\int_0^s f^*\right)\leq C \bar\rho_{\Phi_2}(f^*)$$
$$(ii) \quad \bar\rho_{\Phi_1}\left(\frac 1t\int_0^t f^*(s)\int_0^s k^*\right)\leq C \bar\rho_{\Phi_2}(f^*)$$
$$(iii) \quad \bar\rho_{\Phi_1}\left(\int_t^{\infty}k^*f^*\right)\leq C \bar\rho_{\Phi_2}(f^*).$$

I cannot understand under which conditions on the kernel the inequalities (i), (ii) and (iii) would hold.

## Simplifications and error in matrix integral operator

I defined the matrix integral operator

``````
\[ScriptCapitalD]11[w_] := (1/2)*D[w[x], x] - r[x]*Integrate[q[x]*w[x], x]
\[ScriptCapitalD]12[w_] := r[x]*Integrate[r[x]*w[x], x]
\[ScriptCapitalD]21[w_] := -q[x]*Integrate[q[x]*w[x], x]
\[ScriptCapitalD]22[w_] := -(1/2)*D[w[x], x] + q[x]*Integrate[r[x]*w[x], x]
m = {{\[ScriptCapitalD]11, \[ScriptCapitalD]12}, {\[ScriptCapitalD]21, \[ScriptCapitalD]22}};
``````

and defined how it acts in a column vector

``````
operate[matrix_, column_] :=
  Table[Inner[#1[#2] &, \[Mu], column, Plus], {\[Mu], matrix}]
``````

Code result is

``````
result = operate[m, {r[x], q[x]}];
MatrixForm[FullSimplify[result]]
``````

How can I further simplify the vanishing integrals? (see image below) What am I doing wrong that I obtain the correct result only up to a factor of $$2$$?

## How can I do the Pfaffian d operator in LaTeX?

It is a ‘d’ with a slash on the top, like `\hslash`.
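One commonly circulated macro for this glyph, for reference (the name `\dbar` is my choice; this is the usual TeX-folklore overlay construction, not an official command):

```latex
% Overlay a macron-like glyph (\mathchar'26) on an upright d to get the
% "slashed d" used for inexact differentials (cf. \hslash for \hbar).
\newcommand*{\dbar}{{\mathchar'26\mkern-12mu \mathrm{d}}}
% usage: $\dbar Q = T\,\dbar S$
```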

## ap.analysis of pdes – Lions/DiPerna type commutator estimates for second-order differential operator in Fokker-Planck type equation

I have a question about a particular commutator estimate as it occurs in the study of Fokker-Planck equations with low regularity data, see e.g. (1,2).

Denote by $$\rho_\varepsilon$$ some usual regularizing kernel (mollifier) family and let $$1 < r < 2$$ be given and fixed. We have functions $$\sigma \in W^{1,\infty}_{loc}(\mathbb{R}^N)^{N \times K} \quad\text{and}\quad p \in L^\infty(0,T;L^r(\mathbb{R}^N)) \quad \text{with} \quad \sigma^\top \nabla p \in L^2(0,T;L^r(\mathbb{R}^N))$$ at hand. (No information on $$\nabla p$$ itself.) Let $$p_\varepsilon := \rho_\varepsilon \star p$$ be the regularized $$p$$.

Define the mollification commutator $$[\rho_\varepsilon, D](f) := \rho_\varepsilon \star (Df) - D(\rho_\varepsilon \star f)$$ for some function $$f$$ and a differential operator $$D$$. Let $$\gamma \in C^2(\mathbb{R})$$ with bounded first and second derivatives and let $$\varphi$$ be a test function on $$(0,T) \times \mathbb{R}^N$$.

I want to show that $$\int_0^T\int_{\mathbb{R}^N} \varphi \, \gamma'(p_\varepsilon) \, r_\varepsilon \longrightarrow 0 \quad \text{as}~\varepsilon \to 0,\tag{1}$$
where
$$r_\varepsilon := \rho_\varepsilon \star \partial_i(\sigma_{ik}\sigma_{jk}\partial_j p) - \partial_i(\sigma_{ik}\sigma_{jk}\partial_j p_\varepsilon) = \partial_i \bigl([\rho_\varepsilon,\sigma_{ik}\sigma_{jk}\partial_j](p)\bigr).$$

In (1,2) the case $$r=2$$ is considered, where this indeed works out. (One also has $$p \in L^\infty(0,T;L^\infty(\mathbb{R}^N))$$ there, in addition to the integrability of $$p$$ introduced above, but this does not necessarily enter this argument under the assumptions above, as far as I see.) They rewrite $$r_\varepsilon$$ further into \begin{align*}r_\varepsilon &= \partial_i \bigl(\sigma_{ik}[\rho_\varepsilon,\sigma_{jk}\partial_j](p)\bigr) + [\rho_\varepsilon,\partial_i\sigma_{ik}](\sigma_{jk}\partial_j p) + [\rho_\varepsilon,\sigma_{ik}\partial_i](\sigma_{jk}\partial_j p) \\ &=: \partial_i (\sigma_{ik}R_\varepsilon) + S_\varepsilon + T_\varepsilon.\end{align*}

These commutators are then shown to converge to zero suitably by the commutator lemma (see below), which transfers to (1) by dominated convergence. However, I cannot make the one term with a derivative on the commutator, $$\partial_i (\sigma_{ik}R_\varepsilon)$$, work out in this setting, since one needs to integrate by parts for this term in (1), and the resulting integrable bounds in the critical term $$\int_0^T \int_{\mathbb{R}^N} \varphi \, \gamma''(p_\varepsilon) \, \sigma^\top \nabla p_\varepsilon \cdot R_\varepsilon$$ are only $$1/r + 1/r$$-integrable in space (for $$\sigma^\top \nabla p_\varepsilon$$ and $$R_\varepsilon$$), which works out to $$1$$ exactly for $$r=2$$.

I have tried modifying the splitting, but to no avail. I am willing to assume more or less any regularity on $$\sigma$$. However, I would like to avoid $$\sigma\sigma^\top$$ being uniformly positive definite. (This would lead to $$p(t) \in W^{1,r}(\mathbb{R}^N)$$, and then I think I can do it, but this is the one assumption I would like to avoid.)

I had suspected (hoped) that a second derivative on $$\sigma$$ could compensate for the lack of differentiability of $$p$$, but the proof of the commutator lemma seems to use a quite nice cancellation property which works only for first-order combinations, so this led nowhere.

The motivation is to consider an inhomogeneous Fokker-Planck equation with data comparable to the cited works (1,2).

Any hints would be welcome.

Lemma (Commutator lemma, (3, Lemma II.1)). Let $$1/\beta = 1/\alpha + 1/s$$ and

• $$g \in L^{\alpha}_{loc}(\mathbb{R}^N)$$ and $$f_1 \in L^s_{loc}(\mathbb{R}^N)$$, and
• $$c \in W^{1,\alpha}_{loc}(\mathbb{R}^N)$$ and $$f_2 \in L^s_{loc}(\mathbb{R}^N)$$.

Then $$[\rho_\varepsilon, g](f_1) \to 0 \quad \text{and} \quad [\rho_\varepsilon, c\,\partial_i](f_2) \to 0, \quad\text{each in}~L^\beta_{loc}(\mathbb{R}^N).$$

(1) Le Bris, C.; Lions, P.-L., Existence and uniqueness of solutions to Fokker-Planck type equations with irregular coefficients, Commun. Partial Differ. Equations 33, No. 7, 1272-1317 (2008). ZBL1157.35301.

(2) Luo, De Jun, Fokker-Planck type equations with Sobolev diffusion coefficients and BV drift coefficients, Acta Math. Sin., Engl. Ser. 29, No. 2, 303-314 (2013). ZBL1318.35130.

(3) DiPerna, R. J.; Lions, P. L., Ordinary differential equations, transport theory and Sobolev spaces, Invent. Math. 98, No. 3, 511-547 (1989). ZBL0696.34049.

## operator algebras – L(G) is a factor $$\implies$$ G is i.c.c.

I have easily shown that if $$G$$ is an i.c.c. (infinite conjugacy classes) group, then its group von Neumann algebra $$L(G) := \mathbb{C}[\lambda(G)]'' = (\operatorname{span} \lambda(G))''$$ is a factor (its center is $$\mathbb{C}1$$, with no nontrivial projections). However, I cannot see why the other direction is guaranteed, i.e. that $$L(G)$$ being a factor would force $$G$$ to be i.c.c. I cannot find a full iff proof anywhere on this website, just the above direction, but it appears that the iff statement is true. Can anyone point me to a source? I have tried assuming $$G$$ NOT i.c.c. and looking for a nontrivial projection in the center of $$L(G)$$ to prove it myself, but have not gotten anywhere.
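For context, the outline I would expect such a proof to follow (a hedged sketch of the standard argument, not from a specific source):

```latex
% Hedged sketch: if G is not i.c.c., pick g \neq e with a finite
% conjugacy class C = \{ hgh^{-1} : h \in G \}, |C| < \infty.
% The element
%     x := \sum_{c \in C} \lambda(c)
% satisfies \lambda(h) x \lambda(h)^* = x for all h (conjugation permutes C),
% so x commutes with every \lambda(h), hence lies in the center of L(G).
% Since x \notin \mathbb{C}1 (its Fourier coefficients are supported on
% C, which does not contain e), the center is nontrivial and L(G) is
% not a factor.
```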

## operator theory – holomorphic functional calculus for hereditary C*-subalgebras

Let $$A$$ be a unital $$C^*$$-algebra in $$\mathcal{B}(H)$$ with unit the identity operator $$I$$.

Assume that $$\mathcal{A}\subset A$$ is a $$*$$-subalgebra of $$A$$ that contains $$I$$.

Moreover, assume that $$\mathcal{A}$$ is closed under holomorphic functional calculus; that is, for every $$a\in \mathcal{A}$$ and every function $$f$$ holomorphic in a neighbourhood of the spectrum $$\sigma_A(a)=\sigma_{\mathcal{B}(H)}(a)$$, we have $$f(a)\in \mathcal{A}$$.

Question: Let $$p\in \mathcal{A}$$ be a non-zero projection. Is it true that the $$*$$-subalgebra $$p\mathcal{A}p\subset pAp$$ is also closed under holomorphic functional calculus?

Issues:

• The unit of $$pAp$$ is now $$p$$.
• Is the spectrum of an element in $$pAp$$ related to its spectrum in $$A$$?

Would it help if $$p$$ were a very big projection? Like a full projection; that is, $$ApA$$ is dense in $$A$$.
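Regarding the second bullet, the relation I believe holds (my sketch, assuming $$p \neq I$$):

```latex
% For a \in pAp, with pAp viewed as a unital algebra with unit p:
%     \sigma_{pAp}(a) \cup \{0\} = \sigma_{A}(a) \cup \{0\}.
% Reason (sketch): for \lambda \neq 0, if b \in pAp inverts a - \lambda p
% in pAp, then b - \lambda^{-1}(1-p) inverts a - \lambda 1 in A, and
% conversely the inverse of a - \lambda 1 compresses to pAp.
% The point 0 must be adjoined on both sides, since a always kills (1-p)H.
```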

Thanks!

## Using IF AND operator in MS Excel leads to argument error

I have been trying to make this formula work using IF and AND operators in Excel. However, I keep getting the "too many arguments" error. I have checked the open and closed parentheses. Can someone correct the formula if there is anything wrong?

=IF(AND(F2>=190, F2<=194),30,0,IF(AND(F2>=196, F2<=200),30,0,IF(AND(F2>=185, F2<=189),20,0,IF(AND(F2>=201, F2<=205),20,0,IF(AND(F2>=175, F2<=184),10,0,IF(AND(F2>=206, F2<=215),10,0, IF(EXACT(F2,Sheet2!$A4),50,0)))))))
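For reference, a guess at the intended nesting (hedged; `IF` takes at most three arguments, so each fallback `IF` must occupy the third slot, and only the innermost branch carries the final `0`):

```
=IF(AND(F2>=190,F2<=194),30,
 IF(AND(F2>=196,F2<=200),30,
 IF(AND(F2>=185,F2<=189),20,
 IF(AND(F2>=201,F2<=205),20,
 IF(AND(F2>=175,F2<=184),10,
 IF(AND(F2>=206,F2<=215),10,
 IF(EXACT(F2,Sheet2!$A4),50,0)))))))
```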

## optimization – How to use gradient descent with the proximal operator

I know we use the proximal gradient method for optimization problems of the form
$$\min_w f(w) = L(w) + R(w)$$
where $$L(w)$$ is convex and differentiable, but $$R(w)$$ is convex and not differentiable. If both were differentiable, we could use a gradient descent method. A solution to this kind of problem is the proximal gradient method, where the proximal operator is:
$$\operatorname{prox}_R(u) = \operatorname{argmin}_w \left(R(w) + \frac{1}{2}\|w-u\|_2^2\right)$$
Then, using function $$f$$, we linearize $$L(w)$$ at $$w^k$$:
$$w^{k+1} \in \operatorname{argmin}_w\left(R(w) + L(w^k) + \nabla L(w^k)^T(w-w^k) + \frac{1}{2t_k}\|w-w^k\|_2^2\right)$$
This can be simplified and results in:
$$(1) \quad w^{k+1} \in \operatorname{prox}_{t_k R} (w^k-t_k g^k) = \operatorname{argmin}_w\left(t_k R(w) + \frac{1}{2}\|w-w^k+t_k g^k\|_2^2\right)$$
where $$g^k := \nabla L(w^k)$$.
Therefore, in the ISTA algorithm, $$w^{k+1}$$ is computed by solving (1).

Now my question is the following:
How are we going to solve for $$w^{k+1}$$? The proximal operator still contains $$R(w)$$, which is not differentiable, so we cannot find the minimizer by setting the gradient to zero.
Thus, what was the goal of using the proximal operator? Our script says that it is used for classes of functions which are partly non-differentiable, where standard gradient descent does not work. But how has this reformulation helped us?
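To illustrate why the reformulation helps (a minimal sketch of my own, not from the script): for many choices of $$R$$ the prox has a closed form, so step (1) never differentiates $$R$$. For example, with $$R(w)=\lambda\|w\|_1$$ the prox is componentwise soft-thresholding:

```python
# Minimal ISTA sketch (hypothetical example, names are mine): minimize
#   f(w) = L(w) + R(w),  L(w) = 0.5 * sum((w_i - a_i)^2),  R(w) = lam*||w||_1.
# prox of t*lam*||.||_1 is soft-thresholding -- a closed form, so the
# nonsmooth R is never differentiated.

def soft_threshold(u, t):
    # prox_{t*||.||_1}(u), applied componentwise
    return [(abs(ui) - t) * (1.0 if ui > 0 else -1.0) if abs(ui) > t else 0.0
            for ui in u]

def ista(a, lam, step=0.5, iters=200):
    w = [0.0] * len(a)
    for _ in range(iters):
        grad = [wi - ai for wi, ai in zip(w, a)]         # gradient of L at w^k
        z = [wi - step * gi for wi, gi in zip(w, grad)]  # gradient step on L
        w = soft_threshold(z, step * lam)                # prox step on R
    return w

w_star = ista([3.0, -0.5, 1.2], lam=1.0)
# For this separable quadratic L, the exact minimizer of f is
# soft_threshold(a, lam), which the iteration approaches geometrically.
```

So the point is not that gradient descent is applied to the prox, but that the prox subproblem is solved exactly (often in closed form) while only the smooth part $$L$$ contributes a gradient.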

## sql server – Eliminate filter operator before columnstore index scan operator

I have a large fact table with millions of rows called MyLargeFactTable,
and it is a clustered columnstore table.

There is also a composite primary key constraint on it
(on the customer_id, location_id, order_date columns).

I also have a temp table, #my_keys_to_filter_MyLargeFactTable,
with the very same 3 columns,
and it contains a few thousand UNIQUE combinations of these 3 key values.

The following query gives me back the desired result set

``````...
FROM #my_keys_to_filter_MyLargeFactTable AS t
JOIN dbo.MyLargeFactTable AS m
ON m.customer_id = t.customer_id
AND m.location_id = t.location_id
AND m.order_date = t.order_date
``````

but I notice that the Index Scan operator on the fact table returns more rows than it should (about a million) and feeds them into a Filter operator, which further reduces the result set to the desired few thousand rows. The Index Scan operator reads far too many rows (and they are quite wide rows), increasing IO and significantly slowing down the whole query.

Are my parameters not sargable?

How could I remove the Filter operator and somehow force the Index Scan operator to read only the few thousand rows?
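For what it's worth, one pattern sometimes tried here (a hedged sketch using the names above, not a guaranteed fix): with a hash join, the optimizer can build a bitmap from the small temp table on the build side and push it into the columnstore scan, so rows (and often whole rowgroups) are eliminated at the scan itself rather than in a separate Filter:

``````
-- Hedged sketch: encourage a hash join so a bitmap filter from the
-- build side (#my_keys_to_filter_MyLargeFactTable) can be pushed into
-- the columnstore scan of MyLargeFactTable.
SELECT ...
FROM #my_keys_to_filter_MyLargeFactTable AS t
JOIN dbo.MyLargeFactTable AS m
    ON  m.customer_id = t.customer_id
    AND m.location_id = t.location_id
    AND m.order_date  = t.order_date
OPTION (HASH JOIN);
``````

Whether the bitmap actually gets pushed depends on the data types and the plan shape, so checking the actual execution plan after the hint is still necessary.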