continuity: can linear maps be continuous / differentiable in non-complete vector spaces?

Let $E, F$ be normed vector spaces, and let $\mathcal{L}(E, F)$ be the set of linear maps from $E$ to $F$.

All the definitions of continuous (and differentiable) linear maps I've seen require that $E$ and $F$ be Banach spaces (i.e. complete normed vector spaces).

Why is this necessary? If $E$ or $F$ is not Banach, are there no linear maps from $E$ to $F$ that are continuous / differentiable?
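This is not part of the original question, but a small numerical illustration of the point at stake: continuity of a linear map only uses the norms of $E$ and $F$, not completeness. On the (non-complete) space of polynomials on $[0,1]$ with the sup norm, the identity map is trivially continuous, while differentiation is linear but unbounded. A sketch, assuming NumPy is available:

```python
import numpy as np

# Work with the (non-complete) normed space of polynomials on [0, 1]
# under the sup norm, approximated on a fine grid.
xs = np.linspace(0.0, 1.0, 10_001)

def sup_norm(values):
    return np.max(np.abs(values))

# p_n(x) = x^n has sup norm 1, while its derivative n*x^(n-1) has sup norm n,
# so no bound ||Dp|| <= C ||p|| can hold for the differentiation operator:
for n in [1, 5, 20, 100]:
    p = xs**n
    dp = n * xs**(n - 1)
    print(n, sup_norm(p), sup_norm(dp))   # prints 1 and n: the ratio grows without bound

# By contrast, the identity map (or any bounded linear map) is continuous here;
# the definition of continuity makes sense without completeness of E or F.
```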

dg.differential-geometry: Lie group (topological group) action on a differentiable stack (topological stack)

Let $G$ be a Lie group and $\mathcal{D}$ be a differentiable stack (I'm also fine with starting with a topological group and a topological stack).

I've seen someone mention somewhere that the notion of a group action on stacks first appeared in "Group actions on stacks and applications" by M. Romagny (correct me if I'm wrong). The following definition of a group action on a differentiable stack is from "Group actions on stacks and applications to equivariant string topology for stacks" by Gregory Ginot and Behrang Noohi.

A Lie group action on a differentiable stack is given by a morphism of stacks $\alpha: G \times \mathcal{D} \rightarrow \mathcal{D}$ satisfying some conditions. Although they did not specify, I think $G$ refers to the stack $(*/G)$, so an action of a Lie group $G$ on a differentiable stack $\mathcal{D}$ is a morphism of stacks $\alpha: (*/G) \times \mathcal{D} \rightarrow \mathcal{D}$ satisfying some conditions (correct me if I'm wrong).

Questions:

  1. Is there any notion of an action of a Lie group $G$ on a Lie groupoid $(\mathcal{G}_1 \rightrightarrows \mathcal{G}_0)$? Would it be a pair of maps $(G \times \mathcal{G}_1 \rightarrow \mathcal{G}_1, G \times \mathcal{G}_0 \rightarrow \mathcal{G}_0)$ giving actions of the Lie group on the manifolds $\mathcal{G}_1, \mathcal{G}_0$, compatible with the source, target, etc. maps of the Lie groupoid, in analogy with the usual notion of a Lie group action on a manifold? (I sketch one guess at such compatibility conditions below this list.)
  2. Is the notion of a Lie group action on a differentiable stack mentioned above deduced from / inspired by some notion of a Lie group action on a Lie groupoid, in the sense that this notion of a Lie group action on a Lie groupoid is Morita invariant and gives a Lie group action on the corresponding differentiable stack?
  3. Is this definition of a Lie group action on a differentiable stack directly / indirectly related to the notion of an action of a group object on an object of a category, as in Definition $2.15$ of "Notes on Grothendieck topologies, fibered categories and descent theory"?
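For concreteness, here is one guess (mine, not quoted from the papers above) at how the compatibility in question 1 could be spelled out. Writing $g \cdot h := \alpha_1(g, h)$ and $g \cdot x := \alpha_0(g, x)$ for the two actions, one would ask, for all $g \in G$, all $x \in \mathcal{G}_0$, and all composable $h, h' \in \mathcal{G}_1$,
$$ s(g \cdot h) = g \cdot s(h), \qquad t(g \cdot h) = g \cdot t(h), \qquad g \cdot (h \circ h') = (g \cdot h) \circ (g \cdot h'), \qquad g \cdot 1_x = 1_{g \cdot x}, $$
i.e. $G$ acts by Lie groupoid automorphisms. Whether this strict notion descends to differentiable stacks in a Morita-invariant way is exactly what questions 1 and 2 ask.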

real analysis – Lipschitz and nowhere continuously differentiable

It is well known that, by Rademacher's Theorem, a Lipschitz function $f: [0,1] \to \mathbb R$ is differentiable almost everywhere.

This leads to two related follow-up questions:

  • Can the set where $f$ is not differentiable be dense in $[0,1]$ but have measure zero?
  • Can the set of discontinuities of $f'$ be dense in $[0,1]$?
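Not part of the original question, but two classical candidates worth testing against these bullets (stated without proof): let $(q_n)_{n \ge 1}$ be an enumeration of the rationals in $[0,1]$ and set
$$ f_1(x) = \sum_{n \ge 1} 2^{-n} |x - q_n|, \qquad f_2(x) = \sum_{n \ge 1} 2^{-n} (x - q_n)^2 \sin\frac{1}{x - q_n}, $$
with the summands of $f_2$ taken to be $0$ at $x = q_n$. Both are Lipschitz on $[0,1]$; $f_1$ is convex and fails to be differentiable exactly on the dense null set $\{q_n\}$, while $f_2$ is differentiable everywhere but $f_2'$ is discontinuous at every $q_n$.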

real analysis: monotone function bounded by a continuously differentiable function

Let $f: (0, \infty) \to (0, \infty)$ be a non-increasing function such that $f(x) \to 0$ as $x \to \infty$. Show that there is a continuously differentiable function $\tilde f: (0, \infty) \to (0, \infty)$ such that $\tilde f(x) \geq f(x)$ for all $x > 0$ and $\tilde f(x) \to 0$ as $x \to \infty$. Further, $\tilde f$ can be chosen so that $x \mapsto -x \ln(\tilde f(x))$ is convex.

I was reading these notes on uniform integrability, and this is a claim in the proof of Lemma 12.7 that is taken for granted. Both statements seem intuitively clear, but I have no idea how to prove them. Could someone give me a hint?

Riemann integration: a continuously differentiable function $f$ from $[0,1]$ to $[0,1]$ has the properties (a) $f(0) = f(1) = 0$.

A continuously differentiable function $f$ from $[0,1]$ to $[0,1]$ has the properties

(a) $f(0) = f(1) = 0$.

(b) $f'(x)$ is a non-increasing function of $x$.

Prove that the arc length of the graph does not exceed 3.

As I understand the question, we want to show that $\int_{0}^{1} \sqrt{1 + f'(x)^2}\, dx \le 3$.

The first property gives the conditions of Rolle's theorem, which implies that $f'(c) = 0$ for some $c \in (0,1)$.

The second property suggests that the maximum value of $f$ is attained at $c$.

I tried to use the first mean value theorem for integrals, but could not reach a conclusion.

Is there any other technique to solve this question?
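Not from the original post, but one standard route (a sketch) uses the elementary bound $\sqrt{1 + t^2} \le 1 + |t|$: the arc length satisfies
$$ \int_0^1 \sqrt{1 + f'(x)^2}\, dx \le 1 + \int_0^1 |f'(x)|\, dx = 1 + \int_0^c f'(x)\, dx - \int_c^1 f'(x)\, dx = 1 + 2 f(c) \le 3, $$
since property (b) together with $f'(c) = 0$ gives $f' \ge 0$ on $[0, c]$ and $f' \le 0$ on $[c, 1]$, while $f(0) = f(1) = 0$ and $f(c) \le 1$ because $f$ maps into $[0,1]$.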

pde: weakly differentiable and locally bounded implies bounded?

If $\Omega \subset \mathbb{R}^n$ is bounded and $u \in H_0^1(\Omega)$ satisfies $u \in L^\infty_{loc}(\Omega)$, can we conclude that $u \in L^\infty(\Omega)$? I am pretty sure that this is true and I would like to use some kind of continuity argument to show it (since $u = 0$ on $\partial\Omega$), but we don't know a priori that $u$ is continuous. Any help is appreciated!

Find the values of a and b that make f(x) differentiable

I am asked whether there are real numbers a and b that make f(x) differentiable. I tried to look for similar questions, but I couldn't find any where the denominator is 0.
https://i.stack.imgur.com/RUEly.png

differential geometry: if $w$ is a differentiable map $S \to TS$, there exist differentiable $a$ and $b$ s.t. $w = a\psi_u + b\psi_v$

I have a differentiable map $w: S \to TS$, where $S$ is a surface in $\mathbb{R}^3$.

I need to prove that, given a parametrization $\psi: U \subset \mathbb{R}^2 \to \mathbb{R}^3$:

there exist $a: V \subset U \to \mathbb{R}$ and $b: V \subset U \to \mathbb{R}$, differentiable in some neighborhood $\psi(V)$ of $p$, s.t. $w = a\psi_u + b\psi_v$.

I would appreciate a hint, because I don't have a direction to follow.
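Not part of the original post, but a sketch of one standard approach, in case it helps as a hint: since $\psi_u$ and $\psi_v$ form a basis of the tangent plane at each point of $\psi(V)$, taking inner products of $w = a\psi_u + b\psi_v$ with $\psi_u$ and $\psi_v$ gives
$$ \begin{pmatrix} E & F \\ F & G \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \langle w, \psi_u \rangle \\ \langle w, \psi_v \rangle \end{pmatrix}, \qquad E = \langle \psi_u, \psi_u \rangle,\; F = \langle \psi_u, \psi_v \rangle,\; G = \langle \psi_v, \psi_v \rangle, $$
and since $EG - F^2 > 0$ the matrix is invertible, so Cramer's rule expresses $a$ and $b$ as quotients of differentiable functions, hence they are differentiable.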

real analysis: how to show a given PDE does not have a continuously differentiable solution

We are given the PDE $iu_x - u_y = 0$ with boundary condition $u(0, s) = g(s)$. And $g(s)$ is not analytic, so I have to show that the given PDE does not have even a $C^1$ solution.

I know that the given curve is not characteristic,

and the solution can be found using the method of characteristics as $u(x, y) = g(x + iy)$.

I know that $g$ is not analytic, but how do I show that the solution is not even $C^1$?

Any help would be appreciated
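Not from the original post, but a sketch of the standard argument for the equation as stated: if $u$ were a $C^1$ solution, then with $z = x + iy$,
$$ \partial_{\bar z} u = \tfrac{1}{2}(u_x + i u_y) = \tfrac{1}{2}\bigl(u_x + i(i u_x)\bigr) = 0, $$
so $u$ would be holomorphic in $z$, and its restriction to the line $x = 0$ would be real-analytic; that forces the boundary data $g$ to be real-analytic, contradicting the assumption. Hence no $C^1$ solution exists.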

real analysis: looking for differentiable symmetric functions whose global minimizer has all distinct components

For symmetric functions, people ask "Do symmetric problems have symmetric solutions?", for example in (3) and (4).
In general the answer is no. However, solutions of symmetric problems often exhibit some symmetry.
In (1), for a class of symmetric polynomials, under some conditions the global minimum
is attained at points with $|\{x_1, x_2, \cdots, x_n\}| \le 2$, that is, with at most two distinct components.
In (2), for a linear combination of elementary symmetric polynomials, under some conditions, each local extremum (an $n$-dimensional vector)
has at most $k$ distinct components.

However, I'm curious whether there are examples (of differentiable symmetric functions, under some conditions) in which all components of the global minimizer are distinct, if that is not impossible.

First, let's look at an example. Under the conditions $x, y, z > 0$ and $xyz = 1$,
the global minimum of $$g(x, y, z) = \frac{\sin\frac{\pi x}{2}}{x} + \frac{\sin\frac{\pi y}{2}}{y} + \frac{\sin\frac{\pi z}{2}}{z}$$
is attained at points $(x_0, y_0, z_0)$ with exactly two of $x_0, y_0, z_0$ equal,
for example $(x_0, x_0, \frac{1}{x_0^2})$ where $x_0 \approx 2.852$ and the minimum is $g_{\min} \approx 0.878$.
Also $g(1, 1, 1) = 3 > g_{\min}$ and $\lim_{\min(x, y, z) \to 0^+} g(x, y, z) > 0.884 > g_{\min}$.
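A quick numerical sanity check of the quoted values (my own sketch; it only searches the slice $(x, x, 1/x^2)$ of the constraint surface, which is where the claimed minimizer lies):

```python
import numpy as np

def g(x, y, z):
    """The symmetric function from the example above."""
    return (np.sin(np.pi * x / 2) / x
            + np.sin(np.pi * y / 2) / y
            + np.sin(np.pi * z / 2) / z)

# Grid search on the slice (x, x, 1/x^2) of the constraint xyz = 1.
xs = np.linspace(0.05, 20.0, 400_000)
vals = g(xs, xs, 1.0 / xs**2)
i = np.argmin(vals)

print("g(1,1,1) =", g(1.0, 1.0, 1.0))    # 3.0
print("slice minimizer x0 ~", xs[i])      # ~ 2.85
print("slice minimum    ~", vals[i])      # ~ 0.878
```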

In the example above, the components of the global minimizer are not all distinct.
Now suppose $f: (0, \infty) \rightarrow \mathbb{R}$ is a differentiable function.
Let $F(x, y, z) = f(x) + f(y) + f(z)$.
I want to find some examples of $f$ such that, under the conditions $x, y, z > 0$ and $xyz = 1$,
the global minimum of $F(x, y, z)$ is attained at some point $(x_0, y_0, z_0)$
with no two of $x_0, y_0, z_0$ equal, if that is not impossible.

By the way, for cyclically symmetric functions, I found examples where the global minimum
is attained at a point with all components distinct. For example,
let
$$F_1(a, b, c) = \frac{a^2 b + 2a^2 c + 2ab^2 + b^3 + 31abc}{(a + b + 50c)(a + b + c)^2}$$
and let $G(a, b, c) = F_1(a, b, c) + F_1(b, c, a) + F_1(c, a, b)$.
Then the minimum of $G(a, b, c)$ under the conditions $a, b, c \ge 0$ and $a + b + c = 3$ is attained neither at $(1, 1, 1)$ nor where $abc = 0$.
In fact, we have $G(1, 1, 1) = 37/156 \approx 0.2372$ and
$$G(a, 3 - a, 0) = \frac{49a^4 - 8094a^3 + 45900a^2 - 66177a - 4050}{9(49a + 3)(49a - 150)} > 0.21, \quad \forall\, 0 \le a \le 3.$$
However, $G(1/2, 1/8, 19/8) = 1018835/4907936 \approx 0.2076$;
in fact, the global minimum is attained at a point with all components distinct (and none of them zero).
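The two quoted values are easy to double-check in exact rational arithmetic (a small sketch of mine):

```python
from fractions import Fraction as Fr

def F1(a, b, c):
    """F_1 from the cyclic example above, evaluated exactly."""
    num = a*a*b + 2*a*a*c + 2*a*b*b + b**3 + 31*a*b*c
    den = (a + b + 50*c) * (a + b + c)**2
    return num / den

def G(a, b, c):
    return F1(a, b, c) + F1(b, c, a) + F1(c, a, b)

print(G(Fr(1), Fr(1), Fr(1)))              # 37/156  (~0.2372)
print(G(Fr(1, 2), Fr(1, 8), Fr(19, 8)))    # 1018835/4907936  (~0.2076)
```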

Reference:

(1) Vasile Cirtoaje, "The Equal Variable Method", J. Inequal. Pure Appl. Math., 8(1), 2007.
Online: https://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06.pdf

(2) Alexander Kovacec, et al., "A note on extrema of linear combinations of elementary symmetric functions",
Linear and Multilinear Algebra, Volume 60 (2012), Issue 2.

(3) R. F. Rinehart, "On extrema of functions satisfying certain symmetry conditions",
The American Mathematical Monthly, vol. 47, no. 3 (March 1940), pp. 145-152.

(4) William C. Waterhouse, "Do symmetric problems have symmetric solutions?", The American Mathematical Monthly, vol. 90, 1983, pp. 378-387.