Differential equations – Regularity of the ODE flow with Zygmund coefficients.

A Zygmund function $f \in \mathscr{C}^1$ is a continuous function that satisfies $|f(x+h) + f(x-h) - 2f(x)| \le C|h|$ for all $x, h \in \mathbb{R}^n$ in the domain.
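As a quick numerical sanity check (my addition, not part of the question): the classical example $f(x) = x\log|x|$ satisfies this second-difference bound even though it is not Lipschitz at $0$. Probing the worst case $x = h$, the quotient is identically $2\log 2$.

```python
import math

def second_diff_quotient(f, x, h):
    """The Zygmund quotient |f(x+h) + f(x-h) - 2 f(x)| / |h|."""
    return abs(f(x + h) + f(x - h) - 2 * f(x)) / abs(h)

def f(x):
    # x log|x|, extended continuously by 0 at the origin
    return x * math.log(abs(x)) if x != 0 else 0.0

# Probe x = h as h -> 0: the quotient stays at 2 log 2, so the Zygmund
# bound holds there, even though f'(x) = log|x| + 1 blows up at 0.
for k in range(1, 8):
    h = 10.0 ** (-2 * k)
    print(f"h = {h:.0e}  quotient = {second_diff_quotient(f, h, h):.6f}")
```

(One can check by hand that $f(2h) + f(0) - 2f(h) = 2h\log 2$, matching the printed values.)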

According to Markus' article "A singularity theorem for ordinary differential equations involving smooth functions", a $\mathscr{C}^1$ vector field $X$ defines a flow $\mathscr{F}_X(t,p): \mathbb{R} \times M \to M$, that is, the unique map satisfying $\frac{\partial}{\partial t}\mathscr{F}_X(t,p) = X \circ \mathscr{F}_X(t,p)$ and $\mathscr{F}_X(0,p) = p$.

But unlike the Lipschitz or Hölder classes $\mathscr{C}^\gamma$ for $\gamma > 1$, which are stable under composition, the composition of two Zygmund functions may fail to be Zygmund.

For example, $(x\log x) \circ (x\log x) = x\log^2 x + x\log x\,\log\log x$, and $x\log^2 x$ is no longer Zygmund.
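To see the failure concretely (again my numerical sketch, not part of the question): for $g(x) = x\log^2|x|$ the second-difference quotient at $x = h$ equals $|4\log 2\,\log h + 2\log^2 2|$, which diverges like $4\log 2\,|\log h|$ as $h \to 0$.

```python
import math

def g(x):
    # x log^2|x|, extended continuously by 0 at the origin
    return x * math.log(abs(x)) ** 2 if x != 0 else 0.0

def zygmund_quotient(f, x, h):
    return abs(f(x + h) + f(x - h) - 2 * f(x)) / abs(h)

# At x = h the quotient is |4 log2 * log h + 2 log^2 2|: it grows without
# bound as h -> 0, so g violates the Zygmund condition.
for k in range(1, 8):
    h = 10.0 ** (-2 * k)
    print(f"h = {h:.0e}  quotient = {zygmund_quotient(g, h, h):.3f}")
```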

I believe the following is true, and my question is: is there an elementary example showing the following?

Show that there is a $\mathscr{C}^1$ function $f(u,s)$ on $\mathbb{R}^2$ such that the solution of $\frac{\partial}{\partial t}\phi(t,s) = f(\phi(t,s), s)$, $\phi(0,s) = s$, is not $\mathscr{C}^1$ in any neighborhood of $0$.

The original question I have in mind is, if possible, to find an example showing:

Show that there is a $\mathscr{C}^1$ vector field $X$ such that, if $\Phi(t,s)$ is a parameterization (a continuous map that is a homeomorphism onto its image) near $0$ with $\partial_t \Phi \in C^0$ and $\partial_t \Phi(t,s) = X(\Phi(t,s))$, then $\Phi \notin \mathscr{C}^1$ and, for every $s$, $\Phi(\cdot, s) \notin \mathscr{C}^2_t$ in any neighborhood of $0$.

The idea I have is to make the $u$-variable of $f$ oscillate like $u\log u$. I tried $f(u,s) = u\log u$ and $\phi(1,s) = -s\log s$. I get $\partial_t \phi(1,0) \notin \mathscr{C}^1$, but in this case $\phi$ still seems to be $\mathscr{C}^1$ near $(1,0)$, and of course it is not locally bounded near $0$.
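To probe this attempt numerically, here is a minimal sketch (mine, not part of the question; the RK4 integrator and step count are arbitrary choices). For the autonomous choice $f(u,s) = u\log u$ the flow can also be solved in closed form, $\phi(t,s) = s^{e^t}$ for $0 < s < 1$, which the integrator reproduces; the finite-difference slope in $s$ tends to $0$ as $s \to 0^+$, consistent with $\phi$ looking $\mathscr{C}^1$ near $(1,0)$.

```python
import math

def f(u):
    # f(u, s) = u log u (no actual s-dependence), extended by 0 at u = 0
    return u * math.log(u) if u > 0 else 0.0

def flow(s, t_end=1.0, n=2000):
    """Classical RK4 integration of phi' = f(phi), phi(0) = s."""
    phi, dt = s, t_end / n
    for _ in range(n):
        k1 = f(phi)
        k2 = f(phi + 0.5 * dt * k1)
        k3 = f(phi + 0.5 * dt * k2)
        k4 = f(phi + dt * k3)
        phi += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return phi

for s in [1e-2, 1e-4, 1e-6]:
    exact = s ** math.e                    # closed form s^(e^t) at t = 1
    ds = s * 1e-3
    slope = (flow(s + ds) - flow(s - ds)) / (2 * ds)  # ~ d phi / d s at t = 1
    print(f"s = {s:.0e}  phi(1,s) = {flow(s):.3e}  exact = {exact:.3e}  slope = {slope:.3e}")
```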

The obstruction also comes from checking how the second difference blows up:
$$\frac{\phi(t,s)+\phi(t,s')}{2} - \phi\Big(t, \frac{s+s'}{2}\Big) = \int_0^t \frac{f(\phi(u,s),s)+f(\phi(u,s'),s')}{2} - f\Big(\phi\Big(u, \frac{s+s'}{2}\Big), \frac{s+s'}{2}\Big)\, du.$$
One would like to arrange
$$\int_0^t f\Big(\frac{\phi(u,s)+\phi(u,s')}{2}, \frac{s+s'}{2}\Big) - f\Big(\phi\Big(u, \frac{s+s'}{2}\Big), \frac{s+s'}{2}\Big)\, du \gg |s-s'|.$$

But the facts that $\phi(0,s) = s$ and that $\phi$ is $C^1$ show that the integral is bounded:
$$\Big|\int_0^t f\Big(\frac{\phi(u,s)+\phi(u,s')}{2}, \frac{s+s'}{2}\Big) - f\Big(\phi\Big(u, \frac{s+s'}{2}\Big), \frac{s+s'}{2}\Big)\, du\Big| \lesssim_\epsilon |s-s'|^{1-\epsilon}\, t^2$$
(since $f \in \mathscr{C}^{1-\epsilon}$). This calculation shows that we are far from realizing the idea if we simply set $s$ or $s'$ to be $0$.
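As a sanity check of the integral identity above, here is a numerical sketch (mine; the autonomous choice $f(u,s) = u\log u$ and the Euler discretization are my assumptions). With a common Euler step for the three trajectories starting at $s$, $s'$, and $\frac{s+s'}{2}$, the discrete analogue of the identity holds exactly, so the two sides agree up to rounding error.

```python
import math

def f(u):
    return u * math.log(u) if u > 0 else 0.0

def check_identity(s, sp, t_end=1.0, n=20000):
    """Return (lhs, rhs) of the second-difference integral identity at t_end.

    lhs = (phi(t,s) + phi(t,s'))/2 - phi(t, (s+s')/2), with all three
    trajectories advanced by the same explicit Euler step; rhs accumulates
    the integrand by a left Riemann sum, so lhs == rhs up to rounding.
    """
    a, b, m = s, sp, 0.5 * (s + sp)
    dt, rhs = t_end / n, 0.0
    for _ in range(n):
        rhs += (0.5 * (f(a) + f(b)) - f(m)) * dt
        a += dt * f(a)
        b += dt * f(b)
        m += dt * f(m)
    lhs = 0.5 * (a + b) - m
    return lhs, rhs

lhs, rhs = check_identity(0.3, 0.5)
print(f"lhs = {lhs:.6e}  rhs = {rhs:.6e}")
```

(The positivity of the left-hand side here reflects the convexity of $s \mapsto \phi(1,s)$ for this particular $f$; it says nothing yet about the $\gg |s-s'|$ lower bound one would need.)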