ap.analysis of pdes – Variational formulation of Abstract Cauchy Problem, possible?

Recently I came across a method known as the "variational method", in which we try to establish weak solutions of various boundary value problems involving ordinary derivatives, partial derivatives, and fractional derivatives. The use of Sobolev spaces is the key to finding such solutions, and the Lax-Milgram theorem is applied to show existence and uniqueness. The general approach can be seen in Chapter 8 of the book "Functional Analysis, Sobolev Spaces, and Partial Differential Equations" https://www.springer.com/gp/book/9780387709130.

Now I was wondering whether a similar approach can be applied to establish weak solutions of an abstract evolution equation whose solutions are Banach space-valued functions, i.e., bounded linear operators. The abstract Cauchy problem is:

$$u'(t)=Au(t), \quad \text{for } t\geq 0,$$
$$u(0)=x,$$
where $A: D(A)\subset X \rightarrow X$ is a linear operator that generates a semigroup and $X$ is a Banach space.
So far I am aware of two types of solutions, namely the classical solution, for which $u(0)=x \in D(A)$, and the mild (integral) solution, for which $u(0)=x\in X$.
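
For reference, assuming (as stated above) that $A$ generates a strongly continuous semigroup $(T(t))_{t\geq 0}$ on $X$, the two notions can be written as
$$u(t)=T(t)x, \qquad t\geq 0,$$
which is a classical solution when $x\in D(A)$, while for general $x\in X$ it only satisfies the integrated equation
$$u(t)=x+A\int_0^t u(s)\,\mathrm{d}s, \qquad t\geq 0,$$
which is the defining property of the mild (integral) solution.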

camera – Why invert the rigid transformation in the perspective projection formulation?

    cv::Mat Rc = Rz(cam_ori[i].z) * Ry(cam_ori[i].y) * Rx(cam_ori[i].x);
    cv::Mat tc(cam_pos[i]);
    cv::Mat Rt;
    cv::hconcat(Rc.t(), -Rc.t() * tc, Rt);
    cv::Mat P = K * Rt;
    cv::Mat x = P * X;

cv::hconcat() horizontally concatenates the first two input matrices and stores the result in the third argument, Rt, which is the rigid transformation matrix in homogeneous coordinates.

I think the coordinates X are given in world coordinates. So why does the author implement the projection with the rigid transformation inverted?

The source code is in https://github.com/sunglok/3dv_tutorial/blob/834b1fe39583d71bb57f4a93c015a2494a78c1cc/src/image_formation.cpp#L35
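
If (as the variable names suggest) $R_c$ and $t_c$ describe the camera's orientation and position in the world frame, then the relation the code would be implementing is
$$X_{\text{cam}} = R_c^{\top}\,(X_{\text{world}} - t_c) = R_c^{\top}X_{\text{world}} - R_c^{\top}t_c,$$
so the matrix built by cv::hconcat() is $[\,R_c^{\top}\mid -R_c^{\top}t_c\,]$, i.e., the inverse (world-to-camera) of the camera-to-world pose $[\,R_c\mid t_c\,]$.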

Integration: weak PDE formulation with weighted inner product

In Boyd's book on spectral methods (available here: https://depts.washington.edu/ph506/Boyd.pdf), I stumbled upon Section 3.5, "Weak and Strong Forms of Differential Equations: The Usefulness of Integration-by-Parts".

As an example, the following PDE is provided (Equation 3.35):

$$ u_{xx}(x) + q(x)\,u(x) = -f(x), $$

then this equation is transformed into its weak formulation using the inner product, defined by
$$ (u, v) = \int_a^b \omega(x)\, u(x)\, v(x) \, \mathrm{d}x, $$
where $\omega(x)$ is a weighting function.
Using this inner product, the weak formulation is derived:
$$ (v, u_{xx} + q u) = -(f, v), $$
for a suitable test function $v$. Then integration by parts is used and, after simplifying using the homogeneous boundary conditions, we arrive at this expression:

$$ (v_x, u_x) + (v, qu) = (v, f). $$
My question is: how does this work if the weighting function $\omega$ is not equal to 1 everywhere?
This is what I get when I apply integration by parts:
$$ (v, u_{xx}) = \int_a^b v(x)\, u_{xx}(x)\, \omega(x) \, \mathrm{d}x = \big[ v(x)\, u_x(x)\, \omega(x) \big]_a^b - \int_a^b v_x(x)\, u_x(x)\, \omega(x) \, \mathrm{d}x - \int_a^b v(x)\, u_x(x)\, \omega_x(x) \, \mathrm{d}x $$

The problem here is the third term (the one with $\omega_x$).
Note that, in order to use the pseudo-spectral method to solve the PDE numerically, the weighting function is determined by the basis of orthogonal polynomials used to represent the approximate solution.
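
For instance, if a Chebyshev basis on $(-1,1)$ were used, the weight would be
$$ \omega(x) = \frac{1}{\sqrt{1-x^2}}, \qquad \int_{-1}^{1} \frac{T_m(x)\,T_n(x)}{\sqrt{1-x^2}} \, \mathrm{d}x = 0 \quad \text{for } m \neq n, $$
so $\omega_x$ is not identically zero and the extra term does not drop out.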

Full linear programming formulation of induced subgraph connectivity

Can anyone help me figure out the ILP formulation for the following case? I want to label the vertices of a graph $G = (V, E)$ with 0, 1, or 2, and I want the subgraph on the same vertex set, whose edges are those with at least one endpoint labeled 1 or 2, to be connected. I don't need an objective function; I just want to know how to express this as a constraint. A sketch of part of the coupling is given below.
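
As a sketch of the label-to-edge coupling only, assuming binary variables $x_{v,\ell}$ for "vertex $v$ gets label $\ell$" and $z_e$ for "edge $e$ is kept" (the connectivity of the kept edges would still need a separate device, e.g. flow constraints):
$$\sum_{\ell \in \{0,1,2\}} x_{v,\ell} = 1 \quad \forall v \in V, \qquad y_v = x_{v,1} + x_{v,2} \quad \forall v \in V,$$
$$z_{uv} \geq y_u, \qquad z_{uv} \geq y_v, \qquad z_{uv} \leq y_u + y_v \qquad \forall \{u,v\} \in E.$$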

Functional analysis: the unit ball formulation of the Kaplansky density theorem in non-self-adjoint algebras

Let $H$ be a separable Hilbert space. Let $A \subseteq B(H)$ be a unital algebra. Let $M$ be the closure of $A$ in the strong operator topology.
Let $B_A$ be the closed unit ball of $A$ and $B_M$ the closed unit ball of $M$.
Is $B_M$ the strong operator topology closure of $B_A$? (More likely, is there a counterexample?)

non-linear optimization: LP / MILP formulation with OR logic

I am solving an LP / MILP problem using ILOG CPLEX.


int n = ...;
range time = 1..n;

dvar float+ c[time] in 0..0.3;
dvar float+ d[time] in 0..0.3;
dvar float+ x[time];

int beta[time] = ...;
float pc[time] = ...;
float pd[time] = ...;

//Expressions

dexpr float funtion = sum(t in time) (d[t]*pd[t] - c[t]*pc[t]);

//Model

maximize funtion;

subject to {

    x[1] == 0.5;
    c[1] == 0;
    d[1] == 0;

    forall(t in time)
        const1:
            x[t] <= 1;

    forall(t in time: t != 1)
        const2:
            x[t] == x[t-1] + c[t] - d[t];

    forall(t in time: t != 1)
        const3:
            (d[t] <= 0) || (c[t] <= 0);
}

As you can see, with "const3" I have forced c[t] and d[t] to never both be greater than 0 at the same time.

My question is, how would this restriction be represented in an LP / MILP mathematical formulation?

Is it enough to add this new variable?

$y(t) \leq c(t) + d(t)$

$y(t) \geq c(t)$

$y(t) \geq d(t)$

$0 \leq y(t) \leq M$ ($M$ is the maximum value of both $c$ and $d$)
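
For comparison, a common MILP encoding of the "not both positive" condition uses a binary indicator $b(t) \in \{0,1\}$ together with the upper bound $0.3$ from the variable declarations above as the big-M constant (a sketch, not necessarily the only formulation):
$$c(t) \leq 0.3\, b(t), \qquad d(t) \leq 0.3\,\big(1 - b(t)\big), \qquad b(t) \in \{0, 1\}.$$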

linear algebra – General formulation of the tensor product

I would like some clarification on how to define general tensor products. Coming from a physics background, I was introduced to tensor products only in the case of finite-dimensional vector spaces, for the sake of the quantum mechanics of individual particles.

Unfortunately, even in QM we encounter infinite-dimensional Hilbert spaces, and this case is treated in a rather hand-wavy way. Therefore, I would like to understand the tensor product in the general case. The basic definition presented to me is:

Suppose $V, W$ are finite-dimensional vector spaces. Then we define the space $V \otimes W$ as the space of linear maps $L: V \times W \rightarrow \mathbb{F}$, where $\mathbb{F}$ is the underlying field of $V$ and $W$. Since we require elements of $V \otimes W$ to be linear, I suppose we need a similar definition with "linear" replaced by "structure-preserving" for a general algebraic structure.
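
For comparison, the basis-free characterization that does not rely on finite dimension is the universal property (stated here as I understand it, not quoted from any particular text): $V \otimes W$ is a vector space together with a bilinear map $\otimes : V \times W \to V \otimes W$ such that
$$\text{for every bilinear } B : V \times W \to Z \text{ there is a unique linear } \tilde{B} : V \otimes W \to Z \text{ with } B = \tilde{B} \circ \otimes.$$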

ct.category theory: understanding the reason for the particular formulation of the definition of a concrete reflector (as given in The Joy of Cats)

This question is essentially a follow-up to this question. But before getting to the question, let me present the relevant definitions from The Joy of Cats.

Definition 1. Let $\mathbf{X}$ be a category. A concrete category over $\mathbf{X}$ is a pair $(\mathbf{A}, U)$, where $\mathbf{A}$ is a category and $U: \mathbf{A} \to \mathbf{X}$ is a faithful functor.

Definition 2. If $(\mathbf{A}, U)$ and $(\mathbf{B}, V)$ are concrete categories over $\mathbf{X}$, then a concrete functor from
$(\mathbf{A}, U)$ to $(\mathbf{B}, V)$
is a functor $F: \mathbf{A} \to \mathbf{B}$ with $U = V \circ F$. We denote such a functor
by $F: (\mathbf{A}, U) \to (\mathbf{B}, V)$.

Definition 3. If $(\mathbf{A}, U)$ and $(\mathbf{B}, V)$ are concrete categories over $\mathbf{X}$, then a concrete functor
$F: (\mathbf{A}, U) \to (\mathbf{B}, V)$ is said to be a concrete isomorphism if $F: \mathbf{A} \to \mathbf{B}$ is an isomorphism.

Definition 4. If $(\mathbf{A}, U)$ and $(\mathbf{B}, V)$ are concrete categories over $\mathbf{X}$ and $E: \mathbf{A} \hookrightarrow \mathbf{B}$ is the inclusion functor, then $(\mathbf{A}, U)$ is called a concrete subcategory of $(\mathbf{B}, V)$ if $U = V \circ E$.

Definition 5. Let $(\mathbf{A}, U)$ and $(\mathbf{B}, V)$ be concrete categories over $\mathbf{X}$ such that $(\mathbf{A}, U)$ is a concrete subcategory of $(\mathbf{B}, V)$. Then $(\mathbf{A}, U)$ is said to be a concretely reflective subcategory of $(\mathbf{B}, V)$ if

(1) for each $\mathbf{B}$-object $B$ there is a $\mathbf{B}$-morphism $r: B \to A$ (where $A$ is an $\mathbf{A}$-object) such that for every $\mathbf{A}$-object $A'$ and every $\mathbf{B}$-morphism $f: B \to A'$, there is a unique $\mathbf{A}$-morphism $g$ such that $g \circ r = f$. These $r$'s are called $\mathbf{A}$-reflection arrows for $B$.

(2) for each such $r$ we have $V(B) = V(A)$ and $V(r) = \text{id}_{V(A)}$.

Definition 6. Let $(\mathbf{A}, U)$ and $(\mathbf{B}, V)$ be concrete categories over $\mathbf{X}$ such that $(\mathbf{A}, U)$ is a concretely reflective subcategory of $(\mathbf{B}, V)$. Then the reflector functor so induced is called a concrete reflector.

If $\mathbf{A}$ and $\mathbf{B}$ are two categories and $\mathbf{A}$ is a subcategory of $\mathbf{B}$, then $\mathbf{A}$ is called a reflective subcategory of $\mathbf{B}$ if (1) above is satisfied. In this terminology, Definition 5 just says that if $(\mathbf{A}, U)$ and $(\mathbf{B}, V)$ are concrete categories over $\mathbf{X}$ such that $(\mathbf{A}, U)$ is a concretely reflective subcategory of $(\mathbf{B}, V)$, then in particular $\mathbf{A}$ is a reflective subcategory of $\mathbf{B}$.
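
For concreteness, a small example (mine, not from the book) where both (1) and (2) hold: take $\mathbf{X} = \mathbf{Set}$, let $\mathbf{B}$ be the concrete category of sets equipped with a binary relation (morphisms being relation-preserving maps), and let $\mathbf{A}$ be its full concrete subcategory of sets with a symmetric relation. The $\mathbf{A}$-reflection arrow of an object $(X, R)$ is carried by the identity map,
$$r = \mathrm{id}_X : (X, R) \longrightarrow (X, R \cup R^{-1}), \qquad V(r) = \mathrm{id}_X,$$
so the reflection only changes the structure, not the underlying $\mathbf{X}$-object, which is exactly what condition (2) demands.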

My attempt

Motivated by this answer, I first tried to think of a concrete reflector as a reflector that is a concrete functor. But unfortunately this is not the case. Then I tried to think of a concrete reflector as a reflector that is a concrete functor and that also retains the information that "the $\mathbf{X}$-objects underlying the domain and the codomain of an $\mathbf{A}$-reflection arrow for a $\mathbf{B}$-object $B$ are the same." But frankly, this is just restating the definition in a different language, so I am not satisfied with it, and I think there must be some deeper reason for adding this condition.

While looking for the motivation behind (2), I came across J. Fiadeiro's book Categories for Software Engineering. There it is written that,

Intuitively, for the (co)reflection to be "concrete", that is, to be consistent with the classification that the underlying functor provides, we would like to stay within the same fibre. That is to say, we would like the (co)reflection arrows to be identities.

But I did not understand this remark, since I do not even have a vague, intuitive idea of a concrete functor (the answers to the question I linked above only focus on the notion of a concrete isomorphism).

Question

I am trying to understand the reason for adding (2). What is the motivation for this?

Total time derivative in the Lagrangian formulation

Here is the expression

My question is: shouldn't the second term be a sum, according to the product rule? Why do we simply multiply the derivatives of both thetas?

This is the formula used to go from step 1 to step 2 in the image above
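
Since the images are not reproduced here, the rule presumably involved is the chain rule for the total time derivative of a quantity depending on both angles (a generic sketch, not the exact expression from the image):
$$\frac{\mathrm{d}}{\mathrm{d}t} F\big(\theta_1(t), \theta_2(t)\big) = \frac{\partial F}{\partial \theta_1}\,\dot{\theta}_1 + \frac{\partial F}{\partial \theta_2}\,\dot{\theta}_2,$$
whereas the product rule would apply to a product of two time-dependent factors.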