Let $ H $ be a separable Hilbert space, let $ A \subseteq B(H) $ be a unital algebra, and let $ M $ be the closure of $ A $ in the strong operator topology.
Let $ B_A $ be the closed unit ball of $ A $ and $ B_M $ the closed unit ball of $ M $.
Is $ B_M $ the closure of $ B_A $ in the strong operator topology? (More likely: is there a counterexample?)
nonlinear optimization – LP/MILP formulation with OR logic
I am solving an LP/MILP problem using ILOG CPLEX.
int n = ...;
range time = 1..n;
dvar float+ c[time] in 0..0.3;
dvar float+ d[time] in 0..0.3;
dvar float+ x[time];
int beta[time] = ...;
float pc[time] = ...;
float pd[time] = ...;
// Expressions
dexpr float obj = sum(t in time) (d[t]*pd[t] - c[t]*pc[t]);
// Model
maximize obj;
subject to {
  x[1] == 0.5;
  c[1] == 0;
  d[1] == 0;
  forall(t in time)
    const1:
      x[t] <= 1;
  forall(t in time: t != 1)
    const2:
      x[t] == x[t-1] + c[t] - d[t];
  forall(t in time: t != 1)
    const3:
      (d[t] <= 0) || (c[t] <= 0);
}
As you can see, with const3 I have forced c[t] and d[t] to never both be greater than 0 at the same time.
My question is: how would this restriction be represented in an LP/MILP mathematical formulation?
Is it enough to add this new variable $ y(t) $?
$ y(t) \leq c(t) + d(t) $
$ y(t) \geq c(t) $
$ y(t) \geq d(t) $
$ 0 \leq y(t) \leq M $ (where $ M $ is the maximum value both variables can take)
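For comparison, the textbook MILP way to enforce "$ c(t) = 0 $ or $ d(t) = 0 $" uses one binary variable per period and a big-M constant. This is only a sketch: the binary $ z(t) $ and the constant $ M $ are names introduced here, with $ M = 0.3 $ being a valid choice since it is the upper bound already imposed on both variables.

```latex
% z(t) = 1 permits charging, z(t) = 0 permits discharging
z(t) \in \{0, 1\}, \qquad
c(t) \le M \, z(t), \qquad
d(t) \le M \, \bigl(1 - z(t)\bigr)
```

If $ c(t) > 0 $, the first constraint forces $ z(t) = 1 $, and the second then forces $ d(t) = 0 $; symmetrically for $ d(t) > 0 $. So the two variables can never be positive simultaneously, without any nonlinear product $ c(t)\,d(t) = 0 $.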
linear algebra – General formulation of the tensor product
I would like some clarification on how to define general tensor products. Coming from a physics background, tensor products were introduced to me only in the case of finite-dimensional vector spaces, for the sake of the quantum mechanics of individual particles.
Unfortunately, even in QM we encounter Hilbert spaces of infinite dimension, and this is treated in a rather hand-wavy way. Therefore, I would like to understand the tensor product in the general case. The basic definition presented to me is:
Suppose $ V, W $ are finite-dimensional vector spaces. Then we define the space $ V \otimes W $ as the space of linear maps $ L : V \times W \rightarrow \mathbb{F} $, where $ \mathbb{F} $ is the underlying field of $ V $ and $ W $. Since we require elements of $ V \otimes W $ to be linear, I suppose we need a similar definition with "linear" replaced by "structure-preserving" for a general algebraic structure.
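For reference, the construction usually given for arbitrary (possibly infinite-dimensional) vector spaces is a quotient of a free vector space. This is a sketch of that standard definition, not necessarily the one the quoted source intends:

```latex
% F(V \times W) = the free vector space on the set V \times W;
% R = the subspace spanned by the bilinearity relations below.
V \otimes W \;=\; F(V \times W) \,/\, R, \qquad
R = \operatorname{span}
\begin{cases}
(v_1 + v_2,\, w) - (v_1, w) - (v_2, w) \\
(v,\, w_1 + w_2) - (v, w_1) - (v, w_2) \\
(\lambda v,\, w) - \lambda\,(v, w), \qquad (v,\, \lambda w) - \lambda\,(v, w)
\end{cases}
```

With this definition the canonical map $ V \times W \to V \otimes W $, $ (v, w) \mapsto v \otimes w $, is bilinear and universal among bilinear maps out of $ V \times W $, with no finite-dimensionality assumption.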
ct.category-theory – Understanding the reason for the particular formulation of the definition of a concrete reflector (as stated in The Joy of Cats)
This question is essentially a follow-up to this question. But before getting into the question, let me present the relevant definitions from The Joy of Cats.
Definition 1. Let $ \bf{X} $ be a category. A concrete category over $ \bf{X} $ is a pair $ ({\bf{A}}, U) $, where $ \bf{A} $ is a category and $ U : {\bf{A}} \to {\bf{X}} $ is a faithful functor.
Definition 2. If $ ({\bf{A}}, U) $ and $ ({\bf{B}}, V) $ are concrete categories over $ \bf{X} $, then a concrete functor from $ ({\bf{A}}, U) $ to $ ({\bf{B}}, V) $ is a functor $ F : {\bf{A}} \to {\bf{B}} $ with $ U = V \circ F $. We denote such a functor by $ F : ({\bf{A}}, U) \to ({\bf{B}}, V) $.
Definition 3. If $ ({\bf{A}}, U) $ and $ ({\bf{B}}, V) $ are concrete categories over $ \bf{X} $, then a concrete functor $ F : ({\bf{A}}, U) \to ({\bf{B}}, V) $ is said to be a concrete isomorphism if $ F : {\bf{A}} \to {\bf{B}} $ is an isomorphism.
Definition 4. If $ ({\bf{A}}, U) $ and $ ({\bf{B}}, V) $ are concrete categories over $ \bf{X} $ and $ E : \mathbf{A} \hookrightarrow \mathbf{B} $ is the inclusion functor, then $ (\mathbf{A}, U) $ is called a concrete subcategory of $ (\mathbf{B}, V) $ if $ U = V \circ E $.
Definition 5. Let $ ({\bf{A}}, U) $ and $ ({\bf{B}}, V) $ be concrete categories over $ \bf{X} $ such that $ ({\bf{A}}, U) $ is a concrete subcategory of $ ({\bf{B}}, V) $. Then $ ({\bf{A}}, U) $ is said to be a concretely reflective subcategory of $ ({\bf{B}}, V) $ if:
(1) for each $ \mathbf{B} $-object $ B $ there is a $ \mathbf{B} $-morphism $ r : B \to A $ (where $ A $ is an $ \mathbf{A} $-object) such that for any $ \mathbf{A} $-object $ A' $ and any $ \mathbf{B} $-morphism $ f : B \to A' $, there is exactly one $ \mathbf{A} $-morphism $ g $ such that $ g \circ r = f $. These $ r $'s are called $ \mathbf{A} $-reflection arrows for $ B $.
(2) for each such $ r $ we have $ V(B) = V(A) $ and $ V(r) = \text{id}_{V(A)} $.
Definition 6. Let $ ({\bf{A}}, U) $ and $ ({\bf{B}}, V) $ be concrete categories over $ \bf{X} $ such that $ ({\bf{A}}, U) $ is a concretely reflective subcategory of $ ({\bf{B}}, V) $. Then the reflector functor so induced is called a concrete reflector.
If $ \mathbf{A} $ and $ \mathbf{B} $ are two categories and $ \mathbf{A} $ is a subcategory of $ \mathbf{B} $, then $ \mathbf{A} $ is called a reflective subcategory of $ \mathbf{B} $ if (1) above is satisfied. In this terminology, Definition 5 just says that if $ ({\bf{A}}, U) $ and $ ({\bf{B}}, V) $ are concrete categories over $ \bf{X} $ such that $ ({\bf{A}}, U) $ is a concretely reflective subcategory of $ ({\bf{B}}, V) $, then in particular $ \mathbf{A} $ is a reflective subcategory of $ \mathbf{B} $.
My attempt
Motivated by this answer, I first tried to conceptualize the concrete reflector as a reflector that is a concrete functor. But unfortunately this is not the case. Then I tried to conceptualize the concrete reflector as a reflector that is a concrete functor which also retains the information that "the $ \mathbf{X} $-objects underlying the domain and codomain of an $ \mathbf{A} $-reflection arrow for a $ \mathbf{B} $-object $ B $ are the same." But frankly, this is just restating the definition in a different language; I am therefore not satisfied with it, and I think there must be some deeper reason for adding this condition.
While looking for a motivation for condition (2), I came across J. Fiadeiro's book Categories for Software Engineering. There it is written that,
Intuitively, for the (co)reflection to be
"concrete", that is, to be consistent with the classification that the underlying
functor provides, we would like to stay within the same fibre. That is to say,
we would like the (co)reflection arrows to be identities.
But I did not understand this comment, since I do not even have a vague intuitive picture of a concrete functor (the answers to the question I linked above focus only on the notion of a concrete isomorphism).
Question
I am trying to understand the reason for adding (2). What is the motivation for this?
Total time derivative in the Lagrangian formulation
Here is the expression
My question is: shouldn't the second term be a sum, according to the product rule? Why do we simply multiply the derivatives of both thetas?
This is the formula used to go from step 1 to step 2 in the image above
Optimization – Linear mathematical formulation of a congestion game.
I need to implement the following congestion game in AMPL:
Let $ J = \{1,2,3,4\} $ be a set of jobs and let $ M = \{1,2,3\} $ be a set of machines.
- Job 1 can be solved by machines 1 and 2.
- Job 2 can be solved by machines 1 and 3.
- Job 3 can be solved by machines 2 and 3.
- Job 4 can be solved by machines 2 and 3.
Each job uses a single machine.
The time required by each machine to solve a job depends on its congestion (namely, the number of distinct jobs assigned to the machine) as follows:
- Machine 1: $ time_1 = [1,3,5,7] $
- Machine 2: $ time_2 = [2,4,6,8] $
- Machine 3: $ time_3 = [3,4,5,6] $
The $ i $-th element of the vector $ time_m $ equals the time required by machine $ m $ given a congestion equal to $ i $.
E.g., if machine 1 solves jobs 1 and 2, the congestion of machine
1 equals 2, so solving these jobs (1 and 2) requires
time equal to $ time_1(2) = 3 $. (Indexing starts from 1.)
The objective is to solve all the jobs in such a way that the maximum time across the machines is minimized.
The problem with the implementation is that I need to use the "congestion" variable as an index, and that is not allowed in AMPL.
Is there any way to implement this game as a linear problem in AMPL? What is the linear mathematical programming formulation that avoids using variables as indexes into the "time matrix" (the matrix whose rows are the $ time_m $)?
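One standard way around variable indexing is to enumerate the possible congestion levels with binaries. This is only a sketch under assumptions introduced here: $ x_{jm} \in \{0,1\} $ assigns job $ j $ to machine $ m $, $ z_{mk} \in \{0,1\} $ indicates that machine $ m $ ends up with congestion exactly $ k $, and $ M_j $ denotes the set of machines allowed for job $ j $.

```latex
\min \; T \quad \text{s.t.} \\
\sum_{m \in M_j} x_{jm} = 1 \quad \forall j \in J
    \quad \text{(each job picks one allowed machine)} \\
\sum_{j \in J} x_{jm} = \sum_{k=1}^{4} k \, z_{mk} \quad \forall m \in M
    \quad \text{(link the load of $m$ to a congestion level $k$)} \\
\sum_{k=1}^{4} z_{mk} \le 1 \quad \forall m \in M
    \quad \text{(at most one level active; all zero means the machine is idle)} \\
T \ge \sum_{k=1}^{4} time_m(k) \, z_{mk} \quad \forall m \in M, \qquad
x_{jm},\, z_{mk} \in \{0,1\}
```

Since $ time_m(k) $ is now a constant coefficient indexed by the literal $ k $, no variable ever appears as an index, which is expressible in AMPL directly.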
Formulation of a linear system for a PDE with a Neumann boundary condition
A PDE with a Dirichlet boundary condition can be written as a linear system:
$ A u(x) = f(x) \quad \forall x \in \Omega $,
s.t. $ u(x) = g(x) \quad \forall x \in \Gamma $.
This can be solved, for example, using the Jacobi method with a projection onto the boundary values in each iteration. In this case, the matrix $ A $ is independent of the domain ($ \Omega $) and its boundary ($ \Gamma $). What would be the equivalent projection-based linear system for the Neumann boundary condition,
$ u'(x) = g(x) \quad \forall x \in \Gamma $?
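For concreteness, the Dirichlet scheme described above (Jacobi sweeps with the boundary values re-imposed, i.e. projected, after every iteration) can be sketched in a minimal 1D setting. The discretization ($ -u'' = f $ on $ (0,1) $, 51 grid points) and all names below are assumptions for illustration only:

```python
# Minimal 1D sketch (assumed setup): solve -u'' = f on (0, 1) with
# u(0) = g_left, u(1) = g_right by Jacobi iteration, projecting the
# iterate onto the Dirichlet boundary values after every sweep.
def jacobi_dirichlet(f, g_left, g_right, iters=5000):
    n = len(f)            # grid points, including both boundary points
    h = 1.0 / (n - 1)     # uniform mesh width
    u = [0.0] * n         # initial guess
    for _ in range(iters):
        new = u[:]
        for i in range(1, n - 1):
            # Jacobi update of the 3-point stencil:
            # u_i = (u_{i-1} + u_{i+1} + h^2 f_i) / 2
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        # projection step: re-impose the boundary values
        new[0], new[-1] = g_left, g_right
        u = new
    return u

# f = 0 with u(0) = 0, u(1) = 1 has exact solution u(x) = x
u = jacobi_dirichlet([0.0] * 51, 0.0, 1.0)
```

For a Neumann condition the projection step would instead impose the discretized derivative condition (e.g. a one-sided difference at the boundary), which is part of what the question is asking about.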
mixed-integer programming formulation for n jobs on m machines with precedence constraints
We have n jobs and m identical machines; each job $ i \in \{1 \ldots n\} $ takes time $ t_i $ on its chosen machine (this time cannot be divided into parts). Some jobs can only be started when certain previous ones are finished (a binary precedence matrix $ A_{m \times n} $ is given as input).
I need a set of at most linear inequalities (non-quadratic, etc.) in real/integer/binary variables such that feeding them to a solver would yield the shortest maximum completion time over all machines.
What I tried was to build a binary output matrix $ O_{n \times T} $, where $ T = \sum_{i=1}^{n} t_i $, so that each job is represented by $ t_i $ ones marking the moments in which the job is being worked on, constrained so that no more than m jobs run at any one time. I am beginning to think this is not the way to go, since the output matrix gives me jobs split into parts (which is illegal here), and I do not believe there is a set of linear equalities/inequalities that disallows this.
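One common way around the splitting issue, sketched here under assumptions introduced for illustration (start binaries $ s_{i\tau} $, makespan variable $ C $, and precedences written as pairs $ i \prec j $), is to let the binary mark the start time rather than every occupied slot:

```latex
s_{i\tau} \in \{0,1\}, \qquad
\sum_{\tau=1}^{T - t_i + 1} s_{i\tau} = 1 \quad \forall i
    \quad \text{(each job starts exactly once, so it cannot be split)} \\
\sum_{i} \; \sum_{\tau' = \max(1,\, \tau - t_i + 1)}^{\tau} s_{i\tau'} \le m \quad \forall \tau
    \quad \text{(at most $m$ jobs are running at time $\tau$)} \\
\sum_{\tau} \tau \, s_{j\tau} \;\ge\; \sum_{\tau} (\tau + t_i) \, s_{i\tau} \quad \forall\, i \prec j
    \quad \text{(job $j$ starts only after job $i$ finishes)} \\
C \;\ge\; \sum_{\tau} (\tau + t_i - 1) \, s_{i\tau} \quad \forall i, \qquad \min \; C
```

A job starting at $ \tau $ occupies slots $ \tau, \ldots, \tau + t_i - 1 $, so non-preemption is built into the capacity sum rather than enforced by extra constraints.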
linear programming – Mixed-integer formulation of unions of polytopes?
Given $ t $ different unbounded polyhedra $ P_1 : A^{(1)} x^{(1)} \leq b^{(1)}, \dots, P_t : A^{(t)} x^{(t)} \leq b^{(t)} $, we are looking for a representation of $ \bigcup_{i=1}^t P_i $ (not its convex hull) by mixed-integer programming.

1. What is the standard way of doing it with mixed-integer programming with the fewest additional integer variables?

2. When is it possible to do it with only $ O(t^{\alpha}) $ additional integer variables, where $ \alpha \in (0,1) $?
I found a survey in which they say we can do this with $ O(\log t) $ integer variables if the polyhedra have a common recession cone.
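For what it's worth, the textbook disjunctive (Balas-style) formulation, sketched here with copy variables $ x^{(i)} $ and binaries $ y_i $ introduced for illustration, uses $ t $ additional binaries:

```latex
x = \sum_{i=1}^{t} x^{(i)}, \qquad
A^{(i)} x^{(i)} \le b^{(i)} y_i \quad (i = 1, \dots, t), \qquad
\sum_{i=1}^{t} y_i = 1, \qquad y_i \in \{0,1\}
```

When $ y_i = 0 $, the constraint $ A^{(i)} x^{(i)} \le 0 $ forces the copy $ x^{(i)} $ into the recession cone of $ P_i $; so if the $ P_i $ share a common recession cone, $ y_i = 1 $ gives $ x \in P_i + \text{rec}(P_i) = P_i $ and the formulation represents the union exactly, while without that assumption the extra recession directions may enlarge the represented set.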