Backtracking – How do explicit constraints depend on problem instance in backtracking?

I was reading the chapter on Backtracking in Fundamentals of Computer Algorithms by Horowitz and Sahni, and I read the following line.

The explicit constraints depend on the particular instance I of the problem being solved.

I don't understand why this statement is true. My understanding is that the set $S$ (to which all the elements of the n-tuple belong) would be the same for all instances of the problem, but this statement suggests that it depends on the problem instance.

optimization: convert logical propositional formulas into mathematical constraints

brief introduction

In any boolean (0–1) linear program (or, more generally, mixed-integer program), the constraints are represented by a matrix $A$ and a vector $b$, and checked as $Ax \leq b$, where $x$ is the boolean vector one wants to optimize in some way. Another way to state the problem is that one wants to select a set of elements so that a collection of logical formulas is satisfied while optimizing some function. In my setting, the constraints arrive as a list of propositional logical formulas, so in order to solve with some kind of ILP solver I need to convert all the logical formulas into mathematical constraints.

Direct conversion of logical formulas to constraints

The most direct way to convert a propositional logical formula into mathematical constraints is to first convert the formula to conjunctive normal form (CNF for short) and then create one constraint per clause. For example, let $q$ be the logical formula $$q = (a \lor b) \rightarrow c,$$ which converts to CNF as $$q_{cnf} = (c \lor \neg a) \wedge (c \lor \neg b).$$ Now, each conjunct yields one constraint: for each variable in the disjunction we write $(1-x)$ if the variable $x$ is negated and just $x$ otherwise, and require the sum to be positive:

$$
(1-a) + c > 0 \;\wedge\; (1-b) + c > 0 \;\Rightarrow\; \\
c - a > -1 \;\wedge\; c - b > -1 \;\Rightarrow\; \\
c - a \geq 0 \;\wedge\; c - b \geq 0 \;\Rightarrow\; \\
a - c \leq 0 \;\wedge\; b - c \leq 0
$$

which we represent with a matrix $A$ and a vector $b$:

$$
A =
\begin{bmatrix}
1 & 0 & -1 \\
0 & 1 & -1
\end{bmatrix}
$$

$$
b =
\begin{bmatrix}
0 & 0
\end{bmatrix}
$$

where each column index of $A$ corresponds to one of the variables $a, b, c$, and now we can easily solve optimization problems over these constraints using all kinds of solvers.
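As a sketch of the clause-to-row translation described above (the function and the data layout are my own choices, not a standard API): each CNF clause $l_1 \lor \cdots \lor l_k$ becomes one row of $A$ and one entry of $b$, since requiring the clause sum to be at least 1 rewrites to $-\sum_{\text{pos}} x + \sum_{\text{neg}} x \leq (\#\text{negated}) - 1$.

```python
def cnf_to_constraints(clauses, variables):
    """Turn CNF clauses into rows of A and entries of b for A x <= b.

    Each clause is a list of (variable, negated) pairs.  The clause
    l1 v ... v lk means  sum(pos x) + sum(1 - neg x) >= 1, which
    rewrites to  -sum(pos x) + sum(neg x) <= (#negated) - 1.
    """
    idx = {v: i for i, v in enumerate(variables)}
    A, b = [], []
    for clause in clauses:
        row = [0] * len(variables)
        negated_count = 0
        for var, negated in clause:
            if negated:
                row[idx[var]] += 1
                negated_count += 1
            else:
                row[idx[var]] -= 1
        A.append(row)
        b.append(negated_count - 1)
    return A, b

# q_cnf = (c v ~a) ^ (c v ~b) from the worked example
clauses = [[("c", False), ("a", True)], [("c", False), ("b", True)]]
A, b = cnf_to_constraints(clauses, ["a", "b", "c"])
print(A)  # [[1, 0, -1], [0, 1, -1]]
print(b)  # [0, 0]
```

Running it on the example clauses reproduces the matrix $A$ and vector $b$ shown above.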

Question

In the general case, a propositional logical formula becomes many mathematical constraints. In some cases the formula can become a single constraint: for example, $a \wedge (b \lor c)$ can be represented on one line as $-2a - b - c \leq -3$, while $a \oplus b$ (exclusive or) cannot be represented by a single constraint.

Is there a method to determine whether a formula can be represented as a single constraint or not? And better yet, is there a method to produce that single constraint when it exists?
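As a small sanity check on the question (my own brute-force sketch, not a complete decision procedure): a formula has a single-constraint representation $w \cdot x \leq t$ exactly when its satisfying set is a threshold function, and for two or three variables one can simply search small integer weights. Exclusive-or of two variables is the classic formula with no such representation.

```python
from itertools import product

def single_constraint(truth_fn, n, wmax=3):
    """Search for integer weights w and a bound t such that
    w . x <= t  holds iff truth_fn(x) is True over {0,1}^n.

    Brute force over weights in [-wmax, wmax]; enough for tiny examples
    with formulas that are neither tautologies nor contradictions, but
    NOT a general decision procedure (large n may need large weights).
    Returns (w, t) or None.
    """
    points = list(product((0, 1), repeat=n))
    for w in product(range(-wmax, wmax + 1), repeat=n):
        sat = [sum(wi * xi for wi, xi in zip(w, x))
               for x in points if truth_fn(*x)]
        unsat = [sum(wi * xi for wi, xi in zip(w, x))
                 for x in points if not truth_fn(*x)]
        if sat and unsat and max(sat) < min(unsat):
            return w, max(sat)  # w . x <= t exactly on satisfying points
    return None

print(single_constraint(lambda a, b, c: a and (b or c), 3))  # finds some (w, t)
print(single_constraint(lambda a, b: a != b, 2))             # None
```

For $a \wedge (b \lor c)$ the search finds a valid weight vector (one such representation is the $-2a - b - c \leq -3$ mentioned above), while for XOR it correctly reports that none exists.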

symbolic: addressing a variational problem with constraints

I need to minimize (preferably symbolically) a functional subject to constraints:

$$
F = \int_{0}^{\infty} f(x) \cos x \, dx, \\
1)\; f(x) \geq 0 \\
2)\; \int_{0}^{\infty} f(x)\, dx = 1 \\
3)\; f(0) = 1 \\
4)\; f'(x) \leq 0
$$

Is there a way to solve a problem like this? Could this problem be solved in principle?
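Not a solution, but a quick numerical sanity check is easy to script (pure Python; the quadrature helper and the two trial functions are my own choices): both $f = \mathbf{1}_{[0,1]}$ and $f(x) = e^{-x}$ satisfy constraints 1)–4), and comparing the values of $F$ they give shows the problem is at least non-trivial.

```python
import math

def integral(f, a=0.0, b=50.0, n=100_000):
    # Composite trapezoidal rule for F = integral of f(x) cos(x) dx,
    # truncated at b = 50 (the trial functions vanish or decay by then).
    h = (b - a) / n
    s = 0.5 * (f(a) * math.cos(a) + f(b) * math.cos(b))
    for i in range(1, n):
        x = a + i * h
        s += f(x) * math.cos(x)
    return s * h

box = lambda x: 1.0 if x <= 1.0 else 0.0  # f(0)=1, integral 1, non-increasing
exp_decay = lambda x: math.exp(-x)         # also satisfies all four constraints

print(integral(box))        # ~ 0.8415 = sin(1)
print(integral(exp_decay))  # ~ 0.5
```

So $e^{-x}$ already does better than the box, which suggests probing other feasible monotone families before hoping for a closed-form minimizer.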

Thanks in advance

Since relvar constraints obviously need to be checked immediately, it follows that database constraints should also be checked immediately

The text below is from "An Introduction to Database Systems" by C. J. Date. I can't understand the bold part.
Why, given that relvar constraints obviously need to be checked immediately, must database constraints be checked immediately too? Why should database constraints follow the relvar constraints?

The previous edition of this book stated that relvar constraints were checked immediately, but database constraints were checked at the end of the transaction (a position many writers agree with, though they generally use different terminology). But the Interchangeability Principle (of base and derived relvars – see Chapter 9) implies that the same real-world constraint could be a relvar constraint with one design for the database and a database constraint with another! Since relvar constraints obviously must be checked immediately, it follows that database constraints must also be checked immediately.

etl: do Data Warehouse standards allow foreign key constraints in a dimensional model?

There is no "law" or standard that prohibits FK constraints on a dimensional model. In fact, you already mentioned several reasons why it may make a lot of sense to add FK constraints to a star schema.

The main reason why FKs might not be enabled in such a model is, IMHO, performance: ETL processes often take a long time. A popular optimization technique is to disable all constraints during loading and re-enable them when the ETL process finishes. Note that FK constraints generally involve indexes, which matter when querying the model's data, and querying generally happens after the ETL process has completed.

But it really depends on the case. If you can meet your performance requirements with FKs enabled, and they help you keep your data at a higher quality during the ETL process, go ahead. If FKs really slow down your ETL process, then you are better off disabling them during the load (but re-enabling them afterwards).

etl: do Data Warehouse standards allow foreign key constraints on the dimensional model?

Is it true that we never enable foreign key constraints on the dimensional model of a data warehouse?
If yes, what is the reason behind that?
According to my research, some experts say that in a dimensional model FKs are never enabled: it is the ETL's responsibility to ensure consistency and integrity.

However, data integrity problems can still appear, even when the ETL handles the proper dependencies responsibly.

Examples:

- A late-arriving dimension from the source.
- A few records fail the data-quality check and are routed to an error table.
- Intermediate tables are not populated because of a batch-load failure, and proper restartability/recoverability steps are not followed: someone restarts the last session to load the fact table while some dimension tables have yet to be populated after a dimension-session failure.

Primary key constraints will help me avoid loading duplicate records if the data in the intermediate tables is processed again because the load session on the target table is accidentally restarted.

What problems do you see with enabling FK constraints in the dimensional model?

Thank you,
Sangeeta

database design: is it possible for an attribute to impose full participation constraints?

For a school food-delivery app I'm developing, I've been told to use the most precise ER model that captures as many constraints as possible. In particular, I have decided to enforce the given constraint

"The food in each order must be from a single restaurant"

using an aggregation of Orders onto the Restaurant–FoodItem relationship. However, since the application layer needs the restaurant_id associated with an Order, what I currently do to obtain restaurant_id is join from Orders to Restaurants, following the edges of the ER model.

[ER diagram omitted]

My teammate suggested an alternative: just put restaurant_id as an attribute of Orders, both to enforce the constraint and to allow easier, more efficient retrieval of restaurant_id. Does the suggested alternative actually let us enforce the constraint above?
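For what it's worth, at the relational level the restaurant_id attribute alone does not enforce the rule (an order item could still reference another restaurant's food), but a composite foreign key does. A minimal SQLite sketch, with all table and column names my own invention rather than taken from the ER diagram:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs FKs enabled per connection
conn.executescript("""
CREATE TABLE Restaurants (restaurant_id INTEGER PRIMARY KEY);
CREATE TABLE FoodItems (
    restaurant_id INTEGER REFERENCES Restaurants,
    food_id       INTEGER,
    PRIMARY KEY (restaurant_id, food_id)
);
CREATE TABLE Orders (
    order_id      INTEGER PRIMARY KEY,
    restaurant_id INTEGER NOT NULL REFERENCES Restaurants,
    UNIQUE (order_id, restaurant_id)  -- target for the composite FK below
);
CREATE TABLE OrderItems (
    order_id      INTEGER,
    restaurant_id INTEGER,
    food_id       INTEGER,
    -- the item's restaurant must match BOTH the order's and the food's:
    FOREIGN KEY (order_id, restaurant_id)
        REFERENCES Orders (order_id, restaurant_id),
    FOREIGN KEY (restaurant_id, food_id)
        REFERENCES FoodItems (restaurant_id, food_id)
);
""")
conn.execute("INSERT INTO Restaurants VALUES (1), (2)")
conn.execute("INSERT INTO FoodItems VALUES (1, 10), (2, 20)")
conn.execute("INSERT INTO Orders VALUES (100, 1)")
conn.execute("INSERT INTO OrderItems VALUES (100, 1, 10)")      # same restaurant: ok
try:
    conn.execute("INSERT INTO OrderItems VALUES (100, 2, 20)")  # other restaurant
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The duplicated restaurant_id column plus the two composite foreign keys make it impossible for an order to mix food from two restaurants, which is more than the attribute by itself guarantees.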

entities – Expected argument of type "string", "array" given in Symfony\Component\Validator\Constraints\LengthValidator->validate()

I created and installed a custom field type module in Drupal 8.8.5, then went to a content type and added the my_custom_field field. After filling in the label and setting the field settings, the following error appeared.

Symfony\Component\Validator\Exception\UnexpectedTypeException: Expected argument of type "string", "array" given in Symfony\Component\Validator\Constraints\LengthValidator->validate() (line 37 of /opt/lampp/htdocs/drupal885/vendor/symfony/validator/Constraints/LengthValidator.php).

linear programming with non-integer constraints

Suppose we have a linear vertex-packing program for a hypergraph $(V, E)$, with size $n = \sum_{e \in E} |e|$. We introduce a variable $x_v$ for each vertex $v \in V$. The integral version is as follows:

$$\max \sum_{v \in V} x_v$$
$$\text{s.t.}\; \sum_{v \in e} x_v \le 1, \quad \forall e \in E$$
$$x_v \in \{0, 1\}$$

We have another version of this LP with non-integer constraints as follows:

$$\max \sum_{v \in V} x_v$$
$$\text{s.t.}\; \sum_{v \in e} x_v \le 1, \quad \forall e \in E$$
$$x_v \in \{0, \log_N 2, \log_N 3, \cdots, 1\}$$

Let the optimal values of these two programs be $OPT_1$ and $OPT_2$.

Is there any good upper bound for $\frac{OPT_1}{OPT_2}$ in terms of $n$ and $N$?
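No bound to offer, but for building intuition both optima can be brute-forced on tiny instances (my own sketch; the triangle instance and $N = 4$ are arbitrary choices). Note $OPT_1 \le OPT_2$ always, since $\{0,1\}$ is a subset of the second value set.

```python
from itertools import product
from math import log

def opt(V, E, values):
    """Brute-force the vertex-packing variant where each x_v ranges over
    `values`: maximize sum of x_v subject to sum over each edge <= 1."""
    best = 0.0
    for x in product(values, repeat=len(V)):
        assign = dict(zip(V, x))
        if all(sum(assign[v] for v in e) <= 1 + 1e-9 for e in E):
            best = max(best, sum(x))
    return best

V = ["u", "v", "w"]
E = [["u", "v"], ["v", "w"], ["u", "w"]]  # a triangle, as a 2-uniform hypergraph
N = 4
vals = [0.0] + [log(k, N) for k in range(2, N)] + [1.0]  # {0, log_N 2, ..., 1}

opt1 = opt(V, E, [0.0, 1.0])  # integral: best is one vertex, value 1
opt2 = opt(V, E, vals)        # all three at log_4 2 = 0.5 gives 1.5
print(opt1, opt2, opt1 / opt2)
```

Already on the triangle the ratio is $1/1.5 = 2/3$, so experiments like this can probe how the ratio behaves as $n$ and $N$ grow.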

Determine the diameter of a circle with constraints.

I would like to analytically solve the following problem:

[Diagram: two pulleys of diameters D1 and D2 connected by a belt]

  1. I have a circle with diameter D1 = 70 mm.
  2. I have another circle with diameter D2 = X mm, which is the unknown of this problem.
  3. The center-to-center distance between the circles is 250 mm.
  4. The length of the "belt" (represented by the blue line) is always fixed at 800 mm.

The circle identified by diameter D1 represents the driving pulley.
The circle identified by diameter D2 represents the driven pulley.
The blue line is always tangent to both circles (pulleys) and represents the belt.

Now, when I change the diameter D1, I get another value for the diameter D2, and today I have to determine it with the help of a parametric CAD drawing.

Is there a way to solve it analytically (e.g. using an Excel sheet)?
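For what it's worth, the open (non-crossed) belt length has a standard closed form in the two radii and the center distance, so the only non-analytic step is inverting it for the unknown radius, which monotonicity makes easy with bisection. A sketch in Python (function names, bracketing bounds and tolerance are my own choices, and I assume an open belt as in the figure):

```python
from math import asin, sqrt, pi

def belt_length(r1, r2, c):
    """Standard open-belt length around two pulleys of radii r1, r2
    whose centers are a distance c apart."""
    alpha = asin((r2 - r1) / c)              # tilt of the tangent lines
    straight = 2 * sqrt(c * c - (r2 - r1) ** 2)  # the two straight runs
    # wrap arcs: (pi - 2a) on the smaller pulley, (pi + 2a) on the larger
    return straight + r1 * (pi - 2 * alpha) + r2 * (pi + 2 * alpha)

def solve_d2(d1, c, belt, tol=1e-9):
    """Bisection for the unknown diameter D2 given D1, the center
    distance and the belt length (length grows monotonically with r2)."""
    r1 = d1 / 2
    lo, hi = 0.0, c  # the belt can't wrap a pulley with radius >= c
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if belt_length(r1, mid, c) < belt:
            lo = mid
        else:
            hi = mid
    return 2 * lo

d2 = solve_d2(70, 250, 800)
print(d2)  # D2 in mm
```

The same bisection (or Excel's Goal Seek on the belt_length formula) reproduces what the parametric CAD drawing does, with no drawing required.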

Thanks in advance!