Second order elliptic PDE problem with boundary conditions whose solutions depend continuously on the initial data

Consider the following problem
$$\begin{cases}
-\Delta u+cu=f,&x\in\Omega\\
u=g,&x\in\partial\Omega
\end{cases}$$

where $\Omega\subseteq\mathbb R^n$ is open with regular boundary, $c\geq0$ is a constant, $f\in L^2(\Omega)$ and $g$ is the trace of a function $G\in H^1(\Omega)$. If we consider $u$ a weak solution to this problem, and define $U=u-G\in H_0^1(\Omega)$, it is easy to see that $U$ is a weak solution to the following problem
$$\begin{cases}
-\Delta U+cU=f+\Delta G-cG,&x\in\Omega\\
U=0,&x\in\partial\Omega
\end{cases}$$

It is also easy to see that we can apply the Lax–Milgram theorem with the bilinear form
$$B(u,v)=\int_\Omega\left(\sum_{i=1}^n u_{x_i}v_{x_i}+cuv\right)$$
and the bounded linear functional
$$L_f(v)=\int_\Omega(f-cG)v-\int_\Omega\sum_{i=1}^n G_{x_i}v_{x_i}$$
to conclude that there exists a unique weak solution $U$ to the auxiliary problem defined above. If we define $u=U+G\in H^1(\Omega)$, it is clear that this function is a solution to the original problem.

Now to the question: I would like to prove that this solution $u$ depends continuously on the initial data, that is, that there exists a constant $C>0$ such that
$$\lVert u\rVert_{H^1(\Omega)}\leq C\left(\lVert f\rVert_{L^2(\Omega)}+\lVert G\rVert_{H^1(\Omega)}\right)$$
I feel that the work I have done to prove that $L_f$ is bounded should be relevant for our purposes, because
$$\lVert u\rVert_{H^1(\Omega)}\leq\lVert U\rVert_{H^1(\Omega)}+\lVert G\rVert_{H^1(\Omega)}$$
and
$$\lVert U\rVert_{H^1(\Omega)}\leq C\,B(U,U)^{1/2}=C\,|L_f(U)|^{1/2}$$
The problem is that I don’t know how to manipulate $L_f(U)$ to obtain the result. I have managed to prove a completely useless inequality, for it involves the norm of $U$.
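For reference, the boundedness of $L_f$, which the Lax–Milgram application above already requires, can be quantified with the Cauchy–Schwarz inequality; sketching the estimate with constants absorbed,
$$|L_f(v)|\leq\lVert f-cG\rVert_{L^2(\Omega)}\lVert v\rVert_{L^2(\Omega)}+\lVert\nabla G\rVert_{L^2(\Omega)}\lVert\nabla v\rVert_{L^2(\Omega)}\leq\left(\lVert f\rVert_{L^2(\Omega)}+(1+c)\lVert G\rVert_{H^1(\Omega)}\right)\lVert v\rVert_{H^1(\Omega)}$$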

I would appreciate any kind of suggestion. Thanks in advance for your answers.

P.S. The problem is that a priori $\Delta G$ doesn’t have to be in $L^2(\Omega)$, which makes it hard to use the $H^2$ regularity of $U$ (which would solve the problem instantly).

P.P.S. Also posted this question on SE.

dns spoofing – Does subdomain DNS cache poisoning depend on the authoritative name server ignoring requests for non-existing domains?

I’m reading “Introduction to Computer Security”, Pearson New International Edition, 1st edition, by Goodrich and Tamassia.

On the subject of DNS cache poisoning, they mention that a “new” attack was discovered in 2008, so-called “subdomain DNS cache poisoning”. This is how that attack is supposed to play out:

  1. An attacker makes many requests to a name server for non-existing subdomains, say aaaa.example.com, aaab.example.com, aaac.example.com, etc.
  2. The book mentions that these subdomains don’t exist, and that, therefore, the target authoritative name server just ignores the requests.
  3. Simultaneously, the attacker issues spoofed responses to the requests made by the name server under attack, each with a guessed transaction ID (which is randomly chosen and unknown to the attacker).
  4. Because the target authoritative name server ignores requests for non-existing domains, the attacker has opportunity to issue a lot of spoofed responses, making it likely that she will guess the correct transaction ID.
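The claim in step 4 can be made quantitative: DNS transaction IDs are 16 bits, so each spoofed response matches an outstanding query with probability 1/65536. A rough sketch of the guessing odds (ignoring source-port randomization, which modern resolvers add on top):

```python
# Probability that at least one of n spoofed responses carries the
# correct 16-bit transaction ID for a single outstanding query.
def guess_probability(n: int, id_bits: int = 16) -> float:
    id_space = 2 ** id_bits
    return 1.0 - (1.0 - 1.0 / id_space) ** n

# The odds become substantial once the attacker can send tens of
# thousands of spoofed responses, which is why the window during
# which the query stays unanswered matters.
for n in (1_000, 10_000, 65_536):
    print(f"{n:>6} responses -> {guess_probability(n):.3f}")
```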

The book was written in 2011, so something might have changed in the meantime. When I dig for a non-existing subdomain, e.g. aaaa.example.com, I get an NXDOMAIN response:

$ dig @a.iana-servers.net. aaaa.example.com. +norecurse

; <<>> DiG 9.16.16 <<>> @a.iana-servers.net. aaaa.example.com. +norecurse                                  
;; global options: +cmd                              
;; Got answer:            
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 20391                                                 
;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
# ... snip ...

I would assume that any non-authoritative name server would put this result in its negative cache (as it should according to RFC 2308, written in March 1998).

Was it previously common practice for name servers to ignore (= not send a reply to) requests for non-existing subdomains? Has that been replaced with the NXDOMAIN reply that I see today? Is conducting the attack as described above still possible?

Bitcoin transaction fees depend upon market volatility

My transaction, which had been held up for the last few weeks, was confirmed only today. I carried out the transaction with a low fee of around 2 satoshi per byte. But market volatility was high at that moment, so the bids for confirmation were high, and therefore fees were also high.

But as of now, the fee is only 2 sat per byte, because there are very few unconfirmed transactions right now, so fees are correspondingly low. That’s why I think the fee is directly linked to volatility.
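The fee arithmetic behind these numbers is simply rate times transaction size (the ~250 vB size used below is an assumed typical value, not from the post):

```python
# Total fee in satoshi = fee rate (sat per vbyte) * transaction size (vbytes).
def tx_fee_sats(rate_sat_per_vb: float, size_vb: int = 250) -> float:
    return rate_sat_per_vb * size_vb

# At the 2 sat/byte rate mentioned above, the assumed 250 vB
# transaction pays 500 sat; a congested 100 sat/vB rate would cost
# 25,000 sat for the same transaction.
print(tx_fee_sats(2))
print(tx_fee_sats(100))
```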


Can conditional type in typescript depend on a value of itself?

Can I implement such a type in TypeScript?

type Someone = {
  who: string;
}

type Someone = Someone.who === "me" ? Someone & Me : Someone;

Solve equation with constraint that some variables can’t depend on other variables

Take the equation $f(x,y,z) = x y^2 + (1-x)(y-z)^2$. I want to rewrite this in the form $(y+a(x,z))^2 + b(x,z)$. This can easily be solved by hand, with $a(x,z) = -z(1-x)$ and $b(x,z) = x(1-x)z^2$. The actual problem I want to solve is more complicated and can’t as easily be done by hand, but the essence of the problem should be the same: I want Mathematica to solve the equation

$$x y^2 + (1-x)(y-z)^2 = (y+a)^2 + b$$

for $a$ and $b$, with the constraint that $a$ and $b$ can only be functions of $x$ and $z$. Is there any way of imposing this condition? Simply plugging the equation into Solve,

Solve[x*y^2 + (1 - x) (y - z)^2 == (y + a)^2 + b, {a, b}]

obviously does not work, since the equation by itself is not sufficiently constrained. I tried replacing a with a[x, z] and likewise for b, but Mathematica didn’t know what to do with that and just treated a[x, z] as a variable.
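For reference, the by-hand solution quoted above comes from expanding both sides and matching powers of $y$:
$$x y^2+(1-x)(y-z)^2=y^2-2(1-x)z\,y+(1-x)z^2,\qquad(y+a)^2+b=y^2+2a\,y+a^2+b,$$
so matching the linear terms gives $a=-(1-x)z$, and then $b=(1-x)z^2-a^2=x(1-x)z^2$.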

differential equations – NDEigensystem solutions depend on how many solutions I ask for?

Background

I am using NDEigensystem to solve the following eigenvalue problem:

$$\left(\begin{matrix} m & -i\partial_x \\ -i\partial_x & -m \end{matrix}\right)\left(\begin{matrix} u_u(x) \\ u_d(x) \end{matrix}\right)=\lambda\left(\begin{matrix} u_u(x) \\ u_d(x) \end{matrix}\right),\qquad\text{with}\quad m=\begin{cases} -10, & x\leq 0 \\ \phantom{-}10, & x>0 \end{cases}$$

I am specifically looking for solutions that vanish as $x\rightarrow\pm\infty$.

Code Implementation

My code is the following

mass1d[m1_, m2_, x_] := m2 UnitStep[x] - m1 UnitStep[-x]  (* mass function *)

nSols = 10; (* number of solutions *)
{vals, sols} = NDEigensystem[{

(* Diff Equation *)
-I ud'[x] + mass1d[10, 10, x]*uu[x],
-I uu'[x] - mass1d[10, 10, x]*ud[x],

(* Boundary conditions *)
DirichletCondition[uu[x] == 0, True],
DirichletCondition[ud[x] == 0, True]

}, {uu[x], ud[x]}, {x, -3, 3}, nSols]; (* SOLVER *)

Problem/Question

It is my understanding that NDEigensystem returns the solutions with the lowest eigenvalues (in absolute value). Hence, if I run my code asking for 5 solutions and then asking for 10 solutions, I would expect the 5 solutions from the first run to also appear in the second run. This is not what happens. What I obtain is the following:
(image: eigenvalues and first solutions from the two runs)

Also, when I ask for only 5 solutions, I obtain the following warning, which is not present when I ask for 10:

Eigensystem::chnpdef: Warning: there is a possibility that the second matrix SparseArray[Automatic,<<2>>,{1,{{0,4,9,<<74>>,300,302},<<1>>},{0.08 +0. I,-0.01+0. I,0.02 +0. I,0.02 +0. I,-0.01+0. I,0.08 +0. I,<<290>>,0.16 +0. I,0.02 +0. I,0.02 +0. I,0.16 +0. I,0.02 +0. I,0.16 +0. I}}] in the first argument is not positive definite, which is necessary for the Arnoldi method to give accurate results.

Furthermore, this problem has an analytical solution, which is $\lambda = 0$ with $u_u(x) \sim e^{-|x|}$ and $u_d(x) \sim e^{-|x|}$, which tells me that asking for 10 solutions gives the correct answer.

Why do I get this warning only when I ask for 5 solutions, but not when I ask for 10?

Complete code
Above is how I am finding the solutions; below is the complete code, just to show there is no difference in how I am running the two cases:

(* ----- 5 SOLUTIONS ------ *)
nSols = 5; (* number of solutions *)
{vals5, sols5} = NDEigensystem[{
   
   (* Diff Equation *)
   -I ud'[x] + mass1d[10, 10, x]*uu[x],
   -I uu'[x] - mass1d[10, 10, x]*ud[x],
   
   (* Boundary conditions *)
   DirichletCondition[uu[x] == 0, True],
   DirichletCondition[ud[x] == 0, True]
   
   }, {uu[x], ud[x]}, {x, -3, 3}, nSols]; (* SOLVER *)

(* ----- 10 SOLUTIONS ----- *)
nSols = 10; (* number of solutions *)
{vals10, sols10} = NDEigensystem[{
   
   (* Diff Equation *)
   -I ud'[x] + mass1d[10, 10, x]*uu[x],
   -I uu'[x] - mass1d[10, 10, x]*ud[x],
   
   (* Boundary conditions *)
   DirichletCondition[uu[x] == 0, True],
   DirichletCondition[ud[x] == 0, True]
   
   }, {uu[x], ud[x]}, {x, -3, 3}, nSols]; (* SOLVER *)

(* PLOTTING *)
m1 = 10; m2 = 10; l = 3; (* masses and half-width of the domain, as set above *)
GraphicsGrid[
 {{
   BarChart[(ReIm /@ vals5)/Max[{m1, m2}],
    ChartLegends -> {"Real Part", "Imaginary Part"},
    ChartLabels -> {Range[5], None},
    PlotLabel -> "Asking for 5 solutions"],
   Plot[Abs[sols5[[1]]]^2 /. {x -> x0} // Evaluate, {x0, -l, l},
    PlotRange -> All,
    PlotLabel -> "Asking for 5 solutions (1st result)",
    PlotLegends -> {"\!\(\*SubscriptBox[\(u\), \(u\)]\)",
      "\!\(\*SubscriptBox[\(u\), \(d\)]\)"}]
   },
  {
   BarChart[(ReIm /@ vals10)/Max[{m1, m2}],
    ChartLegends -> {"Real Part", "Imaginary Part"},
    ChartLabels -> {Range[10], None},
    PlotLabel -> "Asking for 10 solutions"],
   Plot[Abs[sols10[[1]]]^2 /. {x -> x0} // Evaluate, {x0, -l, l},
    PlotRange -> All,
    PlotLabel -> "Asking for 10 solutions (1st result)",
    PlotLegends -> {"\!\(\*SubscriptBox[\(u\), \(u\)]\)",
      "\!\(\*SubscriptBox[\(u\), \(d\)]\)"}]
   }
  }
 , ImageSize -> Large]

woocommerce offtopic – How to create and change prices dynamically depending on selected variations on the fly?

I know there are a lot of plugins, paid and free, and none of them is truly dynamic…

All the plugins I found for dynamic variations work fine when you have few variations… (50–100)

But my client needs to add two very complex products with more than 23,000 variations… as far as I can see, it is impossible to set the prices manually…

I have a good plugin for generating variations, but even if I generate 23,000 variations… I believe the performance will be horrible if I use the standard WooCommerce way.

So the idea is to create only a few incomplete variations to show the price range and then, on the fly, change the prices dynamically with JavaScript before the product is added to the cart.

What is the best way to achieve this?

  • Should I use a Single product and manually add my own code to create my own variations?
  • Should I use a Variation product, disable the standard calculations, and use my own variation calculations?

Or maybe there is another way to do this?

logic – Must skolem functions depend on unused variables?

When converting formulas to CNF, we replace existentially quantified variables with Skolem functions that depend on surrounding universally quantified variables. For example, in

∀x ∃y p(x,y)

the value of y depends on the particular x, so y will be replaced by f(x).

What if the universally quantified variable was unused in the existentially quantified formula, i.e. is not a free variable in the body of the existential quantifier?

∀x ∃y p(y)

Can we simplify matters by ignoring x completely and just replacing y with a Skolem constant a?
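Stated side by side, the two candidate Skolemizations of the second formula are:

∀x ∃y p(y)  ⇒  ∀x p(f(x))   (Skolem function of the enclosing universal, as in the first example)

∀x ∃y p(y)  ⇒  p(a)   (Skolem constant, ignoring x)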

c++ – Resolving lambdas that depend on each other

In the C++20 codebase that I work on, there are a couple of functions that have helper functions defined as lambdas. I can see the idea of locality for these helpers: by having them there in the function, one doesn’t have to care about them in other parts of the code.

As the code is proprietary, I cannot show the original; the following is a mock example. We have various vectors from the program and corresponding reference vectors. We want to compare actual against expected and then replace the actual values with the expected ones, so that the program continues as planned after the checkpoint.

This is the code example:

#include <cmath>
#include <iostream>
#include <string>
#include <vector>

void checkpoint(std::vector<double> &actual,
                std::vector<double> const &expected, int const context_1,
                int const context_2, int const context_3) {
  auto report_deviation = [&](std::string const &type) {
    std::cout << "Deviation in " << context_1 << " " << context_2 << " "
              << context_3 << " of type " << type << "." << std::endl;
  };

  auto compare_and_replace = [&](double &actual, double const expected,
                                 double const tolerance,
                                 std::string const &type) {
    if (std::abs(actual - expected) > tolerance) {
      report_deviation(type);
    }
    actual = expected;
  };

  auto compare_and_replace_multi =
      [&](std::vector<double> &actual, std::vector<double> const &expected,
          double const tolerance, std::string const &type) {
        for (std::size_t i = 0; i != actual.size(); ++i) {
          compare_and_replace(actual[i], expected[i], tolerance, type);
        }
      };

  compare_and_replace_multi(actual, expected, 0.1, "Widget");
}

My gut is unhappy with this code, and I cannot quite put my finger on why. There are a couple of things that I can rationalize:

  • The nested closures increase the length of the function. In aiming for more cohesion, the original author has created something that feels more complex to me.
  • The lambdas capture with [&], which is the most general form possible. The amount of state captured isn’t specified or limited. Looking more closely, report_deviation() only really needs the context variables, compare_and_replace() only needs report_deviation(), and compare_and_replace_multi() only needs compare_and_replace(). In this code I see a function and three functor classes with coupling among them. And I learned to minimize coupling above all other things.
  • Although there is so much closure capturing going on, the functions still take four arguments each, which they pass on to the next function. In Clean Code, Robert C. Martin states that three arguments are already a lot, and more than that is a clear sign of too much complexity. The codebase has many functions with 20 parameters, so there is a pattern there.
  • I cannot test the comparison logic with mock data. I can only call checkpoint(), which then not only needs the data but also some context. In the actual code, checkpoint() doesn’t get the data directly but in some nested structure, which it unpacks before calling compare_and_replace_multi(). This code therefore feels untestable, and at the moment there are no tests for it.

Clean Code, which I am currently reading, seems to be written from a Java perspective, and that might account for the preference to put everything into classes. But I do see the point that shared function arguments could be state of a class. My colleague argues that a class would need to have some non-trivial invariants to justify making it a class, and here there apparently aren’t any cool invariants like std::vector has.

Still I find that one could extract a comparator class here that would be testable and independent of the other stuff.

#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Context {
  int context_1;
  int context_2;
  int context_3;
};

class Comparator {
public:
  Comparator(double const tolerance, std::string const &type,
             Context const &context)
      : tolerance(tolerance), type(type), context(context) {}

  void compare_and_replace_multi(std::vector<double> &actual,
                                 std::vector<double> const &expected) {
    for (std::size_t i = 0; i != actual.size(); ++i) {
      compare_and_replace(actual[i], expected[i]);
    }
  }

  void compare_and_replace(double &actual, double const expected) {
    if (std::abs(actual - expected) > tolerance) {
      report();
    }
    actual = expected;
  }

  void report() {
    std::cout << "Deviation in " << context.context_1 << " "
              << context.context_2 << " " << context.context_3 << " of type "
              << type << "." << std::endl;
  }

private:
  double tolerance;
  std::string type;
  Context context;
};

void checkpoint(std::vector<double> &actual,
                std::vector<double> const &expected, int const context_1,
                int const context_2, int const context_3) {
  Context context = {context_1, context_2, context_3};
  Comparator comparator(0.1, "Widget", context);
  comparator.compare_and_replace_multi(actual, expected);
}

This would be a translation of the code into a class such that not much of the calling code would have to be changed. But the reporting is actually a different concern, so it should rather be this:

#include <cmath>
#include <iostream>
#include <string>
#include <vector>

class Comparator {
public:
  Comparator(double const tolerance, std::string const &type)
      : tolerance(tolerance), type(type) {}

  bool compare_and_replace_multi(std::vector<double> &actual,
                                 std::vector<double> const &expected) {
    bool has_deviation = false;
    for (std::size_t i = 0; i != actual.size(); ++i) {
      has_deviation |= compare_and_replace(actual[i], expected[i]);
    }
    return has_deviation;
  }

  bool compare_and_replace(double &actual, double const expected) {
    bool has_deviation = false;
    if (std::abs(actual - expected) > tolerance) {
      has_deviation = true;
    }
    actual = expected;
    return has_deviation;
  }

private:
  double tolerance;
  std::string type;
};

struct Context {
  int context_1;
  int context_2;
  int context_3;
};

void report(Context const &context) {
  std::cout << "Deviation in " << context.context_1 << " "
            << context.context_2 << " " << context.context_3 << "."
            << std::endl;
}

void checkpoint(std::vector<double> &actual,
                std::vector<double> const &expected, int const context_1,
                int const context_2, int const context_3) {
  Comparator comparator(0.1, "Widget");
  Context context = {context_1, context_2, context_3};
  if (comparator.compare_and_replace_multi(actual, expected)) {
    report(context);
  }
}

Is this a clear improvement over the previous code? Or am I approaching all this from the wrong angle?

c – Should functions depend on other functions?

According to Uncle Bob and others:

A function should do one thing only and do it well

Accordingly, if printError() only prints errors and prints them well, there is no benefit in reinventing the wheel and reimplementing the same thing in another context.

Moreover, if doSomething() both did something by itself and printed errors, it would no longer do one thing.

Lastly, if printError() prints errors well but you later find a way to print them even better, then improving printError() immediately benefits all the functions that depend on it.

Now, when designing an API, you have to carefully distinguish between improving a function and extending it. For instance, you may find it useful to explain the root cause of an error and provide advice on avoiding it. That is no longer doing one thing but doing something more. The question is then whether explainError() should start by calling printError(), or whether printError() and explainError() should be completely independent, leaving the choice of combining them to the calling context.
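To make the two designs concrete, here is a minimal sketch (in Python for brevity; printError and explainError are the hypothetical functions from the discussion above, and the helpers return strings instead of printing so the variants are easy to compare):

```python
# print_error does one thing: format an error message.
def print_error(msg: str) -> str:
    return f"error: {msg}"

# Option A: explain_error builds on print_error, so any improvement
# to print_error automatically benefits it.
def explain_error_layered(msg: str, cause: str) -> str:
    return print_error(msg) + f" (cause: {cause})"

# Option B: the two stay independent, and the calling context
# chooses how to combine them.
def explain_error_independent(cause: str) -> str:
    return f"cause: {cause}"

def report(msg: str, cause: str) -> str:
    return print_error(msg) + " " + explain_error_independent(cause)
```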

So to summarize: yes, functions should depend on other functions as much as possible but certainly not more.