Often, when we try to solve a PDE problem, we first build a sequence of approximate solutions. To construct an exact solution, we need to show that this sequence (or at least some subsequence) of approximate solutions has a limit, and that this limit is an exact solution of the original PDE problem. Sometimes, to show that we have a convergent subsequence, we use a compactness lemma. Let me explain my question through an example. Say we have two Cauchy problems:
$$
(1) \hspace{0.5cm} u_t(x,t) + \operatorname{div}(u(x,t)) = l \cdot g(u)
$$
$$
(2) \hspace{0.5cm} u(x,0) = u^l_0(x),
$$
and
$$
(3) \hspace{0.5cm} u_t(x,t) + \operatorname{div}(u(x,t)) = 0
$$
$$
(4) \hspace{0.5cm} u(x,0) = u_0(x),
$$
where $x \in A \subseteq \mathbb{R}^d$, $d \geq 1$, $t \in [0,T]$, $u \in \mathbb{R}^n$, $n \geq 1$, and $g$ is a source term.
Think of $(1)$–$(2)$ as the approximate problem, which has strong solutions in the PDE sense in some Banach space $X$. The case that interests me most is $X \equiv H^m$ with $m > \frac{d}{2}$ (a Sobolev space of $L^2$ type). Think of $(3)$–$(4)$ as the original problem we are trying to solve, and suppose we know that $(3)$–$(4)$ has solutions in a space $Y$, with $Y \neq X$.
I would like to know two things:

Is it possible, using some compactness lemma (Helly's, Rellich, Aubin–Lions, …), to show that, as $l \rightarrow 0$, the solution of $(1)$–$(2)$ converges to the solution of $(3)$–$(4)$? And which compactness lemma would work here?
Most of the compactness lemmas I know assume that the approximate solutions and the limit solution live in the same space $X$.
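To make the first question concrete, here is the kind of Aubin–Lions argument I have in mind. This is only a sketch, and the uniform-in-$l$ bounds below are assumptions I have not verified for my problem:

```latex
% Hypothetical Aubin--Lions sketch (the bounds in (a) and (b) are assumed,
% not proven, to hold uniformly in l):
%
% (a) an energy estimate giving a bound independent of l:
%       \| u^l \|_{L^\infty(0,T;\, H^m)} \le C,
% (b) reading the time derivative off equation (1):
%       \| u^l_t \|_{L^\infty(0,T;\, H^{m-1})} \le C',
% (c) Aubin--Lions with the triple
%       H^m \hookrightarrow\hookrightarrow H^{m-1}_{loc} \hookrightarrow H^{m-1}
%     then yields a subsequence u^{l_k} converging strongly in
%       C([0,T];\, H^{m-1}_{loc}),
%     and one would try to pass to the limit in (1)-(2) to recover (3)-(4).
```

But even in this sketch, both the approximations and the limit sit in the same scale of spaces, which is exactly what fails in my setting where the limit lives in $Y \neq X$.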

Would this problem become easier (or not) if the approximate problem $(1)$–$(2)$ has strong solutions in the PDE sense, while the original problem $(3)$–$(4)$ only has weak solutions in the PDE sense?
Any thoughts would be appreciated. And if anyone knows of references in the literature that deal with this type of problem, please share them. Thanks for the help.