Intuition behind the general solution of a linear differential equation?

If $y_1, y_2, \ldots, y_n$ are linearly independent solutions of a homogeneous linear differential equation of order $n$, it is a theorem that $c_1 y_1 + c_2 y_2 + \cdots + c_n y_n$ is its general solution. I have seen this in many places, but I do not understand how or why it is true. Could someone give me some insight into this result?
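For a concrete second-order instance of the theorem: $y'' - y = 0$ has the two linearly independent solutions $y_1 = e^x$ and $y_2 = e^{-x}$, and every solution has the form
$$y = c_1 e^x + c_2 e^{-x}, \qquad c_1, c_2 \in \mathbb{R},$$
because the solution set of a homogeneous linear ODE of order $n$ is an $n$-dimensional vector space, and any $n$ linearly independent solutions form a basis of it.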

linear algebra: normalization of discrete data against a second variable

I have recorded flow and temperature data inside a tube for a couple of hours. (See the attached photo; the horizontal axis should represent time, but here I replaced it with sample counters.) Before starting the recording, I expected the flow to be constant over time. When the recording ended, I noticed that the flow had varied over time, in some way tracking the temperature, and I now suspect that the changes in temperature cause the variations in flow. Therefore, I want to mathematically eliminate the effect of the temperature changes on the flow measurement, but I have no idea how. Or, if possible, I would like to find some correction factors that compensate the measured flow for the temperature variations.
I'm sorry if my question is too basic; I've been away from math for 15 years.
Thanks in advance.

[Figure: variations of flow and temperature]
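A minimal sketch of one standard approach, assuming the temperature effect is approximately linear over the recorded range (the synthetic arrays below merely stand in for the real recording): fit the flow against the temperature by least squares, then subtract the fitted temperature component while keeping the mean level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the recorded series (the real data would be
# loaded from the instrument log instead).
n = 200
temp = 20 + 5 * np.sin(np.linspace(0, 4 * np.pi, n)) + rng.normal(0, 0.3, n)
true_flow = 10.0                                   # the constant flow we expect
flow = true_flow + 0.8 * (temp - temp.mean()) + rng.normal(0, 0.2, n)

# Fit a straight line flow ≈ a*temp + b (assumes the temperature effect
# is approximately linear over the recorded temperature range).
a, b = np.polyfit(temp, flow, 1)

# Temperature-compensated flow: remove the fitted temperature component
# and keep the mean, so the corrected series stays at the right level.
flow_corrected = flow - a * (temp - temp.mean())

print(f"slope a = {a:.3f}")
print(f"raw flow std       = {flow.std():.3f}")
print(f"corrected flow std = {flow_corrected.std():.3f}")
```

The fitted slope `a` is exactly the kind of correction factor asked about: it converts a temperature deviation into the flow change it induces, so subtracting `a * (temp - temp.mean())` undoes it.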

linear algebra – What is the name of this convex cone?

For a non-negative vector $\mathbf{w} = (w_1, \ldots, w_n)$, let $C_{\mathbf{w}}$ denote the convex cone of non-negative vectors $\mathbf{x} = (x_1, \ldots, x_n)$ such that
$$n \sum_{i=1}^{n} w_i x_i \;\geq\; \sum_{i=1}^{n} w_i \cdot \sum_{i=1}^{n} x_i.$$
In other words, the weighted average of $x_1, \ldots, x_n$ with weights $w_1, \ldots, w_n$ is greater than or equal to the arithmetic mean. Is there a name for this cone?
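Whatever the cone's name turns out to be, the defining inequality is easy to test numerically; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def in_cone(w, x):
    """Check whether x lies in C_w, i.e. n * <w, x> >= (sum w) * (sum x)."""
    w, x = np.asarray(w, float), np.asarray(x, float)
    n = len(w)
    return n * (w @ x) >= w.sum() * x.sum()

w = np.array([3.0, 1.0, 1.0])
print(in_cone(w, [2.0, 1.0, 1.0]))  # True: the weight concentrates on the large x_1
print(in_cone(w, [0.0, 1.0, 2.0]))  # False: the mass sits where the weights are small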

Diagonalization of a linear transformation with unknown basis vectors

I have been working on this problem for a few hours, but I still can't solve it. The problem says the following:

Given $B = \{V_1, V_2, V_3\}$ and $B' = \{V_1,\ V_1 + V_2,\ -V_1 - 2V_2 - V_3\}$, bases of a vector space $V$, and $f: V \to V$ a linear transformation such that
$$M_{BB'} = \begin{pmatrix} 5 & -2 & 2 \\ 0 & 1 & a \\ 0 & -1 & -4 \end{pmatrix},$$
find, if possible, $a \in \mathbb{R}$ such that $2V_2 - V_3$ is an eigenvector of $f$.

What I did: I know that for a linear transformation to be diagonalizable, the standard matrix associated with that transformation must also be diagonalizable. However, in this case I am completely unable to construct the standard matrix, because I do not know the components of the vectors $V_1, V_2, V_3$ that form the basis of $V$. I have reviewed my bibliography, because I feel there should be another way to diagonalize a linear transformation without resorting to the standard matrix, but I still haven't found anything. Could someone point me in the right direction to approach this exercise?
Thank you very much in advance.
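A sketch of one route that never needs the standard matrix, assuming the usual convention that $M_{BB'}$ maps the $B$-coordinates of $v$ to the $B'$-coordinates of $f(v)$: the coordinates of $2V_2 - V_3$ in $B$ are $(0, 2, -1)^T$, so
$$M_{BB'} \begin{pmatrix} 0 \\ 2 \\ -1 \end{pmatrix} = \begin{pmatrix} -6 \\ 2 - a \\ 2 \end{pmatrix},$$
and translating these $B'$-coordinates back into the basis $B$ gives
$$f(2V_2 - V_3) = -6\,V_1 + (2-a)(V_1 + V_2) + 2(-V_1 - 2V_2 - V_3) = (-6 - a)V_1 + (-2 - a)V_2 - 2V_3.$$
Requiring this to be a scalar multiple of $2V_2 - V_3$ kills the $V_1$ coefficient, forcing $a = -6$; the remaining coefficients are then $4V_2 - 2V_3$, consistent with eigenvalue $\lambda = 2$.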

linear algebra: a quadratic in two variables can be factored iff the determinant is $0$

(All coefficients are real.)
Show that
$$ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$$
can be factored as
$$(a_1 x + b_1 y + c_1)(a_2 x + b_2 y + c_2) = 0$$
iff
$$\begin{vmatrix} a & h & g \\ h & b & f \\ g & f & c \end{vmatrix} = 0.$$


I have seen people use this as a fact without any proof. For example, in this video (sorry, it is not in English) it is stated as a fact without any explanation, which is really frustrating. I wonder if there is a way to make sense of this using linear algebra or calculus. Any help is greatly appreciated. Thank you!
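A quick numerical sanity check of one direction of the criterion (a minimal SymPy sketch; the two lines are chosen arbitrarily): expand a product of two lines, read off the conic coefficients, and confirm that the determinant vanishes.

```python
import sympy as sp

x, y = sp.symbols("x y")

# A deliberately degenerate conic: a product of two lines.
conic = sp.expand((x + 2*y + 3) * (2*x - y + 1))

# Read off the coefficients of ax^2 + 2hxy + by^2 + 2gx + 2fy + c.
p = sp.Poly(conic, x, y)
a, b, c = p.coeff_monomial(x**2), p.coeff_monomial(y**2), p.coeff_monomial(1)
h = p.coeff_monomial(x*y) / 2
g = p.coeff_monomial(x) / 2
f = p.coeff_monomial(y) / 2

M = sp.Matrix([[a, h, g], [h, b, f], [g, f, c]])
print(M.det())  # 0, as the factorisation criterion predicts
```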

linear algebra: proving a matrix is PSD

This question arises from the outer-product form of the Cholesky factorization.

If the matrix
$$M = \begin{pmatrix} \alpha & \vec{q}^{\,T} \\ \vec{q} & N \end{pmatrix}$$
is positive semi-definite with $\alpha > 0$, then the matrix
$$A := N - \frac{1}{\alpha} \vec{q}\, \vec{q}^{\,T}$$
is also positive semi-definite.

I have shown that the matrix $A$ is symmetric, which is easy, but I don't know how to prove it's PSD. Any hint?
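A sketch of the standard Schur-complement argument: for any vector $v$ of the same dimension as $N$, set $t = -\vec{q}^{\,T} v / \alpha$ and plug $x = (t, v^T)^T$ into the quadratic form of $M$:
$$0 \;\le\; x^T M x \;=\; \alpha t^2 + 2t\,\vec{q}^{\,T} v + v^T N v \;=\; v^T N v - \frac{(\vec{q}^{\,T} v)^2}{\alpha} \;=\; v^T A v.$$
This particular $t$ is the minimizer of the quadratic in $t$, which is why the cross term collapses into exactly the Schur complement.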

linear algebra: consider a linear transformation L: R^3 -> R^3 with eigenvalues −1, 0, 1 and corresponding eigenvectors v1, v2, v3.

linear algebra – Is there literature on the study of "eigenmatrices"?

Starting with a disclaimer: I will not be able to describe this with the correct terminology, because I am trying to find literature that I am not sure exists. I will try to explain what I mean as I go along.

Let's say I have a rank-4 (in the tensor sense: four indices) "matrix" $M_{ij,kl}$, for which the analogue of a vector is a matrix in the ordinary sense, $A_{ij}$ (I am writing in index notation). The multiplication between these two objects is as follows:
$$(M * A)_{ij} = \sum_{kl} M_{ij,kl} A_{kl}.$$

Have these objects been studied much? For example, is there a notion of an eigenmatrix satisfying
$$\sum_{kl} M_{ij,kl} A_{kl} = \lambda A_{ij}?$$
And do these eigenmatrices have nice properties, such as spanning whatever notion of space $M$ describes? Is there a notion of determinant?

Sorry if this is explained badly.
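One way to see it: such an $M$ is an ordinary linear operator on the $n^2$-dimensional space of $n \times n$ matrices, so the usual spectral theory applies after flattening the index pairs. A minimal numerical sketch (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# A rank-4 array M[i, j, k, l] acting linearly on n x n matrices A.
M = rng.normal(size=(n, n, n, n))

# Flatten the action: (M * A)_{ij} = sum_{kl} M_{ijkl} A_{kl} becomes
# ordinary matrix-vector multiplication once both index pairs are vectorised.
M_flat = M.reshape(n * n, n * n)

eigvals, eigvecs = np.linalg.eig(M_flat)

# Each eigenvector reshapes back into an n x n "eigenmatrix" A with M * A = lambda * A.
A = eigvecs[:, 0].reshape(n, n)
lhs = np.einsum("ijkl,kl->ij", M, A)
print(np.allclose(lhs, eigvals[0] * A))  # True
```

The determinant question then inherits an answer from the flattened form: det(M) can be taken as the determinant of the $n^2 \times n^2$ matrix `M_flat`.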

linear algebra: characterize all 2×2 matrices with real entries that have eigenvalues λ1 = c and λ2 = −c, for c > 0.

Characterize all 2×2 matrices with real entries that have eigenvalues λ1 = c and λ2 = −c, for c > 0. Use your result to generate a matrix that has eigenvalues −1 and 1 and contains no zero entries.

Where do I start with this? I know how to calculate eigenvalues/eigenvectors and all that, but am I finding the matrix $A$ from which these eigenvalues come, as in $(A - \lambda I)x = 0$? Or am I finding $\lambda I$?
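One hedged starting point: a 2×2 matrix has eigenvalues $\pm c$ exactly when its trace is $\lambda_1 + \lambda_2 = 0$ and its determinant is $\lambda_1 \lambda_2 = -c^2$, which characterizes the family as
$$A = \begin{pmatrix} p & q \\ r & -p \end{pmatrix} \quad \text{with} \quad p^2 + qr = c^2.$$
For $c = 1$, taking $p = 2$, $q = 3$, $r = -1$ gives $\begin{pmatrix} 2 & 3 \\ -1 & -2 \end{pmatrix}$, which has trace $0$, determinant $-1$, and no zero entries.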

stochastic processes: variance of a random variable obtained from a linear transformation

Edit: I have revised this question as suggested.

Suppose there are $N$ realizations of a Gaussian process, denoted as vectors $\mathbf{z}_j \in \mathbb{R}^{n}$ for $j = 1, \ldots, N$. Let $y$ be a random variable such that $y = \sum_{j=1}^{N} (\mathbf{B} \mathbf{z}_j)(i)$, where $\mathbf{B}$ is a unitary matrix. What is the variance of $y$?

Explanation: boldface denotes a vector or matrix. $(\mathbf{B}\mathbf{x})(i)$ denotes the $i$-th entry of the vector $\mathbf{B}\mathbf{x}$.
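A sketch under hypotheses the question does not state and which are assumed here: the realizations $\mathbf{z}_j$ are independent with mean zero and common covariance $\Sigma$. Writing $\mathbf{b}_i^T$ for the $i$-th row of $\mathbf{B}$, each term is the scalar $\mathbf{b}_i^T \mathbf{z}_j$, so
$$y = \sum_{j=1}^{N} \mathbf{b}_i^T \mathbf{z}_j, \qquad \operatorname{Var}(y) = \sum_{j=1}^{N} \mathbf{b}_i^T \Sigma\, \mathbf{b}_i = N\, \mathbf{b}_i^T \Sigma\, \mathbf{b}_i.$$
In the special case $\Sigma = I_n$, unitarity of $\mathbf{B}$ gives $\|\mathbf{b}_i\| = 1$ and hence $\operatorname{Var}(y) = N$.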