linear algebra: how many 4 by 4 real matrices A, up to similarity, satisfy $(A-I)(A^2+I)^2 = 0$?

So I know that the minimal polynomial $m(x)$ must divide $(x-1)(x^2+1)^2$. Since we are working with 4 by 4 real matrices,

$m(x) = (x-1)$ or $m(x) = x^2+1$ or $m(x) = (x^2+1)^2$ or $m(x) = (x-1)(x^2+1)$.

Do I just do a case-by-case analysis, or is there a way to determine which of the four really is the minimal polynomial?

In addition, if it is case by case, we would have:

(I) $m(x) = (x-1)$ corresponds to $A$ being the 4 by 4 identity matrix

(II) $m(x) = (x^2+1)^2$ corresponds to the companion matrix of $x^4 + 2x^2 + 1$

(III) $m(x) = (x^2+1)$ means that we have two blocks, each the companion matrix of $x^2+1$

(IV) $m(x) = (x-1)(x^2+1)$ means that we have one block that is the companion matrix of $x^3 - x^2 + x - 1$ and another block corresponding to $x-1$

Am I on the right track?
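For what it's worth, here is a small numerical sanity check (a sketch in Python; the representative matrices are built from the four cases above, everything else is my own): construct one matrix per candidate minimal polynomial and verify that it satisfies $(A-I)(A^2+I)^2 = 0$.

import numpy as np

def companion(coeffs):
    # Companion matrix of x^n + c_{n-1} x^{n-1} + ... + c_0, with coeffs = [c_0, ..., c_{n-1}].
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(coeffs, dtype=float)
    return C

I4 = np.eye(4)
candidates = {
    "(I)   m(x) = x-1":          I4,
    "(II)  m(x) = (x^2+1)^2":    companion([1, 0, 2, 0]),                 # companion of x^4 + 2x^2 + 1
    "(III) m(x) = x^2+1":        np.kron(np.eye(2), companion([1, 0])),   # two copies of the 2x2 companion of x^2+1
    "(IV)  m(x) = (x-1)(x^2+1)": np.block([
        [companion([-1, 1, -1]), np.zeros((3, 1))],                       # companion of x^3 - x^2 + x - 1
        [np.zeros((1, 3)),       np.ones((1, 1))],                        # 1x1 block for x-1
    ]),
}

for name, A in candidates.items():
    residual = (A - I4) @ np.linalg.matrix_power(A @ A + I4, 2)
    print(name, "-> max |entry| of (A-I)(A^2+I)^2 =", np.abs(residual).max())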

numerical linear algebra – dominant eigenvector of a very large sparse matrix

My question is about an old problem. I want to compute the dominant eigenvector of a very large square matrix. The matrix is a sparse transition matrix (A) of a Markov chain. I have read some articles about the power method and the Arnoldi iteration; however, in both cases, computing A·x (where x is the initial approximation) produces many unwanted '0' or 'NaN' values even in the first iteration (I used double precision). I wonder if there is another toolbox or method to approximate the dominant eigenvector of a square matrix with more than 200,000 rows. All the elements of the matrix are positive, but the matrix is not symmetric.
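For what it's worth, here is a minimal sketch of the power method with renormalization at every step (my own code, not from any of the articles mentioned; the 1000 × 1000 random matrix only stands in for the 200,000-row transition matrix). Renormalizing each iterate keeps its scale near 1, which is the usual way to avoid the underflow to 0 or NaN described above; scipy.sparse.linalg.eigs(A, k=1, which='LM') would be another option.

import numpy as np
import scipy.sparse as sp

def dominant_eigenvector(A, tol=1e-10, max_iter=1000):
    # Power iteration with 1-norm renormalization at every step.
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                      # strictly positive starting vector
    for _ in range(max_iter):
        y = A @ x
        y /= np.linalg.norm(y, 1)                # rescale before the next product
        if np.linalg.norm(y - x, 1) < tol:
            return y
        x = y
    return x

# Small stand-in example: a random sparse matrix made column-stochastic.
A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0) + 1e-3 * sp.eye(1000)
A = A @ sp.diags(1.0 / np.asarray(A.sum(axis=0)).ravel())
v = dominant_eigenvector(A)
print(v[:5], v.sum())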

linear algebra – Nonlinear pendulum solution

I was looking at a solution of the nonlinear pendulum equation, and I was wondering why we multiply the equation by dθ/dt when we could just integrate directly. I didn't find an answer on the internet except:

"Equation (1), although simple in appearance, is quite difficult to solve due to the nonlinearity of the sin θ term. To obtain the exact solution of Eq. (1), this equation is multiplied by dθ/dt."

Can anyone explain to me why we do this? Am I missing something?
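For reference, and assuming equation (1) is the usual form $\ddot\theta + \frac{g}{L}\sin\theta = 0$ (my assumption, since the equation itself is not quoted above), the point of multiplying by $d\theta/dt$ is that it turns the left-hand side into an exact time derivative:

$$\frac{d\theta}{dt}\,\frac{d^2\theta}{dt^2} + \frac{g}{L}\sin\theta\,\frac{d\theta}{dt}
= \frac{d}{dt}\!\left[\frac{1}{2}\left(\frac{d\theta}{dt}\right)^{2} - \frac{g}{L}\cos\theta\right] = 0,$$

which integrates once to the energy relation $\frac{1}{2}(d\theta/dt)^2 - \frac{g}{L}\cos\theta = \text{const}$. A direct integration with respect to $t$ is not possible because $\int \sin\theta(t)\,dt$ cannot be evaluated without already knowing $\theta(t)$.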

gui – mapping a decibel range to linearly audible intervals

We use dBvalue = 20 * log10( volumeAsPercent ), where volumeAsPercent is in the range (0.001 .. 1).

(That's a bit annoying because the actual engine, OpenAL, uses 0..1 for its gain, so we have to convert back using volumeAsPercent = 2^(dBvalue/6).)

I have not tested this extensively myself, so I cannot confirm that setting the value 10% higher makes it sound 10% louder.

Here are the curves, plotted.
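For concreteness, here is a small sketch (my own, not from the original post) of the two conversions quoted above, which also shows where they disagree: the exact inverse of dBvalue = 20 * log10(volumeAsPercent) is 10^(dBvalue/20), while 2^(dBvalue/6) is the common "6 dB per doubling" approximation (20 * log10(2) ≈ 6.02 dB).

import math

def percent_to_db(volume_as_percent):        # 0.001 .. 1  ->  -60 .. 0 dB
    return 20.0 * math.log10(volume_as_percent)

def db_to_gain_exact(db_value):              # exact inverse: 10^(dB/20)
    return 10.0 ** (db_value / 20.0)

def db_to_gain_approx(db_value):             # the 2^(dB/6) form from the post
    return 2.0 ** (db_value / 6.0)

for v in (0.001, 0.01, 0.1, 0.5, 1.0):
    db = percent_to_db(v)
    print(f"{v:5.3f} -> {db:7.2f} dB -> exact {db_to_gain_exact(db):.4f}, approx {db_to_gain_approx(db):.4f}")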

linear algebra – operator norm of a matrix square root versus the original

If I have a non-symmetric matrix whose operator norm is $\leq 1$ and I take its square root, does its operator norm stay below $1$?

More formally, I want to know whether there is always at least one square root for which this is the case, even if it is not true for all of them, under reasonable assumptions that guarantee a square root exists.

Specifically, suppose I have a non-symmetric square matrix $A$ with $\|A\|_2 \leq 1$, where $\|\cdot\|_2$ denotes the operator norm (maximum singular value). Is there always a square matrix $B$ such that $B^2 = A$ and $\|B\|_2 \leq 1$?

I am willing to assume $A$ is diagonalizable, although not unitarily diagonalizable. I am also willing to assume $A$ is of the form $A' + \Delta$ for some small random perturbation $\Delta$ and some matrix $A'$. Without at least one of these assumptions, the square root may not exist at all.

I would be interested in a proof or counterexample under any non-empty subset of these assumptions. For example, a counterexample showing that the statement can be false when we only assume $A$ is diagonalizable would be useful to me.
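Not an answer, but here is the kind of numerical probe I would run first (my own sketch; it examines only the principal square root computed by scipy.linalg.sqrtm, so it cannot settle the "at least one square root" version of the question).

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
worst = 0.0
for _ in range(200):
    A = rng.standard_normal((4, 4))
    A /= np.linalg.norm(A, 2)          # scale so the operator norm is exactly 1
    B = sqrtm(A)                       # principal square root (may be complex)
    worst = max(worst, np.linalg.norm(B, 2))
print("largest ||B||_2 seen for the principal root:", worst)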

Linear programming: why does the maximum matching algorithm fall into the category of fill-reducing ordering algorithms?

I understand that "maximum matching" (or "maximum transversal") algorithms preorder a matrix to improve numerical stability. In Timothy Davis's book Direct Methods for Sparse Linear Systems (2006), however, this algorithm appears in Chapter 7, which is entitled "Fill-Reducing Orderings". In his more recent survey, A survey of direct methods for sparse linear systems (2016), maximum matching is again placed in Section 8, also entitled "Fill-Reducing Orderings".

Until now, I had the impression that reordering algorithms could be classified into 3 classes:

  • for numerical stability: maximum transversal, etc.
  • for fill reduction: AMD, etc.
  • for reducing work or increasing parallelism: BTF, BBD, etc.

I have trouble understanding how the 3 classes above can all be put into a single category called "fill reduction" …
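For concreteness, here is a small sketch (mine, not from Davis's book or survey) of what a maximum transversal produces, using scipy.optimize.linear_sum_assignment as a simple stand-in for a dedicated sparse matching routine: a permutation that puts a nonzero entry in every diagonal position.

import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([[0., 1., 0.],
              [2., 0., 3.],
              [4., 0., 0.]])

# Find an assignment of rows to columns that avoids zero entries wherever
# possible (a maximum transversal), then permute the columns accordingly.
cost = (A == 0).astype(float)              # penalize picking a zero entry
rows, cols = linear_sum_assignment(cost)   # rows is [0, 1, 2]; cols[i] = column matched to row i
A_perm = A[:, cols]                        # apply the column permutation
print(A_perm)                              # the diagonal is now zero-free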

Numerical linear algebra: matrix equations motivated by a generalization of the $QR$ decomposition

The following problem is motivated by considering a certain generalization of the $QR$ decomposition of a matrix.

Let $A, B \in M_n(\mathbb{R})$. Can we always find orthogonal $Q_1, Q_2 \in M_n(\mathbb{R})$ and upper triangular $R_1, R_2 \in M_n(\mathbb{R})$ such that

\begin{equation}
A = Q_1 R_1 + Q_2 R_2 \quad \text{and} \quad B = Q_1 Q_2 + Q_2 R_1?
\end{equation}

Algorithms: upper (or lower) envelope of a set of linear functions

1) Given a set of single-variable linear functions, is there an algorithm with the lowest possible complexity for finding their upper (or lower) envelope? That is, we must find all the breakpoints. I searched and found that there is an algorithm with $O(n \log n)$ complexity; nevertheless, I need some references.

2) Can the complexity be reduced if the slopes of all the lines are positive?
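For reference, a sketch of the standard $O(n \log n)$ approach (my own code; the algorithm is the usual sort-by-slope / stack method, sometimes called the convex hull trick): sort the lines by slope, then sweep them onto a stack, popping any line that can no longer appear on the upper envelope.

def upper_envelope(lines):
    # lines: list of (slope, intercept) pairs describing y = m*x + b.
    # Returns (hull, breaks): the lines on the upper envelope ordered by slope,
    # and the x-coordinates of the breakpoints between consecutive hull lines.
    lines = sorted(set(lines))                      # by slope, then intercept

    def cross(l1, l2):                              # x where l1 and l2 intersect
        (m1, b1), (m2, b2) = l1, l2
        return (b2 - b1) / (m1 - m2)

    hull, breaks = [], []
    for line in lines:
        while hull:
            if hull[-1][0] == line[0]:              # equal slopes: keep the larger intercept
                hull.pop()
                if breaks:
                    breaks.pop()
            elif breaks and cross(hull[-1], line) <= breaks[-1]:
                hull.pop()                          # previous top line never wins again
                breaks.pop()
            else:
                break
        if hull:
            breaks.append(cross(hull[-1], line))
        hull.append(line)
    return hull, breaks

print(upper_envelope([(1, 0), (0, 2), (-1, 5), (2, -4)]))
# ([(-1, 5), (1, 0), (2, -4)], [2.5, 4.0])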

linear algebra: bounding the spectral gap of a simple symmetric matrix

I have a seemingly innocent linear algebra problem that I cannot solve, and I hope you can offer an idea. Here is the description: let $\mathbf{a} = (a_1, a_2, \dots, a_d)^{T}$ be a positive probability vector, that is, $\Vert \mathbf{a} \Vert_1 = 1$ and $a_i > 0$ for all $i$. Let the matrix $A$ be defined as follows: $$ A = \textrm{diag}(\mathbf{a}) - \mathbf{a}\mathbf{a}^{T} $$ where $\textrm{diag}(\mathbf{a})$ denotes the diagonal matrix whose $i$-th diagonal entry is $a_i$. It is straightforward to show that $\mathbf{1}_d$, the all-ones vector of dimension $d$, is an eigenvector of $A$ with eigenvalue $0$. Gershgorin's circle theorem also shows that all of $A$'s eigenvalues are greater than or equal to $0$. My question is:

What is the smallest nonzero eigenvalue of $A$?

I did the calculation for $d = 3$ and realized that there may not be a simple analytical formula, so a good lower bound would also be appreciated.
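In case it helps, here is the quick numerical experiment I would run (my own sketch, not part of the question): compute the smallest nonzero eigenvalue of $A = \textrm{diag}(\mathbf{a}) - \mathbf{a}\mathbf{a}^{T}$ for random positive probability vectors and compare it with $\min_i a_i$, one natural candidate for a lower bound.

import numpy as np

rng = np.random.default_rng(0)
for d in (3, 5, 10):
    for _ in range(3):
        a = rng.random(d) + 0.01
        a /= a.sum()                            # positive probability vector
        A = np.diag(a) - np.outer(a, a)
        eigs = np.sort(np.linalg.eigvalsh(A))   # A is symmetric; eigenvalues ascending
        gap = eigs[1]                           # eigs[0] is the zero eigenvalue (eigenvector 1_d)
        print(f"d={d}: smallest nonzero eigenvalue = {gap:.5f}, min(a) = {a.min():.5f}")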

Thank you very much!

linear programming: determining the approximation factor of a greedy algorithm

Suppose we have n dishes of food, each with an associated cost c, and we have guests, each of whom has a certain set of preferences.
We want to choose a menu that minimizes the cost while satisfying at least one preference of each guest.

I implemented a simple greedy algorithm that ranks each element by its ratio $(\text{number of people satisfied}) / (\text{cost})$. For example, if element "a" satisfies 3 people when chosen and costs 10, its ratio would be $3/10$. I then pick elements in non-increasing order of this ratio until there are no more people left to satisfy.

How do I find the approximation factor of this algorithm? I think it should be around 2, since it is very similar to a greedy approach to the knapsack problem, but I have no idea how to prove it.
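For reference, here is a compact sketch of the greedy rule as I read it (my own code and example data; the name greedy_menu is made up). It is essentially the classical greedy algorithm for weighted set cover, whose standard guarantee is logarithmic ($H_n$) rather than a constant, which may be the relevant comparison point when trying to prove a bound.

def greedy_menu(dishes, guests):
    # dishes: {name: (cost, set of guests the dish satisfies)}; guests: set of guest ids.
    uncovered = set(guests)
    menu, total_cost = [], 0.0
    while uncovered:
        # pick the dish with the best (newly satisfied guests) / cost ratio
        name, (cost, covers) = max(
            dishes.items(),
            key=lambda kv: len(kv[1][1] & uncovered) / kv[1][0],
        )
        newly = covers & uncovered
        if not newly:
            raise ValueError("some guests cannot be satisfied by any dish")
        menu.append(name)
        total_cost += cost
        uncovered -= newly
    return menu, total_cost

dishes = {"a": (10, {1, 2, 3}), "b": (4, {3, 4}), "c": (6, {1, 5})}
print(greedy_menu(dishes, {1, 2, 3, 4, 5}))   # e.g. (['b', 'c', 'a'], 20.0)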