## Linear algebra – the matrix of the bilinear form $T = e^1 \otimes e^2 - e^2 \otimes e^1 + 2e^2 \otimes e^2$

Let $$B = ((1,2)^T, (1,3)^T)$$ be a basis of $$V = \Bbb R^2$$.

Find the dual basis $$B^* = (e^1, e^2)$$.

Find the matrix of the bilinear form $$T = e^1 \otimes e^2 - e^2 \otimes e^1 + 2e^2 \otimes e^2$$ with respect to the canonical basis.

Well, I would usually write the canonical basis as $$K = (\epsilon_1, \epsilon_2) = ((1,0)^T, (0,1)^T)$$ and its dual basis as $$K^* = (\epsilon^1, \epsilon^2)$$, I think.

The defining conditions for the dual vectors / covectors are:

$$e^i(e_i) = 1 \quad \text{and} \quad e^i(e_j) = 0 \text{ for } i \neq j$$

So we can easily obtain the dual basis as $$B^* = ((3,-1)^T, (-2,1)^T)$$, and the conditions are met.
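As a quick numerical sanity check (my own sketch, not part of the original problem): the dual covectors are the rows of the inverse of the matrix whose columns are the basis vectors of $B$.

```python
import numpy as np

# Columns of A are the basis vectors of B
A = np.array([[1.0, 1.0],
              [2.0, 3.0]])

# The dual covectors e^1, e^2 are the rows of A^{-1}
dual = np.linalg.inv(A)

# Duality condition: e^i(e_j) = 1 if i == j, else 0
assert np.allclose(dual @ A, np.eye(2))
assert np.allclose(dual, [[3.0, -1.0], [-2.0, 1.0]])  # matches (3,-1) and (-2,1)
```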

But I'm not sure about the expression for $$[T]_K$$. Probably I would just put the coefficients of the given bilinear form into a matrix, obtaining something like this:

$$[T]_X = \bigl(\begin{smallmatrix} 0 & 1 \\ -1 & 2 \end{smallmatrix}\bigr)$$

But is $$X = B$$ or $$X = K$$ here, if we are talking about the bases? When I see a problem mention the canonical basis, I would guess it is just $$[T]_K$$; but we have the basis $$B$$ defined, and $$T$$ is expressed through the $$e^i$$, not the $$\epsilon^i$$.

So if $$[T]_B = \bigl(\begin{smallmatrix} 0 & 1 \\ -1 & 2 \end{smallmatrix}\bigr)$$, how could I get $$[T]_K$$?

My assumption is $$[T]_K = (A^{-1})^T [T]_B A^{-1}$$, where the columns of $$A$$ are the vectors of $$B$$.

Is this correct, or am I misunderstanding it entirely?
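A quick numerical check of this change-of-basis formula (a sketch under my assumptions: the columns of $A$ are the $B$-vectors in canonical coordinates, and $[T]_B$ holds the coefficients of $T$ read off in the dual basis):

```python
import numpy as np

A = np.array([[1.0, 1.0],      # columns: the B-basis vectors (1,2)^T and (1,3)^T
              [2.0, 3.0]])
T_B = np.array([[0.0, 1.0],    # coefficient matrix of T in the dual basis:
                [-1.0, 2.0]])  # T_12 = 1, T_21 = -1, T_22 = 2

Ainv = np.linalg.inv(A)
T_K = Ainv.T @ T_B @ Ainv      # candidate formula [T]_K = (A^{-1})^T [T]_B A^{-1}

# Consistency: changing back to the basis B must recover [T]_B
assert np.allclose(A.T @ T_K @ A, T_B)
print(T_K)                     # works out to [[8, -3], [-5, 2]]
```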

## Linear algebra – equivalence relation on matrices

Consider $$M_{n \times n}$$, the $$n \times n$$ matrices over some field $$F$$. Define an equivalence relation $$A \sim B$$ if there is an invertible $$C$$ such that $$A = CB$$. What are the equivalence classes of $$\sim$$ in $$M_{n \times n}$$?

This question has to do with row echelon form, I think. My intuition is that the equivalence classes are just the matrices of a particular rank, although I am not sure of this. Clearly, each matrix is in the same equivalence class as its row echelon form, so we can consider only matrices in echelon form; where to go from here?
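A small numerical experiment (my own sketch, not from the question): left-multiplication by an invertible $C$ preserves the row space, and two matrices are left-equivalent exactly when their row spaces coincide, which can be tested by stacking them and comparing ranks. This also shows that rank alone is not a fine enough invariant:

```python
import numpy as np

def left_equivalent(A, B):
    # A = C B with C invertible  <=>  A and B have the same row space,
    # i.e. stacking them does not increase the rank
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.vstack([A, B]))

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])

# Equal rank alone is not enough: these rank-1 matrices are not left-equivalent
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B) == 1
assert not left_equivalent(A, B)

# Left-multiplying by an invertible C stays inside the same class
C = np.array([[2.0, 1.0], [1.0, 1.0]])  # det = 1, invertible
assert left_equivalent(C @ A, A)
```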

## Algebra precalculus – how to calculate the area of the triangle formed by perpendicular lines

1. In the following figure, the lines have slopes of 3 and 5. The lines intersect at $(10,15)$. What is the distance between the x-intercepts of the lines?

The equation of the line with slope 3 is $$y = 3x + b$$, and with the point $$(10,15)$$ we get $$15 = 3(10) + b$$, so the equation is $$y = 3x - 15$$.

Similarly, for slope 5 the equation is $$y = 5x - 35$$.

The x-intercepts are $$x = 5$$ for $$y = 3x - 15$$ and $$x = 7$$ for $$y = 5x - 35$$. The distance between the two is 2. Is this correct?
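The arithmetic checks out numerically (a throwaway sketch of my own):

```python
def x_intercept(m, x0, y0):
    # x-intercept of the line with slope m through (x0, y0): solve 0 = m*x + (y0 - m*x0)
    return x0 - y0 / m

x1 = x_intercept(3, 10, 15)  # 5.0
x2 = x_intercept(5, 10, 15)  # 7.0
print(abs(x2 - x1))          # 2.0
```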

1. In the following figure, the two lines are perpendicular and intersect at $$(6,8)$$. The y-intercepts of the lines have a sum of zero. Find the area of the shaded region.

The equations of the lines are $$y = mx + b$$ and $$y = -\frac{1}{m}x - b$$, since the y-intercepts are negatives of each other. Plugging in the point, I have:

$$8 = 6m + b$$ and $$8 = -\frac{6}{m} - b$$

If I set these two expressions equal, I get

$$6m + b = -\frac{6}{m} - b$$

$$6m = -\frac{6}{m}$$

$$6m^2 = -6$$

$$6m^2 + 6 = 0$$

$$6(m^2 + 1) = 0$$

Here I am stuck, because $$m^2 = -1$$ has no real root, so I'm pretty sure I set this problem up incorrectly.
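One place to look (my own sketch, not from the original post): in the step $6m + b = -\frac{6}{m} - b$, the two $b$ terms do not cancel. Adding the two original equations eliminates $b$ cleanly and leaves a solvable quadratic:

```python
import math

# The two conditions: 8 = 6m + b  and  8 = -6/m - b.
# Adding them eliminates b:
#   16 = 6m - 6/m   =>   6m^2 - 16m - 6 = 0   =>   3m^2 - 8m - 3 = 0
disc = (-8.0) ** 2 - 4 * 3.0 * (-3.0)          # discriminant = 100 > 0: real roots
m1 = (8.0 + math.sqrt(disc)) / 6.0             # 3.0
m2 = (8.0 - math.sqrt(disc)) / 6.0             # -1/3
assert math.isclose(m1 * m2, -1.0)             # negative reciprocals: perpendicular

b = 8 - 6 * m1                                 # b = -10, so the y-intercepts are -10 and +10
assert math.isclose(-(1 / m1) * 6 - b, 8.0)    # the other line also passes through (6, 8)
```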

## Linear algebra – decomposition of a matrix into a specific form

Can we prove that any real $$d \times d$$ matrix $$A$$ can be decomposed into a finite product of matrices of the following form?

$$A = \prod_{i=1}^{n} (I + R_i)$$

where $$I$$ is the identity matrix and $$\operatorname{rank}(R_i) = 1$$.

As far as I know, if an $$LDU$$ or $$LU$$ decomposition of $$A$$ exists, then we can easily find each $$R_i$$.
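For an invertible $A$ with nonzero pivots, the elementary matrices of Gaussian elimination already have this shape: a row scaling is $I + (p-1)e_je_j^T$ and a row addition is $I + c\,e_ie_j^T$, both $I + R$ with $\operatorname{rank}(R) \le 1$. A numerical sketch of my own (it assumes no pivoting is needed):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],   # an invertible matrix whose pivots are all nonzero
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
d = A.shape[0]

M = A.copy()
factors = []                     # A will equal the ordered product of these
for j in range(d):
    p = M[j, j]                  # pivot (nonzero here; no pivoting implemented)
    E = np.eye(d); E[j, j] = 1.0 / p
    inv = np.eye(d); inv[j, j] = p          # inverse op: I + (p-1) e_j e_j^T
    factors.append(inv)
    M = E @ M
    for i in range(d):
        if i != j:
            c = M[i, j]
            E = np.eye(d); E[i, j] = -c
            inv = np.eye(d); inv[i, j] = c  # inverse op: I + c e_i e_j^T
            factors.append(inv)
            M = E @ M

# M has been reduced to I, so A = F_1 F_2 ... F_k with each F = I + R, rank(R) <= 1
P = np.eye(d)
for F in factors:
    P = P @ F
assert np.allclose(P, A)
assert all(np.linalg.matrix_rank(F - np.eye(d)) <= 1 for F in factors)
```

(Some factors may degenerate to $R = 0$ when an entry is already in place; those can simply be dropped from the product.)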

## Algebra precalculus – interrelated constraints through linear combinations

Given real variables $$x$$ and $$y$$ such that

$$\left| x \right| \le \alpha, \quad \left| y \right| \le \alpha,$$

where $$\alpha$$ is a positive constant.
I want to determine the limits of $$u, v$$, where $$u, v$$ are determined by a combination of $$x, y$$ as follows:
\begin{align} u &= x - y, \\ v &= y. \end{align}
My approach is:
\begin{align} -2\alpha &\le u = x - y \le 2\alpha, \\ -\alpha &\le v = y \le \alpha. \end{align}
Therefore, the limits of $$u, v$$ are:
$$\left| u \right| \le 2\alpha, \quad \left| v \right| \le \alpha.$$

I tried to verify my result by transforming the original equations into:
\begin{align} x &= u + v, \\ y &= v. \end{align}

Then, using the derived restrictions, we can conclude
$$\left| x \right| \le 3\alpha, \quad \left| y \right| \le \alpha,$$

which is different from the original statement.
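Each derived bound is correct on its own, but the two are not simultaneously attainable, which is why the round trip inflates the bound on $x$. A sampling sketch of my own shows this:

```python
import numpy as np

alpha = 1.0
rng = np.random.default_rng(1)

# Sample (x, y) from the original region |x| <= alpha, |y| <= alpha
x = rng.uniform(-alpha, alpha, 100_000)
y = rng.uniform(-alpha, alpha, 100_000)
u, v = x - y, y

# The derived bounds do hold ...
assert np.all(np.abs(u) <= 2 * alpha)
assert np.all(np.abs(v) <= alpha)

# ... but u and v are not independent: x = u + v never exceeds alpha in magnitude,
# whereas |u| <= 2*alpha and |v| <= alpha treated separately would allow 3*alpha
assert np.all(np.abs(u + v) <= alpha)
```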

## Textbooks of calculus, linear algebra, and probability and statistics for non-mathematics majors

What are the most popular calculus, linear algebra, and probability and statistics textbooks for majors other than mathematics in the United States?

## Simplifying a transfer function after the bilinear substitution

I ran into problems trying to simplify this:

Given
$$H(s) = \frac{0.9661 s^4}{s^4 + 8.824 s^3 + 44.86 s^2 + 105.6 s + 254.2}$$

now substitute $$s = \frac{1 - z^{-1}}{1 + z^{-1}}$$ into $$H(s)$$ to arrive at $$G(z)$$:

$$G(z) = \frac{0.9661\left(\frac{1-z^{-1}}{1+z^{-1}}\right)^4}{\left(\frac{1-z^{-1}}{1+z^{-1}}\right)^4 + 8.824\left(\frac{1-z^{-1}}{1+z^{-1}}\right)^3 + 44.86\left(\frac{1-z^{-1}}{1+z^{-1}}\right)^2 + 105.6\left(\frac{1-z^{-1}}{1+z^{-1}}\right) + 254.2}$$

This comes from a signal-theory problem in which the bilinear transformation is applied to obtain the digital transfer function from the s-domain. The answer, according to my guide, is:

$$G(z) = \frac{0.003 - 0.012z^{-1} + 0.018z^{-2} - 0.012z^{-3} + 0.003z^{-4}}{1 + 2.808z^{-1} + 3.294z^{-2} + 1.1857z^{-3} + 0.421z^{-4}}$$

But for the life of me, I cannot seem to get the same values. I also tried coding a simple MATLAB program using the expand function, but it does not return the answer in the form I need.

Help would be greatly appreciated, especially if someone can recommend what to do in MATLAB!
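As a cross-check outside MATLAB (a sketch of my own; I take the denominator coefficients, including the $44.86\,s^2$ term that appears in the substituted expression, as given): multiplying through by $(1+z^{-1})^4$ turns each $s^k$ term into $(1-w)^k(1+w)^{4-k}$ with $w = z^{-1}$, so the whole substitution becomes plain polynomial arithmetic:

```python
import numpy as np

num_s = [0.9661, 0.0, 0.0, 0.0, 0.0]          # 0.9661 s^4
den_s = [1.0, 8.824, 44.86, 105.6, 254.2]     # s^4 + 8.824 s^3 + 44.86 s^2 + 105.6 s + 254.2

def poly_pow(p, k):
    """k-th power of a polynomial given as a coefficient list (highest power first)."""
    r = np.array([1.0])
    for _ in range(k):
        r = np.convolve(r, p)
    return r

def bilinear(coeffs):
    """Substitute s = (1-w)/(1+w), w = z^{-1}, after clearing the (1+w)^4 denominator."""
    n = len(coeffs) - 1
    out = np.zeros(n + 1)
    for k, c in enumerate(coeffs):            # c multiplies s^(n-k)
        out = out + c * np.convolve(poly_pow([-1.0, 1.0], n - k),   # (1-w)^(n-k)
                                    poly_pow([1.0, 1.0], k))        # (1+w)^k
    return out

num_w = bilinear(num_s)
den_w = bilinear(den_s)
num_w, den_w = num_w / den_w[-1], den_w / den_w[-1]   # constant term of denominator -> 1

# Coefficients of G(z) in ascending powers of z^{-1}:
print("numerator:  ", num_w[::-1])
print("denominator:", den_w[::-1])
```

The numerator must come out proportional to $(1 - z^{-1})^4$, i.e. with the $1, -4, 6, -4, 1$ pattern the guide's numerator shows; if the absolute values still disagree after this, the discrepancy lies in the coefficients of $H(s)$ or in a frequency-prewarping step, not in the expansion algebra.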

## Abstract algebra – suppose $R$ is a commutative ring with identity $1$, and non-identity elements $a \neq b$ satisfy $a^2 = a$ and $b^2 = b$; show that $|R|$ is finite


## Abstract algebra – find an element of order 3 in (Z/14Z)* and (Z/42Z)*

• Find an element of order 3 in (Z/14Z)*

The elements in (Z/14Z)* are: 1, 3, 5, 9, 11, 13.
I have raised them all to the third power and found that 9 is an element of order 3.

• Find an element of order 3 in (Z/42Z)*

I could do the same to find an element of order 3, but there are 12 elements, and that takes time by hand.

Is there another way to do it? I know that (Z/42Z)* is isomorphic to (Z/14Z)* × (Z/3Z)*.
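A brute-force check is quick by machine (my own sketch, not from the question):

```python
from math import gcd

def units(n):
    # representatives of (Z/nZ)*: residues coprime to n
    return [a for a in range(1, n) if gcd(a, n) == 1]

def order(a, n):
    # multiplicative order of a modulo n (a must be a unit)
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print([a for a in units(14) if order(a, 14) == 3])  # [9, 11]
print([a for a in units(42) if order(a, 42) == 3])  # [25, 37]
```

This is consistent with the CRT idea: $(\Bbb Z/3\Bbb Z)^*$ has order 2 and so contains no element of order 3, hence an element of order 3 mod 42 must be $\equiv 1 \pmod 3$ and of order 3 mod 14, and indeed $25 \equiv 11$ and $37 \equiv 9 \pmod{14}$.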

## Algebra precalculus – in what order should one apply techniques when proving inequalities?

Sometimes when I'm proving inequalities, for example:

For positive $$a, b, c, d$$ with $$abcd = 16$$,
show that $$\left(\frac{a}{b}\right)^3 + \left(\frac{b}{c}\right)^3 + \left(\frac{c}{d}\right)^3 + \left(\frac{d}{a}\right)^3 + 4 \geq a + b + c + d.$$

My attempt (but I cannot finish it):

First, on the L.H.S. we use the power mean inequality to obtain

$$LHS \geq \frac{\left(\frac{a}{b} + \frac{b}{c} + \frac{c}{d} + \frac{d}{a}\right)^3}{16} + 4.$$
And the R.H.S. I multiply by $$\frac{2}{\sqrt[4]{abcd}}$$. Now the inequality is homogeneous, so I can ignore the condition $$abcd = 16$$.

We have to show that $$\frac{\left(\frac{a}{b} + \frac{b}{c} + \frac{c}{d} + \frac{d}{a}\right)^3}{16} + 4 \geq \frac{2(a + b + c + d)}{\sqrt[4]{abcd}}.$$

From A.M.–G.M., $$\sum_{cyc} \frac{a}{b} \geq 4$$ … so at the end

I am left needing $$4\sqrt[4]{abcd} \geq a + b + c + d$$, which is A.M.–G.M. with the inequality sign reversed …

So, is there a trick for deciding which tool to apply first when proving an inequality (A.M.–G.M., Cauchy, and so on)?
Thank you!
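Not a proof, but a numeric spot-check of the original statement (my own sketch) can at least confirm the inequality before hunting for the right first step; equality should occur at $a = b = c = d = 2$:

```python
import random

random.seed(0)
worst = float("inf")   # smallest margin LHS - RHS seen over the samples
for _ in range(10000):
    a, b, c, d = (random.uniform(0.1, 10.0) for _ in range(4))
    s = (a * b * c * d / 16.0) ** 0.25
    a, b, c, d = a / s, b / s, c / s, d / s        # rescale so that abcd = 16
    lhs = (a / b) ** 3 + (b / c) ** 3 + (c / d) ** 3 + (d / a) ** 3 + 4
    worst = min(worst, lhs - (a + b + c + d))
print("smallest margin:", worst)   # stays >= 0 if the inequality holds on the samples
```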