Textbooks of calculus, linear algebra, and probability and statistics for non-mathematics majors

What are the most popular calculus, linear algebra, and probability and statistics textbooks for non-mathematics majors in the United States?
Also, what about other common math courses for undergraduate non-mathematics students?

Stochastic calculus: How do the increments of the Itô integral behave?

Given a Brownian motion $\{W_t\}_{t \in [0,T]}$, a continuous, adapted, square-integrable (bounded if desired) process $\{\sigma_t\}_{t \in [0,T]}$, and $\varepsilon > 0$, I want to prove that there is a $\delta > 0$ such that

for all $s \in [0,T]$ and all $M \in \mathcal F_s$, we have

$$
\mathbb E \bigg( 1_M \max_{s \le t \le (s+\delta)\wedge T} \bigg| \int_s^t \sigma_u \,\mathrm dW_u \bigg| \bigg)
\le \varepsilon.
$$

For $\sigma \equiv 1$ this is easy, because we only need to consider
$$
\mathbb E \Big( 1_M \max_{s \le t \le (s+\delta)\wedge T} |W_t - W_s| \Big)
$$

for which we have a bound, since the distribution of the running maximum of $W$ is known and the increments of Brownian motion are independent of the past.

Is there anything similar for arbitrary Itô integrals (or for integrands satisfying some suitable assumptions)?
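
Not part of the question, but a sketch of the kind of bound I would try, assuming $|\sigma_u| \le C$ (the question allows boundedness): combine $1_M \le 1$ with Cauchy–Schwarz, Doob's $L^2$ maximal inequality for the continuous martingale $t \mapsto \int_s^t \sigma_u \,\mathrm dW_u$, and the Itô isometry:

$$
\mathbb E \bigg( 1_M \max_{s \le t \le (s+\delta)\wedge T} \bigg| \int_s^t \sigma_u \,\mathrm dW_u \bigg| \bigg)
\le \bigg( \mathbb E \max_{s \le t \le (s+\delta)\wedge T} \bigg| \int_s^t \sigma_u \,\mathrm dW_u \bigg|^2 \bigg)^{1/2}
\le 2 \bigg( \mathbb E \int_s^{(s+\delta)\wedge T} \sigma_u^2 \,\mathrm du \bigg)^{1/2}
\le 2C\sqrt{\delta},
$$

which is at most $\varepsilon$ as soon as $\delta \le (\varepsilon/(2C))^2$, uniformly in $s$ and $M$.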

Propositional calculus – Enderton's completeness theorem proof, satisfiability of $\Gamma \cup \Theta \cup \Lambda$

I am reading the proof of the Completeness Theorem in "A Mathematical Introduction to Logic" by Enderton. I have trouble seeing why the highlighted sentence below holds (excerpt from page 137).

Let $\Lambda$ be the set of logical axioms for the expanded language. Since $\Gamma \cup \Theta$ is consistent, there is no formula $\beta$ such that $\Gamma \cup \Theta \cup \Lambda$ tautologically implies both $\beta$ and $\neg\beta$. (This is by Theorem 24B; here the compactness theorem of sentential logic is used.) Hence there is a truth assignment $v$ for the set of all prime formulas that satisfies $\Gamma \cup \Theta \cup \Lambda$.

I have tried to reason "by contrapositive". That is, suppose a set of (sentential) formulas $\Sigma$ is unsatisfiable. Then, vacuously, every truth assignment that satisfies $\Sigma$ also satisfies any formula whatsoever. Thus $\Sigma$ tautologically implies every formula. In particular, for any given formula $\beta$, $\Sigma$ tautologically implies both $\beta$ and $\neg\beta$.

Is my reasoning correct?
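
Not part of the question, just my own restatement of the contrapositive in symbols, for comparison with the reasoning above:

$$
\text{Claim used: } \bigl(\text{there is no } \beta \text{ with } \Sigma \models \beta \text{ and } \Sigma \models \neg\beta\bigr) \implies \Sigma \text{ is satisfiable,}
$$

$$
\text{Contrapositive: } \Sigma \text{ unsatisfiable} \implies \text{there is a } \beta \text{ with } \Sigma \models \beta \text{ and } \Sigma \models \neg\beta,
$$

and the argument above in fact shows that every $\beta$ works.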

lambda calculus – Tail-recursive definition of a function

In an exam I took, we were asked to provide a tail-recursive definition of a recursive function. I failed miserably, and the solution provided does not make any sense to me. If someone could explain it, it would be very helpful for my resit. The provided solution is the following:


Given are the functions $f, g, h \in \mathbb{N} \rightarrow \mathbb{Z}$ with $f.0 = 20$, $g.0 = 37$, $h.0 = 13$, and for $n > 0$:

$$
f.n = 3 * g.n - 7 * h.n \\
g.n = n^2 - h.(n-1) \\
h.n = f.(n-1) + g.(n-1)
$$


For the tail-recursive version of $f$, specify

$$
\psi \in \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow \mathbb{N} \rightarrow \mathbb{Z} \\
\psi.a.b.c.d.n = a * g.n + b * h.n + c * (n+1)^2 + d
$$

such that $f.n = \psi.3.(-7).0.0.n$.

So

$$
\psi.a.b.c.d.0 = 37 * a + 13 * b + c + d
$$

and

$$
\psi.a.b.c.d.(n+1) \\
= \quad \{\text{ spec }\} \\
a * ((n+1)^2 - h.n) + b * (4 * g.n - 7 * h.n) + c * ((n+1)^2 + 2 * n + 3) + d \\
= \quad \{\text{ arithmetic }\} \\
4 * b * g.n + (-a - 7 * b) * h.n + (a + c) * (n+1)^2 + d + c * (2 * n + 3) \\
= \quad \{\text{ construction hypothesis }\} \\
\psi.(4 * b).(-a - 7 * b).(a + c).(d + c * (2 * n + 3)).n
$$


The main problem I have is how they arrived at the specification; I can follow the actual calculation. If anyone has an idea of how the specification was obtained, I think I would be able to understand the answer better.

Thanks in advance.
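
For what it is worth, here is a small Python sketch (my own check, not part of the exam material) that transcribes the definitions above and verifies that the derived tail-recursive $\psi$ reproduces $f$; the function names simply mirror the notation $f$, $g$, $h$, $\psi$:

```python
# Direct (mutually recursive) definitions of f, g, h from the exam statement.
def f(n): return 20 if n == 0 else 3 * g(n) - 7 * h(n)
def g(n): return 37 if n == 0 else n**2 - h(n - 1)
def h(n): return 13 if n == 0 else f(n - 1) + g(n - 1)

# Tail-recursive psi with specification
#   psi(a, b, c, d, n) = a*g(n) + b*h(n) + c*(n+1)**2 + d.
# The recursive call only updates the accumulators, mirroring the calculation above.
def psi(a, b, c, d, n):
    if n == 0:
        return 37 * a + 13 * b + c + d
    m = n - 1
    return psi(4 * b, -a - 7 * b, a + c, d + c * (2 * m + 3), m)

# f.n = psi.3.(-7).0.0.n
assert all(f(n) == psi(3, -7, 0, 0, n) for n in range(15))
```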

calculus – Non-polynomial, non-sinusoidal expansion of an arbitrary function

Given a family of functions, which functions can be obtained as a linear combination of them? Or rather, how does one determine the "completeness" of a set of functions, in the sense of their ability to be linearly combined into an arbitrary function?

For example, is it possible to express an arbitrary simple closed curve around the origin as a sum of ellipses $r(\theta; b, e) = \frac{b}{\sqrt{1 - (e \cos\theta)^2}}$?
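
This is not an answer, but one crude way to experiment numerically (a sketch under assumptions of my own: the eccentricity grid, the target curve, and all names below are arbitrary choices) is to least-squares project a target $r(\theta)$ onto finitely many members of the family and inspect the residual:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 400)
ecc = np.linspace(0.0, 0.95, 20)          # chosen eccentricities e_i
# Column i is the family member r(theta; 1, e_i) = 1 / sqrt(1 - (e_i * cos(theta))^2).
basis = 1.0 / np.sqrt(1.0 - (np.cos(theta)[:, None] * ecc[None, :]) ** 2)

target = 2.0 + 0.3 * np.cos(2 * theta)    # an arbitrary even, pi-periodic target curve
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
residual = np.linalg.norm(basis @ coeffs - target) / np.linalg.norm(target)
print(f"relative L2 residual: {residual:.3e}")
```

Note that every member of this particular family is even in $\theta$ and $\pi$-periodic, so such an experiment can only hope to reproduce curves with the same symmetries.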

Differential calculus – difference of vector-valued functions using the Jacobian

Given a vector-valued function $\mathbf{y} = f(\mathbf{x})$.

How can one express the difference $f(\mathbf{x}_1) - f(\mathbf{x}_2)$ in terms of the Jacobian $J$, where $J_i = \frac{dy_i}{d\mathbf{x}}$ or $J = \frac{d\mathbf{y}}{d\mathbf{x}}$, with $y_i$ being the $i^{\text{th}}$ entry of the vector $\mathbf{y}$?


I tried linearization using a Taylor expansion in a small neighborhood, as in $f(\mathbf{x} + \delta) = f(\mathbf{x}) + f'(\mathbf{x})\,\delta$,

but it seems that the solution should involve defining some function and its antiderivative.
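
In case it points in the right direction (my own addition, not from the question): assuming $f$ is continuously differentiable on the segment between the two points, the fundamental theorem of calculus applied to $t \mapsto f(\mathbf{x}_2 + t(\mathbf{x}_1 - \mathbf{x}_2))$ gives an exact expression involving the Jacobian,

$$
f(\mathbf{x}_1) - f(\mathbf{x}_2) = \left( \int_0^1 J\bigl(\mathbf{x}_2 + t(\mathbf{x}_1 - \mathbf{x}_2)\bigr)\,\mathrm dt \right) (\mathbf{x}_1 - \mathbf{x}_2),
$$

of which the linearization $f(\mathbf{x}_1) - f(\mathbf{x}_2) \approx J(\mathbf{x}_2)(\mathbf{x}_1 - \mathbf{x}_2)$ is the special case where $J$ is treated as constant along the segment.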

Any help would be appreciated.