lambda calculus – Tail-recursive definition of a function

In an exam I took, we were asked to give a tail-recursive definition of a recursive function. I failed miserably, and the provided solution does not make any sense to me. If someone could explain it, that would be very helpful for my retake. The provided solution is the following:


Given are the functions $f, g, h \in \mathbb{N} \rightarrow \mathbb{Z}$ with $f.0 = 20$, $g.0 = 37$, $h.0 = 13$, and for $n > 0$:

$$
f.n = 3 * g.n - 7 * h.n \\
g.n = n^2 - h.(n-1) \\
h.n = f.(n-1) + g.(n-1)
$$
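
For concreteness, a direct (not yet tail-recursive) transcription of these definitions might look as follows in Python; the memoisation is only there to keep the naive mutual recursion manageable and is not part of the exam question.

```python
from functools import lru_cache

# Direct transcription of the mutually recursive definitions above;
# the dot notation f.n becomes the function call f(n).

@lru_cache(maxsize=None)
def f(n):
    return 20 if n == 0 else 3 * g(n) - 7 * h(n)

@lru_cache(maxsize=None)
def g(n):
    return 37 if n == 0 else n ** 2 - h(n - 1)

@lru_cache(maxsize=None)
def h(n):
    return 13 if n == 0 else f(n - 1) + g(n - 1)
```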


For the tail-recursive version of $f$, specify

$$
\psi \in \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow \mathbb{N} \rightarrow \mathbb{Z} \\
\psi.a.b.c.d.n = a * g.n + b * h.n + c * (n+1)^2 + d
$$

Such that $f.n = \psi.3.(-7).0.0.n$.
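
(To see why these particular values work: instantiating the specification with $a = 3$, $b = -7$, $c = d = 0$ gives $\psi.3.(-7).0.0.n = 3 * g.n - 7 * h.n$, which is exactly $f.n$ for $n > 0$, and for $n = 0$ it gives $3 * 37 - 7 * 13 = 20 = f.0$.)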

So

$$
\psi.a.b.c.d.0 = 37 * a + 13 * b + c + d
$$
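
(This base case is simply the specification instantiated at $n = 0$, using $g.0 = 37$, $h.0 = 13$ and $(0+1)^2 = 1$.)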

And

$$
\psi.a.b.c.d.(n+1) \\
= \{ \text{spec} \} \\
a * ((n+1)^2 + h.n) + b * (4 * g.n - 7 * h.n) + c * ((n+1)^2 + 2 * n + 3) + d \\
= \{ \text{arithmetic} \} \\
4 * b * g.n + (a - 7 * b) * h.n + (a + c) * (n+1)^2 + d + c * (2 * n + 3) \\
= \{ \text{construction hypothesis} \} \\
\psi.(4 * b).(a - 7 * b).(a + c).(d + c * (2 * n + 3)).n
$$
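
For what it is worth, the tail-recursive evaluation can be written as a loop that carries the four accumulators, transcribing the base case and the step exactly as printed in the provided solution (this is only a sketch of the accumulator idea, not an endorsement of the particular coefficients):

```python
def psi(a, b, c, d, n):
    # Accumulator-style evaluation of psi, following the derivation above:
    # each iteration applies
    #   psi.a.b.c.d.(n+1) = psi.(4*b).(a - 7*b).(a + c).(d + c*(2*n + 3)).n
    # until the argument reaches 0.
    while n > 0:
        m = n - 1  # the "n" on the right-hand side of the step
        a, b, c, d = 4 * b, a - 7 * b, a + c, d + c * (2 * m + 3)
        n = m
    # Base case as printed: psi.a.b.c.d.0 = 37*a + 13*b + c + d
    return 37 * a + 13 * b + c + d

def f_tail(n):
    # f.n = psi.3.(-7).0.0.n, per the claimed specification
    return psi(3, -7, 0, 0, n)
```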


My main problem is how they arrived at the specification of $\psi$; the calculation itself I can follow. If anyone has an idea of how the specification was obtained, I think I would be able to understand the answer better.

Thanks in advance.

calculus – Non-polynomial, non-sinusoidal expansion of an arbitrary function

Given a family of functions, which functions can be obtained as a linear combination of them? Or rather, how does one determine the "completeness" of a family of functions, in terms of its ability to be combined linearly into an arbitrary function?

For example, is it possible to express an arbitrary simple curve around the origin as a sum of ellipses $r(\theta; b, e) = \frac{b}{\sqrt{1 - (e \cos\theta)^2}}$?

Differential calculus – difference of vector-valued functions using the Jacobian

Given a vector-valued function $\mathbf{y} = f(\mathbf{x})$.

How can the difference $f(\mathbf{x}_1) - f(\mathbf{x}_2)$ be expressed in terms of the Jacobian $J$,

where $J_i = \frac{dy_i}{d\mathbf{x}}$ or $J = \frac{d\mathbf{y}}{d\mathbf{x}}$, with $y_i$ being the $i^{th}$ entry of the vector $\mathbf{y}$?


I tried linearizing with a Taylor expansion in a small neighbourhood, $f(\mathbf{x} + \boldsymbol{\delta}) = f(\mathbf{x}) + f'(\mathbf{x})\,\boldsymbol{\delta}$,

but it seems that the solution should involve defining some function and its antiderivative.
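
As a side note, the linearization step itself is easy to sanity-check numerically. The sketch below uses a made-up two-dimensional $f$ (not from the problem) with a hand-computed Jacobian:

```python
import numpy as np

# Made-up vector-valued function f : R^2 -> R^2, for illustration only.
def f(x):
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) * x[1]])

# Its Jacobian J = dy/dx, computed by hand for this particular f.
def jacobian(x):
    return np.array([
        [2 * x[0], 1.0],
        [np.cos(x[0]) * x[1], np.sin(x[0])],
    ])

x = np.array([1.0, 2.0])
delta = np.array([1e-3, -2e-3])

exact = f(x + delta) - f(x)   # true difference
linear = jacobian(x) @ delta  # first-order (Jacobian) approximation
print(exact, linear)          # the two agree up to O(|delta|^2)
```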

Any help would be appreciated.