There is an information source with alphabet $A = \{a, b, c\}$, represented by the following state transition diagram:

The $i$-th output of this information source is represented by a random variable $X_i$. It is known that the source is currently in state $s_1$. Let $H(X_i \mid s_1)$ denote the entropy of the next symbol $X_i$ observed from this state. Find the value of $H(X_i \mid s_1)$, then calculate the entropies $H(X_i \mid X_{i-1})$ and $H(X_i)$ of this information source. Assume $i$ is very large.

How can I find $H(X_i \mid s_1)$? I know that
$$H(X_i \mid s_1) = -\sum_{i} p\left(x_i, s_1\right) \cdot \log_b\!\left(p\left(x_i \mid s_1\right)\right) = -\sum_{i} p\left(x_i, s_1\right) \cdot \log_b\!\left(\frac{p\left(x_i, s_1\right)}{p\left(s_1\right)}\right),$$

but I do not know $p(s_1)$.

$$A = \begin{pmatrix} 0.25 & 0.75 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 0.7 & 0.3 \end{pmatrix}.$$

From the matrix I can read off that $p(s_1 \mid s_1) = 0.25$, etc.,

but what is the probability $p(s_1)$?
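One way I tried to get $p(s_1)$ numerically: since the chain has self-loops on $s_1$ and $s_3$, it should be aperiodic and irreducible, so the stationary distribution $\pi$ (satisfying $\pi A = \pi$) can be found by power iteration. A minimal sketch, using the matrix quoted above:

```python
# Transition matrix from the question; row i gives the probabilities
# of moving from state s_{i+1} to each of the three states.
A = [
    [0.25, 0.75, 0.0],
    [0.5,  0.0,  0.5],
    [0.0,  0.7,  0.3],
]

def stationary(P, iters=10_000):
    """Power iteration: repeatedly apply pi <- pi P from a uniform start.

    Converges to the stationary distribution for an irreducible,
    aperiodic chain.
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(A)
print(pi)  # converges to [0.28, 0.42, 0.30]
```

So numerically $p(s_1) = 0.28$, $p(s_2) = 0.42$, $p(s_3) = 0.30$, which also checks out by solving $\pi A = \pi$, $\sum_j \pi_j = 1$ by hand.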

And how can I calculate $H(X_i \mid X_{i-1})$? Does this also involve the stationary distribution?
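To sanity-check my numbers, I computed the three entropies directly, using $H(X_i \mid X_{i-1}) = \sum_j \pi_j\, H(X_i \mid s_j)$ with $\pi = (0.28, 0.42, 0.30)$. A sketch, with one loud assumption: for $H(X_i)$ I assume each transition emits the label of the state it enters, so the marginal symbol distribution equals $\pi$ — this depends on the state diagram (not reproduced here) and may not match the actual source.

```python
import math

A = [
    [0.25, 0.75, 0.0],
    [0.5,  0.0,  0.5],
    [0.0,  0.7,  0.3],
]
pi = [0.28, 0.42, 0.30]  # stationary distribution, solves pi = pi A

def H(dist):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

h_given_s1 = H(A[0])                                # H(X_i | s_1): entropy of row 1
h_cond = sum(pi[j] * H(A[j]) for j in range(3))     # H(X_i | X_{i-1})
h_marg = H(pi)                                      # H(X_i), under the assumption above

print(round(h_given_s1, 3))  # ~0.811 bits
print(round(h_cond, 3))      # ~0.912 bits
print(round(h_marg, 3))      # ~1.561 bits
```

Is this the right way to combine the per-state entropies with the stationary distribution?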