Parameter estimation using LogLikelihood


I’m trying to understand likelihood methods with parameter estimation in mind. To this end I am constructing small examples that I can play with. Let’s say I have some data which I know (or suspect) follows the function
$$f(x) = (x + x_0)^{2}$$
and I want to find out the value of the parameter $x_{0}$ and the associated error using likelihood methods.
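
Concretely, assuming the noise on the data is i.i.d. Gaussian with known standard deviation $\sigma$ (the assumption I use below to generate the pretend data), the log-likelihood I want to maximise over $x_{0}$ is
$$\ell(x_{0}) = \sum_{i} \log\left[\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\left(y_{i} - (x_{i} + x_{0})^{2}\right)^{2}}{2\sigma^{2}}\right)\right],$$
where $(x_{i}, y_{i})$ are the data points.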

Let us then make some pretend experimental data:

f[x0_, x_] := (x + x0)^2

ExperimentData = 
  Table[{x, f[-1.123, x] + RandomVariate[NormalDistribution[0, 0.25]]}, 
   {x, 0, 3, 0.1}];
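
A quick visual check of the pretend data (the plot call is my own addition):

ListPlot[ExperimentData, AxesLabel -> {"x", "y"}]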

Then let us construct some test data where I “guess” my parameter $x_{0}$. I replace $x_{0}$ with the parameter $\theta$ to represent my test value:

TestData = 
  Table[
   {\[Theta], Table[{x, f[\[Theta], x]}, {x, 0, 3, 0.1}]},
   {\[Theta], 0.5, 1.6, 0.1}
   ];
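
Each entry of TestData pairs a trial value of $\theta$ with the corresponding model curve, so that, for example:

TestData[[1, 1]]           (* the first trial value of \[Theta], here 0.5 *)
TestData[[1, 2]][[All, 2]] (* the model values f[\[Theta], x] on the x grid *)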

How can I use LogLikelihood to make a parameter estimation of $x_{0}$ using my TestData? The motivation is that I may not be able to construct a pure function, for example if I generate my test data from a numerical integration.
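
For reference, when the model is available in closed form, I believe the direct approach would be something like the following (the fixed noise level 0.25 and the starting value for $\theta$ are my own choices):

(* residuals of the data with respect to the model, symbolic in \[Theta] *)
resid = ExperimentData[[All, 2]] - (f[\[Theta], #] & /@ ExperimentData[[All, 1]]);

(* maximise the Gaussian log-likelihood of the residuals over \[Theta] *)
FindMaximum[LogLikelihood[NormalDistribution[0, 0.25], resid], {\[Theta], -1}]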

My approach so far is to maximise the log-likelihood of the “residuals”:

X = ExperimentData[[All, 2]];
MLLTest = 
  Table[
   \[Theta] = TestData[[i, 1]];
   F = TestData[[i, 2]][[All, 2]];
   MLL = 
    FindMaximum[
      LogLikelihood[NormalDistribution[\[Mu], \[Sigma]], 
       X - F], {{\[Mu], 0}, {\[Sigma], 0.25}}][[1]];
   {\[Theta], MLL},
   {i, 1, Length[TestData]}
   ];
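
In other words, for each trial value $\theta$ this computes
$$\hat{\ell}(\theta) = \max_{\mu,\,\sigma} \sum_{i} \log \mathrm{pdf}_{\mathcal{N}(\mu,\sigma)}\!\left(y_{i} - f(\theta, x_{i})\right),$$
i.e. the residuals are themselves fitted with a normal distribution whose $\mu$ and $\sigma$ are left free.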

Then I plot the maximum log-likelihood as a function of my guess parameter $\theta$.
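
A minimal plotting sketch (the options are my own choice):

ListPlot[MLLTest, Joined -> True, 
 AxesLabel -> {"\[Theta]", "max log-likelihood"}]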

However, this is clearly wrong, so I think I misunderstand something about the log-likelihood in this context.