I’ll refer to the opponent controlled by the game as the agent, and to the code that, well, controls the agent as the agent controller.
You can have an abstraction for your agent controller, and still allow continuous movement. To do that, you would have code to translate between those representations.
That is, the agent controller would understand the game in an abstract way. One that makes sense from the point of view of the agent. You would translate the parts of the world that are relevant to the agent controller into that representation. Then the agent controller makes its decision. And that decision needs to be translated back to the world representation, and executed (playing animations, moving physics objects, and so on).
For the abstract representation, I suggest you start with a polar system, expressed in angle (relative bearing) and distance (from the agent). And yes, those are still too many possible values, so make them discrete.
For example, you could represent distance as “near”, “medium”, and “far”, each mapping to a range of distances. And represent the angle as “front”, “rear”, “port”, “starboard” (or however you want to slice them and name them).
Since those are ranges, to translate, you check which range the distance and angle fall into, and give that to the agent controller.
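A minimal sketch of that translation (Python here rather than GDScript, and the thresholds and bucket names are hypothetical, to be tuned for your game’s scale):

```python
def discretize_distance(distance):
    # Hypothetical thresholds in world units -- tune for your game.
    if distance < 2.0:
        return "near"
    if distance < 6.0:
        return "medium"
    return "far"

def discretize_bearing(angle_degrees):
    # angle_degrees is the opponent's bearing relative to the agent's
    # facing direction. Normalize to [-180, 180) first.
    a = (angle_degrees + 180.0) % 360.0 - 180.0
    if -45.0 <= a < 45.0:
        return "front"
    if 45.0 <= a < 135.0:
        return "starboard"
    if -135.0 <= a < -45.0:
        return "port"
    return "rear"

# Example: an opponent 4 units away, 90 degrees to the agent's right.
print(discretize_distance(4.0), discretize_bearing(90.0))  # medium starboard
```

The exact number of buckets is a design choice: fewer buckets keep the controller logic simple, more buckets let it make finer distinctions.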
By doing that, the agent controller logic can work with a finite, discrete, and well defined set of position information.
The agent controller would have an internal state, and you would call into it when needed… And it gives you a command.
I suggest making your commands as descriptive as you can. They could express things such as “Punch”, “Move away”, or “Keep distance”, and so on. And then we need to translate that.
How much distance does it have to move exactly, how much does it have to rotate to punch the opponent, and what animation to play… Those aren’t concerns of the agent controller. The agent controller does not have the information needed to answer them, anyway. Instead, you would have code that does the right thing depending on the command.
By the way, the commands need not be strings. You can, for example, have a custom command class to represent them, so you can include parameters. You may even create a class for each possible command, and put the logic to execute it in that class. Given that we are talking Godot, I’d make a node for each command, and put on that node a function to execute the command. Then you can get the node by name from the scene tree and call the function on it. Plus, you can package that logic along with the agent in a scene.
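As a language-agnostic sketch of command-as-class (Python here; in Godot each command would be a node instead, and all names below are hypothetical):

```python
class Command:
    """Base class for commands the agent controller can issue."""
    def execute(self, agent):
        raise NotImplementedError

class Punch(Command):
    def execute(self, agent):
        # Translates the abstract intent into concrete game actions.
        agent.play_animation("punch")

class KeepDistance(Command):
    def __init__(self, preferred_range="medium"):
        # Commands can carry parameters.
        self.preferred_range = preferred_range
    def execute(self, agent):
        agent.play_animation("shuffle_back")

# A stub agent, just to show the calling convention.
class FakeAgent:
    def __init__(self):
        self.played = []
    def play_animation(self, name):
        self.played.append(name)

agent = FakeAgent()
Punch().execute(agent)
KeepDistance("far").execute(agent)
print(agent.played)  # ['punch', 'shuffle_back']
```

The point is that the controller only decides *which* command to issue; the command object (or node) owns the knowledge of how to carry it out.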
There are things you need to report to the agent controller. Such as success in executing the actions. Actions executed by other agents (or the player) on it. Collisions, etc.
You can imagine that the agent controller issues a “Taunt” command, and that triggers an animation; when it completes, you tell the agent controller that the taunt completed. Or perhaps the player punches the agent, and you notify the agent controller of that.
Then the agent controller updates its internal state, and gives you new commands.
You could have more complex commands… Er… Can’t think of something for a boxing game. But imagine it is a military shooter, and there is a “Go to cover” command. To execute it, there is another system that figures out where cover is and how to get there, then the agent moves and plays the corresponding animations, and you also tell the agent controller when that is completed.
So you want the agent to get close to the opponent. It will move. But you also need to play animations. You would have a direction for the motion, and from that you pick the corresponding animation (or blend of animations) to play.
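For picking the animation from the motion direction, one simple approach is to bucket the direction the same way as the bearing (a Python sketch with hypothetical animation names; a real setup might blend animations instead of picking one):

```python
import math

def pick_move_animation(dx, dy):
    # dx, dy: movement direction in the agent's local space
    # (x: right, y: forward).
    angle = math.degrees(math.atan2(dx, dy))  # 0 degrees = forward
    if -45.0 <= angle < 45.0:
        return "walk_forward"
    if 45.0 <= angle < 135.0:
        return "strafe_right"
    if -135.0 <= angle < -45.0:
        return "strafe_left"
    return "walk_backward"

print(pick_move_animation(0.0, 1.0))   # walk_forward
print(pick_move_animation(-1.0, 0.0))  # strafe_left
```

In Godot specifically, a 2D blend space fed with the local direction vector would give you smooth blending between those animations instead of hard switches.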
Then, when the agent is in range, you want the agent to punch. Again, you are going to play the corresponding animation. Perhaps that could mean using inverse kinematics to make sure it makes contact. In fact, if you care enough about foot positioning, you can use inverse kinematics to tell the rig “this foot goes here, and that foot goes there”.