EDIT: The question originally asked about running untrusted code in general, not about the specific scenario laid out above. For something like that, you usually want a language that provides an API sandbox (I think Lua can be used for this?) — one that exposes enough functionality for game logic or mods, but doesn’t allow open access to the file system, creating other processes, etc.
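To illustrate the shape of an API sandbox, here’s a minimal Python sketch. To be clear up front: Python’s `exec` is *not* a secure sandbox (untrusted code can escape via object introspection), so this is only a toy showing the whitelist idea — real implementations use something like Lua’s `load()` with a restricted environment table. The `add_score`/`log` API and the mod source are hypothetical.

```python
# NOT a security boundary -- just illustrates the "whitelisted API" shape:
# mod code can see only what the host explicitly hands it, nothing else.

def run_mod(mod_source: str, api: dict) -> None:
    """Execute mod code with access to nothing but the provided API."""
    env = {"__builtins__": {}}  # hide Python's builtins (open, __import__, ...)
    env.update(api)             # expose only the game's whitelisted functions
    exec(mod_source, env)

# Hypothetical game API exposed to mods
game_state = {"score": 0}
api = {
    "add_score": lambda n: game_state.__setitem__("score", game_state["score"] + n),
    "log": print,
}

run_mod("add_score(10)\nadd_score(5)", api)
print(game_state["score"])  # 15
```

The key design point is that the host controls the entire namespace the mod runs in; in a language built for this (Lua), that restricted environment actually holds, whereas in Python it does not.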
You could also sandbox the whole game (with an OS-level process privilege sandbox, such as by developing it as a modern Windows “app” or using the macOS sandbox), with a special location (readable by the sandbox) where the user can drop mods, etc. That way, even if the game gets compromised by a malicious mod or similar, it cannot impact the OS much (if at all), although it could still attack the game and anything it has access to. Games generally don’t need a lot of OS permissions – maybe network connectivity, somewhere to put save files, all stuff that the OS-provided sandboxes handle easily – and are actually kind of an ideal use case for these app sandboxes.
Generically speaking, the term you’re looking for is “sandboxing” (as in, a place the kiddies can make a mess without affecting anything else). Sandboxing is a kind of hard problem but it’s also very useful, so there are a few different places it’s commonly used.
Process-based sandboxes are pretty common these days, and all modern OSes have (at least some) support for them. The Windows, Mac, iOS, and Android app stores all provide sandboxed apps. Linux provides sandboxing functionality that is used by things like Docker (and Chrome on Linux). FreeBSD (and derivatives) have “Jails”, etc. There’s a bunch of ways to do it. A relatively simple one can be created just using user permissions and ACLs: you create a new user account for the sandbox, give it access to nothing at all by default (which is tricky, since normally there’s a lot of stuff that’s world-readable, at least), and then grant that user account access to the things the sandboxed code is allowed to touch. A process launched as that user will have very limited access to the system, until/unless it finds a way out.
Unfortunately, creating a secure sandbox tends to be somewhat platform-specific, and tricky for each individual platform. I’ve personally reviewed, and found breaches for, sandboxes used by multiple Big Software Companies’ products (you’ve heard of them, might even have them open right now). The app-store sandbox model – which gives the developer fairly little control over what can be done, in exchange for the OS handling all the sandbox creation and enforcement – is appealing, and if you’re writing for Mac or recent Windows I recommend considering it.
Another kind of sandbox, available on any modern desktop OS but pretty expensive to run, is a virtual machine (VM) sandbox. Using any major VM platform (VMware, VirtualBox, Hyper-V, whatever), you can create a VM that has little or no access to the host OS. This is the standard way cloud computing providers work; from Amazon’s perspective your tiny EC2 instance is running untrusted code, yet has to share hardware with other mutually-untrusted users to be cost-effective, and VMs are used to do this. It’s also a way to run potentially-malicious code, because the host OS can watch what the VM does but the VM cannot control the host.