Design of a scalable web service with backup, failover, monitoring and load balancing


I am planning a large-scale web service that could be accessed by up to 5,000 people simultaneously, with a total user base of around 30,000, or something in that range.

At that size we need backups, and if possible we should also do load balancing and monitor the servers. My experience so far is with Apache web servers and sites that see at most 5,000 visitors in a month, or even a year.

As for the web service itself: I am still deciding which framework is best. The database will certainly be something SQL-based; PostgreSQL should be able to handle that number of users. For the application server, I have been thinking of Node.js with Express or, alternatively, Django.

Now to the interesting part: server administration. I am currently a fan of Amazon Web Services. What has your experience with it been? So far I have had no problems, and the uptime in particular has been excellent.

Should I use Docker? I imagine it would be great for deployment, since there should be almost no environment-related errors. I also want minimal update downtime: deploy a Docker container to the production failover server, run a monitoring service against it, make sure everything works, and then point the active endpoint at the failover server. Next time around, the roles swap and the other machine becomes my new failover server. *However*, I have never implemented such a thing and do not know how to achieve it.

The next big step would be to introduce load balancing. I saw that AWS offers something like that. But could I also provision new servers dynamically, on the fly? Let's say the monitoring tool (any recommendations?) reports that server A (the active one) is under heavy load: could we automatically spin up a new server (through AWS, paying only for what we use) and have additional ones added to the load-balancing pool?
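To make the switch-over idea concrete, here is a minimal sketch of the logic I have in mind. All names are hypothetical; the health check and the traffic switch are injected as callables, so the real versions could hit the new container's health endpoint and update a DNS record or load-balancer target:

```python
from typing import Callable

def promote_if_healthy(check_failover: Callable[[], bool],
                       switch_traffic: Callable[[], None],
                       attempts: int = 3) -> bool:
    """Promote the failover server only after it passes repeated health checks.

    check_failover: returns True when the freshly deployed container responds OK.
    switch_traffic: points the active endpoint (DNS / load balancer) at the failover box.
    """
    # Require several consecutive successful checks before switching,
    # so a single lucky response does not trigger the cut-over.
    for _ in range(attempts):
        if not check_failover():
            return False  # deployment unhealthy: keep the current active server
    switch_traffic()  # from now on, the old active server is the new failover
    return True
```

In practice `check_failover` could be an HTTP GET against a `/health` endpoint and `switch_traffic` an update to a load balancer's target group; both are placeholders here.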

But to be realistic: is this really possible? Or could a single server handle those 5,000 simultaneous users? The actual server load would be considerable: mostly read access, with occasional transactions involving inserts.
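As a rough sanity check on whether 5,000 concurrent users is a lot, here is a back-of-envelope calculation; the think-time figure is an assumption I picked purely for illustration:

```python
def requests_per_second(concurrent_users: int, think_time_s: float) -> float:
    """If each user triggers one request every `think_time_s` seconds on
    average, the steady-state request rate is users / think time."""
    return concurrent_users / think_time_s

# Assumption: a user clicks roughly once every 10 seconds.
rps = requests_per_second(5000, 10.0)  # 500 requests/second
```

Whether one machine sustains that rate depends heavily on how cacheable the read traffic is, which is exactly what I am unsure about.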

And for the backup part: how is this usually implemented in production? Daily backups on the one hand, plus mirrored databases that synchronize changes automatically?
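For the daily-dump half of that scheme, I picture something as simple as a cron-driven `pg_dump`. A sketch that only builds the command line (database name and paths are made up):

```python
from datetime import date

def pg_dump_command(dbname: str, backup_dir: str) -> list:
    """Build a pg_dump invocation producing a dated, compressed
    custom-format dump file."""
    outfile = f"{backup_dir}/{dbname}-{date.today().isoformat()}.dump"
    return ["pg_dump", "--format=custom", "--file", outfile, dbname]
```

This would run once a day from cron; the mirrored-database half would instead use PostgreSQL's built-in streaming replication, which is configured on the standby server rather than scripted like this.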

How much storage, memory, and processing power would be needed? With AWS, upgrading later should not be a problem, but price plays an important role, so I would like a rough estimate in advance. If there are providers other than AWS with comparably high availability, let me know. But I do not want to rent a root server that I would have to manage entirely by myself.
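To get the kind of advance approximation I am asking about, I would start from per-connection memory and a per-instance capacity guess; every number below is a placeholder for illustration, not a measured figure:

```python
def instances_needed(concurrent_users: int, users_per_instance: int) -> int:
    """Ceiling division: how many app servers for the expected concurrency."""
    return -(-concurrent_users // users_per_instance)

def memory_needed_gb(concurrent_users: int, mb_per_connection: float) -> float:
    """Total app-tier memory if each active connection costs
    `mb_per_connection` MB."""
    return concurrent_users * mb_per_connection / 1024

# Assumptions: ~1000 concurrent users per instance, ~5 MB per connection.
servers = instances_needed(5000, 1000)   # 5 instances
ram_gb = memory_needed_gb(5000, 5.0)     # ~24.4 GB across the tier
```

Plugging the per-instance figure into AWS's per-hour pricing would then give the monthly cost estimate; the real constants would have to come from load-testing the actual application.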