architecture – Design help for web application that will run as separate instances with different content

I’ve built an RSVP web application with a React front end, Node.js backend, NGINX web server, SQL database, hosted on a DigitalOcean Droplet running Ubuntu. The issue is, every time I want to launch the application with different content (i.e. different events, locations, dates, etc.), I need to manually duplicate the code, update the content, create new A records on DigitalOcean, and change the database name. I currently have the information stored in a separate JSON file that is read by the application, which reduces the overhead when creating a new instance, but I would like to automate as much of it as possible, and even create an admin page so that others can launch a new instance and the process isn’t entirely reliant on me (and make my life easier haha). Each instance will be accessed through a subdomain of a domain I already own (i.e. instance1.example.com, instance2.example.com, etc.). I was considering modifying the application as follows:

  • Dockerize the front end
    • Docker configuration will receive a path to that specific instance’s source of truth
    • All other requirements, packages, and dependencies will be identical, since the application itself never changes; only the displayed content differs (text only, no images/media or functionality will be modified)
  • Expose an API on my server that will do the following (a rough sketch follows this list):
    • Duplicate an existing NGINX template under sites-available and symlink it into sites-enabled
    • Run Certbot to acquire SSL certificate for new subdomain (given as part of request body)
    • Restart NGINX
    • If possible (still need to do more research), add additional DNS records to DigitalOcean domain
    • Run SQL scripts to generate new DB tables based on request body values
  • Reconfigure the existing APIs to take in the DB name so that multiple instances can share the same API
  • Build an admin page
    • Define website content and subdomain name
    • Once information is verified and submitted, new API on server will be called to trigger the creation and deployment of the new instance
  • The database server itself will be a single, non-dockerized service, with new databases (and within, tables) being instantiated dynamically
  • The Node.js server will be a single, non-dockerized service, only modified slightly to handle and reroute traffic to multiple DBs as needed
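
To make the provisioning API concrete, here is a rough sketch of what that endpoint would boil down to (written in C# purely for illustration; my actual implementation would live in the Node.js server, and all paths, domains and the __SERVER_NAME__ placeholder are made up):

    using System;
    using System.Diagnostics;
    using System.IO;

    static class InstanceProvisioner
    {
        // Write an NGINX server block from a template, enable it, obtain a
        // certificate, and reload NGINX. Paths and domains are placeholders.
        public static void Provision(string subdomain)
        {
            string domain = $"{subdomain}.example.com";
            string template = File.ReadAllText("/etc/nginx/sites-available/rsvp-template");
            File.WriteAllText($"/etc/nginx/sites-available/{domain}",
                              template.Replace("__SERVER_NAME__", domain));

            Run("ln", $"-s /etc/nginx/sites-available/{domain} /etc/nginx/sites-enabled/{domain}");
            Run("certbot", $"--nginx -d {domain} --non-interactive --agree-tos -m admin@example.com");
            Run("systemctl", "reload nginx");
            // DNS record creation and DB/table setup would follow the same pattern.
        }

        static void Run(string command, string arguments)
        {
            using var process = Process.Start(new ProcessStartInfo(command, arguments));
            process.WaitForExit();
        }
    }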

I’m also not expecting any crazy amount of traffic at any given moment; realistically, a couple of dozen requests to the server in a day would be a very busy day.

I understand this is probably a very basic design/architecture problem, but please note that I am relatively new to software design as a whole and in the very early stages of my software development career. I understand there may be a lot of problems/inefficiencies with my approach. Please provide as much feedback and as detailed explanations as possible, and feel free to suggest any alternative solutions or concepts I may be missing – I’m not just looking to get this over with, but to build an understanding of quality software design principles that will carry me through my career.

software engineering – How to design a character damage system with ECS architecture?

I am developing a game with an ECS architecture and I am trying to design a character damage system.

Does the following design fit the ECS concept, and will it be extensible in the future?

  • I have an Actor component with a health-percent attribute and an ActorDamage system.

  • I also have an inventory system, so actors can use food to increase their health.

  • When an actor uses an inventory item, its Lua handler executes and generates an ActorHit event.

  • The ActorDamage system receives this event and increases or decreases the actor’s health.

  • Other game systems, like the actors’ AI system or the keyboard control system, can also trigger hit events.
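
Sketched in code, the flow I have in mind looks roughly like this (C# here, with a plain dictionary standing in for the real component storage, and all names made up):

    using System;
    using System.Collections.Generic;

    public struct Health { public float Percent; }                          // actor's health component

    public struct ActorHitEvent { public int Target; public float Delta; }  // positive = heal, negative = damage

    public sealed class ActorDamageSystem
    {
        private readonly Queue<ActorHitEvent> _events = new Queue<ActorHitEvent>();
        private readonly Dictionary<int, Health> _health;                   // stand-in for the ECS component store

        public ActorDamageSystem(Dictionary<int, Health> health) => _health = health;

        // Called by the inventory (Lua) handlers, the AI system, or the input system.
        public void Raise(ActorHitEvent e) => _events.Enqueue(e);

        public void Update()
        {
            while (_events.Count > 0)
            {
                var e = _events.Dequeue();
                if (!_health.TryGetValue(e.Target, out var h)) continue;
                h.Percent = Math.Clamp(h.Percent + e.Delta, 0f, 100f);      // apply the hit or heal
                _health[e.Target] = h;
            }
        }
    }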

So, is this an appropriate design, or is there something I need to correct or improve?

architecture – How to boost reusability and extendability in a MVVM .Net application?

I’m a software engineer who primarily writes C code but now and then makes .Net applications for fun. This is a question about the fun part. Over about 15 years, I have used and expanded my own framework, and now I am stuck. I would love feedback from someone who has experience in writing complex applications in a high-level language!

Okay, remember, this is about fun! I am fighting the hard fight with C and enjoy playing around in the world of C#, which is why I am trying to build an architecture with my framework that provides the following properties:

  • the application logic is assembled from reusable parts
  • these parts can be addressed, replaced and extended using plugins
  • design-time support (i.e. the UI is fed with data at design time)

I am using WPF and the MVVM pattern. I created a set of “services” which provide encapsulated functionality (logging, localization, themes, settings etc.). They implement interfaces which are used by MEF (https://docs.microsoft.com/de-de/dotnet/framework/mef/) to access/instantiate their instances.

This is the list of needed assemblies for an app:

  • app design time assembly (design time implementations of app services, optional)
  • framework design time assembly (design time implementations of framework services, optional)
  • multiple possible extension assemblies (replace or add app and framework services, optional)
  • extension app assembly (UI, app service interfaces)
  • main app assembly (UI, app services)
  • framework assembly (framework services and their interfaces)

I am using a filtered MEF container to load exports from the assemblies in the listed order, while dropping subsequent exports of the same export type identity. Design time assemblies are only used when running in the VS/Blend Designer.

Most of what I want is already there. I can reference the extension app assembly and the framework assembly to write extension assemblies for the app. These extensions can interact with all used services, replace them and introduce new ones. There is also a service that manages the panes of the docking UI, so extensions can introduce new visuals.

The ViewModels are manually assigned to the views and use Constructor Injection to get hold of all needed services.

Most services manage a list of models that provide further functionality. These models are accessed primarily by the UI (via ViewModel>Service>Model) and contain a large part of the actual logic.

Now to the problem:
I have to move all my models into the extension app assembly, as they are referenced by the service interfaces. This seems wrong. Also, I lose access to some important properties from the main app assembly. I feel like I made a wrong turn somewhere.

A) I could embrace the anemic domain model and reduce my models to mere data containers. That would move the logic back to the services, but I think this is a horrible idea (in addition to the obvious reasons, it feels like WPF was totally made for domain-driven design).

B) I could make interfaces or abstract classes for all models. That is a lot of work! It would also make writing services less fun (type covariance issues).
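
To make option B concrete, the shape I mean is roughly the following (hypothetical names; the interfaces would live in the extension app assembly, the concrete classes stay in the main app assembly):

    using System.Collections.Generic;

    // Extension app assembly (contracts):
    public interface IRecipeModel
    {
        string Name { get; }
        void Reload();                                 // behaviour stays on the model, so the domain is not anemic
    }

    public interface IRecipeService
    {
        IReadOnlyList<IRecipeModel> Recipes { get; }   // read-only view avoids most of the covariance pain
    }

    // Main app assembly (implementation):
    internal sealed class RecipeModel : IRecipeModel
    {
        public string Name { get; private set; } = "";
        public void Reload() { /* real logic lives here */ }
    }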

I am doing this for fun and would accept drastic changes if the result is better 🙂

How should I handle the models in my setup?
Thanks in advance, I know this is quite a text!


architecture – What types of documents should I include in a legacy software documentation package?

TL;DR: What would you look for in a collection of documentation when taking over an app that is to be run at a larger scale than it was originally designed for?

I’ve been tasked with writing documentation for an application I’ve never worked on. It’s legacy code that was written by an intern some years ago, so it’s not very well architected or documented within itself; “good code should explain itself” is out of the question.

What I need to do is compile some collection of documents that will help a team of experienced developers take this app and “manage” it on a much larger scale than originally designed for. I thought of breaking this down into four documents.

A user guide: Not being done by me.

A README: a document to go along with the source code; your run-of-the-mill readme, including things like installation instructions and a high-level overview of how things work.

A Help & Maintenance Guide: A document that hopes to cover the issues this team might run into while making updates to the app… I’m not sure where to begin this document other than a list of things I ran into when trying to set this app up/make some changes to it. Any advice on how to properly write this would be greatly appreciated.

A Systems Design Document: A more in-depth document that describes how the different systems interact with each other. What are the nice to have and required things to include in this type of document?

Is there anything I should include/not include? How would you go about creating this type of collection?

Additional notes: I am myself an intern and have never successfully written really good documentation so I’m looking for advice from those who have.

computer architecture – Doubt about pipeline forwarding in MIPS

I recently learnt about pipelining in MIPS and was trying to solve some problems, but I got stuck at a problem which involves pipeline forwarding, or bypassing. I Googled and found the exact same question and its solution here.

It is the second problem, where we have to answer some questions based on the given assembly code. I understood part a), which asks for the number of instructions executed, but I don’t understand part b), where we need to answer how many cycles it will take to execute on the fully bypassed MIPS processor. The solution says that there is a stall between the lw and sw instructions, but I don’t understand why. Can we not have the value to be stored in memory by the sw instruction forwarded from the M stage of lw to the M stage of sw, since the word loaded into $t1 would be available by the end of lw’s M stage? Why is there a need for a stall? I don’t understand this. I am quite new to computer architecture and the learning curve seems quite steep to me, so this might turn out to be a stupid question; forgive me if my way of asking is not correct 🙁
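
For reference, this is the timing I have in mind, assuming the classic 5-stage pipeline and a bypass path from the MEM stage of lw to the MEM stage of sw for the store data (this is a sketch of my own reasoning, not the book’s solution):

    cycle:           1    2    3    4    5    6
    lw $t1, 0($t0)   IF   ID   EX   MEM  WB
    sw $t1, 0($t2)        IF   ID   EX   MEM  WB

    (sw needs the store data in its MEM stage, cycle 5, and lw produces it
     at the end of its own MEM stage, cycle 4, so a MEM-to-MEM bypass would
     seem to avoid any stall)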

c# – Event-driven architecture: how should channels be used?

I’m using an event-driven architecture to perform real-time signal processing and to provide independent metrics.


I decided to use a Redis cluster to act as a cache and a message bus.

I’m a bit confused about the best route for the architecture. Each node has other nodes which subscribe to its values. For some nodes the information is important and should be written to the DB, while for others it should just be passed along to the next node in the chain (since the value at that time isn’t important enough to store globally).

  1. How much overhead is involved when using channels?
    Should I use a single channel and parse everything from there,
    or would it be better to use a channel for each node? (A sketch of the per-node option follows this list.)

  2. Should every event go through Redis? Some parts of the communication don’t need to be stored, e.g. the (Time, Price, Quantity) values coming from the exchange; I don’t see why I would need to publish them just to be persisted. However, for simplicity I don’t want to have to manage multiple code paths.
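
For reference, this is roughly what the per-node-channel option would look like on my side (a sketch assuming the StackExchange.Redis client; channel names and types are made up):

    using System;
    using StackExchange.Redis;

    public sealed class NodeBus
    {
        private readonly ISubscriber _sub;

        public NodeBus(ConnectionMultiplexer mux) => _sub = mux.GetSubscriber();

        // One pub/sub channel per node: each subscriber only receives the values
        // it cares about, so nothing has to parse a shared firehose.
        public void Publish(string nodeId, string payload) =>
            _sub.Publish($"node:{nodeId}", payload);

        public void Subscribe(string nodeId, Action<string> handler) =>
            _sub.Subscribe($"node:{nodeId}", (channel, message) => handler(message));
    }

    // Usage:
    //   var mux = ConnectionMultiplexer.Connect("localhost");
    //   var bus = new NodeBus(mux);
    //   bus.Subscribe("ema-fast", value => Console.WriteLine(value));
    //   bus.Publish("ema-fast", "123.45");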

unity – The Architecture of a Scrollable Retro Menu Navigation System

I’ve been trying to research this topic for quite a while and have found barely any useful information.

What I want to achieve:

Replicating a scrollable menu navigation in Unity, like the one shown here.
It should include pointing navigation arrows as shown in the video, a cursor that indicates the currently selected index, and the description of the selected entry.

What I tried so far:

Unity’s built-in UI Scroll Rect feature doesn’t really fit what I want to accomplish, considering that the menu shouldn’t have a scroll bar and should be navigated with keyboard input only.
This post basically helped me lay the groundwork.

if (entries == 10)
        {
            if (Input.GetKeyDown(KeyCode.DownArrow) && allowMoving)
            { //Input telling it to go up or down.
                selectedOption += 1;
                if (selectedOption > numberOfOptions) //If at end of list go back to top
                {
                    StartCoroutine(MoveCursorToTopOrBottomOfTheList(-36));
                    StartCoroutine(ScrollItemList(-36));
                    descriptionBox.text = itemDescription(0);
                    arrowTop.gameObject.SetActive(false);
                    arrowDown.gameObject.SetActive(true);
                    selectedOption = 1;
                }

                switch (selectedOption) //Set the visual indicator for which option you are on.
                {
                    case 1:
                        descriptionBox.text = itemDescription(0);
                        if (rectTransform.localPosition.y == 3)
                        {
                            StartCoroutine(MoveCursor(0));
                        }

                        break;

                    case 2:
                        descriptionBox.text = itemDescription(1);
                        if (rectTransform.localPosition.y == 3)
                        {
                            StartCoroutine(MoveCursor(1));
                        }
                        break;

                    case 3:
                        descriptionBox.text = itemDescription(2);
                        if (rectTransform.localPosition.y == 3)
                        {
                            StartCoroutine(MoveCursor(2));
                        }

                        break;

                    case 4:
                        descriptionBox.text = itemDescription(3);
                        if (rectTransform.localPosition.y == 3)
                        {
                            StartCoroutine(MoveCursor(3));
                        }

                        break;

                    case 5:
                        descriptionBox.text = itemDescription(4);
                        if (rectTransform.localPosition.y == 3)
                        {
                            StartCoroutine(MoveCursor(4));
                        }

                        break;

                    case 6:
                        descriptionBox.text = itemDescription(5);
                        if (rectTransform.localPosition.y == 3)
                        {
                            StartCoroutine(MoveCursor(5));
                        }
                        else if (rectTransform.localPosition.y == 15)
                        {
                            StartCoroutine(MoveCursor(5));
                        }

                        break;

                    case 7:
                        descriptionBox.text = itemDescription(6);
                        arrowTop.gameObject.SetActive(true);
                        if (rectTransform.localPosition.y == 3)
                        {
                            StartCoroutine(ScrollItemList());
                        }
                        else if (rectTransform.localPosition.y == 15)
                        {
                            StartCoroutine(ScrollItemList());
                        }
                        else if (rectTransform.localPosition.y == 27)
                        {
                            StartCoroutine(ScrollItemList());
                        }

                        break;

                    case 8:
                        descriptionBox.text = itemDescription(7);
                        if (rectTransform.localPosition.y == 15)
                        {
                            StartCoroutine(ScrollItemList(12));
                        }
                        else if (rectTransform.localPosition.y == 27)
                        {
                            StartCoroutine(ScrollItemList(12));
                        }
                        else if (rectTransform.localPosition.y == 39)
                        {
                            StartCoroutine(MoveCursor(7));
                        }
                        break;

                    case 9:
                        descriptionBox.text = itemDescription(8);
                        arrowDown.gameObject.SetActive(false);
                        if (rectTransform.localPosition.y == 27)
                        {
                            StartCoroutine(ScrollItemList());
                        }
                        else if (rectTransform.localPosition.y == 39)
                        {
                            StartCoroutine(MoveCursor(8));
                        }
                        break;

                    case 10:
                        descriptionBox.text = itemDescription(9);
                        if (rectTransform.localPosition.y == 39)
                        {
                            StartCoroutine(MoveCursor(9));
                        }

                        break;

                    case 11:
                        descriptionBox.text = itemDescription(10);
                        if (rectTransform.localPosition.y == 39)
                        {
                            StartCoroutine(MoveCursor(10));
                        }

                        break;
                }
            }

            if (Input.GetKeyDown(KeyCode.UpArrow) && allowMoving)
            { //Input telling it to go up or down
              // etc
            }

Now there are several problems with this approach.
As you can see, I’m calling Coroutines every time I want to scroll the (masked) Rect Transform.
This isn’t optimal at all.

But the biggest problem with this approach is that I have to repeat those steps for every possible entry count (1-20 in my case), meaning I have to figure out when to deactivate the arrows, when to scroll and when to move the cursor for each total entry count. This is far from optimal.

Now onto my question:

I’m certain that the game developers back then used a different approach.
I really don’t think that they hardcoded this stuff for every total entry count but had a system/architecture to figure this out instead.
I’m not sure what they did, or what the architecture behind it looks like.
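
My best guess (untested, names made up) is that everything can be derived from the selected index and the total entry count instead of being hardcoded per case, roughly like this:

    using System;

    public static class MenuLayout
    {
        // Returns which visible row the cursor sits on and which entry is the first visible one.
        public static (int cursorRow, int firstVisibleIndex) Compute(int selectedIndex, int totalEntries, int visibleRows)
        {
            // Clamp the scroll window so the selection stays visible and the list never scrolls past its ends.
            int maxFirst = Math.Max(0, totalEntries - visibleRows);
            int firstVisible = Math.Clamp(selectedIndex - visibleRows / 2, 0, maxFirst);
            int cursorRow = selectedIndex - firstVisible;      // 0 .. visibleRows - 1
            return (cursorRow, firstVisible);
        }
    }

    // The rest falls out of the same numbers, for any entry count:
    //   showUpArrow   = firstVisibleIndex > 0
    //   showDownArrow = firstVisibleIndex + visibleRows < totalEntries
    //   description   = itemDescription(selectedIndex)
    //   scroll target = firstVisibleIndex * rowHeight   (rowHeight being the 12-unit step from my current code)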

Any help on that would be very much appreciated.

Feel free to ask if you don’t understand something!

architecture – Direct communication between message-based bounded contexts

My project comprises several bounded contexts which communicate with each other over RabbitMQ.

The front end of the project is written in C# and the backend is in Java.

The RabbitMQ communication happens with JSON payloads. The backend of the application defines these payloads as JSON-schema files that are used to autogenerate Java classes. The same JSON-schema files are used to autogenerate C# files for the front end. RabbitMQ handlers are written on both sides (frontend and backend) to handle/respond to payload queries.

The communication between frontend and backend needs to happen over RabbitMQ, as the two applications run on different language platforms.

Now, the bounded contexts defined in the backend also use RabbitMQ to communicate with each other. This causes two problems:

  1. The intercommunication between the bounded contexts may fail even after being completely tested, as the bounded contexts are tested independently of each other.
  2. If there is a change, all three things (publisher, JSON schema and consumer) need to be changed.

I want to develop a library which will bypass communication with RabbitMQ wherever possible and make a direct Java call between publisher and consumer, just like any other library call. RabbitMQ shall only be used in case a direct call is not possible.
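
Conceptually, what I have in mind is a dispatcher that prefers a locally registered handler and only falls back to the broker (a sketch in C# for the front-end side; the Java version would be the analogous thing, and all names are made up):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Prefer an in-process handler; fall back to the existing RabbitMQ publisher.
    public interface IMessageDispatcher
    {
        Task SendAsync(string messageType, string jsonPayload);
    }

    public sealed class LocalFirstDispatcher : IMessageDispatcher
    {
        private readonly Dictionary<string, Func<string, Task>> _localHandlers =
            new Dictionary<string, Func<string, Task>>();
        private readonly IMessageDispatcher _rabbitFallback;   // wraps the current RabbitMQ publishing code

        public LocalFirstDispatcher(IMessageDispatcher rabbitFallback) => _rabbitFallback = rabbitFallback;

        // Bounded contexts deployed in the same process register their handlers here.
        public void RegisterLocal(string messageType, Func<string, Task> handler) =>
            _localHandlers[messageType] = handler;

        public Task SendAsync(string messageType, string jsonPayload) =>
            _localHandlers.TryGetValue(messageType, out var handler)
                ? handler(jsonPayload)                                  // direct call, no broker round trip
                : _rabbitFallback.SendAsync(messageType, jsonPayload);  // remote context: go over RabbitMQ
    }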

I need some suggestions as a head start towards solving this problem.

database design – Service architecture for social media with global users

We are building a social media application on AWS, akin to Twitter or Facebook, where a user can make a post which is then surfaced to their followers/friends, who can then like or comment on it.

I wonder what the software architecture design should be to support cases where, say, the author of a post is in Europe but has followers in other geographies (say the USA). In this case, how could I minimise latency for read/write operations on the post across geographies? My users would be on an interactive mobile app, so I don’t want them to notice delays; I’m aiming for something in the 100s of ms at p99.
Some things that I have in mind:

  • Always serve reads/writes from a local service and DB replica. This would mean some kind of replication across all regions to keep them in eventual sync. This can lead to conflicts, which I believe is a beast of a problem to handle.
  • Keep multiple services as in the above case, one per geography, but keep the DB geographically sharded so that posts created by users in, say, Europe are stored in Europe. This doesn’t solve all cases but would solve the majority.
  • Keep just one service geography, say Europe, and keep an API gateway in each region where we have users; this gateway just routes requests to the service over AWS’s dedicated network links, which have lower latency than the WAN.

Can anybody share some insights on this?

computer architecture – Interesting speedup & Amdahl’s law problem

I have found a problem in my Computer Architecture textbook which I have some issues with:

We have a process which spends its time in the following way:

  • 50% of the time, it executes common arithmetic (non-floating point) instructions
  • 10% of the time, it executes floating point instructions
  • 40% of the time, it executes a function consisting of 4816896 instructions, which is 60% of the total instruction count

After improving the function’s algorithm, we end up with 983040 instructions executed on the function instead.

Assuming that each instruction executes in one cycle, the problem asks for the performance improvement after this reduction in instruction count.

By calculating the speedup of the function before and after the improvement (4.9) and then using Amdahl’s law, we get an increase of 46.7% in performance.
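
Written out (this is just the calculation from the previous paragraph), with the function taking a fraction $f = 0.4$ of the original time:

$$S_{\text{func}} = \frac{4\,816\,896}{983\,040} = 4.9$$

$$S_{\text{overall}} = \frac{1}{(1 - f) + \frac{f}{S_{\text{func}}}} = \frac{1}{0.6 + \frac{0.4}{4.9}} \approx 1.467$$

which is the roughly 46.7% improvement.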

After this, it asks us to calculate the speedup again, but this time by looking at the increase in MFLOPS.

How could this be done if we don’t know anything about execution times?