python – Designing a URL Shortener

Recently I was assigned a URL shortener design problem while interviewing with a well-known organization. Here is the exact problem statement:

  1. Build a simple URL shortener service that will accept a URL as an argument over a REST API and return a shortened URL as a result.

  2. The URL and shortened URL should be stored in memory by the application.

    a. (BONUS) Instead of in memory, store these things in a text file.

  3. If I ask for the same URL again, it should give me the same shortened URL it gave before instead of generating a new one.

  4. (BONUS) Put this application in a Docker image by writing a Dockerfile and provide the docker image link along with the source code link.

The problem looked simple, and I completed all the bonus points as well, but I still didn’t hear back from them (not even feedback).

Here is my approach:

  • As they have asked for multiple storage mechanisms, I created a factory for key-value stores and implemented memory and file based stores.
  • I could think of two approaches for shortening the URLs: 1) hashing, 2) base 62 of a counter. Base 62 seemed like the proper approach, but they added the requirement that the same shortened URL must be returned for the same long URL. As base 62 works with an auto-increment counter, achieving this without extra memory/storage is not possible. So I went with hashing the long URLs (I know there is a chance of collisions; I mentioned this as a trade-off).
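For illustration (a sketch, not the submitted code), here is the difference between the two approaches: `base62` needs a counter, and thus extra storage to stay deterministic per URL, while the hash prefix is deterministic by construction but can collide.

```python
import hashlib
import string

# 0-9, a-z, A-Z: 62 characters in total
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def base62(counter: int) -> str:
    """Encode an auto-increment counter in base 62."""
    if counter == 0:
        return ALPHABET[0]
    digits = []
    while counter:
        counter, rem = divmod(counter, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def hash_suffix(long_url: str) -> str:
    """First 8 hex chars of SHA-256: same URL always yields the same suffix."""
    return hashlib.sha256(long_url.encode("utf-8")).hexdigest()[:8]
```

Calling `base62` twice for the same URL yields two different suffixes (the counter has advanced), unless a counter-to-URL mapping is also stored; `hash_suffix` is a pure function of the URL.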

Here is my base key-value store:

from abc import ABC, abstractmethod


class KVStoreBase(ABC):
    """Base key-value store class that provides abstract methods required for
    any storage mechanism used to store shortened URLs.
    """

    def __init__(self, store):
        """
        :param store: Object of any storage mechanism.
        """ = store

    @abstractmethod
    def __getitem__(self, key: str) -> str:
        ...

    @abstractmethod
    def __setitem__(self, key: str, value: str) -> None:
        ...

    @abstractmethod
    def __contains__(self, key: str) -> bool:
        ...
import io

from .kvstore_base import KVStoreBase

class FileStore(KVStoreBase):
    """File store, which uses a text file to store the key-value pairs."""

    def __init__(self):
        super().__init__(open("db.txt", "a+"))

    def __getitem__(self, key: str) -> str:
        """Get value for the given key from the file.
        :param key: Key for which the value is to be retrieved.
        :return: value
        """
        try:
            # move pointer to initial position in file
  , io.SEEK_SET)
            # iterate over file and search for given key.
            for line in
                suffix, long_url = line.split()
                if suffix == key:
                    return long_url
        except Exception as err:
            print(err)
        return ""

    def __setitem__(self, key: str, value: str) -> None:
        """Set value for the given key into the file.
        :param key: Key to be added.
        :param value: Value corresponding to the key.
        :return: None
        """
        # move pointer to the end of the file for writing., io.SEEK_END)
        if != 0:
  "\n{key} {value}")
  "{key} {value}")

    def __contains__(self, key: str) -> bool:
        """Check whether the given key is present in the file.
        :param key: Key whose presence is to be checked.
        :return: True if key is present, False otherwise.
        """
        return True if self.__getitem__(key) else False

    def __del__(self) -> None:
        """Free file resource."""
from .memory_store import MemoryStore
from .file_store import FileStore
from .kvstore_base import KVStoreBase

class Factory:
    """KV store factory to instantiate storage objects."""

    kvstore_map: dict = {"MEMORY": MemoryStore, "FILE": FileStore}

    @staticmethod
    def get_instance(instance_type: str) -> KVStoreBase:
        """Instantiate given KV store class dynamically.
        :param instance_type: Type of KV store to be instantiated.
        :return: KV store object.
        """
        try:
            return Factory.kvstore_map[instance_type]()
        except KeyError:
            raise Exception("Invalid instance requested.")

I used FastAPI to create the endpoints; here are the two main routes (one for shortening the URL and one for redirecting):


import hashlib
from urllib.parse import urlparse

from fastapi import APIRouter, HTTPException, Request
from starlette.responses import RedirectResponse

from ..utils.store_connector import StoreConnector
from ..models.urls import Url

router = APIRouter()
store = StoreConnector().store


def _get_base_url(endpoint_url: str) -> str:
    """Extract the base URL from any endpoint URL.
    :param endpoint_url: Endpoint URL from which the base URL is to be extracted.
    :return: Base URL
    """
    parsed = urlparse(endpoint_url)
    return f"{parsed.scheme}://{parsed.hostname}:{parsed.port}""/shorten", tags=["URLs"])
async def shorten(url_obj: Url, request: Request) -> Url:
    """Shorten the given long URL.
    :param request: request object
    :param url_obj: URL object
    :return: shortened URL.
    """
    suffix = hashlib.sha256(url_obj.url.encode("utf-8")).hexdigest()[:8]
    if suffix not in store:
        # store short-url-suffix: long-url into data store.
        store[suffix] = url_obj.url

    return Url(url=f"{_get_base_url(request.url_for('shorten'))}/{suffix}")


@router.get("/{suffix}", tags=["URLs"])
async def redirect(suffix: str) -> RedirectResponse:
    """Redirect to long URL for the given URL ID.
    :param suffix: URL ID for the corresponding long URL.
    :return: Long URL.
    """
    long_url = store[suffix]
    if long_url:
        # return permanent redirect so that browsers store this in their cache.
        return RedirectResponse(url=long_url, status_code=301)
    raise HTTPException(status_code=404, detail="Short URL not found.")

I exposed the storage mechanism as a Docker environment variable and created a storage connector class, which calls the factory based on the user’s configuration.

import os
import sys
from sys import stderr

from .singleton import Singleton
from ..kvstores.kvstore_factory import Factory
from ..kvstores.kvstore_base import KVStoreBase


class StoreConnector(metaclass=Singleton):
    """Key-value store singleton class that can be used across all modules."""

    def __init__(self):
        try:
            store_type = os.environ.get("STORE_TYPE", "MEMORY")
            self._store = Factory.get_instance(store_type)
        except KeyError as ex:
            print(ex, file=stderr)
            # one of the required environment variables is not set
            sys.exit("One of the required environment variables is not set")
        except Exception as ex:
            print(ex, file=stderr)

    @property
    def store(self) -> KVStoreBase:
        return self._store

Here is my directory structure:

├── Dockerfile
├── requirements.txt
├── src
│   ├──
│   ├── kvstores
│   │   ├──
│   │   ├──
│   │   ├──
│   │   ├──
│   │   └──
│   ├──
│   ├── models
│   │   ├──
│   │   └──
│   ├── routes
│   │   ├──
│   │   └──
│   └── utils
│       ├──
│       └──
└── tests

In the instructions, they emphasized the following points:

  • Readability of code
  • Tests – Unit tests definitely and more if you can think of
  • A good structure to your code and well written file & variable names etc.

So, which of those points does my code lack?

architecture – Designing persistence in an ECS world subdivided into chunks

I’m designing a kind of simple open world with ECS. The whole world is too large to be loaded at once, so I load and unload chunks according to player’s position. Nothing fancy, pretty much standard nowadays.

In this world, I have entities that I need to update frequently. There are a couple of thousands of them. These entities share some components and a tag to retrieve them, but can refer to different archetypes. These entities are placed by hand, before runtime. No entity of this kind is created on runtime.

I need to update these entities every five minutes. I do not need them to update in a single frame, it can be scheduled over multiple frames (so performance is not exactly critical here but as always, the more performant the better).

  1. I need to update every entity, even if not loaded according to player’s position
  2. I need to save and load the state of these entities

Currently, the position of these entities is guaranteed to remain the same, and it is also guaranteed that two entities can’t share the same position. However, this may change.

Solution A

I create a singleton “manager” that holds a container. Keys are the positions of the entities. I iterate through the container to update the values, so I don’t need to load entities.

I update the entities currently loaded with the new values retrieved from the manager.
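Solution A can be sketched like this (a minimal illustration in Python; the names and the `growth` field are hypothetical, and a real ECS would use the framework’s own component types):

```python
from dataclasses import dataclass

@dataclass
class EntityState:
    """Hypothetical per-entity state tracked outside the ECS."""
    growth: int = 0

class UpdateManager:
    """Singleton-style manager keyed by the (immutable) entity position."""

    def __init__(self):
        self._states: dict[tuple[int, int], EntityState] = {}

    def register(self, position: tuple[int, int]) -> None:
        # called once at world creation for every hand-placed entity
        self._states[position] = EntityState()

    def tick(self) -> None:
        # update every entity's state, whether its chunk is loaded or not
        for state in self._states.values():
            state.growth += 1

    def state_at(self, position: tuple[int, int]) -> EntityState:
        # loaded entities sync their components from here
        return self._states[position]
```

Serialization is then just dumping `_states`, which is what makes this variant trivial to save.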

Pros:
  • Fast
  • Easy to work with
  • No need to load entities (and meshes and so on)
  • Trivial to serialize and save
  • Do not require additional work on the world creation

Cons:
  • Not flexible as the position is now a key (could be a problem if I want the system to evolve)
  • May cause issues when parallelizing
  • Having positions as keys is probably a very bad idea
  • Defeats the principles of ECS as data is not “owned” by the entities, but is centralized

Solution A (alternative):

This time, with a script I execute before runtime, I set an ID to each entity in my world as part of a component. I use this ID to store the data in my container (which is now a simple array and not some sort of a dictionary).

Pros:
  • Very fast
  • Easy to work with
  • No need to load entities (and meshes and so on)
  • Trivial to serialize and save

Cons:
  • Does require additional work on the world creation
  • Still not very flexible (but better than positions)
  • May cause issues when parallelizing
  • Still defeats the principles of ECS as data is not “owned” by the entities, but is centralized

Solution B

This time, all the data remains in the entities. I have to separate my chunks into two parts: a part with the entities I need to update (let’s call that an update-chunk), a part with the rest of my entities (a regular-chunk).

I first have to create a save for my entities: I load every update-chunk, I update the entities, I serialize them then unload them.

Then, for each update: I deserialize my entities, I update the entities, I serialize them again.

When a chunk is loaded: I deserialize my entities and I load the regular-chunk only.

Pros:
  • I’m using ECS, yeah!
  • … Nothing much

Cons:
  • Ridiculously inconvenient and complicated
  • I need to serialize over and over again
  • Probably very, very bad performance
  • I save a lot of data I already “saved” in my base configuration (so many duplicates)
  • Difficult to set up as I have to create twice as many chunks
  • I have to load meshes/textures associated with the entities even if I do not need to render them (I can’t change this behavior in the framework I use, or I need to code another system and this is getting far too complex)
  • Probably more, but that’s more than enough to disqualify this solution

I’m pretty much convinced I’m doing something wrong and that I missed something really important.

Solution A may perfectly work (after all, I have no need to absolutely use ECS for everything) but it bothers me that I can’t figure out how to solve this problem with ECS.

database design – Designing complex cross table data integrity checks

I am trying to start working on an application, and while I have some code, the time has come to work on my database model as well. In the next sections I will describe what my application should do, followed by what I expect of my database model; in the end I will present my attempt thus far. I hope your experience will come in handy.

This is a simple application for creating appointments for services with specific providers. For example, the application would let you create an appointment for a haircut at 10:00 with your favorite hairdresser, John Doe.

Now here comes the catch. Each of the service providers, in this case our Mr. John Doe, could be at a different location at different times, and it would be incorrect to book an appointment with John Doe in one of our subsidiaries if, according to his work plan, he is scheduled to be somewhere else entirely.

These considerations have led me to observe that I have the following basic entities in my database:

  • Service ( represents the thing you can buy as a service, as said example would be a haircut )
  • Provider ( our John Doe )
  • Location ( location of the subsidiary )
  • Appointment ( the actual representation of an appointment for a service )

With these entities, the following constraints represent “valid data” for me ( and do correct me if this should not be done in the database ):

  • A provider can only be at a single location at a specific time
  • A provider can only have a single appointment booked at a specific time
  • A single appointment is completely provided by a single provider ( you will see the relevance of this constraint when I present my solution below )
  • A single appointment is completely provided at a single location
  • We can book an appointment with a specific provider at a specific location only if the provider is scheduled to be at this location at this time.

I have managed to get most of the constraints implemented, but some are seemingly not satisfiable with simple tools. To simplify things and avoid the use of time intervals, I have decided that only specific time entries are allowed in the database: time intervals for bookings and locations are represented by collections of timestamps, which always have to be multiples of 5 minutes and must have 0 seconds. Here is my model up to now:

create table service (
    id serial primary key,
    name varchar(255) not null

create table provider (
    id serial primary key,
    name varchar(128) not null

create table location (
    id serial primary key,
    name varchar(255) not null

-- This table specifies the exact times at which a provider is at a specific location
create table provider_location_time(
    id serial primary key,
    provider_id integer not null,
    location_id integer not null,
    time_block timestamp not null check (cast(extract(minute from time_block) as integer) % 5 = 0 and cast(extract(second from time_block) as integer) = 0),
    foreign key (provider_id) references provider(id),
    foreign key (location_id) references location(id),
    unique(provider_id, time_block) -- ensures that a provider is only at a single location at a specific time

create table appointment(
    id serial primary key,
    service_id integer not null

-- Specifies the times which are reserved for a single appointment
create table appointment_time(
    id serial primary key,
    appointment_id integer not null,
    provider_location_time_id integer not null, -- this foreign key ensures that the appointment can only be made at locations and times where a provider is available, as described in the fifth constraint listed above
    foreign key (appointment_id) references appointment(id),
    foreign key (provider_location_time_id) references provider_location_time(id),
    unique(provider_location_time_id) -- ensures that a single time slot available for a provider at a location is booked only for a single appointment; avoids accidental double booking of providers
Now, given the model above, I cannot ensure the third constraint, namely that:

A single appointment is completely provided by a single provider

And due to the inability to ensure the constraint above, I cannot ensure that an appointment will always be completed at a single specific location, since I can attach an appointment to arbitrary provider-location-time combinations.

Can this be solved by better design or are triggers my only option?

design – I’m designing a Python API to play a single-player game. Should I use a state machine with different states?

I’m designing a Python API to play a single-player card collection game, and my prototype includes a state machine which uses pyautogui’s locate function to recognize and update the current game state by clicking the game directly. I am planning to use the statemachine package from PyPI. Ideally this API would be hooked up to a neural network.

After reading WHY DEVELOPERS NEVER USE STATE MACHINES, I got worried that an overcomplicated state machine will be a ton of hassle to maintain, since my state machine starts with 40 states and most of them need to be interconnected through functions; if every state requires specific functions to move to the others, that would be over a couple of hundred functions, one for each action. Is there a way to avoid this specific problem? Am I even looking in the right direction? If state machines are a bad fit here, what are some alternatives?

class CS(StateMachine):
    # the State() arguments were truncated in the paste; the names below mirror
    # the attribute names, and the initial state is assumed to be 'home'
    home = State("home", initial=True)
    dock = State("dock")
    supply = State("supply")
    refit = State("refit")
    bathtub = State("bathtub")
    factory = State("factory")

    sortiePage1 = State("sortiePage1")
    sortiePage2 = State("sortiePage2")
    sortiePage3 = State("sortiePage3")
    sortiePage4 = State("sortiePage4")
    sortiePage5 = State("sortiePage5")
    sortiePage6 = State("sortiePage6")
    sortiePage7 = State("sortiePage7")
    expeditionPage1 = State("expeditionPage1")
    expeditionPage2 = State("expeditionPage2")
    expeditionPage3 = State("expeditionPage3")
    expeditionPage4 = State("expeditionPage4")
    expeditionPage5 = State("expeditionPage5")
    expeditionPage6 = State("expeditionPage6")
    expeditionPage7 = State("expeditionPage7")
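One common way to avoid the function explosion is a table-driven design: transitions are kept as data (a mapping from (state, action) to the next state) and a single generic dispatcher replaces the per-transition methods. A minimal sketch, with hypothetical state and action names:

```python
# One row per edge instead of one function per edge; with 40 states this table
# grows, but it stays declarative data rather than hundreds of methods.
TRANSITIONS: dict[tuple[str, str], str] = {
    ("home", "open_dock"): "dock",
    ("home", "open_supply"): "supply",
    ("dock", "go_home"): "home",
    ("supply", "go_home"): "home",
}

class Game:
    def __init__(self, start: str = "home"):
        self.state = start

    def dispatch(self, action: str) -> str:
        """Apply an action, or raise if it is illegal in the current state."""
        try:
            self.state = TRANSITIONS[(self.state, action)]
        except KeyError:
            raise ValueError(f"illegal action {action!r} in state {self.state!r}")
        return self.state
```

The table can also be generated programmatically (e.g. all sortie pages are reachable from each other), which is awkward to express as individual transition methods.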

java – Designing a phonebook menu bar, filter and sorting order buttons included

I designed a phonebook application using JavaFX (trying to follow Google’s Material Design, as you can see in the screenshot).
The code below handles the ButtonBar next to the search bar, but I’m new to JavaFX and I want to know whether it is poorly written and organized.

package gui;

import data.Contact;
import data.Database;
import javafx.collections.ObservableList;
import javafx.scene.control.MenuButton;
import javafx.scene.control.RadioMenuItem;
import javafx.scene.control.SeparatorMenuItem;
import javafx.scene.control.ToggleGroup;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.AnchorPane;
import model.ContactListOrderManager;
import model.ListOrder;

// the three imports below were missing from the pasted code
import javafx.beans.property.SimpleObjectProperty;


public class ListButtonsController {
    private final SimpleObjectProperty<MenuButton> orderButton;
    private final SimpleObjectProperty<MenuButton> filterButton;
    private final SimpleObjectProperty<OrderButtonController> orderButtonController;
    private final SimpleObjectProperty<ObservableList<Contact>> contactList;

    public ListButtonsController(MenuButton orderButton, MenuButton filterButton,
                                 ObservableList<Contact> contactList) {
        this.orderButton = new SimpleObjectProperty<>(orderButton);
        this.filterButton = new SimpleObjectProperty<>(filterButton);
        var orderButtonControl = new OrderButtonController(this);
        this.orderButtonController = new SimpleObjectProperty<>(orderButtonControl);
        this.contactList = new SimpleObjectProperty<>(contactList);
    }

    protected MenuButton getOrderButton() {
        return orderButton.get();
    }

    protected MenuButton getFilterButton() {
        return filterButton.get();
    }

    private OrderButtonController getOrderButtonController() {
        return orderButtonController.get();
    }

    public ObservableList<Contact> getContactList() {
        return contactList.get();
    }

    public void listButtonsSetup() throws IOException {
        AnchorPane.setLeftAnchor(getOrderButton(), 290d);
        AnchorPane.setTopAnchor(getOrderButton(), 1.5);
        AnchorPane.setLeftAnchor(getFilterButton(), 240d);
        AnchorPane.setTopAnchor(getFilterButton(), 1.5);
        // the rest of this body was truncated in the paste; presumably:
        setButtonsIcons();
        setButtonsStyle();
        getOrderButtonController().orderButtonSetup();
    }

    private void setButtonsIcons() throws IOException {
        try (var filterInput = new FileInputStream("icons/filter_black_24dp.png");
             var orderInput = new FileInputStream("icons/sort_black_24dp.png")) {
            getOrderButton().setGraphic(new ImageView(new Image(orderInput)));
            getFilterButton().setGraphic(new ImageView(new Image(filterInput)));
        }
    }

    private void setButtonsStyle() {
        // body truncated in the original paste
    }
}


class OrderButtonController {
    private final ListButtonsController generalController;
    private final MenuButton orderButton;

    protected OrderButtonController(ListButtonsController controller) {
        this.generalController = controller;
        this.orderButton = generalController.getOrderButton();
    }

    protected void orderButtonSetup() {
        var ordinatorGroup = new ToggleGroup();
        var alphabeticalOrderItem = new RadioMenuItem("Alphabetically");
        var separator = new SeparatorMenuItem();
        var orderGroup = new ToggleGroup();
        var ascendingRadioItem = new RadioMenuItem("Ascending");
        var descendingRadioItem = new RadioMenuItem("Descending");
        orderGroup.getToggles().addAll(ascendingRadioItem, descendingRadioItem);

        // the handler bodies were truncated in the paste; applyOrder below is a
        // hypothetical stand-in for whatever method re-sorts the contact list
        descendingRadioItem.setOnAction(actionEvent -> applyOrder(
                ListOrder.getOrdering((RadioMenuItem) orderGroup.getSelectedToggle()),
                descendingRadioItem.isSelected()));
        ascendingRadioItem.setOnAction(actionEvent -> applyOrder(
                ListOrder.getOrdering((RadioMenuItem) orderGroup.getSelectedToggle()),
                descendingRadioItem.isSelected()));
        alphabeticalOrderItem.setOnAction(actionEvent -> applyOrder(
                ListOrder.ALPHABETIC, descendingRadioItem.isSelected()));

        setMenusStyle(alphabeticalOrderItem, ascendingRadioItem, descendingRadioItem);
        orderButton.getItems().addAll(alphabeticalOrderItem, separator, ascendingRadioItem, descendingRadioItem);
    }

    private void applyOrder(ListOrder order, boolean descending) {
        // hypothetical stand-in; the original implementation was lost in the paste
    }

    private void setMenusStyle(RadioMenuItem... menus) {
        for (RadioMenuItem menu : menus) {
            // body truncated in the original paste
        }
    }
}

multithreading – designing a high-throughput system for storing hundreds of thousands of incoming records per second in a SQL Server database

In our company we have a requirement to store hundreds of thousands of incoming records per second. We currently use a pub-sub model for processing many records (100/sec) from many systems (~1000) listening for incoming events. Now the problem is storing these records as fast as they come in, i.e. with minimum delay.

I have Python 3 code written which leverages an existing framework and stores events in the database. I used asyncio along with threads for running code in parallel; I was thinking this would help tackle the delay I am seeing in inserting the records for a single system, but the gains are minimal, and if I increase the load to 6 systems or more I see an increasing delay over time. I investigated the system performance.

With top I see that the CPU % used is well above 100%:

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 711944 dvmt-ev+  20   0 2258788 226220  17800 S 107.7   1.4  84:41.20 python

Is there a way I can maintain consistent throughput with increased load?
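One lever worth checking before adding more threads is batching: inserting rows one at a time usually caps throughput long before the CPU does. A sketch (not the poster's code; sqlite3 stands in for SQL Server, and the table and queue names are assumptions) where a queue collects incoming events and a writer flushes them in one `executemany` call per batch:

```python
import queue
import sqlite3

BATCH_SIZE = 500  # tune against the real database's sweet spot

def writer_loop(events: "queue.Queue[tuple]", conn: sqlite3.Connection) -> int:
    """Drain the queue, inserting events in batches; returns rows written."""
    written = 0
    batch = []
    while True:
        try:
            batch.append(events.get_nowait())
        except queue.Empty:
            break
        if len(batch) >= BATCH_SIZE:
            # one round trip and one commit per batch, not per row
            conn.executemany("insert into events(payload) values (?)", batch)
            conn.commit()
            written += len(batch)
            batch.clear()
    if batch:  # flush the remainder
        conn.executemany("insert into events(payload) values (?)", batch)
        conn.commit()
        written += len(batch)
    return written
```

With SQL Server specifically, the analogous tools would be bulk-insert APIs rather than `executemany`, but the shape of the fix (amortize round trips and commits over many rows) is the same.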

EDIT: this is the latest top command result.

top - 17:20:43 up 90 days, 19:48,  1 user,  load average: 1.17, 1.00, 0.68
Tasks: 200 total,   1 running, 199 sleeping,   0 stopped,   0 zombie
%Cpu0  : 11.6 us,  4.3 sy,  0.0 ni, 82.7 id,  0.0 wa,  0.3 hi,  1.0 si,  0.0 st
%Cpu1  : 11.4 us,  4.7 sy,  0.0 ni, 82.3 id,  0.0 wa,  0.3 hi,  1.3 si,  0.0 st
%Cpu2  : 10.4 us,  5.0 sy,  0.0 ni, 83.3 id,  0.0 wa,  0.3 hi,  1.0 si,  0.0 st
%Cpu3  : 13.0 us,  4.7 sy,  0.0 ni, 80.7 id,  0.0 wa,  0.7 hi,  1.0 si,  0.0 st
%Cpu4  : 11.8 us,  3.7 sy,  0.0 ni, 83.5 id,  0.0 wa,  0.3 hi,  0.7 si,  0.0 st
%Cpu5  : 11.5 us,  3.7 sy,  0.0 ni, 84.4 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
%Cpu6  : 16.8 us,  4.7 sy,  0.0 ni, 77.1 id,  0.0 wa,  0.3 hi,  1.0 si,  0.0 st
%Cpu7  : 10.1 us,  4.0 sy,  0.0 ni, 84.6 id,  0.0 wa,  0.3 hi,  1.0 si,  0.0 st
MiB Mem :  15848.1 total,   9050.0 free,   1595.5 used,   5202.6 buff/cache
MiB Swap:   8192.0 total,   8100.5 free,     91.5 used.  13098.2 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
1876629 dvmt-ev+  20   0 2386288 293820  17880 S 148.8   1.8   6:24.34 python
    976 root      20   0  390256   2184   1440 S   0.3   0.0  32:25.65 NetworkManager
4119967 dvmt-to+  20   0 1205460  59196  17972 S   0.3   0.4   1:47.69 python
      1 root      20   0  247384   7852   4520 S   0.0   0.0  14:40.79 systemd
      2 root      20   0       0      0      0 S   0.0   0.0   0:07.67 kthreadd

licensing – Designing a simple ‘site license’ system for a software license

The aim is to have a system that would allow any user on any client machine of a home network to be able to use my software. Activations outside of this network would not be allowed.

The base for this is a web API which holds various license keys. You can do the usual queries of this API – verify, create, update, etc. There’s a facility for a number of activations, max activations, etc. The idea is that the user is sent a license key to activate their product.

The simple process would be: the user chooses “activate product” and types in their key; the software checks with the API whether the key is valid and within the remaining number of activations, and the API returns the relevant info. You can send info to the API (any custom string) and possibly store it server-side.

The real challenge is the constraint of not allowing activations outside of a home network. The things I’ve considered:

  1. Sending the public IP to the API, storing it there, and comparing any other activations against it. The problem being dynamic IPs and changes of ISP.

  2. Handling this within the home network. For example, if the first activation is on Client A, then when a new installation attempts activation on Client B, the software can check some shared location for a file/key. I was originally thinking %appdata% or the registry, but my cursory reading reveals that there is no ‘automatic syncing’ of these across a home network.

  3. I wonder about any registered copy of the software producing, on demand, an encrypted file that contains some kind of unique identifier of the home network, which can then be imported into another client to activate it. The import would check the network identifier against its own calculation of this, and if they match, activation is approved. This would also get around any problems of identifiers changing due to network hardware or infrastructure changes, as the ‘key file’ would be contemporaneous and thus relevant to both machines.

I want to keep it simple. I’m not precious about piracy, and realise no system is watertight. Just enough to discourage people from sharing keys, whilst also allowing them to set up multiple machines with the software in their own home.

I would appreciate any views + suggestions.

Designing for many third-party REST API integrations

Not sure if this is the right forum for this, but I’ve come across a need for this pattern a few times lately, and I would love to get some opinions on options for how to handle it.

Say I have an app that takes data from an arbitrary number of third-party applications and wants to parse it into a unifying format:

    {
        "field_1": "value_1",
        "field_2": "value_2"
    }

So I might have integration 1 return a response that looks like

    {
        "different_field_1_name": "desired_field_1_value",
        "field_2": "desired_field_2_value"
    }

And integration 2 return a response that looks like

    {
        "different_field_1_name_again": "desired_field_1_value",
        "different_field_2_name": "desired_field_2_value",
        "unneeded_field": "unneeded_value"
    }


Ideally, adding new integrations should be as painless as possible. But every time I’ve had to do something like this, things like authentication, response formats, weird API quirks, etc., make it feel like I’m starting from scratch with each new integration. Does anybody have examples of open-source applications that do this sort of thing well? Or is there an obvious pattern I’m missing that makes this easier?
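One pattern that fits the two example responses above is an adapter per integration: each third-party API gets one small class that owns its quirks (field names, dropped fields, and in a fuller version auth and pagination) and emits the unified format behind a shared interface. A sketch using the field names from the question (class and function names are illustrative, not from any particular library):

```python
from abc import ABC, abstractmethod

class IntegrationAdapter(ABC):
    """Each third-party API gets one adapter that hides its quirks."""

    @abstractmethod
    def to_unified(self, raw: dict) -> dict:
        ...

class Integration1Adapter(IntegrationAdapter):
    def to_unified(self, raw: dict) -> dict:
        return {"field_1": raw["different_field_1_name"],
                "field_2": raw["field_2"]}

class Integration2Adapter(IntegrationAdapter):
    def to_unified(self, raw: dict) -> dict:
        return {"field_1": raw["different_field_1_name_again"],
                "field_2": raw["different_field_2_name"]}
        # unneeded_field is simply dropped

ADAPTERS = {"integration_1": Integration1Adapter(),
            "integration_2": Integration2Adapter()}

def normalize(source: str, raw: dict) -> dict:
    """The rest of the app only ever sees the unified shape."""
    return ADAPTERS[source].to_unified(raw)
```

Adding integration N then means writing one new adapter and registering it, and nothing downstream of `normalize` changes.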