amazon s3 – Concurrent Writes to the same S3 file – Updates from multiple s3 buckets to target S3 bucket file using Lambda S3 Trigger

My use-case is to design a system that continuously reads events (content) from source S3 buckets and merges them with the file present in the target S3 bucket.

For instance,

Bucket 1 belongs to A: Any update -> trigger Lambda L -> read the file from Bucket 3, merge the newly uploaded content, and write it back.

Bucket 2 belongs to B: Any update -> trigger Lambda L -> read the file from Bucket 3, merge the newly uploaded content, and write it back.

Bucket 3 belongs to C: Any update -> trigger Lambda L -> read the file from Bucket 3, merge the newly uploaded content, and write it back.

These events can happen at the same time, and the Lambda should keep updating the same file in Bucket 3.

How can we guarantee consistency of the data in Bucket 3?

I can think of the following approaches:

1- Set the Lambda concurrency limit to 1: only one write operation on Bucket 3 at a time.

2- Implement a locking mechanism on the target Bucket 3.

3- Schedule a cron job that triggers the Lambda at a particular time, reads all the source buckets, merges the newly uploaded content with the existing content, and writes the result to the target S3 bucket.

Any recommendations?
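Approach 2 is often done optimistically rather than with an explicit lock: read the target object and its version tag, merge, and retry if another writer got there first. Below is a minimal sketch of that read-merge-write loop; `VersionedStore` is a made-up in-memory stand-in for S3 (in practice the compare-and-swap would live in something like a DynamoDB conditional update, or S3's own conditional-write support where available), so every name here is illustrative.

```python
class VersionedStore:
    """Hypothetical stand-in for S3: each successful put bumps a
    version tag, playing the role of an ETag."""
    def __init__(self):
        self.content, self.version = "", 0

    def get(self):
        return self.content, self.version

    def put_if_match(self, new_content, expected_version):
        # Compare-and-swap: succeed only if nobody wrote since we read.
        if self.version != expected_version:
            return False
        self.content, self.version = new_content, self.version + 1
        return True


def merge_with_retry(store, new_event, max_attempts=10):
    """Read-merge-write loop: retry whenever a concurrent writer wins."""
    for _ in range(max_attempts):
        current, version = store.get()
        merged = current + new_event + "\n"
        if store.put_if_match(merged, version):
            return True
    return False  # contention too high; caller decides what to do
```

With this shape, no Lambda invocation can silently overwrite another's merge; a losing writer simply re-reads and retries, which avoids the throughput ceiling of a concurrency limit of 1.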

multithreading – What is the best way to decouple and define the orchestration (coordination) of concurrent tasks?

I have a concurrent data-synchronisation algorithm. It does the following: get data and files from a server, send data and files to the server, and save them to a database / filesystem. Imagine the system like this:

  1. You have 1000 functions. Each one does some atomic operation. For instance, fetch the latest objects of type X and insert them into the DB; upload this file of type Y; and so on. Each function is independent and can act on its own; it does not communicate with or affect other functions. On the other hand, none of them is a pure function, because they all use these common resources (fetching data from the server, putting data in the DB, saving files on the filesystem).
  2. You have a single entry point for the synchronization mechanism. The outside of the sync system can start the sync, say, by doing a Sync.start() call. The sync also has a single exit point: it can finish with either success or failure (if any of the functions from (1) fails, the whole sync fails). The outside of the sync system can subscribe to onSyncSuccess / onSyncError events.
  3. You have this black box in the middle of the system. This could be, for instance, a single-threaded algorithm calling those 1000 functions from (1). But I made it concurrent.

Now consider this. The concurrent algorithm is currently rigid, because the way the functions are called is hardcoded. If I wanted to take a bunch of functions from (1) that currently execute sequentially and make them execute in parallel, it would be impossible without refactoring the whole class hierarchy.

I was thinking about the concept of directed acyclic graphs, and I made my own domain-specific language in Kotlin to define such task graphs. Now I can write the whole orchestration declaratively like this:

notifySyncWasStarted()
runSequentialy {
    task { doTask1() }
    runInParallel {
        task { doTask2() }
        task { doTask3() }
    }
    task { doTask4() }
}
notifySyncWasStopped()

So first task1 gets executed, then task2 and task3 at the same time, then task4. By keeping this graph in a single file, I can easily modify the way tasks are executed. For instance, I can easily swap tasks:

notifySyncWasStarted()
runSequentialy {
    runInParallel {
        task { doTask4() }
        task { doTask2() }
    }
    task { doTask3() }
    task { doTask1() }
}
notifySyncWasStopped()

Here, tasks 4 and 2 get executed in parallel, then 3, then 1. This works by using the fork-join paradigm: I create threads and then join them into the parent thread.

In contrast, right now the algorithm is spread across multiple classes, each of which was designed to run the tasks in a specific manner. Changing how tasks are run would mean refactoring the classes and how they communicate with each other.

The question is: what is the best way to decouple and define the orchestration (coordination) of concurrent tasks, so that the orchestration can easily be changed in the future? Is my solution (directed acyclic graphs, fork-join, plus a domain-specific language) optimal or the way to go? Or are there other design patterns that do the same thing?
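The interpretation the DSL performs can be sketched without a bespoke language: represent the graph as nested seq/par nodes and interpret them with a thread pool. This is a rough Python equivalent of the first Kotlin graph above (an illustrative sketch, not the actual Kotlin implementation; the lambdas stand in for doTask1..4):

```python
from concurrent.futures import ThreadPoolExecutor

def sequential(*steps):
    return ("seq", steps)

def parallel(*steps):
    return ("par", steps)

def run(node, pool):
    """Interpret a nested seq/par graph; an exception in any task
    propagates out and fails the whole sync, as in point (2)."""
    if callable(node):
        return node()
    kind, steps = node
    if kind == "seq":
        for step in steps:
            run(step, pool)
    else:  # "par": fork all steps onto the pool, then join them
        futures = [pool.submit(run, step, pool) for step in steps]
        for f in futures:
            f.result()  # join; re-raises any task's exception

results = []
graph = sequential(
    lambda: results.append(1),
    parallel(lambda: results.append(2), lambda: results.append(3)),
    lambda: results.append(4),
)
with ThreadPoolExecutor(max_workers=4) as pool:
    run(graph, pool)
```

Because the graph is plain data, reordering tasks is an edit to one literal rather than a refactor of a class hierarchy, which is the decoupling the question is after.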

Concurrent connections calculated in Mongodb Atlas

I’m considering using MongoDB for a mobile backend database, and I’m slightly confused by something I haven’t found an answer to in the documentation. Are the concurrent connections for each tier in Atlas calculated on a per-cluster basis, or per shard within each cluster? How does sharding affect the limits on the number of read/write operations per second? I have a feeling I’m misunderstanding something crucial, since I haven’t found questions related to this online at all.

combinatorics – How to Find Total number of Possible Concurrent Schedules in Transactions?

My friend came up with this idea: let T be the total number of transactions, with n, m and k operations each. The total number of possible schedules will be (n + m + k)! / (n! m! k!), while the number of concurrent schedules will be (n + m + k)! / (n! m! k!) − T!.

Question: Consider 3 transactions T1, T2 and T3 having 2, 3 and 4 operations respectively. Find the number of concurrent schedules.

My answer: total no. of concurrent schedules = 9C2 * 7C3 * 4C4 = 1260.

I want to know which of us is correct. Should we also count serial schedules among the concurrent schedules, or is it something else? Please clear my doubt!
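The two counts involved can be checked numerically. For operation counts 2, 3 and 4, the multinomial coefficient 9!/(2!·3!·4!) equals 1260 (the same value as 9C2 · 7C3 · 4C4), and there are 3! = 6 serial schedules, so the disagreement comes down to whether "concurrent" is meant to include the serial ones. A quick sketch:

```python
from math import factorial

def total_schedules(ops):
    """Number of interleavings that preserve the order of operations
    within each transaction: (sum ops)! / prod(ops_i!)."""
    n = factorial(sum(ops))
    for k in ops:
        n //= factorial(k)
    return n

total = total_schedules([2, 3, 4])  # 9!/(2! * 3! * 4!)
serial = factorial(3)               # orderings of the 3 transactions
```

If serial schedules are excluded, the count would be total − serial = 1254.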

design – Concurrent Licensing Implementation

First time poster – apologies if I am missing key information.


I am working on a project composed of two applications which rely on a USB license key supporting concurrent licensing (e.g. only 10 users can access component x at the same time, across the 2 applications).

The USB license supports a mutex for accessing read/write functionality, but I am struggling to come up with a model that would allow the two applications to communicate with one another to determine how many users are logged in in total across the two applications.

In other words, if the USB dictates 10 users max, and application #1 has 2 users logged and application #2 has 8 users logged in, how can both applications know that an 11th user should not be allowed to be logged in?

The best I have come up with is a mutex guarding an integer that represents the current users. When a user attempts to log into either application, that integer is checked. If the user is allowed to log in (because the current number of users logged in is less than the number of users licensed), either application will record this by incrementing the counter by 1.

And when a user logs off the application, the application will decrement the counter by 1 (releasing a “seat”).

But my concern is: if an application encounters an exception, or a user force-quits the application, preventing it from decrementing the counter, how should a “ghost occupied seat” be handled? (E.g. 5 out of 10 users were logged in, the application crashed, and the counter still says 5 users are logged in when there are actually none.)
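A common way to handle the ghost-seat problem is to make seats leases rather than a bare counter: each logged-in client renews its seat periodically, and a seat whose lease expires is reclaimed automatically, so a crashed application frees its seats by simply going silent. A minimal sketch (class and method names are made up; in a real system this state would live somewhere both applications can reach, e.g. on the dongle or a shared file, under the existing mutex):

```python
import time

class SeatPool:
    """Concurrent-license seats modeled as expiring leases."""
    def __init__(self, max_seats, lease_seconds):
        self.max_seats = max_seats
        self.lease_seconds = lease_seconds
        self.leases = {}  # user_id -> lease expiry timestamp

    def _evict_expired(self, now):
        # Ghost seats clean themselves up: drop any lease past expiry.
        self.leases = {u: t for u, t in self.leases.items() if t > now}

    def acquire(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        self._evict_expired(now)
        if user_id not in self.leases and len(self.leases) >= self.max_seats:
            return False  # all seats held by live, recently-renewed leases
        self.leases[user_id] = now + self.lease_seconds
        return True

    def heartbeat(self, user_id, now=None):
        # Renewing is the same operation as acquiring for an existing holder.
        return self.acquire(user_id, now)

    def release(self, user_id):
        self.leases.pop(user_id, None)
```

The trade-off is that a crashed client's seat stays occupied for up to `lease_seconds` before it frees itself, so the lease length balances reclaim speed against heartbeat traffic.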

c – Is my concurrent tcp server correct?

So I’ve made a concurrent server that transfers files using socket programming and fork(), and a client that connects to a known localhost address and asks for a .txt file. The server then sends the file to the client, which saves it locally. This is for a project at college, and I’ve been getting feedback that the fork() usage in my server isn’t correct and shouldn’t work, but for me it works just fine? Please have a look.

Server:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <arpa/inet.h>

#define SIZE 1024

void send_file(FILE *fp, int sockfd){
  int n;
  char data[SIZE] = {0};

  while(fgets(data, SIZE, fp) != NULL) {
    if (send(sockfd, data, sizeof(data), 0) < 0) {
      perror("(-)Error in sending file.");
      exit(1);
    }
    bzero(data, SIZE);
  }
  return;
}

int main(){
  // ip address and port
  char *ip = "127.0.0.1";
  int port  = 8888;

  // variables and structures
  int e;
  int sockfd, new_sock;
  struct sockaddr_in server_addr, new_addr;
  socklen_t addr_size;
  char buffer[SIZE];
  pid_t childpid;

  FILE *fp;
  char *filepath;

  // 1. creating the socket
  sockfd = socket(AF_INET, SOCK_STREAM, 0);
  if(sockfd < 0){
    perror("(-)Error in socket");
    exit(1);
  }
  printf("(+)Server socket created.\n");

  // 2. writing the data in the structure
  server_addr.sin_family = AF_INET;
  server_addr.sin_addr.s_addr = inet_addr(ip);
  server_addr.sin_port = port;

  // 3. binding the ip address with the port
  addr_size = sizeof(server_addr);
  e = bind(sockfd, (struct sockaddr*)&server_addr, addr_size);
  if (e < 0){
    perror("(-)Error in bind");
    exit(1);
  }
  printf("(+)Binding successful.\n");

  // 4. listening to the clients
  e = listen(sockfd, 10);
  if (e < 0) {
    perror("(-)Error in listen");
    exit(1);
  }
  printf("(+)Listening...\n");

  // 5. accepting the client connection.

  while (1){
    addr_size = sizeof(new_addr);
    new_sock = accept(sockfd, (struct sockaddr*)&new_addr, &addr_size);
    if (new_sock < 0){
      perror("(-)Error in accept");
      exit(1);
    }
    printf("Connection accepted from %s:%d\n", inet_ntoa(new_addr.sin_addr), ntohs(new_addr.sin_port));

    childpid = fork();
    if (childpid == 0){
      close(sockfd);

      while(1){
        recv(new_sock, buffer, SIZE, 0);

        if (strcmp(buffer, "LIST") == 0){
          // send the list of filenames.
          bzero(buffer, SIZE);
          strcpy(buffer, "data.txt\nhello.txt");
          send(new_sock, buffer, SIZE, 0);
          bzero(buffer, SIZE);
        }

        else if (strcmp(buffer, "QUIT") == 0){
          // connection disconnected.
          printf("Connection disconnected from %s:%d\n", inet_ntoa(new_addr.sin_addr), ntohs(new_addr.sin_port));
          break;
        }

        else {
          // received the filename, send the file data.
          if (strcmp(buffer, "data.txt") == 0){
            filepath = "server_files/data.txt";
            fp = fopen(filepath, "r");
            send_file(fp, new_sock);
          }

          else if (strcmp(buffer, "hello.txt") == 0) {
            filepath = "server_files/hello.txt";
            fp = fopen(filepath, "r");
            send_file(fp, new_sock);
          }

          bzero(buffer, SIZE);
          send(new_sock, "", 1, 0);
          bzero(buffer, SIZE);
        }

      }
    }

  }

}

Client:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <arpa/inet.h>

#define SIZE 1024

void remove_char(char *s, int c){
  /* This function is used to remove a character from the character array. */
  int j, n = strlen(s);
  for (int i=j=0; i<n; i++)
    if (s[i] != c){
      s[j++] = s[i];
    }
  s[j] = '\0';
}

void write_file(char *filepath, int sockfd){
  int n;
  FILE *fp;
  char buffer[SIZE];

  fp = fopen(filepath, "w");
  if (fp == NULL) {
    perror("(-)Error in creating file");
    exit(1);
  }

  while (1) {
    n = recv(sockfd, buffer, SIZE, 0);
    if (n == 1) {
      break;
      return;
    }

    fprintf(fp, "%s", buffer);
    fflush(fp);
    bzero(buffer, SIZE);
  }
  return;
}

int main(){
  // ip address and port
  char *ip = "127.0.0.1";
  int port = 8888;

  // variables and structures,
  int e;
  int sockfd;
  struct sockaddr_in server_addr;
  char buffer[SIZE];
  char *filepath;

  // 1. creating the socket
  sockfd = socket(AF_INET, SOCK_STREAM, 0);
  if(sockfd < 0){
    perror("(-)Error in socket");
    exit(1);
  }
  printf("(+)Client socket created.\n");

  // 2. writing the data in the structure
  server_addr.sin_family = AF_INET;
  server_addr.sin_addr.s_addr = inet_addr(ip);
  server_addr.sin_port = port;

  // 3. connect to the server
  e = connect(sockfd, (struct sockaddr*)&server_addr, sizeof(server_addr));
  if(sockfd < 0){
    perror("(-)Error in connect");
    exit(1);
  }
  printf("(+)Connected to the server\n");

  printf("\n");
  printf("List of the commands.\n");
  printf("LIST - list all the files.\n");
  printf("LOAD - download the file.\n");
  printf("       LOAD <path>\n");
  printf("QUIT - disconnect from the server.\n");

  while(1){
    fflush(stdout);
    printf("> ");
    fgets(buffer, SIZE, stdin);

    if (strlen(buffer) > 1){
      char *token1 = strtok(buffer, " ");
      char *token2 = strtok(NULL, " ");
      remove_char(token1, '\n');

      if (strcmp(token1, "LIST") == 0) {
        // list all the file of the server.
        send(sockfd, buffer, SIZE, 0);
        recv(sockfd, buffer, SIZE, 0);
        printf("%s\n", buffer);
      }

      else if (strcmp(token1, "LOAD") == 0) {
        if (token2 == NULL) {
          printf("(-)Specify the correct filename.\n");
        } else {
          // save the data of the file received from the server.
          remove_char(token2, '\n');
          send(sockfd, token2, SIZE, 0);

          if (strcmp(token2, "data.txt") == 0){
            filepath = "client_files/data.txt";
            write_file(filepath, sockfd);
            printf("(+)File saved.\n");
          }

          else if (strcmp(token2, "hello.txt") == 0) {
            filepath = "client_files/hello.txt";
            write_file(filepath, sockfd);
            printf("(+)File saved.\n");
          }

          else {
            printf("Incorrect path\n");
          }
        }
      }

      else if (strcmp(token1, "QUIT") == 0) {
        // disconnect from the server.
        printf("(+)Disconnected from the server.\n");
        send(sockfd, token1, SIZE, 0);
        break;
      }

      else {
        printf("(-)Invalid command\n");
      }
    }

    bzero(buffer, SIZE);
  }

}
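On the fork() feedback: the usual complaints about a server shaped like this are that the child never exits after its while loop (so when a client disconnects, the child falls through into the parent's accept() loop and starts accepting connections itself), the parent never closes new_sock, and finished children are never reaped, accumulating zombies. The intended pattern is sketched below in Python, since the point is the pattern rather than the language (POSIX only; a real server would reap with a non-blocking waitpid in a SIGCHLD handler instead of blocking):

```python
import os

def serve_connection(fn):
    """Fork-per-connection skeleton: run fn in the child, make sure the
    child can never fall back into the parent's accept loop, and reap
    the child so it does not linger as a zombie."""
    pid = os.fork()
    if pid == 0:                # child process
        try:
            fn()                # handle one client here
        finally:
            os._exit(0)         # child must never return to parent code
    # Parent: would close the per-client socket here, then reap.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Because fork() copies the process, anything the child changes (buffers, counters, open files it closes) is invisible to the parent, which is also why each child can safely close the listening socket.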

node.js – Can I manage thousands of concurrent connections with a non-Node stack?

I’m developing a social network with Django (Python) + Postgres SQL. My site will have a chat feature so that users can communicate with each other in real time, and the communication will be only user-to-user (so there won’t be chat rooms with more than two people).

Let’s say that in the future my social network has ten million registered users (I know, I know, but for the sake of my question let’s assume this happens) and an average of 20,000 chats open between users at the same time, 24/7.

Assuming that I run my app in the cloud (DigitalOcean, AWS or whatever) behind a load balancer, can I expect my Django + SQL app to run seamlessly, or should I use Node.js + NoSQL to scale my app without so much pain as it grows?

I heard that the ME*N stack is meant for this kind of use case (real-time applications with thousands of concurrent connections), but I have already developed around 25% of my app in Django + Postgres, and I’m discouraged by the thought that I will probably have to redo everything from scratch. On the other hand, I’ve heard that some other big websites such as Instagram were developed using Django, so I don’t know what to think.

I’m aware that it’s possible to connect Django with MongoDB, but I still have the problem of managing the large number of concurrent real-time connections… Plus, I will use React heavily on the front end, and it might be easier to couple it with Node than with Django.

What is the best decision here?

google sheets – Synchronize time among several concurrent users

I’m using Google Sheets to come up with a quick Jeopardy-style game with an improvised buzzer. All players type the timestamp shortcut (Ctrl+Shift+0) into a cell, and the lowest value is presumed to have buzzed in first, but we’ve realized that some people are a few seconds, even minutes, ahead (or behind). We’ve worked out that it’s related to the time set in each person’s OS.

Is there a way to sync times through the Sheets app? Or at the very least determine who input first?

I’ve tried to remedy this with a series of nested array formulas with IF and COUNT statements, but to no avail. Any help would be appreciated. Otherwise I’m going to set my OS’s time to 1899 and smoke everyone.

mysql – Slow Database with concurrent queries, Table cache hitrate 0% [Newbie! Need help :)]

I’m a bit of a newbie when it comes to DBA work, so please take it easy!

I have gone through countless threads trying to improve the performance of my database, and I’m looking for someone who could give me some pointers on variable changes that might squeeze out more performance, or tell me whether I have misconfigured anything.

Here are the details:
Server specs: E3 1270 v6 with 32 GB RAM
OS: Windows Server 2016
MySQL version: 8.0.19 – MySQL Community Server – GPL

MYSQL Config
https://ybin.me/p/b99f994ad62c27ad#zvwS1+XGP6ZIZtKYdiMOySg+aYR85Qp3ciTvFr6q4mE=

Show Global Status (during high usage):

https://ybin.me/p/b74fae252e807749#F9+88tWsVo/hqHjKlnlnG1gEUwj7vlonLlYrUxXnThg=

SHOW ENGINE INNODB STATUS (During High Usage)

InnoDB      
=====================================
2020-05-17 16:57:16 0xa6c INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 7 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 103908 srv_active, 0 srv_shutdown, 300397 srv_idle
srv_master_thread log flush and writes: 0
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 42855
OS WAIT ARRAY INFO: signal count 37923
RW-shared spins 3053, rounds 3116, OS waits 59
RW-excl spins 6909, rounds 59455, OS waits 1320
RW-sx spins 99, rounds 1306, OS waits 25
Spin rounds per wait: 1.02 RW-shared, 8.61 RW-excl, 13.19 RW-sx
------------------------
LATEST DETECTED DEADLOCK
------------------------
2020-05-17 13:52:00 0x2904
*** (1) TRANSACTION:
TRANSACTION 12175899, ACTIVE 0 sec starting index read
mysql tables in use 3, locked 3
LOCK WAIT 4 lock struct(s), heap size 1136, 3 row lock(s)
MySQL thread id 3340, OS thread handle 3660, query id 2398670 localhost 127.0.0.1 root updating
UPDATE user_inventory SET count = 22 WHERE identifier = 'steam:11000013f8eef2a' AND item = 'bandage'

*** (1) HOLDS THE LOCK(S):
RECORD LOCKS space id 584 page no 84 n bits 336 index PRIMARY of table `essentialmode`.`user_inventory` trx id 12175899 lock_mode X locks rec but not gap
Record lock, heap no 69 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 4; hex 8039a49f; asc  9  ;;
 1: len 6; hex 000000b9849a; asc       ;;
 2: len 7; hex 02000001ad0151; asc       Q;;
 3: len 21; hex 737465616d3a313130303030313366386565663261; asc steam:11000013f8eef2a;;
 4: len 7; hex 62616e64616765; asc bandage;;
 5: len 4; hex 8000002b; asc    +;;


*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 584 page no 2445 n bits 792 index item of table `essentialmode`.`user_inventory` trx id 12175899 lock_mode X waiting
Record lock, heap no 561 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 0: len 7; hex 62616e64616765; asc bandage;;
 1: len 4; hex 800002e8; asc     ;;


*** (2) TRANSACTION:
TRANSACTION 12175886, ACTIVE 0 sec fetching rows
mysql tables in use 3, locked 3
LOCK WAIT 60 lock struct(s), heap size 8400, 2334 row lock(s)
MySQL thread id 3336, OS thread handle 10296, query id 2398648 localhost 127.0.0.1 root updating
UPDATE user_inventory SET count = 4 WHERE identifier = 'steam:11000010e1c050e' AND item = 'bandage'

*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 584 page no 2445 n bits 792 index item of table `essentialmode`.`user_inventory` trx id 12175886 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

Record lock, heap no 561 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 0: len 7; hex 62616e64616765; asc bandage;;
 1: len 4; hex 800002e8; asc     ;;


*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 584 page no 84 n bits 336 index PRIMARY of table `essentialmode`.`user_inventory` trx id 12175886 lock_mode X locks rec but not gap waiting
Record lock, heap no 69 PHYSICAL RECORD: n_fields 6; compact format; info bits 0
 0: len 4; hex 8039a49f; asc  9  ;;
 1: len 6; hex 000000b9849a; asc       ;;
 2: len 7; hex 02000001ad0151; asc       Q;;
 3: len 21; hex 737465616d3a313130303030313366386565663261; asc steam:11000013f8eef2a;;
 4: len 7; hex 62616e64616765; asc bandage;;
 5: len 4; hex 8000002b; asc    +;;

*** WE ROLL BACK TRANSACTION (1)
------------
TRANSACTIONS
------------
Trx id counter 12207888
Purge done for trx's n:o < 12207888 undo n:o < 0 state: running but idle
History list length 19
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 283609840283040, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 283609840282192, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 283609840281344, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
--------
FILE I/O
--------
I/O thread 0 state: wait Windows aio (insert buffer thread)
I/O thread 1 state: wait Windows aio (log thread)
I/O thread 2 state: wait Windows aio (read thread)
I/O thread 3 state: wait Windows aio (read thread)
I/O thread 4 state: wait Windows aio (read thread)
I/O thread 5 state: wait Windows aio (read thread)
I/O thread 6 state: wait Windows aio (read thread)
I/O thread 7 state: wait Windows aio (read thread)
I/O thread 8 state: wait Windows aio (read thread)
I/O thread 9 state: wait Windows aio (read thread)
I/O thread 10 state: wait Windows aio (read thread)
I/O thread 11 state: wait Windows aio (read thread)
I/O thread 12 state: wait Windows aio (read thread)
I/O thread 13 state: wait Windows aio (read thread)
I/O thread 14 state: wait Windows aio (read thread)
I/O thread 15 state: wait Windows aio (read thread)
I/O thread 16 state: wait Windows aio (read thread)
I/O thread 17 state: wait Windows aio (read thread)
I/O thread 18 state: wait Windows aio (read thread)
I/O thread 19 state: wait Windows aio (read thread)
I/O thread 20 state: wait Windows aio (read thread)
I/O thread 21 state: wait Windows aio (read thread)
I/O thread 22 state: wait Windows aio (read thread)
I/O thread 23 state: wait Windows aio (read thread)
I/O thread 24 state: wait Windows aio (read thread)
I/O thread 25 state: wait Windows aio (read thread)
I/O thread 26 state: wait Windows aio (read thread)
I/O thread 27 state: wait Windows aio (read thread)
I/O thread 28 state: wait Windows aio (read thread)
I/O thread 29 state: wait Windows aio (read thread)
I/O thread 30 state: wait Windows aio (read thread)
I/O thread 31 state: wait Windows aio (read thread)
I/O thread 32 state: wait Windows aio (read thread)
I/O thread 33 state: wait Windows aio (read thread)
I/O thread 34 state: wait Windows aio (read thread)
I/O thread 35 state: wait Windows aio (read thread)
I/O thread 36 state: wait Windows aio (read thread)
I/O thread 37 state: wait Windows aio (read thread)
I/O thread 38 state: wait Windows aio (read thread)
I/O thread 39 state: wait Windows aio (read thread)
I/O thread 40 state: wait Windows aio (read thread)
I/O thread 41 state: wait Windows aio (read thread)
I/O thread 42 state: wait Windows aio (read thread)
I/O thread 43 state: wait Windows aio (read thread)
I/O thread 44 state: wait Windows aio (read thread)
I/O thread 45 state: wait Windows aio (read thread)
I/O thread 46 state: wait Windows aio (read thread)
I/O thread 47 state: wait Windows aio (read thread)
I/O thread 48 state: wait Windows aio (read thread)
I/O thread 49 state: wait Windows aio (read thread)
I/O thread 50 state: wait Windows aio (read thread)
I/O thread 51 state: wait Windows aio (read thread)
I/O thread 52 state: wait Windows aio (read thread)
I/O thread 53 state: wait Windows aio (read thread)
I/O thread 54 state: wait Windows aio (read thread)
I/O thread 55 state: wait Windows aio (read thread)
I/O thread 56 state: wait Windows aio (read thread)
I/O thread 57 state: wait Windows aio (read thread)
I/O thread 58 state: wait Windows aio (read thread)
I/O thread 59 state: wait Windows aio (read thread)
I/O thread 60 state: wait Windows aio (read thread)
I/O thread 61 state: wait Windows aio (read thread)
I/O thread 62 state: wait Windows aio (read thread)
I/O thread 63 state: wait Windows aio (read thread)
I/O thread 64 state: wait Windows aio (read thread)
I/O thread 65 state: wait Windows aio (read thread)
I/O thread 66 state: wait Windows aio (write thread)
I/O thread 67 state: wait Windows aio (write thread)
I/O thread 68 state: wait Windows aio (write thread)
I/O thread 69 state: wait Windows aio (write thread)
I/O thread 70 state: wait Windows aio (write thread)
I/O thread 71 state: wait Windows aio (write thread)
I/O thread 72 state: wait Windows aio (write thread)
I/O thread 73 state: wait Windows aio (write thread)
I/O thread 74 state: wait Windows aio (write thread)
I/O thread 75 state: wait Windows aio (write thread)
I/O thread 76 state: wait Windows aio (write thread)
I/O thread 77 state: wait Windows aio (write thread)
I/O thread 78 state: wait Windows aio (write thread)
I/O thread 79 state: wait Windows aio (write thread)
I/O thread 80 state: wait Windows aio (write thread)
I/O thread 81 state: wait Windows aio (write thread)
I/O thread 82 state: wait Windows aio (write thread)
I/O thread 83 state: wait Windows aio (write thread)
I/O thread 84 state: wait Windows aio (write thread)
I/O thread 85 state: wait Windows aio (write thread)
I/O thread 86 state: wait Windows aio (write thread)
I/O thread 87 state: wait Windows aio (write thread)
I/O thread 88 state: wait Windows aio (write thread)
I/O thread 89 state: wait Windows aio (write thread)
I/O thread 90 state: wait Windows aio (write thread)
I/O thread 91 state: wait Windows aio (write thread)
I/O thread 92 state: wait Windows aio (write thread)
I/O thread 93 state: wait Windows aio (write thread)
I/O thread 94 state: wait Windows aio (write thread)
I/O thread 95 state: wait Windows aio (write thread)
I/O thread 96 state: wait Windows aio (write thread)
I/O thread 97 state: wait Windows aio (write thread)
I/O thread 98 state: wait Windows aio (write thread)
I/O thread 99 state: wait Windows aio (write thread)
I/O thread 100 state: wait Windows aio (write thread)
I/O thread 101 state: wait Windows aio (write thread)
I/O thread 102 state: wait Windows aio (write thread)
I/O thread 103 state: wait Windows aio (write thread)
I/O thread 104 state: wait Windows aio (write thread)
I/O thread 105 state: wait Windows aio (write thread)
I/O thread 106 state: wait Windows aio (write thread)
I/O thread 107 state: wait Windows aio (write thread)
I/O thread 108 state: wait Windows aio (write thread)
I/O thread 109 state: wait Windows aio (write thread)
I/O thread 110 state: wait Windows aio (write thread)
I/O thread 111 state: wait Windows aio (write thread)
I/O thread 112 state: wait Windows aio (write thread)
I/O thread 113 state: wait Windows aio (write thread)
I/O thread 114 state: wait Windows aio (write thread)
I/O thread 115 state: wait Windows aio (write thread)
I/O thread 116 state: wait Windows aio (write thread)
I/O thread 117 state: wait Windows aio (write thread)
I/O thread 118 state: wait Windows aio (write thread)
I/O thread 119 state: wait Windows aio (write thread)
I/O thread 120 state: wait Windows aio (write thread)
I/O thread 121 state: wait Windows aio (write thread)
I/O thread 122 state: wait Windows aio (write thread)
I/O thread 123 state: wait Windows aio (write thread)
I/O thread 124 state: wait Windows aio (write thread)
I/O thread 125 state: wait Windows aio (write thread)
I/O thread 126 state: wait Windows aio (write thread)
I/O thread 127 state: wait Windows aio (write thread)
I/O thread 128 state: wait Windows aio (write thread)
I/O thread 129 state: wait Windows aio (write thread)
Pending normal aio reads: (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) , aio writes: (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) ,
 ibuf aio reads:, log i/o's:, sync i/o's:
Pending flushes (fsync) log: 0; buffer pool: 0
25639 OS file reads, 5252794 OS file writes, 2142145 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 30, seg size 32, 78 merges
merged operations:
 insert 89, delete mark 3, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 4425293, node heap has 577 buffer(s)
Hash table size 4425293, node heap has 7 buffer(s)
Hash table size 4425293, node heap has 3 buffer(s)
Hash table size 4425293, node heap has 42 buffer(s)
Hash table size 4425293, node heap has 25 buffer(s)
Hash table size 4425293, node heap has 8 buffer(s)
Hash table size 4425293, node heap has 117 buffer(s)
Hash table size 4425293, node heap has 268 buffer(s)
63.71 hash searches/s, 4.29 non-hash searches/s
---
LOG
---
Log sequence number          8204611911
Log buffer assigned up to    8204611911
Log buffer completed up to   8204611911
Log written up to            8204611911
Log flushed up to            8204611911
Added dirty pages up to      8204611911
Pages flushed up to          8204611911
Last checkpoint at           8204611911
2113740 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 17582522368
Dictionary memory allocated 1618261
Buffer pool size   1048576
Free buffers       1010227
Database pages     37302
Old database pages 13779
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 1961, not young 12509
0.00 youngs/s, 0.00 non-youngs/s
Pages read 25245, created 12063, written 2341874
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 37302, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
----------------------
INDIVIDUAL BUFFER POOL INFO
----------------------
---BUFFER POOL 0
Buffer pool size   131072
Free buffers       126195
Database pages     4738
Old database pages 1766
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 47, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3177, created 1561, written 390400
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4738, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
---BUFFER POOL 1
Buffer pool size   131072
Free buffers       126202
Database pages     4747
Old database pages 1732
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 57, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3144, created 1603, written 377393
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4747, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
---BUFFER POOL 2
Buffer pool size   131072
Free buffers       126359
Database pages     4584
Old database pages 1682
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 1596, not young 12509
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3159, created 1431, written 286695
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4584, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
---BUFFER POOL 3
Buffer pool size   131072
Free buffers       126376
Database pages     4563
Old database pages 1679
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 23, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3060, created 1503, written 217282
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4563, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
---BUFFER POOL 4
Buffer pool size   131072
Free buffers       126427
Database pages     4515
Old database pages 1683
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 39, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3113, created 1402, written 296757
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4515, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
---BUFFER POOL 5
Buffer pool size   131072
Free buffers       126346
Database pages     4604
Old database pages 1681
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 48, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3120, created 1484, written 220370
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4604, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
---BUFFER POOL 6
Buffer pool size   131072
Free buffers       126053
Database pages     4883
Old database pages 1815
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 104, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3276, created 1607, written 259821
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4883, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
---BUFFER POOL 7
Buffer pool size   131072
Free buffers       126269
Database pages     4668
Old database pages 1741
Modified db pages  0
Pending reads      0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 47, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 3196, created 1472, written 293156
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 4668, unzip_LRU len: 0
I/O sum(0):cur(0), unzip sum(0):cur(0)
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
Process ID=9652, Main thread ID=00000000000028B4 , state=sleeping
Number of rows inserted 425605, updated 1126794, deleted 24758, read 3734292042
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
Number of system rows inserted 1841, updated 1056, deleted 1656, read 1362568
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 69.28 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

MySQLTuner tips (screenshot):
https://i.gyazo.com/107f94f2e69fafe58d6f2125ff3c9ccc.png

I would highly appreciate any input on how we can make this database happier!
If I'm missing any information, let me know and I'll be happy to provide it!
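One observation from the BUFFER POOL AND MEMORY section above, in case it helps frame answers: with `Buffer pool size 1048576` pages and `Free buffers 1010227`, the pool is almost entirely empty. A quick sanity check of those numbers (a sketch only; it assumes the default 16 KiB InnoDB page size, which matches the ~17.5 GB "Total large memory allocated" figure shown):

```python
# Figures taken directly from the SHOW ENGINE INNODB STATUS output above.
POOL_PAGES = 1_048_576   # "Buffer pool size" (in pages)
FREE_PAGES = 1_010_227   # "Free buffers"
PAGE_SIZE = 16 * 1024    # assumed default innodb_page_size of 16 KiB

used_pages = POOL_PAGES - FREE_PAGES
pool_bytes = POOL_PAGES * PAGE_SIZE
utilization_pct = used_pages / POOL_PAGES * 100

print(f"Buffer pool: {pool_bytes / 2**30:.1f} GiB configured")
print(f"Pages in use: {used_pages} ({utilization_pct:.1f}% of the pool)")
```

On these numbers the ~16 GiB pool is under 4% used, which suggests the working set is far smaller than `innodb_buffer_pool_size`; I'd welcome confirmation on whether shrinking it (or leaving headroom for growth) is the right call here.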