IIS 10 not showing worker process Current Requests

I have enabled the Request Monitor and Tracing role services on Windows Server 2016, but IIS is not showing me any worker process Current Requests when I request a web page.

How can I view the current requests coming into IIS for an app pool?
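As a side check while the UI view stays empty, IIS also ships a command-line way to list in-flight requests via appcmd. This is only a hedged suggestion for confirming whether requests are being captured at all; the /elapsed filter (minimum request duration in milliseconds) is optional, and very short-lived requests may finish before they can be listed:

```
%windir%\system32\inetsrv\appcmd.exe list requests
%windir%\system32\inetsrv\appcmd.exe list requests /elapsed:1000
```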

Why is my Python thread pool faster than my Go worker pool?

I have recently been digging into Golang concurrency, in particular the use of channels and worker pools. I wanted to compare performance between Go and Python (as many have done) because I have mostly read that Go outperforms Python with regard to concurrency. So I wrote two programs to scan an AWS account's S3 buckets and report back the total size. I ran this against an account that has more than 75 buckets totaling more than a few TB of data.

I was surprised to find that my Python implementation was nearly 2x faster than my Go implementation. This confuses me given all the benchmarks and literature I have read, and leads me to believe that I did not implement my Go code correctly. While watching both programs run, I noticed that the Go implementation only used up to 15% of my CPU while Python used more than 85%. Am I missing an important step with Go, or am I missing something in my implementation? Thanks in advance!

Python Code:

```python
'''
Get the size of all objects in all buckets in S3
'''
import os
import sys
import boto3
import concurrent.futures

def get_s3_bucket_sizes(aws_access_key_id, aws_secret_access_key, aws_session_token=None):

    s3client = boto3.client('s3')

    # Create the dictionary which will be indexed by the bucket's
    # name and has an S3Bucket object as its contents
    buckets = {}

    total_size = 0.0

    #
    # Start gathering data...
    #

    # Get all of the buckets in the account
    _buckets = s3client.list_buckets()

    cnt = 1
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
        future_bucket_to_scan = {executor.submit(get_bucket_objects, s3client, bucket): bucket for bucket in _buckets["Buckets"]}

        for future in concurrent.futures.as_completed(future_bucket_to_scan):
            bucket_object = future_bucket_to_scan[future]

            try:
                ret = future.result()
            except Exception as exc:
                print('ERROR: %s' % (str(exc)))
            else:
                total_size += ret

    print(total_size)

def get_bucket_objects(s3client, bucket):

    name = bucket["Name"]

    # Get all of the objects in the bucket
    lsbuckets = s3client.list_objects(Bucket=name)

    size = 0
    while True:
        if "Contents" not in lsbuckets.keys():
            break

        for content in lsbuckets["Contents"]:
            size += content["Size"]

        break

    return size

#
# Main
#
if __name__=='__main__':
    get_s3_bucket_sizes(os.environ.get("AWS_ACCESS_KEY_ID"), os.environ.get("AWS_SECRET_ACCESS_KEY"))
```

Go Code:

```go
package main

import (
    "fmt"
    "sync"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

type S3_Bucket_Response struct {
    bucket string
    count  int64
    size   int64
    err    error
}

type S3_Bucket_Request struct {
    bucket string
    region string
}

func get_bucket_objects_async(wg *sync.WaitGroup, requests chan S3_Bucket_Request, responses chan S3_Bucket_Response) {

    var size  int64
    var count int64

    for request := range requests {
        bucket := request.bucket
        region := request.region

        // Create a new response
        response := new(S3_Bucket_Response)
        response.bucket = bucket

        sess, err := session.NewSession(&aws.Config{
            Region: aws.String(region), 
        })

        s3conn := s3.New(sess)

        resp, err := s3conn.ListObjectsV2(&s3.ListObjectsV2Input{
            Bucket: aws.String(bucket),
        })

        if err != nil {
            if awsErr, ok := err.(awserr.Error); ok {

                switch awsErr.Code() {
                case "NoSuchBucket":
                    response.err = fmt.Errorf("Bucket: (%s) is NoSuchBucket.  Must be in process of deleting.", bucket)
                case "AccessDenied":
                    response.err = fmt.Errorf("Bucket: (%s) is AccessDenied.  You should really be running this with full Admin Privaleges", bucket)
                }
            } else {
                response.err = fmt.Errorf("Listing Objects Unhandled Error: %s ", err)
            }

            responses <- *response
            continue
        } 

        contents := resp.Contents
        size      = 0
        count     = 0

        for i:=0; i<len(contents); i++ {
            size  += *contents[i].Size
            count += 1
        }

        response.size  = size
        response.count = count

        responses <- *response
    }

    wg.Done()
}

func main() {

    var err  error
    var size int64
    var resp *s3.ListBucketsOutput
    var wg sync.WaitGroup

    sess, _ := session.NewSession()
    s3conn  := s3.New(sess)

    // Get account bucket listing
    if resp, err = s3conn.ListBuckets(&s3.ListBucketsInput{});err != nil {
        fmt.Println("Error listing buckets: %s", err)
        return 
    }

    buckets := resp.Buckets
    size = 0

    // Create the buffered channels
    requests  := make(chan S3_Bucket_Request , len(buckets))
    responses := make(chan S3_Bucket_Response, len(buckets))

    for i := range buckets {

        bucket := *buckets[i].Name

        resp2, err := s3conn.GetBucketLocation(&s3.GetBucketLocationInput{                                                           
            Bucket: aws.String(bucket),                                                                                                       
        })         

        if err != nil {
            fmt.Printf("Could not get bucket location for bucket (%s): %s", bucket, err)
            continue
        }

        wg.Add(1)
        go get_bucket_objects_async(&wg, requests, responses)

        region := "us-east-1"
        if resp2.LocationConstraint != nil {
            region = *resp2.LocationConstraint
        }

        request := new(S3_Bucket_Request)
        request.bucket = bucket
        request.region = region

        requests <- *request        
    }

    // Close requests channel and wait for responses
    close(requests)
    wg.Wait()
    close(responses)

    cnt := 1
    // Process the results as they come in
    for response := range responses {

        fmt.Printf("Bucket: (%s) complete!  Buckets remaining: %dn", response.bucket, len(buckets)-cnt)

        // Did the bucket request have errors?
        if response.err != nil {
            fmt.Println(response.err)
            continue
        }

        cnt  += 1
        size += response.size
    }

    fmt.Println(size)
    return 
}


```
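For reference, a minimal, self-contained sketch of the fixed-size worker-pool pattern the Go version seems to be aiming for is below: a small, fixed number of workers started before any jobs are queued, all draining one jobs channel. The AWS calls are replaced with a placeholder `fetchSize` function purely so the sketch compiles on its own; it illustrates the pattern and is not a drop-in fix for the code above.

```go
package main

import (
    "fmt"
    "sync"
)

// job and result stand in for S3_Bucket_Request / S3_Bucket_Response above.
type job struct{ bucket string }

type result struct {
    bucket string
    size   int64
}

// fetchSize is a hypothetical placeholder for the per-bucket ListObjectsV2 work.
func fetchSize(bucket string) int64 {
    return int64(len(bucket))
}

// worker drains the jobs channel until it is closed, then signals the WaitGroup.
func worker(wg *sync.WaitGroup, jobs <-chan job, results chan<- result) {
    defer wg.Done()
    for j := range jobs {
        results <- result{bucket: j.bucket, size: fetchSize(j.bucket)}
    }
}

func main() {
    buckets := []string{"bucket-a", "bucket-b", "bucket-c"}

    jobs := make(chan job, len(buckets))
    results := make(chan result, len(buckets))

    // Start a fixed number of workers up front, independent of the job count.
    const numWorkers = 4
    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go worker(&wg, jobs, results)
    }

    // Queue every job, then close the channel so the workers can exit.
    for _, b := range buckets {
        jobs <- job{bucket: b}
    }
    close(jobs)

    wg.Wait()
    close(results)

    var total int64
    for r := range results {
        total += r.size
    }
    fmt.Println(total)
}
```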

Are worker ants allowed to unionize?

No, they are slaves inside their own nests. In fact, some ants will raid another nest and carry the larvae back to their own nest to serve as slaves. It is cheaper to kidnap ants from another colony than it is to raise your own workers from eggs that you laid.

multithreading: Unity Job System: worker threads are not used in time rewind system

I'm trying to build a simple rewind mechanic. It works, but it slows down as the number of objects in the game increases, so I decided to give the Job System a try. My idea is to interpolate between positions and rotations for N game objects in parallel. It certainly runs faster than the single-threaded version, but when I look at the profiler, the worker threads are idle.

Here is the screenshot:

[profiler screenshot]

From what I can tell, this is not real multithreading but just context switching. Can anyone explain a little of what is going on? Thank you.

In the master-worker architecture, should a worker have only one master?

  1. In the master-worker architecture,

    • Should a worker be created by its own master?
    • Should a worker have only one master, and not more than one?
  2. When a client requests some resource from a server, is there a
    master-worker relationship between the client and the server?

  3. In a master-worker relationship, do a master and its workers also have a client-server relationship?

Thank you.


android: is it possible to get a LifecycleOwner from a worker (or related class)?

I know that Services, Activities and Fragments are LifecycleOwners, but I can't seem to find a way to get a LifecycleOwner from a Worker. Is that possible?

Context: I am migrating some tasks that used to run in Activities and Services so that a Worker runs them as part of the WorkManager framework. Some of this code provides a LifecycleOwner through "this", but in a Worker I no longer have those references.
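A Worker itself is not a LifecycleOwner. One hedged workaround, assuming a process-scoped lifecycle is acceptable for the migrated code, is to use ProcessLifecycleOwner from the androidx lifecycle-process artifact wherever the old code passed `this`:

```kotlin
import androidx.lifecycle.LifecycleOwner
import androidx.lifecycle.ProcessLifecycleOwner

// Sketch only: stands in for the Activity/Service "this" the old code passed
// around, using the lifecycle of the whole application process instead.
// Requires the androidx.lifecycle:lifecycle-process dependency.
fun workerLifecycleOwner(): LifecycleOwner = ProcessLifecycleOwner.get()
```

Whether a process-level lifecycle is a correct substitute depends on what the observed lifecycle-aware components expect, so treat this as a starting point rather than a drop-in answer.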

How to return the department head of the department with the worker who has served the longest?

Scenario / Problem
I have a table that stores each worker's ID, when they start working in a department, and when they leave that department. This can be seen in this SQL Fiddle: http://www.sqlfiddle.com/#!9/d0c982/1/0

The following code will create the table and insert the test data.

CREATE TABLE `Worker_Department` (
  `Worker_ID` Integer NOT NULL ,
  `Department_ID` Integer NOT NULL,
  `Position` Text NOT NULL,
`Start_Date` datetime NOT NULL,
`Leave_Date` datetime
);

  INSERT INTO `Worker_Department`(`Worker_ID`,`Department_ID`,`Position`,`Start_Date`)
  VALUES
  (10,100,'Leader','1980-11-11'),
  (20,200,'Leader','1980-11-11');


  INSERT INTO
`Worker_Department`(`Worker_ID`,`Department_ID`,`Position`,`Start_Date`,`Leave_Date`)
  VALUES
  (30,200,'Administrator','1980-11-11', '2014-02-02'),
  (40,200,'Receptionist','1975-11-11', '2014-02-02');

  INSERT INTO `Worker_Department`(`Worker_ID`,`Department_ID`,`Position`,`Start_Date`)
  VALUES
  (50,300,'Administrator','2014-02-02'),

  (30,100,'Administrator','2014-02-02');

Code (SQL):

I need to write a query that gets the longest-serving worker currently in a department (i.e., with a null Leave_Date). However, those with the "Leader" position are not eligible to be the longest-serving worker. From the results of this query, I then need to find the department leader of the department in which that longest-serving worker currently works.

Expected result
Looking at the test data provided:

  • Worker 40 cannot be the longest-serving employee since he no longer works for the company.
  • Workers 10 and 20 cannot be the longest-serving employee since they both hold the Leader position (however, this does make them eligible to be returned as the department leader of the longest-serving worker's department).
  • Worker 30 is the longest-serving employee because the difference between the current date and his oldest start date is greater than worker 50's.
  • Worker 30 currently works in department 100. This means that worker 10 is the department leader of the department with the longest-serving worker.

The result of the query would be something like:

| Worker_ID |
| --------- |
| 10        |

If this table were linked by a foreign key to another table, the SELECT could be modified to include details about that leader (name, phone, address).

Current progress
The following query will show the workers who are currently working (those without a leave date), the department in which they work, and their position in that department.

SELECT Worker_ID, Department_ID, Position FROM Worker_Department
   WHERE position != 'Leader' AND leave_date is null

Code (SQL):

The query below will return the difference between the current date and each worker's minimum start date. However, this includes workers who do not have a null Leave_Date (i.e., workers who are no longer current).

SELECT Worker_ID, DATEDIFF(Now(), Min(Start_Date)) as NowSubMin FROM Worker_Department
WHERE position != 'Leader'
GROUP BY Worker_ID

Code (SQL):

Finally, both queries have been combined into the following query, which is designed to return the currently working workers together with the difference between the current date and their first start date.

SELECT Worker_ID, DATEDIFF(Now(), Min(Start_Date)) as NowSubMin FROM Worker_Department
WHERE position != 'Leader'
GROUP BY Worker_ID

HAVING Worker_ID IN (SELECT Worker_ID FROM Worker_Department
   WHERE position != 'Leader' AND leave_date is null)

Code (SQL):
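Building on the progress queries above, one possible way to put the pieces together (a sketch only, assuming MySQL as in the fiddle and that a worker is currently in at most one department) is to find the current non-leader with the earliest first start date, take that worker's current department, and then return that department's current leader:

```sql
-- Sketch: department leader of the department where the longest-serving
-- current non-leader works. Assumes MySQL and one current department per worker.
SELECT leader.Worker_ID
FROM Worker_Department leader
WHERE leader.Position = 'Leader'
  AND leader.Leave_Date IS NULL
  AND leader.Department_ID = (
      -- current department of the longest-serving current non-leader
      SELECT cur.Department_ID
      FROM Worker_Department cur
      WHERE cur.Leave_Date IS NULL
        AND cur.Position != 'Leader'
        AND cur.Worker_ID = (
            -- current non-leader with the earliest first Start_Date
            SELECT w.Worker_ID
            FROM Worker_Department w
            WHERE w.Position != 'Leader'
              AND w.Worker_ID IN (SELECT Worker_ID
                                  FROM Worker_Department
                                  WHERE Position != 'Leader'
                                    AND Leave_Date IS NULL)
            GROUP BY w.Worker_ID
            ORDER BY MIN(w.Start_Date)
            LIMIT 1
        )
      LIMIT 1
  );
```

Against the test data above this should return 10, matching the expected result, but it has not been checked against edge cases such as ties on the earliest start date.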

ubuntu – Nginx reaches worker connection limit

I am managing an Nginx server that occasionally drops a lot of connections. After inspecting the logs I can see that the worker_connections limit is being reached, e.g.:

2019/12/06 08:37:09 [alert] 14517#14517: 25000 worker_connections are not enough

I can see from my metrics that Nginx starts dropping connections after reaching around 20k active connections, despite the fact that there are 32 workers (worker_processes auto), each of which should be able to handle 25,000 connections (if my understanding is correct).

Things I have tried:

  • Increased the worker_connections limit
  • Turned multi_accept on
  • Enabled epoll: I could see that most of the load was handled by 2-3 worker processes while the others were practically unused. This did not seem to really help the situation, since the PID shown in the error log is almost always the same process.

I am really perplexed as to what else to try; if anyone has any suggestions on what might be causing this, it would be greatly appreciated.

Other relevant information:

  • 32 CPUs, 64 GB of memory
  • Ubuntu 18.04.3
  • The connections are being proxied upstream
  • Nginx handles an average of around 400 RPS
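
For reference, a minimal sketch of where the settings discussed above live in nginx.conf; the values are simply the ones described in the question, not a recommendation:

```nginx
# Sketch of the relevant nginx.conf settings from the question.
worker_processes auto;            # resolves to 32 workers on this 32-CPU box

events {
    worker_connections 25000;     # per-worker limit from the error message
    multi_accept on;              # accept as many new connections as possible at once
    use epoll;                    # explicit event method on Linux
}
```

Note that for proxied traffic each client connection also consumes an upstream connection, so the effective per-worker client capacity is lower than worker_connections alone would suggest.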