fiber – Pulling multiple cables through conduit – pull one at a time, or all at once?

I’ve got a length of 2″ conduit (225 feet) that I need to pull several lengths of Ethernet and fiber cables through. There’s already one length of fiber running down the conduit. What’s the best way to get the new ones through the conduit? I haven’t been able to find any specific guidance. I’ve pulled single lengths of cable through conduit before, but haven’t had to do several in the same pipe. So, big bucket of cable lubricant in hand, do I:

  • A: Attach all the cables to the same pull head and pull multiple cables at the same time as a bundle.
  • B: Pull one cable and a pull string/rope together first, then repeat for each additional cable (so 5 cables = 5 pulls).

Can I use flow to update multiple items?

Is it possible to use Flow to update multiple items?
I would like it to loop through the items and update all of the fields.

Thanks in advance

jwt – Multiple user-specific APIs with a single Authentication Server

I'm currently in need of some clarification on my overall authentication strategy. First I will describe the use case, and then the questions that arise for me.

Use Case

I want to have a single Docker container, consisting of an API and a database, for each group of users. So for the sake of this example, let's say we have three Docker containers A, B and C, and three users for every container (A1, A2, A3, B1, B2, …). Each of these users should only have access to their corresponding API, so e.g. user B1 can only read/write with API B.

The APIs will be consumed by multiple SPA front-end apps. To avoid having to log in to each app separately, I want to implement an SSO flow with a single authentication server. My thought was to let the user log in with the authentication server, which responds with a JWT (access and refresh tokens) carrying the unique username in the payload. On every request to an API gateway that routes the user to the correct API (e.g. A1 -> A), the user sends the access token. The API then makes a request to the authentication server to verify the JWT. If that's successful, the API can log in the user under the specified username (because it also has a database entry for this unique user), for example via a remote-user backend. This way, even if the routing went wrong, the access token would still be verified by the authentication server, but user A could not be logged into API B, because there's no user with that name in API B's database. The remote-user header also could not be tampered with, because any malicious request that sets this header directly would have it prefixed with HTTP_ and thus ignored.
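To make this flow concrete, here is a minimal Python sketch of the verification step each API could perform per request. The verify endpoint URL, its payload shape, and the local_users lookup are illustrative assumptions on my part, not a prescribed API:

import requests
from typing import Optional

# Hypothetical verification endpoint on the single authentication server.
AUTH_SERVER_VERIFY_URL = "https://auth.example.com/verify"

def authenticate_request(access_token: str, local_users: set) -> Optional[str]:
    """Verify the access token with the auth server, then check that the
    username from the token payload exists in *this* API's own database."""
    resp = requests.post(
        AUTH_SERVER_VERIFY_URL,
        json={"token": access_token},
        timeout=5,
    )
    if resp.status_code != 200:
        return None  # token invalid, expired, or tampered with

    username = resp.json().get("username")
    # A mis-routed request (e.g. user A1 hitting API B) fails here: the token
    # is valid globally, but API B has no row for that username.
    if username not in local_users:
        return None
    return username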

Questions

  1. Is this even a secure/feasible authentication/authorization flow?

  2. Is there any default strategy for a use case like this (OIDC?)?

  3. How do I safely store the access/refresh tokens? The refresh token in an httpOnly cookie, and the short-lived access token in memory in the browser, with a Web Worker or with private static fields?

  4. Any possible obvious attack vectors?

python 3.x – Multiple HTTP requests with threading and queues

I'm working on an I/O-bound application where I want to learn how to use threading properly, as well as queues, to minimize CPU and RAM usage. My plan was to use threads and queues together, and this is what I have done so far:

# System modules
import time
from queue import Queue
from threading import Thread
from loguru import logger
import requests

feed_urls = (
    'http://www.foxnews.com/',
    'http://www.cnn.com/',
    'http://europe.wsj.com/',
)

# Set up some global variables
num_fetch_threads = 5
queue_exploring = Queue()
queue_monitoring = Queue()
save_length = {}


def get_requests(url):
    """Fetch the page and return the length of its body text."""
    return len(requests.get(url).text)


def send_notification(queue: Queue):
    """Worker thread: consume changed URLs and send a notification for each."""
    while True:
        url = queue.get()
        logger.info(f'Sending notifications: {url}')
        # FIXME Send notifications
        queue.task_done()


def explore_links(queue: Queue):
    """This is the worker thread function.
    It processes items in the queue one after
    another.  These daemon threads go into an
    infinite loop, and only exit when
    the main thread ends.
    """
    while True:
        url = queue.get()
        get_data_length = get_requests(url)

        if save_length[url] != get_data_length:
            logger.info(f"New changes on the webpage found! -> {url}")
            save_length[url] = get_data_length  # remember the new baseline
            queue_monitoring.put(url)
        else:
            logger.info(f"No new changes in the page found! -> {url}")

        time.sleep(60)

        # Add back to queue
        queue.put(url)
        queue.task_done()


for i in range(num_fetch_threads):
    worker_one = Thread(target=explore_links, args=(queue_exploring,), daemon=True)
    worker_two = Thread(target=send_notification, args=(queue_monitoring,), daemon=True)

    worker_one.start()
    worker_two.start()


def main():
    logger.info('*** Main thread waiting ***')
    for url in feed_urls:
        response = get_requests(url)
        save_length[url] = response  # seed the baseline length for each feed
        queue_exploring.put(url)

    queue_exploring.join()
    queue_monitoring.join()
    logger.info('*** Done ***')


if __name__ == '__main__':
    main()

The idea is that we loop forever and check whether a webpage has changed; if it has, we want to be notified that there has been a change. Simple as that. I use multiple threads as well as two queues: one for exploring, to see whether the response for a URL has changed, and a second one for sending notifications.
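One behavioural note on the code above: because explore_links puts every URL back onto the queue, queue_exploring.join() in main() never returns, which is exactly the intended forever loop. If a clean shutdown were ever wanted instead, a stop event is a common pattern. A minimal sketch, assuming a stop_event flag that is not part of the original code:

import threading
from queue import Empty, Queue

stop_event = threading.Event()

def explore_links_stoppable(queue: Queue):
    """Same worker loop as explore_links, but it exits once stop_event is set."""
    while not stop_event.is_set():
        try:
            # Time out so the stop flag is re-checked regularly.
            url = queue.get(timeout=1)
        except Empty:
            continue
        # ... fetch, compare, and re-queue the URL as before ...
        queue.task_done()

# From the main thread, when it is time to shut down:
# stop_event.set()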

google – Is it better for SEO to use generic meta data that is the same for multiple pages or to omit the meta data?

Our company has developed software to create and publish posts automatically on a WordPress site. Since the posts are published automatically and the images are also saved automatically, is it good SEO practice to use generic metadata for each post?

For example, if I have this data for every image:

        title: 'My awesome image',
        alt_text: 'an image of something awesome',
        caption: 'This is the caption text',
        description: 'More explanatory information'.

Is this wrong? What would be the right thing to do if I don’t have the budget to write custom data for each image?
Is it better for SEO if we don’t post generic data and instead remove the fields?

node.js – Server-side conversion of multiple documents (txt, doc, pdf, png, xls) into PDF

I am trying to create a service that converts an array of uploaded data-URL-encoded files into a single PDF file (and saves it to a temp folder).

But I have not found any working solution for converting data-URL files into PDF. I have already explored PDFKit; it is nice, but it seems it cannot convert existing files (its main goal is to create new ones).

I am running service on CentOS and using Node.js.

java – Concurrency: Server that deduplicates 9-digit numbers from multiple clients and writes them to a log

Problem Statement:

Write a server (“Application”) in Java that opens a socket and restricts input to at most 5 concurrent clients. Clients will connect to the Application and write any number of 9-digit numbers, and then close the connection. The Application must write a de-duplicated list of these numbers to a log file in no particular order.

Primary Considerations

        • The Application should work correctly as defined below in Requirements.
        • The overall structure of the Application should be simple.
        • The code of the Application should be descriptive and easy to read, and the build method and runtime parameters must be well-described and work.
        • The design should be resilient with regard to data loss.
        • The Application should be optimized for maximum throughput, weighed along with the other Primary Considerations and Requirements below.

Requirements

        1. The Application must accept input from at most 5 concurrent clients on TCP/IP port 4000.
        2. Input lines presented to the Application via its socket must either be composed of exactly nine decimal digits (e.g.: 314159265 or 007007009) immediately followed by a server-native newline sequence; or a termination sequence as detailed in #9, below.
        3. Numbers presented to the Application must include leading zeros as necessary to ensure they are each 9 decimal digits.
        4. The log file, to be named "numbers.log", must be created anew and/or cleared when the Application starts.
        5. Only numbers may be written to the log file. Each number must be followed by a server-native newline sequence.
        6. No duplicate numbers may be written to the log file.
        7. Any data that does not conform to a valid line of input should be discarded and the client connection terminated immediately and without comment.
        8. Every 10 seconds, the Application must print a report to standard output:
                • The difference since the last report of the count of new unique numbers that have been received.
                • The difference since the last report of the count of new duplicate numbers that have been received.
                • The total number of unique numbers received for this run of the Application.
                • Example text for #8: Received 50 unique numbers, 2 duplicates. Unique total: 567231
        9. If any connected client writes a single line with only the word "terminate" followed by a server-native newline sequence, the Application must disconnect all clients and perform a clean shutdown as quickly as possible.
        10. Clearly state all of the assumptions you made in completing the Application along with any instructions on how to set up and run it in a README file.

Notes

        • You may write tests at your own discretion. Tests are useful to ensure your Application passes Primary Consideration A.
        • You may use common libraries in your project such as Apache Commons and Google Guava, particularly if their use helps improve Application simplicity and readability. However the use of large frameworks, such as Akka, is prohibited.
        • Your Application may not for any part of its operation use or require the use of external systems, for example Apache Kafka or Redis.
        • At your discretion, leading zeroes present in the input may be stripped—or not used—when writing output to the log or console.
        • Robust implementations of the Application typically handle more than 2M numbers per 10-second reporting period on a modern MacBook Pro laptop (e.g.: 16 GiB of RAM and a 2.5 GHz Intel i7 processor).

My implementation:

Application.java:

public class Application {
    static final int PORT = 4000;

    public static void main(String[] args) {
        Server server = new Server();
        System.out.println("Starting server");
        server.start(PORT);
    }
}

Server.java:

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.regex.Pattern;

public class Server {
    private static final int MAX_CLIENTS = 5;
    private ServerSocket serverSocket;
    private ExecutorService executorService = Executors.newFixedThreadPool(MAX_CLIENTS);

    private FileWriter fw;
    private BufferedWriter bw;
    BlockingQueue<Integer> blockingQueue = new LinkedBlockingQueue<>();


    public void start(int port) {
        try {
            serverSocket = new ServerSocket(port);
        } catch (IOException e) {
            e.printStackTrace();
        }

        try {
            fw = new FileWriter("numbers.log", false);
        } catch (IOException e) {
            e.printStackTrace();
        }
        bw = new BufferedWriter(fw);
        LogWriterTask logWriterTask = new LogWriterTask(blockingQueue, bw);
        logWriterTask.start();


        while (true) {
            try {
                ClientHandler clientHandler = new ClientHandler(serverSocket.accept(), blockingQueue);
                executorService.submit(clientHandler);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private class ClientHandler extends Thread {
        private Socket clientSocket;
        private BufferedReader in;
        BlockingQueue<Integer> blockingQueue;
        private Pattern numberPattern = Pattern.compile("\\d{9}");

        public ClientHandler(Socket socket, BlockingQueue<Integer> blockingQueue) {
            this.clientSocket = socket;
            this.blockingQueue = blockingQueue;
        }

        @Override
        public void run() {

            try {
                in = new BufferedReader(
                        new InputStreamReader(clientSocket.getInputStream()));

                String inputLine = "";

                System.out.println("Client connection started");
                while (true) {

                    try {
                        inputLine = in.readLine();
                        if (inputLine == null) {
                            break;
                        }
                        if (inputLine.equals("terminate")) {    //Disconnect all clients
                            System.exit(0);
                        };
                    } catch (SocketException e) {
                        e.printStackTrace();
                        stopClient();
                        break;
                    }

                    int num = processNumber(inputLine);
                    if (num == -1) {
                        stopClient();
                        break;
                    }
                    blockingQueue.add(num);
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                stopClient();
            }
        }


        public void stopClient() {
            try {
                in.close();
                clientSocket.close();
                System.out.println("Client connection closed");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        private int processNumber(String inputLine) {
            if (!numberPattern.matcher(inputLine).matches()) return -1; //Invalid format, terminate client

            int num;

            try {
                num = Integer.parseInt(inputLine);
            } catch(NumberFormatException e) {  //Invalid format, terminate client
                num = -1;
            }

            return num;
        }
    }


}

LogWriterTask.java:

import java.io.BufferedWriter;
import java.io.IOException;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.BlockingQueue;

public class LogWriterTask extends Thread {
    private final static int MAX_NINE_DIGIT_RANGE = 1000000000;
    private final static int SUMMARY_WAIT_PERIOD = 10000;
    private int uniqueCount = 0;
    private int duplicateCount = 0;
    private int uniqueTotal = 0;
    private BlockingQueue<Integer> blockingQueue;
    private BufferedWriter bw;
    private int[] uniqueNums = new int[MAX_NINE_DIGIT_RANGE];

    public LogWriterTask(BlockingQueue<Integer> blockingQueue, BufferedWriter bw) {
        this.bw = bw;
        this.blockingQueue = blockingQueue;
        Timer timer = new Timer();
        timer.schedule(new SummaryTask(), 0, SUMMARY_WAIT_PERIOD);
    }

    @Override
    public void run() {
        while (true) {
            while (!blockingQueue.isEmpty()) {
                int num = blockingQueue.poll();
                if (uniqueNums[num] == 0) {
                    try {
                        uniqueCount++;
                        uniqueTotal++;
                        bw.write(String.format("%09d", num));
                        bw.newLine();
                        bw.flush();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                } else {
                    duplicateCount++;
                }

                uniqueNums[num]++;
            }
        }
    }

    class SummaryTask extends TimerTask {
        @Override
        public void run() {
            System.out.printf("Received %d unique numbers, %d duplicates. Unique total: %dn", uniqueCount, duplicateCount, uniqueTotal);
            uniqueCount = 0;
            duplicateCount = 0;
        }
    }
}

Client.java:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class Client {
    private Socket clientSocket;
    private PrintWriter out;
    private BufferedReader in;

    public void startConnection(String ip, int port) {
        try {
            clientSocket = new Socket(ip, port);
            out = new PrintWriter(clientSocket.getOutputStream(), true);
            in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        } catch (IOException e) {
            System.err.println("Error when initializing connection");
        }

    }

    public void sendMessage(String msg) {
        try {
            out.println(msg);
            return;
        } catch (Exception e) {
            return;
        }
    }

    public void stopConnection() {
        try {
            in.close();
            out.close();
            clientSocket.close();
        } catch (IOException e) {
            System.err.println("error when closing");
        }

    }
}

sql server – Parameter Sniffing and Multiple Plans in Cache

We have a multi-tenanted database. FirmID is the partition key and we have lots of different firms.

I am running into a parameter sniffing issue and I am having a heck of a time getting around it.

I would rather not use any OPTION() hints on the query.

My latest thought was to change the name of the parameter I am using for the firm. In the snippet below you will see that instead of using @FirmID I called it @Firm611, where 611 is the actual ID of the firm. This gives me a unique query for every firm.

select
    c.ID [_cid],
    c.Name [Name]
from vwClaims c with(nolock)
where c.FirmID=@Firm611
and (c.Name is not null and c.Name!='')

select
    c.ID [_cid],
    c.Name [Name]
from vwClaims c with(nolock)
where c.FirmID=@Firm625
and (c.Name is not null and c.Name!='')

After running Brent Ozar’s sp_BlitzCache, I found that it is just compiling down to the same query and causing duplicate cache entries:

[Screenshot: query plans associated with the same query hash]

My question is: am I reading that result right? Even though I am changing the parameter name, is it really still using the same plan and still parameter sniffing?

views – How do I apply a sort criterion to multiple displays without erasing each display's individual criteria?

Sorry if this is a stupid question, and for my English, but here I go:

I have a view with multiple displays (don't ask me why). I have some sort criteria applied to all displays, and one applied individually to each display. Now I have to add one more sort criterion to all displays, but if I select "apply to all displays", it erases the individual one. My question is: is there a way to work around this? It would be a huge burden to apply the criteria to each display individually again.

P.S.: I'm just an end user; I don't have access to the site's code.