Difference in WiFi performance across Windows 10 Dell machines, work and personal

I’d like to know where to start in understanding why my work-issued Dell Latitude 5280 Windows 10 machine seems to be much “happier” connecting to my home WiFi than my newer Dell Vostro machine, also running Windows 10.

To clarify what I mean by “happier”: my personal Dell machine often struggles to connect to my WiFi range extender, while my work machine (which runs a corporate VPN all the time) keeps a strong connection to my WiFi at a much greater distance. The work machine mostly “just works”, whereas I’ve had to troubleshoot and reset the WiFi hardware on my personal machine.

It would be great to have a more reliable WiFi connection on my personal laptop, so I’m keen to know where to start.

design – How to call a list of REST APIs at huge volume for optimal performance?

I have written a machine-learning-based microservice in Python, using Flask for the REST endpoints. It needs to pull data by calling around 4 REST APIs, and the calls to two of those APIs will be at huge volume every day.

1st API – Max 1,000

2nd API – Max 250,000

3rd API – Max 180,000

4th API – Max 25,000

How should I call these APIs to get optimal performance? In the worst case, I need to finish all the work within a couple of hours.

I was thinking of making the calls with a fixed-size thread pool, like this:

from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=10) as api_caller:
    for a_call in api_call_list:
        api_caller.submit(api_call_func, a_call)
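The fixed-size pool above fires off work but never collects results or errors. A minimal sketch of one way to extend it, using `as_completed` from the same module (`call_api` is a placeholder for the real HTTP request):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def call_api(url):
    # Placeholder for the real HTTP request (e.g. a requests.get(url) call).
    return len(url)

def fetch_all(urls, max_workers=10):
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Map each future back to its input so results can be matched up later.
        futures = {pool.submit(call_api, u): u for u in urls}
        for fut in as_completed(futures):
            url = futures[fut]
            try:
                results[url] = fut.result()
            except Exception as exc:
                results[url] = exc  # keep the error instead of losing the whole batch
    return results
```

Since the calls are I/O-bound, a larger pool (or asyncio with an async HTTP client) is worth benchmarking; the right worker count ultimately depends on the APIs' rate limits.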

What are some good SQL Server Performance Metrics from System Views?

From a memory perspective, I guess it should be memory reservation, ballooning, and swapping.

From a CPU perspective, I think CPU ready time.

There are some additional metrics for storage, like disk latencies and usage.

It would also be great to monitor the host in addition to the guest machines.

I do not recall exactly, but Jonathan from the sqlskills or sqlperformance blogs has some of these listed as well.

google play store – Is rooting a 4-year-old Android phone a good idea? What other steps can improve performance?

I have an old Archos 50 Platinum 4G (1 GB RAM, 4 cores, 8 GB storage, Android 6), bought for about 80 euros. In my mind, it’s still a good piece of hardware, in absolute terms.

Unfortunately, it’s becoming slow and almost unusable. I’ve disabled many Google apps, reduced the Gmail storage cache, trimmed my apps down to the minimum I actually use, and disabled UI animations in developer mode, but it’s still slow.

I’m really unable to understand what’s wrong with it. I take care of it, and I can’t do a factory reset every time it feels slow (it takes a lot of time to set things up, add apps, etc.). I suspect apps are getting larger and bloating my phone.

Booting the phone also takes two full minutes with the message “optimizing apps”. I have not found any fix.

I’m a developer, so I know my way around Unix; I’m just not experienced in rooting Android phones, so I’m not sure it’s worth it:

  • How easy would it be to install a custom ROM on this phone? Is Android the only option? Are there performance-oriented OSes that work on Android phones?

  • Are popular apps like WhatsApp, Tinder, Gmail, etc. supported, given that the Play Store is not available on custom ROMs?

  • Would I get a performance benefit? Since many apps have a mobile website version, I guess that would be enough.

Generally, I view this as an attempt to work around planned obsolescence, but I’m afraid it’s not really possible. I’m just asking here to make sure there are no good options. What do you think?

performance – Custom function reducing a request mixed with JSON strings and array fields needs improvement

I am working with this custom script to reduce $_POST data that is a mix of JSON data and arrays:

function buildVirtualData($data)
{
    if (is_array($data)) {
        $temp = [];
        foreach ($data as $key => $value) {
            $temp[$key] = buildVirtualData($value);
        }
        return reduArray($temp);
    } elseif (valJson($data)) {
        $json_obj = json_decode($data, true);
        foreach ($json_obj as $key1 => $json_sub_obj) {
            foreach ($json_sub_obj as $key2 => $value2) {
                if (is_array($value2)) {
                    $temp = [];
                    foreach ($value2 as $keyof => $valueof) {
                        $temp[$keyof] = buildVirtualData($valueof);
                    }
                    $json_obj[$key1][$key2] = $temp;
                } else {
                    if ('true' === $value2 || true === $value2) {
                        $json_obj[$key1][$key2] = true;
                    } elseif ('false' === $value2 || false === $value2) {
                        $json_obj[$key1][$key2] = false;
                    } else {
                        $json_obj[$key1][$key2] = $value2;
                    }
                }
            }
        }
        return reduArray($json_obj);
    } else {
        if ('true' === $data || true === $data) {
            $data = true;
        } elseif ('false' === $data || false === $data) {
            $data = false;
        }
        return $data;
    }
}

function valJson($var)
{
    if (!is_array($var)) {
        return (json_decode($var) != null) &&
            (is_object(json_decode($var)) || is_array(json_decode($var)));
    }
    return false;
}

function reduArray($array)
{
    $result = $array;
    if (is_array($array)) {
        $check = true;
        foreach ($array as $key => $value) {
            if (!is_array($value)) {
                $check = false;
            }
        }
        if ($check) {
            $result = array_reduce($array, 'array_merge', []);
        }
    }
    return $result;
}

var_dump($_POST);

Can someone help me simplify or improve this script?

performance tuning – Tricks to optimize a maximization problem

I am dealing with the following piece of code, to study a problem in quantum mechanics:

L[n_] := KirchhoffMatrix[CompleteGraph[n]];

c[n_] := 1;
w[n_, p_] := Table[KroneckerDelta[k, p], {k, 1, n}];
P[n_, p_] := KroneckerProduct[w[n, p], w[n, p]];
s[n_] := 1/Sqrt[n]*Table[1, {k, 1, n}];
Ps[n_] := KroneckerProduct[s[n], s[n]];

H[n_, \[Lambda]_] := \[Lambda] L[n] - P[n, c[n]];
U[n_, t_, \[Lambda]_] := MatrixExp[-I*t*H[n, \[Lambda]]];
\[Psi][n_, t_, \[Lambda]_] := U[n, t, \[Lambda]].s[n];
prs[n_, t_, \[Lambda]_] := Abs[w[n, c[n]].\[Psi][n, t, \[Lambda]]]^2;

Prob[n_] := NMaximize[prs[n, t, \[Lambda]], {t, \[Lambda]}][[1]]

The NMaximize function takes quite a while on my machine to compute $\text{Prob}(n)$, so I would be interested in any suggestions to increase the efficiency of the code – taking into account that the input graph could be a different one. Probably the hardest part to compute is the matrix exponential of $H$, but I’m not aware of any way to optimize it.
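One standard trick for exactly this bottleneck (shown here as a NumPy sketch; the same idea carries over to Mathematica via `Eigensystem`): since $H$ is Hermitian for real $\lambda$, diagonalize it once per $\lambda$, after which the matrix exponential for every $t$ reduces to exponentiating the eigenvalues. The function name below is illustrative:

```python
import numpy as np

def evolve(H, s, ts):
    # H = V diag(w) V^H, so exp(-i t H) s = V exp(-i t w) (V^H s).
    w, V = np.linalg.eigh(H)       # one diagonalization, reused for every t
    amps = V.conj().T @ s          # initial state expressed in the eigenbasis
    return [V @ (np.exp(-1j * t * w) * amps) for t in ts]
```

Inside an optimization over $t$ at fixed $\lambda$, this turns each objective evaluation into a vector operation instead of a fresh dense `MatrixExp`.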

performance – Python: avoiding a loop with NumPy

The problem is this:

I have a numpy array called “contour” of N coordinates (x, y), so of dimension (N, 2).

For each point in this array, I would like to create a square centered at that point and perform a test on the square formed.

I wanted to know if you have a way to solve my problem without using a for loop.

My version with a for loop:

def neighbour(point, mask, n):  # Create the square around this point and count the number of neighbours.
    mask = mask[point[0] - int(n/2) : point[0] + int(n/2) + 1, point[1] - int(n/2) : point[1] + int(n/2) + 1]
    return n**2 - np.count_nonzero(mask)

def max_neighbour(contour, mask=maske, n=ne):  # Find the point with as many neighbours as possible
    t = np.zeros(len(contour))  # contour is the numpy array of dimension (N, 2)
    for i in range(len(contour)):
        t[i] = neighbour(contour[i], mask, n)
    return contour[np.argmax(t)]  # t contains the number of neighbours for each contour point.
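One way to drop the Python loop entirely is a summed-area table (integral image): after one pair of cumulative sums, the count of non-zero cells in every n×n window becomes four array lookups, all vectorized. A sketch under the assumption that every window lies fully inside the mask (no border handling; the function name is a placeholder):

```python
import numpy as np

def max_neighbour_vectorized(contour, mask, n):
    half = n // 2
    nz = (mask != 0).astype(np.int64)
    # 2-D prefix sums, padded with a zero row/column so each window sum
    # is sat[r1,c1] - sat[r0,c1] - sat[r1,c0] + sat[r0,c0].
    sat = np.pad(nz.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    r, c = contour[:, 0], contour[:, 1]
    r0, r1 = r - half, r + half + 1
    c0, c1 = c - half, c + half + 1
    window_nonzero = sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]
    counts = n**2 - window_nonzero   # neighbours = zero cells in each window
    return contour[np.argmax(counts)]
```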

performance – Optimizing a loop

Good morning all,
I’m looking for a way to loop through a large raw vector to find my pattern. For this I scan my rawdata variable 4 bytes at a time.
rawdata is heavy and it takes too long.
I’ve tried a vectorization technique but it doesn’t work.
Thank you

for (p in seq(0, length(rawdata), by = 4)) {
  try({
    if (RTC(rawdata[(p):(p + 4)]) == "DATA" &
        SInt32(rawdata[(p + 4):(p + 7)]) == 2 &
        SInt16(rawdata[(p + 8):(p + 9)]) <= 19) {
      # pattern found at offset p
    }
  }, silent = TRUE)
}
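The vectorized approach can work in principle: compare all aligned windows against the pattern at once instead of scanning one offset at a time. A Python/NumPy sketch of the idea (illustrative only, not a drop-in replacement for the `RTC`/`SInt32`/`SInt16` helpers above):

```python
import numpy as np

def find_pattern(raw, pattern):
    # Compare every window of len(pattern) bytes against the pattern at once.
    raw = np.frombuffer(raw, dtype=np.uint8)
    pat = np.frombuffer(pattern, dtype=np.uint8)
    windows = np.lib.stride_tricks.sliding_window_view(raw, len(pat))
    return np.flatnonzero((windows == pat).all(axis=1))
```

The same windowing trick exists in R (e.g. via `embed` or shifted logical vectors), which is usually where the speedup comes from.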

performance – vectorized cross-tabulation in Python for 2 arrays with 2 categories each

I have two Python lists, label and presence. I want to do a cross-tabulation AND get the count for each of the four blocks, A, B, C and D in the code below.

  • Both lists have the values True and False.
  • I have tried the pandas crosstab function, but it’s slower than my code below.
  • One problem with my code is that it’s not vectorized and uses a for loop, which slows things down.

Is there any way to make the function below faster in Python?

def cross_tab(label, presence):
    A_token = B_token = C_token = D_token = 0
    for i, j in zip(list(label), list(presence)):
        if i == True and j == True:
            A_token += 1
        elif i == False and j == False:
            B_token += 1
        elif i == True and j == False:
            C_token += 1
        elif i == False and j == True:
            D_token += 1
    return A_token, B_token, C_token, D_token

Some sample data and example input and output.



A: 4 B: 2 C: 3 D: 5
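A vectorized sketch that removes the loop: cast both lists to boolean arrays once, then each of the four cells is a single masked count (the A–D cells follow the same ordering as the return value above):

```python
import numpy as np

def cross_tab_vectorized(label, presence):
    label = np.asarray(label, dtype=bool)
    presence = np.asarray(presence, dtype=bool)
    A = np.count_nonzero(label & presence)     # True / True
    B = np.count_nonzero(~label & ~presence)   # False / False
    C = np.count_nonzero(label & ~presence)    # True / False
    D = np.count_nonzero(~label & presence)    # False / True
    return A, B, C, D
```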

performance – Simple parser using flex and c++

This is an alternative parser based on the specifications from this question. Briefly stated, the input is a text file in which each line has at least 33 fields separated by semicolons.

If the fourth field begins with either a T or an E, the line is valid and a subset of it is written to the output file. Specifically, the fields, numbered from $0$, should be output in this order: $\{0, 2, 3, 4, 5, 6, 10, 9, 11, 7, 32\}$, each separated by a comma. All other fields are discarded.
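As a cross-check of that specification, here is a tiny reference implementation in Python (a hypothetical helper for validating outputs, not part of the parser) that applies the same rule to one pre-split line:

```python
# Output order of the kept fields, as given in the specification.
ORDER = [0, 2, 3, 4, 5, 6, 10, 9, 11, 7, 32]

def filter_line(fields):
    # A line is valid if it has at least 33 fields and field 3 starts with T or E.
    if len(fields) >= 33 and fields[3][:1] in ("T", "E"):
        return ",".join(fields[i] for i in ORDER)
    return None
```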

%{
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <algorithm>
#include <experimental/iterator>
#include <iterator>
#undef YY_DECL
#define YY_DECL int FileLexer::yylex()

class FileLexer : public yyFlexLexer {
public:
    FileLexer(std::istream& in, std::ostream& out) :
        yyFlexLexer{in, out},
        out{out}
    {}
    using FlexLexer::yylex;
    /// the yylex function is automatically created by Flex.
    virtual int yylex();
private:
    /// fields collected from the current line
    std::vector<std::string> vec;
    std::ostream& out;
    unsigned fieldcount{0};
    bool valid{true};
};
%}

%option warn nodefault batch noyywrap c++
%option yyclass="FileLexer"

FIELD  [^;\n]*
DELIM  ;

%%

{DELIM} { }
\n      {
            if (valid && fieldcount >= 33) {
                std::copy(vec.begin(), vec.end(), std::experimental::make_ostream_joiner(out, ","));
                out << '\n';
            }
            vec.clear();
            fieldcount = 0;
            valid = true;
            return 1;
        }
{FIELD} {
            if (valid) {
                switch (fieldcount++) {
                    case 0:
                    case 1:
                    case 4:
                    case 5:
                    case 6:
                    case 7:
                    case 9:
                    case 32:
                        vec.emplace_back(yytext);
                        break;
                    case 3:
                        if (yytext[0] == 'E' || yytext[0] == 'T') {
                            valid = true;
                        } else {
                            valid = false;
                        }
                        vec.emplace_back(yytext);
                        break;
                    case 10: {
                            auto n{vec.size()};
                            vec.emplace_back(yytext);
                            std::iter_swap(vec.begin()+n, vec.begin()+n-2);
                        }
                        break;
                    case 11: {
                            auto n{vec.size()};
                            vec.emplace_back(yytext);
                            std::iter_swap(vec.begin()+n, vec.begin()+n-1);
                        }
                        break;
                    default:
                        break;  // all other fields are discarded
                }
            }
        }

%%

int main(int argc, char *argv[]) {
    if (argc >= 3) {
        std::ifstream in{argv[1]};
        std::ofstream out{argv[2]};
        FileLexer lexer{in, out};
        while (lexer.yylex() != 0)
            ;
    }
}

Compile with:

flex -o parsefile.cpp lexer.l 
g++ -O2 -std=gnu++17 parsefile.cpp -o parsefile

This works, but it is slow (2.165 s) on my machine with the same million-line input file mentioned in my answer to the other question.

I tried it a few different ways but was unable to get a version faster than the PHP code in the other question. The switch-statement logic is arguably a bit overly clever, storing only the needed fields in the desired order, but the speed was about the same as with a straightforward implementation.

If it matters, I’m using gcc version 10.1 and flex 2.6.4 on a 64-bit Linux machine.