performance tuning – Will running a command line on a supercomputer compute faster than running it on a domestic PC?

I have a Mathematica .nb file that has been executing some commands on my personal computer for several days now. The commands are not written for parallel computing and they don’t seem to work with Parallelize. The computation has taken too long for my progress and schedule, so I’m considering running the .nb through a private supercomputer service provider. My question is: will commands that don’t support Parallelize nevertheless compute significantly faster, or am I naive in believing that rewriting the code for supercomputing won’t be necessary?

performance – PowerShell – Convert English words describing numbers to their values in Arabic numerals

I am extremely new to programming (I started using PowerShell less than a month ago), but I am a fast learner when I am interested. To improve my programming skills, I wrote this script a few days ago as a self-imposed challenge, to do what the title says. I developed it entirely by myself, without help (when I asked for help online, no one answered).

Like I said, the script converts numbers in the form of English words to Arabic numeric values. I am on Windows 10 x64 using PowerShell 7.1 and have confirmed it works without bugs. For example, given the sample input “thirty-seven million eight hundred ninety-one thousand six hundred ninety-three”, it correctly outputs the number 37891693. The goal of this question is to shorten the code while still maintaining readability and clarity, without changing the logic. Feel free to contribute if you’d like to help me improve my programming skills. The script is below:

    function Str-Num {
    $lt100 = @{
    $gt100 = @{
        if ($string -match '-') {$string = $string -replace "-", " "}
        if ($string -match ' and ') {$string = $string -replace " and ", " "}
        if ($string.Split(' ')[0] -eq "Negative") {
            $string = $string -replace "Negative "
        if ($gt100.Keys -contains $string.Split(' ')[0] -or $string.Split(' ')[0] -eq "Hundred") { $string = "one " + $string}
        $count = $string.Split(' ').Count
        for ($i = 0; $i -lt $count; $i++) {
            $word = $string.Split(' ')[$i]
            if ($decimal -eq $false) {
                if ($lt100.Keys -contains $word) {
                elseif ($word -eq "Hundred" -and $digit -ne 0) {
                elseif ($gt100.Keys -contains $word) {
            if ($word -eq "point") {
                $decimal = $true
            if ($decimal -eq $true -and $lt100.Keys -contains $word) {
                $number += $lt100.$word * [math]::Pow(10, -($i - $point))
    if ($negative -eq $true) {$number = -$number}
    Return "$number"
    $string = Read-Host "Please input string"
    Str-Num $string
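The word-by-word accumulation the script performs can be sketched compactly like this (an illustration of the approach only, written in Python; the lookup tables here are abbreviated versions of the script’s `$lt100`/`$gt100` hashtables, not the real ones):

```python
# Sketch of the accumulate-and-scale idea behind the script (abbreviated word tables).
UNITS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9,
    "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
    "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90,
}
SCALES = {"thousand": 1_000, "million": 1_000_000, "billion": 1_000_000_000}

def words_to_number(text):
    total = current = 0
    for word in text.lower().replace("-", " ").split():
        if word in UNITS:
            current += UNITS[word]           # e.g. "thirty" + "seven" -> 37
        elif word == "hundred":
            current *= 100                   # scale the running group
        elif word in SCALES:
            total += current * SCALES[word]  # close out the current group
            current = 0
    return total + current

print(words_to_number(
    "thirty-seven million eight hundred ninety-one thousand six hundred ninety-three"))
# 37891693
```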

performance – Merge arrays in objects in array based on property

const array = [{
    id: 1,
    items: [1, 2, 3]
}, {
    id: 2,
    items: [1, 2]
}, {
    id: 1,
    items: [4]
}, {
    id: 3,
    items: [1]
}];

const mergeLines = (a) => {
  const processedIds = [];
  const finalResult = [];
  a.forEach((element) => {
    const elementId = element.id;
    if (processedIds.indexOf(elementId) === -1) {
      const occurences = a.filter((el) => el.id === elementId);
      if (occurences.length > 1) {
        const allItems = occurences.reduce((acc, curr) => {
          acc = acc.concat(curr.items);
          return acc;
        }, []);
        element = { ...element,
          items: allItems
        };
      } else {
        // single occurrence, keep the element as-is
      }
      processedIds.push(elementId);
      finalResult.push(element);
    }
  });

  return finalResult;
};


performance – My WordPress site is slow when I’m not using a VPN

I host a WordPress site which isn’t behaving as I’d expect.

It’s taking an age to load, sometimes timing out before it’s finished.

However, if I have my VPN (PIA) enabled, it loads immediately and is super responsive.

Initially I thought the issue was related only to mobile content, as I didn’t realise I had my VPN on and could only replicate the issue on my phone (Safari/iPhone). However, since turning off my VPN, the site is slow and unresponsive in Chrome and IE as well.

It’s a WordPress site on PHP 5.6, Windows Server 2012, with a caching plugin configured.

Has anyone seen anything like this before?
Any suggestions on the cause?

performance tuning – Vectorization of multifold summation for speed-up

I searched this website but didn’t find a suitable answer describing how one can speed up summation in Mathematica using vectorization and similar techniques.

I often have to numerically sum over a multi-fold series of the hypergeometric type in my research work. One toy example is

lim = 150;
Sum[Gamma[1 + n1 + n2 + n3]/(n1! n2! n3!) (0.1)^n1 (0.1)^n2 (0.1)^n3,
  {n1, 0, lim}, {n2, 0, lim}, {n3, 0, lim}] // AbsoluteTiming

which takes about 42 sec on my laptop.
The only way I know to speed this up is to use ParallelSum instead of Sum, which takes 9 sec thanks to my 8-core processor.
Are there any other tricks or techniques to speed it up?
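To give a concrete picture of what vectorization buys for a sum like this, here is the same toy example evaluated in one shot with NumPy broadcasting (a Python/SciPy sketch for comparison only; the variable names are mine). Computing each term in log space keeps the Gamma and factorial factors from overflowing:

```python
import numpy as np
from scipy.special import gammaln

lim = 150
n = np.arange(lim + 1)
# Broadcast n1, n2, n3 over a (lim+1)^3 grid instead of looping.
n1 = n[:, None, None]
n2 = n[None, :, None]
n3 = n[None, None, :]

# log of Gamma(1 + n1 + n2 + n3)/(n1! n2! n3!) * 0.1^(n1+n2+n3)
log_terms = (gammaln(1 + n1 + n2 + n3)
             - gammaln(n1 + 1) - gammaln(n2 + 1) - gammaln(n3 + 1)
             + (n1 + n2 + n3) * np.log(0.1))

total = np.exp(log_terms).sum()
print(total)
```

As a sanity check: this particular triple sum is the multinomial series for 1/(1 - x - y - z) with x = y = z = 0.1, so the result should be very close to 1/0.7 ≈ 1.42857.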

performance – Tic Tac Toe Game using Python

Your board dictionary can be simplified from

board = {1: ' ', 2: ' ', 3: ' ',
         4: ' ', 5: ' ', 6: ' ',
         7: ' ', 8: ' ', 9: ' '}

to a dict comprehension:

board = {n: ' ' for n in range(1, 10)}

Your win_game function can be greatly simplified, from

def win_game(marker, player_id):
    """Function to check for winning combination"""
    if board(1) == marker and board(2) == marker and board(3) == marker or 
            board(4) == marker and board(5) == marker and board(6) == marker or 
            board(7) == marker and board(8) == marker and board(9) == marker or 
            board(1) == marker and board(4) == marker and board(7) == marker or 
            board(2) == marker and board(5) == marker and board(8) == marker or 
            board(3) == marker and board(6) == marker and board(9) == marker or 
            board(1) == marker and board(5) == marker and board(9) == marker or 
            board(3) == marker and board(5) == marker and board(7) == marker:

        print("Player", player_id, "wins!")
        return True

        return False


def win_game(marker, player_id):
    """Function to check for winning combination"""
    if board[1] == board[2] == board[3] == marker or \
       board[4] == board[5] == board[6] == marker or \
       board[7] == board[8] == board[9] == marker or \
       board[1] == board[4] == board[7] == marker or \
       board[2] == board[5] == board[8] == marker or \
       board[3] == board[6] == board[9] == marker or \
       board[1] == board[5] == board[9] == marker or \
       board[3] == board[5] == board[7] == marker:
        print("Player", player_id, "wins!")
        return True
    return False

You see, the else statement is not necessary, as the return statement will make the program jump out of the function. Also, as you can probably tell,

a == 1 and b == 1 and c == 1

is the same as

a == b == c == 1

The unnecessary else statement applies to your other functions as well.

For get_player_details:

def get_player_details(curr_player):
    """Function to get player identifier and marker"""
    if curr_player == 'A':
        return ('B', 'O')
    else:
        return ('A', 'X')


def get_player_details(curr_player):
    """Function to get player identifier and marker"""
    if curr_player == 'A':
        return ('B', 'O')
    return ('A', 'X')

For play_again:

def play_again():
    """Function to check if player wants to play again"""
    print("Do you want to play again?")
    play_again = input()

    if play_again.upper() == 'Y':
        for z in board:
            board[z] = ' '
        return True
    else:
        print("Thanks for playing. See you next time!")
        return False


def play_again():
    """Function to check if player wants to play again"""
    print("Do you want to play again?")
    play_again = input()
    if play_again.upper() == 'Y':
        for z in board:
            board[z] = ' '
        return True
    print("Thanks for playing. See you next time!")
    return False

In your print_board function, you can increase the readability of your print statement with a formatted string. Using not instead of == 0 is considered more pythonic:

def print_board():
    """Function to print the board"""
    for i in board:
        print(i, ':', board[i], ' ', end='')
        if i % 3 == 0:
            print()


def print_board():
    """Function to print the board"""
    for i in board:
        print(f'{i} : {board[i]}  ', end='')
        if not i % 3:
            print()
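As a further, optional step beyond the review above (my own suggestion, not part of the original code), the eight hard-coded conditions in win_game could be made data-driven by listing the winning lines once:

```python
board = {n: ' ' for n in range(1, 10)}

WINNING_LINES = [
    (1, 2, 3), (4, 5, 6), (7, 8, 9),  # rows
    (1, 4, 7), (2, 5, 8), (3, 6, 9),  # columns
    (1, 5, 9), (3, 5, 7),             # diagonals
]

def win_game(marker, player_id):
    """Check whether `marker` occupies any winning line."""
    if any(all(board[i] == marker for i in line) for line in WINNING_LINES):
        print("Player", player_id, "wins!")
        return True
    return False
```

This keeps the game rules in one place, so adding a variant (say, a 4x4 board) only means editing the list.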

performance – SQL Server dynamic partition with page-compressed index

I created a table with partitions by following the link below.

I created monthly partitions based on the Date_Id column:

CREATE PARTITION FUNCTION Fnc_Prt_Fact_Sales (int)
AS RANGE RIGHT FOR VALUES ('20200101', '20200201', '20200301', '20200401')

CREATE PARTITION SCHEME Prt_Scheme_Fact_Sales
AS PARTITION Fnc_Prt_Fact_Sales

CREATE TABLE [dbo].[Fact_Sales](
   [Slip_No] [nvarchar](155) NULL,
   [Date_Id] [int] NOT NULL,
   [City_Id] [int] NOT NULL,
   [Store_Id] [int] NOT NULL,
   [Sales] FLOAT
) ON Prt_Scheme_Fact_Sales(Date_Id)

CREATE CLUSTERED INDEX [Ix_Fact_Sales] ON [dbo].[Fact_Sales] (
    [Date_Id] ASC,
    [City_Id] ASC,
    [Store_Id] ASC
) WITH (
)  ON Prt_Scheme_Fact_Sales(Date_Id)

I want to add partitions dynamically (monthly). If I do that, how do I make DATA_COMPRESSION include the newly added partition?
In the blog, the partitions are created manually.

If it matters, I use SQL Server 2017 Standard Edition.

performance – MySQL V8 replication lagging waiting for handler commit

Let me start with some background.

We’ve decided to upgrade our master DB to a more performant machine. We got a server with 1.5TB of RAM, quad Intel(R) Xeon(R) Gold 6254 CPUs, and all-NVMe I/O. Our current environment is on MySQL 5.7.30, which is what I installed on this new server. I activated it as a READ server first, just to see how it behaved, basing my config on the existing master server with slave flags added. After a few days we got complaints of random slowdowns in our main application. What we discovered is that the slave was lagging, and that DROP TABLE statements seemed to be locking the environment. I took it off the read pool to see if I could figure out the issue.

After doing some research we thought the issue was this bug. We were always planning to upgrade to v8 anyway, so I figured I might as well do it now and get it over with. We upgraded to 8.0.22 and re-activated the new server on the network. Again, after a few days it started to slow down on dropping tables and inserts. Since this is v8, we’re seeing a lot of “waiting for handler commit”. After researching that state, we found some docs suggesting range_optimizer_max_mem_size might be the issue. We set it to 0, restarted MySQL, and waited. A day later, slowdowns again.

I then started looking at the OS (CentOS 7.8); maybe the open-files limit wasn’t enough. We checked, and we had set it to over 1M open files. We did notice, though, that we had set the rule on the mysql user while systemd has its own setting. We set it at the systemd level (/lib/systemd/system/mysqld.service and /etc/systemd/system/mysqld.service.d/limits.conf) and restarted MySQL. A day passed and the slowdown reappeared. Now the server can’t even keep up with replication, let alone additional traffic.

I thought it was my multi-threaded replication (as we have 4 primary databases), so I dropped it to single-threaded. All that did was make SHOW SLAVE STATUS always show Slave_SQL_Running_State: waiting for handler commit.

At this stage I’m just stumped about what’s happening. The new server’s replication keeps falling further and further behind, and SHOW FULL PROCESSLIST always has queries in either “waiting for handler commit” or “checking permissions”. The slowest statements seem to be DROP TABLE; I can see one hanging in the process list for 3-5 seconds:

State: waiting for handler commit
Info: DROP TABLE IF EXISTS crm.invalidPhoneNumbersCompiled_TMP,crm.invalidPhoneNumbersCompiled_OLD /* generated by server */

We also found this on Stack Overflow and went ahead and set all the variables on our server, but got the same result.

In PMM I can’t see anything out of the ordinary in either the OS overview or the MySQL overview. The server is practically idle and replication just lags. I/O is idle 99.9% of the time and the load is not even a blip.


INNO Engine Status

Sample log

performance – Sorting Algorithm (from scratch) in Python

I am attempting to write a Python program where I have to create a sorting algorithm without the assistance of the built-ins (like the sort() function). I used the def keyword to create the sorting function (the code bits in this paragraph will make more sense once you see my code), so all I have to do at the end is call it, passing original_list as a parameter. My code (almost) works, but my concern is: how do I pass original_list (or the user’s input) as a parameter of the function, instead of having the user edit the program (going to the bottom of the file to insert the numbers of their list) for it to work? Also, silly me, the user is still going to edit the program either way, but it’s easier, and required, to pass original_list as a parameter of the sorting_algorithm function.

I also thought it would be helpful for you to know that I have no prior coding experience and that this is my first code in any programming language, so this may not be the most efficient code.

My code is as follows (my explanations might be too long, but I need to do this for clarity’s sake. If any of my explanations are wrong/vague, please don’t hesitate to correct me):

def sorting_algorithm(numbers):                                                         #this is the function in which the code will be written and, upon completion, should be called on.
    sorted_list = []                                                                    #the sorted_list variable is assigned to an empty list, which is where the sorted numbers will be stored.
    index = 0                                                                           #the index starts from zero because, to effectively sort the numbers, the counting must start from the first number in the list.
    comparison_number = 1                                                               #this is a comparison variable used for comparing the numbers in the list.

    while numbers:
        if numbers[index] == numbers[-1] and numbers[index] <= comparison_number:       #if the program reaches the last number in the list and it is smaller than the comparison_number (which starts at 1)
            sorted_list.append(numbers.pop(index))                                      #then that number is put into the sorted_list's '[]'.
            index = 0                                                                   #the index stays at zero for each if, elif and else statement as only one of these conditions can be true.
        elif numbers[index] == numbers[-1]:                                             #if the program reaches the last number in the list,
            index = 0                                                                   #then the position of the index should be returned to '0'.
            comparison_number += 1                                                      #the comparison number will also go up by one because numbers (from the list) are being compared to the comparison number.
        elif numbers[index] <= comparison_number:                                       #if the number (under analysis) is less than or equal to comparison_number
            sorted_list.append(numbers.pop(index))                                      #then that number can be placed into the sorted_list's '[]'.
            index = 0                                                                   #the index is again returned to the first number in the list so it (the number) can be analyzed.
            comparison_number += 1                                                      #the comparison_number goes up by one again because another number has been compared and added to the sorted_list's '[]'.
        else:
            index += 1                                                                  #if none of the above conditions apply to the number(s) in the list,
    return sorted_list                                                                  #then the program will move onto the next number and the loop will restart! Also, I'm not entirely sure
                                                                                        #if the return sorted_list line is needed.

sorting_algorithm()                                                                     #this is how to call on the function, I believe.
original_list = [123, 85, 9587, 0, 453, 7, 63]                                          #this is where the user can/will input the numbers of their list.
print("The unsorted list: {}".format(original_list))                                    #I had to look at some examples of code to figure this out, I'm still not sure what '{}' is for...
original_list = sorting_algorithm(original_list)                                        #but this will take the numbers of the list and go through the loop and put the numbers in ascending order.
print("And the sorted list: {}".format(original_list))                                  #at last, this line will print out the sorted list of numbers.
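To make my question concrete, here is roughly what I picture instead of editing the bottom of the file (just a sketch of the idea; parse_user_numbers is a name I made up):

```python
def parse_user_numbers(raw):
    """Turn a string like '123 85 9587' into a list of ints."""
    return [int(piece) for piece in raw.split()]

# In the real program the string would come from the user:
# raw = input("Enter numbers separated by spaces: ")
raw = "123 85 9587 0 453 7 63"   # hard-coded here for illustration
original_list = parse_user_numbers(raw)
print("The unsorted list: {}".format(original_list))
```

That way the list is built from the user’s typed input and can simply be passed to sorting_algorithm(original_list).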

Note: I know that the names of my variables are really long (I used entire words to identify my vars), but it’s easier for me to go back and know what those variables do. I’m very sorry if this is annoying to some of you, but since this is my first Python code, I believe it is essential for me to do this to achieve optimal clarity.

I hope my question isn’t too vague, and if I can provide any further details, please let me know.

performance – Configure MySQL to minimize file locks

To start off, I’m not a DBA, so I may have overlooked something simple.

Backstory: we currently host all our client accounts off a VPS. The VPS has 42 cores available, 8GB of RAM, and SSD storage. Most of our databases come from a CMS (Joomla or WordPress), plus a dozen Symfony applications. The current table count is just over 15K with ~4.2GB of data. The initial problem started with pages becoming unresponsive for a few seconds at a time. After digging into the logs, I found multiple errors like these:

2020-11-27T19:50:35.218190Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5187ms. The settings might not be optimal. (flushed=8 and evicted=0, during the time.)

2020-11-27T20:17:32.699558Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4027ms. The settings might not be optimal. (flushed=14 and evicted=0, during the time.)

The flush times were often as high as 20 seconds. I started researching fixes and eventually moved all tables to InnoDB, increased the InnoDB buffer pool, increased the table cache size, increased the table open cache, and increased the file limits. That resulted in file-lock errors:

2020-11-27T16:49:38.736243Z 378 [ERROR] InnoDB: Unable to lock ./sedashoa_feb2718/zys4l_easyblog_featured.ibd error: 37
2020-11-27 10:49:38 0x7f6009133700  InnoDB: Assertion failure in thread 140050445842176 in file line 906

I wasted a lot of time tweaking the settings back down, since the first time I contacted my hosting provider about the issues, they seemed to think it was MySQL memory usage. After tuning everything down so that max MySQL memory usage was under 3GB and raising table_open_cache back up, I started getting the same “error: 37” crashes. This time, when I contacted hosting support, they informed me I was hitting the file-ops limit. So I lowered table_open_cache back down to 1000.

My question is: how should I configure MySQL to make the most of the resources available? I already know we need to move either to a dedicated host or to a managed SQL server like DigitalOcean offers. In the meantime, what should I tweak to make the most of the RAM available (figure 4GB just for MySQL) without running into the file-lock limits? This is the current config.

# Default options are read from the following files in the given order:
# /etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf

#port                            = 3306
#socket                          = /var/run/mysqld/mysqld.sock

# Required Settings
basedir                         = /usr
bind_address                    = # Change to to allow remote connections
datadir                         = /var/lib/mysql
default-time-zone               = '-5:00'
max_allowed_packet              = 32M       #
max_connect_errors              = 1000000
performance-schema              = 1     # turn performance schema on
pid_file                        = /var/run/mysqld/
port                            = 3306
#socket                          = /var/run/mysqld/mysqld.sock

# Enable for b/c with databases created in older MySQL/MariaDB versions (e.g. when using null dates)

tmpdir                          = /tmp
user                            = mysql

# InnoDB Settings
default_storage_engine          = InnoDB
innodb_buffer_pool_instances    = 2     # Use 1 instance per 1GB of InnoDB pool size
innodb_buffer_pool_size         = 2G    # Use up to 70-80% of RAM (working off 4GB or half VM RAM)
innodb_lru_scan_depth           = 256   # Default 1024
innodb_file_per_table           = 1
innodb_flush_log_at_trx_commit  = 1         #
innodb_flush_method             = O_DIRECT  #
innodb_log_buffer_size          = 16M       #
innodb_log_file_size            = 256M
innodb_stats_on_metadata        = 0         #
innodb_use_native_aio           = 0     # MySQL will not start with native asynchronous file access enabled

#innodb_temp_data_file_path     = ibtmp1:64M:autoextend:max:20G # Control the maximum size for the ibtmp1 file
innodb_thread_concurrency       = 16     # Optional: Set to the number of CPUs on your system (minus 1 or 2) to better
                                        # contain CPU usage. E.g. if your system has 8 CPUs, try 6 or 7 and check
                                        # the overall load produced by MySQL/MariaDB.
innodb_read_io_threads          = 8
innodb_write_io_threads         = 8

# MyISAM Settings - disable cache
# query_cache_limit               = 4M    # UPD - Option supported by MariaDB & up to MySQL 5.7, remove this line on MySQL 8.x
query_cache_size                = 0     # UPD - Option supported by MariaDB & up to MySQL 5.7, remove this line on MySQL 8.x
query_cache_type                = 0     # Option supported by MariaDB & up to MySQL 5.7, remove this line on MySQL 8.x

key_buffer_size                 = 10M   # UPD

low_priority_updates            = 1         #
concurrent_insert               = 2         # Same as auto

# Connection Settings
max_connections                 = 25   # UPD
max_user_connections            = 24

back_log                        = 60
thread_cache_size               = 100       #
thread_stack                    = 256K      # Default value for 64 bit platforms    See:

interactive_timeout             = 180
wait_timeout                    = 60

# For MySQL 5.7+ only (disabled by default)
#max_execution_time             = 30000 # Set a timeout limit for SELECT statements (value in milliseconds).
                                        # This option may be useful to address aggressive crawling on large sites,
                                        # but it can also cause issues (e.g. with backups). So use with extreme caution and test!
                                        # More info at:

# For MariaDB 10.1.1+ only (disabled by default)
#max_statement_time             = 30    # The equivalent of "max_execution_time" in MySQL 5.7+ (set above)
                                        # The variable is of type double, thus you can use subsecond timeout.
                                        # For example you can use value 0.01 for 10 milliseconds timeout.
                                        # More info at:

# Buffer Settings
join_buffer_size                = 512K  # UPD
read_buffer_size                = 1M    # UPD
read_rnd_buffer_size            = 512K  # UPD
sort_buffer_size                = 2M    # UPD

# Table Settings
# In systemd managed systems like Ubuntu 16.04+ or CentOS 7+, you need to perform an extra action for table_open_cache & open_files_limit
# to be overridden (also see comment next to open_files_limit).
# E.g. for MySQL 5.7, please check:
# and for MariaDB check:

# ***Non crashing, but with slow cache flush ***
table_definition_cache          = 20000 # UPD
table_open_cache                = 1000  # UPD Total tables on 2020-11-24=15,617
open_files_limit                = 2000  # UPD - This can be 2x to 3x the table_open_cache value or match the system's
                                        # open files limit usually set in /etc/sysctl.conf or /etc/security/limits.conf
                                        # In systemd managed systems this limit must also be set in:
                                        # /etc/systemd/system/mysqld.service.d/override.conf (for MySQL 5.7+) and
                                        # /etc/systemd/system/mariadb.service.d/override.conf (for MariaDB)
# ***Crashes because of VPS op limit ***
#table_definition_cache          = 20000
#table_open_cache                = 20000
#open_files_limit                = 40000

# max_heap_table_size and tmp_table_size should be changed together and kept equal
max_heap_table_size             = 16M
tmp_table_size                  = 16M

# Search Settings
ft_min_word_len                 = 3     # Minimum length of words to be indexed for search results

# Logging
log_error                       = /var/lib/mysql/mysql_error.log
log_queries_not_using_indexes   = 1
long_query_time                 = 5
slow_query_log                  = 0     # Disabled for production
slow_query_log_file             = /var/lib/mysql/mysql_slow.log

# Variable reference
# For MySQL 5.7:
# For MariaDB:
max_allowed_packet              = 64M

Any ideas or suggestions would be appreciated.