server setup – unable to load image in pub/media/customer

I have images in the pub/media/customer directory, but when I try to access an image by loading it in a browser, I get a "file not found" response. I have commented out these lines in nginx.conf.sample

#location /media/customer/ {
#   deny all;
#}

in my Magento root folder, as suggested in this thread, and restarted nginx, but the image is still not found.
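For what it's worth, commenting out lines in nginx.conf.sample only has an effect if that file is actually included by the active server block. A minimal sketch of what the live vhost should contain, following Magento's documented include pattern (hostname and paths are assumptions):

```nginx
server {
    listen 80;
    server_name shop.example.com;          # assumed hostname
    set $MAGE_ROOT /var/www/html/magento2; # assumed Magento root

    # The edited sample file must be the one included here;
    # otherwise the commented-out "deny all" never takes effect.
    include /var/www/html/magento2/nginx.conf.sample;
}
```

After editing, verify the config with `nginx -t` and reload it; a restart does not help if nginx is reading a different copy of the file.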

memory – Computer fails to load with new RAM

I decided to upgrade my PC by increasing the amount of RAM from 16GB (4 sticks, 4GB each) to 32GB (4 sticks, 8GB each). However, after replacing the RAM with the new sticks, the computer failed to boot. When powered on, the fans make several rotations and the system fails to start; several seconds later it makes another attempt, again without success. The old RAM sticks still work perfectly.

My motherboard's specification says that this amount and type of RAM is supported. Here's a quote:

  • 4 x 1.5V DDR3 DIMM sockets supporting up to 32 GB of system memory
  • Dual channel memory architecture
  • Support for DDR3 2133/1866/1600/1333/1066 MHz memory modules
  • Support for non-ECC memory modules
  • Support for Extreme Memory Profile (XMP) memory modules

The store where I bought them also confirmed their compatibility before I placed the order.

I tried the following:

  • Reset the CMOS by removing the small battery (located under the video card);
  • Inserted the RAM sticks into each slot, one by one;
  • Moved the RAM sticks between different DIMM slots, in different combinations;
  • Manually set the memory modules’ frequency in the BIOS to 1866 MHz.

Technical info:

  • Motherboard – Gigabyte Z68P-DS3 (Socket 1155);
  • New RAM – HyperX DDR3-1866 16384MB PC3-14900 (Kit of 2×8192) FURY Black (HX318C10FBK2/16), 2 pairs;
  • Windows 10 (x64).

I’m wondering whether I should return the RAM to the store, or whether there is a way to make it work in my PC.

Any help is highly appreciated. Thank you in advance! 🙂

webserver – Upstream Nginx server is appending port when responding back to load balancer Nginx

We are using an Nginx load balancer that distributes load across our two upstream Nginx web servers. TCP load balancing is done using a server block inside the stream block, as below.

stream {
    upstream stream_backend {
        server 192.168.200.x:8440;
        server 192.168.200.y:8440;
    }

    server {
        listen 443;
        proxy_pass stream_backend;
        proxy_protocol on;
    }
}

The upstream Nginx web servers host multiple websites, each with its own server block and server_name under http {}, configured as below. We have made these websites listen on a specific port, 8440, with ssl and proxy_protocol as options on the listen directive.

server {
    listen 8440 ssl proxy_protocol;
    root /path/to/folder;

    rewrite ^/foo/a.json /bar/b.json permanent;

    location / {
        try_files $uri /index.php?$query_string;
    }
}
Issue: when I access the old URL, it is supposed to be rewritten and served from the new location. However, it is not working as expected: instead, port 8440 gets appended to the redirect URL shown in the browser. How can I get rid of this port that gets appended by the rewrite rule?
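A hedged sketch of one likely fix, assuming the redirect URL is being built from the backend's own listen port: nginx's port_in_redirect directive controls whether the listen port is included in the Location header it generates.

```nginx
server {
    listen 8440 ssl proxy_protocol;
    root /path/to/folder;

    # Omit the internal listen port (8440) from redirects nginx generates,
    # so the browser is redirected to the public hostname without :8440.
    port_in_redirect off;

    rewrite ^/foo/a.json /bar/b.json permanent;

    location / {
        try_files $uri /index.php?$query_string;
    }
}
```

This is a sketch, not a drop-in config: whether it fully solves the problem depends on how the load balancer forwards the original Host header.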

woocommerce – Search result page doesn’t load sometimes or loads in a messy way

After searching for any product, or clicking on menu categories, the results page doesn’t show anything: it displays a corrupted, messy page, or nothing at all with an HTTP 500 error. Results exist, but the page doesn’t show them, or it finally gets displayed after refreshing the page 4 or 5 times.

Here are the website links:
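Since the 500 error is intermittent, a first diagnostic step, assuming a standard WordPress/WooCommerce install, is to enable debug logging in wp-config.php so each failure leaves a trace:

```php
// In wp-config.php, above the "That's all, stop editing!" line:
define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );      // failures logged to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false ); // do not print errors to visitors
```

Reproduce the broken search a few times, then read wp-content/debug.log; the logged fatal error will usually name the plugin, theme, or resource limit behind the intermittent 500.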

addEventListener not working on SharePoint list form load

I have a SharePoint list form, and I prepopulated certain fields with SharePoint user properties. I also decided to use DOM manipulation on the list form; using the DOM, I was able to successfully disable certain form elements. I also have a checkbox at the beginning of the form that is supposed to hide certain fields when checked. For some reason, my event listener on the checkbox isn’t working like it should.

In the developer tools, when looking at the SharePoint list form, I get the following error: "Unable to get property 'addEventListener' of undefined or null reference".
The error points to this line:



function getProperties() {
    var web = _spPageContextInfo.webAbsoluteUrl;
    var endPointUrl = web + "/_api/SP.UserProfiles.PeopleManager/GetMyProperties";

    axios.get(endPointUrl).then(function (response) {
        var properties = /* value elided in the original post */ [];
        var displayName = /* value elided in the original post */ "";
        var nameAry = displayName.split(" ");
        if (document.querySelector('input[title="Anonymous"]') === null) {
            alert("Anonymous Code");
        }

        var fName = document.querySelector('input[title="First Name"]');
        fName.value = nameAry[1];
        fName.disabled = true;

        var lName = document.querySelector('input[title="Last Name"]');
        lName.value = nameAry[0];
        lName.disabled = true;

        var eMail = document.querySelector('input[title="Email Address"]');
        eMail.value = /* value elided in the original post */ "";
        eMail.disabled = true;

        for (var i = 0; i < properties.length; i++) {
            // Office Phone Number
            if (properties[i].Key == "Office") {
                var oSymbold = document.querySelector('input[title="Office Symbol"]');
                oSymbold.value = properties[i].Value;
                oSymbold.disabled = true;
            }
        }
    });
}
Here’s the weird part: I’m able to run the same exact code (with slight modifications) in CodePen, reading from typicode/json instead of a SharePoint list, and it works perfectly. Here’s the codepen

Any idea why it’s not working in SharePoint?

Any assistance you can provide would be much appreciated. I’ve been tackling this for two days.
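One common cause worth checking: SharePoint renders list-form fields asynchronously, so a querySelector call that runs too early returns null, and calling addEventListener on null produces exactly this error. A hedged sketch of a retry guard (the selector and timings are assumptions, not part of the original code):

```javascript
// Poll for an element before wiring it up; gives up after `tries` attempts.
// `root` defaults to the page's document; a test can pass a stub instead.
function whenPresent(selector, callback, tries, delayMs, root) {
  root = root || document;
  tries = (tries === undefined) ? 20 : tries;
  delayMs = delayMs || 250;

  var el = root.querySelector(selector);
  if (el) {
    callback(el);
    return;
  }
  if (tries > 0) {
    setTimeout(function () {
      whenPresent(selector, callback, tries - 1, delayMs, root);
    }, delayMs);
  }
}

// Usage on the list form (selector is an assumption):
// whenPresent('input[title="Anonymous"]', function (box) {
//   box.addEventListener("change", function () { /* hide/show fields */ });
// });
```

The CodePen working while SharePoint fails is consistent with this: CodePen's markup exists before the script runs, while SharePoint's form fields may not.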

Rust server load testing – Game Development Stack Exchange

I am looking to stress test my Rust game server and monitor CPU, RAM and FPS. I could not find much around.

In general, I think I will need a good amount of compute power: a bunch of cloud-based VMs that generate the load over UDP using some headless client.

Monitoring CPU and RAM should be fairly easy, but what about the in-game FPS?
Also, I could not find much about Rust clients that could simulate a player.


linux – High CPU Load on CentOS 7 on esxi

We are facing performance issues with applications in production after upgrading from CentOS 6 to CentOS 7.

For debugging, we did a benchmark exercise where we ran 4 instances of the following Python code on a VM with 4 vCPUs and 8 GB of RAM:

i = 0
while True:
    i = i + 1
    i = 0

The result is that on the CentOS 7 machine the load keeps increasing, and we killed the processes once it reached around 11. The same exercise on a CentOS 6 VM causes the load to stabilize around 4.5–5, but it does not go above 5.
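For anyone wanting to reproduce the exercise, a self-contained sketch of the same benchmark using multiprocessing (worker count and duration are parameters, not part of the original test):

```python
import multiprocessing
import time

def busy_loop(duration_s):
    """Spin one CPU core for duration_s seconds, mirroring the benchmark loop."""
    end = time.monotonic() + duration_s
    i = 0
    while time.monotonic() < end:
        i = i + 1
        i = 0

def run_benchmark(n_workers=4, duration_s=60.0):
    """Run n_workers busy loops in parallel processes and wait for them."""
    procs = [multiprocessing.Process(target=busy_loop, args=(duration_s,))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Run e.g. `run_benchmark(4, 60.0)` and watch `uptime` from another shell: on a healthy 4-vCPU box the 1-minute load average should settle near 4; an unbounded climb reproduces the problem described above.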

CentOS 7
uname -a
Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

CentOS 6
uname -a
Linux 2.6.32-642.el6.x86_64 #1 SMP Tue May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

In this case, on CentOS 7, we also see performance degradation, measured by the time it takes to loop 1000 iterations of the Python code (2x–3x more time for the same number of loops).

When we did the same exercise on AWS EC2 CentOS 7 machine, we saw that the load stabilizes around 4.5 – 5 on that as well.

CentOS 7
uname -a
Linux 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

To rule out an ESXi host issue, we tried the same thing on 3 different hosts, and tried a CentOS 6 and a CentOS 7 machine on the same host, all with the same results.

We think that either AWS has patched their hardware somehow in a way we haven’t, or that we have some bad configuration in our VMware settings for the VM.

What should we look at for possible causes for the unbounded increase in load?

best option to load test with fixed ip using jmeter and azure

Azure DevOps offers a great service that allows you to run a simple JMeter script (no custom samplers, though, and it only supports JMeter 3.2) with one click. However, this won’t work for us, as the site we want to load test can only be seen from whitelisted IPs.

Azure does have support for setting up a “load test rig in a specific VNet for testing private apps”, but this was far more complicated than we could get to work.

The option we could get to work was to create 4 large Linux servers, manually install JMeter on each, give each a fixed IP, and then run our tests completely manually. However, we only test a few hours a week, so it’s expensive to keep the servers on the whole time. If we switch them off, they lose their IPs, and we have to manually rebuild them each time.

We tried the subnet route. You can easily create a VLAN with a subnet and put the VMs in it, but then the problem is how to give this subnet an external IP/gateway to “see” the outside world, and to allow me to SSH into each box to administer them. I tried setting up a firewall on the VLAN, but the firewall requires you to set up a new subnet; you can’t specify the subnet your machines are already in, which is a bit of a showstopper for non-networking gurus.

Perhaps another way is to put, say, 4 servers on the subnet, set one up with a public static IP, and set it as the default gateway for the other 3 servers. Then, as long as we keep the gateway server alive, we keep the IP and network, even if we temporarily shut down the other 3 servers.

We are used to Linode. If this were Linode, we would create a StackScript (which is similar to a bash script run on VM creation, but with parameters etc.) to set up the server, security, and networking, and to install the necessary software and user certificates. What’s the best way to do this on Azure, i.e. take some of the work out of creating JMeter servers? If this were AWS, I would see if there was a decent machine image of Ubuntu with JMeter on it, but searching the Azure Marketplace, there is only a service by pnop which is paid for in addition to the cost of the VMs. No JMeter builds.

Note: we cannot use any service outside of Azure. Our organisation will not purchase any service other than those offered by Azure, so BlazeMeter etc. are out.
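For what it's worth, the closest Azure analogue to a Linode StackScript is cloud-init passed via --custom-data, and a Standard-SKU static public IP keeps its address while the VM is deallocated, which addresses the "switch them off and lose the IP" problem. A hedged sketch with the Azure CLI (resource group, names, image, and JMeter version are assumptions):

```shell
# Static public IP: persists while the VM is deallocated, so the
# whitelisted address is kept between test sessions.
az network public-ip create \
  --resource-group loadtest-rg \
  --name jmeter1-ip \
  --sku Standard \
  --allocation-method Static

# VM bootstrapped by cloud-init (the StackScript equivalent).
az vm create \
  --resource-group loadtest-rg \
  --name jmeter1 \
  --image Ubuntu2204 \
  --public-ip-address jmeter1-ip \
  --custom-data cloud-init.yml

# cloud-init.yml (installs Java and JMeter on first boot):
#   #cloud-config
#   packages:
#     - openjdk-11-jre-headless
#   runcmd:
#     - curl -sL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.5.tgz | tar xz -C /opt

# Between test sessions: deallocate to stop paying for compute
# without losing the static IP, then start again before the next run.
az vm deallocate --resource-group loadtest-rg --name jmeter1
az vm start      --resource-group loadtest-rg --name jmeter1
```

This is a provisioning sketch, not a tested pipeline; the whitelisting still has to cover each static IP you create.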

High load average how to diagnosis

I have a VPS, and for a long time now I have had, from time to time, a high load average that makes things slow, but I do not know where to start looking for the cause.
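As a first step, a sketch of the usual triage commands on a standard Linux VPS (run them while the load is high):

```shell
# Current load averages and how long the box has been up
uptime

# Snapshot of the biggest CPU and memory consumers right now
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -15

# If CPU looks idle but load is high, suspect I/O wait:
#   vmstat 1 5      # watch the 'wa' and 'b' columns
#   iostat -x 1 3   # per-disk %util (needs the sysstat package)
```

High load with low CPU usage usually points at processes stuck waiting on disk or network rather than at CPU-hungry ones, which changes where to look next.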

memory – AMD Epyc on Windows slowdown under high single-threaded load

My company has just acquired a 128-core, dual-socket AMD EPYC system for analysis work on wind farms. We use a software package which launches individual, single-threaded, old Fortran executables in parallel.

When running with a ‘degree of parallelism’ in the range of 8-16 (so using 8-16 executables and the same number of cores), the simulations run in real time. Increasing this to 64 results in a slowdown to half real time; at 128, the slowdown is 5-fold.

We then installed Hyper-V as a test and spun up 8x 16-vCPU VMs with the software installed. Running 16 simulations in each of the VMs does not result in the same slowdown; all 128 run in parallel at roughly real time.

Why does this happen, and what can we do to make this machine run as quickly on bare metal?

Things we have tried:

  • Windows 10 Pro, Windows 10 Pro for Workstations, and Windows Server 2019 Datacenter; all show the same behaviour
  • Running a 128-core VM in Hyper-V and on KVM on Ubuntu; same behaviour, but the slowdown is smaller on the Ubuntu KVM VM.
  • Writing an in-house program which spreads the launch of the single-threaded applications onto separate NUMA nodes and cores on each NUMA node evenly on a round-robin basis (this showed an appreciable performance benefit but not as much as we would like to see)
  • We played around with NUMA settings in the BIOS; there are some differences, but none that make a huge impact.

The reason why VMs are not a workable solution is that the software is licensed per machine/MAC address, and licensing 8 VMs would be prohibitively expensive. We need to find a way of running this on bare metal with good performance.

Many thanks in advance to anyone who can shed any light on this.