Issues dual-booting Android x86 with Windows 10 on an HP laptop

I installed “android-x86_64-9.0-r2.iso” using the bootable USB drive method. I am facing two big problems.

  1. After the installation completes I get two options – Run Android or Reboot. I clicked Reboot and the computer booted straight into Windows 10; the GRUB menu asking which OS to boot does NOT appear. I am in UEFI boot mode with Secure Boot off, and my laptop does not support Legacy boot. What can I do about this? (See the bcdedit sketch after this list.)

  2. When I click Run Android after installation, or when I run Android live from the bootable USB drive, I don’t get a cursor on the screen: my touchpad does not work. The keyboard works, and connecting an external mouse works. I tried dual-booting on my previous Dell laptop and the touchpad worked there. Why doesn’t it work on my HP PC? (See the touchpad note after this list.)
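
For the missing GRUB menu: on a UEFI-only machine the firmware typically keeps Windows Boot Manager as the default entry, so the Android-x86 GRUB never gets a chance to run. One commonly suggested workaround is to point Windows Boot Manager itself at the Android loader. This is only a sketch: the \EFI\Android\grubx64.efi path below is an assumption, so mount the EFI System Partition and confirm where the installer actually put its loader first.

    rem Run from an elevated Command Prompt (cmd as Administrator).
    rem Mount the EFI System Partition as S: and look around first:
    mountvol S: /S
    dir S:\EFI
    rem ASSUMED path -- replace it with the loader you actually found above:
    bcdedit /set {bootmgr} path \EFI\Android\grubx64.efi

To undo it later, point the boot manager back at the Windows loader with bcdedit /set {bootmgr} path \EFI\Microsoft\Boot\bootmgfw.efi.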
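
For the touchpad: many newer HP laptops expose the touchpad as an I2C device that the Android-x86 9.0 kernel may not support at all, in which case only an external mouse will work. If it is a PS/2-style device, a commonly suggested (but not guaranteed) workaround is to boot with PS/2-controller quirk parameters: at the GRUB menu press e, find the line beginning with linux (or kernel), append the options below, then boot with Ctrl+x or F10.

    # Kernel command-line options to try, appended to the Android-x86 boot entry:
    i8042.reset i8042.nomux i8042.nopnp i8042.noloop

If they help, add them permanently to the installed system’s GRUB entry.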

I have searched the internet and many other forums about this and did not find any conclusive results or workable solutions. Any help would be appreciated.

googlebot – Mobile usability issues reported by Google in Search Console

Google Search Console reported that some pages have Mobile Usability issues:

  • Clickable elements too close together
  • Content wider than screen
  • Text too small to read

Some clarifications:

  • this is not about restrictions in robots.txt
  • some failing pages have fewer than 8 resources, totalling about 550 KB
  • the failing pages are a small, apparently random subset of the total
  • “LIVE TEST” on the same URL may fail randomly
  • there are no network problems (packet loss, response times, DNS) on the server where the site is deployed
  • the issue first appeared on April 14th
  • most of the failing pages score 90–99 for mobile performance in PageSpeed Insights or Lighthouse

Lighthouse embedded in Chrome, and PageSpeed Insights, find no problems in mobile mode on the same pages that have issues in Google Search Console.

But when I run “LIVE TEST” on the failed URLs in Google Search Console, I sometimes get the same failed result with Mobile Usability issues, where a CSS file was not loaded with the reason “Other error”. I think this is the cause of the issue, but I don’t understand what I can do to fix it, especially if it is due to some crawl-budget limitation on Google’s side.
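
If the renderer intermittently fails to load a CSS file with “Other error”, the usual suspicion is that Googlebot’s page-resource fetcher gave up on the asset rather than that the page itself is broken, which would also explain why Lighthouse sees nothing wrong. A hedged mitigation is to make static assets maximally cacheable so the render service can reuse them between fetches; a minimal sketch, assuming the site sits behind nginx (adapt the idea to Apache or a CDN otherwise):

    # Hypothetical nginx location block: long-lived caching for static assets,
    # so Googlebot's renderer can reuse CSS/JS instead of refetching every time.
    location ~* \.(css|js|woff2?)$ {
        add_header Cache-Control "public, max-age=31536000, immutable";
        access_log off;
    }

If the assets are already cacheable, the remaining lever is reducing how many render-blocking resources each page requests, which gives the fetcher fewer chances to drop one.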

synology – Two issues with iSCSI on Windows 10

I will fix WordPress issues, errors, and bugs in 5 hours for $5

Fix any bugs, issues, or errors in your WordPress website

About this service:

Please DON’T place an order directly. Prices are not fixed; they change according to the type of issue, so please contact me first before placing your order.

Hi! I have been working with WordPress for 2 years. I will fix any type of issue, error, or bug on your WP website, and I am an Elementor Pro and WPBakery expert, so I have very good experience at fixing WP issues quickly.

My services:

  1. Fix any kind of WP issue
  2. WordPress customization
  3. Fix HTML, CSS, PHP, and layout issues
  4. WordPress security
  5. Fix website speed issues
  6. Fix WooCommerce issues
  7. Fix Elementor, WPBakery, Divi, and other page-builder issues
  8. SEO, speed, and security configuration
  9. Content upload/editing
  10. Fix contact forms, sliders, headers, and footers

Any help with WordPress fixes | Issues | Errors | Bugs | Problems | HTML | CSS | PHP | Elementor Pro | Divi | WPBakery | WooCommerce |

If you have any questions, inbox me. I’m almost always online, so I reply super fast!

magento2.4 – CPU issues after upgrading from Magento 2.2.4 to Magento 2.4

We recently upgraded our site from Magento 2.2.4 to Magento 2.4.

We did the upgrade on a copy of our site on a test server first, and everything was fine.

When we upgraded the live server, page load times increased dramatically whenever 4+ people were on the site at the same time. This also crashed Elasticsearch, so we moved it to its own VPS, and ES works fine now. Before the upgrade, 15-20+ users online at one time wasn’t uncommon, and the server handled it fine.

With 2.2.4 we had a VPS with 6 GB RAM and 4 CPUs. Our hosting provider suggested we increase this when we ran into issues after the upgrade. We’re now at 8 CPUs and 12 GB RAM, and although that improved performance, load times from the server were still very long.

We now have Varnish running on a separate VPS, and while that has sped up load times, it’s still not good enough. Varnish has been running for the last 24 hours, and we’ve been getting 503 and 504 errors; our Magento developer tells me these are due to Varnish waiting too long for our Magento server to respond.

Our hosting company is now telling us we need a dedicated server. Is this necessary? Our Magento developer says the VPS should be fine if there are no issues with the server itself, while our hosting company insists the server is fine.

We’re unsure what to do, as we don’t have much confidence in our hosting company: they have just been telling us to upgrade our package without really investigating why we’re having these issues.
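
Before paying for a dedicated server, it is worth ruling out the classic post-upgrade misconfigurations, since Magento running in default/developer mode or with caches disabled will burn CPU on every request. A quick first check, using the standard bin/magento CLI from the Magento root (a diagnostic sketch, not a guaranteed fix):

    # Run from the Magento installation root on the live server.
    bin/magento deploy:mode:show   # a live site should report "production"
    bin/magento cache:status       # every cache type should show 1 (enabled)

    # If either looks wrong:
    bin/magento deploy:mode:set production
    bin/magento cache:enable

If both are already correct, profiling one slow request (New Relic, Blackfire, or any PHP profiler) will show whether the time goes to PHP, MySQL, or Elasticsearch, which is better evidence than either company’s guess.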

network – Macbook 2007 (A1181) Wi-Fi issues in Windows

I have:

  • MacBook A1181
  • Windows 8.1 in Boot Camp with the latest patches
  • All Boot Camp drivers installed
  • Keenetic Viva router with dual-band 2.4/5 GHz Wi-Fi, located ~1 m from the MacBook

The issue is that download/upload speed over Wi-Fi is very low in every app (~60 kbps) and the connection often drops. The Ethernet connection, on the other hand, works great.

The same MacBook works great on Wi-Fi when I boot macOS, but unfortunately I need Windows.

Other devices, like an M1 MacBook Pro (late 2020), iPhones, and lots of Android phones (even 2.4 GHz-only ones), work fine on the same router, so it seems to be a driver issue.

Any ideas on how to fix it?
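
One way to test the driver theory from inside Windows is to check what driver and link parameters the adapter is actually using; these are stock netsh commands in Windows 8.1:

    rem Run from a Command Prompt in Windows.
    rem Shows the Wi-Fi driver vendor/version and supported radio types:
    netsh wlan show drivers
    rem Shows the current band, channel, signal quality and negotiated link speed:
    netsh wlan show interfaces

If the negotiated receive/transmit rate is tiny despite full signal, that points at the driver or at band/channel negotiation; pinning the router’s 2.4 GHz radio to a fixed 20 MHz channel, or trying a newer driver for the card from the chipset vendor rather than the Boot Camp package, are common next steps.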

long exposure – Issues with dark frame subtraction: Dark frames adding “noise” and changing image color/tint

Instead of reducing noise, the dark-frame subtraction increased the noise; or rather, it added some dark/monochrome noise.

How long was your session? What was the ambient temperature? Was the camera at ambient temperature at the beginning of the session? At what point in the session did you take dark frames?

If the camera was at ambient temperature when you started, and the session was long enough that the sensor temperature rose significantly during it, then you need to take dark frames periodically throughout the session and apply each one to the light frames shot when the sensor was near the same temperature (see the worked example after the list below).

  • If you apply dark frames taken when the sensor was much cooler to light frames taken when the sensor was much warmer, there will be increased noise caused by the higher temperature that won’t be eliminated by the dark frame taken when the sensor was cooler.
  • If you apply dark frames taken when the sensor was much warmer to light frames taken when the sensor was much cooler, you will “eliminate” noise that wasn’t there in the first place and parts of the image will be darker than the background luminance of the sky.
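
In symbols, a simplified model (ignoring read and shot noise) makes both failure modes concrete: a light frame is roughly L(x) = S(x) + D_T(x), where S is the scene signal and D_T is the thermal dark current at sensor temperature T. Subtracting a dark frame taken at a different temperature T′ leaves

    L(x) − D_T′(x) = S(x) + [ D_T(x) − D_T′(x) ]

    T′ < T  →  positive residual left over (the extra dark “noise” you saw)
    T′ > T  →  negative residual (patches darker than the sky background)

Only when T′ ≈ T does the bracketed term vanish, which is why darks must be matched to lights by temperature.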

Instead of reducing noise, the dark-frame subtraction changed the white balance / tinted the image.

The color is different because without dark frame subtraction or any other noise reduction being applied, the predominant color of most astronomical photos will be the colors introduced by the chrominance noise in the image.

Photos were imported from my Pentax K1ii, converted to DNG in LR, and exported to PS without any editing/import presets applied.

When you imported to LR, your current default settings for LR would have been applied to the raw data. There’s no such thing as an “unedited” raw photo that looks anything like a photo on your screen. If you’re not telling the app how to interpret the linear monochrome luminance values contained in the raw data, you’re letting the app decide on its own.

Issues with Apache HTTPD 2.4 LocationMatch containing a ?

I am having trouble using a LocationMatch for a specific URL that contains a ?.

My current LocationMatch is
<LocationMatch "^/SOME/FOLDER/STRUCTURE/TEST/?cmd=logout">

The actual URL contains the ?, but I am having trouble getting this specific LocationMatch to work.

The error that I get is
AH01630: client denied by server configuration: /etc/httpd/htdocs, referer: https://URL/SOME/FOLDER/STRUCTURE/TEST/?cmd=logout
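
Two things are working against that directive. First, <LocationMatch>, like <Location>, only ever matches the URL path; the query string (everything after the ?) is stripped before matching, so cmd=logout can never be seen there. Second, inside the regex the ? is a quantifier making the preceding / optional, not a literal question mark. To branch on the query string in Apache 2.4, an <If> expression works; a hedged sketch, where Require all granted is a placeholder for whatever access rule you actually need:

    <Location "/SOME/FOLDER/STRUCTURE/TEST/">
        # ap_expr regex match against the query string, e.g. ...?cmd=logout
        <If "%{QUERY_STRING} =~ /(^|&)cmd=logout($|&)/">
            Require all granted
        </If>
    </Location>

The AH01630 message itself is Apache 2.4’s authorization denial, so also check which Require/authz rules the request falls through to when this match fails.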

nginx udp load balancer issues

I am trying to get nginx to load-balance UDP traffic. The upstream servers are configured for DSR and do not pass traffic back through nginx. I have a group of ports that I need to forward to the upstream servers while preserving the destination port. So the traffic reaches nginx; nginx looks at the list of upstream server IPs and picks one using “hash $remote_addr consistent”; then it sends the traffic to the chosen upstream server’s IP with the incoming client’s source IP and the original destination port it arrived on. The upstream server then receives the traffic as if it had never gone through nginx. Any thoughts?

I have tried using a range with listen 9000-9999; but it doesn’t work: it gives the error “host not found in “9000-9999” of the “listen” directive”. So I have a listen line for each port, which is a real pain.

stream {

    upstream stream_backend {
        hash $remote_addr consistent;    # pick the backend by client IP
        server 10.10.10.14:8999;
    }

    server { # use this for the upstream lb
        listen 8999 udp;                                  # "udp" is required, otherwise nginx listens on TCP
        proxy_pass stream_backend;
        proxy_bind $remote_addr:$remote_port transparent; # preserve the client source address for DSR
        proxy_responses 0;                                # DSR: expect no replies back through nginx
    }

    server { # test going directly to one IP
        listen 9000 udp;
        listen 9001 udp;
        listen 9002 udp;
        listen 9003 udp;
        # listen lines continue for the whole port range
        proxy_pass 10.10.10.30:$server_port;              # go directly to one server for testing
        proxy_bind $remote_addr:$remote_port transparent;
        proxy_responses 0;
    }

}
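
On the listen range: port ranges in the stream module’s listen directive only exist since nginx 1.15.10, and the documented examples put an address in front of the range; a bare 9000-9999 appears to get parsed as a hostname, which is exactly the “host not found” error above. A sketch, assuming nginx 1.15.10 or newer (0.0.0.0 stands in for your real listen address):

    server {
        # requires nginx >= 1.15.10; note the udp parameter for UDP traffic
        listen 0.0.0.0:9000-9999 udp;
        proxy_pass 10.10.10.30:$server_port;   # keep the original destination port
        proxy_bind $remote_addr:$remote_port transparent;
        proxy_responses 0;                     # DSR: no replies come back via nginx
    }

Also note the udp parameter added throughout the config above: without it, the stream module listens for TCP, so UDP packets to those ports never reach the proxy at all.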

ip address – eCom “licensing issues” so removing content – good idea? Any prevention?

My client has a large eCom site, and due to licensing issues they can’t sell in specific geographical IP ranges. They currently just remove the products, so the page is basically blank for visitors from those ranges.

Can anyone think of a “best practice” workaround?

Rather than zero content, perhaps we should just publish a bunch of keyword content – right?

Does any workaround come to mind? Thanks for all comments…
