How to use the CPU as a GPU on a VPS (Windows)?

I need to run a program on my VPS that requires a GPU, but the VPS has no GPU. I have been told by others to use a "CPU renderer" (software rendering) to make up for the lack of a GPU, but I cannot find any advice on how to do this.
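For what it's worth, one common approach (a sketch, not a guaranteed fix, and it only applies if the program renders through OpenGL): Mesa3D builds for Windows ship an opengl32.dll software rasterizer (llvmpipe) that can be dropped next to the program's executable so rendering runs entirely on the CPU. The paths below are assumptions:

```bat
rem Sketch (assumes a Mesa3D-for-Windows build extracted to C:\mesa3d,
rem and the OpenGL program installed in C:\MyProgram).
copy C:\mesa3d\x64\opengl32.dll C:\MyProgram\
rem The program now loads Mesa's CPU rasterizer instead of a GPU driver.
C:\MyProgram\program.exe
```

Expect it to be much slower than a real GPU; for Direct3D programs a different software device (e.g. WARP) would be needed instead.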

windows 7 – GPU “Error Code 43”

I need some help. Two days ago my 11-year-old HDD finally gave up; nothing strange, I saw it coming, so I disconnected it. Yesterday, when I turned on the PC, I saw nothing on the monitor, just a pure black screen, but I waited for something to happen anyway. Windows 7 was booting, because I heard the startup sound and saw the RGB lighting on my mouse and keyboard flickering (they do that until Windows is ready to use). I turned the PC off and on a couple of times and, voilà, it finally booted, but the screen resolution was horrible. The problem was with my GPU, because it reported “Error code 43”. I uninstalled my GPU’s (Nvidia GeForce GTX 560 Ti) drivers and installed them again, and I also removed one of my fans (as seen in the attached picture). I thought the problem was over, because the resolution was back at 1920×1080 and everything was working fine. Note: I don’t know which of those was the solution, maybe the fan or maybe the driver. Today the GPU only worked after I turned the PC off and on a couple of times, and I am sick and tired of it. Why is my GPU acting like this?

is there any way to get "CPU, GPU, RAM" info in JavaScript?

I am working on a small 2D game using Canvas + JavaScript. My idea is to get resource info and then print the values on screen, for example like Minecraft does:

(screenshot: Minecraft’s debug overlay showing CPU, GPU, and memory values)
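There is no single web API for this, but here is a sketch of what is reachable from a Canvas game. The APIs used are real, with caveats: `performance.memory` is non-standard (Chrome only) and reports the JS heap, not system RAM, and the GPU string from `WEBGL_debug_renderer_info` may be masked by the browser:

```javascript
// Sketch of a Minecraft-style debug readout using browser APIs.
function bytesToMB(bytes) {
  return Math.round(bytes / (1024 * 1024));
}

function getHardwareInfo() {
  const info = { cpuCores: "n/a", heapUsedMB: "n/a", gpu: "n/a" };

  // Logical CPU core count (widely supported).
  if (typeof navigator !== "undefined" && navigator.hardwareConcurrency) {
    info.cpuCores = navigator.hardwareConcurrency;
  }

  // JS heap usage, not total system RAM: the closest the web platform gets.
  if (typeof performance !== "undefined" && performance.memory) {
    info.heapUsedMB = bytesToMB(performance.memory.usedJSHeapSize);
  }

  // GPU renderer string via a WebGL debug extension (may be masked).
  if (typeof document !== "undefined") {
    const gl = document.createElement("canvas").getContext("webgl");
    const ext = gl && gl.getExtension("WEBGL_debug_renderer_info");
    if (ext) {
      info.gpu = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
    }
  }

  return info;
}
```

In the game loop you could then draw the values with `ctx.fillText(...)`. Note there is no web API for overall system CPU/GPU load; the usual substitute is measuring your own frame time via `requestAnimationFrame` timestamps.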

google chrome – Why am I still seeing “GPU Process” when “hardware acceleration” is turned off?

I have “Use hardware acceleration when available” turned off but in Task Manager, I still see “GPU Process” eating up over 1/3 of my CPU and a constantly increasing memory footprint.

Why am I still seeing “GPU Process” when “hardware acceleration” is turned off?


  • Chrome Version 86.0.4240.80 (Official Build) (x86_64)
  • macOS Catalina 10.15.7 (19H2)
  • MacBook Pro 2018

NiceHash Miner for CPU / GPU mining

For experienced miners, NiceHash has released a new version of its popular NiceHash Miner, specially designed to provide the highest possible hashing speed and fast updates, which is especially in demand among those who are no strangers to mining cryptocurrencies.

Changes from the previous version:

  • Added new algorithm: CuckaRooz29
  • Improved RPC Power Mode status messages
  • New implementation of AMD device monitoring


Preinstalled miner plugins:

  • ClaymoreDual
  • GMiner
  • LolMiner
  • NBMiner
  • Phoenix
  • XMRig

The latest version, NiceHash Miner v3.0.4.0, is intended for experienced miners. Note that some components of NiceHash Miner are often flagged by antivirus software, so exceptions may need to be added in your firewall or antivirus.

pci express – ESXi PCIe passthrough of GPU recognized in guest OS but not functional

I have an ESXi server and am having issues passing a Radeon RX 5700 through to a Windows 10 VM. Windows sees the GPU, but reports that it has stopped the device because it has reported problems (Code 43).

I deleted the first VM I created for this and created a new one but it produced the same error. I also made a Debian VM which also recognized the GPU, but it was unable to use it.

The server is based on a Supermicro X9SRL-F, a Xeon E5-2650v2, and 128GB of DDR3 ECC memory.

The VM has 8 GB of RAM (all reserved) and 2 cores (1 socket). IOMMU is not exposed to the VM. I have tried with and without the “hypervisor.cpuid.v0” parameter (set to false) in the VM’s configuration.

I have tried with and without adding the Vendor/Device ID for the GPU and associated HDMI audio channel to /etc/vmware/ with resetMethod set to default and fptShareable set to false.
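In other words, entries along these lines in ESXi's passthrough map file (a sketch; the device IDs shown are the ones commonly listed for the RX 5700 / Navi 10 and its HDMI audio function, not verified here, so check with `lspci -n`):

```
# /etc/vmware/passthru.map (format: vendor-id device-id resetMethod fptShareable)
1002  731f  default  false   # Radeon RX 5700 (Navi 10), assumed ID
1002  ab38  default  false   # Navi 10 HDMI audio, assumed ID
```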

“Above 4G Decoding” is enabled in BIOS.

There is an LSI00301 passed through to another VM (FreeNAS) on the same server and it has been working flawlessly for years.

The Radeon RX 5700 works when connected to a physical computer.

The GPU is connected to the server through a 1x to 16x PCIe Riser (v.009S) which has been confirmed to work with another computer.

I have tried different PCIe ports on the server.

The Windows 10 VM is fully updated (build 19041.508) and has Radeon Software Adrenalin 2020 Edition (v20.9.1) and VMware Tools (v10.3.10.12406962) installed.

There is a monitor connected to the GPU via HDMI.

The intended purpose of connecting the GPU to the server is for crypto-mining.

gpu – Unity take screenshot without blocking?

I’ve been trying to use Unity’s AsyncGPUReadback.Request (mainly from C#, but also through Jint in JavaScript) to take a basic screenshot without IEnumerator or yield return new WaitForEndOfFrame (or something like that, I don’t remember exactly); I only want to check the .done property in an Update loop. I have not been able to find any examples online, though I have read the documentation on it multiple times.

First, this attempt:

var cam = Camera.main;
RenderTexture rat = new RenderTexture(1920, 1080, 24);
cam.targetTexture = rat;
Texture2D actual = new Texture2D(1920, 1080, TextureFormat.RGB24, false);
RenderTexture.active = rat;
/* actual.ReadPixels(new Rect(0, 0, 1920, 1080), 0, 0); */
var reqt = AsyncGPUReadback.Request(rat);
System.Action fncc = null;
fncc = new System.Action(() => {
    if (reqt.done) {
        Debug.Log("well" + reqt.ToString());
        var data = reqt.GetData<byte>(0);
        Yaakov.removeEvent("Update", fncc);
    }
});
Achdus.Yaakov.on("LateUpdate", fncc); // just a custom helper that adds callbacks to the "Update" loop without MonoBehaviours...
cam.targetTexture = null;
RenderTexture.active = null;
var bite = actual.EncodeToPNG();
File.WriteAllBytes(path, bite);

gives me the error InvalidOperationException: Cannot access the data as it is not available, and I have absolutely no idea how to fix it.

By combining some functions from the FFmpegOut library with another script I found on GitHub, I almost got something:

static Material blat;

public static void GetScreenshot(string path, System.Func<object, object> fnc)
{
    var cam = Camera.main;
    var form = cam.allowHDR ? RenderTextureFormat.DefaultHDR : RenderTextureFormat.Default;
    int aal = cam.allowMSAA ? QualitySettings.antiAliasing : 1;
    int width = 1920;
    int height = 1080;
    RenderTexture rt;
    GameObject bl;
    if (cam.targetTexture == null)
    {
        rt = new RenderTexture(width, height, 24, form);
        rt.antiAliasing = aal;
        cam.targetTexture = rt;
        bl = FFmpegOut /* rest of this call truncated in the original */;
    }
    if (blat == null)
    {
        var shayd = Shader.Find(/* shader name truncated in the original */);
        blat = new Material(shayd);
    }
    var temp = RenderTexture.GetTemporary(width, height, 0, form);
    var raq = UnityEngine.Rendering.AsyncGPUReadback.Request(temp);
    System.Action fncc = null;
    var k = 0;
    fncc = new System.Action(() => {
        if (raq.done)
        {
            Debug.Log("ok man" + raq);
            var newTaxt = new Texture2D(width, height, TextureFormat.ARGB32, false);
            Yaakov.removeEvent("Update", fncc); // not important here, just imagine it's all in another Update
        }
    });
    Achdus.Yaakov.on("Update", fncc); // same as earlier: the above runs in an "Update" loop, just without MonoBehaviours...
}

which doesn’t give any errors but returns this
(screenshot of the output: an essentially blank image)
Don’t see much? Neither do I…

So does anyone actually know how to take a simple screenshot with AsyncGPUReadback.Request WITHOUT any IEnumerators?

And BTW, for those wondering why not IEnumerators: many reasons, but also, when I try to use the script from GitHub in Unity 2020, I literally get no result:

using UnityEngine;
using UnityEngine.Rendering;
using System.IO;
using System.Collections;

public class AsyncCapture : MonoBehaviour
{
    IEnumerator Start()
    {
        while (true)
        {
            yield return new WaitForSeconds(1);
            yield return new WaitForEndOfFrame();

            var rt = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.ARGB32);
            ScreenCapture.CaptureScreenshotIntoRenderTexture(rt);
            AsyncGPUReadback.Request(rt, 0, TextureFormat.ARGB32, OnCompleteReadback);
            RenderTexture.ReleaseTemporary(rt);
        }
    }

    void OnCompleteReadback(AsyncGPUReadbackRequest request)
    {
        if (request.hasError)
        {
            Debug.Log("GPU readback error detected.");
            return;
        }

        var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.ARGB32, false);
        tex.LoadRawTextureData(request.GetData<uint>());
        tex.Apply();
        File.WriteAllBytes("test.png", ImageConversion.EncodeToPNG(tex));
        Destroy(tex);
    }
}

it just seems to hang forever… but regardless I want to do this without IEnumerators
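For reference, here is a minimal sketch of the .done-polling approach done from a MonoBehaviour's LateUpdate instead of a coroutine. The class name and key binding are made up for illustration, and `ScreenCapture.CaptureScreenshotIntoRenderTexture` requires Unity 2019.1+:

```csharp
// Sketch: poll AsyncGPUReadbackRequest.done each frame; no IEnumerators.
using System.IO;
using UnityEngine;
using UnityEngine.Rendering;

public class PollingCapture : MonoBehaviour
{
    AsyncGPUReadbackRequest _request;
    bool _pending;

    void LateUpdate()
    {
        // Kick off a readback on demand (hypothetical trigger key).
        if (Input.GetKeyDown(KeyCode.S) && !_pending)
        {
            var rt = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.ARGB32);
            ScreenCapture.CaptureScreenshotIntoRenderTexture(rt);
            _request = AsyncGPUReadback.Request(rt, 0, TextureFormat.ARGB32);
            RenderTexture.ReleaseTemporary(rt);  // the readback is already queued
            _pending = true;
        }

        // Poll instead of yielding: .done flips once the GPU copy lands.
        if (_pending && _request.done)
        {
            _pending = false;
            if (!_request.hasError)
            {
                var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.ARGB32, false);
                tex.LoadRawTextureData(_request.GetData<byte>());
                tex.Apply();
                File.WriteAllBytes("screenshot.png", tex.EncodeToPNG());
                Destroy(tex);
            }
        }
    }
}
```

The key detail is that the data must be copied out of the request (LoadRawTextureData) only after .done is true and before the frame ends, since the request's native buffer is recycled.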

dedicated server in the US with integrated GPU

I am looking to rent 4 dedicated servers with an integrated GPU, need multiple locations in the US.

Servers that I had in the past had th…

Unity Build GPU Performance – Game Development Stack Exchange

I have been banging my head against the wall with this for a few days now with no improvement.

The problem is that after building, my project keeps using over 30% of the GPU; even in the editor it takes 20%+.
I ended up making a new empty scene with a few cubes with rigidbodies, but GPU usage still went over 30%. It starts at 3–5% and within the next 5–10 seconds starts climbing.

I am using Unity 2019.1.12f1, but the problem is still there even on the newest Unity 2020 version.
The project is using LWRP.

I have played around with the quality settings, but they seem to have absolutely no effect on the issue, besides turning V-Sync on, which capped the framerate at acceptable levels.
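Since V-Sync is what capped it, the uncapped frame rate itself may be the cause (an empty scene renders as fast as the GPU allows). A minimal sketch of capping the frame rate in script instead of relying on V-Sync (the 60 FPS target is an arbitrary choice):

```csharp
// Sketch: cap frame rate so an empty scene doesn't render uncapped.
using UnityEngine;

public class FrameCap : MonoBehaviour
{
    void Awake()
    {
        // targetFrameRate is ignored while V-Sync is active, so disable it first.
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = 60;
    }
}
```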

I am using Windows 10, and my video card is a GeForce GTX 1080 Ti.

Here are some screenshots + the project build



classes – Creating a C++ style class for storing a data on 2D grid on GPU memory

I would like to create a class that represents a field of values on a 2D grid. I want to access the elements with a double index, A(i)[j], and inside __global__ or __device__ functions access the dimensions of the field, nx_ and ny_, because I do not want to pass them as arguments to every function.
To execute multiple threads on this structure, as far as I understand, I have to pass it as a pointer to device memory. I have come up with the following solution, but it does not look very beautiful or efficient.

template <typename T>
struct Field
{
    DEVHOST Field(size_t nx, size_t ny) :
        nx_(nx), ny_(ny)
    {
        data_ = reinterpret_cast<T*>(malloc(nx_ * ny_ * sizeof(T)));
    }

    DEVHOST ~Field()
    {
        free(data_);
    }

    DEVHOST size_t size() const
    {
        return nx_ * ny_;
    }

    DEVHOST T* operator()(size_t idx)
    {
        return data_ + idx * nx_;
    }

    DEVHOST const T* operator()(size_t idx) const
    {
        return data_ + idx * nx_;
    }

    size_t nx_;
    size_t ny_;

    T* data_;
};

template <typename T>
KERNEL void init_kernel(Field<T>* f, size_t nx, size_t ny)
{
    f = new Field<T>(nx, ny);
}

template <typename T>
KERNEL void delete_kernel(Field<T>* f)
{
    delete f;
}

In order to create an instance of Field<T> to work with further, I need to call a kernel that initializes it, and then another kernel to delete the object in device memory. The host code will look like:

Field<float>* f;
init_kernel<<<1,1>>>(f, 100, 100);

What would be a more robust and clever way to implement the desired functionality? I would appreciate any suggestions, or references to good practices of CUDA programming, except for the
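One commonly recommended alternative (a sketch under assumed names, not the only way): keep Field a plain value type, allocate data_ with cudaMalloc on the host, and pass the small struct to kernels by value, so nx_ and ny_ travel with the pointer and no init/delete kernels are needed:

```cpp
// Sketch: host-managed Field<T> passed to kernels by value.
// make_field/free_field are assumed helper names; error checking omitted.
#include <cuda_runtime.h>
#include <cstddef>

template <typename T>
struct Field {
    size_t nx_, ny_;
    T* data_;  // device pointer, owned by the host-side helpers below

    __host__ __device__ size_t size() const { return nx_ * ny_; }
    __device__ T* operator()(size_t i) { return data_ + i * nx_; }
    __device__ const T* operator()(size_t i) const { return data_ + i * nx_; }
};

template <typename T>
Field<T> make_field(size_t nx, size_t ny) {
    Field<T> f{nx, ny, nullptr};
    cudaMalloc(&f.data_, nx * ny * sizeof(T));
    return f;  // the struct itself is tiny and cheap to copy
}

template <typename T>
void free_field(Field<T>& f) {
    cudaFree(f.data_);
    f.data_ = nullptr;
}

// Passing Field by value means nx_/ny_ arrive together with the data
// pointer, so kernels need no extra size arguments.
template <typename T>
__global__ void fill_kernel(Field<T> f, T value) {
    size_t idx = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (idx < f.size()) f.data_[idx] = value;
}

// Host usage:
//   Field<float> f = make_field<float>(100, 100);
//   fill_kernel<<<(f.size() + 255) / 256, 256>>>(f, 1.0f);
//   free_field(f);
```

This also sidesteps the bug in init_kernel above: assigning `f = new Field<T>(...)` inside the kernel only changes the kernel's local copy of the pointer, so the host never sees the allocation.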