drivers – AMD FirePro W5130M GPU not working in elementary OS

I can’t set up the AMD FirePro W5130M GPU on my laptop. The Linux driver on AMD’s site is from 2015, which is too old, and its installer gives:

=====================================================================

AMD Proprietary Driver Installer/Packager

=====================================================================

error: Detected X Server version ‘XServer _64a’ is not supported. Supported versions are X.Org 6.9 or later, up to XServer 1.10 (default:v2:x86_64:lib:XServer _64a:none:4.15.0-36-generic:) Installation will not proceed.

But when I use sudo lshw -C display, it shows a completely different GPU.

How do I install the driver for the AMD FirePro W5130M?
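
For reference, the only diagnostics I know to run so far are these (just a sketch; I can post the full output if it helps):

Xorg -version              # the X server release the 2015 AMD installer is rejecting
sudo lshw -C display       # the GPU the kernel actually detects
lspci -nn | grep -i vga    # PCI vendor/device IDs of the graphics adapter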


windows 10 – GPU causing black screen

I bought a used Radeon RX 470 GPU from China. Sometimes it works great, but usually, at seemingly random moments, the screen goes black while the computer keeps running. I have Windows 10 Pro installed.

I’ve tried using different cables, monitors, etc., and I’ve narrowed it down to the GPU: the problem does not occur with another GPU in the same computer.

I installed the proper drivers for this GPU and as far as I can tell, I’m doing everything correctly.

The black screen comes at random; I haven’t noticed any pattern. Sometimes it happens immediately after booting, sometimes the card runs for hours before the screen goes black, and it doesn’t seem to be tied to heavy load or anything. Just random.

Please help

macbook pro – MBP GPU Panic-less Logic Boards

I inherited a Mid-2010 15″ MacBook Pro (2.66 GHz Core i7 (i7-620M), A1286) from a relative a few years ago, and since then it has started GPU-panicking (à la here, here, etc.). AppleCare has obviously expired, and wouldn’t be an option anyway, since they’d inevitably ask for the receipt, which I doubt my relatives would ever be able to find 10 years later. I’m fairly certain I wouldn’t be able to pull off the “make-a-fuss” approach suggested by a lot of answerers.

The standard fix for this nowadays seems to be gfx.io (which is now the only software option, thanks to Apple discontinuing their software patch. Grrr). The problems with this are:

  1. it forcefully disables the NVIDIA adapter (shifting graphics handling onto the CPU, which is not ideal when I want to use external monitors), and
  2. it and similar patches have not always been reliable (I have memories of it not working consistently in the past).

I am aware of numerous other potential fixes (Kext purge, PRAM reset etc), but I have run through all of these at one point or another.

Anyway, I was thinking about the process Apple used when they were running the official replacement program. Although I’ve read in some places (comment) that it was the logic board that was replaced, this article suggests it was in fact the “defective video card”. Either way, I’m assuming they just replaced the defective board with a newer revision – meaning that parts sites (like this) probably have these revised boards knocking about that won’t suffer the same issue.

Considering the still-incredible spec and condition of this machine, I don’t mind spending the money for a new logic board if it means I can reliably use macOS on it. Of course, I’m only willing to spend the money if I’m sure it will actually solve the issue.

So my question is two-part:

  1. Was it definitely the logic board that caused this fault?
  2. What is the part number for a logic board that is known to not suffer from the GPU Panic issue (assuming it actually entered circulation)?

Force headless Chromium/Chrome to use the actual GPU instead of Google SwiftShader

I’m trying to print HTML to PDF using headless Chromium (via Puppeteer), and everything works fine except when the HTML contains large PNG images (over 10,000×10,000 px): rendering the page then takes extremely long (up to half an hour, whereas in non-headless mode it takes only about 10 seconds). After days of investigating and tweaking, I’ve come to the conclusion that this must be an issue with the page compositing process.

Below are dumps from the chrome://gpu page in headless and non-headless modes. The only significant difference I’ve noticed is that, when running Chrome headlessly, Puppeteer itself appends --disable-gpu-compositing and --allow-pre-commit-input, which I believe are responsible for the dramatic performance drop.

Also, in non-headless mode Chrome sees two GPUs:

GPU0 VENDOR= 0x10de, DEVICE=0x1d01 *ACTIVE*
GPU1 VENDOR= 0x8086, DEVICE=0x1912

and in headless mode only one:

GPU0 VENDOR= 0xffff (Google Inc.), DEVICE=0xffff (Google SwiftShader) *ACTIVE*

which is a CPU-based implementation of the Vulkan and OpenGL ES graphics APIs.

So basically my question is:

Is there any way to run headless Chrome/Chromium with Puppeteer using the actual GPU (especially for GPU compositing), or is there any way to print a page to PDF in non-headless mode?

Here is my non-headless Chrome GPU config (where page rendering is fast):

Canvas: Hardware accelerated
Flash: Hardware accelerated
Flash Stage3D: Hardware accelerated
Flash Stage3D Baseline profile: Hardware accelerated
Compositing: Hardware accelerated
Multiple Raster Threads: Force enabled
Out-of-process Rasterization: Hardware accelerated
OpenGL: Enabled
Hardware Protected Video Decode: Unavailable
Rasterization: Hardware accelerated on all pages
Skia Renderer: Enabled
Video Decode: Unavailable
Vulkan: Disabled
WebGL: Hardware accelerated
WebGL2: Hardware accelerated

Chrome version: Chrome/83.0.4103.0
Operating system: Linux 4.13.0-46-generic
2D graphics backend: Skia/83 8ce842d38d0b32149e874d6855c91e8c68ba65a7

Command line:
/home/wojtas/projects/project-generator/node_modules/puppeteer/.local-chromium/linux-756035/chrome-linux/chrome 
--disable-background-networking 
--enable-features=NetworkService,NetworkServiceInProcess 
--disable-background-timer-throttling 
--disable-backgrounding-occluded-windows 
--disable-breakpad 
--disable-client-side-phishing-detection 
--disable-component-extensions-with-background-pages 
--disable-default-apps 
--disable-dev-shm-usage 
--disable-extensions 
--disable-features=TranslateUI 
--disable-hang-monitor 
--disable-ipc-flooding-protection 
--disable-popup-blocking 
--disable-prompt-on-repost 
--disable-renderer-backgrounding 
--disable-sync 
--force-color-profile=srgb 
--metrics-recording-only 
--no-first-run
--enable-automation 
--password-store=basic 
--use-mock-keychain 
--disable-web-security 
--user-data-dir=/var/www/project-generator/var/chrome-user-data 
--allow-file-access-from-files 
--no-sandbox
--no-sandbox-and-elevated 
--no-zygote 
--enable-webgl 
--use-gl=desktop 
--use-skia-renderer 
--enable-gpu-rasterization 
--enable-zero-copy 
--disable-gpu-sandbox 
--enable-native-gpu-memory-buffers 
--disable-background-timer-throttling 
--disable-backgrounding-occluded-windows 
--disable-renderer-backgrounding 
--ignore-certificate-errors 
--enable-hardware-overlays 
--num-raster-threads=4 
--default-tile-width=512 
--default-tile-height=512 
--enable-oop-rasterization 
--remote-debugging-port=0 
--flag-switches-begin 
--flag-switches-end 
--enable-audio-service-sandbox 

And here is the headless Chrome GPU config (which is extremely slow):

 Graphics Feature Status
Canvas: Hardware accelerated
Flash: Hardware accelerated
Flash Stage3D: Hardware accelerated
Flash Stage3D Baseline profile: Hardware accelerated
Compositing: Software only. Hardware acceleration disabled
Multiple Raster Threads: Force enabled
Out-of-process Rasterization: Hardware accelerated
OpenGL: Enabled
Hardware Protected Video Decode: Unavailable
Rasterization: Hardware accelerated on all pages
Skia Renderer: Enabled
Video Decode: Unavailable
Vulkan: Disabled
WebGL: Hardware accelerated but at reduced performance
WebGL2: Hardware accelerated but at reduced performance

Chrome version: HeadlessChrome/83.0.4103.0
Operating system: Linux 4.13.0-46-generic
2D graphics backend: Skia/83 8ce842d38d0b32149e874d6855c91e8c68ba65a7

Command Line:
/home/wojtas/projects/project-generator/node_modules/puppeteer/.local-chromium/linux-756035/chrome-linux/chrome 
--disable-background-networking 
--enable-features=NetworkService,NetworkServiceInProcess 
--disable-background-timer-throttling 
--disable-backgrounding-occluded-windows 
--disable-breakpad 
--disable-client-side-phishing-detection 
--disable-component-extensions-with-background-pages 
--disable-default-apps 
--disable-dev-shm-usage 
--disable-extensions 
--disable-features=TranslateUI 
--disable-hang-monitor
--disable-ipc-flooding-protection 
--disable-popup-blocking 
--disable-prompt-on-repost 
--disable-renderer-backgrounding 
--disable-sync 
--force-color-profile=srgb
--metrics-recording-only 
--no-first-run 
--enable-automation 
--password-store=basic 
--use-mock-keychain 
--headless 
--hide-scrollbars 
--mute-audio 
--disable-web-security 
--user-data-dir=/var/www/project-generator/var/chrome-user-data 
--allow-file-access-from-files 
--no-sandbox 
--no-sandbox-and-elevated 
--no-zygote 
--enable-webgl 
--use-gl=desktop 
--use-skia-renderer 
--enable-gpu-rasterization 
--enable-zero-copy 
--disable-gpu-sandbox 
--enable-native-gpu-memory-buffers 
--disable-background-timer-throttling 
--disable-backgrounding-occluded-windows 
--disable-renderer-backgrounding 
--ignore-certificate-errors 
--enable-hardware-overlays 
--num-raster-threads=4 
--default-tile-width=512 
--default-tile-height=512
--enable-oop-rasterization 
--remote-debugging-port=0 
--disable-gpu-compositing 
--allow-pre-commit-input 
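
One idea I have not managed to verify yet is to stop Puppeteer from injecting --disable-gpu-compositing in the first place, using the ignoreDefaultArgs launch option (a rough sketch, assuming that flag really does come from Puppeteer’s default argument list; the file URL is just a placeholder):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: true,
    // Filter out only the default flag that seems to disable GPU compositing;
    // every other Puppeteer default is kept.
    ignoreDefaultArgs: ['--disable-gpu-compositing'],
    args: ['--enable-gpu-rasterization', '--use-gl=desktop'],
  });
  const page = await browser.newPage();
  await page.goto('file:///path/to/page.html', { waitUntil: 'networkidle0' });
  await page.pdf({ path: 'out.pdf', printBackground: true });
  await browser.close();
})();

I have not confirmed whether this alone is enough to get real GPU compositing in headless mode, or whether headless Chromium still falls back to SwiftShader.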

materials – Unity: GPU Instancing with different textures?

I’m making use of Unity’s GPU instancing in my 2D game, and it really does reduce draw calls significantly when using the same material and the same sprite (texture) on all batched GameObjects.

I’m changing colors using MaterialPropertyBlock and everything works fine.
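
For reference, this is roughly how I set the per-renderer color (a simplified sketch of my actual component; the class and field names are made up):

using UnityEngine;

public class InstanceColor : MonoBehaviour
{
    public Color color = Color.white;

    void Start()
    {
        // Set _BaseColor per renderer via a MaterialPropertyBlock, so no new
        // material instance is created and the renderers can still be instanced together.
        var block = new MaterialPropertyBlock();
        block.SetColor("_BaseColor", color);
        GetComponent<Renderer>().SetPropertyBlock(block);
    }
}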

But is there a way to do that with different sprites (textures) on the batched GameObjects’ material?

What if I used a sprite atlas to gather all the sprites together? Would that make GPU instancing possible with different sprites?

Material’s Shader:

Shader "Unlit/UnlitTransparent"
{
Properties
{
    [NoScaleOffset] _MainTex ("Texture", 2D) = "white" {}
    // "PerRendererData" hides the property from the Material inspector
    // (no performance impact, it only hides the property)
    [PerRendererData] _BaseColor ("Color", Color) = (1, 1, 1, 1)
}
SubShader
{
    Tags { "Queue"="Transparent" "RenderType"="Transparent" 
    "IgnoreProjector"="True" "CanUseSpriteAtlas"="True" }
    ZWrite OFF
    Blend SrcAlpha OneMinusSrcAlpha
    LOD 100

    Pass
    {
        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        // make fog work
        //#pragma multi_compile_fog
        #pragma multi_compile_instancing
        #include "UnityCG.cginc"

        struct appdata
        {
            float4 vertex : POSITION;
            float2 uv : TEXCOORD0;
            UNITY_VERTEX_INPUT_INSTANCE_ID
        };

        struct v2f
        {
            float2 uv : TEXCOORD0;
            //UNITY_FOG_COORDS(1)
            float4 vertex : SV_POSITION;
            UNITY_VERTEX_INPUT_INSTANCE_ID
        };
        
        UNITY_INSTANCING_BUFFER_START(Props)
            UNITY_DEFINE_INSTANCED_PROP(fixed4, _BaseColor)
        UNITY_INSTANCING_BUFFER_END(Props)

        sampler2D _MainTex;
        float4 _MainTex_ST;

        v2f vert (appdata v)
        {
            v2f o;
            
            UNITY_SETUP_INSTANCE_ID(v);
            UNITY_TRANSFER_INSTANCE_ID(v, o);
            
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.uv = v.uv;
            //UNITY_TRANSFER_FOG(o,o.vertex);
            return o;
        }

        fixed4 frag (v2f i) : SV_Target
        {
            UNITY_SETUP_INSTANCE_ID(i);
            // sample the texture
            fixed4 col = tex2D(_MainTex, i.uv) * 
            UNITY_ACCESS_INSTANCED_PROP(Props, _BaseColor);
            // apply fog
            //UNITY_APPLY_FOG(i.fogCoord, col);
            return col;
        }
        ENDCG
    }
}
}

drivers – AMD CPU + Nvidia GPU on a fresh Ubuntu 20.04 system

I was wondering how I can set my system up so that I can use my NVIDIA dGPU for demanding tasks and the AMD iGPU for simple tasks. I managed to make this work on Manjaro using prime-run, but I can’t figure it out on Ubuntu, which I am required to use.

CPU: AMD Ryzen 7: 4800HS
GPU: Nvidia GeForce RTX 2060 with Max-Q

lspci | grep VGA output:

01:00.0 VGA compatible controller: NVIDIA Corporation TU106 (GeForce RTX 2060) (rev a1)
05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. (AMD/ATI) Renoir (rev c6)

As far as I know, the best option is prime render offload, but what are the steps to make it work on a fresh Ubuntu 20.04 system? I would really appreciate a step-by-step answer.

lspci -k | grep -EA3 'VGA|3D|Display' output:

01:00.0 VGA compatible controller: NVIDIA Corporation TU106 (GeForce RTX 2060) (rev a1)
    Subsystem: ASUSTeK Computer Inc. Device 1e11
    Kernel driver in use: nvidia
    Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
--
05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. (AMD/ATI) Renoir (rev c6)
    Subsystem: ASUSTeK Computer Inc. Renoir
    Kernel modules: amdgpu
05:00.1 Audio device: Advanced Micro Devices, Inc. (AMD/ATI) Device 1637

I am wondering because glxinfo | grep OpenGL gives:

OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 9.0.1, 128 bits)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 20.0.4
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.1 Mesa 20.0.4
OpenGL shading language version string: 1.40
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 20.0.4
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10

So there is nothing about NVIDIA.

I have been trying different solutions for a week already, and nothing has worked for me on Ubuntu.

I can provide any other information if required.
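
For completeness, this is roughly what I was planning to try next, pieced together from the NVIDIA PRIME render offload documentation (just a sketch: the driver version is only an example, and I am not sure prime-select treats an AMD iGPU the same way it treats an Intel one):

# See which driver packages Ubuntu recommends for the RTX 2060, then install one
ubuntu-drivers devices
sudo apt install nvidia-driver-440        # version is only an example; use the recommended one

# With the nvidia-prime package installed, select on-demand mode so the AMD iGPU
# renders the desktop and the dGPU is only used when explicitly requested
sudo prime-select on-demand

# After a reboot, test render offload for a single program
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"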

Thanks in advance!

unity – Saving mesh and instance data on the GPU?

I am trying to optimize drawing a large number of voxels by using GPU instancing in Unity.
I am currently using Graphics.DrawMeshInstanced in a script’s Update(), but it occurred to me that I am still sending data from the CPU to the GPU every frame, even though my voxel positions are static.
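
Roughly, my current setup looks like this (a simplified sketch; CollectVoxelPositions stands in for my real voxel data):

using System.Collections.Generic;
using UnityEngine;

public class VoxelRenderer : MonoBehaviour
{
    public Mesh voxelMesh;          // shared cube mesh
    public Material voxelMaterial;  // material with "Enable GPU Instancing" ticked

    // Built once in Start(); the voxel positions never change afterwards.
    private readonly List<Matrix4x4[]> batches = new List<Matrix4x4[]>();

    void Start()
    {
        List<Vector3> positions = CollectVoxelPositions();
        // DrawMeshInstanced accepts at most 1023 matrices per call, so split into batches.
        for (int i = 0; i < positions.Count; i += 1023)
        {
            int count = Mathf.Min(1023, positions.Count - i);
            var matrices = new Matrix4x4[count];
            for (int j = 0; j < count; j++)
                matrices[j] = Matrix4x4.Translate(positions[i + j]);
            batches.Add(matrices);
        }
    }

    void Update()
    {
        // The matrix arrays are cached on the CPU, but as far as I can tell
        // they are still handed to the GPU again on every call.
        foreach (Matrix4x4[] matrices in batches)
            Graphics.DrawMeshInstanced(voxelMesh, 0, voxelMaterial, matrices, matrices.Length);
    }

    // Placeholder for however the voxel positions are generated.
    private List<Vector3> CollectVoxelPositions()
    {
        return new List<Vector3>();
    }
}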

If I can provide a guarantee that there will be no changes at all to the voxels, is there any way to keep the mesh data and instance positions in memory on the GPU and avoid a CPU -> GPU bottleneck? (I found this, not sure if it is relevant: https://docs.unity3d.com/ScriptReference/Mesh.UploadMeshData.html)

If so, how would I instance such meshes to draw multiple copies of them? If it’s not possible, is there any way I can better optimize drawing large numbers of identical meshes whose positions do not change?

c# – What game engines don’t require a GPU?

My computer isn’t really great, and I was wondering if there are game engines that support development on my laptop’s specs. While the specs aren’t trash, most game engines require a GPU, which I unfortunately don’t have.

I’ve seen libGDX and MonoGame, which are Java and C# game frameworks respectively, and which do meet my specs.

My development experience isn’t low, as I have made a couple of simple games and an AI in Python, and I have a little experience with Java and C# (sorting algorithms).

If any of you know of game engines that work with the specs below, I’ll be grateful. Thanks!

My specs are :
1.8 GHz processor
8 GB ram
256 GB SSD
No GPU

PS: it’ll be great if the engine runs C# or Java, as I have experience with them, but any other language will work. Thanks!

Thanks in advance!

Dedicated server with iGPU or GPU in the USA


Hi,

I am searching for a server provider with specs:

min. an Intel Core i7-7700K-class CPU (I would use the server as a Plex server for my family and me, and I need HEVC hardware transcoding), or a server with a newer NVIDIA GPU

min. 16 GB RAM

480 GB SSD or more

30 TB bandwidth (I would probably be okay with less, but let’s say 30 TB is enough)

1 Gbps or 10 Gbps upload

Location: USA

Budget around $100; if it can be cheaper, that would be great

I would use the server as my main Plex server. I need a newer CPU or GPU for HEVC transcoding, because my users transcode HEVC too often and older CPUs are bad at it.

video – Is there a way to programmatically retrieve total VRAM, free VRAM and GPU usage?

I am looking at profiling my app, but there is no facility that I have been able to find that gives me VRAM data for my ATI GPU, such as total VRAM on board, used and free VRAM, and GPU load as a percentage.

Maybe I can use third-party software, but I wonder if it is possible to use the terminal or something like Python to get this info. On Windows the issue does not seem to be present, but on OS X, for some reason, getting such data seems to be quite involved, especially for ATI cards.
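
So far the closest I have come is scraping system_profiler from Python, which only gives the total VRAM, not the free amount or the load (a sketch; the ioreg idea in the comments at the end is untested on ATI cards):

import subprocess

# Total VRAM per GPU: system_profiler lists it under "Graphics/Displays".
# This only reports the amount on the board, not the free/used split.
out = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "Chipset Model" in line or "VRAM" in line:
        print(line.strip())

# A possible source for load / used-VRAM figures: some GPU drivers publish a
# "PerformanceStatistics" dictionary on their IOAccelerator entry in the
# I/O Registry, e.g.
#   ioreg -r -d 1 -w 0 -c IOAccelerator
# but which keys appear (utilization, VRAM usage) depends on the driver,
# and I have not been able to confirm what the ATI driver exposes.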