Questions tagged [gpu]
26 questions
43 votes, 7 answers
In software programming, would it be possible to have both CPU and GPU loads at 100%?
This is a general question on a subject I've found interesting as a gamer: CPU/GPU bottlenecks and programming. If I'm not mistaken, I've come to understand that both CPU and GPU calculate stuff, but that one is better in some calculations than the…

Azami (549)
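A point that often comes up under this question: CUDA kernel launches are asynchronous with respect to the host, so a program really can keep both processors fully loaded at the same time. A minimal sketch, with placeholder workloads of my own standing in for real CPU and GPU work:

```
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder GPU workload: spin over the data doing arithmetic.
__global__ void gpuWork(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = data[i];
        for (int k = 0; k < 10000; ++k) x = x * 1.000001f + 0.5f;
        data[i] = x;
    }
}

int main() {
    const int n = 1 << 20;
    float* d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));

    // The launch is asynchronous: control returns to the CPU immediately,
    // so both processors can be busy at once.
    gpuWork<<<(n + 255) / 256, 256>>>(d_data, n);

    // Placeholder CPU workload running concurrently with the kernel.
    double acc = 0.0;
    for (long k = 0; k < 100000000L; ++k) acc += k * 0.5;

    cudaDeviceSynchronize();   // wait for the GPU before using its results
    printf("CPU result %f, GPU done\n", acc);
    cudaFree(d_data);
    return 0;
}
```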
17 votes, 7 answers
When should I be offloading work to a GPU instead of the CPU?
Newer systems such as OpenCL are being made so that we can run more and more code on our graphics processors, which makes sense, because we should be able to utilise as much of the power in our systems as possible.
However, with all of these new…

RétroX (1,821)
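For the trade-off raised in this question, the usual rule of thumb is that the copies to and from the device have to be amortised by enough parallel work. A minimal CUDA sketch (the question mentions OpenCL; CUDA is used here only as an assumed stand-in) in which that transfer overhead is explicit:

```
#include <cuda_runtime.h>
#include <vector>

// Element-wise add: so little arithmetic per byte that the PCIe transfers
// usually dominate; larger or more compute-heavy kernels fare better.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));

    // Host-to-device copies: part of the cost of offloading.
    cudaMemcpy(da, ha.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // Device-to-host copy: the other half of the transfer overhead.
    cudaMemcpy(hc.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```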
10 votes, 6 answers
Examples of general purpose algorithms that have benefited from running on a GPU?
I am looking for examples of general purpose algorithms (meaning non-graphics related) that have been proven to run an order of magnitude faster on a GPU than on a CPU. I will use these examples to think creatively about other algorithms that I…

David (4,449)
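One frequently cited non-graphics example is a parallel reduction (summing a large array), which maps naturally onto thousands of GPU threads. A rough CUDA sketch of one reduction pass, offered only as an illustration of the kind of algorithm being asked about:

```
#include <cuda_runtime.h>

// Each block of 256 threads reduces 256 elements to one partial sum in
// shared memory; the host (or another launch) then sums the partials.
// Launch with blockDim.x == 256.
__global__ void reduceSum(const float* in, float* partial, int n) {
    __shared__ float buf[256];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buf[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block: half the threads drop out each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) buf[tid] += buf[tid + stride];
        __syncthreads();
    }
    if (tid == 0) partial[blockIdx.x] = buf[0];
}
```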
8 votes, 1 answer
Is hardware-accelerated GUI data kept on the GPU?
I am doing some research into how most hardware-accelerated GUI libraries work. I actually only care about their rendering backends here. I am trying to figure out what would be the best way to try and write my own as a sort of side…

Gerharddc (191)
6 votes, 3 answers
CUDA vs OpenCL - opinions
Interested in people's opinions of CUDA vs OpenCL following NVIDIA's CUDA 4 release.
I had originally gone with OpenCL, since cross-platform, open standards are a good thing (tm).
I assumed NVIDIA would fall into line as they had done with OpenGL.
But…

Martin Beckett (15,776)
6 votes, 0 answers
Incorporating existing 2D OpenCL/OpenGL application in 3D scene
There is an existing real-time, scientific visualization application that uses OpenCL and OpenGL to render complex 2D graphs. My goal is to incorporate this application into a 3D rendered scene. At the very minimum, the desired application would…

Liam Kelly (169)
5 votes, 1 answer
Definition and usage of "warp" in parallel / GPU programming
I have come across the word "warp" in a few places but haven't seen a thorough definition (there's no Wikipedia page on it either).
A brief definition is found here:
In the SIMT paradigm, threads are automatically grouped into 32-wide bundles…

Lance (2,537)
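To make the 32-wide bundle concrete: in CUDA the warp width is exposed as the built-in warpSize, and threads within one warp can exchange registers directly with shuffle intrinsics because they execute in lockstep under SIMT. A small self-contained sketch (my own example, not taken from the linked definition):

```
#include <cuda_runtime.h>
#include <cstdio>

// Each warp (32 consecutive threads of a block) sums its own lane IDs
// with shuffle instructions; no shared memory or __syncthreads is needed,
// because a warp executes in lockstep.
__global__ void warpDemo() {
    int lane = threadIdx.x % warpSize;    // position within the warp
    int warpId = threadIdx.x / warpSize;  // which warp of the block

    int value = lane;
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        value += __shfl_down_sync(0xffffffff, value, offset);

    if (lane == 0)
        printf("block %d warp %d sum of lanes = %d\n", blockIdx.x, warpId, value);
}

int main() {
    warpDemo<<<1, 64>>>();   // one block, two warps
    cudaDeviceSynchronize();
    return 0;
}
```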
4 votes, 1 answer
How to experiment with GPU programming on Linux+AMD/ATI card?
I've recently acquired a laptop with an Intel i3 CPU and an AMD/ATI 6300 card, running Ubuntu 10.10.
How do I proceed in setting up a development environment that allows me to program the GPU? I assume I'll have to use OpenCL (CUDA is NVIDIA-only),…

ΤΖΩΤΖΙΟΥ (183)
4 votes, 3 answers
Parallel processing a Tree on GPU
I have seen a few papers on parallel/GPU processing of trees, but after briefly looking through them I wasn't able to grasp what they did. The closest to a helpful explanation was found in Parallelization: Binary Tree Traversal in this diagram:
But…

Lance (2,537)
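One common pattern behind level-wise diagrams like the one mentioned, sketched below under assumptions of my own (a complete binary tree stored in heap-style array layout, combined bottom-up): launch one kernel per level, so every node on that level gets its own thread.

```
#include <cuda_runtime.h>

// Complete binary tree in heap layout: node i has children 2*i+1 and 2*i+2.
// processLevel combines each node's two children into the node itself,
// here simply by summing them; launched once per level, leaves upward.
__global__ void processLevel(float* tree, int levelStart, int levelCount) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < levelCount) {
        int node = levelStart + i;
        tree[node] = tree[2 * node + 1] + tree[2 * node + 2];
    }
}

// Host side: walk the levels from the leaves' parents up to the root.
void bottomUpPass(float* d_tree, int depth) {
    for (int level = depth - 2; level >= 0; --level) {
        int levelStart = (1 << level) - 1;   // first node index on this level
        int levelCount = 1 << level;         // number of nodes on this level
        int threads = 256;
        int blocks = (levelCount + threads - 1) / threads;
        processLevel<<<blocks, threads>>>(d_tree, levelStart, levelCount);
    }
    cudaDeviceSynchronize();
}
```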
4 votes, 2 answers
What are the advantages of GLSL's compilation model?
GLSL is fundamentally different from other shader solutions because the server (GPU driver) is responsible for shader compilation. Cg and HLSL are (afaik) generally compiled a priori and sent to the GPU in that way.
This causes some real-world…

Kos (1,424)
3 votes, 3 answers
How does cuRAND use a GPU to accelerate random number generation? Don't those require a state?
My understanding is that every PRNG or QRNG requires a state to prevent the next item in its sequence from being too predictable, which is sensible, as they're all running on deterministic hardware.
GPUs are, by design, non-von Neumann architectures…

Michael Macha (396)
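For context on how cuRAND actually handles state: with its device API, each thread owns a generator state in GPU memory, initialised from a seed and a per-thread sequence number, and every draw reads and updates that state. A minimal sketch using the documented curand_init / curand_uniform calls:

```
#include <curand_kernel.h>
#include <cuda_runtime.h>

// Each thread owns one curandState: same seed, distinct sequence number,
// so the threads produce independent streams.
__global__ void setup(curandState* states, unsigned long long seed) {
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    curand_init(seed, id, 0, &states[id]);
}

// Draw numbers by reading and updating the per-thread state.
__global__ void draw(curandState* states, float* out, int n) {
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    if (id < n) {
        curandState local = states[id];    // copy state to registers
        out[id] = curand_uniform(&local);  // advances the generator
        states[id] = local;                // write the state back
    }
}

int main() {
    const int n = 1024;
    curandState* d_states;
    float* d_out;
    cudaMalloc(&d_states, n * sizeof(curandState));
    cudaMalloc(&d_out, n * sizeof(float));

    setup<<<n / 256, 256>>>(d_states, 1234ULL);
    draw<<<n / 256, 256>>>(d_states, d_out, n);
    cudaDeviceSynchronize();

    cudaFree(d_states);
    cudaFree(d_out);
    return 0;
}
```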
2 votes, 3 answers
How do you feel about browsers getting into the low level intricacies?
I was working on a website with FF4, and while resizing an element with Firebug, FF4 just crashed my NVIDIA display driver. Fortunately, Windows 7 was able to help me by recovering from this serious error, and I was able to completely recover my work…

mahen23 (1,120)
2 votes, 0 answers
How to package and distribute a Tensorflow GPU desktop application
I am developing a desktop application that utilises Tensorflow. The aim of the application is to let users easily train a given model and use it for inference within the app. I want to support training and inference via the GPU, if available on the…

turnip (1,657)
2 votes, 1 answer
Docker and GPU-based computations. Feasible?
Recently I ran across this question and this Nvidia-docker project, which is an NVIDIA Docker implementation, and it made me wonder where, why, and how this scheme makes sense.
I found some materials on the web (e.g. this) which state that…

Suncatcher (129)
2 votes, 1 answer
Large Scale Machine Learning vs Traditional HPC Hardware
I've spent the last few days working with TensorFlow for the first time as part of a natural language processing assignment for my degree. It's been interesting (fun isn't the right word) trying to get it to run on a GPU, but it got me thinking.
The…

HJCee (165)