[Idea] Unity with C# to GPU power!

Discussion in 'General Discussion' started by Arowx, Jan 7, 2015.

  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    What if Unity made it easier to write parallel processing code for the CPU or GPU in C#?

    Some of you will say no way or it can't be done!

    But it can!

    CUDAfy .Net - "allows easy development of high performance GPGPU applications completely from the Microsoft .NET framework. It's developed in C#."

    Example of use here - http://w8isms.blogspot.co.uk/2013/04/gpgpu-performance-tests.html

    Cool tech: write C# code that is cross-compiled to the GPU!
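
    For reference, the CUDAfy samples look roughly like this: you tag a plain C# method with [Cudafy], translate it, copy data to the device, launch, and copy the results back. (This is a sketch reconstructed from memory of the CUDAfy docs, so treat the exact names and signatures as approximate.)

    Code (CSharp):
    using Cudafy;
    using Cudafy.Host;
    using Cudafy.Translator;

    public class VectorAddExample
    {
        // Runs on the GPU; CUDAfy translates this C# into CUDA or OpenCL.
        [Cudafy]
        public static void Add(GThread thread, int n, int[] a, int[] b, int[] c)
        {
            int tid = thread.blockIdx.x;   // one thread block per element in this launch
            if (tid < n)
                c[tid] = a[tid] + b[tid];
        }

        public static int[] Run(int[] a, int[] b)
        {
            CudafyModule km = CudafyTranslator.Cudafy();              // translate the [Cudafy] methods
            GPGPU gpu = CudafyHost.GetDevice(CudafyModes.Target, 0);  // CUDA or OpenCL device
            gpu.LoadModule(km);

            int n = a.Length;
            int[] c = new int[n];
            int[] devA = gpu.Allocate<int>(a);                        // device-side buffers
            int[] devB = gpu.Allocate<int>(b);
            int[] devC = gpu.Allocate<int>(c);

            gpu.CopyToDevice(a, devA);
            gpu.CopyToDevice(b, devB);
            gpu.Launch(n, 1).Add(n, devA, devB, devC);                // n blocks of 1 thread
            gpu.CopyFromDevice(devC, c);
            gpu.FreeAll();
            return c;
        }
    }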

    OK, Unity would need to make it so you could just tag a region of code for parallelism, for me at least, but imagine what people with real programming skills could do once they can tap the power of their GPUs.

    That's unless Microsoft is developing a .NET-to-GPU technology?

    What would you code in Unity if you could unleash the power of your GPU?
     
  2. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,657
    Isn't your GPU already busy rendering stuff?
     
    Kiwasi, Deleted User and bluescrn like this.
  3. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Good point, but a modern gaming device has both a CPU and a GPU, so why not take advantage of both?
    A simple scenario: you give your CPU a lot of processing to do, and your GPU is left waiting on the CPU.

    But if the task can be done faster in parallel on your GPU, then your CPU could load the task onto the GPU while it works out what is needed for the next frame. The GPU finishes, the CPU passes it the rendering task and picks up the results.

    A bit simplistic, but isn't this the direction the industry is going?

    Don't take my word for it; check the DICE Frostbite game engine industry lectures.

    What could be better for game engines and game developers than having two processors, one great for serial tasks and small multi-threading jobs and the other good at massively parallel tasks made of many small jobs? Or, on mobile SoCs and APUs, both in a single chip.

    And being able to access both with a single language.
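
    In Unity terms, a minimal sketch of that hand-off might look like the following. (The kernel name, compute shader asset and [numthreads(64,1,1)] layout are made up for illustration; Dispatch and GetData are the real Unity calls, and note that GetData blocks until the GPU has finished.)

    Code (CSharp):
    using UnityEngine;

    public class OffloadExample : MonoBehaviour
    {
        public ComputeShader shader;   // assumed asset containing a kernel named "Integrate"
        ComputeBuffer buffer;
        float[] results = new float[65536];
        int kernel;

        void Start()
        {
            kernel = shader.FindKernel("Integrate");
            buffer = new ComputeBuffer(results.Length, sizeof(float));
            shader.SetBuffer(kernel, "values", buffer);
        }

        void Update()
        {
            // Kick the heavy work off to the GPU (assumes [numthreads(64,1,1)] in the kernel)...
            shader.Dispatch(kernel, results.Length / 64, 1, 1);

            // ...do other CPU work for this frame here...

            // ...then collect the results. GetData blocks until the GPU is done.
            buffer.GetData(results);
        }

        void OnDestroy() { buffer.Release(); }
    }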
     
    Last edited: Jan 7, 2015
  4. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    This would definitely be cool. It would have many uses for leveraging GPU power.
     
    CarterG81 likes this.
  5. lmbarns

    lmbarns

    Joined:
    Jul 14, 2011
    Posts:
    1,628
  6. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
  7. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    What makes you think that's not already happening?
     
    Ironmax and Ryiah like this.
  8. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Mac support!
     
  9. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Expanding on this, and talking mostly about PC: most gamers and/or developers max out GPU time before they max out CPU time. Where possible, visual stuff is typically cranked up to the point where the system is only just managing an appropriate frame rate, and that usually puts more pressure on the GPU than the CPU. So, in the use case of a game or highly visual application, moving more stuff onto a GPU that's already under high pressure in order to reduce load on a CPU that's usually under less pressure doesn't make sense.

    Exceptions to this are things that work really well on the GPU but would bog down a CPU, or less visual apps where the GPU isn't under particularly high load.
     
    Deleted User and Ryiah like this.
  10. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I'm not aware of any plan for this feature within Unity or C#. Do Unity have this on their development road map?

    Then why did game developers want/need AMD's Mantle or iOS's Metal to overcome the performance bottleneck between CPU and GPU?

    And how come Nvidia are always showing off these amazing tech demos where they simulate fluids and galaxies with their latest GPUs?
     
  11. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Because they're tech demos. They're meant to look fancy. Would they have got you this excited if they showed you a number crunching benchmark which just printed a few lines of text on the screen? I suspect not.

    How many people buy GPUs based on their ability to crunch data for reports or compress video quickly? Some, but not nearly as many as buy them to push more pixels on bigger screens for newer games. ;)

    I could be wrong, but I think that's more about bus bandwidth ("draw calls") than computational speed. No one number tells the whole story of a system's performance.
     
  12. kaiyum

    kaiyum

    Joined:
    Nov 25, 2012
    Posts:
    686
    Does this work in Unity? I mean, I add the required assembly to a Unity C# project and it works, just like in .NET?
    I am excited.
     
  13. lmbarns

    lmbarns

    Joined:
    Jul 14, 2011
    Posts:
    1,628
    It's not that difficult. It's pretty much required for some Kinect 2 projects where you're working with the raw buffers off the sensors in real time.

    Very basic example: drawing and randomly updating the positions of 350,000 structs, each with two Vector3 fields, at 60+ fps.

    Code (CSharp):
    using UnityEngine;
    using System.Collections;

    public class BufferExample : MonoBehaviour
    {
        public Material material;
        ComputeBuffer buffer;
        const int count = 350000;  // number of vertices to generate
        const float size = 5.0f;
        Vert[] points;

        struct Vert
        {
            public Vector3 position;  // self explanatory
            public Vector3 color;
        }

        void Start ()
        {
            buffer = new ComputeBuffer (count, sizeof(float) * 6, ComputeBufferType.Default);
            points = new Vert[count];
            Random.seed = 0;
            for (int i = 0; i < count; i++)  // make 350,000 verts with random color and position
            {
                points[i] = new Vert();
                points[i].position = new Vector3();
                points[i].position.x = Random.Range (-size, size);
                points[i].position.y = Random.Range (-size, size);
                points[i].position.z = Random.Range (-size, size);

                points[i].color = new Vector3();
                points[i].color.x = Random.value > 0.5f ? 0.0f : 1.0f;
                points[i].color.y = Random.value > 0.5f ? 0.0f : 1.0f;
                points[i].color.z = Random.value > 0.5f ? 0.0f : 1.0f;
            }
            buffer.SetData (points); // set the buffer data
        }

        void FixedUpdate(){
            for (int i = 0; i < count; i++)
            {
                points[i].position.x = Random.Range (-size, size);  // slow to do random in update, just example
                points[i].position.y = Random.Range (-size, size);
            }
            buffer.SetData (points);
        }

        void OnPostRender (){
            material.SetPass (0);
            material.SetBuffer ("buffer", buffer);
            Graphics.DrawProcedural (MeshTopology.Points, count, 1);
        }

        void OnDestroy ()
        {
            buffer.Release ();
        }
    }
    The "scary" compute shader:
    Code (CSharp):
    #pragma kernel CSMain
    StructuredBuffer<float> buffer1;
    RWStructuredBuffer<float> buffer2;

    [numthreads(8,1,1)]
    void CSMain (uint id : SV_DispatchThreadID)
    {
        uint count, stride;
        buffer2.GetDimensions(count, stride);
        buffer2[id] = buffer1.Load(id);
    }
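
    (Note: that kernel isn't actually dispatched anywhere in the MonoBehaviour above. If you did want to run it from C#, the dispatch side would look roughly like this; the class and field names are just illustrative.)
    Code (CSharp):
    using UnityEngine;

    // Sketch: how the CSMain kernel above could be dispatched from script.
    public class ComputeDispatchExample : MonoBehaviour
    {
        public ComputeShader copyShader;   // assumed reference to the compute shader asset

        public void CopyOnGpu(ComputeBuffer source, ComputeBuffer destination, int elementCount)
        {
            int kernel = copyShader.FindKernel("CSMain");
            copyShader.SetBuffer(kernel, "buffer1", source);
            copyShader.SetBuffer(kernel, "buffer2", destination);
            // Eight threads per group to match [numthreads(8,1,1)] in the kernel.
            copyShader.Dispatch(kernel, Mathf.CeilToInt(elementCount / 8f), 1, 1);
        }
    }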
    And a basic shader to show the position/color:
    Code (CSharp):
    Shader "Custom/BufferExample/BufferShader"
    {
        SubShader
        {
            Pass
            {
                ZTest Always Cull Off ZWrite Off
                Fog { Mode off }

                CGPROGRAM
                #include "UnityCG.cginc"
                #pragma target 5.0
                #pragma vertex vert
                #pragma fragment frag

                struct Vert
                {
                    float3 position;
                    float3 color;
                };

                uniform StructuredBuffer<Vert> buffer;

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float3 col : COLOR;
                };

                v2f vert(uint id : SV_VertexID)
                {
                    Vert vert = buffer[id];
                    v2f OUT;
                    OUT.pos = mul(UNITY_MATRIX_MVP, float4(vert.position, 1));
                    OUT.col = vert.color;
                    return OUT;
                }

                float4 frag(v2f IN) : COLOR
                {
                    return float4(IN.col, 1);
                }
                ENDCG
            }
        }
    }
     
  14. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I don't think so, as CUDAfy also needs the Visual Studio C++ toolchain because it converts your C# code to CUDA or OpenCL code. For Unity this would probably work more like how Unity builds for iOS, where Unity generates an iOS/Mac project that is then compiled for Mac or iOS.

    There are also additional dependencies, e.g. the CUDA SDK, that Unity games would need to include.

    @lmbarns Nice, but what if Unity developed a CUDAfy-like technology where you could instead write something more like this?

    Code (CSharp):
    using UnityEngine;
    using System.Collections;
    using UnityEngine.GPU; // ideal GPU enabler

    public class BufferExample : MonoBehaviour
    {
        public Material material;
        const int count = 350000;  // number of vertices to generate
        const float size = 5.0f;
        Vert[] points;

        [GPU DATA]
        Vert[] gpuPoints;

        struct Vert
        {
            public Vector3 position;  // self explanatory
            public Vector3 color;
        }

        void Start ()
        {
            points = new Vert[count];
            Random.seed = 0;
            for (int i = 0; i < count; i++)  // make 350,000 verts with random color and position
            {
                points[i] = new Vert();
                points[i].position = new Vector3();
                points[i].position.x = Random.Range (-size, size);
                points[i].position.y = Random.Range (-size, size);
                points[i].position.z = Random.Range (-size, size);
                points[i].color = new Vector3();
                points[i].color.x = Random.value > 0.5f ? 0.0f : 1.0f;
                points[i].color.y = Random.value > 0.5f ? 0.0f : 1.0f;
                points[i].color.z = Random.value > 0.5f ? 0.0f : 1.0f;
            }
            [GPU]
            gpuPoints = points; // triggers a loading of the data onto the GPU
            [END GPU]
        }

        void FixedUpdate(){
            for (int i = 0; i < count; i++)
            {
                points[i].position.x = Random.Range (-size, size);  // slow to do random in update, just example
                points[i].position.y = Random.Range (-size, size);
            }
            [GPU] // triggers the generation of GPU code that is run from FixedUpdate
            for (int i = 0; i < count; i++)
            {
                gpuPoints[i].position *= points[i].position;
            }
            [END GPU]
        }

        void OnPostRender (){
            Graphics.DrawProcedural (MeshTopology.Points, count, 1);
        }
    }
    This is only pseudo code to give you an idea of what could be developed by Unity.

    Note: You would probably still need your shader to draw the data.
     
    Gekigengar, darkhog and ZJP like this.
  15. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    If I used CUDAfy to build a dll or assembly and it generates the OpenCL and CUDA GPU code could I then use the dll and code with Unity?

    And if Unity is moving over to IL2CPP, could they add a GPU feature set to ease GPU programming?
     
  16. Dustin-Horne

    Dustin-Horne

    Joined:
    Apr 4, 2013
    Posts:
    4,568
    The big problem I see is that it only targets CUDA-capable cards. If it were to happen, and I'm not sure it ever would as the benefit to Unity likely wouldn't outweigh the cost of development, they'd need to build it as an agnostic API that supported CUDA / Mantle / Metal and would still fall back to CPU-only if the capabilities didn't exist on the target hardware.
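
    As a rough illustration of that kind of runtime fallback with today's Unity API (SystemInfo.supportsComputeShaders is the real capability check; the "Simulate" kernel and the CPU path are placeholders):

    Code (CSharp):
    using UnityEngine;

    public class ParallelWorkRunner : MonoBehaviour
    {
        public ComputeShader kernelAsset;   // assumed compute shader asset with a "Simulate" kernel

        public void RunSimulation(ComputeBuffer data, int elementCount)
        {
            if (SystemInfo.supportsComputeShaders && kernelAsset != null)
            {
                int kernel = kernelAsset.FindKernel("Simulate");
                kernelAsset.SetBuffer(kernel, "data", data);
                // Assumes the kernel is declared [numthreads(64,1,1)].
                kernelAsset.Dispatch(kernel, Mathf.CeilToInt(elementCount / 64f), 1, 1);
            }
            else
            {
                SimulateOnCpu(elementCount);   // hypothetical CPU fallback
            }
        }

        void SimulateOnCpu(int elementCount)
        {
            // ...plain C# loop, possibly spread across worker threads...
        }
    }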
     
  17. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Ahh, no, actually that's just its name; it supports CUDA and OpenCL as well (see link for details): https://cudafy.codeplex.com/

    Note that OpenCL is supported on a range of ATI and mobile platforms (see link for OpenCL-compatible hardware): https://www.khronos.org/conformance/adopters/conformant-products
     
  18. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Hey Unity's WebGL builds could also have a WebCL option! ;)
     
  19. ZJP

    ZJP

    Joined:
    Jan 22, 2010
    Posts:
    2,649
    UT Guys, Do it. Pleaaaase. :(
     
    Gekigengar likes this.
  20. elmar1028

    elmar1028

    Joined:
    Nov 21, 2013
    Posts:
    2,359
    What if the game is 2D? It would require less rendering power. As a result the GPU is nearly useless in such situations.
     
    vannus likes this.
  21. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
    2D is 3D. It's all handled the same.
     
    shkar-noori likes this.
  22. Brainswitch

    Brainswitch

    Joined:
    Apr 24, 2013
    Posts:
    270
    Which is loads of stuff.
     
    DDeathlonger likes this.
  23. TylerPerry

    TylerPerry

    Joined:
    May 29, 2011
    Posts:
    5,577
    No...
     
    DDeathlonger and wbknox like this.
  24. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Whilst it is indeed "loads of stuff", I can only think of a few niche things where it'd be relevant to the kind of stuff typically done in game scripting. I wasn't implying that it's useless, just that it has to be considered across the whole system, which is why I mentioned the exceptions in the first place.
     
  25. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    My understanding is that the GPU is a way to do far more processing in parallel than you can on a CPU. But there is the cost of transferring data from your CPU to your GPU and back again. As with batched SIMD instructions (to a lesser extent), this overhead means that you need to be doing lots more calculations before using the GPU gives you a performance increase over your CPU and standard multi-threading.

    But if you look at most modern mobile CPU/GPU SoCs and AMD's APUs, a lot of modern chipsets do not have as big an overhead as older architectures. As they do not need to transfer the data from the CPU over a system bus to the GPU, they can use shared on-chip memory.

    Also, I think GPUs have limitations on the complexity of code they can run, e.g. loops, branches, registers.
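
    A crude way to see where that overhead lives in Unity is to time the upload, the dispatch and the readback separately (a sketch; the "Work" kernel, the buffer size and the [numthreads(64,1,1)] layout are arbitrary):

    Code (CSharp):
    using System.Diagnostics;
    using UnityEngine;

    public class TransferCostProbe : MonoBehaviour
    {
        public ComputeShader shader;   // assumed asset with a kernel named "Work"
        const int count = 1 << 20;     // ~1M floats, arbitrary probe size

        void Start()
        {
            var data = new float[count];
            var buffer = new ComputeBuffer(count, sizeof(float));
            int kernel = shader.FindKernel("Work");
            shader.SetBuffer(kernel, "values", buffer);

            var sw = Stopwatch.StartNew();
            buffer.SetData(data);                       // CPU -> GPU upload
            long upload = sw.ElapsedMilliseconds;

            shader.Dispatch(kernel, count / 64, 1, 1);  // returns immediately, GPU runs asynchronously
            buffer.GetData(data);                       // blocks: GPU execution + GPU -> CPU readback
            long computeAndReadback = sw.ElapsedMilliseconds - upload;

            UnityEngine.Debug.Log("upload " + upload + " ms, compute+readback " + computeAndReadback + " ms");
            buffer.Release();
        }
    }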
     
  26. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    Modern 2D goes through the same hardware as 3D. Sprites are basically two triangles. It may use less rendering power but the GPU will still be in use.
     
  27. Brainswitch

    Brainswitch

    Joined:
    Apr 24, 2013
    Posts:
    270
    Depends a bit on what you mean by 'game scripting'. Are you defining it as something else, just a specific part, or different from game coding?
    It can be used for physics, iso-surface extraction, artificial intelligence, particle/fluid simulations, etc. Which may or may not fit a certain game project, but some of the examples I mentioned appear in most game projects and could greatly benefit from GPU power.
     
  28. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    By "game scripting" I mean the kind of stuff that the majority of code in games does. I don't care for the distinction between "programming" and "scripting" in this case, though I did deliberately say "scripting" because I'm not talking about the low-level stuff.

    Yes, some form of all of those things could benefit from GPU power. However, not all forms would benefit from it (e.g. of all the AI work I've done, one task springs to mind as being a good candidate for this), and where there is a potential benefit the rest of the environment also needs to be suitable (e.g. the GPU can't already be maxed out doing other stuff).
     
    Dustin-Horne and zombiegorilla like this.
  29. Dustin-Horne

    Dustin-Horne

    Joined:
    Apr 4, 2013
    Posts:
    4,568
    Most individual calculations / methods that happen per frame probably wouldn't benefit from GPU processing, and the overhead of marshalling the extra calls and data between the GPU's registers and the CPU would probably negate the computational advantages in most cases.
     
    zombiegorilla likes this.
  30. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Just bumped into this: OpenACC, a C/C++ compiler technology that allows developers to write normal C code and then add directives so that it compiles for an OpenCL or CUDA accelerator.

    http://www.openacc-standard.org/

    And it works by just adding a couple of directives to the C++ code!

    So when Unity fully migrates its builds to IL2CPP (C# to C++), they could use OpenACC to compile the GPU code!

    That's why I love IT: got a problem? Don't worry, people smarter than you and me have probably already worked out a solution!
     
  31. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I somewhat agree with what you are saying, but I also think you are overlooking the case for moving more of the game engine onto the GPU. If you look at the big new AAA game engines, they are using every available processing unit on a device to get the performance they need to deliver the experience gamers want.

    If Unity were to take your advice it would be hamstrung, limited to the CPU.

    The ideal game engine would just use the processing units (CPU/GPU/other) available to it in the best manner possible and leave the developer to write good, workable code.
     
    ZJP likes this.
  32. ZJP

    ZJP

    Joined:
    Jan 22, 2010
    Posts:
    2,649
    This..
     
  33. Dustin-Horne

    Dustin-Horne

    Joined:
    Apr 4, 2013
    Posts:
    4,568
    There's a big difference between Unity pushing Engine processing onto the GPU and Unity exposing an API to allow you to utilize it though. Also, Unity has to be very careful because offloading too much could also leave them hamstrung. One of the greatest things about Unity is that it works across so many different platforms and hardware profiles. They're not relying on everyone having the latest and greatest GPU hardware.

    In order to maintain that level of support they would have to do a lot of selective migration of processing, determining when it is feasible to move the processing to the GPU based on the available hardware and resources. There is overhead in making those determinations that again would mitigate some of the advantages. Also, this consumes GPU resources and leaves you less room to do your own fancy stuff on the GPU. For high end GPUs you're not coming close to their potential anyway, but for lower end and/or mobile GPUs this could be a problem.

    Also, most AAA game engines are built for a specific purpose or type of game and are not nearly as flexible. This works for them because they know in advance the set of parameters they are working within. Most of those engines are also closed engines, not available for public consumption / use. And there are a lot of AAA games that still aren't even multithreaded.

    Then you have UE4 which is available to the masses... the question here is, does UE4 offer you the ability to shift your processing to the GPU? And how much of the base engine processing is actually done on the GPU (I don't actually know the answer to this..)? If it's not any more than Unity the question would be... why not? Maybe it's really not as feasible as it sounds... or maybe the resource consumption isn't predictable enough... then again maybe it's actually a great idea... who knows. :)
     
    zombiegorilla likes this.
  34. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    I tried setting up the CUDAfy example in Unity, but all it took was trying to reference the .dll for Unity to start complaining. :(
    It's because the CUDAfy assembly is compiled against .NET 4.0 as far as I know, and we would have to rebuild it for 3.5, wouldn't we? The source code is available though, so I'm trying to recompile it myself. If anyone has more experience there, help would be appreciated. I guess it won't be as simple as switching the target framework?
     
  35. Deleted User

    Deleted User

    Guest

    Thing is, we're not even at the point where we can run real-time GI on the GPU that looks decent with enough bounces, and we're not at a point where GPU resources can be shared efficiently across mass draw calls.

    We're not even at a point where most graphics cards still hanging about are powerful enough to run the latest generation of engines at decent settings.

    There's a boatload of things that need sorting out before we ever consider adding more to the GPU.

    In theory it's a great idea though..
     
    gian-reto-alig, Kiwasi and Ryiah like this.
  36. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    Doesn't PhysX use the GPU as well? I was thinking of using GPU processing for creating a fog of war texture based on an influence map. I think the GPU is especially ideal for processes that aren't significant to the simulation.
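
    For the fog of war idea, the C# side of filling a texture from an influence map with a compute shader could look roughly like this. (The "FogOfWar" kernel, the 256x256 resolution and the buffer layout are made up; RenderTexture.enableRandomWrite and ComputeShader.SetTexture are the real Unity pieces.)

    Code (CSharp):
    using UnityEngine;

    public class FogOfWarExample : MonoBehaviour
    {
        public ComputeShader fogShader;   // assumed asset with a "FogOfWar" kernel writing to a RWTexture2D
        public Material fogMaterial;      // material that samples the generated fog texture
        RenderTexture fogTexture;
        ComputeBuffer influenceBuffer;
        const int resolution = 256;

        void Start()
        {
            fogTexture = new RenderTexture(resolution, resolution, 0);
            fogTexture.enableRandomWrite = true;   // lets the compute shader write into it
            fogTexture.Create();
            influenceBuffer = new ComputeBuffer(resolution * resolution, sizeof(float));
        }

        void Update()
        {
            // The influence map would be written into influenceBuffer elsewhere (e.g. via SetData);
            // here we just run the kernel over it and hand the result to a material.
            int kernel = fogShader.FindKernel("FogOfWar");
            fogShader.SetBuffer(kernel, "influence", influenceBuffer);
            fogShader.SetTexture(kernel, "fog", fogTexture);
            fogShader.Dispatch(kernel, resolution / 8, resolution / 8, 1);
            fogMaterial.SetTexture("_FogTex", fogTexture);
        }

        void OnDestroy()
        {
            influenceBuffer.Release();
            fogTexture.Release();
        }
    }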
     
  37. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    Unity only runs PhysX in software mode.
     
    shkar-noori likes this.
  38. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    What's software mode?
     
  39. shkar-noori

    shkar-noori

    Joined:
    Jun 10, 2013
    Posts:
    833
    which may support CUDA as well in 5.5 or so...
     
    Ryiah likes this.
  40. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    I meant it only currently runs on the CPU.
     
  41. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    Oh, that's too bad. I guess there's no default precursor for GPU processing in Unity... is there?
     
  42. shkar-noori

    shkar-noori

    Joined:
    Jun 10, 2013
    Posts:
    833
    Not that much for now (except for rendering).
    IMO these could be using the power of the GPU for more performance:
    • Global Illumination Baking.
    • PhysX
     
    jpthek9 likes this.
  43. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    Personally, I'd like to run subroutines on the GPU as the OP mentioned. It'd all be graphics related stuff anyways, but just more specialized.
     
  44. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    There was a project about a year ago to make OpenCL .NET work with Unity. I have not tried it myself though.

    https://github.com/leith-bartrich/openclnet_unity
     
  45. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    Oh, interesting. Unfortunately, the package uses too many big words, something not very accessible to me :C.
     
  46. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Yeah, this is a pretty advanced topic. You aren't going to be able to attack it without big words.

    It doesn't even make any sense to consider going down this path until you have maxed out your CPU and applied every conventional algorithm optimisation technique. If you are building a game where this still isn't enough, you'd also have cutting-edge graphics that are already pushing the edge of the GPU.

    I can see cases like simulations where there is no need for rendering and the GPU is not used. However, most of the time these simulations are run on computers without GPUs, or on mainframes with massive parallel CPU capacity.

    In general, running extra stuff on a GPU is a nice idea. But in practice it's probably better to stick to running graphics on a GPU and computations on a CPU. Better to utilise all the cores available, and keep jamming in more cores (or even multiple CPUs), rather than try to force GPUs to do jobs they were never designed for, or drive GPU design away from graphics.
     
    Deleted User, zombiegorilla and Ryiah like this.
  47. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    Oh, I see. Guess my hopes and dreams of fancy GPU programming are crushed, but it's kind of a relief to have everything focused in one department.
     
  48. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    There are plenty more situations where you'll likely want to use the GPU, like an exhaustive algorithm in editor extensions, or generating things like terrain at the start of the game (where there's no rendering needed).

    The project I'm actually converting, CUDAfy, officially only supports Windows and Linux, though. And one cannot rely on the end user having an OpenCL-compatible graphics card, or even a CUDA-compatible one.
     
    Kiwasi likes this.
  49. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    We're slowly moving in this direction, but I believe when we finally reach the point where it has become commonplace you won't be targeting a specific device yourself but rather relying on the framework to do it for you.

    I noticed no one has commented on the current state of AMD and Intel processors. Both are now shipping with some degree of integrated graphics hardware. What happens though when an actual card is put in the system?

    Currently the integrated hardware is used for situations where there is low demand for graphics processing. Any other situation will cause the card to kick in. What happens when the card is running? For the most part the integrated graphics will simply sit there and idle.

    If it isn't already happening, I would expect frameworks to be designed to take advantage of the unused processing power of those integrated solutions. It may even happen automatically to some degree due to how the processor is designed.
     
    Meredoc likes this.
  50. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    We have had processors shipping with integrated graphics capable of compute shaders since approximately 2011 for Intel and 2012 for AMD. Chances are very good now that your target audience will have the capability.

    Additionally OpenCL can run in software mode if hardware is not present.
     
    Last edited: Apr 18, 2015