I've written an Image2Ansi converter app in UWP/C# that turns images into terminal-mode text art for command-line programs (think Node, Python, etc.), but it's of course slow, since it does a pixel-by-pixel comparison with software WriteableBitmaps in UWP. So I thought I might try rewriting it in Unity3D to see if I could get some performance gains, not to mention cross-platform compatibility.

Basically I have a big PNG containing every foreground color / background color / character combination in CP437. For every 9x16 block in my source image (after I've dithered it down to 16 colors with Jarvis-Judice-Ninke error diffusion) I compare the block against each 9x16 font character, summing the Euclidean distances between corresponding pixel colors to get a score; whichever character has the lowest score is the winner for that block (a rough CPU sketch of this scoring is at the end of this post).

So I'm wondering what the best way to accomplish this in Unity might be. I tried loading an image from Resources, resizing it, and loading it into a UI Raw Image, but it gives me a white sprite (code below). Should I be looking at some sort of shader to help with this? I'm new to shaders and I see both a Compute Shader and an Image Effect Shader template, but I'm unsure which to choose. It could make a neat shader effect if it can be made fast enough. The end result needs to persist back to the hard drive on the client, both as an ANSI file and as a PNG of the result.

Image loading error (in the Start() method; inputTexture is a Texture2D in the Resources folder):

Code (csharp):
inputTexture = Resources.Load<Texture2D>("inputimage");
int width = inputTexture.width;
int height = inputTexture.height;
int newWidth = 720;
float scaleFactor = ((float)newWidth / (float)width);
int newHeight = (int)((float)height * scaleFactor);
newTexture = new Texture2D(width, height, TextureFormat.RGB24, false);
Graphics.CopyTexture(inputTexture, newTexture);
newTexture.Apply();
Debug.Log(newTexture.Resize(newWidth, newHeight, TextureFormat.RGB24, false));
newTexture.Apply();
UIImage.texture = newTexture;

The Debug.Log above prints true, but the Raw Image turns white; if I remove the Resize call it works. Any ideas?
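To make the scoring step concrete, here's a rough CPU-side sketch of what my UWP version does per block. The method name and the glyphCol/glyphRow parameters are just for illustration, and it assumes both textures are readable and that the font atlas is a grid of 9x16 cells:

Code (csharp):
// Rough sketch of the per-block scoring: sum the Euclidean distances between
// corresponding pixel colors of a 9x16 image block and a 9x16 glyph cell.
// Lower score = better match. Assumes UnityEngine and readable Texture2Ds.
float ScoreBlock(Texture2D image, int blockX, int blockY,
                 Texture2D font, int glyphCol, int glyphRow)
{
    const int cellW = 9, cellH = 16;
    float score = 0f;
    for (int y = 0; y < cellH; y++)
    {
        for (int x = 0; x < cellW; x++)
        {
            Color a = image.GetPixel(blockX * cellW + x, blockY * cellH + y);
            Color b = font.GetPixel(glyphCol * cellW + x, glyphRow * cellH + y);
            float dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
            score += Mathf.Sqrt(dr * dr + dg * dg + db * db);
        }
    }
    return score;
}

The winner for a given image block is just the glyph/color combination with the smallest score, and it's this inner loop, run once per combination (16 x 16 x 256 of them) per block, that I'm hoping to push onto the GPU.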
I used TextureScaler to fix the scaling problem. I'm on to compute shaders now, and here's the problem I'm running into.

ComputeScript.cs:

Code (csharp):
void Start () {
    inputImage = Resources.Load<Texture2D>("inputimage");
    int width = inputImage.width;
    int height = inputImage.height;
    int newWidth = 720;
    float scaleFactor = ((float)newWidth / (float)width);
    int newHeight = (int)((float)height * scaleFactor);

    Texture2D newTexture = new Texture2D(width, height, TextureFormat.RGB24, false);
    Color[] pixels = inputImage.GetPixels(0, 0, width, height);
    newTexture.SetPixels(0, 0, width, height, pixels);
    newTexture.Apply();
    TextureScaler.scale(newTexture, newWidth, newHeight, FilterMode.Point);

    inputFontTexture = Resources.Load<Texture2D>("vgafont");
    int fontWidth = inputFontTexture.width;
    int fontHeight = inputFontTexture.height;

    int kernel = compute.FindKernel("CSMain");
    result = new RenderTexture(fontWidth, fontHeight, 24);
    result.enableRandomWrite = true;
    result.Create();
    compute.SetTexture(kernel, "Result", result);
    compute.SetTexture(kernel, "ImageInput", newTexture);
    compute.SetTexture(kernel, "FontInput", inputFontTexture);
    compute.SetInt("InputCol", 20);
    compute.SetInt("InputRow", 10);
    compute.Dispatch(kernel, fontWidth / 8, fontHeight / 8, 1);

    int kernelProcess = compute.FindKernel("CSBestDistance");
    compute.SetTexture(kernelProcess, "Result", result);
    int bufferSize = 16 * 16 * 256;
    ComputeBuffer buffer = new ComputeBuffer(bufferSize, sizeof(float));
    compute.SetBuffer(kernelProcess, "ResultDistances", buffer);
    compute.Dispatch(kernelProcess, bufferSize / 16, 1, 1);

    float[] buffer2 = new float[bufferSize];
    buffer.GetData(buffer2);
    Debug.Log(buffer2.Min());
    //for (int z = 0; z < bufferSize; z++)
    //    Debug.Log(buffer2[z]);
    buffer.Dispose();
}

Compute Shader:

Code (csharp):
#pragma kernel CSMain
#pragma kernel CSBestDistance

RWTexture2D<float4> Result;
Texture2D<float4> ImageInput;
Texture2D<float4> FontInput;
int InputCol;
int InputRow;
RWTexture2D<float4> ResultDistances;
//RWStructuredBuffer<float2> ResultDistances;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    int xoffset = InputCol * 9;
    int yoffset = InputRow * 16;
    //int inputoffset = yoffset + xoffset;
    int fontx = id.x % 9;
    int fonty = id.y % 16;
    float2 inputoffset = float2(xoffset + fontx, yoffset + fonty);
    float4 fontinput = FontInput[id.xy];
    float4 imageinput = ImageInput[inputoffset];
    float distanceInput = distance(fontinput, imageinput);
    Result[id.xy] = float4(distanceInput, distanceInput, distanceInput, 1);
}

[numthreads(16,1,1)]
void CSBestDistance (uint3 id : SV_DispatchThreadID)
{
    int x = id.x % 2304;
    int y = (id.x - x) / 2304;
    ResultDistances[id.x] = Result[float2(x,y)].x;
}

When CSBestDistance comes back, the buffer is all 1s, but if I instead do something like ResultDistances[id.x] = id.x; the buffer is filled as expected with values 0..65535. So I think it has something to do with the texture. When I look at the RenderTexture in the Editor after the CSMain kernel runs, Result looks fine: it's a 2304x4096 grayscale image showing the distances between the source and the font. If I don't call compute.SetTexture(kernelProcess, "Result", result); before running the second kernel I get back all 0s; if I do call it I get all 1s. How do I pass my RenderTexture correctly so that I can chain kernels? Or is there a better way?
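In case it helps to see the shape of what I'm after without the rest of the noise, here's a stripped-down sketch of the kernel chaining. Note that in this sketch I've declared ResultDistances as a RWStructuredBuffer<float> so it matches the SetBuffer call on the C# side; that's a guess on my part, not something I've confirmed fixes the all-1s readback, and the kernel names are just placeholders:

Code (csharp):
// Stripped-down sketch of the chaining I'm trying to do: kernel A writes
// per-pixel distances into a RWTexture2D, kernel B copies them into a
// structured buffer that the CPU reads back with ComputeBuffer.GetData().
#pragma kernel WriteDistances
#pragma kernel CollectDistances

RWTexture2D<float4> Result;                 // SetTexture on both kernels
RWStructuredBuffer<float> ResultDistances;  // SetBuffer on the second kernel only

static const uint ResultWidth = 2304;       // width of the Result render texture

[numthreads(8,8,1)]
void WriteDistances (uint3 id : SV_DispatchThreadID)
{
    // ...same distance calculation as CSMain above...
    Result[id.xy] = float4(0, 0, 0, 1);
}

[numthreads(64,1,1)]
void CollectDistances (uint3 id : SV_DispatchThreadID)
{
    // Flat buffer index back to texture coordinates.
    uint x = id.x % ResultWidth;
    uint y = id.x / ResultWidth;
    ResultDistances[id.x] = Result[uint2(x, y)].x;
}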
Really, all I need as an end result is a two-element float array giving me the x/y of the lowest distance (which would mean a third kernel that finds the minimum of the distances and writes out its x/y), and it should stay on the GPU. Can a single compute shader thread kick off subroutines that themselves run on multiple threads? I'm finding it hard to get good documentation on compute shaders for these sorts of topics.
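For what it's worth, the closest pattern I've found for this is a groupshared-memory min-reduction: each thread group collapses a chunk of the distance buffer down to one (value, x, y) candidate, and then either another pass or the CPU picks the winner from the much smaller per-group list. Something along these lines, where all the names are mine and I haven't verified this is the correct or idiomatic way to do it:

Code (csharp):
// Sketch of a min-reduction kernel: each 256-thread group reduces 256
// distances down to one (minValue, x, y) candidate; the per-group results
// are then reduced again (or read back and min'd on the CPU).
#pragma kernel ReduceMin

StructuredBuffer<float> Distances;      // flattened distances, one per pixel
RWStructuredBuffer<float3> GroupMins;   // one (minValue, x, y) per thread group
uint DistanceCount;
uint TextureWidth;                      // to turn a flat index back into x/y

groupshared float3 sharedMin[256];

[numthreads(256,1,1)]
void ReduceMin (uint3 id : SV_DispatchThreadID,
                uint gi : SV_GroupIndex,
                uint3 gid : SV_GroupID)
{
    // Load this thread's distance (or a huge sentinel past the end of the data).
    float d = (id.x < DistanceCount) ? Distances[id.x] : 1e30;
    sharedMin[gi] = float3(d, id.x % TextureWidth, id.x / TextureWidth);
    GroupMemoryBarrierWithGroupSync();

    // Tree reduction within the group: halve the active threads each step.
    for (uint s = 128; s > 0; s >>= 1)
    {
        if (gi < s && sharedMin[gi + s].x < sharedMin[gi].x)
            sharedMin[gi] = sharedMin[gi + s];
        GroupMemoryBarrierWithGroupSync();
    }

    // Thread 0 of each group writes that group's best candidate.
    if (gi == 0)
        GroupMins[gid.x] = sharedMin[0];
}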