
[Best Tool Asset Store Award] Amplify Shader Editor - Node-based Shader Creation Tool

Discussion in 'Assets and Asset Store' started by Amplify_Ricardo, Sep 13, 2016.

  1. christophercimbaro

    christophercimbaro

    Joined:
    Aug 30, 2017
    Posts:
    12
    @KRGraphics yes, for the skin it's done with Amplify.
    Does anyone know how to get good physically based refraction with Amplify?

    I've got the equation, but I haven't managed the conversion into the node-based system. Refract.jpg
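
    For reference, the equation behind HLSL's refract() intrinsic is the standard Snell's-law form, which is also the usual starting point for a node setup. Here is a minimal C# restatement of it (plain Unity math only, not ASE-specific code), in case it helps map the terms to nodes:

    Code (CSharp):
    using UnityEngine;

    public static class RefractionMath
    {
        // Snell's-law refraction, matching HLSL's refract() intrinsic.
        // i = incident direction (normalized), n = surface normal (normalized),
        // eta = ratio of indices of refraction (e.g. air to glass ~ 1.0 / 1.5).
        public static Vector3 Refract(Vector3 i, Vector3 n, float eta)
        {
            float cosi = Vector3.Dot(-i, n);
            float k = 1f - eta * eta * (1f - cosi * cosi);
            // k < 0 means total internal reflection; refract() returns zero there.
            return k < 0f ? Vector3.zero : eta * i + (eta * cosi - Mathf.Sqrt(k)) * n;
        }
    }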
    Have a nice day,
     
    KRGraphics likes this.
  2. Amplify_RnD_Rick

    Amplify_RnD_Rick

    Joined:
    Feb 15, 2016
    Posts:
    528
    Hey guys,

    We've just uploaded a new build into our website.

    Here are the release notes.
    Release Notes v1.3.1 dev 09:
    • Fixes:
      • Multiple fixes on custom shadow caster
      • Fixed issue on Templates Manager being incorrectly destroyed in some situations
      • Fixed issue on Template data not being correctly synced when user changes its source code and returns to ASE
      • Fixed issue where referenced 'Texture Sampler' nodes were not respecting the original property order
      • Fixed issue on 'Grab Screen Color' node not using Unity default grab pass when selected to use it
      • Fixed small issues on multiple examples
    • Improvements:
      • Added tangent space normals to 'Fresnel' node and removed the internal normal value from its properties
    Happy shader creations!
     
    hopeful likes this.
  3. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    I'm not sure how to achieve that, but considering that objects are already being transformed into view space, maybe I would start by trying to resize them along view-space Z. That would require some matrix transformation, maybe even editing the matrix before transforming. Do you need to view the object flat from the side? Because that kind of flattening in the shader will happen for every camera. Maybe flattening it on the CPU instead is easier? I'm just throwing out random thoughts at this point...

    I haven't tried it yet, but I think you want something like this:

    Unity_2017-09-25_17-51-53.png
    The main difference is that I'm getting the screen position from the vertex function and using just one matrix. The custom MVP matrix must be set by script; keep in mind the order of the matrix operations. Like I said, I didn't try it, but take a look at these, they seem legit: here, here and here. I'll take a closer look tomorrow morning, out of time right now. Hope it at least gives you a hint for the time being.

    EDIT: regarding your latest update (I was typing as you were creating the posts :p ) if you have access to the projection matrix, shouldn't you be able to do P * V * M, where P is the matrix you send by script, V is the one you get from your camera, and M is the model matrix you get inside the shader?

    Is this perhaps what you are looking for? If yes, I can try converting it. It shouldn't be too complicated. That formula is out of context so I'm not sure what some of the parameters are. Either way I'll take a look.
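
    If it helps, here's a rough sketch of the script side of that idea. "_CustomMVP" is a hypothetical shader property name, and the exact matrix conventions may need tweaking per platform; treat it as a starting point, not a definitive implementation:

    Code (CSharp):
    using UnityEngine;

    // Build a custom MVP matrix on the CPU and push it to a material every frame.
    [ExecuteInEditMode]
    public class SetCustomMVP : MonoBehaviour
    {
        public Camera sourceCamera;
        public Renderer targetRenderer;

        void LateUpdate()
        {
            if (sourceCamera == null || targetRenderer == null) return;

            Matrix4x4 m = targetRenderer.transform.localToWorldMatrix; // model
            Matrix4x4 v = sourceCamera.worldToCameraMatrix;            // view
            // Convert to the platform's actual GPU projection convention.
            Matrix4x4 p = GL.GetGPUProjectionMatrix(sourceCamera.projectionMatrix, false);

            // Order matters: P * V * M is applied right-to-left to a vertex.
            targetRenderer.sharedMaterial.SetMatrix("_CustomMVP", p * v * m);
        }
    }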
     
    Last edited: Sep 25, 2017
  4. DaDarkDragon

    DaDarkDragon

    Joined:
    Jun 6, 2013
    Posts:
    115

    This is what I would be looking for (this setup just came to me right after reading your reply), but preferably in the shader instead of GameObjects and look-ats/corrections. I'll see if there is anything more I can figure out with ASE.

     
  5. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    It looks good... been trying to get mine working but to no avail... I can't get the bump diffuse working... so I am using Alloy Skin until ASE is able to make a proper skin shader.
     
  6. Eriks-indesign

    Eriks-indesign

    Joined:
    Aug 15, 2012
    Posts:
    50
    @Amplify_RnD_Rick How does ASE hold up for mobile shader development? Is it possible to specify which calculations I want the shader to do in the vertex function and which in the fragment function?
     
  7. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    That is pretty cool, glad you got it working. Gonna try it myself just for learning purposes, thx for sharing ;)

    You can force a certain part of the graph to run in the vertex function by placing a node that makes everything connected to it run in the vertex function, and then read the interpolated result in the fragment function. You can even have a keyword toggle to make a certain calculation run in either the vertex function or the fragment function, so in one shader you can have multiple variations that allow it to run on multiple platforms. You can also specify precision if you want to save performance there. There's also a bunch of options to deactivate certain default features if you don't need them.
     
    brisingre likes this.
  8. SkiaUra

    SkiaUra

    Joined:
    Jan 19, 2016
    Posts:
    3
    Hello everyone,
    I come with a little problem. I'd like to use a transparent shader with the Depth mode, to use ZWrite, but it's only available in Opaque & Custom mode. So I tried to copy the Transparent parameters into the Custom blend mode, but the render isn't the same any way I tried. Could someone help me, or has anyone already tried that?
     
  9. JulinoleumOne

    JulinoleumOne

    Joined:
    Nov 2, 2016
    Posts:
    13
    Hello, I've been trying to find a way to split RGBA into R-G-B-A lower in the graph, after a switch. Is that possible? I can't find any node that does it.
     
  10. Deleted User

    Deleted User

    Guest

    @Amplify_Paulo I also need a preview of the shader applied on a mesh while I create the shader.
     
  11. tweedie

    tweedie

    Joined:
    Apr 24, 2013
    Posts:
    311
    Have there been changes made to the Dither(ing) node? According to the docs we should be able to supply the node a texture as the pattern input, but I don't have that option. I want to use a blue noise texture in place of a Bayer matrix.
     
  12. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    So here it is:
    Unity_2017-09-27_11-13-54.png

    I ended up basing it heavily on another shader I found online but it should be what you want. The model, UVs and mask do change the final result a lot so it's a fine balance between the three.
     


    KRGraphics and nxrighthere like this.
  13. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    Show me the options you are using and take a screenshot of it. You have to keep in mind that transparency has all sorts of ordering issues, so depth is not supposed to work well with transparent objects. What kind of effect are you trying to accomplish? Usually there are a few tricks people use for certain cases.

    You can use the Break to Components node if you want each of the individual components, or the Component Mask node if you want a continuous set of those components, or even the Swizzle node if you need to reorganize the components. Hope that helps ;)

    I'm sorry, not sure if I understand you. Do you mean a shader preview window inside the ASE editor?

    Yes, it's a recent change. You can get the latest development version of ASE through our website (use your invoice number in the field); we only push a new build to the store after it's stable enough or major features get released.

    Can you be more specific on how you want to use the node? Maybe a link? I'm not sure the recent changes allow you to do what you want.

    [edit] OK, I get it now. I did my research and I know what you want with the blue noise. Unfortunately, the recent changes to the (now called) Dither node do not allow you to specify your own blue noise texture; the input port is the value you want to dither, not a way to change the dither pattern. I did a quick test using simple nodes and it's actually pretty easy to recreate, so I'm now considering adding a new option so you can specify your own dither pattern, allowing you to use blue noise. Maybe for the next dev version.
     
    Last edited: Sep 27, 2017
  14. Ziboo

    Ziboo

    Joined:
    Aug 30, 2011
    Posts:
    356
  15. chippwalters

    chippwalters

    Joined:
    Jan 25, 2017
    Posts:
    68
    Definitely digging ASE. Using it for a bunch of stuff on our Alamo Reality project. It does a great job minimizing the number of texture maps I need.

    A couple of questions if you don't mind. I'm trying to create a shader for a very small ground mesh for a MOBILE AR app (this mesh uses MegaSplat and I'm trying to simplify if possible):



    1. Does it seem like it will be faster and more efficient to use ASE plus a couple of 1-2K texture nodes to do this?
    2. Are noise nodes faster, cheaper, or otherwise better for something like this than a 1-2K bitmap? Or do they slow things down more?
    3. I've looked through the examples and am looking for a simple noise example, where I can hook up a basic noise node to a gradient(?) to export an image and a normal map. Is there any example or a quick description of how this is done?

    Thanks!
     
  16. Deleted User

    Deleted User

    Guest

  17. tweedie

    tweedie

    Joined:
    Apr 24, 2013
    Posts:
    311

    Ah, fantastic! Brilliant news. Sounds like I'll await that update eagerly :) Thanks.
     
    Amplify_Paulo likes this.
  18. christophercimbaro

    christophercimbaro

    Joined:
    Aug 30, 2017
    Posts:
    12
    @Amplify_Paulo Woot, thanks so much! I'll take a look soon to understand how you plugged it together.
     
    Amplify_Paulo likes this.
  19. KRGraphics

    KRGraphics

    Joined:
    Jan 5, 2010
    Posts:
    4,467
    This is a nice looking eye... :) For my eye shader, I referenced the eye shader from Alloy... and it is very close to it.
     
    Amplify_Paulo likes this.
  20. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    It should be. I did a quick test adding a new texcoord variable and I was able to use it by calling the correct vertex texcoord UV set, and it worked. Right now you save data from your vertex function into your fragment function using UV texcoords. This is not ideal because our texcoord node only shows the first 4 and you might need more than that. We'll change this in the near future. Our template doc page should give you a hint on how to do it. If you are still having difficulties, report back and I'll send you a working version.

    Unfortunately I don't think I can give you a straight answer; those are all hard questions. I think in situations like this you are better off doing stress tests on the target device. You could be bound by fillrate, ALU instructions, etc., and without knowing how heavy that particular shader is, it's difficult to tell. I know that MegaSplat does some optimizations, so maybe the shader is not too complex, but mobile is a lot more strict.

    I would use that scene and test it on the device; try to cover a large percentage of the camera with your ground. Then use that shader and record the results. Create a test shader and blend as many large textures as you need together to simulate lots of texture fetches (you can try texture arrays as well) and record the results. Then do the same with 1x1 textures and record the results. Finally, create another test case and mix different noise generators to simulate instructions without textures. These tests should give you a general idea of how much each path costs you in the end.

    Because you can be bound by different things, you can easily get diminishing returns even when you use the least expensive solution many times over.

    I'm not sure if I understand your #3, we actually have a "Simple Noise" sample in our package. What do you mean by exporting in that context?

    This is on our TODO list but it's not a priority right now. We understand this is a very good quality-of-life feature and we'll eventually get there, but right now we want to focus on other things first.
     
  21. Ziboo

    Ziboo

    Joined:
    Aug 30, 2011
    Posts:
    356
    You mean I have to create a new template, right? There isn't a node to access a new texcoord directly?
     
  22. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    Yes, you would need to create a new template and add them yourself. The editor uses the remaining available texcoords for the nodes that need them. It's not impossible to have a node that reserves and uses a texcoord directly; it's just a design decision, because we want the template to dictate what's available to the editor. Eventually we want the "vertex to fragment" node to be supported in templates (not there yet), and most likely this specific behavior will change to accommodate that. So for now, you need to add it to the template and use it with the texcoord node (which will be replaced, hopefully tomorrow, by a node that does not have the channel limitations the texcoord node has).
     
  23. Ziboo

    Ziboo

    Joined:
    Aug 30, 2011
    Posts:
    356
    Ok thanks.

    Also, some suggestions that I don't see on the roadmap (which needs to be updated, by the way):

    - Displaying constant values added to nodes like Add, Subtract, ... in the header, like Unreal does
    upload_2017-9-28_11-9-19.png

    - Add a comment per node (displayed above the node)

    - A button that, when clicked, sets every default property value to the material value

    - When you use a shader function you lose the preview, which is pretty annoying.

    Thanks for the awesome work

    EDIT: Is it possible to test if a texture is null? I would like to have a switch for whether the user set a texture or not.
     
    Last edited: Sep 28, 2017
    Amplify_Paulo likes this.
  24. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    It totally needs some updates; we've been focusing on the documentation of the nodes, but the roadmap definitely needs an update too.

    1. Can't promise you anything on this one; we've thought about it before and we still aren't sure how to solve it. It's not just property nodes, it's all unconnected inputs that have internal data.
    2. It's doable; we need to think about how it fits in our UX. I'm all for options, but sometimes too many options put people off.
    3. Good suggestion!
    4. It's on my todo list. I've done some of the groundwork to pick it up later; it's just a matter of priority.

    As far as I'm aware, no, there's no way to detect that inside the shader; that's also one of the reasons texture properties have a default value.

    What you need to do is create a custom material inspector that checks if a texture is null and sets a shader keyword on or off. You set up the custom material editor in the main left panel, and you can use the Static Switch node to create and use a keyword.
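
    A minimal sketch of that approach, assuming hypothetical names "_DetailTex" and "DETAIL_ON" (the script must live in an Editor folder, and the shader needs a matching keyword, e.g. via a Static Switch):

    Code (CSharp):
    using UnityEngine;
    using UnityEditor;

    // Custom material inspector: toggles a keyword depending on whether
    // a texture slot is assigned. Hook it up via the shader's custom
    // material editor field.
    public class DetailShaderGUI : ShaderGUI
    {
        public override void OnGUI(MaterialEditor materialEditor, MaterialProperty[] properties)
        {
            base.OnGUI(materialEditor, properties); // draw the default inspector

            MaterialProperty tex = FindProperty("_DetailTex", properties);
            Material mat = materialEditor.target as Material;

            // Enable the keyword only when the user actually assigned a texture.
            if (tex.textureValue != null) mat.EnableKeyword("DETAIL_ON");
            else mat.DisableKeyword("DETAIL_ON");
        }
    }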
     
  25. Ziboo

    Ziboo

    Joined:
    Aug 30, 2011
    Posts:
    356
    Thanks for the answer.

    Still me: is it possible to get the position of the object in the shader? The pivot position?
     
  26. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    You need to use the "Object to World" node and leave the input port unconnected. This will return the mesh pivot point position. Keep in mind, though, that if your object gets batched, the position changes to the geometric center of all objects in the batch. You can solve this by going to the rendering options and turning on the "Disable Batching" option. The only other way to have the object position and still keep batching on is to send the value through script, or to save its position in the object's UV channels and fetch it in the shader via texcoords.
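
    A rough sketch of the script route, assuming a hypothetical "_ObjectPivot" vector property in the shader (note that per-renderer property blocks can themselves interfere with some batching paths, so test it in your setup):

    Code (CSharp):
    using UnityEngine;

    // Sends the object's pivot position to its renderer every frame so the
    // shader can read it even when batching would otherwise replace it.
    public class SendPivotPosition : MonoBehaviour
    {
        MaterialPropertyBlock _block;

        void Update()
        {
            if (_block == null) _block = new MaterialPropertyBlock();
            Renderer rend = GetComponent<Renderer>();
            rend.GetPropertyBlock(_block);
            _block.SetVector("_ObjectPivot", transform.position);
            rend.SetPropertyBlock(_block);
        }
    }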
     
  27. AbyssWanderer

    AbyssWanderer

    Joined:
    Mar 15, 2016
    Posts:
    77
    Hi everyone.

    Could anyone advise us on how to get access to baked AO in Unity's lightmaps?
    There is a node - Decode Lightmap - but we haven't found its description, and tests gave no results...

    What we aim to do is bake lightmaps with AO in Unity and throw dirt on those places.
    Is there such an option in Unity (btw, Unreal actually has this feature)?

    Thank you in advance!
     
  28. Dambusters

    Dambusters

    Joined:
    Jan 6, 2012
    Posts:
    52
    Hi,

    I'm trying to get a gradient representing the distance between the surface of the object and the geometry behind.

    It would seem that the Depth Fade node should do this, but on testing I find its output seems to be just the camera depth to the object. I've also tried the method in the Water example (using Screen Pos and Screen Depth), but it is also camera dependent.

    The documentation for Depth Fade gives no indication of how to use it.
    Does it need connecting to something else to work correctly or is it a bug?
    I'm using Unity 2017.1.1f1, and I know that Unity reversed the depth buffer recently - could this be the problem?
     
    Last edited: Sep 29, 2017
  29. Amplify_RnD_Rick

    Amplify_RnD_Rick

    Joined:
    Feb 15, 2016
    Posts:
    528
    Hi guys,

    We've just uploaded a new build into our website.
    Here are the release notes.
    • Release Notes v1.3.1 dev 11:
      • Fixes:
        • Fixed incorrect UV variable name on Post-Process template
        • Fixed Perforce integration
        • Fixed preview on 'Fresnel' node for the new tangent mode
        • Fixed issue with 'Screen Position' subtitle
        • Fixed issue with 'Vertex to Fragment' node on templates
      • Improvements:
        • Added two additional nodes to templates,'Template Vertex Data' and 'Template Fragment Data'
          • These nodes allow direct access to vertex and interpolated fragment data from the template
        • Adding vertex code entry tag into Post-Process template
        • Improved fail-safe behavior on attempt to write vertex code on template with no vertex tag declared
        • Minor tweaks on some nodes port names and order
        • 'Dither' node now has an input port that allows the use of a custom dither pattern
        • 'Vertex to Fragment' node no longer generates unnecessary code and now acts as a relay if connected to a vertex path
    Happy shader creations!
     
    tweedie likes this.
  30. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    Hi there. First and foremost, sorry for the confusion; we're actually updating our wiki to make some of these nodes clearer. Unfortunately those two (alongside two others which are similar) are still lagging behind. In fact, the example screenshots have been taken; it's the text that's still missing:

    Depth Fade example
    Screen Depth example
    Camera Depth Fade example
    Surface Depth example

    I think you are misinterpreting the results. Depth Fade should indeed give you what you need; you just have to keep in mind it needs to be a transparent or translucent material. Screen Depth should also work, but you need to do some math on top of it (which Depth Fade does automatically). All four of these nodes are camera dependent, and that's the intended behavior. You said it yourself that you want the distance between the surface and the geometry "behind"... behind? Behind implies a reference; in this case what you are saying is "behind the surface from the point of view of the camera". So these nodes are providing exactly what you asked for. I suspect, though, that's not really what you want. Do you perhaps want the distance from one surface to another regardless of perspective? If so, you need some analytical or precomputed data to know that distance, for instance distance fields. What exactly is the purpose of the shader?
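
    For reference, the math a depth fade boils down to is roughly this (an assumed form for illustration; the node's exact generated code may differ). Both depths are linear eye depths along the camera's view axis, which is exactly why the result is camera dependent:

    Code (CSharp):
    using UnityEngine;

    public static class DepthFadeMath
    {
        // sceneDepth:   linear eye depth sampled from the camera depth texture
        // surfaceDepth: linear eye depth of the fragment being shaded
        // fadeDistance: controls how soft the resulting edge is
        public static float DepthFade(float sceneDepth, float surfaceDepth, float fadeDistance)
        {
            return Mathf.Clamp01((sceneDepth - surfaceDepth) / fadeDistance);
        }
    }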
     
  31. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    We just uploaded a new build; you now have two new nodes that give you access to the vertex data in the vertex function and the interpolated data in the frag function. This means you still need to add the custom vertex data by hand in your own template, but you can now access it and change it as you see fit in the editor.

    Just a heads up, we added a new option to the dither, this should give you what you need for blue noise ;)
    (click image to amplify it or the noise looks weird)
    Unity_2017-09-29_15-40-48.png
     
    tweedie likes this.
  32. Ziboo

    Ziboo

    Joined:
    Aug 30, 2011
    Posts:
    356
    I would like to do some blending with surface intersection (rock over terrain),
    but I would like to use a Masked shader and not a Transparent one.

    This means that I cannot use Masked mode if I want to use the Depth Fade node, right?

    My end goal is to use a dithered alpha mask to do the blending, staying out of the Transparent render queue and using alpha testing instead.
     
  33. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    For that case there are two problems. The first is that, as far as I know, in order to sample the depth buffer behind the object surface and still have depth writing on that surface, you need to change the render type and queue. You can set the material to Masked mode, add something like 500 to the render queue offset, and it will work. The problem is that Unity uses these types and queues to do stuff (namely render to depth), so you run the risk of ordering issues showing up in a real case where objects move around. You need to test this in your case to see if it's enough. (Don't forget to turn casting and receiving of shadows ON.)

    The second problem is more serious, and it's the one I was referring to earlier. Even if you can live with changing the render type and queue, these camera effects will create a swimming effect. This also happens in the water sample we provide and in most water shaders you see around (probably all of them). In the case of water it's usually fine, since it's moving and it's not a big deal if the edges "swim around". But for object blending you'll see your object's blend change with your camera movement and angle. This is usually a deal breaker, and the best option is to use something like signed distance fields: if you have static geometry (like terrain), you can pre-calculate a distance field to that geometry and feed it to the shader to do the blending. Unity does not natively support signed distance fields and, as far as I know, there's no asset in the Asset Store that does :|
     
  34. Ziboo

    Ziboo

    Joined:
    Aug 30, 2011
    Posts:
    356
    Thanks a lot for the long answer.
    I had exactly the artifacts you're describing; I'll try to investigate more.

    Found a bug, or maybe not: when using the ToggleSwitch node, I guess you should add the "[Toggle]" attribute to the shader property, so that it renders as a checkbox in the inspector.
    Otherwise, if the value is set above 2, you get OutOfRange errors.
     
  35. tweedie

    tweedie

    Joined:
    Apr 24, 2013
    Posts:
    311
    Awesome speed, thanks so much for adding this, I think it's a great addition ;) Good stuff!
     
  36. OP3NGL

    OP3NGL

    Joined:
    Dec 10, 2013
    Posts:
    267
    1) How do I create a custom expression derived from another set of nodes?
    2) On the wiki page there are Input & Output nodes... I can't seem to find them in the editor.
     
  37. Dambusters

    Dambusters

    Joined:
    Jan 6, 2012
    Posts:
    52
    Hi,

    Thanks for the very prompt reply.

    What I'm trying to achieve is a soft edge where this object (i.e. the one with my shader) meets other geometry, like soft particles do. I'm using a transparent shader with the Unlit light model and shadows on.

    The Depth Fade node is exactly what I need, but some further investigation leads me to the conclusion that it stops working when I switch the Unity build target from PC/Mac to iOS.

    On iOS it just shows the camera depth to the surface, I think. Is there some setting I can turn on to make this work on iOS (for Metal) like it does in PC mode?

    Excellent piece of software you're developing here BTW!
     
    Last edited: Sep 30, 2017
  38. Johannski

    Johannski

    Joined:
    Jan 25, 2014
    Posts:
    826
    Go to Edit -> Project Settings -> Quality.
    There, take a look at the standard quality level for iOS (you can also change it), and set Soft Particles to true.
    upload_2017-9-30_17-11-17.png
    That should fix it, but beware that creating a depth texture does come with a performance cost (nothing iOS won't be able to handle, but still keep it in mind :) )
     
    hippocoder and Amplify_Paulo like this.
  39. Dambusters

    Dambusters

    Joined:
    Jan 6, 2012
    Posts:
    52
    Unfortunately the soft particle switch didn't work (I had it on already), but I finally solved it by setting the depthTextureMode on the camera in code. In case anyone else finds they have no depth map on iOS, stick this on the camera to activate it:

    using UnityEngine;

    // Forces the camera to render a depth texture so that depth-based
    // shader effects (like Depth Fade) work on platforms where Unity
    // doesn't enable it automatically.
    public class SetDepthTextureMode : MonoBehaviour {

        public DepthTextureMode mode = DepthTextureMode.Depth;
        private Camera cam;

        void Start () {
            cam = gameObject.GetComponent<Camera>();
            if (cam != null) cam.depthTextureMode = mode;
        }
    }

    I do have another problem: when moving the camera back along local Z, the Depth Fade values change dramatically - shouldn't they stay the same?
     
    Last edited: Oct 1, 2017
  40. florian_d

    florian_d

    Joined:
    Apr 13, 2010
    Posts:
    34
    Hi there, just bought ASE and it's a great tool; I really dig the general look and feel. I'm a generalist programmer and I don't know anything about shaders or the general approach to writing them, so a node-based editor is good stuff. And I can previs as I experiment, so it's fantastic.

    I was reproducing this heightmap blending shader in ASE just to understand it and have a go at it. I managed to get it working, but now I'm starting to think that I could reorganize/refactor that in some smaller blocks?

    So, the shader goes like this
    upload_2017-9-30_20-31-40.png

    The core of it is the blending of the heightmap (it's literally the functions described in that website reproduced as nodes)
    upload_2017-9-30_20-32-38.png

    So I'm thinking, maybe this could be collapsed into a smaller block, or into a group? Is that an existing feature that I'm missing? And of course, a couple of minutes later I found the section about Shader Functions. ^_^

    By the way, is there any "Best Practice"/recommendation page? I couldn't find any on the wiki.

    Also, I was looking at the generated code and I see a lot of the following
    Code (CSharp):
    float4 temp_cast_6 = (0.02).xxxx;
    float4 temp_cast_7 = (0.0).xxxx;
    They don't seem to be used, so I'm kind of confused as to what they are and how/where they are used.

    Sorry if the questions seem too naive, but it's really a new field for me.
    Cheers
     
    Last edited: Sep 30, 2017
  41. QuantumTheory

    QuantumTheory

    Joined:
    Jan 19, 2012
    Posts:
    1,081
    Hello,

    I'd like to posterize and then dither the dot product of the world normal and the world-space lighting direction, in an attempt to get a sort of "pixel art" lighting setup.

    The posterization is working just fine, but when I try to take the result and "step" it in the dither node as suggested in the documentation, I either get black or a division-by-zero warning. Could someone provide some help?

    This is in the custom lighting model mode too.
     
  42. Zehru

    Zehru

    Joined:
    Jun 19, 2015
    Posts:
    84
    Hello everyone. I'm really interested in this asset.
    I just have one question: is it a good idea to use it to create shaders for a 2D game? (Sprites with transparency, UI, particles, shadows and lights.)
     
  43. Caronte3D

    Caronte3D

    Joined:
    Sep 16, 2014
    Posts:
    43
    Hi!
    Is there a way to use any kind of global variable to accumulate values between cycles?
     
  44. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    1. Not sure if I understand; could you be more specific?
    2. Those two nodes only show up when creating a shader function, since neither works outside of one.

    I see you found the issue yourself. Indeed, for the depth buffer to work at all, a camera must be generating it. The changes when you move the object in local Z could be attributed to the camera perspective. Yes, the values shouldn't change because the difference in distance isn't changing, but by moving the camera in Z the perspective will make the surface change, which in turn results in slight changes to the buffer. It shouldn't be a problem, so I'll take a look to see if something is up. If you feel like it could be a bug, send us a repro case, or simply the shader in question, so we can take a look.

    Hi there, glad you like it :) feel free to give suggestions, we are listening.

    Documentation has been a pain for us in the past weeks; it's very time consuming, but we are getting there. You might find that some information is still missing or incomplete. We don't have a page like you suggested, but we do have some ideas on how to get there. Also take a look at the Manual page, since there's some hidden stuff there.

    That generated code happens when the editor needs to convert one type of variable into another; it creates a temporary cast variable so it can be reused various times. This mainly happens when a certain output port connects to multiple input ports. I admit that looking at it isn't too pleasant, but it's a safer way to get there without generating extra code. In those two cases it's converting a float value into a float4. If you do a search for those temp_cast variables, they should be used somewhere.

    Can you show me your setup? You should only need to put something in the main input port for it to create a step call to do the dithering; there's no need to use your own Step node. Maybe the documentation is worded poorly, but it's trying to say that it creates a step call, not that you need to create one. Report back how it went.

    I'm biased here, of course, so take my word with a grain of salt. With the new addition of templates, which allows the creation of custom sprite shaders, particle shaders, etc., you should have all that you need to create your own effects. And the few things that are still missing (multi-passes) are coming in the near future. Of course, if there's something else that you find to be missing, we are always checking for suggestions to add new stuff, so ask away. If you are looking for premade effects, which we don't provide, maybe the editor is not for you. We do, however, support shader functions, which allow you to create a certain effect once and reuse it in any other shader you want. Hope that helps.

    As far as I'm aware, no. There's no notion of continuity within shaders. The two ways to achieve something like that are to use the CPU (scripting) to fetch and send values and accumulate them, which is kinda slow so I wouldn't recommend it, or to use render textures to save the results and operate over them to mimic some kind of accumulation. Some effects use this technique and are usually called "temporal ..." (i.e. temporal anti-aliasing).
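
    A bare-bones sketch of the render texture route; "accumMaterial" and its shader are hypothetical, the idea being that the shader reads the previous state via _MainTex and writes the updated accumulation into the destination:

    Code (CSharp):
    using UnityEngine;

    // Ping-pong between two render textures to accumulate results over frames.
    public class AccumulateRT : MonoBehaviour
    {
        public Material accumMaterial; // blends new input over the previous state
        RenderTexture _a, _b;

        void Start()
        {
            _a = new RenderTexture(512, 512, 0);
            _b = new RenderTexture(512, 512, 0);
        }

        void Update()
        {
            // _a holds last frame's state (_MainTex); _b receives the update.
            Graphics.Blit(_a, _b, accumMaterial);
            RenderTexture tmp = _a; _a = _b; _b = tmp; // swap for next frame
        }

        void OnDestroy()
        {
            _a.Release();
            _b.Release();
        }
    }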
     
    Zehru likes this.
  45. Dambusters

    Dambusters

    Joined:
    Jan 6, 2012
    Posts:
    52
    Thanks Paulo,

    Yeah, I'd appreciate you taking a look at reproducing it on your end. My test shader is simply the example you gave: Depth Fade node into Emission on a transparent shader. Adjust the Depth so it works, then move the camera out along Z and the depth value will increase to 1 (white). I think it's a bug, but I'm not sure if it's at your end or in Unity.

    Cheers,
    Bruce

    Screen Shot 2017-10-02 at 11.47.15.png

    Screen Shot 2017-10-02 at 11.41.59.png

    Screen Shot 2017-10-02 at 11.42.31.png
     
  46. davide445

    davide445

    Joined:
    Mar 25, 2015
    Posts:
    138
    Regarding this video:

    I understood that I can use Amplify to create a custom shader able to make part of the mesh disappear based on a specific (moving) overlapping geometry.
    I found this can also easily be done in Unreal with out-of-the-box Blueprints. What is missing in UE is the possibility to drive Cascade particle emission from the shader, so as to make the particles emit on the disappearance border.
    Will it be possible to achieve that with Shuriken + ASE?
     
  47. QuantumTheory

    QuantumTheory

    Joined:
    Jan 19, 2012
    Posts:
    1,081
    @Amplify_Paulo Thanks so much for your help on this. I noticed the dither had an input in the documentation, but when I add one with the latest Amplify Shader, the input isn't there. So I add the step node myself. Obviously I'm doing something wrong, as it's not working. Here's a shot of my shader:



    By the way, the Amplify group has stellar customer service judging by this thread alone. The attention you guys give is outstanding. Thanks so much!
     
    hopeful likes this.
  48. Amplify_Paulo

    Amplify_Paulo

    Joined:
    Jul 13, 2017
    Posts:
    297
    I tried just that and everything seems to be working fine. Are you perhaps using post-processing effects? My first bet is some kind of tone mapper or bloom messing with the final result. Also, what's your Unity version? And how are you setting your transparency? Setting the main blend mode dropdown to either Transparent or Translucent should do the trick.

    You might be looking at things from the wrong perspective. If the idea is to create an effect where the geometry dissolves with some sort of rule and particles are born with a similar rule, something must exist that drives this effect the way you want. I don't know what native blueprint you are talking about, but in UE's case the blueprint is the driving force. In Unity you would need some sort of script to get the same effect (or you could use something like PlayMaker). Neither UE's material editor nor Unity shaders can drive such a complex mix of effects by themselves, nor can the particle emitters alone, just like you stated.

    What you need is some sort of rule that can be replicated in both the shader and the particle emitter. If that rule is the expansion of a sphere, then dissolving the geometry in a spherical manner is doable and simple. In terms of particles, you can emit from spheres and from meshes; the tricky part is emitting from the intersection of the two. I don't know exactly how to do it, but I don't think it's impossible. I've done some crazy emission experiments with Particle Playground before and I believe I could do it with it; I'm not sure how much more difficult it would be with Shuriken. I don't know UE's Cascade well enough to give advice on it. Finally, you need to set up some scripting/event animation to set the right values in both the shader and the particle emitter (expanding the sphere).

    So in short, yes it's possible, but requires more work and knowledge.

    Ahhh, I see, you are using an older version; that input is fairly new. Grab an updated version from our download page (just put in your invoice). The node isn't called "Dithering" anymore, just "Dither", and it has a few more options, including the port that you use to apply the dithering to a value.

    Thx for the kind words :]
     
    davide445 likes this.
  49. Dambusters

    Dambusters

    Joined:
    Jan 6, 2012
    Posts:
    52
    Screen Shot 2017-10-02 at 20.06.38.png No post-processing effects, I'm afraid! To be sure, I just created a new empty project, added AmplifyShaderEditor_v131_010 and nothing else. Blend mode is Transparent. The light model is Unlit. I created the shader from scratch (rather than importing anything). Same problem.

    Screen Shot 2017-10-02 at 20.15.09.png Screen Shot 2017-10-02 at 20.15.25.png

    I'm using Unity 2017.1.1f1 on a Mac (it also happens on Unity 5.6.1p1). This occurs for me in both PC/Mac and iOS build mode.

    Scratching my head over this one... :)
     
    Last edited: Oct 2, 2017
  50. Cleverlie

    Cleverlie

    Joined:
    Dec 23, 2013
    Posts:
    219
    Hi, is there an easy way to get the world distance from any pixel to the center of the transform of the rendered mesh? I want to do sort of an omnidirectional decal, with a tricky solution, for a tower range visualization.

    The way it works is this:

    I create a normal 3D sphere object in the scene, set its position to the tower position and its scale to match the tower range, then I assign a material made with ASE.
    The material has front-face culling, meaning it renders backfaces only. It always renders on top of any geometry (I guess I have to set the render queue to Transparent or Overlay), and then it colors each pixel based on the distance from that pixel's depth-buffer position to the center of the sphere. I have to transform from pixel-coordinate depth to world position and calculate the distance to the sphere center position.

    Can anybody help me with this?

    EDIT: to clarify, what I need basically is the world position of a pixel given the depth buffer.
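
    For what it's worth, the unprojection math looks roughly like this on the CPU side (illustration only; in the shader you'd do the same with the inverse view-projection matrix, and depth conventions vary per platform, e.g. reversed-Z and 0..1 vs -1..1 ranges, so this assumes the OpenGL-style convention):

    Code (CSharp):
    using UnityEngine;

    public static class DepthToWorld
    {
        // uv: screen position in 0..1, rawDepth: depth-buffer value in 0..1.
        public static Vector3 WorldFromDepth(Camera cam, Vector2 uv, float rawDepth)
        {
            Matrix4x4 vp = cam.projectionMatrix * cam.worldToCameraMatrix;
            // Rebuild the clip-space position, then unproject and divide by w.
            Vector4 clip = new Vector4(uv.x * 2f - 1f, uv.y * 2f - 1f, rawDepth * 2f - 1f, 1f);
            Vector4 h = vp.inverse * clip;
            return new Vector3(h.x, h.y, h.z) / h.w;
        }
    }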