Discussion in 'General Graphics' started by mirrormask, Jul 2, 2020.
Why didn't you just write your own render pipe?
Someone mentioned Unity having an edge over Unreal on Switch. Having worked with both on the platform, I disagree.
Unity by default is a performance hog on last gen consoles due to its fat main thread. There's also an enormous main thread overhead when you have to render anything.
GPU wise, UE4 offers significantly more options to reach your desired performance goal on Switch: non-HDR mobile forward, HDR mobile forward, mobile deferred, desktop forward, desktop deferred, and with additional fine grained control over several aspects of each one. You can also freely switch between them all whenever you want to compare, unlike Unity where changing between pipelines is a destructive operation requiring you to change your lighting setup and code.
It's very likely you'll need to roll something of your own or purchase an asset (which you might have to modify because it may not have been tested on Switch) to work around some slow part of Unity. That is rarely the case with Unreal.
For example, we had to make several changes to Unity's PPv2 package to reduce its performance overhead on Switch. The way it was designed and coded is simply not performance conscious. And URP inherited a lot of its inefficiencies.
Good to know, thank you very much!
Some of the questions show how out of touch Unity is.
"Would you prefer to target multiple platforms from a single project?"
Huh... HELL YES.
Pretty much everything they are asking about, Unreal already does in some form.
Really, Unity seems in dire need of an organizational shake up. Example:
I'm using Addressables on some port projects now (due to certain platforms' patching requirements, all Unity projects must use asset bundles to try and keep patch sizes small, and every single project we've quoted so far still uses Resources for everything) and it doesn't offer per-platform settings at all. I would like to have different compression settings for different platforms, for example, but it's one-size-fits-all. Which is bizarre, because the Asset Graph, which preceded Addressables, did have a nice implementation of platform-specific settings for everything.
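The workaround we've ended up with is a build script that rewrites every group's compression before each platform build. A minimal sketch (assumes the `BundledAssetGroupSchema` API from the Addressables package; the platform-to-compression mapping here is purely illustrative):

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEditor.AddressableAssets;
using UnityEditor.AddressableAssets.Settings.GroupSchemas;

// Sketch: force a per-platform compression mode onto every Addressables
// group before building, since the group settings themselves are
// one-size-fits-all. Call this from your build pipeline.
public static class AddressablesPlatformCompression
{
    public static void ApplyForTarget(BuildTarget target)
    {
        // Assumption: LZ4 where the platform's patching system recompresses
        // anyway, LZMA elsewhere. Adjust to your patching requirements.
        var mode = (target == BuildTarget.Switch || target == BuildTarget.PS4)
            ? BundledAssetGroupSchema.BundleCompressionMode.LZ4
            : BundledAssetGroupSchema.BundleCompressionMode.LZMA;

        var settings = AddressableAssetSettingsDefaultObject.Settings;
        foreach (var group in settings.groups)
        {
            var schema = group.GetSchema<BundledAssetGroupSchema>();
            if (schema != null)
                schema.Compression = mode;
        }
    }
}
#endif
```

It's a blunt instrument, but it at least recovers the per-platform behavior the Asset Graph used to give you.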
Asset Bundles are CANCER and Addressables are tiny band-aids.
And I'm fairly sure internally there are no drawbacks from this. The code that does not need to run, does not run. Internal plumbing at its finest, so the developers don't have to struggle to get the best from it.
So perhaps you can give me a reason why you chose Unity?
I do porting, codev, and other engineering outsourcing services, so I have to work with whatever the projects we are hired for are using. Thus, I cannot avoid Unity.
I'm not sure I got what you're referring to. You mean switching between pipelines? If you want to compare how your project runs in built-in versus a SRP you have to put up a lot of effort, it's not a simple toggle.
Story time: a customer came to us wanting to port their game to Switch. Scene complexity was low, nothing too fancy; it would easily run on Switch.
However, they built the game using HDRP on 2019 LTS. And since HDRP no longer officially supports Switch, the somewhat simple port became far more complicated, risky and expensive: retooling the game to URP was not possible because crucial features with gameplay implications were missing, and going back to BRP wasn't possible because they used Shader Graph and VFX Graph. So the only options were either modifying HDRP to run on Switch (which would require our more experienced graphics programmers to be allocated to this port), or porting the game to the latest 2021.2 beta (upgrading projects like this is always time consuming and full of unexpected issues) and praying the missing URP features are functional there. We haven't heard from them since.
It was also the only project we were quoted for which used an SRP so far. Every single other port quote we got was using BRP, even the ones on Unity 2020.
So what engine would you choose for your own game if you were making one - 3D title of any complexity of your choice, and the single main reason why?
Just wondered as it's illuminating.
I can be somewhat biased due to my experience, but my answer is UE4, the single main reason being full access to the source code.
When you have to ship a game, reporting engine bugs is useless. There's no telling whether you're going to be heard, and if you are, there's no telling when (if ever) the bug will be fixed, which is very likely to be long past your worst case scenario release date.
Just yesterday I spent almost the entire day making modifications to a decompiled Unity 2018.4 editor DLL, in order to add a crucial feature we need to the Switch building process which was only added in 2020.1. The project is near shipping date and upgrading all the way to 2020 LTS at this point in time would be utter madness. We are also at the cusp of a PS4 SDK expiration and will likely need a waiver to release using the expired SDK for the same reason.
In contrast, years ago we did our first UE4 port job, porting a game from Xbox One to PS4. The UE4 version used by the game relied on a PS4 SDK which was no longer accepted for submission. Instead of risking upgrading to a newer version, we just took a diff of the PS4 platform code between the old version and the first one which supported the newer SDK, and applied the changes to the old UE4 version, which took about three days of work at most.
Back to Unity, since they introduced packages we made project-specific modifications and bug fixes to several of them, as well as debugging into their source to understand why certain things were happening the way they were. This has been invaluable and there's no reason to believe the source code behind the closed doors is somehow of a so much better quality that the games we worked on wouldn't benefit from at least being able to peek at it.
Now, UE4 is not all peaches and roses, of course. It has its issues, and I do have my own reasonably large list of grievances with it, many of which aren't feasible to "fix on our own". But for that last mile of getting a game shipped, UE4 so far has offered me the best support (I won't say "experience" because shipping is never pleasant, anywhere).
I've had one or two using URP, but only because they started up recently and were sold on it by Unity, along with a bunch of other non-ready tech (Like rendering everything through entities and the hybrid renderer v2). Every project which is on BiRP hears the cost of switching and the complications involved and stays on BiRP, but because of all the push from Unity they repeat this discussion every 3 months, wasting everyone's time to come to the same conclusion.
With 6 figure enterprise support and source access it's a bit better, but generally speaking Unity is a really hard engine to ship on: super easy to get started, but none of it has really been tested at scale, and every wheel starts falling off as you're trying to ship. I shipped a game with tens of thousands of asset bundles on Unity 5.2, and it was clear no one had ever stress tested that system, or the actual recommended workflow.
When you do run into bugs (sometimes even with enterprise support), it's unlikely you'll ever be able to talk to the person who wrote the code, and instead you feel like you have to shout through layers of middle men. If you scream loud enough on Twitter, or get the attention of key people, they will definitely help you out- but the official channels do not work well. By contrast I have found it fairly easy to get in touch with epic engineers, likely because the whole organization is much smaller, and code ownership is reasonably defined and known.
In general, the bias of "easy to get started, hard to ship" is deeply ingrained in Unity. For instance, if I create a plane it comes with a physics representation. This is great for demos, but horrible for shipping. A few days before shipping a game, a UI artist added a graphic to the interface that scaled up and down, and forgot to remove the physics component that was automatically added. This caused the physics world to rebuild every frame, leaking memory, until the app crashed. At that point in production, this cost around a man-week of time to track down (when you include managers/meetings/QA/10 minute repro). The right solution is to not add physics by default, but a decent compromise would be to make physics objects respecting dynamic scale an opt-in feature, since it's rare that you want that. The current system leads to deadly silent failures, which aren't easily caught until late in production.
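Failures like that are cheap to catch with an editor-side audit instead of waiting for a runtime crash. A sketch of what we mean (assumes Unity's editor API; the "is this a UI object" heuristic is a stand-in you'd replace with whatever fits your project):

```csharp
#if UNITY_EDITOR
using UnityEngine;
using UnityEditor;

// Sketch: scan the open scenes for colliders that live under a UI canvas,
// where they are almost certainly an accidental default (e.g. the collider
// Unity adds when you create a primitive). Run before builds or from CI.
public static class StrayColliderAudit
{
    [MenuItem("Tools/Audit Stray Colliders")]
    public static void Run()
    {
        foreach (var col in Object.FindObjectsOfType<Collider>())
        {
            // Assumption: anything under a Canvas doesn't want 3D physics.
            bool isUi = col.GetComponentInParent<Canvas>() != null;
            if (isUi)
                Debug.LogWarning(
                    $"Probably accidental collider on UI object: {col.gameObject.name}",
                    col);
        }
    }
}
#endif
```

A warning at authoring time is a few seconds of work; the same mistake found via a memory leak in QA is a man-week.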
I wouldn't say there are no drawbacks. Keeping all possible render options in one pathway is significantly more complex than having separate renderers. That said, the Unity renderers are TOO separate. There wasn't really a need to make them like this, and it would have made development much easier for everyone. For instance, depth only passes, depth normal passes, motion vector passes, shadow passes, these could have all been shared between renderers. Then when you integrate something like DLSS2, it's much easier to do it for all render pipelines (since it requires motion vectors). Same for things like Raytracing, Camera Relative Rendering, etc- there's no reason these can't be added to URP other than they made two code bases and now it's twice the work. And none of that stuff has much to do with being able to customize the renderer. We're just left with 3 render pipelines, costing 3x as much work to maintain or expand, with constant questions about when pipeline X gets feature Y from pipeline Z.
I agree. It's wild that Unity chose to build new layers on top instead of trying to improve and extend the base behind it all. Asset bundles are basically built on top of the resources/sharedassets system (they're basically the same format, but stored in a separate file), then Addressables is built on top of asset bundles (so many people think they are separate things).
If Unity would just improve how they package assets with the player so it works better with console patching systems it would shave off a chunk of work we have to do on every port job.
Your points are fair, but from the point of view of actually crafting a game, Unity, until now, gave us a lot more freedom and ease of use than Unreal.
As far as I know, in UE, it's quite bipolar: you either go full "artist" (aka "I don't have a clue what a computer is") or you go full "senior programmer engineer" (aka "Let me create a new full system in C++ and modify the engine source real quick").
There is (was?) no actual middle ground. And if you want to do it right, there is no way to avoid having a team with specialized individuals.
On Unity, by contrast, up until now, scripting in C# has been super easy and as fast as C++ (thanks to Unity's IL2CPP).
Shader programming was a breeze using Surface Shaders.
You didn't have to go to the extremes of UE and it simply worked, meaning you can make a game by yourself or with a very small team.
Nowadays things are quite different in Unity land, and it's becoming so cumbersome that soon there will be no real incentive to use it instead of Unreal.
You also have a point. Shader development was always a strong point in Unity compared to Unreal and there's a smoother path from simple to complex, while in UE there's a steeper slope. Something like "11-11 Memories Retold" for example would be much harder to make and require a lot more boilerplate in Unreal compared to Unity. That advantage has been eroding with SRP, as Unity is turning into bizarro Unreal bringing in the drawbacks without the benefits.
If you don't have a "tinkerer" in your team who can take advantage of accessing UE's inner workings, indeed Unity becomes a better proposition, since you'd be similarly stumped if you hit a wall (maybe more since the availability of source code means sometimes there are loose ends in UE4 you won't find in Unity).
But if you're shipping for anything other than Windows/Mac (and mobile, to some degree), you need to have a "tinkerer" in your team, even when using Unity. And in my experience Unreal is more welcoming to tinkerers compared to Unity.
Because every moment spent on that is a moment not spent on making the game all it should be. Among other pretty much obvious reasons.
Having said that, I have had to re-implement a lot of things in Unity using compute shaders, mechanisms to manage lights, culling, particles and such that should be built-in and generally available to everyone. The new SRPs don't address most of these issues yet.
I'll answer this question as well:
It's similar to what you might hear in response to "Why do you use Microsoft Windows as your primary operating system?" or "Why do you primarily use the C++ programming language?" A common response may be along the lines of "Because it's the most widely used platform, providing the most customers / jobs / career opportunities, not because I think it's the best platform."
I am using Unity because of the SRP mess, believe it or not, from an opportunistic mindset -- because of its ubiquity and its shortcomings, which create a huge gap and demand in the market. As a consultant, it's a career opportunity. I noticed this mess of SRP rapidly developing in 2017/2018 and decided then to make Unity my primary engine, knowing that there would be a large number of developers and studios already invested in Unity and interested in using URP, HDRP, and related "bleeding edge" and "breaking change" graphics features, but unable to resource internally to get up to speed on how to navigate them or port from BRP, etc. How could they? So 90% of the work I've been doing in the past 2 years is because of SRP's shortcomings and steep learning curve / inaccessibility. Which sometimes results in advising my clients to stick with BRP, or to use another engine or framework like Unreal or Godot if it happens to meet their needs "out of the box" better than Unity can, so they can cut their losses invested in Unity (if any), for that project at least. I don't have any personal stake in anyone using Unity or SRP; I just do whatever I can to help my clients achieve their goals and assess cost, risk, etc. for all the options available to them.
Even while I am very openly critical of Unity and SRP, the bigger problems often drive me mad and make my job much harder than I believe it should be. But if there were no shortcomings, I'd need to pivot my approach to make a living. So it's a double-edged sword for me. Though as technology progresses, major engines (especially those as bloated, ubiquitous, and focus-less as Unity) will always have "gaps to fill" for many use cases and industries. Keep in mind that videogames are just one industry Unity is used by, and Unity Technologies errs towards trying to hopelessly meet the needs of all of these industries and their segments, and thus will always "fail", or at least lag behind a solution that targets and focuses on a particular industry, segment, or use case. I suspect many Unity Asset Store "tool" publishers filling the gaps that Unity and/or SRP leaves unresolved may have similar thoughts and sentiments.
I'm going to voice a controversial opinion - URP and HDRP should just be merged into a single, standard SRP. Want a different lighting model? Great, change a config option on your pipeline. How about ray tracing? Awesome, that's another config option. Maybe you want volume effects? Easily done. Right now there is no reason to keep HDRP effects separate from URP effects. Combining the pipelines would eliminate the confusion and headaches of two sets of shader libraries. Surely the lighting, shadow and other texture formats can be abstracted away into macros or the like?
Since HDRP now has a render graph architecture, which allows it to reorganize pass dependencies depending on what is actually being used on the frame, this sounds quite doable. Just port over everything from URP into HDRP as optional features and let the render graph generation sort things out.
I wouldn't even care if I had to gut all my URP specific code and start over, as long as there was a single unified rendering pipeline going forward with pluggable and configurable pieces.
New survey on render pipelines: https://forum.unity.com/threads/uni...ur-needs-for-render-pipelines-survey.1144796/
This is actually a good survey. It seems to ask the right questions. Probably nothing will ever come of it, but it's at least vaguely promising.
This is a great point, and while not impossible in any case, this seems more feasible to maintain if Unity were to drop support for WebGL (1.0 and 2.0), OpenGL ES 2.0, and anything pre OpenGL ES 3.1 / Shader Model 5.0, which is exactly where Epic has drawn the line with Unreal's renderer (which also uses a render graph architecture). This leaves only "compute (shader) capable" platforms, as Unity calls them, and vastly simplifies the scope in developing and optimizing a single scalable modern render pipeline.
Problem is, Unity plans to support all of these platforms for as long as they have a significant global presence, and even improved their WebGL 1.0 support this year (this is what I mean by Unity not having any focus on a particular industry or market segment, whereas Unreal is focused on the high end, so when it comes to mobile they have a strict cut-off point). I'm not here to assert or defend that it's the correct idea, but I think the reasoning is that (unlike for most Unreal devs) completely dropping support for these platforms would negatively affect more Unity devs, to a greater degree, than those that would benefit from Unity streamlining to a single modern render pipeline right now. There's a lot of widely varying render pipeline use cases in this community and trade-offs to consider.
Eventually this could / should happen, after WebGPU gains more adoption and older mobile devices are replaced until they are mostly or all compute capable on a global scale. That will happen, but don't hold your breath -- it will be many years from now. If anyone is dead set on a single render pipeline engine in the meantime, I recommend using Unreal (mid to high end 3D focus), Godot (low to mid end or 2D focus), or the many other engines and frameworks, depending on your project's scope and needs. Unity is predictably going to continue to support a massive scope of platforms with its render pipeline(s) and not have a particular focus. This may be beneficial or detrimental to your projects and teams.
Your point about the difficulty scaling down below compute shaders is good. However, this is a temporary requirement (until those platforms become obsolete) so I wonder if it would have made more sense to allow BIRP to handle that for now, and to focus on a single new pipeline that is a UE style compute shader based pipeline. Building a whole new URP to meet that criteria when BIRP can already do it seems like it may have been unnecessary effort and fragmentation.
I think discussing how many pipelines Unity develops misses the point. The real issue is the lack of cross-compatibility and auto-upgradability.
Right now I can make one project and have it 'just work' on almost every platform. While there are differences between these platforms, I can just tab over to the settings for that platform and tweak them.
So why doesn't a system like this exist for the render pipelines? If you think about it, all 3 pipelines right now are basically the same: I make a scene with a camera, lights, and materials. While the settings for these differ between pipelines, there is also a lot of overlap.
This overlap extends all the way down to hand-written shaders. My BIRP surface shader outputs albedo, normals, emission, etc. This is exactly the same as URP and HDRP, so there's no reason why my shaders shouldn't 'just work' already. Even if HDRP includes some new outputs that don't exist in the other pipelines, there's no reason why Unity couldn't make a way for me to add these and have them get stripped from other pipelines like Better Shaders already does.
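Today, the closest you can get to "just works" is manually duplicating SubShaders and selecting them via the `RenderPipeline` tag -- which is exactly the duplication being complained about. A skeleton (pass bodies omitted):

```shaderlab
Shader "Example/CrossPipeline"
{
    SubShader
    {
        // Picked up only when URP is the active pipeline.
        Tags { "RenderPipeline" = "UniversalPipeline" }
        Pass { /* URP forward pass: HLSLPROGRAM ... ENDHLSL */ }
    }
    SubShader
    {
        // No RenderPipeline tag: fallback used by the built-in pipeline.
        Pass { /* BiRP pass: CGPROGRAM ... ENDCG */ }
    }
}
```

Every lighting tweak now has to be written twice (three times with HDRP), with different include files and macros in each block, which is the maintenance burden a shared output contract would eliminate.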
If such a system existed then Unity and the community could work on however many pipelines they like. People could experiment with any pipeline and easily switch between them. People could make assets and tutorials in any pipeline and have them be useful to everyone else. Unity could release Snaps HD assets and not have them become immediately broken and get 1 star reviewed by everyone, etc etc.
Yes and no: you can do just that with surface shaders, but you can also customize the lighting function and final pixel color.
I think it's a wide spread error to think about shaders as if everything had to follow the Physically Based Render style (PBR).
First of all, because that's just ONE approach to rendering: It has become the de-facto standard for cross application artistic development, but it's not the only one nor the best in every scenario.
The fact that everyone and their mother is using PBR to render makes every game out there look the same!
Also, this approach is much heavier on GPU and CPU resources than non-PBR. (specifically by the use of plenty of textures for a single material and the added complexity of the shader calculations.)
I have a feeling that everyone is obsessed with physical accuracy on renderers, when what we should seek is for our games to look GOOD.
Reality isn't made to look good! Reality has been imposed on us, art should seek something BETTER than that.
Realism is overrated, especially in game development.
And I speak as a 3D developer making a 3D game that looks quite realistic, but I don't use PBR for that, because it's heavy and because it imposes limits on me that I don't like (artistically speaking).
I agree. Even for realistic rendering there are many things you cannot get looking close to real life with PBR, at least with the limited outputs we're given.
What I'm suggesting is that there should be some kind of system that allows me to write one shader file, make it output any number of things, and it should be up to the render pipeline to determine what to use. This would include some kind of final color function like you describe, as well as every other feature people could think of. The same goes for Camera settings, Light settings, etc.
Right now if I hand write a vert/frag shader with my own lighting then it works in BIRP and URP. While it renders in HDRP, it doesn't react to the camera exposure settings so it's useless. What am I supposed to do, use Shader Graph? Graph tools are a joke for complex shaders, and if I look at the generated output to see what I'm missing it's over 10,000 lines of code which nobody has time for. Even the Light component's intensity setting doesn't work in HDRP, because for whatever reason it was moved to the HDAdditionalLightData component. Why wasn't the Light component just modified instead? If it's for the sake of modularity then why didn't they bother to add the extra 5 lines of code to detect changes to Light.intensity so you can use both?
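For anyone who hasn't hit this yet, the Light split looks roughly like this in code (assuming HDRP's `HDAdditionalLightData` component; exact members vary by package version):

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class LightIntensityExample : MonoBehaviour
{
    void Start()
    {
        var light = GetComponent<Light>();

        // In BiRP/URP this is all you need:
        light.intensity = 2.0f;

        // In HDRP, the value that actually drives rendering lives on the
        // companion component, so the line above is silently ignored:
        var hdLight = GetComponent<HDAdditionalLightData>();
        if (hdLight != null)
            hdLight.intensity = 2.0f; // HDRP physical units, per light type
    }
}
```

The silent failure is the worst part: nothing errors, the light just doesn't respond.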
I've only scratched the surface with SRP usage and have already found so many pain points, I'm sure many more exist. If Unity wants to change their rendering approach then fine, but it just seems like they deliberately break compatibility for no reason.
Absolutely! A big problem right out of the gate is that SRP requires custom passes to really do anything special, but there is really no support for custom passes. Shader Graph does not do them, so you have to write them manually, which is not a big deal. But to use any of the SRP shader library you basically have to reverse engineer whatever pipeline you're working with (no documentation...) just to use the features for that SRP. Then you have to modify every shader to include the pass, not to mention that none of the code is laid out in a very functional manner (lighting), so any little change or refactor (they do it a lot) is going to break what you hand wrote. This is in no way flexible or modular.
What they need to do is add a Generic/Custom target to Shader Graph and include a built-in way to mass-add properties and passes to shaders (including the standard ones), something like BetterShaders does with its stacking. You should be able to stack extra properties and passes in a config and recompile all shaders using core templates.
That's just my opinion on the shader aspect though... there are plenty of other things that should be done to the Unity SRPs and the Core SRP pipeline as well.
honestly, I think the new render pipelines are flawed beyond repair.
They should just scrap that and implement single pass lighting and better batching methods in Forward Built-in instead. (per shader instead of per material, better depth sorting to avoid overdraw)
Seriously, with single pass lighting and better batching methods for Built-in, nearly every Unity game since Unity 5 would see its performance more than double without any compatibility drawback.
This SRP thing is complete suicidal non-sense.
It's the worst thing that had happened to Unity EVER.
Please fill out this survey and let them know:
As I have suggested to Unity before, there is no harm in LTS supporting ES2 but dropping it for 2022 and allowing modern Unity to be engineered for this decade.
The reason that makes sense is because new features WON'T be beneficial to ES2 platforms. The hardware sucks too much. So Unity is keeping these platforms because they're afraid that communicating "LTS is fine!" will not be accepted by customers who have no real technical know-how.
So yeah, Unity likes to hold itself back because there is no staff that can understand how to sell LTS to ES2 while modernising the rest of Unity.
*it's not a reflection on the staff, I genuinely think it's a conundrum they've not addressed yet
I'll argue that the WebGL 1.0 support justification is tenuous at best once WebKit gets updated. It's such a marginal market with very little impact.
If Unity doesn't drop 1.0 support when WebGL 2.0 is finally activated in default in WebKit this fall, they are shooting themselves in the foot with a bazooka.
Just make a barebones SRP for ES2+WGL1, or just keep BiRP around for those. It's not like you'll be able to do anything fancy on those targets anyway.
A sensible approach might look something like Open Sourcing the old engine and BiRP, and then moving on...
They can support webgl for a few more years with the 2021 LTS release. Would be nice if they just drop it after that as the spec is garbage still.
I would argue they can just leave the legacy (standard) pipeline for this case. At some point they drop support for standard pipeline and leave just SRP, that would be a good time to dump it entirely.
A bit tangential thought, but still. One of the issues people often mention about Unity (including this thread) is their lack of dogfooding and that a lot of issues stem from that. Reading their case study about Subnautica (that unfortunately reads more like marketing material rather than a case study), it seems like Unity handled Switch port themselves. So I guess it can be considered dogfooding to some extent? I really hope they use their own experience from such projects to improve the engine.
A good step in the right direction, latest 2021.2 beta:
WebGL: The WebGL 1 Graphics API is now marked as deprecated and will be removed in a future release of Unity once all major browser vendors have released browser versions with WebGL 2 enabled by default. (1345140)
So, what pipeline are you using?
Here's the thing, you get growing older pains too.