Discussion in 'General Discussion' started by AnimalMan, Oct 28, 2022.
No. It uses Torque. You can find this out with Google.
Uhm, Wikipedia claims it's made with Torque 3D.
Google claims it is Torque.
The thing is, in this particular case, the engine doesn't matter much, because the core of this thing is their soft-body tech, and that's either written from scratch or a custom solution (they could have based it on Bullet, for example). So you could make it in almost anything that provides a scenegraph.
I look at it and think it could be done in Unity.
It could be done in any engine. It could also be done without an engine. It is the case where the engine does not matter much.
I do wonder how one would go about attaching a soft-body sim to Unity?
From my work on procedurally generated meshes, I haven't found a way to modify a high-quality mesh (i.e. 50k+ polygons) in real time (not even close, and not from a DLL either), because there's a bottleneck in the Mesh object, and that would be necessary for soft bodies, wouldn't it?
Unless of course you do the whole rendering yourself too from a DLL and don't use Unity's pipeline...
You would make a soft body sim.
Or you could use a clever car armature and weight it.
Some of the crush damage looks to be done with bones, and some of the damage is just a swap of materials on a warped model. Like he shrinks the bones, warps the car, ejects loose parts.
It would be possible to not even apply physics, but just do a wobble adjustment of its spine. I think it would be possible to achieve that level of detail. But would it take building a physics system from the ground up, or could you get away with using the Unity rigidbody and joint system?
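For reference, BeamNG-style soft bodies are usually described as a node-beam (mass-spring) system rather than rigidbodies and joints: point masses connected by spring "beams", integrated with Verlet, where permanently shortening a beam's rest length is the crush damage. A minimal sketch of that idea; all names, constants, and the damping scheme here are illustrative, not taken from BeamNG:

```python
# Node-beam (mass-spring) soft-body step: Verlet-integrated point masses
# with distance constraints between them. Illustrative sketch only.

class Node:
    def __init__(self, x, y, z):
        self.pos = [x, y, z]
        self.prev = [x, y, z]   # previous position, for Verlet integration

class Beam:
    def __init__(self, a, b, stiffness=0.5):
        self.a, self.b = a, b
        self.rest = dist(a.pos, b.pos)  # deforming rest length permanently = crush
        self.stiffness = stiffness

def dist(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

def step(nodes, beams, dt=1/60, gravity=-9.81, damping=0.9):
    # 1) Verlet integration: velocity is inferred from the last two positions.
    for n in nodes:
        for i in range(3):
            vel = (n.pos[i] - n.prev[i]) * damping
            n.prev[i] = n.pos[i]
            n.pos[i] += vel
            if i == 1:
                n.pos[i] += gravity * dt * dt
    # 2) Constraint relaxation: pull each node pair toward the beam's rest length.
    for b in beams:
        d = dist(b.a.pos, b.b.pos)
        if d == 0:
            continue
        correction = (d - b.rest) / d * 0.5 * b.stiffness
        for i in range(3):
            delta = b.b.pos[i] - b.a.pos[i]
            b.a.pos[i] += delta * correction
            b.b.pos[i] -= delta * correction

# Two nodes stretched to twice their rest length relax back toward it.
a, b = Node(0, 0, 0), Node(2, 0, 0)
beam = Beam(a, b)
beam.rest = 1.0
for _ in range(200):
    step([a, b], [beam], gravity=0.0)  # gravity off so it settles cleanly
print(round(dist(a.pos, b.pos), 2))    # distance relaxes toward the rest length of 1.0
```

This is exactly the kind of solver that could sit on top of Unity's rendering while bypassing PhysX entirely; the joint system could approximate it, but joints are far heavier per constraint than this.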
check out the green point light used at 3:15
You could tackle that with compute shaders, but it is much easier to reduce the detail level of the simulation. And that's probably what they do.
Additionally, modifying a highly detailed mesh at runtime will likely require you to forego the convenient API and stick with the advanced one, and maybe even abandon the Mesh object itself, switching to a low-level graphics API.
Nice ideas, yeah!
I think my mind was a bit too stuck on the thought of attaching the kind of soft-body physics simulation you'd normally use in Blender to Unity.
DOTS can do that pretty easily.
In fact, I remember one person doing exactly this kind of car body deformation using DOTS. Unfortunately, they couldn't share details publicly, which is a bit of a shame, but they had pretty good results a year or so ago.
For inspiration, check out these cars (made in Unity):
Wikipedia cites this as the source: https://www.beamng.com/game/news/blog/beamng-and-torque3d/
And yes, it can very much be done in Unity: Unity Realistic Vehicle Physics - Softbody Tire Physics Test 2, Unity Realistic Vehicle Physics - Vehicle Debugger System Test 1
Here's their thread: Vehicle Physics Simulation
The question of "can it be done in Unity?" is funny, because the majority of these I've seen were done in Unity.
I'm currently investigating texture-space physics for hair volume using GPGPU.
You can probably do it by encoding the surface points in a texture in UV space, putting input parameters into other textures, then using the result in the vertex shader via a LOD texture fetch (e.g. tex2Dlod) to read the data back. Like a snow or water-ripple shader.
I recall seeing that implemented about 10 years ago in the first versions of what later became FurMark. So it should be a largely solved problem by now, though locating 10-year-old papers for it would be fun.
Based on FurMark, I'll guess that's in shell texture space, like voxels?
My take is shell-less though, trying to find a decent stylized abstraction that would work for puffy hair. Things I'm considering:
- Given the scalp is "fixed" and hairs have positions relative to each other, use a kind of modified 3D proximity mass-spring with hashed neighbor pushing, plus artistic texture blending. Vector data is fed from a simplified low-res cloth simulation to add visual detail in texture space, with some parallax flair.
- Or what I proposed above, i.e. driving the vertices by a GPGPU simulation. The spline only needs to evaluate the Bezier at a single vertex. It could probably handle a simplification of curly hair as a position offset from a central core spline. The problem is that it doesn't really solve the geometry density problem.
- Bonus round if I can find a way to procedurally draw random overlapping lines in the fragment shader. The main issue is that hair simulation is not "local": you propagate constraints from root to tip, which makes it hard to draw infinite non-random overlapping curves, since every part of a strand can potentially be anywhere within the radius given by the strand's length. It's possible if we stylize it and make the hair highly coherent, like strands parallel to each other, using UV distortion to mock a common curve destiny, then overlapping many layers of coherent hair with varying parameters. But that would eliminate the type of hair that naturally puffs, unless I find a way to do recurrent raytracing of a (potentially perturbed) helix primitive.
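The "hashed neighbor pushing" in the first option above is typically a spatial hash: particles are bucketed by integer cell coordinates, so a neighbour query only scans the 27 surrounding cells instead of every particle. A generic sketch of that lookup; the cell size, radius, and point data are placeholders, not the actual hair solver:

```python
# Spatial hashing for neighbour queries in a particle simulation.
from collections import defaultdict

def build_hash(points, cell):
    """Bucket each point index by its integer 3D cell coordinate."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return grid

def neighbours(grid, points, i, cell, radius):
    """Indices of points within `radius` of point i, scanning only 27 cells."""
    x, y, z = points[i]
    cx, cy, cz = int(x // cell), int(y // cell), int(z // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                    if j == i:
                        continue
                    px, py, pz = points[j]
                    if (px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 <= radius ** 2:
                        out.append(j)
    return out

# Two points close together, one far away: only the close one is found.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0)]
grid = build_hash(pts, cell=0.5)
print(neighbours(grid, pts, 0, cell=0.5, radius=0.2))   # → [1]
```

On the GPU the same idea is usually done with a sorted cell-key buffer rather than a hash map, but the 27-cell scan is identical; the "pushing" step would then apply a separation impulse between each returned pair.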
Tangent-space flat 2D image. No voxels.
Like I said, it should be a solved problem by now. The issue is digging up the very first version of FurMark and the paper associated with it, because by now I don't recall if it was a random OpenGL demo, part of the DX SDK (probably not), part of the ATI SDK, or one of the NVidia demos.
For example, this seems to be related, though it's not the "fur donut" demo with the tangent-space vector display I remember.
Notice that they're 10 and 12 years old.
There was also a unity asset, but it seems to be dead.
Can't find anything even remotely similar with a deep search. I probably don't have enough data to narrow down.
"Making Grass and Fur Move
Sven Banisch and Charles A. Wüthrich
CoGVis/MMC – Faculty of Media
D-99421 Weimar (GERMANY)"
Though this one does not appear to use the technique I was referring to.
There's also a paper called:
"Integrating Motion with Real-time Fur Simulated using Shells and Particles" by LEIF TYSELL SUNDKVIST
Very easy to find.
Thanks for taking the time. These are shell solutions; I guess, since nobody references it, that the initial FurMark was basically the same technique but with texture offsets instead of shells.
I had already independently found solutions equivalent to these papers, and my solution expands on them. For the particle solution in particular, I already have an implementation where, instead of card particles, I use interior shading with a parallax offset of a texture, blended together with a fresnel, with a fresnel cutout at the edges to get silhouettes. The next steps are physics inspired by Worley noise and texture bombing, then the texture-to-vertex physics. I originally took inspiration from water physics done in textures: https://amandaghassaei.com/projects/shaders/
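Worley (cellular) noise, mentioned above, is just "distance to the nearest of a set of scattered feature points", which is why it reads as cells or clumps. A minimal 2D version for reference; the seed, point count, and sample grid are arbitrary:

```python
# Worley (cellular) noise: the value at (x, y) is the distance to the
# nearest feature point. Seeded so the result is reproducible.
import random

random.seed(7)
features = [(random.random(), random.random()) for _ in range(16)]

def worley(x, y):
    return min(((x - fx) ** 2 + (y - fy) ** 2) ** 0.5 for fx, fy in features)

# Sample a tiny 4x4 "texture" over the unit square: cells near a feature
# point read close to 0, cells far from all of them read higher.
tex = [[worley((i + 0.5) / 4, (j + 0.5) / 4) for i in range(4)] for j in range(4)]
print(all(0.0 <= v < 1.5 for row in tex for v in row))   # → True (unit square, max distance < sqrt(2))
```

For hair clumping you'd typically use the feature-point ID (or the difference between the two nearest distances) rather than the raw distance, and a shader version would hash per-cell feature points instead of storing a global list.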
Well, as long as you are having fun.
I can swear that around 2005-2010 there was a different "fur donut" demo where you could toggle display modes, and one of them had something to do with physics and tangent space (and I think it looked green; you could rotate it, and it briefly flashed colored blobs of motion vectors), but old demos and papers are incredibly hard to find with all the junk floating online these days. There's no reason why you wouldn't be able to put some sort of motion vectors into tangent space, by the way. I also found a non-English GL demo with source, which looked a lot like what I remember, but that one had no physics.
Personally I wouldn't invest in researching anything for my own projects, as those things can badly sidetrack development, devour an insane amount of time, and result in nothing. But this is your project, not mine, and I may just be very tired.
To get the very high level of detail they display here (one of the best I've seen), you would have to skip all the high-level APIs in Unity and go directly to the lower levels. Is it really "made in Unity" at that point?
I thought of that too. I think the Switch Monster Hunter game used that at first for its short fur effect, which is also why the PC version no longer had it after the motion blur update.
That's fine, because that's the main core of the project anyway: good 3D character feature rendering, in a kind of 3D VN structure, which means that aside from the art and writing, it's trivial to develop, on purpose. Even the GI idea I dedicated my time to, which is unnecessary as is, was there to develop the know-how to actually implement these ideas before I had fully conceptualized the solutions. Also, God bless AI art, because mocking up renderings of games-that-never-were, with hair under-represented, in a way that fits a plausible visual target, has been a blessing. By the way, do you know of a free img2img solution? That Hugging Face site has done wonders for basic generation and idea exploration at my level.
In both videos posted so far, the visuals are nothing to write home about. They look dated, so you could create them in anything. The main selling point of BeamNG is their soft-deform tech, and you'd need to implement that from scratch anyway.
"Is it truly made in Unity?" Well, the strength of Unity is supposed to be being able to port to a huge number of platforms. You won't be able to match that if you roll your own solution. And rolling your own solution will waste a ton of time.
Yes: a local install of Stable Diffusion on an NVidia GPU with over 6 gigabytes of VRAM. There's also a specialized checkpoint for inpainting specifically, released by RunwayML. There's a slightly annoying YouTuber called "Aitrepreneur"; he covers new Stable Diffusion improvements and had a video on that.
Basically, you'd need an RTX card with a lot of VRAM. A 3060 with 12 GB will do. The prices aren't great, and buying one of those might result in a strong desire to punch one of NVidia's higher-ups.
If you're looking into making a 3d VN, take a look at Persona 4 and AI: Somnium Files.
Well, I guess that's out of reach for now... I wouldn't have asked if I could afford a whole GPU card.
Well, you COULD try searching for "img2img Google Colab".
Online sites mention Deforum (https://deforum.github.io/). This should run on Google Colab, which is supposed to be free, but if you get greedy I expect them to rate-limit you or do something similar.
I prefer a local install. However, after buying one NVidia GPU, I don't feel like buying a graphics card again. This s*** got too expensive for my tastes.
Thanks, seems like there's no way around Colab lol, I've been too shy to learn it!