
Space Engineers Style Block Rotation System

Discussion in 'Scripting' started by andrew_freuler, Apr 7, 2018.

  1. andrew_freuler


    Joined:
    Apr 7, 2018
    Posts:
    1
    I'm working on coding a building system similar to that of Space Engineers, where all blocks are snapped to a grid and you can rotate blocks on all 3 axes using the 6 keys above the arrow keys (Insert, Home, PageUp, Delete, End, and PageDown). I already have a snap-to-grid system in place, as there are hundreds of tutorials for that online, but what I'm having trouble with is the rotation of the blocks. I thought this task would be relatively easy, just directly rotating the block around those axes, but then I noticed that when the player rotates the camera, all of the rotations are still absolute with respect to the world, not the player. For example, say the player starts facing +Z, and the Insert and PageUp keys roll the block left and right respectively. If the player then turns the camera to face +X and hits Insert to roll the block to the left, the block instead rotates down towards the player. I have had other difficult problems in game development, but this one really seems to stump me. Any help would be appreciated.
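    To make it concrete, what I have right now boils down to something like this (simplified to a single key, rotating around a fixed world axis):

    Code (CSharp):
        using UnityEngine;

        public class NaiveBlockRotation : MonoBehaviour
        {
            void Update()
            {
                // Simplified sketch of the current approach:
                // Insert is meant to "roll left", but Vector3.forward is the world Z axis,
                // so once the camera faces +X this pitches the block towards the player instead.
                if (Input.GetKeyDown(KeyCode.Insert))
                    transform.Rotate(Vector3.forward * 90f, Space.World);
            }
        }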

    Sorry if you found my explanation a little confusing; it's kinda hard to get this idea across without a 3D animation or example. But if you play the game Space Engineers (or Stationeers), then you know just what I'm talking about with the whole block rotation thing. Thanks in advance.
     
  2. VilgotS


    Joined:
    Nov 10, 2017
    Posts:
    1
    Hi! Did you ever figure it out? I am working on the same thing and just can't seem to find a solution...
     
  3. SpaceDave1337


    Joined:
    Mar 18, 2019
    Posts:
    1
    Kind of late, I know, and sorry for reviving a dead thread, but I just have to help out any future people looking up this topic.
    The basic system would be to have a "hand" empty gameobject in front of the camera. Then get a second gameobject, which would be your "block", and add the following script to that "block".

    (For this example, I used QWEASD instead of the 6 buttons you mentioned, just to show the basic system)

    Code (CSharp):
        using UnityEngine;

        public class BlockRotation : MonoBehaviour
        {
            public Transform referenceObject; // Reference object (the "hand") that the rotation is relative to
            public float rotationSpeed = 100.0f;

            void Update()
            {
                if (referenceObject == null)
                {
                    Debug.LogError("Reference object is not assigned!");
                    return;
                }

                // Rotation input. "Roll" is not one of Unity's default axes,
                // so it has to be added in the Input Manager (e.g. mapped to Q/E).
                float pitch = Input.GetAxis("Vertical") * rotationSpeed * Time.deltaTime;
                float yaw = -Input.GetAxis("Horizontal") * rotationSpeed * Time.deltaTime;
                float roll = -Input.GetAxis("Roll") * rotationSpeed * Time.deltaTime;

                // Rotate around the reference object's axes, so the rotation is always
                // relative to the "hand" (and therefore to the camera) instead of the world.
                transform.RotateAround(referenceObject.position, referenceObject.right, pitch);
                transform.RotateAround(referenceObject.position, referenceObject.up, yaw);
                transform.RotateAround(referenceObject.position, referenceObject.forward, roll);
            }
        }
    From there on out, the rotational snapping shouldn't be too hard
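    For that snapping, rounding each Euler angle to the nearest 90° should do, at least as long as the block only ever ends up near axis-aligned orientations. A minimal sketch (the helper name is just an example):

    Code (CSharp):
        using UnityEngine;

        public static class RotationSnapping
        {
            // Snap a rotation to the nearest 90-degree step on each Euler axis.
            public static Quaternion SnapTo90(Quaternion rotation)
            {
                Vector3 e = rotation.eulerAngles;
                return Quaternion.Euler(
                    Mathf.Round(e.x / 90f) * 90f,
                    Mathf.Round(e.y / 90f) * 90f,
                    Mathf.Round(e.z / 90f) * 90f);
            }
        }
    Something like block.transform.rotation = RotationSnapping.SnapTo90(block.transform.rotation); when the block gets placed.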

    Correct me if I'm wrong, I'm just a human :p
     
  4. orionsyndrome


    Joined:
    May 4, 2014
    Posts:
    3,043
    This kind of math can be "baked" into orthogonalized (meaning axis-aligned 90-degree) transformations (aka orthotransforms), if one needs to save this transformation along with voxels, for example.

    I did a series of experiments until I developed concrete know-how about such transformations. It turns out there are exactly 24 unique orientations of a cube in 3D space, given the following constraints:
    - you can rotate the cube around any of the three axes,
    - you can rotate the cube as many times as you want, but strictly in multiples of 90 degrees.

    You will always arrive at one of the 24 unique orientations. This means that you can derive a large set of data (quaternions and 3x4 matrices, which consume 16 and 48 bytes respectively) from just a few bits (24 is less than 2^5, so theoretically 5 bits should be enough). But how few bits do you realistically need to still maintain regular operability with orthotransforms?

    Well, you need exactly 5, and you can neatly split them like so:
    - 2 bits for the X rotation: 00, 01, 10, 11 representing 0, 90, 180, 270 degrees around X
    - 2 bits for the Y rotation: 00, 01, 10, 11 representing 0, 90, 180, 270 degrees around Y, and
    - 1 bit for the Z rotation: 0, 1 representing 0, 90 degrees around Z.

    You actually don't need more freedom on Z due to ambiguities. This particular scheme already contains 8 redundancies because it gives you 32 combinations, yet you only need 24.

    If you also want to add mirror flipping to this (which is where matrices come in handy), then you need to include one additional bit for it. This means that with just 6 bits you can represent ALL orientations and both kinds of mirror flips (if you imagine staring directly at a cube's face, the whole cube can be mirror flipped along the U or V axis; a single flip bit takes care of both of them). Having 6 bits also means there are now 48 unique orientations including the mirror flips (and now there are 16 redundancies).

    For my development I made a class that works natively with the encoded bits, but is capable of producing a quaternion/matrix when needed (and caching it, so it never has to do it again). Now you can bundle this information with some block ID and whatever else, and produce a fully-featured voxel world that also supports orthotransforms. With it you can render the voxels super quickly because you can find a cached transformation matrix on the spot, yet you never need to handle matrices or rotations to manipulate this data; you just fiddle with the bits.

    Obviously I added internal continuity logic for the Z-axis rotation, because otherwise it's very hard to tell which bits need to change in order to produce, for example, a 180-degree rotation on Z.
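    To make the encoding concrete, here is a minimal sketch of the decode step (the bit layout and composition order below are just one possible choice, and the continuity logic I mentioned is omitted):

    Code (CSharp):
        using UnityEngine;

        public static class OrthoRotation
        {
            // 5-bit layout (one possible choice): bits 0-1 = X steps, bits 2-3 = Y steps, bit 4 = Z step.
            public static byte Encode(int xSteps, int ySteps, int zStep)
            {
                return (byte)((xSteps & 0b11) | ((ySteps & 0b11) << 2) | ((zStep & 0b1) << 4));
            }

            public static Quaternion ToQuaternion(byte code)
            {
                int xSteps = code & 0b11;        // 0, 90, 180, 270 degrees around X
                int ySteps = (code >> 2) & 0b11; // 0, 90, 180, 270 degrees around Y
                int zStep  = (code >> 4) & 0b1;  // 0 or 90 degrees around Z

                // The composition order is arbitrary; any fixed order works as long as it is used consistently.
                return Quaternion.AngleAxis(ySteps * 90f, Vector3.up)
                     * Quaternion.AngleAxis(xSteps * 90f, Vector3.right)
                     * Quaternion.AngleAxis(zStep * 90f, Vector3.forward);
            }
        }
    In practice you'd cache the (at most 32) resulting quaternions in a lookup table so they never have to be rebuilt.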
     
    SisusCo likes this.
  5. Bunny83


    Joined:
    Oct 18, 2010
    Posts:
    3,525
    I've actually played a lot of SE, and making the rotation system user friendly is not that trivial. The main issue is camera perspective. What I mean by that is that SE chooses a coordinate system that is best aligned with your own viewing angle. So the 6 keys rotate around a logical coordinate system that is always aligned with your camera view and is essentially applied to the actual object depending on which orientation fits best. So when you look from a different direction, the axes you rotate around change. Also, after each 90° rotation, the orientation of the axes changes again to match the view axes. I quickly made this gif that shows the rotation helper in SE:

    SE_RotationBox.gif
    First I just press the Delete key 4 times, then the Home key 4 times and then the Insert key 4 times. After that I actually move myself around to show what happens when you change the view angle. As you can see, the axes are always aligned with your view, so it's much more intuitive to work with.

    Note that the rotation is still aligned with the actual object and not the view. However, which axis is controlled by which key pair depends on the view. So that's probably the best approach here: create an empty gameobject and place it at the same position as the object you want to rotate. Determine the closest axis of the target object that is aligned with your camera's forward vector. Do the same with the up vector. Now just use Quaternion.LookRotation to align the empty gameobject with that rotation. When you want to do a rotation, you simply parent the object to your empty GO and rotate that parent according to the keys. Now the mapping of the keys is constant. After each rotation, you unparent and realign that rotation parent in the same way, so you start again from scratch.

    Determining the two necessary reference axes should be trivial. Just use InverseTransformDirection on the target object's transform to bring your camera's forward and up vectors into the object's local space. Now simply look for the largest vector component (absolute value); that is the actual desired axis. Set that component to 1 / -1 depending on the original sign and the other components to 0. Now just use TransformDirection to get back the worldspace vectors.
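    A rough sketch of that whole setup (class and field names are just for illustration):

    Code (CSharp):
        using UnityEngine;

        public class RotationHelper : MonoBehaviour
        {
            public Transform target; // the block to rotate
            public Transform cam;    // the player camera

            // Transform a world-space direction into the target's local space,
            // snap it to the closest local axis, and return it in world space again.
            Vector3 SnapToClosestAxis(Vector3 worldDir)
            {
                Vector3 local = target.InverseTransformDirection(worldDir);
                Vector3 abs = new Vector3(Mathf.Abs(local.x), Mathf.Abs(local.y), Mathf.Abs(local.z));

                Vector3 snapped;
                if (abs.x >= abs.y && abs.x >= abs.z)
                    snapped = new Vector3(Mathf.Sign(local.x), 0f, 0f);
                else if (abs.y >= abs.z)
                    snapped = new Vector3(0f, Mathf.Sign(local.y), 0f);
                else
                    snapped = new Vector3(0f, 0f, Mathf.Sign(local.z));

                return target.TransformDirection(snapped);
            }

            // Realign this helper so its axes match the target axes that best fit the current view.
            public void Realign()
            {
                transform.position = target.position;
                Vector3 forward = SnapToClosestAxis(cam.forward);
                Vector3 up = SnapToClosestAxis(cam.up);
                // (Edge case: if forward and up snap to the same axis, pick the next-best axis for up.)
                transform.rotation = Quaternion.LookRotation(forward, up);
            }

            // Rotate the target 90 degrees around one of the helper's local axes.
            public void RotateTarget(Vector3 localAxis)
            {
                Realign();
                Transform oldParent = target.parent;
                target.SetParent(transform, true);             // parent the block to the helper
                transform.Rotate(localAxis * 90f, Space.Self); // rotate the helper, taking the block with it
                target.SetParent(oldParent, true);             // unparent again
            }
        }
    The 6 keys would then call RotateTarget with Vector3.right, Vector3.up or Vector3.forward (with either sign), so the key mapping stays constant from the player's point of view.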

    That same orientation can be used to show a similar rotation helper like the one in the gif. Of course the dynamic labelling of the keys is a bonus. Though as I said, from the rotation parent's point of view it always does the same rotations, and all the magic happens through the alignment. Of course, for grid-based systems you would round / clamp the orientation to the grid in the end. As orionsyndrome already mentioned, in a voxel grid like the one SE uses you don't actually have individual objects once you build a block, as the block becomes part of the "grid" (as SE calls it). So the logical orientation relative to the grid's origin could be stored with just a few bits. Minecraft, for example, originally encoded the chest orientation with just 2 bits of the block metadata. Though MC has changed quite a bit how metadata is stored now, so it doesn't really apply anymore :)

    In Space Engineers I can recommend turning on some of the debug helpers, which can help you understand how their grid system works in general. In the info tab you can turn on things like the center of mass and the grid pivot (the origin of the grid's local space). It even shows you the local axes at the pivot. It also helps to see which parts are actually a separate grid and which belong to the same one. Cutting a ship in half would actually create two new grids. The actual position of the pivot can be quite important when you try to build something "large" like a space elevator :) because SE also uses some sort of floating origin to reduce floating point issues. However, if the pivot is very far away, collisions become really janky. Well, after building 40 km you can expect something like a half-a-block error (though I actually built one in survival from space down to the planet because it's easier that way, of course with unsupported stations switched on).
     
    spiney199 likes this.
  6. Bunny83


    Joined:
    Oct 18, 2010
    Posts:
    3,525
    Personally I think the SE rotation controls are really well designed. I grew up with the good old Turbo editor by Borland (I started programming with Pascal), so I'm used to using the Ins, Del, Home, End and PgUp/PgDn keys on a regular basis. Those keys can easily be found without looking, and CTRL+INS and SHIFT+INS are still my primary way of copying / pasting text in almost all applications. Since I navigate my text with the cursor keys, it's just natural to use CTRL + cursor keys to jump between words, select text in the same way, and use the keys above to do the copy, cut and paste stuff.

    The design of having Del / PgDn rotate in the horizontal plane (around Unity's y axis), Home / End rotate around Unity's x axis and Ins / PgUp rotate around the remaining z axis just feels natural to me. Having played a lot of Space Engineers, I rotate things intuitively without really thinking about it.