
Physics raycast on 2d button not working

Discussion in 'Scripting' started by AnalogUniverse, Oct 11, 2018 at 12:54 PM.

  1. AnalogUniverse


    Aug 10, 2018
    Hi All

    I'm following a touch and mouse input tutorial using raycasts. My project is set to 2D and I have a PolygonCollider2D on the button I'm making. My touch code is a static singleton class called EventsManager. I have OnMouseDown and OnMouseUp methods on the button script itself, which work fine. But my EventsManager touch and click code is failing on the line

    if (Physics.Raycast(ray, out hit, touchInputMask))

    The touchInputMask is a layer called Touch Input and is set correctly for both my button and the public touchInputMask property of my EventsManager class. I've tried it with Is Trigger set to both false and true on my button's polygon collider, same result, so it's not something as simple as that.

    Here's the offending section of my code.

    Code (CSharp):

    #if UNITY_EDITOR
            if (Input.GetMouseButton(0) || Input.GetMouseButtonDown(0) || Input.GetMouseButtonUp(0))
            {
                touchesOld = new GameObject[touchList.Count];
                touchList.CopyTo(touchesOld);
                touchList.Clear();

                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                Debug.Log("camera ray coordinates x " + ray.origin.x + " y " + ray.origin.y); // correct coordinates logged

                if (Physics.Raycast(ray, out hit, touchInputMask))
                {
                    GameObject recipient = hit.transform.gameObject;
                    touchList.Add(recipient);
                    Debug.Log("raycast hit detected"); // doesn't log
                }
            }
    #endif
    What am I doing wrong? Any help much appreciated. Thanks in advance.
  2. eses


    Feb 26, 2013

    Hi,

    Physics.Raycast works with 3D colliders only. Your button has a PolygonCollider2D, and a 3D raycast will never hit a 2D collider. Use Physics2D.Raycast instead.
  3. AnalogUniverse


    Aug 10, 2018

    Hi eses

    I thought it was probably something to do with that. But now I've got another problem. I have a public LayerMask touchInputMask; property, and I've changed my code to this:

    Code (CSharp):

    Vector3 ray = Camera.main.ScreenToWorldPoint(Input.mousePosition);
    hit = Physics2D.Raycast(ray, Vector2.zero, touchInputMask);
    //Debug.Log("camera ray coordinates x " + ray.x + " y " + ray.y); // correct coordinates logged

    if (hit.collider != null)
    {
        GameObject recipient = hit.transform.gameObject;
        touchList.Add(recipient);
        Debug.Log("raycast hit detected " + recipient.GetComponent<QPUIButton>().testVar);
    }
    But now, regardless of which layer I put my button on, the above code fires when I click the button, so it appears to be ignoring the LayerMask. Any ideas?
  4. AnalogUniverse


    Aug 10, 2018
    It's OK, I think I've figured it out. The documentation isn't very clear, but it appears to show

    origin, direction, contactFilter, results, distance

    But that doesn't seem to work, whereas the following argument order does:

    origin, direction, distance, layerMask

    Code (CSharp):

    Vector3 ray = Camera.main.ScreenToWorldPoint(Input.mousePosition);
    hit = Physics2D.Raycast(ray, Vector2.zero, Mathf.Infinity, touchInputMask);
    Although I'm now confused by ContactFilter2D vs LayerMask and why LayerMask works. Lol, it's a never-ending battle wrapping my head around Unity; I thought this was supposed to make the process of game creation simpler. I had an easier time when I was making a framework from scratch in Objective-C lol ;D
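    For anyone hitting the same confusion: the two argument orders above belong to two different overload families of Physics2D.Raycast, and the reason the wrong call still compiled is that LayerMask implicitly converts to int, so a mask passed in the wrong position is silently consumed as the distance parameter and no layer filtering happens. A minimal sketch contrasting the two (touchInputMask is from the thread; the class name and log text are illustrative):

    ```csharp
    using UnityEngine;

    public class RaycastOverloads : MonoBehaviour
    {
        public LayerMask touchInputMask;

        void Update()
        {
            Vector2 origin = Camera.main.ScreenToWorldPoint(Input.mousePosition);

            // Overload 1: returns a single RaycastHit2D. The mask comes AFTER distance;
            // if you omit distance, the mask is eaten as the distance argument instead.
            RaycastHit2D hit = Physics2D.Raycast(origin, Vector2.zero, Mathf.Infinity, touchInputMask);
            if (hit.collider != null)
                Debug.Log("LayerMask overload hit " + hit.collider.name);

            // Overload 2: a ContactFilter2D carries the mask, fills a results array,
            // and the method returns the number of hits.
            var filter = new ContactFilter2D();
            filter.SetLayerMask(touchInputMask);
            var results = new RaycastHit2D[8];
            int count = Physics2D.Raycast(origin, Vector2.zero, filter, results, Mathf.Infinity);
            if (count > 0)
                Debug.Log("ContactFilter2D overload hit " + results[0].collider.name);
        }
    }
    ```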
  5. eses


    Feb 26, 2013

    Hi again, now that I've read your post properly, may I ask what you are trying to do?

    "I have Polygon collider 2d on the button Im making"

    Are you trying to avoid using the UI system altogether and build some custom system using GameObjects and Collider2Ds?

    If you only need buttons that can be clicked or touched, it will be way easier to use the UI system. Just add Buttons to a Canvas and that's it. You would only need raycasting to detect hits on sprites or 3D objects that are not part of the UI system.
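    The built-in route is roughly this (a minimal sketch; the class and method names are illustrative):

    ```csharp
    using UnityEngine;
    using UnityEngine.UI;

    // Attach to a Button under a Canvas (with an EventSystem in the scene).
    // The UI system does its own raycasting; no Physics2D code is needed,
    // and it handles both mouse clicks and touch out of the box.
    public class ButtonHookup : MonoBehaviour
    {
        void Start()
        {
            GetComponent<Button>().onClick.AddListener(OnButtonPressed);
        }

        void OnButtonPressed()
        {
            Debug.Log("Button clicked or tapped");
        }
    }
    ```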
  6. AnalogUniverse


    Aug 10, 2018
    Yeah, I spent a week struggling to get the UI canvas to match the camera size and then get the sprites to display at the proper scale. The standard Transform and the sprite renderer do all that out of the box, first time, intuitively, as you would expect. Then I wasted a further week trying to create a UI heart meter, and the amount of code I had to use to get it to scale and position correctly was ludicrous: setting anchorMin and anchorMax, messing around with the RectTransform. Here's just a snippet from what should have been a relatively simple class:

    Code (CSharp):

    void AddAndRemoveElements()
    {
        if (_maxCapacity > images.Count)
        {
            for (int i = images.Count; i < _maxCapacity; i++)
            {
                GameObject go = new GameObject("gameobject");
                RectTransform rt = go.AddComponent<RectTransform>();
                rt.anchorMin = new Vector2(0, 1);
                rt.anchorMax = new Vector2(0, 1);
                rt.pivot = new Vector2(0.5f, 0.5f);
                rt.localScale = new Vector2(1.0f, 1.0f);

                Image image = go.AddComponent<Image>();
                go.transform.SetParent(gameObject.GetComponent<RectTransform>(), false);
                images.Add(go);
            }
        }
        else if (_maxCapacity < images.Count)
        {
            for (int i = images.Count - 1; i >= _maxCapacity; i--)
            {
                GameObject go = images[i];
                images.RemoveAt(i);

                if (Application.isEditor)
                {
                    //Object.DestroyImmediate(go);
                    StartCoroutine(Destroy(go));
                }
                else
                {
                    Destroy(go);
                }
            }
        }

        Debug.Log("images count " + images.Count);

        RectTransform rect = GetComponent<RectTransform>();

        int negx = (rect.anchoredPosition.x < 1) ? -1 : 1;
        int negy = (rect.anchoredPosition.y < 1) ? -1 : 1;

        float oldOffsetX = (rect.sizeDelta.x * rect.pivot.x) * negx;
        float oldOffsetY = (rect.sizeDelta.y * rect.pivot.y) * negy;

        Debug.Log(rect.sizeDelta.y);

        rect.sizeDelta = new Vector2((_spacing * _maxCapacity) - (_spacing - maxElementWidth), maxElementHeight);

        float newOffsetX = (rect.sizeDelta.x * rect.pivot.x) * negx;
        float newOffsetY = (rect.sizeDelta.y * rect.pivot.y) * negy;

        Debug.Log(newOffsetY);

        rect.anchoredPosition = new Vector3((rect.anchoredPosition.x - oldOffsetX) + newOffsetX, (rect.anchoredPosition.y - oldOffsetY) + newOffsetY);
    }

    And then I want to animate it all with GoTween. From what I've read, what the Unity Canvas and RectTransform do behind the scenes is as complicated as the code looks, and it runs really slowly; why is anyone's guess. So I figured the best option is to do my own UI: a simple UIElement parent class with an anchor point from -1 to 1 and an x/y buffer from the anchor point, used in a simple function to convert the transform x and y to different screen ratios. I might be missing something here, but it was really unintuitive to me, and being made to jump through unnecessary hoops to get it to play nice was really testing my sanity.
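    The custom UIElement idea described above could look something like this (entirely hypothetical, assuming an orthographic camera; the class, field names, and mapping are sketched from the post's description, not from real code):

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: an anchor in the -1..1 range is mapped to the
    // camera's visible extents, then a fixed world-space buffer is applied,
    // so elements keep their screen-relative position at any aspect ratio.
    public class UIElement : MonoBehaviour
    {
        public Vector2 anchor;  // (-1,-1) = bottom-left, (0,0) = centre, (1,1) = top-right
        public Vector2 buffer;  // world-space offset from the anchor point

        void LateUpdate()
        {
            Camera cam = Camera.main;
            float halfH = cam.orthographicSize;
            float halfW = halfH * cam.aspect;

            Vector3 camPos = cam.transform.position;
            transform.position = new Vector3(
                camPos.x + anchor.x * halfW + buffer.x,
                camPos.y + anchor.y * halfH + buffer.y,
                transform.position.z);
        }
    }
    ```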
    The only problem I've got now is using bitmap fonts with a standard TextMesh. If I can't resolve that, I'd rather just ditch Unity and go back to my old framework than keep struggling with the Unity GUI; it's caused me that much grief!