
Watson Unity SDK

Discussion in 'Assets and Asset Store' started by mediumTaj, Aug 1, 2017.

  1. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
    Hi all,
    I'm currently a developer at IBM in Austin, TX. We put out an open source Watson Developer Cloud Unity SDK over a year ago.

    The Watson Unity SDK will let you add Watson services to your games and applications. You can give your applications the ability to hear speech using the Speech to Text service. You can give your game the ability to see and classify images using the Visual Recognition service. You can add the ability to comprehend natural language and classify intent using the Natural Language Understanding service. Game logic can then be executed based on the classified intent. A full list of Watson services can be found here:

    https://www.ibm.com/watson/products-services/

    I added Watson services to the Survival Shooter demo to show what you can accomplish using the Speech to Text service together with Natural Language Understanding. Using the Speech to Text service, your application can recognize your speech and convert it to text. It can then send the string to Natural Language Understanding to extract an intent and execute a command. In this case, "I need air support" returns a trained intent of `air_support` and drops a bomb on the player's position.
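To make the intent-to-game-logic step concrete, the game-side dispatch on a returned intent might look like this (a sketch only; the intent name matches the example above, but the handler, confidence threshold, and method names are hypothetical, not part of the SDK):

```csharp
using UnityEngine;

public class IntentDispatcher : MonoBehaviour
{
    // Called with the top intent extracted from the service response.
    public void OnIntent(string intent, float confidence)
    {
        if (confidence < 0.5f) return;  // ignore low-confidence classifications

        switch (intent)
        {
            case "air_support":
                // e.g. drop a bomb at the player's position
                Debug.Log("Calling in air support at " + transform.position);
                break;
            default:
                Debug.Log("Unhandled intent: " + intent);
                break;
        }
    }
}
```

The confidence check is worth keeping in a game: a misheard phrase is better silently ignored than mapped to a destructive command.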



    The SDK is also very useful in mixed reality use cases where the user does not always have access to the keyboard.

    I've been working on a major refactor of the SDK for release on the Unity Asset Store. I'd love to get some developers' opinions on the SDK. Please let me know your thoughts on what may be confusing and how we can make the SDK better!

    https://github.com/watson-developer-cloud/unity-sdk/tree/feature-config-refactor

    Note: This is the `feature-config-refactor` branch.
     
    Last edited: Aug 1, 2017
  2. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
    bump! Any feedback would be greatly appreciated!
     
  3. ZhavShaw

    ZhavShaw

    Joined:
    Aug 12, 2013
    Posts:
    168
    Ok, so I wasn't going to leave feedback because I'm not good at it, but might as well.
    First of all, I'm keeping an eye on the thread because this is more than interesting.
    Second, I was wondering if this would work with someone who speaks English, but with an accent. I'm from the Caribbean and, as expected, I tend to speak with a different accent. Would it still understand me correctly? Would it still be accurate?
     
  4. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
    Good question! One of the great things about the Watson services is that each service instance can be trained for your particular use case. You should be able to add custom words according to how they sound. You can send this data to your Speech to Text instance in this format:


    {
      "words": [
        {
          "word": "string",
          "sounds_like": [
            "string"
          ],
          "display_as": "string"
        }
      ]
    }

    The service is continually getting better and there will be more support for options like this in the future!
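For instance, a filled-in payload for a term that players tend to pronounce letter by letter might look like this (the word chosen here is just an illustration, not from the SDK docs):

```json
{
  "words": [
    {
      "word": "HUD",
      "sounds_like": ["hud", "h. u. d."],
      "display_as": "HUD"
    }
  ]
}
```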
     
  5. ZhavShaw

    ZhavShaw

    Joined:
    Aug 12, 2013
    Posts:
    168
    Oh, this is awesome!
     
  6. CarlosLM

    CarlosLM

    Joined:
    Jan 8, 2015
    Posts:
    4
    I just started to explore it, so I don't have full feedback yet. The video looks awesome. For a prototype I'm working on, I'm interested in speech recognition and triggering database searches based on the speech. Can you point me in the right direction?
     
  7. GameMaster5

    GameMaster5

    Joined:
    Oct 4, 2015
    Posts:
    3
    Hi,

    I am not able to get the Unity package for the Watson SDK from the link you shared.

    It would be great if you could share some documentation on it.


    thanks,



     
  8. scarffy

    scarffy

    Joined:
    Jan 15, 2013
    Posts:
    25
    Hi,

    I would love to understand how to implement it. When I look into the documentation, I find it confusing.

    Thank you
     
    kfranci6 likes this.
  9. nat42

    nat42

    Joined:
    Jun 10, 2017
    Posts:
    353
  10. JPFerreiraVB

    JPFerreiraVB

    Joined:
    Sep 18, 2017
    Posts:
    39
    Hi.

    First of all, congrats on starting this.
    But... I can't make it work. I want to use Speech to Text, and this is the log:
    It looks like the WSConnector is closing the socket connection for no reason.
    I was able to grab the audio, and it is recording my microphone; however, it looks like the connection to the Watson backend is not working.

    Any idea?

    EDIT:
    I was able to trace the error. It happens when I change the Scripting Runtime Version from 3.5 to 4.6.
    Since my project is based on the 4.6 version, does this have a simple fix, or do I have to use 3.5?
     
    Last edited: Jan 2, 2018
  11. ryan77anderson

    ryan77anderson

    Joined:
    Dec 30, 2017
    Posts:
    1
    Taj - thanks for posting the how-to YouTube video:

    (there is now a simplified asset in the asset store I see)
     
    Mooney322 likes this.
  12. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
    Sorry everyone, I really wish I got notifications about these posts. Please take a look at the video series @ryan77anderson posted. The Watson Unity SDK is now available on the Unity Asset Store!
     
  13. sourabh10995

    sourabh10995

    Joined:
    May 6, 2016
    Posts:
    4
    Were you able to fix it?
     
  14. PendulumIcePrince

    PendulumIcePrince

    Joined:
    Dec 28, 2017
    Posts:
    1
    Hi guys, this is my first time posting and I have the same problem as JPFerreira. Do you guys know how to deal with it? :( Please disregard the highlighted part. Thank you.

    Screen Shot 2018-03-03 at 10.53.39 AM.png
     
  15. sourabh10995

    sourabh10995

    Joined:
    May 6, 2016
    Posts:
    4
  16. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
    Please update your SDK to v2.0.1.
     
  17. JPFerreiraVB

    JPFerreiraVB

    Joined:
    Sep 18, 2017
    Posts:
    39
    I did, by changing the Scripting Runtime Version from 4.6 to 3.5.
    The latest version available on the Asset Store does not solve the issue; I'm unable to make it work with 4.6 selected.
    I've opened a ticket with IBM, but no feedback so far.

    ERROR:



    SUCCESS:
     
  18. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,848
    I just downloaded the IBM Watson SDK asset from the Asset Store today, and am keen to get speech-to-text working in even a demo app. But after a couple of hours with it, no luck.

    I know I have my cloud service set up, and the credentials correct, because I tried the simple Curl examples here, and they work.

    But the only example I could find in the asset (ServiceExample scene, ExampleSpeechToText asset) does not appear to do streamed recognition. It does run, and after much time appears to have done whatever it set out to do (except for a couple of errors because I have the Lite service), but I poked through the code and don't see it using StartListening.

    So, by crawling through the SpeechToText.cs source file, I attempted to hack out my own script that would do streaming recognition:
    Code (CSharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.UI;

    using IBM.Watson.DeveloperCloud.Services.SpeechToText.v1;
    using IBM.Watson.DeveloperCloud.Logging;
    using IBM.Watson.DeveloperCloud.Utilities;
    using IBM.Watson.DeveloperCloud.Connection;
    using IBM.Watson.DeveloperCloud.DataTypes;

    public class WatsonListenTest : MonoBehaviour {

        [Header("Service Credentials")]
        public string username;
        public string password;
        public string url;

        [Header("Debug Stuff")]
        public Text statusText;

        SpeechToText speechToText;
        AudioClip recording;

        void Start () {
            LogSystem.InstallDefaultReactors();
            UnityObjectUtil.StartDestroyQueue();

            //  Create credentials and instantiate the service
            Credentials credentials = new Credentials(username, password, url);

            speechToText = new SpeechToText(credentials);
            speechToText.OnError = OnError;
            speechToText.StreamMultipart = true;    // use Transfer-Encoding: chunked since we are sending multiple chunks to stream
            speechToText.DetectSilence = false;     // for now!

            if (!speechToText.StartListening(OnRecognize)) {
                Debug.LogWarning("StartListening returned false");
            } else {
                Debug.Log("StartListening OK");
            }

            Log.Status("Start()", "Checking whether Watson's logging system works");
        }

        void Update () {
            string status = "";
            if (speechToText.AudioSent) status += "Audio Sent; ";
            if (speechToText.IsListening) status += "Listening";
            else status += "Not listening";
            statusText.text = status;

            if (Input.GetKeyDown(KeyCode.LeftShift)) {
                Debug.Log("Recording (5 seconds)");
                recording = Microphone.Start(null, false, 5, 44100);
            }
            if (Input.GetKeyUp(KeyCode.LeftShift)) {
                Microphone.End(null);
                AudioSource audioSrc = GetComponent<AudioSource>();
                if (audioSrc != null) {
                    Debug.Log("Playing recorded clip");
                    audioSrc.clip = recording;
                    audioSrc.Play();
                }

                Debug.Log("Analyzing clip");
                float[] samples = new float[recording.samples * recording.channels];
                recording.GetData(samples, 0);
                float max = 0;
                foreach (float sample in samples) if (sample > max) max = sample;
                Debug.Log(samples.Length.ToString() + " samples, max = " + max);

                Debug.Log("Sending clip to Watson");
                var data = new AudioData(recording, max);
                bool result = speechToText.OnListen(data);
                Debug.Log("OnListen returned " + result);
                // speechToText.StopListening();
            }

            if (Input.GetKeyDown(KeyCode.Return)) {
                Debug.Log("Stopping listening");
                if (!speechToText.StopListening()) Debug.Log("StopListening returned false");
            }
        }

        void OnError(string error) {
            Debug.LogWarning("Watson error: " + error);
        }

        void OnRecognize(SpeechRecognitionEvent results) {
            Debug.Log("Something recognized! " + results);
        }
    }
    But it doesn't work. Because I have it set up to also play each recorded clip via an AudioSource, I know that the recordings are fine. And from the debug logs (I also hacked some additional debug output into SpeechToText.cs), I can see that it's sending data to the server. But the only response I ever get from the server (i.e., the only time OnListenMessage is invoked) is with a "state: listening" message. I never get any results. (And so of course my OnRecognize callback is never called.)

    This is true apparently no matter how many chunks I send, or how long I wait in between. I'm saying very simple things like "Hello," "one, two", etc. I must be doing something wrong, but I swear by this point I've looked at every method in SpeechToText, and I can't figure out what it is.

    It doesn't help that the asset comes with no readable docs... some of them appear to be in Open Office format, and I don't know what the heck a .shfbproj is. Can we get a simple PDF or HTML file please?

    All very frustrating. Does anybody have a simple example of continuous recognition with this SDK?
     
  19. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,848
    OK, just to follow up for future searchers: Kimberly Siva at Mixspace kindly pointed out to me the ExampleStreaming scene, which is the one I should have tried, rather than the ExampleServices one.

    After loading that one up, and editing ExampleStreaming.cs with my credentials, it just works! Seems really fast too. I'm delighted!
     
    castana1962 likes this.
  20. castana1962

    castana1962

    Joined:
    Apr 10, 2013
    Posts:
    400
    Hi mediumTaj,
    I am making a sci-fi game for learning a language. I am starting to get to know the Watson SDK for Unity and I would need your advice. Would it be possible?
    I did the following:
    - Move the game objects by voice command (I wrote the code to do it).
    But now I would need to record and save this voice command and compare it with an original phrase from a text source. Is it possible with the Watson SDK? Thanks for your time.
    Alejandro Castan
     
    Last edited: Jun 26, 2018
  21. castana1962

    castana1962

    Joined:
    Apr 10, 2013
    Posts:
    400
    Hi All,
    I am working with Unity (and the Lexicon plugin) and the IBM Cloud Speech to Text and Conversation services. I could sync the Conversation service successfully, but the Speech to Text sync fails. I added both the username and password that appear in the IBM Cloud service credentials, but I still cannot sync the Speech to Text service. Can anybody help me with that?
    Thanks for your time
    Alejandro Castan
    Ps. If anybody needs some info about Lexicon, ask here. It is great !!!!
    https://assetstore.unity.com/packages/tools/ai/lexicon-113459
     
  22. castana1962

    castana1962

    Joined:
    Apr 10, 2013
    Posts:
    400
    Hi All
    Complementing my previous message, I saw that when I try to sync the Speech to Text service I get the following error in Unity:

    [07/01/2018 15:41:14][RESTConnector.ProcessRequestQueue()][ERROR] URL: https://stream.watsonplatform.net/speech-to-text/api/v1/customizations, ErrorCode: 400, Error: 400 Bad Request, Response: { "code": 400, "code_description": "Bad Request", "error": "This feature is not available for the Bluemix Lite plan. Please upgrade to a paid plan to activate this feature: https://console.bluemix.net/catalog/services/speech-to-text"

    Speech to Text sync Failed.....

    Since I am just experimenting with the IBM Watson services at this time, is there some way to work around this problem and continue with my Speech to Text tests?
    Thanks for your time
    Alejandro Castan


     
  23. rgjones

    rgjones

    Joined:
    Jan 23, 2017
    Posts:
    19
  24. castana1962

    castana1962

    Joined:
    Apr 10, 2013
    Posts:
    400
    Hi rgjones,
    Yes, I am.
    The problem is that since I have a Bluemix Lite plan, I cannot access the Speech to Text custom model features. I tried without the custom model features and I have a lot of problems when I speak into my microphone; it does not detect all my words well. It would be great if IBM could add the Speech to Text custom model features (for one trial at least...). I saw some projects with this feature and there is a big difference... Hopefully it can be possible !!!
    Regards
    Alejandro
     
  25. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
    Unfortunately speech to text customization is only available for paid plans now. You would have to upgrade to the Standard plan.
     
  26. Jelmersb

    Jelmersb

    Joined:
    Jul 12, 2016
    Posts:
    66
    Hi, we are using Visual Recognition in a Unity AR app now and are impressed by the results.
    But right now the app sends a photo to the IBM servers every second. As this app is something our customers will use at home, this is not good privacy-wise. Can we embed an offline library?

    Secondly, I would like to express some frustration over the IBM Cloud / Watson front end. While the Watson services and the SDK are good, the web front end is pretty horrible in my experience. For example, right now I can't add custom models any longer; yesterday the whole service was unavailable; earlier, I couldn't set up a paid plan; and contacting customer support leads to a "your connection is not private" warning..
     
  27. castana1962

    castana1962

    Joined:
    Apr 10, 2013
    Posts:
    400
    Hi All
    I am trying to record a voice and check it against a file to see if the player has said the same thing, as in language learning, using a Watson service (or a method in a Unity class instead...).
    For example:
    If the player says "Thanks" into the microphone, Speech to Text shows me the text phrase "Thanks" in the Unity Editor. What I would need is for the game to compare this text phrase, so the user gets the answer "Great" (if the comparison succeeds) or "You are close. Try again" (if it does not).
    How should I do it? With a method in some Unity class, or could Watson do it for me?
    Hopefully you can understand me.
    Could anybody advise me?
    Thanks for your help.
    Alejandro
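A minimal sketch of the comparison step on the Unity/C# side, assuming Speech to Text has already returned the transcript (all names here are hypothetical, not from the SDK):

```csharp
using System;
using System.Linq;

public static class PhraseChecker
{
    // Normalize: lowercase, strip punctuation, collapse whitespace.
    static string Normalize(string s)
    {
        var cleaned = new string(s.ToLowerInvariant()
            .Where(c => char.IsLetterOrDigit(c) || char.IsWhiteSpace(c)).ToArray());
        return string.Join(" ", cleaned.Split((char[])null, StringSplitOptions.RemoveEmptyEntries));
    }

    // Compare the recognized transcript against the target phrase.
    public static string Check(string transcript, string target)
    {
        return Normalize(transcript) == Normalize(target)
            ? "Great"
            : "You are close. Try again";
    }
}
```

This only rewards exact wording after normalization; a fuzzy comparison (e.g. edit distance on words) would be more forgiving for language learners.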
     
    Last edited: Jul 15, 2018
  28. castana1962

    castana1962

    Joined:
    Apr 10, 2013
    Posts:
    400
    Hi again,
    I am interested in setting up Speech to Text with a custom model in Unity projects.
    I saw this info about the topic:
    https://www.ibm.com/blogs/watson/2016/09/build-custom-language-model-convert-unique-speech-text/
    https://www.ibm.com/watson/developercloud/speech-to-text/api/v1/curl.html?curl#check-jobs
    But I think that is for the Watson developer environment and not for Unity. Is there any custom model Speech to Text example in the Watson SDK? Or could anybody advise me how I should do it?
    Thanks for your time
    Alejandro
     
  29. Deleted User

    Deleted User

    Guest

    @mediumTaj I followed each step to set up the Watson SDK for Unity and put in the source url = ws://gateway-wdc.watsonplatform.net/speech-to-text/api
    and IAM Apikey = 8G4ms9s0aMOKeQ5xVfCCIH2nbGCxzk2D8M9yzK1BJQX3. These are given on the IBM Watson page.
    When I hit Play it runs, but when I start speaking through the microphone this error appears: "[07/16/2018 16:19:07][SpeechToText.OnListen()][ERROR] Failed to enter listening state".
    I will appreciate your help!
     
    ebrublue likes this.
  30. castana1962

    castana1962

    Joined:
    Apr 10, 2013
    Posts:
    400
    @mediumTaj
    Hi, sorry for my ignorance, but I am interested in integrating the Watson Machine Learning services into Unity. Is it possible?
    If so, please let me know how to do it.
    Thanks for your time
    Alejandro
     
  31. ronbonomo

    ronbonomo

    Joined:
    Oct 15, 2015
    Posts:
    32
    I am getting errors when I import Watson. I get this error with the Watson SDK:

    Assets/Watson/Scripts/Connection/WSConnector.cs(416,72): error CS0117: `System.Security.Authentication.SslProtocols' does not contain a definition for `Tls12'

    And I get these errors with the Watson sandbox for VR:

    A tree couldn't be loaded because the prefab is missing. And

    Binary to YAML conversion: type UInt16 is unsupported
     
  32. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
  33. Lotiaz

    Lotiaz

    Joined:
    Mar 14, 2018
    Posts:
    4
    Hi @mediumTaj,

    Thank you for putting this up on the Unity store for us to download.
    I have downloaded the asset and imported it into Unity. I followed your videos and got speech to text + translation up and running, so I can now translate from English into the supported languages. I would like to get it to also run the other way around. I tried to use the code but changed things to "fr-en" and added _fr in the code (as in the video linked above). Yet it just seems to do "en-fr" no matter what. Do you have any ideas what I can do in order for it to translate fr-en, or other languages to en, in speech to text?

    Many thanks
     
    Last edited: Sep 25, 2018
  34. mediumTaj

    mediumTaj

    Joined:
    Feb 20, 2015
    Posts:
    28
    Hi @Lotiaz - There should be a `fr-en` language model available

    Code (CSharp):
    {
      "model_id": "fr-en",
      "source": "fr",
      "target": "en",
      "base_model_id": "",
      "domain": "general",
      "customizable": true,
      "default_model": true,
      "owner": "",
      "status": "available",
      "name": "fr-en",
      "training_log": null
    },
    Additionally you will need to change your `SpeechToText` instance to understand French

    Code (CSharp):
    _speechToTextService.RecognizeModel = "fr-FR_BroadbandModel";
     
  35. Lotiaz

    Lotiaz

    Joined:
    Mar 14, 2018
    Posts:
    4
    Oh thats great! Thanks a lot! :D
     
  36. espillier

    espillier

    Joined:
    Sep 2, 2018
    Posts:
    1
    Today I succeeded in running the "ExampleStreaming" scene for the first time, but things didn't go as smoothly as in Taj's videos.

    I am running Unity 2018.2.18f1 and I downloaded Watson SDK 2.12 from GitHub (note that the Asset Store still offers version 2.11 at the time of writing).

    Here are the steps that I followed in order to make things work properly:

    1) registered an account on IBM Cloud and created the Speech-to-text service at location Frankfurt. This was easy and went smoothly.
    2) received the service credentials and copied the apikey and url fields
    3) created a new project in Unity, imported the Watson SDK and opened the ExampleStreaming scene
    4) went to Edit > Project Settings > Player, scrolled down to Other Settings/Configuration, switched the scripting runtime version to ".NET 4.x Equivalent", and restarted Unity.
    5) selected the ExampleStreaming gameobject
    6) in the Inspector, pasted the url (cf. step 2 above) into the "Service Url" field
    7) pasted the apikey (cf. step 2 above) into the "Iam Apikey" field
    8) entered "https://iam.bluemix.net/identity/token" into the "Iam Url" field
    9) left username and password blank (CF Authentication)
    10) opened the file WSConnector.cs in Visual Studio
    11) changed line 207 from if (URL.StartsWith("https://stream.")) to if (URL.StartsWith("https://stream")) - removed the period after the word "stream".
    12) removed the same period twice on line 209 (once for https and once for wss).
    13) clicked the Play button and saw how my heavily-accented English was more or less smoothly transcribed.
    14) entered "fr-FR_BroadbandModel" into the "Recognize model" field, and observed that a Belgian accent seems to create confusion in the transcription process.

    Steps 10-12 were necessary because I call the speech-to-text service from Frankfurt and the test on the service URL is not correctly formulated in WSConnector.
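The edit in steps 10-12 boils down to loosening the prefix test so that region-specific hostnames also pass it. A standalone sketch of the before/after behavior (the Frankfurt-style hostname is illustrative):

```csharp
using System;

class UrlCheckDemo
{
    static bool MatchesStreamHost(string url)
    {
        // Original SDK test was url.StartsWith("https://stream."), which fails
        // for region-specific hosts such as stream-fra.watsonplatform.net.
        // The loosened test from steps 10-12 above:
        return url.StartsWith("https://stream");
    }

    static void Main()
    {
        Console.WriteLine(MatchesStreamHost("https://stream.watsonplatform.net/speech-to-text/api"));      // True
        Console.WriteLine(MatchesStreamHost("https://stream-fra.watsonplatform.net/speech-to-text/api"));  // True
    }
}
```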
     
  37. eco_bach

    eco_bach

    Joined:
    Jul 8, 2013
    Posts:
    1,601
    Example streaming works great after solving the iOS authentication issue! Now I need to improve Speech to Text accuracy by creating a custom language model. Saw this post here but need some handholding. How to actually run this code inside Unity? Is there an actual project with a scene to reference?

    https://gist.github.com/akeller/4c45ab3fd4438667010c47f7f604d556

    and

    https://developer.ibm.com/tutorials/watson-speech-to-text-custom-language-model/

    I was advised to use cURL or Postman because of the processing time, but it is still unclear to me how to translate the C# example into RESTful API calls.

    Anyone?
     
    Last edited: May 14, 2019
  38. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,848
    Did the network API change? I used this successfully some months ago, and the Service Credentials were defined by Username, Password, and Url. Then I didn't use it for a while; today I tried to use it again, and it no longer works. I know Lite plan services get terminated after 30 days of inactivity, so I signed up again, but now the credentials look quite different — instead of username and password, I have an "API Key" (and no password), plus still a URL.

    The Watson code I have has no place to put such an API key, and I don't find it on the asset store anymore. Where do we get the latest version of the code, which will work with the current network API?
     
  39. teamdevsupergeeks

    teamdevsupergeeks

    Joined:
    Sep 19, 2019
    Posts:
    2
    Hi @mediumTaj

    Today, inspired by this playlist, I decided to explore the Watson services in Unity. Recently I also played with MLK, and I was very excited to see Watson on another platform. *-*

    But I soon noticed that some things have changed, so the code presented in the videos no longer works. T.T

    Here is the step-by-step of what I did; if you can help me update it, I'd really appreciate it.

    In summary, I've been following this tutorial here.

    1- I installed Unity (2019.2.5f1)
    2- I created a 3D Project.
    3 - Accessed: https://github.com/watson-developer-cloud/unity-sdk - downloaded unity-sdk-master

    It gave some 999 errors; then I saw that I needed the core.

    4 - Accessed: https://github.com/IBM/unity-sdk-core/releases/latest - downloaded unity-sdk-core-0.3.0

    It threw some warnings, but no errors \o/

    Following the playlist, I grabbed the credentials for my services on the IBM Cloud and wrote them down in a notebook.

    I created a Canvas in the project, put some text inside it, and started a new script to test the Language Translator.

    Follow my code:
    Code (CSharp):
    using System;
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    using UnityEngine.UI;

    using IBM.Cloud.SDK;
    using IBM.Cloud.SDK.Connection;
    using IBM.Watson.LanguageTranslator.V3;
    using IBM.Watson.LanguageTranslator.V3.Model;

    public class LanguageTranslatorDemo : MonoBehaviour
    {
        public Text ResponseTextField;
        private LanguageTranslatorService languageTranslator;
        private string translationModel = "en-pt";
        private string versionDate = "2019-09-19";
        private string apiKey = "SECRET";
        private string serviceUrl = "https://gateway.watsonplatform.net/language-translator/api";

        void Start()
        {
            LogSystem.InstallDefaultReactors();
            StartCoroutine(CreateService());
        }

        private IEnumerator CreateService()
        {
            TokenOptions languageTranslatorTokenOptions = new TokenOptions()
            {
                IamApiKey = apiKey
            };

            Credentials languageTranslatorCredentials = new Credentials(languageTranslatorTokenOptions, serviceUrl);

            while (!languageTranslatorCredentials.HasIamTokenData())
                yield return null;

            languageTranslator = new LanguageTranslatorService(versionDate, languageTranslatorCredentials);
        }

        public void Translate(string text)
        { // OnTranslate and OnFail are in error.
            languageTranslator.GetModel(OnTranslate, OnFail, text, translationModel);

            Translate("Are you enjoying the course?");
        }

        private void OnFail(RESTConnector.Error error, Dictionary<string, object> customData)
        { // RESTConnector.Error is also giving an error.
            Log.Debug("LanguageTranslatorDemo.OnFail()", "Error: {0}", error.ErrorMessage);
        }

        private void OnTranslate(Translation response, Dictionary<string, object> customData)
        { // response.translations gives an error.
            ResponseTextField.text = response.translations[0].translation;
        }
    }
    The errors are:

    Error CS1061 ‘Translation’ does not contain a definition for "translations" and could not find any "translations" extension method that accepts a first argument of type "Translation"

    Error CS0426 Type name "Error" does not exist in type "RESTConnector"

    Error CS1503 Argument 2: Unable to convert from "method group" to "object"

    Error CS1503 Argument 1: Unable to convert from "method group" to "Action <DetailedResponse <TranslationModel>, IBMError>"
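Judging from the last compiler error above, the newer SDK seems to expect callbacks of the form `Action<DetailedResponse<T>, IBMError>` rather than the old `(response, customData)` pairs. A hedged sketch of what an updated callback might look like (the types are taken from the error messages; the exact member names are assumptions, not verified against the SDK docs):

```csharp
// Sketch only: callback shape inferred from the CS1503 error above.
// DetailedResponse<T>, IBMError, and their members are assumptions here.
private void OnGetModel(DetailedResponse<TranslationModel> response, IBMError error)
{
    if (error != null)
    {
        Log.Debug("LanguageTranslatorDemo.OnGetModel()", "Error: {0}", error.ToString());
        return;
    }
    Log.Debug("LanguageTranslatorDemo.OnGetModel()", "Response: {0}", response.Response);
}
```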
     
  40. teamdevsupergeeks

    teamdevsupergeeks

    Joined:
    Sep 19, 2019
    Posts:
    2
    I found this project, which seems to be updated, but after entering the data (API key and URL) of the LT and STT services, it still didn't work; or maybe I forgot something.

    Does anyone have a suggestion?

    Link: https://github.com/mediumTaj/watson-live-translation
     
  41. yogeshbangar

    yogeshbangar

    Joined:
    Aug 3, 2017
    Posts:
    1
    I solved the error.

    I encountered the same issue with Unity 2018.3.14f1.
    I just changed the player settings and then it works fine:
    File > Build Settings > Player Settings > Other Settings
    Configuration
    • Scripting Runtime Version: .NET 4.x Equivalent
    • API Compatibility Level: .NET 4.x
     
