Discussion in 'Made With Unity' started by theylovegames, Sep 23, 2012.
I'm confused about detection accuracy. After recording words, sometimes when I'm not saying anything at all, like knocking on the desk or touching the microphone, it still matches a word!
I hope you can improve this!
It doesn't work for me... I can't set up any word.
I have already a voice chat implemented, and this is working with my headset.
Can you explain why it doesn't work?
Didn't work for me either... I seem to be recording some audio, but it just does random movements.
I have a new example, #10, which uses voice commands to drive facial expressions.
Take the new demo for a spin - http://theylovegames.com/WordDetection_1_7.html
I generated the Facial Expressions pretty quickly using customizations to get an approximation for these facial shapes. I then exported the faces as OBJ files and imported to Unity.
Unity toggles the MeshRenderer on each expression to switch between the faces.
The next example will use an additional mesh that lerps between the expressions, rather than toggling visibility for smooth transitions.
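A minimal sketch of the toggling approach described above; the class and field names here are my own placeholders, not the package's API:

```csharp
using UnityEngine;

// Hypothetical sketch: one MeshRenderer per facial expression,
// with exactly one enabled at a time.
public class ExpressionSwitcher : MonoBehaviour
{
    // One renderer per expression face, e.g. neutral, smile, frown
    public MeshRenderer[] expressions;

    // Call this when a word is detected; index maps word -> expression
    public void ShowExpression(int index)
    {
        for (int i = 0; i < expressions.Length; ++i)
        {
            expressions[i].enabled = (i == index);
        }
    }
}
```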
This video inspired me to do it by hand, and I figured there would be an automated way to make the process faster.
This is the process I'm using to add new heads for facial expressions.
New tutorial for adding a retro talking head.
A few new demos.
Word Detection now supports Blend Shapes.
First I combine the old OBJ files into FBX Morph Maps (Blend Shapes):
The FBX Blend Shapes then fit right into Unity for use by the Word Detection package.
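As a rough sketch of how the imported Blend Shapes can be driven from a script (the class, field names, and the word-to-shape mapping are my own assumptions, not the package's code):

```csharp
using UnityEngine;

// Hypothetical sketch: blend smoothly toward one facial expression
// using the FBX blend shapes (morph targets) on a skinned mesh.
public class BlendShapeFace : MonoBehaviour
{
    public SkinnedMeshRenderer face;   // mesh carrying the blend shapes
    public float blendSpeed = 5f;      // higher = faster transitions

    int targetShape;                   // blend shape index to show

    // Call this when a word is detected
    public void SetExpression(int blendShapeIndex)
    {
        targetShape = blendShapeIndex;
    }

    void Update()
    {
        // Move each weight toward 100 (the target shape) or 0 (all others)
        for (int i = 0; i < face.sharedMesh.blendShapeCount; ++i)
        {
            float current = face.GetBlendShapeWeight(i);
            float target = (i == targetShape) ? 100f : 0f;
            face.SetBlendShapeWeight(i,
                Mathf.MoveTowards(current, target,
                                  blendSpeed * 100f * Time.deltaTime));
        }
    }
}
```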
New Demo: (Word Detection with Blend Shapes)
The 1.8 version of Word Detection is now available in the Unity Asset Store. This version includes the new facial animations.
I've already started on the next examples which will include animated models with facial expressions that use word detection for input.
You can even use word detection to drive emotion:
Word Detection input can be used to drive playing movie clips.
Sir, is it possible to control a first person controller or mobile controller with your app?
When the word is detected, you would tie that with the character controller and maybe a forward input for a number of seconds.
Other word actions could make 90 degree left and right turns.
Another set of words could do strafing.
It's a good example to add. Thanks for the input!
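A rough sketch of the word-to-movement mapping described above, assuming a `CharacterController`; the word names, callback, and `MoveForSeconds` helper are placeholders, not the package's actual hooks:

```csharp
using UnityEngine;
using System.Collections;

// Hypothetical sketch: map detected words to character actions.
public class VoiceMover : MonoBehaviour
{
    public CharacterController controller;
    public float speed = 3f;

    // Hook this up to whatever callback fires when a word is detected
    public void OnWordDetected(string word)
    {
        switch (word)
        {
            case "Forward":
                StartCoroutine(MoveForSeconds(transform.forward, 2f));
                break;
            case "Left":
                transform.Rotate(0f, -90f, 0f);   // quarter turn left
                break;
            case "Right":
                transform.Rotate(0f, 90f, 0f);    // quarter turn right
                break;
            case "Strafe":
                StartCoroutine(MoveForSeconds(transform.right, 1f));
                break;
        }
    }

    // Push the controller in a direction for a fixed number of seconds
    IEnumerator MoveForSeconds(Vector3 direction, float seconds)
    {
        float end = Time.time + seconds;
        while (Time.time < end)
        {
            controller.SimpleMove(direction * speed);
            yield return null;
        }
    }
}
```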
I recently bought your application and I'm trying to connect it to my character controller, particularly on Android. I don't know what to use to move the character... Can you help me with a video tutorial or a step-by-step process to make my character move?
Your character is represented with a game object in the scene. I would attach a custom script to that game object and then use an update method that transforms the character using the word detection input.
It's a great idea and I'll add it to the list of examples for the next update.
Hoping for that example soon! Thanks!
Thanks for the suggestion. I put together an example for using Word Detection to drive the Character Controller.
I'll post a demo online....
Thanks, man! Did you try walking into a wall with a collider? Mine glitches through it.
No problem adding a wall. I just added a plane and the character controller bumps into it. The example will include a wall as part of the demo.
Here you can try the new demo:
Use Word Detection to Drive the Character Controller:
Sir Tim! Sorry for the late reply, I've been debugging my code... I just replied to your email in our conversation.
The new examples are now published in the asset store.
This new update comes with the goat video examples! You also get the voice controlled character controller example!
Here's the list of example scenes:
Be sure to check out today's "Cyber Monday Sale" to get your 85% discount on "Word Detection"! (TODAY ONLY!)
For the character controller, how can I make the character stop everything when I say "stop"? Right now the character runs continuously. Also, I don't want the player to have control of these voice commands. If I make an executable game, can someone else use it with my predefined words, or does it have to be my voice specifically to control the character?
Also, if I say shoot. Can the character shoot or is this beyond this application?
The source is included. In the character controller script, I would add another word, "Stop", and then set the detection of that word to stop the character's movement. Scripting is required, but the Word Detection example gives you a skeleton to start from.
Adding "Shoot" is a matter of adding another word to the list of words being detected. The word profiles control which words are detected and every example has a list of words that the example uses.
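As a minimal sketch of adding "Stop" and "Shoot" handling, assuming a detection callback; the callback name, the `moving` flag, and `Fire()` are my own placeholders, not the package's code:

```csharp
using UnityEngine;

// Hypothetical sketch: extend the detected word list with "Stop" and "Shoot".
public class VoiceCommands : MonoBehaviour
{
    bool moving;

    // Hook this up to the package's word-detected event
    public void OnWordDetected(string word)
    {
        switch (word)
        {
            case "Go":
                moving = true;
                break;
            case "Stop":
                moving = false;       // halt all movement
                break;
            case "Shoot":
                Fire();               // your own shooting logic
                break;
        }
    }

    void Update()
    {
        if (moving)
        {
            transform.Translate(Vector3.forward * 3f * Time.deltaTime);
        }
    }

    void Fire()
    {
        Debug.Log("Bang!"); // replace with projectile spawning, etc.
    }
}
```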
The code is full source C# and will work for all the target platforms where Unity has access to the Microphone. If the Android device works in Unity then YES it works for Android. I tested that it works on a Samsung Galaxy S3 phone.
When I try to build .apk a warning says:
Assets/WordDetection/Character Controllers/Sources/Scripts/ThirdPersonController.js(294,32): BCE0019: 'ResetInputAxes' is not a member of 'WordDetectionInput'
Here's the update you are looking for.
This script has to go in a subfolder within Plugins to be accessible from the ThirdPersonController.js script.
If you want to change to landscape view, you'll find that in the Unity Player Settings under Presentation.
So I'm working on a project that works with user interaction with molecules. I have a script that creates a molecule but it's not an object like cube or the interactive heads. Can i put the word detection code inside of the script that creates and interacts with the molecule or do I have to keep the scripts separate?
The scripts would just drop into your project. You would be able to invoke the plugin scripts from your own scripts. The best place to start is to study the various example scripts.
Your script is really amazing, but I must ask:
How many words can it detect? Can it read a small group of words? How much memory does it consume when it does that?
Word Detection can detect any number of words. The thing is, you want to keep the expected list of words in small groups to maximize accuracy. Users can record word profiles for any language.
Ok, thanks, you have sold one copy ;-D
So for your example 5, i.e. the scene where the cube can be controlled: every time a word is assigned to a voice recording, the command for that word executes while it is being set. I'm wondering which line in the script controls that, because I'm implementing a way to switch between scenes, and every time I try to set the voice recognition for a word, it executes the command and switches scenes before I can finish setting it.
I'm terribly sorry to bother you here, but I wasn't able to find an email address for you on your site.
I'm leading a team that is developing a mobile app for one of the largest children's brands in the world. We are interested in exploring voice controls, and your plugin seems to be one of the most popular hits from our searches. I would like to discuss this further with you, if possible, over email or Skype.
Please contact me as soon as you can at loughrank AT gmail DOT com.
I hope to hear from you soon!
Hi Kevin, I emailed you directly.
"You'll see it pulls in input from the microphone. As long as you can get the raw wave data from the audio clip the same algorithm would work."
Hi, is it possible for you to elaborate on the above regarding the use of prerecorded sound samples? I have a number of voice commands recorded elsewhere that I would like to load into your system. I have tried exporting as RAW (unsigned 8-bit PCM) and loading using example 8, but the console reports "Failed to load word: System.OutOfMemoryException: Out of memory".
I have noticed that, in Audition, the wave profiles of my exported audio files and of the example 8 saved files are completely different. Can you let me know what I am doing wrong and how these audio files can be used?
That is awesome... exactly what I hoped to see one day.
Could you give some numbers on response delay, depending on noise, quality, etc.?
From Audacity, export to WAV or MP3 and then drop the file into the Unity Assets folder. That turns the sound into an AudioClip, and then you can call clip.GetData() to get the float array that detection uses.
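The steps above can be sketched as follows; the class name is my own, and the clip is assumed to be assigned in the Inspector after being dropped into Assets:

```csharp
using UnityEngine;

// Sketch: pull raw wave data from a pre-recorded AudioClip so it can
// be handed to detection, the same float format the microphone produces.
public class ClipLoader : MonoBehaviour
{
    public AudioClip recordedWord;   // assign in the Inspector

    void Start()
    {
        // One float per sample per channel, in the range [-1, 1]
        float[] samples = new float[recordedWord.samples * recordedWord.channels];
        recordedWord.GetData(samples, 0);

        Debug.Log("Loaded " + samples.Length + " samples at " +
                  recordedWord.frequency + " Hz");
        // 'samples' can now be passed to the detection algorithm.
    }
}
```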
The response delay should take as long as it takes to say the word and read the microphone data. It should be under 100ms.
Recording a noise profile should help. That said, accuracy is pretty low unless you can make the sounds distinct: the fewer sounds in detection, the more likely they'll differ from one another.
Try the examples and experiment.
Hi Theylovegames, I'm interested in buying your asset. I'm looking for an asset that is able to recognize some words, but these words must be prerecorded; I want to use those sounds as triggers for some events. Does your asset work with pre-recorded sounds?
Yes, several examples in the package include loading pre-recorded sounds.
Although for best matching, it's good to record sounds in the environment where you'll be testing.
When I go to buy word detection, I get a warning that the scripts are not/may not be compatible with Unity 5. Can you please let me know if this works with Unity 5?
I published the package on Unity 4.X and have not resubmitted for 5.X. The API is basically the same with likely some minor changes to audio.PlayOneShot().
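For reference, the Unity 4 to 5 audio change mentioned above is small: the `audio` shortcut property was removed, so you fetch the AudioSource explicitly. A minimal sketch:

```csharp
using UnityEngine;

public class PlayWord : MonoBehaviour
{
    public AudioClip clip;

    void PlayBack()
    {
        // Unity 4:  audio.PlayOneShot(clip);
        // Unity 5+: get the AudioSource component explicitly
        GetComponent<AudioSource>().PlayOneShot(clip);
    }
}
```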
Is it possible to detect whether the word was said correctly, and if so do something, and if not try again, and so on? Also, what about text-to-speech, and the other way around?