Koreographer - Audio Driven Events, Animation, and Gameplay

Discussion in 'Assets and Asset Store' started by SonicBloomEric, Sep 15, 2015.

  1. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Hello! Thanks for reaching out!
    There are multiple ways to handle this and the best solution for you depends on the needs of your game. Two options are discussed in this FAQ answer on the Koreographer forums. As a quick overview here:
    1. Adjust the event timings in the Koreography Editor (or in script) and use them to start Timing Window timers. Move the OneOff events (I assume you're using OneOffs) earlier by a known number of samples, to the earliest time you want to treat as "too early". Each event then starts a timer that you manage, defining a "window of time" that you configure (presumably only a few frames long, maximum, depending upon the music). If the user presses a button inside the window defined by your timer, you can decide whether the press was closer to too early, spot on, or too late.
    2. Skip the event system and use the Koreography Data directly in script. This is the approach that the Rhythm Game Demo that is included with Koreographer implements. You treat the Koreography as an event list and manage where you are manually. If you track what the "next" event is, you can always check and see how far away you are from that timing and make the decision about whether the user is too early, too late, or...
    One key here is that OneOff events will only trigger on a single frame. You can cache the fact that an event was hit (Koreographer's events happen early in the Update loop) and then check for button press in your main Update loop (or even in the event callback, for that matter), but it's extremely precise (OneOffs only trigger in one frame). You can also implement a system like #1 and #2 above by converting all your OneOffs to Spans, but this makes the data harder to maintain.
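    The timing-window check in approach #1 boils down to simple sample arithmetic. Here is a minimal, language-neutral sketch (shown in Python; the function and parameter names are illustrative, not Koreographer API):

    ```python
    # Sketch of approach #1: classify a button press against a OneOff event.
    # All sample positions are integer offsets into the audio file.

    def classify_press(press_sample, event_sample, early_window, late_window):
        """Return 'early', 'hit', or 'late' for a press relative to an event."""
        delta = press_sample - event_sample  # negative = before the beat
        if delta < -early_window:
            return "early"
        if delta > late_window:
            return "late"
        return "hit"

    # A 120ms window on either side at 44.1kHz: 0.120 * 44100 = 5292 samples.
    window = int(0.120 * 44100)
    print(classify_press(100_000, 103_000, window, window))  # -> hit
    ```

    The same comparison works whether you drive it from event callbacks or from direct Koreography data reads (approach #2).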

    In short, I'd recommend using either approach #1 or #2. Feel free to start with the Rhythm Game Demo that's included in Koreographer. Others have done that very thing with great success!

    I hope this helps!
     
    akbar74 likes this.
  2. akbar74

    akbar74

    Joined:
    Nov 16, 2017
    Posts:
    15
    Thanks for your answer.
    What do you mean by "moving events"?

    I think my big problem is that I don't understand samples and beats. Why don't we just work with real time?
     
  3. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Select them in the Koreography Editor and then drag them to the left (earlier) by a set amount. You can also use the controls to move them by a specific number of samples or snap them to an earlier beat demarcation.

    Samples are the individual numbers that make up a digital audio file. For any audio file you will see something called the Sample Rate, a number like 44.1kHz or, equivalently, 44100. That defines the number of samples in one second of "solar" (wall-clock) time. So moving an event one sample in an audio file with a sample rate of 44100 means that you move it by 1/44100 seconds (0.000022675736961 seconds).

    Beat time refers to the tempo of the music. If you have 120 Beats Per Minute (BPM) then you have a beat every half-second. This allows you to time things to the main pulse of the music. Music plays at different speeds, and the beat is one of the main determinants of that speed "feel". Using the tempo setting (e.g. 120BPM) you can convert to solar time: simply multiply the beat number by (60/BPM). In the case of 120BPM, you would have 60/120 = 0.5, so 1 beat lasts 0.5 seconds.

    Music does not run on real (solar) time. It has its own self-consistent, typically per-track timing mechanisms, and Koreographer does everything it can to enable workflows that make the most of them. When authoring events in the Koreography Editor, you can typically ignore timing altogether and simply match things up to the waveform. Unlike beats (tempo), which can change throughout a song, samples do not: they are consistent throughout the playback of a digital audio file, and they can easily be converted to precise solar time values if need be (numSamples/sampleRate).
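    Both conversions described above are one-liners. A quick sketch of the formulas (in Python; plain arithmetic, no Koreographer API involved):

    ```python
    # Converting sample time and beat time to "solar" (wall-clock) seconds.

    def samples_to_seconds(samples, sample_rate):
        # One second of audio contains sample_rate samples.
        return samples / sample_rate

    def beats_to_seconds(beats, bpm):
        # Each beat lasts 60/BPM seconds.
        return beats * (60.0 / bpm)

    print(samples_to_seconds(44100, 44100))  # one second of 44.1kHz audio -> 1.0
    print(beats_to_seconds(1, 120))          # one beat at 120BPM -> 0.5
    ```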

    I hope this helps clarify things!
     
    akbar74 likes this.
  4. akbar74

    akbar74

    Joined:
    Nov 16, 2017
    Posts:
    15
    YES, OF COURSE!
    Your answer was really great and now I think I can do something with my game.
    Thank you again :):):)
     
    SonicBloomEric likes this.
  5. gegagome

    gegagome

    Joined:
    Oct 11, 2012
    Posts:
    331

    Hi Eric

    Thank you for your answer

    Using Koreographer, is it better to detect, say, 5-second chunks of silence at runtime or ahead of time?

    How is this better handled?

    Thanks
     
  6. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Precompile time. (Design/Editor, rather than with runtime analysis, which Koreographer does not support.) With Koreographer you would add an event to a KoreographyTrack (with ID "silence", for example) at the time the silence begins in the AudioClip. You have two good options:
    1. OneOff Event: Add a OneOff event at the beginning of the silent section of the AudioClip. You could add a Float Payload that has the amount of time the silence lasts and you could track that yourself!
    2. Span Event: Add a Span event that covers the silent section of the AudioClip. When you get an event callback, you can always ask for the start and end time of the event (they are identical for OneOff events). With a Span event, you would simply subtract the start time from the end time to get the event duration, in case you need to pass this information to another system. You will get a callback every frame that the audio is playing silence (handling the callback is up to you!).
      • Note 1: The "time" of each event is reported in Samples. To convert Samples to Seconds, you divide the Sample time by the Sample Rate of the audio. (You can grab this value from the Koreography object or hard-code it if you know the Sample Rate!)
      • Note 2: The Span event option is probably the better of the two because it is easier to simply "draw" the timing information (the event) into the Koreography, rather than calculate the value and store it manually in a Payload.
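    The duration math from option 2 is trivial once you have the event's sample range. A sketch (in Python; the start/end values stand in for what an event callback would report, and the names are illustrative, not Koreographer API):

    ```python
    # Option 2 sketch: deriving silence duration from a Span event's sample range.

    def span_duration_seconds(start_sample, end_sample, sample_rate):
        # End minus start gives the span length in samples; dividing by the
        # sample rate converts that length to seconds.
        return (end_sample - start_sample) / sample_rate

    # A span covering 5 seconds of silence in a 44.1kHz file:
    print(span_duration_seconds(441_000, 661_500, 44100))  # -> 5.0
    ```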
    Does this make sense? I hope it helps!
     
  7. eric_delappe

    eric_delappe

    Joined:
    Dec 10, 2015
    Posts:
    18
    Is there a way to change the tempo from the MIDI converter before exporting to a Koreography track?

    I'm exporting MIDI from Ableton Live, which does not save tempo information. Thus, they always import into Koreographer at 120BPM. Since my actual song is 140BPM, the MIDI events do not line up correctly.
     
  8. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Question for Clarification: When you say the events do not line up correctly, is this with the beat grid in the Koreography Editor or that they do not line up with the corresponding events in the audio during playback?

    If the resulting events are properly lined up with the audio playback and it is only the beat grid that's an issue (or events relying upon the beat-time APIs), please open the Koreography file containing the converted events in the Koreography Editor and adjust the tempo there.

    On the other hand, if the resulting events do not properly line up with the audio playback, then please verify that the Koreography used during conversion has an audio file with the same Sample Rate as the file needed during playback. The MIDI Converter uses the audio file's sample rate to convert from time-in-seconds to time-in-samples. If the same KoreographyTrack is used with two audio files that feature different Sample Rates you may experience issues with events being either too quick or too slow.
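    To illustrate why a Sample Rate mismatch shifts events: an event stored at sample position N plays back at N/sampleRate seconds, so the same stored position means a different time under a different rate. A quick sketch of the arithmetic (in Python):

    ```python
    # An event's playback time depends on the sample rate of the file it
    # plays against, not just its stored sample position.

    def event_time_seconds(sample_position, sample_rate):
        return sample_position / sample_rate

    pos = 44100  # authored as "1 second" against a 44.1kHz file
    print(event_time_seconds(pos, 44100))  # -> 1.0
    print(event_time_seconds(pos, 48000))  # -> 0.91875 (fires ~81ms early)
    ```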

    Does this help?
     
  9. eric_delappe

    eric_delappe

    Joined:
    Dec 10, 2015
    Posts:
    18
    The events are not lined up with the beat grid or the audio, but I don't see what sample rate has to do with it. The issue is that the BPM of the MIDI is different from the BPM of the Audio/Koreography. The events are using 120BPM but the song is 140.

    If I use a 120BPM version of my song and set the BPM of the Koreography to 120 as well, everything works fine. But obviously I don't want to be limited to 120BPM songs.
     
  10. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Koreography Events are tied to sample positions within the audio file. MIDI data comes through as one of several possible formats of delta-time-between-events. The MIDI Converter converts these various time formats into Solar time (hours:minutes:seconds) and then into Sample positions based on the Sample Rate of the audio file in the Koreography you specify in the MIDI Converter (a typical sample rate value is 44100 samples-per-second).

    If you add the 140BPM version of your audio file to a Koreography, specify that Koreography when using the MIDI Converter, and then change the BPM of the Tempo Section* to 140, does the beat grid not adjust to line up?

    *Quick Note: If you specify an initial Start Offset in the MIDI Converter, you will need to select the second Tempo Section in the Tempo Section to Edit dropdown of the Koreography Editor when changing the BPM.
     
  11. eric_delappe

    eric_delappe

    Joined:
    Dec 10, 2015
    Posts:
    18
    They do not line up. Here is the step by step of what I'm doing:

    1) Create new Koreography in Koreography Editor
    2) Drag in 140bpm audio file
    3) Change BPM of tempo section to 140, so beat grid lines up with audio
    4) Open MIDI file in MIDI converter
    5) Check "selected" box on channel 1
    6) In "Koreography Track Export" tab, drag in Koreography created in step 1
    7) Fill in "Event ID" field, and click export new track
    (Note: this adds the track to the Koreography's dropdown list, but the "Track Event ID" field is still empty and greyed out, and the events themselves are not added. Might be a separate bug?)
    8) In Koreography Editor, click "Load" and select Koreography Track created in step 7. "Track Event ID" is now filled in and events are added, but with incorrect timing

    Here's a screenshot. The audio file is just a kick drum pattern so it should be easy to see where the events should line up (note: there is one additional event not pictured because it goes beyond the range of the audio clip)


    And here is the same but with the 120BPM version of the audio, showing the events lining up correctly:
     
  12. eric_delappe

    eric_delappe

    Joined:
    Dec 10, 2015
    Posts:
    18
    In the meantime, I've found a workaround. I can change the BPM of the MIDI with a free software called Midi Editor before bringing it into Koreographer.
     
    SonicBloomEric likes this.
  13. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Thanks very much for the clear writeup and examples. I dove into the codebase and was reminded that the MIDI Converter always starts by assuming the MIDI standard default of 120bpm. One of the "delta-time-between-events" formats is called PPQ (Parts Per Quarter note) which subdivides time by beats. Ableton uses a PPQ of 96 during export but does not specify a BPM (there doesn't appear to be a way to do it in Ableton, either). According to the MIDI spec, anyone dealing with that file should fall back on the default tempo [120bpm], which the MIDI Converter does.
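    The tempo fallback explains the misalignment you saw: beat positions converted at the MIDI-standard 120bpm land early against a 140bpm song. A sketch of the conversion arithmetic (in Python; plain math, not the MIDI Converter's actual code):

    ```python
    # PPQ delta-time to seconds: ticks -> beats -> seconds at a given tempo.
    # Ableton exports with a PPQ of 96, so one quarter note is 96 ticks.

    def ticks_to_seconds(ticks, ppq, bpm):
        beats = ticks / ppq
        return beats * (60.0 / bpm)

    ticks = 4 * 96  # an event on beat 4 of the MIDI file
    print(ticks_to_seconds(ticks, 96, 120))  # converter's assumed 120bpm -> 2.0s
    print(ticks_to_seconds(ticks, 96, 140))  # where the 140bpm song actually is -> ~1.714s
    ```

    Every event drifts by the ratio 140/120, which is why re-tempoing the MIDI file before conversion fixes the alignment.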

    This isn't helpful for you, though. :/

    It seems that what you'd like is the ability to adjust events in a Koreography Track based on BPM, yes? An option to adjust all events within a Koreography's tracks based on the adjustments made to the BPM? Such a feature wouldn't just change the beat grid when adjusting a Tempo Section's BPM, but would shift all events based on the change. If so, we can add this to our system as a feature request.

    That is indeed an excellent workaround. I'm glad to hear you found one and are unblocked! [Another option would be to build a quick utility script that performs the operation described in the feature request above.]

    For macOS users, we've seen success with the MidiYodi application.
     
  14. akbar74

    akbar74

    Joined:
    Nov 16, 2017
    Posts:
    15
    Hi again :)
    I've done a lot with your previous help. Thank you.

    Can I change the music at runtime?
    For example, in your Rhythm Game Demo, can we change the music at runtime? In other words, can we stop the previous music and play another track?
     
  15. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Yes, you can do this. Please take a look at the code in RhythmGameController.Restart(). That shows specifically how to restart the Rhythm Game.

    In addition, all Music Players provided by Koreographer have a LoadSong() method that you can use to change music. These methods take Koreography data as input in one way or another (MultiMusicPlayer takes a list of MusicLayer objects which each may reference either an AudioClip asset or a Koreography asset). Use these methods to change the playing music/Koreography at runtime.
     
    akbar74 likes this.
  16. gegagome

    gegagome

    Joined:
    Oct 11, 2012
    Posts:
    331

    Hi Eric

    I worked on another part of this game but the time has come to purchase Koreographer. Couple more questions:
    I am currently using AudioSource.Play to play my stems, so I assume I need to perform playback using Koreographer? Can I play around 12 different tracks simultaneously?
    Do you have a built-in song ended event?

    Thanks again!
     
  17. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    You do not need to perform playback using Koreographer's Music Player components (SimpleMusicPlayer, MultiMusicPlayer) unless you need access to the Music Time APIs. Even then, you can bypass the need for those components by implementing the IKoreographedPlayer interface explained on page 31 (in v1.5.0) of the Koreographer User's Guide documentation.

    Overall there are two options:
    1. Use the MultiMusicPlayer to control your layers. This player allows ducking of tracks, but they must all be constantly playing (this keeps them synchronized). If you have a more complex setup, then this option is out.
    2. Use the AudioSourceVisor to watch your AudioSource components. The AudioSourceVisor is a simple component designed to track the playback progress of an AudioSource component that it does not itself manage. You should be able to simply "attach" one to each of your existing AudioSource components and everything should "just work". This component is described on page 32 of the Koreographer User's Guide.
      • If you went this route and still wanted access to the Music Time APIs, you would implement the IKoreographedPlayer interface as mentioned above.
    As for playing 12 tracks simultaneously: yes. If you use option 2 above, Koreographer makes no guarantee that the audio will play back synchronized; if you use option 1, Koreographer makes a best effort to keep audio playback synchronized. With respect to the event system, Koreographer has no problem whatsoever handling 12 Koreographed tracks running simultaneously.

    As for a built-in song-ended event: unfortunately, no. This is due to limitations in Unity's underlying audio subsystem. You can, however, simulate one by adding a KoreographyEvent at the end of each track you're playing. Koreographer will dutifully issue a callback when the event is reached (regardless of the looping flag state). No callback occurs if you stop the AudioSource directly with AudioSource.Pause/Stop.

    Hope this helps!! :D
     
    gegagome likes this.
  18. akbar74

    akbar74

    Joined:
    Nov 16, 2017
    Posts:
    15
    Hi,
    How can I get the last sample of the track in Start()?
    I want to check when the track finishes. My idea is to check whether "Koreo.GetLatestSampleTime()" equals the last (greatest) sample of the track and treat that as the track being finished. Is this a good way?
    Sorry, I ask a lot of questions :(
     
  19. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Why not simply add a OneOff KoreographyEvent to your Koreography at the last position in the audio track and then listen for it with the event system? This is how we recommend people determine if the end of a song was reached.

    Your proposed approach could work, but it may have issues if you loop the audio. In a looped scenario it is very likely that the GetLatestSampleTime() API will never actually report the final sample time index; rather, you would receive something closer to the beginning of the clip.

    For the record, you access the last sample time via the standard AudioClip API (keep in mind you may have to subtract 1 from the total sample count to get the actual indexed value).
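    The off-by-one point is worth spelling out: sample positions are zero-based, so the last valid index is the total count minus one. A tiny sketch (in Python; the total comes from the clip data, e.g. Unity's AudioClip.samples):

    ```python
    # The last valid sample index of a clip, given its total sample count.

    def last_sample_index(total_samples):
        # Positions are zero-based, so the final index is count - 1.
        return total_samples - 1

    # A 3-second clip at 44.1kHz:
    total = 3 * 44100
    print(last_sample_index(total))  # -> 132299
    ```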
     
    akbar74 likes this.
  20. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    I imported the PlayMaker integration and got all these errors.

    [screenshot of the console errors attached]

    I'm using version 1.8.9 with Unity 2017.2.2.1p4.
     
  21. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Can you verify that PlayMaker is actually installed in your project? We were only able to reproduce the errors you show by installing the PlayMaker integration without having PlayMaker actually installed (resulted in the same 158 errors).
     
  22. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    Weird. It turns out that after importing the PlayMaker package, I had to go to its Install folder and install the package in there. The errors are gone now. Sorry about that. Thanks!
     
    SonicBloomEric likes this.
  23. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    By the way, were you ever able to get in contact with Lazlo from Bolt? It looks like the units in there aren't too hard to create, based on what everyone is saying. It'd be awesome to have some specific Koreo units, although I can do a little with what Bolt pulls from reflection.
     
  24. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Huh. PlayMaker is odd in that it has a two-step installation process - it has a helpful "project checker" that verifies whether installing the update will completely bork your project or not. For fresh projects, however, this can be somewhat confusing.

    Glad to hear that you got it worked out!

    I was, yes! Please take a look at this post in this thread and, more importantly, @ludiq's (Lazlo's) response on the topic in the Bolt forums here.
     
  25. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    Thanks! I actually got it working! So another question. xD

    Is it possible to use the Rhythm Game controller, lane, and note scripts on 3D objects, or are they only meant for 2D purposes?
     
    Last edited: Apr 19, 2018
  26. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    You can modify the code to work with 3D objects. The Rhythm Game Demo was built for 2D (Unity UI). You can adapt it to 3D but you will need to handle the transition (asset changes, boundary/target definition, positioning, etc.) yourself.

    All of the code is provided in the demo and is well commented. Many others have successfully adapted it to their needs.
     
  27. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    Thanks, yeah I was looking through it and it does look very organized.
     
    SonicBloomEric likes this.
  28. artificialStart

    artificialStart

    Joined:
    Aug 26, 2015
    Posts:
    2
    I'm trying to use this system to make a fairly simple 3D endless runner style game where the player can only move left or right with the beat and have run into a few roadblocks.

    Firstly, it seems that the events sometimes do and sometimes don't register, which has me really confused, as my Koreography is just a single track of empty-payload events where every event is on a beat.

    Code (CSharp):
    [EventID]
    public string eventID;
    public Koreography rhythmKoreo;
    public float hitWindowRangeInMS = 120;
    int hitWindowRangeInSamples;
    public int curTime;
    public int curCheckIdx = 0;
    public List<KoreographyEvent> rhythmEvents;
    public KeyCode moveL;
    public KeyCode moveR;
    // Fields referenced below but omitted from the original snippet:
    float horizVel;
    Rigidbody rigidbodyCom;

    void Start()
    {
        rhythmKoreo = Koreographer.Instance.GetKoreographyAtIndex(0);
        rhythmEvents = rhythmKoreo.GetTrackByID(eventID).GetAllEvents();
        rigidbodyCom = GetComponent<Rigidbody>();
    }

    void Update()
    {
        hitWindowRangeInSamples = (int)(0.001f * hitWindowRangeInMS * rhythmKoreo.SampleRate);

        rigidbodyCom.velocity = new Vector3(horizVel, 0, 4);
        curTime = Koreographer.GetSampleTime();
        if (curTime > rhythmEvents[curCheckIdx].StartSample)
        {
            curCheckIdx++;
        }
        if (Input.GetKeyDown(moveL))
        {
            if (IsBeat())
            {
                // move left
            }
        }
        if (Input.GetKeyDown(moveR))
        {
            if (IsBeat())
            {
                // move right
            }
        }
    }

    public bool IsBeat()
    {
        curTime = Koreographer.GetSampleTime();
        int beatTime = rhythmEvents[curCheckIdx].StartSample;
        int hitWindow = hitWindowRangeInSamples;

        return (Mathf.Abs(beatTime - curTime) <= hitWindow);
    }
    From my understanding, this logic should be working, but the actual functionality of it is very spotty and I can't figure out the cause of the problem.
     
  29. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Sounds fun! Let's see if we can help you with your issue!

    First, the line that computes hitWindowRangeInSamples should be moved into the Start() method. All of the values involved appear to be either constant or determined once at edit time, so there is no reason to re-evaluate them every frame :)

    That is, of course, unless you're playing with the inspector at runtime to determine the best values to use. In that case, please disregard this note!

    That out of the way, let's take a look at the meat of the issue:
    In the index-advance check in Update(), you're saying: "if the current time is beyond the start position of the current 'next' beat, advance the index so that we check for the next beat." This is fine, except that your hitWindow logic says it's okay to be up to 120ms ahead of or up to 120ms behind the exact beat time (see your IsBeat() method). To capture this, you would need to update the test so that the check is:
    Code (CSharp):
    if (curTime > rhythmEvents[curCheckIdx].StartSample + hitWindowRangeInSamples)
    Without considering the hitWindowRangeInSamples before incrementing the check index, events that should be in range may have been straight-up dropped from consideration.

    Even better than that logic would be a while loop that can advance the index several times in one frame, in case you have two very-close-together events or a sudden hiccup in framerate that causes you to skip over multiple events. That would look something like:
    Code (CSharp):
    while (curCheckIdx < rhythmEvents.Count && curTime > rhythmEvents[curCheckIdx].StartSample + hitWindowRangeInSamples)
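    The corrected logic is easy to sanity-check outside Unity. Here is a pure-logic simulation of the advance loop and window test (in Python, mirroring the C# above; not Koreographer API):

    ```python
    # Simulating the corrected advance: only step past an event once the current
    # time is beyond its entire hit window, and loop in case a frame hiccup
    # skipped several events at once.

    def advance_index(cur_idx, cur_time, event_samples, hit_window):
        while (cur_idx < len(event_samples)
               and cur_time > event_samples[cur_idx] + hit_window):
            cur_idx += 1
        return cur_idx

    def is_beat(cur_idx, cur_time, event_samples, hit_window):
        return (cur_idx < len(event_samples)
                and abs(event_samples[cur_idx] - cur_time) <= hit_window)

    events = [10_000, 20_000, 30_000]
    window = 5_292  # ~120ms at 44.1kHz

    # A frame hiccup jumps time past two whole windows; the loop recovers:
    idx = advance_index(0, 28_000, events, window)
    print(idx)                                   # -> 2
    print(is_beat(idx, 28_000, events, window))  # event at 30000 is in range -> True
    ```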
    I hope this helps!
     
  30. artificialStart

    artificialStart

    Joined:
    Aug 26, 2015
    Posts:
    2
    That fixed my issue! Thanks!
     
    SonicBloomEric likes this.
  31. TomaTantrum

    TomaTantrum

    Joined:
    Nov 2, 2014
    Posts:
    27
    Hey Eric, I have been eyeing Koreographer for a while and was looking to potentially pull the trigger during the sale. I would like to know ahead of time, though, whether some of what I'm looking to do will require the base or Professional version, or perhaps neither will be an option in these cases.

    Item 1: I'm looking at doing something similar to the karaoke setup seen in Koreographer and some apps released with it.
    However, rather than highlighting text, I am looking to load in text in sync with audio.
    I looked over the documentation for the karaoke setup and that seemed viable with some changes, but I wanted to try and get confirmation first.

    Item 2: Would there be anything preventing me from using a music file to time out text display, rather than a speech audio file? Perhaps syncing text with certain beats or when a particular instrument is played?

    Thanks for your time.
     
  32. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Thanks for reaching out! I'll address each of the items below:

    Yes, loading in text in sync with audio is entirely possible. All of the demos provided with Koreographer (e.g. Karaoke Demo, Rhythm Game Demo) are examples of what you can do with Koreographer. They provide well-documented, working examples of one possible way that you might implement a feature.

    Basically, you would load up your music file in the Koreography Editor, create a Koreography asset, and then add a KoreographyTrack asset to the Koreography. You would then add raw events (effectively script triggers) to the KoreographyTrack at the locations in the music that feel best for advancing/loading text (e.g. in an RPG speech bubble). You then set your text handling system to listen for those events at runtime (via a simple script callback function) and play the music (with the Koreography). The events that you create work similarly to Unity's Animation Events, except with more flexibility.

    Case-in-point: if you knew exactly what portion of text you wanted to add throughout a piece of music, you could add those portions to individual events with a Text Payload. How you handle this depends entirely upon the requirements of your design.

    There is no need for Koreographer Professional Edition for this feature. If you have access to a MIDI representation of your music (generally exported from the project that made the music in a Digital Audio Workstation [DAW]), then the Professional Edition could be useful as it would provide access to the MIDI Converter (which allows you to import MIDI files and convert data within them into Koreography data).

    As for Item 2: nothing whatsoever. Koreographer does not distinguish between audio based on its contents. The Koreography Editor merely shows you the contents of the audio file as a waveform to help you determine the best location for the events you add to it. With correctly added Tempo information, adding events to beats is a breeze thanks to the "Snap to Beat" and beat subdivision features. This is also fully supported in the base version of Koreographer!

    I hope this helps!
     
  33. TomaTantrum

    TomaTantrum

    Joined:
    Nov 2, 2014
    Posts:
    27
    Thanks Eric that helps a lot.

    Last item:

    So do the automatic event generation features of RMS and FFT audio analysis lean more toward advanced users fine-tuning things, rather than helping the musically challenged to sync things up?

    I haven't been able to clarify those features for myself.
     
    SonicBloomEric likes this.
  34. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    The Audio Analysis features are covered in the Koreographer User's Guide, pages 20-24.

    For Koreographer Professional Edition v1.5.0 (current at time of writing):
    • RMS: This audio analysis method uses RMS calculations to create Koreography Events with configurable Payloads. As RMS is good at finding a relative loudness of an audio stream over time, this data is particularly useful for creating a speaker effect when visualized. Because this is an average of the volume across all sounds in the source audio clip, results are most effective when applied to data containing very few “voices”, such as speech or stem files.
    • FFT: This audio analysis method uses Fast Fourier Transform calculations to create Koreography Events with Spectrum Payloads. The FFT analyzes the audio using the configured settings and outputs a series of frequency spectra over time. The Spectrum Payload data can be used to create a spectrum visualizer, frequency-specific effects, and more. As with RMS analysis, results are most effective when applied to data containing very few “voices”, such as speech or stem files.
    The FFT analysis provides the same data as Unity's built-in AudioSource.GetSpectrumData API, with the following differences:
    • Replay Stability: Unity's built-in API runs the FFT algorithm over whatever samples happen to be playing at the moment of the call, so you are never guaranteed to get exactly the same results between calls. As Koreographer effectively records the FFT, it can be replayed with the same data returned every time. If you have some gameplay element based on the data, then [audio] speed hacks should not affect anything here.
    • Lower CPU Cost: Unity's built-in API runs the FFT algorithm at runtime, whereas Koreographer records the analysis ahead of time. This is a benefit for games where the space-for-CPU tradeoff makes sense.
    In short, the current audio analysis features are intended for advanced users with specific needs. We do want to help "the musically challenged", as you put it ;), do a better/faster job syncing things up. Solving that problem in general, however, is very challenging - analysis that works great for one genre [or track even] may fail completely for another genre [or track]. If you can get the tempo settings in for a given music track then the "Snap to Beat" features should help you lay down a base set of events pretty quickly! (You can also use the 'e' key while the audio plays in the Koreography Editor to add events where you see fit [keep in mind that this also adheres to the "Snap to Beat" feature and you may want to turn it off while in this mode].)

    I hope that this helps shed some light on the features!
     
  35. TomaTantrum

    TomaTantrum

    Joined:
    Nov 2, 2014
    Posts:
    27
    Thanks for all the detailed quick responses!

    That helps a ton.
     
    SonicBloomEric likes this.
  36. TheMoonwalls

    TheMoonwalls

    Joined:
    Nov 18, 2013
    Posts:
    30
    Hi!
    I have two questions about Koreographer:
    1. I have some audio files generated during the game (by a text-to-speech system) - will Koreographer be able to use them in order to synchronize them with subtitles?
    2. Does Koreographer support different language versions? If I synchronize prerecorded English voice-overs with English subtitles, will I be able to swap the subtitles to German (with the voice-overs remaining English), or will I have to prepare a separate synchronization for each version?

    Thank you for your answers!
     
  37. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    This might be possible. It entirely depends on the output of the text-to-speech system that you use and/or how you use it (the design of your system). Requirements:
    1. The audio files are accessible at runtime. If you're using Unity Audio (and not something like Wwise), then this means they must at some point be AudioClip instances. This is necessary so that the Koreographer event system can track playback. If the system is a black-box, one-way DLL/module/etc. then Koreographer would not be able to handle this.
    2. The audio system generates timestamps that map words/syllables to positions in the output audio. This is assuming that you want to use Koreographer to perform something like the Karaoke painting shown in the included Karaoke Demo during subtitle presentation.

    Koreographer is an event and timing system that allows you to send triggers from audio to gameplay (rather than the other way around). Language version handling must be done by you. For instance, any Koreography Event markup encountered in the audio will send an event back to the game. When your code handles it, some internal state on your side would be in charge of deciding which language to show. Depending on the design of your subtitle system (specifically, how granular you wish the presentation timing to be with respect to sentence/clause/word/syllable), you may or may not need to prepare subtitle-language-specific event markup.
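
    For illustration, a rough sketch of what that game-side language handling might look like. The "subtitles" event ID, the subtitle database type, and the UI call are all hypothetical (not part of Koreographer's API); only the `RegisterForEvents` and `GetTextValue` calls come from Koreographer itself:

```csharp
// Hypothetical sketch: one set of Koreography Events drives subtitles,
// while game-side state picks the display language at runtime.
using SonicBloom.Koreo;
using System.Collections.Generic;
using UnityEngine;

public class SubtitleDisplay : MonoBehaviour
{
    // Maps a line key (stored in each event's Text Payload) to
    // per-language subtitle text. Populated elsewhere (illustrative).
    public Dictionary<string, Dictionary<string, string>> subtitleDatabase;
    public string currentLanguage = "de";   // voice over stays English

    void Start()
    {
        // "subtitles" is an assumed Event ID for this example.
        Koreographer.Instance.RegisterForEvents("subtitles", OnSubtitleEvent);
    }

    void OnSubtitleEvent(KoreographyEvent koreoEvent)
    {
        // The event's Text Payload identifies the line; the language
        // choice happens entirely on the game side.
        string lineKey = koreoEvent.GetTextValue();
        ShowSubtitle(subtitleDatabase[lineKey][currentLanguage]);
    }

    void ShowSubtitle(string text) { /* update UI text here */ }
}
```

    With this shape, swapping languages is a one-variable change and the Koreography markup never needs to be redone per language (as long as line-level timing granularity is acceptable).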

    Does that make sense?
     
    TheMoonwalls likes this.
  38. gegagome

    gegagome

    Joined:
    Oct 11, 2012
    Posts:
    331
    Things are great around here using Koreographer.

    I have a question

    I want to use:
    Code (CSharp):
    1. Koreographer.Instance.RegisterForEvents("events_1", FireEventForBass);
    but I need to register more than 5 events and I want to use the same method for each event, passing the eventID and making decisions based on which event was triggered.

    Any ideas?

    UPDATE:

    I tried adding the text payload option and checking against
    Code (CSharp):
    1. koreoEvent.GetTextValue() == "eventName"
    and it worked but I am not sure if that is the only option.

    In terms of performance, is it better to use the text payload or the int one?

    Thanks
     
    Last edited: May 7, 2018
  39. gegagome

    gegagome

    Joined:
    Oct 11, 2012
    Posts:
    331
    Also is there a way to know when the last frame of an event is triggered?

    It is almost like a one off event right after a span event, but without the one off event.
     
  40. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Hmm... it's tough to say without knowing more about the game or system design. Typically, we would suggest having either unique functions to handle events and then dispatch as needed or use Payloads to differentiate event purpose/meaning/impact/etc.

    That is definitely a valid option. Without knowing more about the specific use case, it's tough to suggest a different approach. If it's working, though, then you should be good!

    Int Payload would be more performant. Integer comparison is faster than string comparison. That said, the difference is mostly a memory one: the Text Payload requires more memory. Depending on the density of your KoreographyEvents, though, the differences may not even show up on your Profiler...
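
    As a rough sketch of that Int Payload approach: one handler registered for several Event IDs, branching on the payload value. The Event IDs and handler names below are illustrative, and this assumes the `GetIntValue()` helper mirrors the `GetTextValue()` call shown above (worth verifying against your Koreographer version):

```csharp
// Sketch: register the same callback for multiple Event IDs and
// branch on each event's Int Payload instead of comparing strings.
using SonicBloom.Koreo;
using UnityEngine;

public class MultiEventHandler : MonoBehaviour
{
    void Start()
    {
        // Same handler for every track we care about (IDs illustrative).
        string[] eventIDs = { "events_1", "events_2", "events_3" };
        foreach (string id in eventIDs)
        {
            Koreographer.Instance.RegisterForEvents(id, OnKoreoEvent);
        }
    }

    void OnKoreoEvent(KoreographyEvent koreoEvent)
    {
        // Integer comparison is cheaper than string comparison.
        switch (koreoEvent.GetIntValue())
        {
            case 0: FireEventForBass(); break;
            case 1: FireEventForSnare(); break;
            default: break;
        }
    }

    void FireEventForBass() { /* ... */ }
    void FireEventForSnare() { /* ... */ }
}
```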
     
  41. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Yup! Please see this FAQ entry.
     
  42. gegagome

    gegagome

    Joined:
    Oct 11, 2012
    Posts:
    331
    Perfect!
    Thanks a lot
     
    SonicBloomEric likes this.
  43. TheMoonwalls

    TheMoonwalls

    Joined:
    Nov 18, 2013
    Posts:
    30
    @SonicBloomEric
    Thank you very much for your answers! I went through your documentation, and I just want to confirm that I understand it right - let's say I want to create a karaoke game. I have the music track, the voice track, and the lyrics (as text). If I want to sync the text with the voice track, then I have two options:
    1. Use the professional edition to generate automatic events.
    2. Create the events myself (for each syllable, word, or sentence) - which basically means that I have to create and time each event one by one.

    In other words - if I want to sync subtitles with a voice track, and I don't want to spend several minutes placing events myself, then I should buy the Professional Edition and use automatic event generation. Is this correct?
     
  44. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Unfortunately, option 1 is not actually an option. Whether you get Pro or base, you will have to use Option 2. That said, you could also automate Koreography Event generation by building a script that reads word and timing information from, for example, Google Cloud Speech-to-Text and writes that information as Koreography Events.
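
    A very rough sketch of what such a generator script might look like. The `WordTiming` struct and the timestamp source are hypothetical, and the exact track/payload API names (`KoreographyTrack.AddEvent`, `TextPayload.TextVal`, etc.) should be verified against the Koreographer scripting reference:

```csharp
// Editor-side sketch: convert word timestamps (e.g. exported from a
// speech-to-text service) into Koreography Events on a track.
using SonicBloom.Koreo;

// Illustrative container for one recognized word and its timing.
public struct WordTiming
{
    public string word;
    public double startSeconds;
    public double endSeconds;
}

public static class SubtitleEventBuilder
{
    public static void AddWordEvents(
        KoreographyTrack track, WordTiming[] words, int sampleRate)
    {
        foreach (WordTiming w in words)
        {
            KoreographyEvent evt = new KoreographyEvent();
            // Koreography positions events in audio samples.
            evt.StartSample = (int)(w.startSeconds * sampleRate);
            evt.EndSample = (int)(w.endSeconds * sampleRate);
            evt.Payload = new TextPayload { TextVal = w.word };
            track.AddEvent(evt);
        }
    }
}
```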

    As explained above, unfortunately this is incorrect. Part of the problem is that determining how to present words is an extremely difficult problem to solve. Do you want phoneme or syllable based painting/presentation? Words, perhaps? Or do you want line by line? Does your implementation need line breaks? How do you determine the length of a line? What are the requirements of your specific use case? There are a lot of seemingly small decisions that have a huge impact on the design of a system capable of automating things like this. And that is only in the presentation. Speech detection itself is a huge problem that large corporations (see: Google) spend lots and lots of money on - and even their systems have issues (although things are getting much better)...

    Please see this post for a look at the capabilities and purpose of current Professional Edition Audio Analysis features.

    I hope this is helpful!
     
    TheMoonwalls likes this.
  45. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    Hi again!

    I'm working with the PlayMaker actions and I want to use curve data from the event Payload, so I used the Get Payload Curve action. I can see the curve appear when I hit play, but I don't know how to send the values from it to anything else. Even if I use a Sample Curve action, I don't know how to get the payload's curve or any values from it into other actions that use curves. And because Unity doesn't allow you to just copy and paste keys in the curve, I can't just paste the keys into another action's curve.

    Thanks for your help.
     
  46. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Is your goal to:
    1. Retrieve the entire AnimationCurve to hand off to another Action?
    2. Retrieve the value of the curve at the current time in a Koreographer Span Event?
    If your goal is #2, please use the Get Koreography Event Payload Float Action. From the documentation:
    As for #1, I believe it was once possible to store an entire AnimationCurve in a PlayMaker variable to sample with another Action. It looks like this functionality may have been removed or broken. I've reached out to the PlayMaker developers for input.

    Hope this helps!
     
  47. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    Ah, that 2nd option would work great for this, since I am just controlling the value of a shader parameter.

    The first option where I want to send the entire curve to something else would be useful though. I see many nodes that can use curves, but there is no way to pass previously created curve data to those curves.
     
  48. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Great! Glad to hear it will work!

    According to the PlayMaker developers, they do not currently have a way to pass AnimationCurves around as variables. It's apparently on their TODO list, but they've de-prioritized it. If memory serves, we provided the Get Koreography Event Payload Curve action in anticipation of one day having the ability to get the AnimationCurve stored into a variable. Apologies for the confusion this caused!
     
  49. Dreamcube017

    Dreamcube017

    Joined:
    Dec 20, 2009
    Posts:
    239
    That's ok. The Float worked like a charm! I had to use a value multiplier to get the movement to be more drastic, but it all worked out.

    Hmm... This is pseudo-logic, but couldn't one technically create a list of all the values that make up the curve, pass it to something, and then have that something recreate the curve?

    Just a thought, since that's basically what curves are anyway: a large list of floats.
     
    SonicBloomEric likes this.
  50. SonicBloomEric

    SonicBloomEric

    Joined:
    Sep 11, 2014
    Posts:
    619
    Yup! Floats and tangent values (also floats). :)

    You could do something like that, I guess, but it would require a lot of work to provide all the support for existing PlayMaker actions...
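
    For what it's worth, a rough sketch of that round-trip using only standard Unity APIs (tangent weights and modes aren't captured here, so it's an approximation that suits simple curves):

```csharp
// Sketch: flatten an AnimationCurve into a float list (time, value,
// and tangents per key), pass the list around, then rebuild an
// equivalent curve on the other side.
using System.Collections.Generic;
using UnityEngine;

public static class CurveTransfer
{
    // Four floats per keyframe: time, value, inTangent, outTangent.
    public static List<float> Flatten(AnimationCurve curve)
    {
        var data = new List<float>();
        foreach (Keyframe key in curve.keys)
        {
            data.Add(key.time);
            data.Add(key.value);
            data.Add(key.inTangent);
            data.Add(key.outTangent);
        }
        return data;
    }

    public static AnimationCurve Rebuild(List<float> data)
    {
        var curve = new AnimationCurve();
        for (int i = 0; i < data.Count; i += 4)
        {
            curve.AddKey(new Keyframe(
                data[i], data[i + 1], data[i + 2], data[i + 3]));
        }
        return curve;
    }
}
```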