Discussion in 'Assets and Asset Store' started by thelghome, Apr 30, 2019.
I would like to stream the microphone. How can I encode the microphone audio without playing it back?
There is a MicEncoder for streaming, and an AudioDecoder for decoding the stream on the other side.
Hello, is there any way with the FMETP STREAM tool pack to share the screen from Windows XP or Windows 7 directly to Unity, and then to a HoloLens 2?
Desktop capture on XP and Win7 isn't supported. As far as we know, some third-party sources may provide Win7 desktop capture for Unity; you could then stream the Unity view via FMETP STREAM.
It's my bad for not checking the library well.
Now, I got a new problem.
I would like to stream the desktop. When I connect my notebook to an external monitor and close the notebook lid, it encodes normally.
However, when I run it on the notebook's built-in monitor, I get a warning and nothing is encoded!
The warning message was:
[FMD] 0:\\.\DISPLAY1 => Unsupported.
GameViewEncoder/<RenderTextureRefresh>d__67:MoveNext () (at Assets/FM_ExhibitionToolPack/FMCore/Scripts/Mapper/GameViewEncoder.cs:462)
Could you please help me?
For your case, I assume you have two GPUs. You may try switching between them for monitor usage.
Referring to Microsoft's solution:
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
Ah, yes it is about the running GPU, thank you very much.
However, do you have any idea why, when using the external monitor, it always uses the integrated GPU, but when using the built-in monitor, it always uses the high-performance GPU?
It doesn't always happen; only on some laptops, as far as I know.
It could be some brands' default settings, but I believe you can still force a switch between the integrated and high-performance GPU in your GPU settings panel.
Thanks a lot for your help.
Now I've come up against another issue: the sound from the speaker echoes back to the other side via the microphone. If both sides use headphones there is no problem, but if they use speakers there are infinite echoes. Do you have any idea?
You may disable the AudioEncoder and use the MicEncoder only,
because the AudioEncoder captures all the sound in the game scene, including the other side's mic.
Actually, there is only a MicEncoder. What I mean is: the sound from the speaker reaches the microphone and is encoded by the MicEncoder back to the other side.
If you are testing in the same room without earphones, the echo will happen. That's physics; it's simply how sound travels in reality.
Great asset! I just wanted to post some of my found fixes and additional info for the hybrid GPU problem with Desktop sharing on Windows 10.
As mentioned earlier in this thread, the "[FMD] 0:\\.\DISPLAY1 => Unsupported." error can come from a failure in the Desktop Duplication API. This usually happens when there are multiple GPUs or a hybrid GPU system, as is commonly found in laptops.
The solution, as mentioned, is usually to make sure Windows 10 uses a specific GPU when running your application. You can set this through Windows or through your GPU's control panel, but depending on how you do it, Windows sometimes overrides your choice. I found a Stack Overflow thread and some of its links very helpful:
To summarize what worked for me: go to Control Panel -> System and Security -> System -> Display -> Graphics settings, where you can set a program (like Unity) to use a specific GPU through Windows (you'll have to restart the program for it to take effect). If that works, you can consider doing it programmatically by setting a registry value (disclaimer: I don't actually know how to do that part myself yet; I only know the thread says it's possible, so I figured it'd be helpful to some of you here).
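For anyone who wants to try the registry route: as far as I know, Windows 10 (1803+) stores the per-app GPU choice that the Graphics Settings page writes under `HKCU\Software\Microsoft\DirectX\UserGpuPreferences`. A rough Python sketch of that write, with the key/value layout based on my own reading of public docs, so please verify it on your machine before relying on it:

```python
# Sketch: set the per-app GPU preference Windows 10 keeps in the registry.
# Key layout is an assumption based on public documentation of
# HKCU\Software\Microsoft\DirectX\UserGpuPreferences -- verify before use.
import sys

KEY_PATH = r"Software\Microsoft\DirectX\UserGpuPreferences"

def gpu_preference_entry(exe_path, high_performance=True):
    """Return (key_path, value_name, value_data) for the registry write.

    GpuPreference=1 -> power saving (integrated), 2 -> high performance.
    The value name is the full path to the executable.
    """
    pref = 2 if high_performance else 1
    return KEY_PATH, exe_path, "GpuPreference={};".format(pref)

def apply_gpu_preference(exe_path, high_performance=True):
    """Perform the actual write (Windows only)."""
    if sys.platform != "win32":
        raise OSError("UserGpuPreferences only exists on Windows")
    import winreg  # stdlib, Windows-only module
    key_path, name, data = gpu_preference_entry(exe_path, high_performance)
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, data)
```

The program still has to be restarted afterwards, the same as when changing the setting through the UI.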
Anyways, hope that's helpful to someone!
Hello, I've been using this asset for a couple weeks without issues but I could use some guidance regarding the GameViewEncoder when using the RenderCam mode.
The app I'm working on runs on the Oculus Quest 2. It streams the headset's view to a Windows app. However, the VR user must see some objects that the PC user must not see (some UI elements and 3D objects).
In order to do this, I duplicated the VR camera, set the new camera as the GameViewEncoder's render cam, and set its culling mask to ignore the layer of the objects that should not be streamed to the PC application. So far so good, but this second camera is obviously causing a performance hit because both cameras have to render the environment.
I'm pretty new to Camera stacking but I've figured there must be a way to do this better by using a base camera that renders every layer except one, and using a second Overlay Camera that renders only that one layer. Ideally I would then stream the view of the first Camera before the second Camera's output is rendered in the first cam's stack.
Can this realistically work? How should I go about it?
TL;DR: I want to stream the VR headset view but exclude one layer from the render texture that is sent. Using a cloned camera with a different culling mask as the RenderCam is too taxing performance-wise.
In theory, setting the camera's Clear Flags to Depth Only or Don't Clear may do a similar trick.
From the perspective of composition, as in traditional animation rendering, we can separate the render pass for each layer and compose them into the final rendering.
In Unity, the most related topics would be the render queue, culling masks, and replacement shaders for the camera.
I would say it's still possible, if you want to isolate the default rendering process.
However, doing this in VR would be much different, because the default rendering process is hardware-specific to devices like the Quest 2. The most challenging part would be emulating the whole rendering process and sending the correct 3D VR view back to the Quest 2 when you try to override it.
This would also be a very interesting research topic: how to compose the VR rendering properly and manually.
We may invest some time to research it, but it's not our first priority (it would take a lot of time, and time is money).
Hope the info above gives you some direction.
Hello, first of all I apologize for my ignorance in networking.
I need to connect to a server running a C++ application that sends messages, and receive those messages in my Unity application.
I know its IP, and the port that the server is using.
I actually don't know if it's using SSL (I don't think so).
Can I use your plugin to "just" let my Unity app listen on a specific port at a specific IP and get the message?
Should I open the same port on my PC to get the message?
Is there a software that I can use locally to test that my Unity app is actually receiving the message?
For your situation, you only need a UDP listener in C#. There are plenty of examples on GitHub, Stack Overflow, etc. It's actually just a few lines of code, so you normally don't need any plugin for your case.
But if you want to see how we build the server-clients communication structure, you could try FMNetworkUDP, which is much cheaper than FMETP STREAM.
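For reference, a minimal UDP listener really is only a few lines. Here is an equivalent sketch in Python rather than C#, just to show the shape; the port number 5000 is a placeholder for whatever port your C++ server actually sends to:

```python
# Minimal UDP listener: bind to a port and return the first datagram received.
# The port (5000) is a placeholder -- use the port your server sends to.
import socket

def listen_once(host="0.0.0.0", port=5000, timeout=None):
    """Block until one datagram arrives on (host, port); return its payload."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        if timeout is not None:
            sock.settimeout(timeout)
        data, addr = sock.recvfrom(65535)  # max UDP payload size
        return data
```

For your last question about testing locally: you can fire a test datagram at the listener from another terminal with `nc -u 127.0.0.1 5000`, which is an easy way to confirm your app is actually receiving messages before involving the real server.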
Many thanks! I already own "FM WebSocket"; can I use that one, or is it for another purpose?
I see. FM WebSocket is mainly for WebSocket and socket.io usage.
If your C++ server is running WebSocket, you can still receive basic messages from it.
Hello, I am doing similar tests. On one side I have a PC using MicEncoder and AudioDecoder, playing the sound through headphones; on the other side I have a tablet with MicEncoder and AudioDecoder, playing the sound through a speaker. My issue is that on the PC side, through the headphones, I hear the echo of my voice played from the tablet's speaker. I think it is due to the delay in sending the audio.
If you are testing voice chat between a PC and a tablet in the same room, with only one side on headphones,
you will be in this situation:
People's voice -> PC mic -> network -> tablet speaker -> (echo loops from here) -> PC mic -> network -> tablet speaker -> (loops again)
People's voice -> tablet mic -> network -> PC headphone -> (ended, no echo)
Please keep in mind that playback latency alone won't create an infinite echo loop.
This is exactly the same as two mobile phones on a call in the same room, one with an earphone and one in speaker mode.
Hey there! I'm using FMETP STREAM for my project to stream a camera on a PC to a phone, then send some data from that phone back (Vector3s, floats, and strings). It's working okay and smoothly, but the latency of the video stream is frustrating. It's weird, because the Vector3s, floats, and strings arrive near-instantly, but the video doesn't catch up until a second later.
I'm using the FMNetworkManager/UDP script for this. Do you have any suggestions about what to do?
Do you have screenshots of both your GameViewEncoder and Decoder settings in the Inspector?
It's tricky because I need access to the render texture of the camera, and using the GameViewEncoder overrides that. So I'm using the TextureEncoder so I can still use the render texture for other things.
In any case, here are the Encoders. I've tried messing with all the different settings, and read the documentation a million times, lol, but still not getting great results. I have messed with the FPS and Quality sliders, and while they do increase the quality/fps, they don't help with the latency much.
After I press play, the AsyncGPUReadback Support does work.
If GameViewEncoder is ultimately faster, is there a way to use that but still have access to the camera's RT? Or, is there a way to downscale the texture that is being sent to decrease latency?
I don't think it's a matter of wifi quality either: my wifi is pretty good, and there are only three devices on it at a given time.
Also, here's the code for the other data I'm sending (it runs in Update), if that's helpful.
byte[] SendByteData = new byte[10 * 4]; // px, py, pz, rx, ry, rz, rw, zoom, focus, exposure
int offset = 0;
byte[] bytePX = BitConverter.GetBytes(cameraLocalTransform.transform.position.x);
byte[] bytePY = BitConverter.GetBytes(cameraLocalTransform.transform.position.y);
byte[] bytePZ = BitConverter.GetBytes(cameraLocalTransform.transform.position.z);
byte[] byteRX = BitConverter.GetBytes(cameraLocalTransform.transform.rotation.x);
byte[] byteRY = BitConverter.GetBytes(cameraLocalTransform.transform.rotation.y);
byte[] byteRZ = BitConverter.GetBytes(cameraLocalTransform.transform.rotation.z);
byte[] byteRW = BitConverter.GetBytes(cameraLocalTransform.transform.rotation.w);
byte[] zoomValue = BitConverter.GetBytes(zoomSlider.value);
byte[] focusValue = BitConverter.GetBytes(focusSlider.value);
byte[] exposureValue = BitConverter.GetBytes(exposureSlider.value);
Buffer.BlockCopy(bytePX, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(bytePY, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(bytePZ, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(byteRX, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(byteRY, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(byteRZ, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(byteRW, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(zoomValue, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(focusValue, 0, SendByteData, offset, 4); offset += 4;
Buffer.BlockCopy(exposureValue, 0, SendByteData, offset, 4); offset += 4;
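For reference, the receiving side only has to read those ten 4-byte floats back in the same order. A sketch of the matching decode in Python, assuming the usual little-endian layout of `BitConverter` on common Unity targets, with the field names taken from the comment in the snippet above:

```python
# Sketch of the receive side for the 40-byte buffer above: ten 4-byte
# floats packed in order px,py,pz, rx,ry,rz,rw, zoom,focus,exposure.
# BitConverter on typical x86/ARM targets is little-endian, hence "<10f".
import struct

FIELDS = ("px", "py", "pz", "rx", "ry", "rz", "rw", "zoom", "focus", "exposure")

def pack_camera_state(values):
    """Mirror of the C# sender: ten floats -> 40 bytes."""
    return struct.pack("<10f", *values)

def unpack_camera_state(payload):
    """Decode 40 bytes back into named floats."""
    if len(payload) != 40:
        raise ValueError("expected 40 bytes, got %d" % len(payload))
    return dict(zip(FIELDS, struct.unpack("<10f", payload)))
```

This is just to show the byte layout; on the Unity side you would do the equivalent with `BitConverter.ToSingle(data, offset)` at each 4-byte offset.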
If you are passing in your own render texture, please remember to update the camera render manually every frame, if possible.
Another piece of advice on this issue:
the webcam update rate can drop hugely if your app is running in the background, or if your Editor window is not on top of other applications. It's a Windows 10 behaviour: it tries to slow down applications running in the background.
In the current version, you should be able to acquire the public Texture2D variable "CapturedTexture" via GameViewEncoder.
We are also working on updates that will let you get the render texture via GameViewEncoder.
If it's urgent, you may email us for the beta version: firstname.lastname@example.org
Interesting, that does sound good, but do you think that will decrease the latency?
Are there any settings I should change to help it?
First of all, please check whether our demo scene works smoothly or not. We have to find out what causes the issue, as the TextureEncoder is very similar to the GameViewEncoder.
The second possibility is that your Unity Editor or PC application is running in the background on your system.
By default, the webcam update frame rate drops hugely to save processing power when your application is not actively on top of other applications.
Okay, so I just checked extensively, and the example scenes work really well! I didn't even know you could send video from both the client and the server! The framerate of the examples is much better, with lower latency. I don't know what the issue could be.
Do you mean to say that if I don't pass in my own and just use the GameViewEncoder it doesn't do this?
The GameViewEncoder will update the render texture according to the StreamFPS rate.
Right, but does that mean that giving TextureEncoder my own render texture has more overhead than GameViewEncoder?
Thanks for your help!
We also tested the TextureEncoder on our side, without any delay issue. It's almost 100% the same as GameViewEncoder.
1. Create a new Camera in the scene.
2. Create a render texture in the Assets folder.
3. Assign the render texture to the camera.
4. Assign the render texture to the TextureEncoder input.
5. Pass the encoder byte result to GameViewDecoder.
6. Display the result on a RawImage via a Canvas locally.
Thus, I don't think there is any overhead issue. In your situation, most likely you were updating your render camera manually instead of using the default updates, which means your render texture wasn't updated on time for some reason. The encoding cycle should be correct.
Interesting, okay. I will give it a try, thank you!!
I'm curious: is "Send to Server" = "Unicast"?
"Send to others" = "Multicast"?
"Send to All" = "Broadcast"?
I want to send resolution and framerate to the decoder; is that possible?
Technically, it's not. FMNetworkUDP stores all connected IPs and handles those send types. You can also send to a specific IP via SendToTarget().
The simple way would be sending a string to the others, which they can read and decode as your customised info.
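That "send a string, decode it on the other side" suggestion could look like the sketch below. The `key=value;` message format here is invented purely for illustration; it's not part of FMETP, so pick whatever convention suits your project:

```python
# Illustration of sending custom info (resolution + framerate) as a string.
# The "res=WxH;fps=N" format is made up for this example.
def encode_stream_info(width, height, fps):
    """Build the message string the sender would transmit."""
    return "res={}x{};fps={}".format(width, height, fps)

def decode_stream_info(message):
    """Parse the message back into named integer fields on the receiver."""
    info = {}
    for part in message.split(";"):
        key, _, value = part.partition("=")
        info[key] = value
    width, _, height = info["res"].partition("x")
    return {"width": int(width), "height": int(height), "fps": int(info["fps"])}
```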
But I didn't type any client IP, and the data still arrives at the client correctly. Is this technology called UDP broadcast?
We have the auto network discovery feature enabled by default, so you don't have to worry about typing any IPs on a local network (as long as no firewall is blocking it).
Network discovery does involve UDP broadcast. For sending data, we don't have to use broadcast or multicast, because we already know all the connected clients' IPs.
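A rough sketch of that discovery pattern in Python: the server announces itself on the broadcast address, clients note the sender's IP from the announce, and everything afterwards is plain unicast. The port number and announce tag are placeholders, not FMETP's actual protocol:

```python
# Sketch of UDP-broadcast discovery: server shouts on the subnet, client
# learns the server IP from whoever sent the announce. Port/tag are made up.
import socket

DISCOVERY_PORT = 51000      # placeholder port
ANNOUNCE = b"FM-DISCOVER"   # placeholder announce tag

def announce(sock, broadcast_ip="255.255.255.255"):
    """Server side: broadcast the announce tag to the whole subnet."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(ANNOUNCE, (broadcast_ip, DISCOVERY_PORT))

def wait_for_server(timeout=5.0):
    """Client side: return the server IP from the first announce heard."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", DISCOVERY_PORT))
        sock.settimeout(timeout)
        while True:
            data, (ip, _port) = sock.recvfrom(1024)
            if data == ANNOUNCE:
                return ip
```

This also shows why a firewall that drops broadcast packets (or a PC with two network cards, as discussed later in this thread) breaks auto-discovery while direct IP entry can still work.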
FMETP STREAM V2 is available now.
We have added some new experimental features, plus Linux support, optimisations, and bug fixes.
Since there have been many major changes since the first release in 2019, we decided to release an upgraded version. It also supports our long-term development.
You will get a free upgrade if you purchased our original version within the last 3 months (as suggested by the Asset Store guide).
Other owners can still get 50% off the upgrade.
Thanks again to everyone who enjoys our product; we really appreciate your feedback and support over the past few years!
V2.0: Can't connect between iOS 14.7.1 and PC.
Network information below:
My PC created the wifi hotspot; the PC network card's IP is 172.27.35.1 and the iPad's IP is 172.27.35.2.
The PC side in-game displays another network card's IP, 192.168.1.100.
The iPad side did not pop up any network permission window; the iPad side in-game displays the IP 172.27.35.2.
Any ideas for me? Please help.
Since you have two local networks/wifi adapters on your PC, it may detect the wrong one and broadcast the wrong IP to clients.
In this case, auto network discovery may not work for you. You have to either disable the unused network card completely, or input your server IP on the clients manually.
In general, enabling multiple network cards on the same server PC isn't recommended.
*The issue is very obvious, too:
when you are hosting the server on 172.27.35.x, your server broadcasts the wrong IP info (192.168.1.100) to your clients.
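One quick way to check which local IP the OS would actually route through toward a given client is a connected UDP socket. This is a generic diagnostic sketch, not part of FMETP, and the client address below is just an example:

```python
# Check which local IP the routing table selects toward a given peer.
# Useful when a PC has several network cards and discovery broadcasts
# the wrong one. No packets are sent: connect() on a UDP socket only
# fixes the route and local address.
import socket

def outgoing_ip(peer_ip, peer_port=9):
    """Return the local IP the OS would use to reach peer_ip."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.connect((peer_ip, peer_port))
        return sock.getsockname()[0]
```

For example, `outgoing_ip("172.27.35.2")` run on the PC above should return 172.27.35.1; if it returns 192.168.1.100, the wrong adapter is being picked for that route.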
I can't ask users to disable their network card, and inputting the server IP still doesn't connect.
I think the problem might be the iOS local network permission,
because when I use https://github.com/bartlomiejwolk/PingTool, pinging the iOS device's own IP times out, and pinging the PC IP times out too.
We just tested on iOS 14.7.1 and a Win10 PC; both server and client work fine.
Both devices were connected to the same wifi network.
By default in our latest version, our example scene will pop up a permission request on iOS devices.
If you denied it once, you have to turn it on manually in the iOS app system settings.
If you can share screenshots of our example scene (iOS & PC), it may help us investigate your case.
As you mentioned, your wifi is created and shared from the PC. This is a very tricky setup.
Some insecure UDP packets could be blocked. It's a firewall-related issue if you can't even ping directly.
(From my point of view, it's most likely an issue with your system's wifi sharing settings. You may test on a standard wifi environment for comparison.)
Thanks for your reply.
After many build tests, I think I found the following:
Hotspot wifi, FM broadcast enabled, app remote debug console in Xcode: fail. Ping succeeds.
Hotspot wifi, FM broadcast disabled, app remote debug console in Xcode: fail. Ping fails.
Standard wifi, FM broadcast enabled, app remote debug console in Xcode: success. Ping succeeds.
Standard wifi, FM broadcast disabled, app remote debug console in Xcode: fail. Ping fails.
The hotspot wifi causes a problem,
and the build config causes a problem too (I use iOS Builder for Unity on Windows; it seems like a bug in that program).
When I build with Xcode, both the hotspot problem and the ping problem disappear!
iOS Builder via Win10 may always cause bugs.
If you don't have a Mac machine, running a virtual machine with macOS would be an alternative,
or it's also worth having a real macOS environment, like an M1 MacBook Air or M1 iMac, etc.
Woo Hoo V2. Happy to pay the 50% upgrade fee. This asset has been amazing.
Now, am I going to break everything by updating it in my project, or am I pretty safe?
And does WebSocket have a Large File Sender option? (A new feature, maybe?)
Or do you know of an asset I could use?
Also, regarding the UDP Large File Sender:
do you recommend an asset that can work with it for directory/file viewing for transfers?
I will have a look anyway to see what is around.
Glad to know that our tool works for your projects.
Some folder names might have changed, so we recommend keeping a backup of your project first.
Then, please try removing the V1 assets completely from your project before importing the V2 assets.
WebSocket runs over TCP, which doesn't need a Large File Sender, as the protocol is reliable.
A file explorer requires some native plugins; we don't have a specific recommendation yet.
I always back up before I make changes, lol.
I have found a few assets. I'll have a play.
I updated my project to the new version and everything looks ok.
I'm casting the render cam from a Quest 2 headset to a pc program via Web Sockets.
It worked okay before, but now I am only getting a frame here or there.
The PC program is the server and the Quest is the Client.
I tested with the moving-cube demo program in the asset (e.g. as the server) and it worked okay.
Were there any changes made to the GameViewEncoder that may be causing this?
We tested the Quest 2 and it should be compatible.
Do you have any screenshots of the frame issue and your GameViewEncoder settings? That will help us understand the issue.
You may also try resetting the component in your existing project. Sometimes it's just a missing shader or wrong variables carried over from the old (V1) project.
Yes it was working before.
It's like it sends a single frame every so often.
I'll delete and re-add the RenderCam and see how I go.
Found the issue. There was a glitch we had before where, once connected, I would have to disconnect and reconnect the server side for the casting to show running video.
It was due to the GameViewEncoder/Decoder being enabled before the connection was established.
All working fine now.