I recently started using Unity Render Streaming and want to share some thoughts and findings, because a lot of the issues/questions on the GitHub issue page and here in the forum are about similar topics and get posted again and again. Maybe my code changes (see next post) will help in one case or another.

First off, I'm really happy about Render Streaming. Having this functionality is really cool and opens up new use cases for us. It's already working pretty well, even though some key features for real-world use are still missing. The most important one is improved and more stable hardware encoding, because software encoding is not really feasible for a high-resolution application running at 60 fps. Not so much a feature, but a guide/sample on how to integrate an SFU/media server would also be necessary; since we don't have experience with that, it would probably take us a lot of time to work out on our own.

What also took quite some time (more than feels necessary) was finding out about issues and workarounds for common problems. After reading the docs of the WebRTC and Render Streaming packages, I still had to read through all the GitHub issues and the forum to find certain information. Especially troublesome was finding the right information on how to improve the quality of the video stream, which is a bit annoying since video streaming is what it's all about, and there are quite a few GitHub issues about bad video quality. There is a short explanation about bitrate in the WebRTC docs, but this should definitely be shown somewhere in the samples of the Render Streaming package, because out of the box the hardware-encoded stream looks pretty bad compared to the software-encoded one. So it would be good to know what the default parameters for the encoders are, which ones can be changed, and how (see the sketch below for the kind of thing I mean). The fact that information and issues are split up between the WebRTC and Render Streaming packages makes it even a bit more difficult to find anything. Maybe they could link to each other more often on certain topics.
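For illustration, here is a minimal sketch of the bitrate approach the WebRTC package docs describe: adjusting the encoding parameters on the video RTCRtpSender. The helper class name, the 10 Mbit/s cap and the 60 fps cap are my own example values, not recommendations, and how you get hold of the sender depends on which Render Streaming sample/signaling setup you use.

```csharp
using Unity.WebRTC;
using UnityEngine;

// Sketch only: raise the max bitrate (and optionally framerate) on an existing
// video sender. The sender reference is assumed to come from your peer
// connection after the video track has been added; values are examples.
public static class SenderBitrateTweak
{
    public static void Apply(RTCRtpSender sender, ulong maxBitrateBps = 10_000_000, uint maxFramerate = 60)
    {
        var parameters = sender.GetParameters();
        foreach (var encoding in parameters.encodings)
        {
            encoding.maxBitrate = maxBitrateBps;   // cap in bits per second
            encoding.maxFramerate = maxFramerate;  // optional framerate cap
        }

        // Apply the modified encoding parameters to the running sender.
        var error = sender.SetParameters(parameters);
        if (error.errorType != RTCErrorType.None)
            Debug.LogError($"SetParameters failed: {error.errorType}");
    }
}
```

With a higher maxBitrate the hardware-encoded stream looked a lot closer to the software-encoded one for us; exactly which value makes sense obviously depends on resolution, framerate and the available bandwidth.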