Forum Replies Created
Syphoner isn’t a project we support. We build the underlying framework Syphoner uses. Please hit up http://www.sigmasix.ch/syphoner/ for help 🙂
Sorry we can't be more help. Might be related to App Nap and the program quitting in the background? Maybe Get Info on the application file in /Applications (Chrome) and in the Get Info box disable App Nap?
Just a guess.
How are you measuring latency?
Turn your servers off (stop publishing) but keep your cameras running / rendering to texture, and see if that changes anything with respect to performance. There is some slight overhead due to how Syphon copies textures, so to speak, but I imagine rendering 6x is the main overhead.
Are you running 6 servers? You are then rendering 6 cameras… *THAT'S* your performance issue.
Have one server, and switch cameras if you can, assuming that's what is going on.
Still the same basic gotcha.
Linux requires a collaborative effort between the kernel developers and the hardware/GPU driver teams, and an agreed-upon API to access GPU memory across process boundaries.
Apple was able to do that for us via IOSurface. I believe it required specific collaboration with NVIDIA, Intel, ATI, Imgtec, and other GPU vendors in Apple's ecosystem to ensure it works.
I don't believe Linux/the community has this capability, or a need / desire for it.
That's strange. I've never heard of that.
I’d post a bug on github about either incompatibility or what not with Syphon and the video libraries.
As for running slower in P2:
Is the app hidden in the background? Perhaps App Nap is the culprit? Can you disable it?
Also, you can create your texture once (check if it's 0, then make it) and re-use it, theoretically. Or gen / delete. Every glGen needs a corresponding glDelete at some point, or it's a leak.
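The check-if-zero-then-create pattern can be sketched generically. A minimal sketch in Python (the `gen`/`delete` callables here stand in for glGenTextures/glDeleteTextures; this is an illustration of the pattern, not real GL code):

```python
class TextureSlot:
    """Create-once, re-use-every-frame resource holder.

    0 is OpenGL's "no texture" name, so it doubles as the
    not-yet-created sentinel, as in the advice above.
    """

    def __init__(self, gen, delete):
        self._gen = gen        # e.g. a wrapper around glGenTextures
        self._delete = delete  # e.g. a wrapper around glDeleteTextures
        self.name = 0          # 0 == not created yet

    def acquire(self):
        if self.name == 0:     # create once...
            self.name = self._gen()
        return self.name       # ...then re-use on every frame

    def release(self):
        if self.name != 0:     # every gen needs a matching delete
            self._delete(self.name)
            self.name = 0
```

The point is that `acquire()` is safe to call every frame: only the first call actually allocates, and `release()` keeps the gen/delete counts balanced so nothing leaks.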
I think you’re fine to specify the pixel format to 32 BGRA. You’d have to manually handle the native case and switch on the pixel format of the vended pixel buffer and draw each case, which is a royal pain in the ass.
I don’t think you need to.
And I think you can remove the flush. As for the takeUnretainedValue – I'm honestly not sufficiently knowledgeable about Swift's memory semantics to know.
Just try removing it, and see what happens. 🙂
Also I don’t think you need to manually specify the IOSurface increment/decrement use count. Any reason you are doing that?
I don't use Swift, but I think I see some errors in there:
*) Don't specify pixel buffer attributes and AVFoundation will vend frames in a native fast-path format. Requesting a buffer format will occasionally add overhead as, say, RGB frames need to be sent as YUV.
*) You probably don't want a planar format, but something like kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_422YpCbCr8_yuvs or kCVPixelFormatType_32BGRA. The 420 format is usually for DV chroma-subsampled video and will look like shit 🙂 32 BGRA is the native fast-path RGBA upload on Intel x86/x64 machines for 4:4:4-sampled RGB.
*) kCVPixelBufferIOSurfacePropertiesKey is a dictionary, not a string
*) you appear to be leaking textures every frame
*) Your texture format and your buffer format do not match. You're submitting textures as BGRA, but requesting them as 420. Use kCVPixelFormatType_32BGRA for your format type.
Otherwise it seems to be on the right path!
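To put rough numbers on the format trade-off, here's a back-of-envelope sketch. The bytes-per-pixel figures are standard chroma-subsampling math (packed 4:2:2 = 2 bytes/px, 32-bit BGRA = 4 bytes/px, planar 4:2:0 = 1.5 bytes/px), not something from the original post:

```python
def bytes_per_frame(width, height, fmt):
    """Rough per-frame size for the pixel formats discussed above.

    '422'  : packed YpCbCr, e.g. kCVPixelFormatType_422YpCbCr8
    'bgra' : kCVPixelFormatType_32BGRA, full 4:4:4 RGB plus alpha
    '420'  : planar YpCbCr, chroma subsampled in both directions
    """
    bytes_per_pixel = {"422": 2.0, "bgra": 4.0, "420": 1.5}[fmt]
    return int(width * height * bytes_per_pixel)

# At 1080p, BGRA frames are twice the size of 4:2:2 frames:
# bytes_per_frame(1920, 1080, "bgra") == 8294400
# bytes_per_frame(1920, 1080, "422")  == 4147200
```

So 4:2:0 is the cheapest but looks the worst for graphics, 4:2:2 halves the chroma cost, and BGRA pays full freight for exact RGB.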
It's likely that your CPU can't keep up with encoding AIC that fast. It's a lot to ask.
Two things to try.
a) Run Blackmagic Disk Speed Test and see if you can write uncompressed 4K. Note the rough average of your disk in MB/s.
Uncompressed 4K runs roughly 620 MB/s (3180×2160 @ 30 Hz, RGB 4:4:4).
b) If your disk isn't fast enough, try other codecs that are not as compressed, like ProRes 4444 or 422 HQ, or Animation (depending on your disk throughput).
This is ultimately about balancing how much disk throughput you have with how much encoding your CPU can handle in 1/30th of a second or so.
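The arithmetic behind that balance is simple to sketch. This uses the 3180×2160 figure quoted above (standard UHD would be 3840×2160, which comes out higher) and assumes 8-bit RGB 4:4:4 at 3 bytes per pixel:

```python
def uncompressed_mb_per_sec(width, height, fps, bytes_per_pixel=3):
    """Disk throughput needed for uncompressed video, in MB/s (10^6 bytes).

    bytes_per_pixel=3 corresponds to 8-bit RGB 4:4:4 as in the post.
    """
    return width * height * bytes_per_pixel * fps / 1e6

# The ~620 MB/s figure quoted above:
# uncompressed_mb_per_sec(3180, 2160, 30) -> ~618 MB/s
# UHD 3840x2160 would need:
# uncompressed_mb_per_sec(3840, 2160, 30) -> ~746 MB/s
```

If the Speed Test number your disk reports is below the result of this formula, uncompressed recording will drop frames and you need a lighter codec (or a faster disk).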
You don’t need to install anything.
You need to read the manuals of the software you purchased.
It's built in. Enabling Syphon in Modul8 is in a menu.
Enabling a Syphon input in MadMapper is clicking the source that automatically pops up in the menu.
Apologies for the delayed reply.
I’ve never heard of this issue to be honest, but if Syphon is taking a second to record it certainly sounds like an issue.
* What operating system and machine are you on?
* Do you have any additional audio hardware like capture devices?
* What bitrate audio are you recording?
* What bitrate video are you recording? (resolution and frame rate)
* What type of disk are you using?
* Ensure you are on the latest version of Soundflower
Ensure that your drive is fast enough, that you are using a 'normal' sample rate around 44.1 or 48 kHz, and that you are using per-frame or no compression like Photo JPEG, Uncompressed, or ProRes. Avoid h.264. Same for audio: use LinearPCM. Those options tend to use more disk space and require faster disks, but give you better results.
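For a sense of scale, uncompressed LinearPCM audio is tiny next to the video stream. A quick sketch, assuming 16-bit stereo (the channel count and bit depth are assumptions for illustration, not from the post):

```python
def pcm_bytes_per_sec(sample_rate, channels=2, bits=16):
    """Data rate of uncompressed LinearPCM audio in bytes per second."""
    return sample_rate * channels * (bits // 8)

# 16-bit stereo at 44.1 kHz is ~0.18 MB/s -- negligible next to video:
# pcm_bytes_per_sec(44100) == 176400
# pcm_bytes_per_sec(48000) == 192000
```

So LinearPCM costs you well under 1 MB/s; the disk pressure in a recording setup like this comes almost entirely from the video codec.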
I'm unsure where the second-or-so delay is coming from. Let us know if the above is helpful.
Please use the search – there are recent threads on Processing 3.0 that have the info you need 🙂
So, this is a little tricky.
SpriteKit can, as of El Capitan, use Metal, not OpenGL, as a rendering engine.
Syphon does not (yet?) support Metal.
SpriteKit typically defaults to using OpenGL Core Profile behind the scenes.
Syphon does not yet support Core Profile in the main release. There are however forks of Syphon that can get you there.
As for integration, you are likely going to have to make a custom SCNNode or SKNode that makes its own renderer, and implement Syphon there. It should likely be a root node and render its children prior to invoking some sort of texture copy phase to nab the contents of the current rendered buffer.
Something perhaps like the QC plugin does.
Thanks! It's hard for us to keep track of who releases what. Can you link me to icons with alpha for me to include?
Yup, totally doable.
* Video Glide compatible drivers: https://www.echofx.com/videoglide.html
* High quality pro capture gear from Aja and BlackMagic
And other solutions. All work, and all can be seen by compatible Syphon software like VDMX, Jitter, QC, Max, Resolume, etc.
Totally doable 🙂
No video – the entire show was realtime generative 🙂
Hm. That crash doesn't indicate that v002 Open Kinect is the cause of the error:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x00007fff82ea2286 __pthread_kill + 10
1 libsystem_c.dylib 0x00007fff8c0afb53 abort + 129
2 libsystem_malloc.dylib 0x00007fff9046bf3c nanozone_error + 524
3 libsystem_malloc.dylib 0x00007fff90459a5c _nano_malloc_check_clear + 370
4 libsystem_malloc.dylib 0x00007fff9045bb48 nano_calloc + 73
5 libsystem_malloc.dylib 0x00007fff9045bacb malloc_zone_calloc + 78
I'd be curious if this is an incompatibility issue. Hrm…
I put it on github not to walk away from it, but because packaging shit up is a pain in the ass. Anyway.
v002 Open Kinect does not require the install of 3rd-party libraries. I don't use Synapse or any of that other stuff; perhaps there is some weird incompatibility? I bundle all the libraries in with the plugin and have had no issues – it's toured for Squarepusher, for Childish Gambino, etc., with no problems, but those systems didn't have other shit installed.
Not saying there isn't an issue. I'll look at the crash log in a bit. Thanks for the info!