Working Limit of Syphon Instances


  • This topic has 4 replies, 3 voices, and was last updated 6 years ago by vade.

    How should I go about determining when there are too many instances of Syphon? That is, how many Syphon connections does it take before problems appear?

    What is the limiting factor: number of instances? Resolution of video? Complexity of operations inside each program?

    I would like to get a better grasp of what kind of working limit there is when using many (3+) instances of Syphon in different programs.

    In my particular scenario I am taking a video feed from this PS3 Eye app, then doing some transformations and scaling in Quartz Composer, then out to Processing for some OpenCV, and finally into MadMapper.

    Thank you for any insight,


    The answer is really “when you see problems” – do you?

    The complexity of operations you perform will cause problems before the number of Syphon steps does. Try running each stage in isolation and note resource usage. Computer vision in particular is resource-hungry.

    For what it’s worth, Syphon’s main cost is a chunk of video memory for each server – but your problem probably isn’t Syphon, but what you’re doing at each step.
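    To get a rough sense of the per-server cost, here is a back-of-the-envelope sketch, assuming one RGBA8 (4 bytes/pixel) frame texture per server – my assumption for illustration, not a figure from the Syphon docs:

```python
# Rough VRAM cost of one Syphon server's frame texture,
# assuming RGBA8 (4 bytes per pixel) at 1080p.
width, height, bytes_per_pixel = 1920, 1080, 4
mb = width * height * bytes_per_pixel / (1024 * 1024)
print(f"{mb:.1f} MB per frame")  # prints "7.9 MB per frame"
```

    So even a handful of 1080p servers costs tens of megabytes of VRAM – small next to what the processing at each stage consumes.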


    Yea, the readback from the GPU to OpenCV, and then the added latency of doing the computer vision work, is going to massively drop your frame rate.

    Syphon is optimized for running on the GPU – but your pipeline is ping-ponging between the camera (CPU), Syphon (GPU), Quartz Composer (GPU), Processing’s OpenCV (CPU), and finally MadMapper (GPU).

    Now, if all of those steps were on the GPU you’d likely have few problems (I’m aware of users running content for shows with more servers than that, in realtime, sans issue).

    Some suggestions:

    See if you can run the camera at a lower resolution directly – then you can remove the Quartz Composer step that resizes frames.

    Run an openFrameworks OpenCV app instead, which gives you much more control over the camera -> OpenCV path, resizing, and hinting to OpenCV which frames to use. It’s more work (and if you don’t program, yea, it’s likely out of the question) and will take longer to build.

    You could cut it down to:

    Camera (CPU) -> openFrameworks converting to OpenCV (CPU) -> Syphon out to whatever needs it, staying on the GPU from there.

    Notice there is no ping-ponging happening there.

    Optimizing video can be difficult, especially when you don’t always have control of the black boxes you run it through.


    Thanks for the words.

    So what I’m hearing is that it’s more about the complexity and workload in each instance, not the Syphon linking itself?

    I like the segmented workflow because in Quartz Composer I can easily change video feeds and patches without recompiling – plus I am fluent in it. However, I can see that some of the functions I am using in QC should be integrated into another stage.

    Using OpenCV, however, is somewhat necessary, but I am also going to explore other tracking environments, such as the other blob libraries for Processing. Maybe these are less intensive? I really just need the contours, centroid, and ID for each blob. I like OpenCV for Processing because it’s well documented and powerful – but perhaps too resource-intensive, as you mentioned.

    I’ll be adding some boids on top of all this, and the last time I did this install it was running slow, so I am trying to scout out the landscape before a few weeks of solid coding.
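    The contours/centroid/ID requirement above is a small amount of data per frame. As a toy illustration, here is a naive connected-components pass in Python/NumPy that produces exactly that per-blob record – in practice you would use OpenCV’s findContours and moments; this sketch (names are mine) just shows how little information the tracker actually needs:

```python
import numpy as np

def find_blobs(mask):
    """Label connected regions in a binary mask (4-connectivity) and
    return a per-blob record: ID, centroid (x, y), and pixel area.
    A toy stand-in for OpenCV's findContours/moments."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    next_id = 0
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                next_id += 1
                labels[y, x] = next_id
                # Flood fill with an explicit stack, collecting pixels.
                stack, pixels = [(y, x)], []
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_id
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                blobs.append({"id": next_id,
                              "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
                              "area": len(pixels)})
    return blobs
```

    A lighter blob library that only computes this much (rather than OpenCV’s full machinery) may well be cheaper per frame.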


    Another thing a lot of people don’t realize is that much of OpenCV’s functionality is still usable at lower resolutions. Try scaling your video down to 320×240-ish or less – way less CPU work, and way less data to shunt between the CPU and GPU.

    You can do analysis on downscaled frames, and then scale the values back up for tracking on top of the full res original image from the camera.
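    A minimal sketch of that downscale-analyse-rescale idea in Python/NumPy (the block-average downscale stands in for cv2.resize; function names are mine for illustration):

```python
import numpy as np

def downscale(frame, factor):
    """Block-average downscale: a stand-in for cv2.resize. Frame height
    and width are assumed divisible by `factor`."""
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def to_full_res(point_small, factor):
    """Map an (x, y) measured on the small frame back onto the original,
    landing at the centre of the corresponding block."""
    x, y = point_small
    return (x * factor + factor / 2, y * factor + factor / 2)

# Analyse a 640x480 frame at 160x120 (factor 4): find the brightest
# pixel in the small frame, then track/draw at full resolution.
frame = np.zeros((480, 640))
frame[200:208, 300:308] = 255           # a bright 8x8 target
small = downscale(frame, 4)             # 120x160 analysis frame
y_s, x_s = np.unravel_index(np.argmax(small), small.shape)
x_full, y_full = to_full_res((x_s, y_s), 4)  # coordinates on the 640x480 original
```

    The analysis touches 1/16th of the pixels, but the tracked coordinates still land on (and can be drawn over) the full-resolution camera image.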
