Mirroring texture horizontally and vertically


Viewing 6 posts - 1 through 6 (of 6 total)

    I am developing an application that acts as a Syphon server. Based on user input I need to mirror the texture horizontally and/or vertically. Does Syphon support this without my application doing the mirroring at the pixel level? I noticed the SyphonServer publishFrameTexture: method has a flipped parameter, but that only flips it vertically(?).


    This is easily done in OpenGL via the server’s

    bindToDrawFrameOfSize: and unbindAndPublish

    methods (see http://syphon.v002.info/FrameworkDocumentation/interface_syphon_server.html#ab5da335ea3e45903eceae51adb363240 )

    What this does is: the Syphon framework attaches an internally used and managed frame buffer object, which is attached to the ‘texture’ / surface that it will share. You are then responsible for drawing your scene with OpenGL as normal (assuming you are drawing directly to the frame buffer object we’ve attached) – you then call unbindAndPublish on the server, and any drawing you’ve done will be sent off via Syphon.

    You can then get the SyphonImage from the server you just made and draw it to your own OpenGL view, like you would any texture.

    Essentially you would:

    * Set up your OpenGL context.
    * Set up your Syphon server.
    * Set up your resources.

    In your render loop:

    * Attach your context.

    * Call bindToDrawFrameOfSize: on the server (you are now drawing “into” Syphon).

    * Draw your OpenGL content, modifying the vertices and texture coordinates of your texture to achieve the desired effect – or use a GLSL shader, or any other appropriate method.

    * Call unbindAndPublish (you’ve now notified any listening clients that your drawing is done and is ready to be seen elsewhere). This unbinds the frame buffer object and synchronizes the contents of the shared texture to other applications.
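The mirroring asked about at the top of the thread fits into that draw step: you flip by changing the texture coordinates you draw with, not by touching pixels. A minimal sketch in plain C (the corner ordering and the function name are mine for illustration, not part of Syphon):

```c
#include <stdbool.h>

/* Texture coordinates for a quad, one (u, v) pair per corner:
   bottom-left, bottom-right, top-right, top-left. */
typedef struct { float u, v; } TexCoord;

/* Fill `tc` with coordinates that sample the texture mirrored
   horizontally and/or vertically, as requested. */
static void mirrored_tex_coords(TexCoord tc[4], bool flip_h, bool flip_v)
{
    float u0 = flip_h ? 1.0f : 0.0f, u1 = 1.0f - u0;
    float v0 = flip_v ? 1.0f : 0.0f, v1 = 1.0f - v0;
    tc[0] = (TexCoord){u0, v0}; /* bottom-left  */
    tc[1] = (TexCoord){u1, v0}; /* bottom-right */
    tc[2] = (TexCoord){u1, v1}; /* top-right    */
    tc[3] = (TexCoord){u0, v1}; /* top-left     */
}
```

With both flags false this is the usual identity mapping; either flag swaps the corresponding axis, and combining them rotates the image 180°.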

    If you want, you may now:

    * get the SyphonImage from the server (it’s the most recent thing you’ve drawn above)
    * draw that image as normal (no effects) to your own scene for a live preview of your application’s output as it will be seen by others, etc.
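Put together, the loop above might look something like this in Objective-C (a sketch under assumptions: `myContext`, `server`, and `drawMyScene` are placeholders from your own code, and the frame size is arbitrary):

```objc
// Render loop sketch. `server` is the SyphonServer created at setup time.
[myContext makeCurrentContext];                  // attach your GL context

if ([server bindToDrawFrameOfSize:NSMakeSize(1280, 720)]) {
    drawMyScene();                               // you are now drawing "into" Syphon
    [server unbindAndPublish];                   // frame is published to clients
}

// Optional live preview of the frame you just published:
SyphonImage *image = [server newFrameImage];
// ...bind image.textureName (a GL_TEXTURE_RECTANGLE_EXT texture) and draw it...
[image release];
```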

    I hope that helps.

    We highly suggest avoiding pixel readback to the CPU – it defeats the entire purpose of using Syphon to begin with: keeping things fast on the GPU, where it belongs.

    One thing to note: we try to do a good job of isolating OpenGL state before and after our OpenGL calls into your context. Likewise, ensure you leave things as they were if you’ve altered state between the bind and unbind calls on the server.
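On a legacy (fixed-function) profile, the attribute stacks are one easy way to honour that – a sketch, and note that pushing everything is convenient but not free, so save only the bits you actually change if cost matters:

```c
/* Between bindToDrawFrameOfSize: and unbindAndPublish, restore any GL
   state you alter. The attribute stacks make this mechanical: */
glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);

/* ... change blend modes, enable textures, bind shaders, draw ... */

glPopClientAttrib();
glPopAttrib();   /* state is back the way Syphon (and you) left it */
```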


    Thanks for the reply vade. Being fairly new to OpenGL and Syphon, all this information takes some time to digest and implement. I think I got what you mean; let’s see how it works in practice.

    What my application actually does is receive compressed video, decode it, send it to Syphon and display it. For decoding I use FFMPEG, but to speed things up I would like to do the colorspace conversion from YUV to RGB using an OpenGL fragment shader. I’m not sure if I can do that with bindToDrawFrameOfSize: or if I need to do pixel readback. Any comments on that?


    Just bind the shader while you’re in the FBO and unbind it afterwards. There’s no reason at all to hit the CPU for any of what you need.
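For reference, the per-pixel arithmetic such a fragment shader performs is a small matrix multiply. Here it is in plain C, using the video-range BT.601 coefficients (an assumption – check what your decoder actually emits; the function names are mine):

```c
/* Video-range BT.601 Y'CbCr -> RGB, one pixel, 8-bit in and out.
   This is the same arithmetic a GLSL fragment shader would do per fragment. */
static unsigned char clamp255(double x)
{
    return x < 0 ? 0 : x > 255 ? 255 : (unsigned char)(x + 0.5);
}

static void ycbcr_to_rgb(unsigned char y, unsigned char cb, unsigned char cr,
                         unsigned char *r, unsigned char *g, unsigned char *b)
{
    double yf = 1.164 * (y - 16);                 /* expand 16..235 range  */
    *r = clamp255(yf + 1.596 * (cr - 128));
    *g = clamp255(yf - 0.392 * (cb - 128) - 0.813 * (cr - 128));
    *b = clamp255(yf + 2.017 * (cb - 128));
}
```

In a shader the Cb/Cr samples for planar 4:2:0 come from separate, half-resolution textures, but the conversion itself is identical.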


    If FFMPEG will dispense 4:2:2 Y’CbCr frames you can use Apple’s GL_APPLE_ycbcr_422 OpenGL extension to upload the frames directly, and you needn’t use a shader at all.
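That upload path might look like this (a sketch: `tex`, `width`, `height` and `pixels` are placeholders, and which type constant you need depends on the byte order your decoder produces):

```c
/* Upload a packed 4:2:2 Y'CbCr frame directly; the driver converts it,
   so no shader is needed. Requires the GL_APPLE_ycbcr_422 extension. */
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGB,
             width, height, 0,
             GL_YCBCR_422_APPLE,           /* packed Y'CbCr pixel format */
             GL_UNSIGNED_SHORT_8_8_APPLE,  /* or ..._8_8_REV_APPLE; pick the
                                              one matching your byte order */
             pixels);
```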



    Thanks again vade and bangnoise. The FFMPEG decoder’s output format is planar 4:2:0 Y’CbCr, so I’ll need a shader for the RGB conversion. In the future, though, I’m planning to use the built-in hardware video decoder, which outputs 4:2:2 Y’CbCr; for that I’ll use the Apple extension. Thanks for the tip bangnoise – I’d never heard of that extension before. My ultimate goal is to feed four 720p videos to Syphon simultaneously, so any performance gain is very welcome.
