AVFoundation to Syphon


Viewing 9 posts - 1 through 9 (of 9 total)


    I’ve spent the last day trying to take the output from an AVFoundation AVPlayer and share it using Syphon. Occasionally I have managed to get a single frame, but never a continuous stream. I am setting up my context as follows:

        override func viewDidLoad() {
            displayLink = NSTimer.scheduledTimerWithTimeInterval(1 / 60, target: self, selector: "screenRefresh", userInfo: nil, repeats: true)
            let contextAttributes: [NSOpenGLPixelFormatAttribute] = [
                NSOpenGLPixelFormatAttribute(NSOpenGLPFAColorSize), NSOpenGLPixelFormatAttribute(32),
                //NSOpenGLPixelFormatAttribute(NSOpenGLPFADepthSize), NSOpenGLPixelFormatAttribute(24),
                //NSOpenGLPixelFormatAttribute(NSOpenGLPFAStencilSize), NSOpenGLPixelFormatAttribute(8),
                NSOpenGLPixelFormatAttribute(NSOpenGLPFASampleBuffers), NSOpenGLPixelFormatAttribute(1),
                NSOpenGLPixelFormatAttribute(NSOpenGLPFASamples), NSOpenGLPixelFormatAttribute(4),
                NSOpenGLPixelFormatAttribute(0) // attribute list must be zero-terminated
            ]
            context = NSOpenGLContext(format: NSOpenGLPixelFormat(attributes: contextAttributes)!, shareContext: nil)
            syphonServer = SyphonServer(name: "Test Video", context: context!.CGLContextObj, options: nil)
            player = AVPlayer(URL: NSURL(fileURLWithPath: "/Users/Me/Downloads/testvideo.m4v"))
            let bufferAttributes: [String: AnyObject] = [
                String(kCVPixelBufferPixelFormatTypeKey): Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
                String(kCVPixelBufferIOSurfacePropertiesKey): [String: AnyObject](),
                String(kCVPixelBufferOpenGLCompatibilityKey): true
            ]
            videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: bufferAttributes)
            videoOutput.suppressesPlayerRendering = true
        }

    And then using this code to publish the frames:

        func screenRefresh() {
            let itemTime = videoOutput.itemTimeForHostTime(CACurrentMediaTime())
            if videoOutput.hasNewPixelBufferForItemTime(itemTime) {
                if let pixelBuffer = videoOutput.copyPixelBufferForItemTime(itemTime, itemTimeForDisplay: nil) {
                    if let newSurface = CVPixelBufferGetIOSurface(pixelBuffer) {
                        if surface != nil { IOSurfaceDecrementUseCount(surface!) }
                        surface = newSurface.takeUnretainedValue()
                        let size = NSSize(width: IOSurfaceGetWidth(surface!), height: IOSurfaceGetHeight(surface!))
                        print("Texture with \(size)")
                        var texture = GLuint()
                        glGenTextures(1, &texture)
                        glBindTexture(GLenum(GL_TEXTURE_RECTANGLE_EXT), texture)
                        CGLTexImageIOSurface2D(context!.CGLContextObj, GLenum(GL_TEXTURE_RECTANGLE_EXT), GLenum(GL_RGBA), GLsizei(size.width), GLsizei(size.height), GLenum(GL_BGRA), GLenum(GL_UNSIGNED_INT_8_8_8_8_REV), surface!, 0)
                        syphonServer?.publishFrameTexture(texture, textureTarget: GLenum(GL_TEXTURE_RECTANGLE_EXT), imageRegion: NSRect(origin: CGPoint(x: 0, y: 0), size: size), textureDimensions: size, flipped: false)
                    }
                }
            }
        }

    I consistently get 30fps output in Simple Client but it’s usually just black or white. I don’t have much experience with OpenGL outside of Max/Jitter, so I imagine I’m just doing things in the wrong order, or missing something out?

    Many thanks for any help 🙂


    I don’t use Swift, but I think I see some errors in there:

    *) Don’t specify pixel buffer attributes, and AVFoundation will vend frames in a native fast-path format. Requesting a specific buffer format can add overhead when, say, RGB frames have to be delivered as YUV.

    *) You probably don’t want a planar format, but something like kCVPixelFormatType_422YpCbCr8, kCVPixelFormatType_422YpCbCr8_yuvs or kCVPixelFormatType_32BGRA. A 4:2:0 format is usually for DV-style chroma-subsampled video and will look like shit 🙂 32-bit BGRA is the native fast-path RGB upload on Intel x86/x64 machines, with full 4:4:4 sampling.

    *) kCVPixelBufferIOSurfacePropertiesKey is a dictionary, not a string

    *) you appear to be leaking textures every frame

    *) Your texture format and your buffer format do not match. You’re submitting textures as BGRA, but requesting the buffers as 4:2:0. Use kCVPixelFormatType_32BGRA for your format type.

    Otherwise it seems to be on the right path!
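    Putting those suggestions together, the pixel buffer attributes might look something like this. This is only a sketch in the same Swift 2-era API as the question's code, restating the fixes above (BGRA format matching the texture upload, a dictionary value for the IOSurface key):

    ```swift
    import AVFoundation
    import CoreVideo

    // Sketch: request BGRA, IOSurface-backed, GL-compatible buffers.
    let bufferAttributes: [String: AnyObject] = [
        String(kCVPixelBufferPixelFormatTypeKey): Int(kCVPixelFormatType_32BGRA), // matches the BGRA texture upload
        String(kCVPixelBufferIOSurfacePropertiesKey): [String: AnyObject](),      // value is a (here empty) dictionary
        String(kCVPixelBufferOpenGLCompatibilityKey): true
    ]
    let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: bufferAttributes)
    ```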


    Also I don’t think you need to manually specify the IOSurface increment/decrement use count. Any reason you are doing that?


    Amazing, thanks! I literally just had to update the kCVPixelBufferPixelFormatTypeKey to kCVPixelFormatType_32BGRA and set the syphon server to flip the texture and I’ve got an image 😀

    When you say that you don’t have to specify a pixel format, is there then an easy way to read the format of a surface and pass it back into the CGLTexImageIOSurface2D function?

    Regarding the leaking textures, can I safely just add glDeleteTextures(1, &texture) after publishing to syphon?

    I was changing the use count on the IOSurface as I’m currently holding on to the surface in a class variable between frames (after calling takeUnretainedValue, which sounded scary). But basically it’s because I saw it done that way on StackOverflow…! Can I throw that bit away? Thinking about it, I might actually hold onto the texture between frames rather than the surface.

    Also, one last question, I’ve called flushBuffer on my context after publishing each frame, is this something I actually need to do?

    Thanks for your help 👍🏻

    • This reply was modified 5 years, 1 month ago by DiGiTaLFX.

    I think you’re fine to specify the pixel format as 32 BGRA. Otherwise you’d have to handle the native case manually, switching on the pixel format of each vended pixel buffer and drawing each case, which is a royal pain in the ass.

    I don’t think you need to.

    And I think you can remove the flush. As for takeUnretainedValue – I’m honestly not sufficiently knowledgeable about Swift’s memory semantics to know.

    Just try removing it, and see what happens. 🙂


    Also you can create your texture once (check if it’s 0, then make it) and re-use it, theoretically. Or gen / delete every frame. Every glGen needs a corresponding glDelete at some point, or it’s a leak.
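    That create-once/delete-once pattern could be sketched like this (hypothetical class and property names; it assumes the GL context is current when these calls run):

    ```swift
    import OpenGL.GL

    // Sketch: lazily create one texture name and reuse it every frame.
    final class FramePublisher {
        private var texture: GLuint = 0 // 0 means "not created yet"

        func textureForFrame() -> GLuint {
            if texture == 0 {
                glGenTextures(1, &texture) // create once…
            }
            return texture                 // …then reuse on every refresh
        }

        deinit {
            // Every glGen needs a matching glDelete, or it leaks.
            if texture != 0 {
                glDeleteTextures(1, &texture)
            }
        }
    }
    ```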


    Yep you’re right, I’ve made all those changes and everything works great! For future reference if anyone needs it, I’ve uploaded a sample project to GitHub:


    Thanks again for your help 🙂


    Thanks for the sample code – super useful to others I expect.


    In case anyone comes across this post, the example has now moved to:

