Virtual Screen Device

    #5757
    Andrea Cremaschi
    Participant

    Hi,
    I got back to the idea of a virtual screen device that pipes its content to a Syphon server, and began working on it. It would amount to something like Soundflower for video! Since I am not an OpenGL guru like you are, maybe I could use your help…
    I am writing a front-end to Enno Welbers’ EWProxyFramebuffer (https://github.com/mkernel/EWProxyFramebuffer). I managed to get it up and running and tested it with goto10’s Desktop_Broadcaster; it works. Now I would like to write the Syphon-related code in a way that avoids slow copies to system memory. Which pattern should I follow? There are two ways to do screen capture in user space:
    1. direct access to the OpenGL front buffer: as vade knows, this only works in 10.6 (a sketch follows below)
    2. AVFoundation – but this is not as fast, and it doesn’t work before 10.6
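
    For reference, a sketch of option 1, the full-screen CGL context trick from Apple’s old screen-grab samples (untested here; CGLSetFullScreen is deprecated, which is why this path dies after 10.6, and width, height and pixels are assumed to exist already):

    // a CGL context attached directly to the main display
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAFullScreen,
        kCGLPFADisplayMask,
        (CGLPixelFormatAttribute)CGDisplayIDToOpenGLDisplayMask(CGMainDisplayID()),
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pixelFormat = NULL;
    GLint numFormats = 0;
    CGLChoosePixelFormat(attribs, &pixelFormat, &numFormats);

    CGLContextObj context = NULL;
    CGLCreateContext(pixelFormat, NULL, &context);
    CGLDestroyPixelFormat(pixelFormat);
    CGLSetCurrentContext(context);
    CGLSetFullScreen(context); // attach the context to the display (pre-10.7 only)

    // read straight out of the front buffer into client memory
    glReadBuffer(GL_FRONT);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);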

    But, as I understand it, writing code in kernel space could give us direct access to the video memory buffer and work around the Apple limitation? Below is Enno Welbers’ code that copies graphics memory to a memory buffer in user land. What do you think about it? Does it have something to do with the OpenGL front buffer?

    Let’s see this through: a virtual device like this as the output and the iGlasses virtual device as the input would breathe new life into Syphon!

    Andrea Cremaschi

    IOReturn info_ennowelbers_proxyframebuffer_driver::UpdateMemory()
    {
        if(fbuffer->State()!=0) //fbuffer->connected==1
        {
            //source memory
            IODeviceMemory *mem=fbuffer->getApertureRange(kIOFBSystemAperture);
            IOMemoryMap *map=mem->map(kIOMapAnywhere);
            unsigned int *buf=(unsigned int*)map->getVirtualAddress();

            //target memory
            IOMemoryMap *bmap=buffer->map(kIOMapAnywhere);
            unsigned char *destWalk=(unsigned char*)bmap->getVirtualAddress();

            //assumption 1: the system just wants some memory to play with, data start at 0
            //assumption 2: each row has 32 byte ahead
            //assumption 3: each row has 32 byte at the end
            //assumption 4: each row has 32 byte ahead + 128 byte ahead of everything
            //assumption 5: each row has 32 byte at the end + 128 byte ahead of everything

            //assumption 3 is correct (32 byte at end of each frame, 128 byte at end of buffer)
            IODisplayModeInformation information;
            fbuffer->getInformationForDisplayMode(fbuffer->State(), &information);
            for(int y=0;y<information.nominalHeight;y++)
            {
                for(int x=0;x<information.nominalWidth;x++)
                {
                    //unpack one 32-bit XRGB source pixel into 24-bit RGB
                    *destWalk=((*buf)&0xFF0000)>>16; //R
                    destWalk++;
                    *destWalk=((*buf)&0xFF00)>>8; //G
                    destWalk++;
                    *destWalk=(*buf)&0xFF; //B
                    destWalk++;
                    buf++;
                }
                buf+=8; //skip the 32 padding bytes (8 pixels) at the end of each row
            }
            map->release();
            bmap->release();
            mem->release();
            return kIOReturnSuccess;
        }

        return kIOReturnError;
    }

    #5758
    Andrea Cremaschi
    Participant

    Bad news… I wrote to the developer: his code doesn’t deal with graphics memory, but with a fake graphical framebuffer residing in RAM. 🙁
    What next? Is there anybody on this forum able to read through the NVidia open source driver code and carry this project forward?
    http://opensource.apple.com/source/IOGraphics/IOGraphics-409/IONDRVSupport/

    #5759
    vade
    Keymaster

    AVFoundation’s screen capture is the only way to do this with remotely decent performance (a sketch follows below). There are also the CGWindowList APIs, which allow for some smarter window capturing, but at the expense of performance.
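
    For reference, something like this is what I mean, an untested sketch assuming 10.7+, where AVCaptureScreenInput exists (the delegate and queue are placeholders you’d supply):

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetHigh;

    // the display itself as a capture input
    AVCaptureScreenInput *screenInput = [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()];
    if ([session canAddInput:screenInput])
        [session addInput:screenInput];

    // frames arrive as BGRA CVPixelBuffers on the delegate's queue
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
    [output setSampleBufferDelegate:delegate queue:queue]; // placeholder delegate and dispatch queue
    if ([session canAddOutput:output])
        [session addOutput:output];

    [session startRunning];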

    #5764
    Andrea Cremaschi
    Participant

    In the CoreGraphics API I’ve found CGDisplayCreateImage, which best fits the need. This path implies a copy from the virtual device’s framebuffer to a new buffer in RAM, and an upload to a GPU texture on every timer tick. But OK, it works, and it’s the best we can have… until a tough IOKit programmer comes along.
    Coming soon.
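
    Roughly what I have in mind, an untested sketch run on every timer tick; it assumes an existing GL context, a rectangle texture name and a SyphonServer (texture, syphonServer and displayID are my placeholder names, not real API):

    CGImageRef image = CGDisplayCreateImage(displayID);
    if (!image) return;

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    // the unavoidable copy to system memory
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));

    // upload to a rectangle texture; the display image is normally 32-bit BGRA
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texture);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(CGImageGetBytesPerRow(image) / 4));
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8, (GLsizei)width, (GLsizei)height, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, CFDataGetBytePtr(data));
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);

    // hand the texture to Syphon; CGImage rows start at the top,
    // so the uploaded texture is upside-down in GL terms
    [syphonServer publishFrameTexture:texture
                        textureTarget:GL_TEXTURE_RECTANGLE_ARB
                          imageRegion:NSMakeRect(0, 0, width, height)
                    textureDimensions:NSMakeSize(width, height)
                              flipped:YES];

    CFRelease(data);
    CGImageRelease(image);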

    #5768
    vade
    Keymaster

    I’ve read posts where people have found the backing of the Window Server’s CGImage using private APIs, allowing one to skip the copy phase. However, I’m surprised that method beats AVFoundation, because in theory the AVFoundation classes can be used to stay entirely on the GPU, as far as I understand it. I’d be interested in benchmarking the two. Oh, time.

    #5769
    Andrea Cremaschi
    Participant

    Well, I too suppose that AVCaptureVideoDataOutput’s sample buffers would be faster than CGDisplayCreateImage when dealing with framebuffers residing on the GPU… Sadly, I have to stick with the latter, because the open source virtual framebuffer I am dealing with is just a chunk of RAM, so an upload to a GPU texture is unavoidable. I am afraid this also means it won’t give us GPU acceleration when rendering to the virtual screen. But it’s still great when no other choice is possible (I plan to use it to enable Syphon output for QLab).
    I’ll take a look at the private APIs you point out; they seem interesting…

    #5771
    vade
    Keymaster

    I’m confused. What exactly are you trying to do? QLab lets you use QC, so use a Syphon Server QC plugin and everything stays on the GPU. You should never have to read back to the CPU; that defeats the entire purpose of Syphon.

    For AVFoundation, you can use:

    AVCaptureVideoPreviewLayer, coupled with AVCaptureDeviceInput, AVCaptureDevice and, of course, AVCaptureSession. You use a CARenderer to render your AVCaptureVideoPreviewLayer directly on the GPU. You would bind a Syphon Server’s FBO and render directly into the shared IOSurface-backed texture.

    Code from a test QC Plugin:

    Make the session and whatnot:

    // session with a high-quality preset
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;

    // default video device, wrapped in an input
    self.captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    self.captureDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.captureDevice error:NULL];
    [self.captureSession addInput:self.captureDeviceInput];

    // preview layer that the CARenderer will draw
    self.capturePreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.capturePreviewLayer.bounds = CGRectMake(0, 0, 640, 480);
    self.capturePreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    [self.captureSession startRunning];

    Assuming a CALayer and a CARenderer, when you bind the SyphonServer’s FBO, do:


    glViewport (0, 0, imageBounds.size.width, imageBounds.size.height);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho (0, imageBounds.size.width, 0, imageBounds.size.height, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    // Do we need this? I suspect not but...
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);

    if(!self.renderer)
        self.renderer = [CARenderer rendererWithCGLContext:cgl_ctx options:nil];

    renderer.bounds = imageBounds;
    renderer.layer = renderLayer;
    renderer.layer.position = CGPointMake(imageBounds.size.width/2.0, imageBounds.size.height/2.0);
    renderer.layer.bounds = imageBounds;
    renderer.layer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;

    [renderer beginFrameAtTime:CACurrentMediaTime() timeStamp:NULL];
    [renderer addUpdateRect:imageBounds];
    [renderer render];
    [renderer endFrame];

    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();

    You can probably optimize this more by disabling blending and removing the clear, etc.
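
    And for context, roughly how that sits inside a Syphon frame, an untested sketch assuming syphonServer is a SyphonServer created on the same cgl_ctx:

    if ([syphonServer bindToDrawFrameOfSize:imageBounds.size])
    {
        // ... the viewport / CARenderer code above goes here ...

        // unbind the FBO and tell clients a new frame is ready
        [syphonServer unbindAndPublish];
    }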

    #5772
    Andrea Cremaschi
    Participant

    Well, QLab’s QC support is still weak… In fact, you can only write a custom patch that modifies the way a single video is composited. I used to put a Syphon server in such a composition, but with some issues, even stability issues, when the movie is stopped and played again at short intervals.
    What I am working on is a tool that emulates a fake monitor output, so that you can choose that monitor as the main output in QLab (and other applications): a “Syphon virtual screen”. This output is then piped to the GPU in a Syphon server for further video processing. Sort of what Soundflower does for audio, but for video…

    #5773
    Andrea Cremaschi
    Participant

    Ok, so here it is:
    https://rapidshare.com/files/759792021/Syphon_virtual_screen.pkg

    I put all my code on a git repo: https://github.com/andreacremaschi/Syphon-virtual-screen/tree/90c64ee2983aaa99f5ecb2b1ce548e75e75bce2c
    But the application won’t work without the EWProxyFramebuffer kext driver, which is included in the installer package.

    Again: since the framebuffer resides in system RAM and not in GPU RAM, this is not the best solution for everyday Syphon work, but it can be helpful for hijacking the video output of an application that is not Syphon-enabled.
    Have fun! 🙂

    #5774
    Andrea Cremaschi
    Participant