March 22, 2012 at 7:24 am #5757
I got back to the idea of a virtual screen device that pipes its content to a Syphon server, and began working on it. It would be something like Soundflower, but for video! Since I am not an OpenGL guru like you, maybe I could use your help…
I am writing a front-end for Enno Welbers’ EWProxyFramebuffer (https://github.com/mkernel/EWProxyFramebuffer). I managed to get it up and running and tested it with goto10’s Desktop_Broadcaster, and it works. Now I would like to write the Syphon-related code while avoiding slow copies to system memory. Which pattern should I follow? There are two ways to do screen capture in user space:
1. direct access to the OpenGL front buffer: as vade knows, this only works in 10.6
2. AVFoundation – but this is not as fast, and it doesn’t work before 10.6
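For reference (this sketch is not from the original post), option 1 boils down to a glReadPixels from the front buffer. A minimal illustration, assuming a GL context covering the display is already current; the dimensions are placeholders:

```objc
// Sketch of option 1: reading the OpenGL front buffer (works on 10.6 only).
// Assumes a full-screen GL context is current; sizes are illustrative.
GLint width = 1920, height = 1080;
unsigned char *pixels = malloc((size_t)width * height * 4);
glReadBuffer(GL_FRONT);                 // read the displayed (front) buffer
glReadPixels(0, 0, width, height,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
// 'pixels' now holds the on-screen image (bottom-up). The readback stalls
// the GPU, which is one reason this path never performed well.
free(pixels);
```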
But as I understand it, writing code in kernel space could give us direct access to the video memory buffer and work around the Apple limitation? Below is Enno Welbers’ code that copies graphics memory to a memory buffer in user land. What do you think about it? Does it have something to do with the OpenGL front buffer?
Let’s see this through: a virtual device like this as the output and the iGlasses virtual device as the input would give Syphon a whole new life!
unsigned int *buf=(unsigned int*)map->getVirtualAddress();
unsigned char *destWalk=(unsigned char*)bmap->getVirtualAddress();
//assumption 1: the system just wants some memory to play with, data start at 0
//assumption 2: each row has 32 byte ahead
//assumption 3: each row has 32 byte at the end
//assumption 4: each row has 32 byte ahead + 128 byte ahead of everything
//assumption 5: each row has 32 byte at the end + 128 byte ahead of everything
//assumption 3 is correct (32 byte at end of each frame, 128 byte at end of buffer)
}
March 22, 2012 at 8:01 am #5758
Bad news… I wrote to the developer: his code doesn’t deal with graphics memory but with a fake graphical framebuffer residing in RAM. 🙁
What next? Is there somebody in this forum able to read through the NVIDIA open source driver code and carry on this project?
http://opensource.apple.com/source/IOGraphics/IOGraphics-409/IONDRVSupport/
March 22, 2012 at 8:16 am #5759
AVFoundation’s screen capture is the only way to do this with remotely decent performance. There are also the CGWindowList APIs, which allow for some smarter window capturing, but at the expense of performance.
March 23, 2012 at 3:05 am #5764
In the CoreGraphics API I’ve found CGDisplayCreateImage, which best fits the need. This path implies a copy from the virtual device’s framebuffer to a new buffer in RAM, and an upload to a GPU texture on every timer tick. But OK, it works, and it’s the best we can have… until a tough IOKit programmer comes along.
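For concreteness (not from the thread), a minimal sketch of this path, assuming a hypothetical `displayID` for the virtual screen and a timer firing once per frame; the Syphon hand-off is only indicated in a comment:

```objc
// Sketch of the CGDisplayCreateImage path (available from 10.6).
// 'displayID' is the virtual screen's CGDirectDisplayID (hypothetical here).
- (void)timerTick:(NSTimer *)timer
{
    // Copies the display's framebuffer into a new CGImage in system RAM.
    CGImageRef image = CGDisplayCreateImage(displayID);
    if (image) {
        // Next step: upload the pixels to a GL texture (e.g. glTexImage2D)
        // and publish that texture through a SyphonServer.
        CGImageRelease(image);
    }
}
```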
March 23, 2012 at 6:30 am #5768
I’ve read posts where people have found the backing of the Window Server’s CGImage using private APIs, allowing one to skip the copy phase. However, I’m surprised that method beats out AVFoundation, because in theory the AVFoundation class can be used to stay entirely on the GPU, as far as I understand it. I’d be interested in benchmarking the two. Oh, time.
March 23, 2012 at 6:59 am #5769
Well, I too suppose that AVCaptureVideoDataOutput’s sample buffers would be faster than CGDisplayCreateImage when dealing with framebuffers residing on the GPU… Sadly, I have to stick with the latter, because the open source virtual framebuffer I am dealing with is just a chunk of RAM, so an upload to a GPU texture is unavoidable. I am afraid this also means it won’t give us GPU acceleration when rendering to the virtual screen. But it’s still great when no other choice is possible (I plan to use it to enable Syphon output for QLab).
I’ll take a look at the private API you point out; it seems interesting…
March 23, 2012 at 8:25 am #5771
I’m confused. What exactly are you trying to do? QLab allows you to use QC, so use a Syphon Server QC plugin and everything stays on the GPU. You should never have to read back to the CPU; that defeats the entire purpose of Syphon.
For AVFoundation, you can use:
AVCaptureVideoPreviewLayer, coupled with AVCaptureDeviceInput, AVCaptureDevice, and of course AVCaptureSession. You use a CARenderer to render your AVCaptureVideoPreviewLayer directly on the GPU. You would bind a Syphon Server’s FBO and render directly into the shared IOSurface-backed texture.
Code from a test QC Plugin:
// Make the session and whatnot
self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
self.captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
self.captureDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.captureDevice error:NULL];
self.capturePreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
self.capturePreviewLayer.bounds = CGRectMake(0, 0, 640, 480);
self.capturePreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
Assuming a CALayer and CARenderer, when you bind the SyphonServer’s FBO, do:
glViewport (0, 0, imageBounds.size.width, imageBounds.size.height);
glOrtho (0, imageBounds.size.width, 0, imageBounds.size.height, -1, 1);
// Do we need this? I suspect not but...
glClearColor(0.0, 0.0, 0.0, 0.0);
self.renderer = [CARenderer rendererWithCGLContext:cgl_ctx options:nil];
renderer.bounds = imageBounds;
renderer.layer = renderLayer;
renderer.layer.position = NSMakePoint(imageBounds.size.width/2.0, imageBounds.size.height/2.0);
renderer.layer.bounds = imageBounds;
renderer.layer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
[renderer beginFrameAtTime:CACurrentMediaTime() timeStamp:NULL];
[renderer addUpdateRect:imageBounds];
[renderer render];
[renderer endFrame];
You can probably optimize this more by disabling blending and removing the clear, etc.
March 23, 2012 at 9:25 am #5772
Well, QLab’s QC support is still weak… In fact, all you can do is write a custom patch to modify the way a single video is composited. I used to put a Syphon server in that composition, but with some issues – even stability issues when the movie is stopped and played again within short time intervals.
What I am working on is a tool that emulates a fake monitor output, so that you can choose that monitor as the main output in QLab (and other applications) – a “Syphon virtual screen”. This output is then piped to the GPU through a Syphon server for further video processing. Sort of what Soundflower does for audio, but for video…
March 23, 2012 at 12:12 pm #5773
Ok, so here it is:
I put all my code on a git repo: https://github.com/andreacremaschi/Syphon-virtual-screen/tree/90c64ee2983aaa99f5ecb2b1ce548e75e75bce2c
But the application won’t work without the EWProxyFramebuffer kext driver, which is included in the installer package.
Again: since the framebuffer resides in system RAM and not in GPU RAM, this is not the best solution for everyday Syphon work, but it can be helpful for hijacking the video output of an application that is not Syphon-enabled.
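For the curious, the per-frame RAM-to-Syphon hand-off could look roughly like this. This sketch is not from the thread; `framebuffer`, `width`, `height`, `texture`, and `server` are illustrative stand-ins for the mapped EWProxyFramebuffer memory, an existing GL texture, and an existing SyphonServer:

```objc
// Sketch: pushing the kext's RAM framebuffer through Syphon each frame.
// Assumes a valid GL context; all names here are hypothetical.
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, width, height, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, framebuffer); // RAM -> VRAM copy
[server publishFrameTexture:texture
              textureTarget:GL_TEXTURE_RECTANGLE_EXT
                imageRegion:NSMakeRect(0, 0, width, height)
          textureDimensions:NSMakeSize(width, height)
                    flipped:NO];
```

The glTexImage2D call is exactly the unavoidable upload discussed above; everything after it stays on the GPU.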
Have fun! 🙂
March 24, 2012 at 2:53 am #5774
The link is broken; this one works: https://rapidshare.com/files/3799395488/Syphon_virtual_screen.pkg