Forum Replies Created
Andrea Cremaschi
Participant

@bangnoise: thanks for the code!
Actually, it does something different from the one I posted: you are drawing into a -single- PBO, while the CVOpenGLBufferPool code manages -a pool- of frame buffers, using the Core Video framework.
This helps when a new frame may become available from the Syphon server before the old one has been fully downloaded to CPU memory (i.e. when heavy GPU image manipulation delays the frame download). It is also a bit more time consuming, since a new buffer must be attached to the OpenGL context for every new Syphon image (i.e. the CVOpenGLBufferAttach call). But with slightly more complex code than the one I posted (in a multithreaded app), it is possible to hide this delay by attaching the buffer as soon as the new Syphon image has been copied to the OpenGL buffer.

So, speaking again to those who want to copy Syphon images to CPU memory: if you just want speed, don't want to mess with multithreading, and don't need to do heavy image manipulation on the new Syphon image, go with the "OpenGL PBO method". If you need to do heavy GPU image manipulation, or need to process ALL the frames, go with the "CVOpenGLBufferPool method" (a sketch of which follows).
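To give an idea, here is a minimal sketch of what the per-frame pool dance looks like. The 640×400 size is just my frame size, and drawSyphonTexture() is a placeholder for whatever renders the current Syphon texture into the attached buffer; this is a sketch, not the code from my app:

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>

// Placeholder: stands in for whatever draws the current Syphon texture
// into the currently attached drawable.
static void drawSyphonTexture(void) { /* ... */ }

static void readbackWithBufferPool(CGLContextObj cgl_ctx, void *pixels)
{
    static CVOpenGLBufferPoolRef pool = NULL;
    if (pool == NULL) {
        NSDictionary *bufferAttribs = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:640], (NSString *)kCVOpenGLBufferWidth,
            [NSNumber numberWithInt:400], (NSString *)kCVOpenGLBufferHeight,
            nil];
        CVOpenGLBufferPoolCreate(kCFAllocatorDefault, NULL,
                                 (CFDictionaryRef)bufferAttribs, &pool);
    }

    // Grab a fresh buffer from the pool and make it the context's drawable:
    // this attach is the per-frame cost mentioned above.
    CVOpenGLBufferRef buffer = NULL;
    CVOpenGLBufferPoolCreateOpenGLBuffer(kCFAllocatorDefault, pool, &buffer);
    if (CVOpenGLBufferAttach(buffer, cgl_ctx, 0, 0, 0) == kCVReturnSuccess) {
        drawSyphonTexture();
        glReadPixels(0, 0, 640, 400, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    }
    CVOpenGLBufferRelease(buffer); // hands the buffer back to the pool
}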
ciao 🙂
ac
Andrea Cremaschi
Participant

Frames were 640×400 (webcam input).
I’m developing an application for basic computer vision tasks (i.e. presence/motion detection) for use in theater shows, live performances, etc. Of course Syphon-enabled! The code name is kineto.
I will let you know as soon as I release a first beta.

a.c.
Andrea Cremaschi
Participant

ok, here I am:
the method createImageForDelegate: takes about 6-7 ms to complete on an i5 with a GeForce 330M.
That’s not bad.
What about the FBO-PBO method? Who wants to try? 🙂 A rough sketch of what I have in mind follows.
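Something like this: two PBOs alternated so that glReadPixels returns immediately while we map the buffer filled on the previous frame. All the names here are made up for the sketch, and `texture` is assumed to be a Syphon rectangle texture with the right context current:

#import <OpenGL/gl.h>
#import <OpenGL/glext.h>

// Alternate between two PBOs: kick off an async read into one while
// mapping the one that was filled on the previous frame.
static GLuint fbo = 0, pbo[2] = {0, 0};
static int frameIndex = 0;

static void readbackFrameAsync(GLuint texture, GLsizei w, GLsizei h,
                               void (^handler)(const void *pixels))
{
    if (fbo == 0) {
        glGenFramebuffersEXT(1, &fbo);
        glGenBuffers(2, pbo);
        for (int i = 0; i < 2; i++) {
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
            glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
        }
    }

    // Attach the Syphon texture to the FBO and start an asynchronous read
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_RECTANGLE_EXT, texture, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frameIndex]);
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL); // returns at once

    // Map the other PBO; by now its DMA transfer should have completed.
    // (On the very first frame this maps a buffer that has not been filled yet.)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[1 - frameIndex]);
    const void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels) {
        handler(pixels);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }

    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    frameIndex = 1 - frameIndex;
}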
a.

Andrea Cremaschi
Participant

well, I haven't done an in-depth profiling yet, but everything is smooth right now and I am quite happy with it. Core Video is supposed to take care of the PBOs' creation and maintenance, but it is difficult to say what is going on under the hood (i.e. how many buffers are alive, etc.); the sketch below shows what little control the pool attributes give you.
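For what it's worth, the pool does accept a couple of creation-time attributes that bound its behaviour. A minimal sketch (again, 640×400 is just my frame size; the counts are arbitrary):

// Ask Core Video to keep at least 4 buffers alive and to reclaim any
// buffer that has gone unused for more than a second.
NSDictionary *poolAttribs = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:4], (NSString *)kCVOpenGLBufferPoolMinimumBufferCountKey,
    [NSNumber numberWithDouble:1.0], (NSString *)kCVOpenGLBufferPoolMaximumBufferAgeKey,
    nil];
NSDictionary *bufferAttribs = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:640], (NSString *)kCVOpenGLBufferWidth,
    [NSNumber numberWithInt:400], (NSString *)kCVOpenGLBufferHeight,
    nil];
CVOpenGLBufferPoolRef pool = NULL;
CVOpenGLBufferPoolCreate(kCFAllocatorDefault,
                         (CFDictionaryRef)poolAttribs,
                         (CFDictionaryRef)bufferAttribs,
                         &pool);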
Anyway, I will give you some performance feedback as soon as I've done some profiling, OK?
ciao!
a.c.

Andrea Cremaschi
Participant

yeeeh! it would be great..
please!

Andrea Cremaschi
Participant

Well, I am doing some presence/motion analysis using OpenCL (wrapped in CIFilters for development convenience), so I can't avoid accessing pixels in CPU memory.
You pointed me in the right direction: now I can access picture data from an FBO with glReadPixels, and everything is fine again. Thanks!!

As a gift to the growing Syphon community (it will become huge!!), here is a workaround for whoever may be interested in taking a bitmap snapshot of a Syphon picture (e.g. to save it to a file) and wants to avoid messing with OpenGL.
Note that this is NOT fast at all!!

// 1. Receive the Syphon image and wrap it in a CIImage valid in the OpenGL context cgl_ctx
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
SyphonImage *image = [[syClient newFrameImageForContext:cgl_ctx] autorelease];
GLuint texture = [image textureName];
NSSize imageSize = [image textureSize];
const CGRect rect = {.origin = {0, 0}, .size = {imageSize.width, imageSize.height}};
CIImage *ciImage = [CIImage imageWithTexture:texture
                                        size:rect.size
                                     flipped:YES
                                  colorSpace:cs];

// 2. Create a CIContext shared with the OpenGL context used to create the Syphon image
NSOpenGLPixelFormatAttribute attributes[] = {
    NSOpenGLPFAPixelBuffer,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAAccelerated,
    NSOpenGLPFADepthSize, 24,
    (NSOpenGLPixelFormatAttribute)0
};
NSOpenGLPixelFormat *pixelFormat =
    [[[NSOpenGLPixelFormat alloc] initWithAttributes:attributes] autorelease];
CIContext *ciCtx = [CIContext contextWithCGLContext:cgl_ctx
                                        pixelFormat:[pixelFormat CGLPixelFormatObj]
                                         colorSpace:cs
                                            options:nil];

// 3. Create a Quartz 2D (CGImage) copy of the Syphon image in this CIContext
CGLSetCurrentContext(cgl_ctx);
CGImageRef cgImage = [ciCtx createCGImage:ciImage fromRect:rect];
CGColorSpaceRelease(cs);
CIImage *cgBackedCIImage = [CIImage imageWithCGImage:cgImage];

// 4. Finally, draw this CPU-side copy into a bitmap
NSBitmapImageRep *bitmap = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:nil
                  pixelsWide:(NSInteger)rect.size.width
                  pixelsHigh:(NSInteger)rect.size.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:32] autorelease];
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
[[context CIContext] drawImage:cgBackedCIImage atPoint:CGPointZero fromRect:rect];
CGImageRelease(cgImage);
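For comparison, the fast path mentioned at the top of this post, reading straight out of an FBO with glReadPixels, could look roughly like this. A sketch, assuming cgl_ctx is current and `image` is a valid SyphonImage; the read here is synchronous:

#import <OpenGL/gl.h>
#import <OpenGL/glext.h>

GLuint fbo;
NSSize size = [image textureSize];
void *pixels = malloc((size_t)size.width * (size_t)size.height * 4);

// Attach the Syphon rectangle texture to a throwaway FBO and read it back
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_RECTANGLE_EXT, [image textureName], 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT) {
    // Blocks until the pixels have arrived in CPU memory
    glReadPixels(0, 0, (GLsizei)size.width, (GLsizei)size.height,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fbo);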
Andrea Cremaschi
Participant

well, actually what I am trying to do is to copy Syphon images into CPU memory (i.e. into an NSBitmapImageRep). Since I am doing some real-time processing, I need to do this in the fastest possible way. I tried to reuse some code I wrote for a simple QTKit stream (where CIImages were created with the [CIImage imageWithCVImageBuffer:pixelBuffer] method), but the result now is: black images. Why? Here is the pseudo code:
// openGLRenderContext is a previously created NSOpenGLContext
CGLContextObj cgl_ctx = [openGLRenderContext CGLContextObj];
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();

SyphonImage *image = [[syClient newFrameImageForContext:cgl_ctx] autorelease];
GLuint texture = [image textureName];
NSSize imageSize = [image textureSize];
const CGRect r = {.origin = {0, 0}, .size = {imageSize.width, imageSize.height}};
CIImage *ciImage = [CIImage imageWithTexture:texture
                                        size:r.size
                                     flipped:YES
                                  colorSpace:cs];

NSBitmapImageRep *bitmap = [[[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:nil
                  pixelsWide:(NSInteger)[ciImage extent].size.width
                  pixelsHigh:(NSInteger)[ciImage extent].size.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:32] autorelease];
NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
CGRect rect = [ciImage extent];
// This draws black images
[[graphicsContext CIContext] drawImage:ciImage atPoint:CGPointZero fromRect:rect];
I suppose it is not working because [graphicsContext CIContext] is not shared with cgl_ctx. So, how should I proceed? What is the "best practice"?
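For completeness, a minimal sketch of one way to get a CIContext that really is tied to cgl_ctx; the snapshot workaround posted elsewhere in this thread boils down to this:

// Build a CIContext on the same CGL context the Syphon image lives in,
// then pull a CPU-side CGImage out of it.
CGColorSpaceRef cs2 = CGColorSpaceCreateDeviceRGB();
CIContext *sharedCtx = [CIContext contextWithCGLContext:cgl_ctx
                                            pixelFormat:CGLGetPixelFormat(cgl_ctx)
                                             colorSpace:cs2
                                                options:nil];
CGImageRef snapshot = [sharedCtx createCGImage:ciImage fromRect:[ciImage extent]];
CGColorSpaceRelease(cs2);
// snapshot can now be drawn into any graphics context, or written to disk.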
Andrea Cremaschi
Participant

Thanks for the replies! They have been really useful. So the point is that I have some more Core Video and thread management to study!
And good work on Syphon! I'll keep in touch, and I am looking forward to the stable version!