Syphon -> CoreImage
This topic has 8 replies, 3 voices, and was last updated 9 years, 8 months ago by Andrea Cremaschi.
July 2, 2011 at 10:20 am #5165 | Andrea Cremaschi (Participant)
beloved developers,
as I fear you don’t feel gratified enough for your efforts, here I am thanking you again for your great project 🙂
I have a question and a feature request:
1. The question: as far as you know, how many of Syphon's features will make it into Lion's AVFoundation framework? Isn't Syphon an obvious development of the new IOSurface class?
2. The feature request: it would be great to have a method in the SyphonClient class to extract new frames directly into FBO-backed CIImages… will it ever happen? 🙂
Greetings,
a.c.

July 2, 2011 at 11:48 am #5166 | bangnoise (Keymaster)
Thanks for the thanks!
1. I’d guess none.
2. Unlikely, but if you are keen you could open an issue on the framework project and we'll consider it / see if it accumulates interest. What are you trying to do?

July 2, 2011 at 8:10 pm #5167 | vade (Keymaster)
You can generate a CIImage directly from a SyphonImage (a texture output from a SyphonServer) via
+ (CIImage *)imageWithTexture:(unsigned int)name size:(CGSize)size flipped:(BOOL)flag colorSpace:(CGColorSpaceRef)cs
Just make sure the GL context you init your Core Image context with is shared with, or the same as, Syphon's, via
+ (CIContext *)contextWithCGLContext:(CGLContextObj)ctx pixelFormat:(CGLPixelFormatObj)pf colorSpace:(CGColorSpaceRef)cs options:(NSDictionary *)dict
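Putting those two calls together, a minimal sketch (assuming `syClient` is an already-connected SyphonClient, `cgl_ctx` is a CGLContextObj shared with the Syphon server's context, and `pixelFormat` is the NSOpenGLPixelFormat that context was created with; all names are illustrative):

```objc
// Create a CIContext on the same (or a shared) GL context as Syphon,
// then wrap the latest Syphon frame's texture in a CIImage.
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CIContext *ciContext = [CIContext contextWithCGLContext:cgl_ctx
                                            pixelFormat:[pixelFormat CGLPixelFormatObj]
                                             colorSpace:cs
                                                options:nil];

SyphonImage *frame = [syClient newFrameImageForContext:cgl_ctx]; // retained ("new…")
NSSize size = [frame textureSize];
CIImage *ciImage = [CIImage imageWithTexture:[frame textureName]
                                        size:CGSizeMake(size.width, size.height)
                                     flipped:YES
                                  colorSpace:cs];
// ...filter or draw ciImage using ciContext here...
[frame release];
CGColorSpaceRelease(cs);
```

Because the texture is only wrapped, not copied, the SyphonImage must stay alive for as long as the CIImage is used.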
As for AVFoundation, it is a replacement for QuickTime, and sits “above” the layer Syphon works at (IOSurface / Core Video). IOSurface is already integrated heavily with other imaging technologies in 10.6, and 10.7 adds a few small changes/additional features. As for AVFoundation incorporating anything from Syphon, I have no idea why that would ever happen.
Syphon is built on top of IOSurface, and leverages some of Apple's inter-process communication APIs to handle announcing frame availability. As long as IOSurface is around, Syphon will be around.
July 3, 2011 at 1:41 am #5168 | Andrea Cremaschi (Participant)
Well, actually what I am trying to do is to copy SyphonImages into CPU memory space (i.e. into an NSBitmapImageRep). Since I am doing some processing in real time I need to do this in the fastest possible way. I tried to reuse some of the code that I wrote for a simple QTKit stream (where CIImages were created with the [CIImage imageWithCVImageBuffer:pixelBuffer] method), but the result now is: black images. Why? Here is the pseudo code:
CGLContextObj cgl_ctx = [openGLRenderContext CGLContextObj]; // openGLRenderContext is a previously created OpenGL context
SyphonImage *image = [[syClient newFrameImageForContext:cgl_ctx] autorelease];
GLuint texture = [image textureName];
NSSize imageSize = [image textureSize];
const CGRect r = {.origin = {0, 0}, .size = {imageSize.width, imageSize.height}};
CIImage *ciImage = [CIImage imageWithTexture:texture size:r.size flipped:YES colorSpace:cs]; // cs: an existing CGColorSpaceRef
NSBitmapImageRep *bitmap = [[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                                    pixelsWide:[ciImage extent].size.width
                                                                    pixelsHigh:[ciImage extent].size.height
                                                                 bitsPerSample:8
                                                               samplesPerPixel:4
                                                                      hasAlpha:YES
                                                                      isPlanar:NO
                                                                colorSpaceName:NSCalibratedRGBColorSpace
                                                                   bytesPerRow:0
                                                                  bitsPerPixel:32] autorelease];
NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
CGRect rect = [ciImage extent];
[[graphicsContext CIContext] drawImage:ciImage atPoint:CGPointZero fromRect:rect];
I suppose that it is not working because [graphicsContext CIContext] is not shared with cgl_ctx. So, how should I proceed? What is the “best practice”?
July 3, 2011 at 4:42 am #5169 | bangnoise (Keymaster)
Your diagnosis seems correct.
Avoid readback to main memory entirely if at all possible (what are you doing with these buffers?).
If you must do it, don't use CIImage; do it all directly in OpenGL.
Use a pair of PBOs and do buffered readback. The only gotcha is that you can't usefully glGetTexImage a texture from Syphon, so you have to draw it into an FBO and then glGetTexImage the texture backing the FBO (or glReadPixels from the FBO).
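The double-buffered PBO readback described above can be sketched as follows (a minimal, illustrative sketch: the function names, the BGRA format choice, and the block-based handler are assumptions, and the FBO containing the Syphon frame is assumed to be already bound as the read framebuffer):

```objc
#import <OpenGL/gl.h>

static GLuint pbos[2];
static int readIndex = 0;

// Create two pixel-pack buffers sized for width x height BGRA pixels.
void setupPBOs(GLsizei width, GLsizei height) {
    glGenBuffers(2, pbos);
    for (int i = 0; i < 2; i++) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Call once per frame; hands the *previous* frame's pixels to the handler,
// so the GPU->CPU copy overlaps with rendering (one frame of latency).
void readbackFrame(GLsizei width, GLsizei height, void (^handler)(const void *pixels)) {
    int writeIndex = (readIndex + 1) % 2;

    // Kick off an asynchronous read into one PBO (offset 0, no CPU pointer)...
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[writeIndex]);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0);

    // ...while mapping the other PBO, which was filled last frame.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[readIndex]);
    const void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels) {
        handler(pixels);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    readIndex = writeIndex;
}
```

The one-frame latency is the price of keeping glReadPixels asynchronous; mapping the same PBO you just read into would stall until the transfer completes.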
July 3, 2011 at 11:06 am #5170 | Andrea Cremaschi (Participant)
Well, I am doing some presence/motion analysis using OpenCL (wrapped, for development convenience, in CIFilters), so I can't avoid accessing pixels in CPU memory.
You pointed me in the right direction: now I can access picture data from an FBO with glReadPixels, and everything is fine again. Thanks!!
As a gift to the growing Syphon community (which will become huge!!) I can now suggest a workaround for whoever may be interested in taking a bitmap snapshot of a Syphon picture (e.g. to save it to a file) and wants to avoid messing with OpenGL.
Note that this is NOT fast at all!!

// 1. receive the Syphon image in a CIImage wrapper valid in OpenGL context cgl_ctx
SyphonImage *image = [[syClient newFrameImageForContext:cgl_ctx] autorelease];
GLuint texture = [image textureName];
NSSize imageSize = [image textureSize];
const CGRect r = {.origin = {0, 0}, .size = {imageSize.width, imageSize.height}};
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CIImage *syphonCIImage = [CIImage imageWithTexture:texture size:r.size flipped:YES colorSpace:cs];

// 2. create a CIContext shared with the OpenGL context used to create the Syphon image
NSOpenGLPixelFormatAttribute attributes[] = {
    NSOpenGLPFAPixelBuffer,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAAccelerated,
    NSOpenGLPFADepthSize, 24,
    (NSOpenGLPixelFormatAttribute)0
};
NSOpenGLPixelFormat *pixelFormat = [[[NSOpenGLPixelFormat alloc] initWithAttributes:attributes] autorelease];
CIContext *ciCtx = [CIContext contextWithCGLContext:cgl_ctx
                                        pixelFormat:[pixelFormat CGLPixelFormatObj]
                                         colorSpace:cs
                                            options:nil];

// 3. create a Quartz 2D image copy of the Syphon image in this CIContext
CGLSetCurrentContext(cgl_ctx);
CGRect rect = [syphonCIImage extent];
CGImageRef cgImage = [ciCtx createCGImage:syphonCIImage fromRect:rect];
CFRelease(cs);
CIImage *myCGImage = [CIImage imageWithCGImage:cgImage];

// 4. finally, copy this Quartz 2D image into a bitmap
NSBitmapImageRep *bitmap = [[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                                    pixelsWide:rect.size.width
                                                                    pixelsHigh:rect.size.height
                                                                 bitsPerSample:8
                                                               samplesPerPixel:4
                                                                      hasAlpha:YES
                                                                      isPlanar:NO
                                                                colorSpaceName:NSCalibratedRGBColorSpace
                                                                   bytesPerRow:0
                                                                  bitsPerPixel:32] autorelease];
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmap];
[[context CIContext] drawImage:myCGImage atPoint:CGPointZero fromRect:rect];
CGImageRelease(cgImage);
July 3, 2011 at 1:58 pm #5171 | bangnoise (Keymaster)
Great – glad you're on the right path.
I’d really recommend using PBOs for your GL path – you’ll see a big performance improvement from the async read-back.
July 3, 2011 at 5:04 pm #5172 | vade (Keymaster)
In a future update to the Syphon framework, it might be possible to request a CIImage directly from it. There are some calls to create a Core Image image from an existing IOSurface. No promises, but it might be useful in some cases, especially for applications not using OpenGL and leveraging only Core Image for their processing pipeline.
Let me discuss with bangnoise, see if it makes sense. There are lots of gotchas, so again, no promises.
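For reference, the Core Image call vade alludes to already exists on 10.6 and later; a minimal sketch, assuming `surface` is a valid IOSurfaceRef obtained elsewhere:

```objc
// Wrap an existing IOSurface in a CIImage without copying its pixels (10.6+).
// `surface` is assumed to be a valid IOSurfaceRef backing a frame.
CIImage *image = [CIImage imageWithIOSurface:surface];
```

A client would still need access to the IOSurfaceRef itself, which Syphon does not currently expose, hence the framework-level discussion above.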
July 5, 2011 at 12:43 am #5173 | Andrea Cremaschi (Participant)
Yeeeh! It would be great…
please!