Receive Syphon input into Processing Sketch?


  • This topic has 19 replies, 5 voices, and was last updated 9 years ago by L05_.
Viewing 20 posts - 1 through 20 (of 20 total)

    To my knowledge, the Syphon implementations in Processing have centered solely around sending video output from a Processing sketch to another program by way of the Syphon framework. For instance, I can send the output of a Processing sketch to MadMapper or Modul8. However, I want to send video from Modul8 into Processing via Syphon. Is this possible?

    I have read that Syphon uses OpenGL so that it can stay on the GPU and keep things fast, so if I’m trying to receive video from elsewhere, will I even be able to use Syphon? If not, are there other options?


    Yes, this is currently a limitation of our Processing / Java JNI implementation; we lost inertia while developing it. Obviously it’s just a matter of implementing it. If you are up for it, you can implement it by building off of our Java JSyphon JNI code base in the Google Code project.

    Syphon uses OpenGL, but it only requires a texture and a context. So if your application does not support OpenGL natively, but has buffers on the CPU, you could manually create a GL context, a texture out of the buffer and then leverage Syphon to publish that frame.

    What is “elsewhere”? Can you be more specific about your goals?


    Thanks for your response. It’s been really hard to find support on the Processing forums.

    I wrote code for a Processing sketch that produces real-time, generative video content for a projection mapping project I’m working on. Here’s a video of it in action:

    I want to take video input into Processing from Modul8’s Syphon output and map it to quad faces in my Processing sketch, essentially using Processing to handle the quad mapping I would otherwise use a program like MadMapper for.

    I am down to give implementing it a shot, but my experience is limited. I’ll go check out the Google Code project now… do you have any suggestions on where to start?


    JSyphon, the Java JNI implementation, needs work to finish the client so it dispenses frames – this will require work in JSyphonImage to wrap SyphonImage and JSyphonClient to complete the wrapping of SyphonClient – so look at those files as a place to start. JNI isn’t particularly joyful to work with in my limited experience, so uh… good luck!


    hey guys, I made some progress with the client side in JSyphon.

    To avoid writing a JNFTypeCoercion protocol to convert the native SyphonImage class to the Java counterpart, I just return the texture id, width and height in a dictionary with (string, int) key-value pairs.
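    For anyone following along, the Java side of that hand-off might look something like this. This is a minimal sketch: the `FrameInfo` class is hypothetical, and only the key names follow the dictionary described above; it is not the committed JSyphon API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of unpacking the (String, Integer) frame-info map that a native
// newFrameDataForContext() call could return. The FrameInfo class itself
// is hypothetical; only the key names come from the dictionary above.
public class FrameInfo {
    final int textureName;
    final int width;
    final int height;

    FrameInfo(Map<String, Integer> data) {
        // Missing keys fall back to 0, mirroring the all-zero failure mode.
        textureName = get(data, "name");
        width = get(data, "width");
        height = get(data, "height");
    }

    private static int get(Map<String, Integer> data, String key) {
        Integer v = data.get(key);
        return v == null ? 0 : v;
    }

    // A frame is only usable if the texture id and size are non-zero.
    boolean isValid() {
        return textureName != 0 && width > 0 && height > 0;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<String, Integer>();
        m.put("name", 7);
        m.put("width", 1024);
        m.put("height", 768);
        System.out.println(new FrameInfo(m).isValid());  // true
        System.out.println(new FrameInfo(new HashMap<String, Integer>()).isValid());  // false
    }
}
```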

    I managed to get a notification in Processing when a new frame is available in the client, so things look promising, but when I retrieve the dictionary, all the values are zero.

    I committed all the changes to the repo. Just in case, here is the native code I added to return the dictionary with the image information:

    JNIEXPORT jobject JNICALL Java_jsyphon_JSyphonClient_newFrameDataForContext(JNIEnv * env, jobject jobj)
    {
        jobject imgdata = nil;

        NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];

        [(SyphonNameboundClient*)mClient lockClient];
        SyphonClient *client = [(SyphonNameboundClient*)mClient client];
        SyphonImage* img = [client newFrameImageForContext:CGLGetCurrentContext()];

        NSSize texSize = [img textureSize];

        NSNumber *name = [NSNumber numberWithInt:[img textureName]];
        NSNumber *width = [NSNumber numberWithFloat:texSize.width];
        NSNumber *height = [NSNumber numberWithFloat:texSize.height];
        NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:
                             name, @"name",
                             width, @"width",
                             height, @"height",
                             nil];

        JNFTypeCoercer* coercer = [JNFDefaultCoercions defaultCoercer];
        [JNFDefaultCoercions addMapCoercionTo:coercer];

        imgdata = [coercer coerceNSObject:dic withEnv:env];

        [(SyphonNameboundClient*)mClient unlockClient];

        [pool drain];

        return imgdata;
    }

    The dictionary object is passed correctly to Java, but as I said, all the values are set as zero.

    My experience with Objective-C and JNI is limited, let me know if you see anything obviously wrong.


    Great – thanks so much for giving this more time.

    1. Zero-values: Are you giving up after your first try? Keep trying – a new SyphonClient doesn’t receive frames instantly.

    2. A leak: newFrameImageForContext: returns a retained SyphonImage; you need to release it.

    3. In fact you do need to retain the image such that it lasts as long as anyone might be using the values in the dictionary, because if it isn’t retained the SyphonClient is free to destroy the underlying texture. If you were just passing the dictionary as a NSDictionary I’d say stuff the SyphonImage in the dictionary as well, but I don’t know what the coercion process does. Are arbitrary NSObject subclasses retained after coercion, or only classes JNF knows how to coerce? … It might actually be simpler to write a Java wrapper class for SyphonImage.
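    A wrapper along those lines might be sketched as follows. Everything here is hypothetical (the field and method names are assumptions, and the native release is only indicated by a comment), but it shows the ownership idea: keep the SyphonImage retained for as long as Java code may use its values, then release it explicitly.

```java
// Hypothetical Java-side wrapper for a native SyphonImage, along the lines
// suggested above: it keeps the image retained (via an opaque native handle)
// for as long as Java code may use the texture name and size, and releases
// it explicitly when the consumer is done.
public class JSyphonImage {
    private long nativeHandle;   // opaque pointer to the retained SyphonImage
    private final int textureName;
    private final float width;
    private final float height;

    JSyphonImage(long nativeHandle, int textureName, float width, float height) {
        this.nativeHandle = nativeHandle;
        this.textureName = textureName;
        this.width = width;
        this.height = height;
    }

    int textureName() { return textureName; }
    float width() { return width; }
    float height() { return height; }

    // In a real implementation this would call through JNI to release the
    // retained SyphonImage; here it just invalidates the handle.
    void release() { nativeHandle = 0; }

    boolean isReleased() { return nativeHandle == 0; }
}
```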

    4. This conversation may have been had before, but are you sure you want to be wrapping SyphonNameboundClient and not simply SyphonClient? Sorry if I’m repeating myself!

    Anyway thanks so much for all this – I look forward to taking a proper look soon.


    I left the client application running for a while, say 20 minutes, and it always gave name=0, width = 0, height = 0. However, it knows that there is a server sending out frames, because the hasNewFrame method returns true (I tried the client without any syphon server running, and never got any new frame notification, meaning that at least the notifications are not bogus).

    Will look at the SyphonImage issues later.


    I’m assuming you get a nil value for img which is why everything it returns is nil?

    I’d consider my point 4 and not using SyphonNameboundClient – it’s adding a layer of complication you probably don’t need. Perhaps you call its methods to set the name and app-name every render cycle, causing it to recreate the client every loop? That would mean the client never exists long enough to receive a frame.


    Actually, I was making a very silly error: calling the newFrameDataForContext() method from a thread different from the animation thread (where Processing’s OpenGL context is created), which resulted in getting nil every time I called [client newFrameImageForContext:CGLGetCurrentContext()] in native code…

    Doing all the calls from draw() results in valid id, width and height… I think we are close to having the client working 🙂

    Once I get this part working, I will look at replacing SyphonNameboundClient, as you suggest.


    It is working!

    I will clean up the code and examples and upload a new version of the Processing lib.


    Excellent! So many thanks for the effort.


    I’m testing on Lion now, where the new client functionality doesn’t seem to work, while it does on Snow Leopard. The Processing client does not connect to the running Syphon server, without giving any error messages.

    Most likely this is a glitch in my code, but just to make sure, is there any issue I need to be aware of on Lion?


    Looks like there’s an embedded copy of the Syphon.framework in the Processing project? It’s probably beta 1, which has an issue with server discovery on Lion.

    Ideally don’t embed a copy of the framework but set things up to build a version from the project in SVN root instead, so it’s always up to date. For OpenFrameworks the directory structure on SVN does not reflect the distributed directory structure – instead there’s a project to build the framework and lay out the directory structure for distribution. Maybe more than you can be bothered with for now, but definitely something that should happen for Processing at some stage…

    If you want to just get things going for now, grab a built copy of the framework from the SDK at


    yeah, I was using an outdated version of the framework in the Processing library, thanks for the hint.

    I just uploaded the 0.4 package to the downloads section.

    Remaining issues to solve for future releases:

    1) reorganize the project structure, to make sure that both JSyphon and the Processing library always use the latest version of the framework

    2) use SyphonClient instead of SyphonNameboundClient. So the way this should work is by first getting the list of available servers and then letting the user choose one to create the client. Something like:

    Dictionary[] servers = Syphon.listServers();
    client = new SyphonClient(this, servers[0]);

    I was looking at the implementation of Simple Client, and the server selection appears to be done in the setSelectedServerDescriptions method. But things are not entirely clear to me. Where is setSelectedServerDescriptions called from? And how is the descriptions array computed?

    3) The Processing client has two ways of getting a frame: one using getGraphics(), which basically draws the frame texture to an FBO, and the other using the getImage() method, which copies the texture into the image pixel array using glGetTexImage. The first works fine, and it is the fastest because no CPU-GPU copies are involved; the second, however, doesn’t. The glGetTexImage call just returns a buffer filled with zeros. Any ideas?

    4) When I close a sketch with a running server, I get this error:

    2012-02-21 17:49:19.952 java[3808:7703] *** __NSAutoreleaseNoPool(): Object 0x1020a5fb0 of class NSCFDictionary autoreleased with no pool in place – just leaking

    I don’t create any dictionaries in the native code of the server class, so what could be the reason for this error?



    1. Great yep.

    2. Simple Client uses Cocoa bindings for its interface, so descriptions is an array containing the one object selected via the menu, drawn from the array of all servers held by an instance of SyphonServerDirectory. Which isn’t so “Simple” – [[SyphonServerDirectory sharedDirectory] servers] will give you an array of all servers.

    3. GL: Won’t Processing just let you hand out the original texture? What’s the point of the FBO stage? Pixel buffers: unfortunately glGetTexImage is allowed to fail if it can’t be bothered with the pixel transfer – and it seems it never can when using an IOSurface – you have to draw to an FBO with a texture attached then glGetTexImage from that texture.

    4. Make sure you have an NSAutoreleasePool in place for EVERY call into Cocoa code. If you set a breakpoint on __NSAutoreleaseNoPool() you should be able to see where it’s missing.
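    For point 3, the readback workaround might look roughly like this. This is a pseudocode-level GL sketch, not working code: it assumes a current context, `width`/`height`/`pixels` already set up, and the era-appropriate EXT framebuffer entry points.

```
/* Copy the IOSurface-backed Syphon frame into an ordinary texture via an
 * FBO, then read back from the copy, since glGetTexImage on the original
 * texture may legally return nothing. */

/* 1. Create a plain RGBA texture the size of the frame. */
glGenTextures(1, &copyTex);
glBindTexture(GL_TEXTURE_2D, copyTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);

/* 2. Attach it to an FBO and draw a quad textured with the Syphon frame. */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, copyTex, 0);
/* ... render a full-frame quad sampling the Syphon texture here ... */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

/* 3. Read the pixels back from the ordinary copy, which does honour
 * glGetTexImage. */
glBindTexture(GL_TEXTURE_2D, copyTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
```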


    Hey Andres,

    This sounds like great progress! I just downloaded the library and it worked great with the Simple Server that ships with Syphon, but I couldn’t get it to read from my FaceOSC Syphon app. That app serves as “FaceOSC Camera” and is perfectly visible in the Simple Client. You can download it here:

    Am I just getting the server name wrong? “FaceOSC Camera” is what I set it as in the OpenFrameworks code. Am I missing something here or is this a bug with the library?



    Hey Andres, just wanted to chime in and thank you for the work you’ve been doing for the JNI and Processing implementations. That’s huge!


    Yes, I actually had a difficult time connecting the Processing client to a Quartz Composer server. I tried the names that appeared in the patch components; in the end I was able to set up the connection using “Quartz Composer”, though I’m not exactly sure why, because Simple Client listed another name for it… so this is something that definitely needs more work. I think it is just a matter of properly implementing the server directory query, as bangnoise mentions earlier in this thread ([[SyphonServerDirectory sharedDirectory] servers], etc.).


    I too got it working with Simple Server, but I’m having trouble getting input from other sources… I’m also getting that memory leak with the SyphonServer. Great work though, Andres! Super excited about this.


    Got it working with Modul8!! Simple Client was reading it as “Main View – Modul8”, but I used “Modul8” in my sketch, similar to what you did with QC.
