Forum Replies Created
By the way, love the moniker.
This is definitely something we plan to have.
Well, not to toot our own horns, but thanks to Tom's amazing work Syphon is very stable. We've had absolutely zero crash reports so far that were actually caused by Syphon. We urge you to try it out before the full 1.0, which at this point will probably only add a few small API additions.
You can init and run QCRenderers from multiple threads (for example, a CVDisplayLink thread), but you have to be sure to init them, call executeAtTime, and dealloc them all from *the same* thread. You cannot bounce them across threads.
So you can have multiple QCRenderers on multiple threads, as long as each one never leaves the thread it was alloc/init'd on.
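The thread-confinement rule above can be sketched like this; the names (`glContext`, `pixelFormat`, `compositionPath`, `shouldStop`) are illustrative placeholders, and error handling is omitted:

```objc
// Sketch: one QCRenderer confined to a single worker thread.
// Create, use, and destroy the renderer on this thread only.
- (void)renderThread
{
    @autoreleasepool {
        QCRenderer *renderer =
            [[QCRenderer alloc] initWithOpenGLContext:glContext
                                          pixelFormat:pixelFormat
                                                 file:compositionPath];
        while (!self.shouldStop) {
            [renderer renderAtTime:[NSDate timeIntervalSinceReferenceDate]
                         arguments:nil];
        }
        [renderer release]; // never hand the renderer to another thread
    }
}
```

The point is simply that alloc/init, rendering, and release all happen inside one thread's run loop; nothing about the renderer crosses a thread boundary.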
Syphon does not init or run QC Compositions / QCRenderers in the framework; it's agnostic about rendering back ends, as long as they're OpenGL based 🙂 So it can't help you with opening compositions.
To be clear, however, you can call setValue:forInputKey: with values that come from different threads. I.e., if you render a comp and get a CVOpenGLBuffer image out of it, you can then move that to another thread. I don't think that should leak, as you are responsible for cleaning up the value of the output key. You do need to retain it, and ensure locking happens in threaded scenarios.
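As a rough sketch of that hand-off (the key names `outputImage` / `inputImage` and the `lock` object are assumptions for illustration, and locking is reduced to the bare minimum):

```objc
// Sketch: pull a frame out of a comp on one thread, hand it to another.
id frame = [renderer valueForOutputKey:@"outputImage"
                                ofType:@"CVOpenGLBuffer"];
[frame retain];                      // you own it while it crosses threads

[lock lock];
[otherRenderer setValue:frame forInputKey:@"inputImage"];
[lock unlock];

[frame release];                     // release once the consumer is done
```

The retain/release pair is what keeps the buffer alive across the thread boundary; the lock is only there to serialize access to the consuming renderer's inputs.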
I believe that is correct; some software does use QCRenderers on threads other than the main thread without issue.
Is that information useful?
November 11, 2010 at 2:20 pm in reply to: jit.gl.syphonclient: unable to load object bundle executable #4533
No worries, happens to the best of us 🙂
November 11, 2010 at 1:25 pm in reply to: jit.gl.syphonclient: unable to load object bundle executable #4531
Read the readme?
I'm not 100% sure. Part of the issue is that Photoshop plugins tend to be used destructively, meaning that if you apply a filter, the filter exists only while pixels are being processed, not before or after, and only for the duration of the processing time.
Other plugin types, from looking at the SDK, do not seem appropriate for an "always on", canvas/document-reading sort of solution. However, I know very little about the Photoshop plugin SDK. I'd love to be proven wrong!
That would be amazing.
If arbitrary LD_PRELOAD overriding or "insertion" could happen, and be turned on optionally, that would be an awesomely powerful solution. Syphon currently has a working client for Unity, and since frames stay on the GPU, it would be *very very fast*.
I will admit I'm not familiar with that technique at all. Let us do some research, but LD_PRELOAD should work on OS X, since OS X has a POSIX-style linker, so theoretically shimming into some app could... maybe... work?
This was a thought I had too when starting out, did not find any info on it. Thanks for the heads up.
To answer your questions specifically, though:
1a) Right now, no. The app has to "opt in" to Syphon, either via a plugin or by adopting Syphon.framework natively.
2) I would say yes, as Syphon is optimized to stay on the GPU (no slow readback to the CPU) and can support very high resolutions and framerates because of this.
If SyphonNameBoundClient is not working, you ought to double-check that you are passing both the appName and the serverName correctly, with the correct case. The QC, FFGL, and Jitter implementations all use the same class with no issues.
As for GL_TEXTURE_2D, Syphon will *never* be able to output those. You will have to render into your own FBO with a 2D texture attachment to "change" the texture target. Sorry, this is a current limitation of the underlying API (IOSurface).
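The FBO workaround looks roughly like this: attach a GL_TEXTURE_2D to your own framebuffer and draw the Syphon rectangle texture into it. This is a sketch only; it assumes a current GL context with the EXT_framebuffer_object extension, and `width`/`height` are placeholders:

```c
/* Sketch: retarget a Syphon GL_TEXTURE_RECTANGLE frame into a
   GL_TEXTURE_2D via your own FBO. Error checks omitted. */
GLuint fbo, tex2D;

glGenTextures(1, &tex2D);
glBindTexture(GL_TEXTURE_2D, tex2D);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex2D, 0);

/* ... bind the rectangle texture from the Syphon client and draw a
   full-viewport quad here; the result lands in tex2D ... */

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```

After the draw, `tex2D` holds a copy of the frame with the GL_TEXTURE_2D target, which you can use anywhere 2D textures are required.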
Sure, I have a beta of Isadora Core and have messed with it a bit. Just because it can load Quartz Composer comps and Core Image units does not mean Isadora is using textures throughout the image-processing pipeline. 20 fps indicates to me that not only is it not using textures all the time (or, in fact, at all), it's also using less-than-adequate readback from the GPU to the CPU, if it even uses the GPU for any intermediate processing.
I'm not familiar enough with Isadora to make grand assumptions, but you should ask whether Isadora uses the GPU throughout the image-processing pipeline, and if not, what you should do in the patch to avoid any readback to the CPU.
I'm not sure Isadora has a completely GPU-accelerated pipeline, to be honest. A fully GPU pipeline is one of the things that makes the Unity, Quartz Composer, FFGL, and Jitter implementations so fast: everything stays on the GPU.
Thanks for the explanations, though; glad it was somewhat working. Can you post a how-to on getting it up and running, and what is needed? Maybe someone can experiment and see if a faster method is possible.
Thanks for the help. I just committed a working ofxSyphonClient to SVN, along with a new project file. Feel free to base the Cinder client port on it; it should be pretty straightforward!
Thanks for your help and enthusiasm!
Hi. I just checked a working ofxSyphonClient into SVN; it is more of an addon than a static implementation in the app. Feel free to try it out and let us know how it works for you. I've also updated an example so both clients and servers operate in the test app.
Thanks for the enthusiasm! You can find the code in the SVN repository on Google Code 🙂
Oh, I never looked at main.mm since it's usually "the same old thing"; sorry about missing that. I'm going to attempt a client shortly. I'll put it up on the SVN, add a readme, put your info on it, and whatnot.
Thanks for helping out; it's much appreciated! I have not used Cinder or gotten heavily into it, but I'll have to check it out. Thanks again!
OK, here is an updated and fixed example using autorelease pools and proper server-retirement protocol. I also extended the server so it can publish a passed-in texture. Technically speaking it should not need its own internal ofTexture, but having a one-shot publishScreen function is kind of handy. I think it could use a bit of cleaning up, but all in all it's pretty solid and works exactly as the QC implementation does, which is nice.
Here is a link to the updated Xcode project.
Hey, interesting. I notice you are doing readback to the CPU via ofImage; you can make this a lot faster by using an ofTexture instead. When using ofImage for that, as far as I know, you read the pixels back to the CPU rather than keeping the data on the GPU, which is the main advantage of Syphon in general. Also, NSAutoreleasePools need to be created, and the server needs to be stopped and released in the destructor. Otherwise, awesome!
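A rough openFrameworks sketch of the GPU-resident path being suggested; the FBO usage and the `publishTexture` call are illustrative of the addon under discussion, not a definitive API:

```cpp
// Sketch: draw into an ofTexture-backed FBO and publish that,
// instead of pulling pixels back through an ofImage.
ofFbo fbo;
ofxSyphonServer server;

void setup() {
    fbo.allocate(ofGetWidth(), ofGetHeight(), GL_RGBA);
    server.setName("of Server"); // server name is an example
}

void draw() {
    fbo.begin();
    // ... draw the scene into the FBO ...
    fbo.end();

    // Publish the GPU-resident texture; no CPU readback occurs.
    server.publishTexture(&fbo.getTexture());
}
```

The key difference from the ofImage path is that the frame never leaves video memory: the FBO's color attachment is handed straight to Syphon.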
I've changed your ofxSyphonServer; if it's OK with you, I can throw it into SVN, giving you credit for the initial implementation.
Thanks so much!
Hey that is *fantastic*. Let me check those out, and I can put those in SVN and build off of them.
Thanks so much!
Yes, this is how async readback works. I suggest you check the Max/Jitter GL docs. Of course the Syphon client needs to render to the destination context before the async read reads it; otherwise there will be nothing there to capture. This is the painter's algorithm, and every drawing app expects it.
If the frame isn't there yet, how do you expect anything to read it?
Hi Karl, we don't have a JNI example right now, but if you could drum up a very basic Java AWT example that draws a BufferedImage, we could look at it and think about how Syphon might shim into it. No promises.
Take a look at, and maybe chime in on, the Syphon Processing Implementation Development thread, where the user skawtus is attempting a JNI + Processing implementation.
Perhaps a basic JNI example could be done first, to help bootstrap other environments?
Brian, is the texture tearing a VBL sync issue? Unity has VBL disabled by default.