Forum Replies Created
vade
Keymaster
I have no idea what you are asking or saying – apologies if English is not your first language, but can you try to be a bit more clear and specific?
From what I can gather, it sounds like Final Cut is not seeing the full length of your movie. That sounds like an issue with how you are importing it into your Final Cut Pro project. If you are using FCPX, you are out of luck.
Does Quicktime play it back correctly?
vade
Keymaster
Collada has worked well for me. We used it for the Aphex Twin tour with the model loader. Give that a shot.
As for separate track IDs, this is really dependent on how the exporter and model format work. Collada is, again, the best choice for the format.
vade
Keymaster
Ensure that your Syphon objects share the named OpenGL context that the v-module patches use.
Ensure that they are banged (rendered) at an appropriate time.
It looks like you’d want to put the [jit.gl.syphonserver mrender1FXslab @servername v-module] after the red jit.gl.slab above the video plane in this image.
vade
Keymaster
Syphon will output at the native resolution of VDMX, pixel for pixel. Have you set your VDMX preferences or layer sizes appropriately?
vade
Keymaster
Sure, 100%. One “issue” with Quartz Composer (actually, ahem, Quicktime) is that some capture devices are not reported when Quartz Composer runs in 64-bit mode. Get Info on the QC editor application and select launch in 32-bit mode.
Now you can make a “Video Input” patch, get info on it, and you should be able to select your capture card. Then make a Syphon Server object and pass the video output of the Video Input patch into the Syphon Server’s image input port. Name your server if you want, and run the comp. Done and done. For a small performance increase, ensure you have “background erasing” off, even though you are not drawing anything.
vade
Keymaster
This could also be an issue simply with:
a) Fill rate – you may be drawing way too much. GPUs can only fill so many pixels per second. Both Unity and VDMX can easily swamp a GPU with draw calls and fill rate issues on their own. Going full screen definitely affects fill rate, as you are painting a larger portion of the screen.
b) VRAM – you may be using more VRAM and hitting swap. VRAM in OS X is virtual, so you can allocate textures all day long, but once you get over the hardware limit you will use software (main memory) and things will drastically slow down. Going full screen changes back buffer sizes which, if you are near the VRAM max, can push you over the edge into swap land.
You can check for both with OpenGL Driver Monitor. Look at your video memory and see if it gets too high. For fill rate, I think it’s “GPU core” usage, but “CPU wait for GPU” is also a good indicator of rendering latency.
Does changing the Unity scene to something simpler make a difference? Does VDMX with a single movie the same size, no effects, one layer, cause the issue? Basically I am curious whether you are abusing your system or whether this is a genuinely weird issue. Not saying there is not an issue, but this info will help us troubleshoot it.
vade
Keymaster
Rather than using and passing dictionaries, it might make sense to use the “SyphonNameBoundClient”, which is found in the QC, Jitter and openFrameworks implementations. The idea is that you can pass strings for just a server name, or an app name, and the internals of the name-bound client will find the best match.
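If it helps, here is a rough sketch of that “best match” idea written against the public Syphon framework API. The SyphonServerDirectory class and description keys are from the framework; the matching function itself is just an illustration, not the actual SyphonNameBoundClient code:
#import <Syphon/Syphon.h>

// Illustrative helper (not the real SyphonNameBoundClient): pick the server
// description that best matches a server name and/or app name.
static NSDictionary *BestMatchingServer(NSString *serverName, NSString *appName)
{
    NSDictionary *partialMatch = nil;
    for (NSDictionary *description in [[SyphonServerDirectory sharedDirectory] servers])
    {
        NSString *name = [description objectForKey:SyphonServerDescriptionNameKey];
        NSString *app = [description objectForKey:SyphonServerDescriptionAppNameKey];
        BOOL nameMatches = ([serverName length] == 0) || [name isEqualToString:serverName];
        BOOL appMatches = ([appName length] == 0) || [app isEqualToString:appName];
        if (nameMatches && appMatches)
            return description;         // everything that was supplied matched
        if (nameMatches || appMatches)
            partialMatch = description; // keep as a fallback
    }
    return partialMatch;
}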
Thanks *so much* for being interested and lending your talent. We really appreciate it.
vade
Keymaster
In a future update to the Syphon framework, it might be possible to request a CIImage directly from Syphon. There are calls to create a Core Image image from an existing IOSurface. No promises, but it might be useful in some cases, especially for applications not using OpenGL that leverage only Core Image for their processing pipeline.
Let me discuss it with bangnoise and see if it makes sense. There are lots of gotchas, so again, no promises.
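For what it’s worth, the Core Image call in question already exists. A minimal sketch, assuming you somehow have your hands on an IOSurfaceRef (Syphon does not expose its surfaces publicly today, so the surface here is a placeholder):
#import <QuartzCore/QuartzCore.h>
#import <IOSurface/IOSurfaceAPI.h>

// Minimal sketch: wrap an existing IOSurface in a CIImage.
// Where the IOSurfaceRef comes from is the open question.
static CIImage *CIImageFromIOSurface(IOSurfaceRef surface)
{
    return surface ? [CIImage imageWithIOSurface:surface] : nil;
}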
vade
Keymaster
You can generate a CIImage directly from a SyphonImage (a texture) output from a SyphonServer via
+ (CIImage *)imageWithTexture:(unsigned int)name size:(CGSize)size flipped:(BOOL)flag colorSpace:(CGColorSpaceRef)cs
Just make sure the GL context you init your Core Image context with is shared with, or the same as, Syphon’s, via
+ (CIContext *)contextWithCGLContext:(CGLContextObj)ctx pixelFormat:(CGLPixelFormatObj)pf colorSpace:(CGColorSpaceRef)cs options:(NSDictionary *)dict
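Put together, a rough sketch might look like this (assuming the SyphonImage comes from your existing Syphon code, and that the CGL context and pixel format you pass in are the ones your app already shares with Syphon; the generic RGB colour space is just a placeholder choice):
#import <Foundation/Foundation.h>
#import <OpenGL/OpenGL.h>
#import <QuartzCore/QuartzCore.h>
#import <Syphon/Syphon.h>

// Sketch: wrap a SyphonImage’s GL texture in a CIImage.
static CIImage *CIImageFromSyphonImage(SyphonImage *frame)
{
    CGColorSpaceRef cs = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CIImage *image = [CIImage imageWithTexture:[frame textureName]
                                          size:NSSizeToCGSize([frame textureSize])
                                       flipped:NO // may need YES depending on your pipeline
                                    colorSpace:cs];
    CGColorSpaceRelease(cs);
    return image;
}

// Sketch: build the Core Image context on a CGL context shared with Syphon’s.
static CIContext *CIContextSharedWithSyphon(CGLContextObj cgl_ctx, CGLPixelFormatObj pf)
{
    CGColorSpaceRef cs = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CIContext *context = [CIContext contextWithCGLContext:cgl_ctx
                                              pixelFormat:pf
                                               colorSpace:cs
                                                  options:nil];
    CGColorSpaceRelease(cs);
    return context;
}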
As for AVFoundation, it is a replacement for Quicktime and sits “above” the layer Syphon works at (IOSurface / Core Video). IOSurface is already integrated heavily with other imaging technologies in 10.6, and 10.7 adds a few small changes/additional features. As far as AVFoundation incorporating anything from Syphon goes, I have no idea why that would ever happen.
Syphon is built on top of IOSurface, and leverages some of Apple’s inter-process communication APIs to handle announcing frame availability. As long as IOSurface is around, Syphon will be around.
vade
Keymaster
Well, part of the issue is how openFrameworks handles paths, add-ons and the like. It’s really kind of awful and difficult to deal with, in my opinion. This is clearly a subtle linking issue, so it’s one of those things where stepping through and examining every path assumption with a fine-toothed comb is *usually* the solution. It sucks, but I’ve had to deal with it a bunch of times myself for other add-ons and things. One big gotcha is usually spaces in paths. Fun times.
Glad you are able to keep working.
vade
Keymaster
If you check the framework search paths for your target in both projects, does *everything* match? You might not be searching for frameworks in the right location, which would make the linker fail.
vade
Keymaster
Possibly, maybe. Let me see if there is a plugin for the forum software to do that 🙂
vade
Keymaster
Also, you may be having an issue where both instances of the OSC composition are attempting to use the same port. If that is the case, do this:
0) Open the kinect composition.
1) Copy.
2) Close the above original composition, ensuring both editor and viewer are closed
3) Open the Syphon comp
4) Paste.
You (generally) can’t have two applications, or two instances of a socket, open on the same port (listening or sending). If you have two open versions of some OSC plugin, it won’t like that, I suspect.
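For the curious, that rule is just BSD socket behaviour, nothing Syphon- or QC-specific. A tiny illustration (port 9000 is an arbitrary example): the second bind to an in-use UDP port fails with “Address already in use”, which is exactly what two copies of the same OSC plugin would run into.
#import <Foundation/Foundation.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);              // arbitrary example port
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    int first = socket(AF_INET, SOCK_DGRAM, 0);
    int second = socket(AF_INET, SOCK_DGRAM, 0);

    if (bind(first, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        NSLog(@"first bind failed: %s", strerror(errno));
    if (bind(second, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        NSLog(@"second bind failed: %s", strerror(errno)); // EADDRINUSE expected
    return 0;
}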
vade
Keymaster
Also try making a new blank composition and pasting the Kinect composition into it. If the exception still happens, it has nothing to do with us.
June 16, 2011 at 12:23 pm in reply to: simple/stupid question processing & resolume with syphon #4225
vade
Keymaster
sketch_jun15a
vade
Keymaster
Well, Modul8 does not support native clients, so that means you are using QCRehab and a Quartz Composer patch. This is really an unofficial way of getting Syphon into Modul8.
Does “Simple Client” show slow performance from the video sent from MaxMSP/Jitter?
Does the Syphon Server help file for Jitter show slow performance in “Simple Client”?
Unfortunately, due to how GarageCube implemented Syphon in the latest Modul8, they chose not to expose “clients” (getting frames from outside of Modul8), only a single “server” to send frames out (to MadMapper, etc.).
vade
Keymaster
So you are sending from QC to Modul8? Modul8 is showing a lower framerate? Can you *please* give specifics as to what you are doing and how?
Please consult:
June 11, 2011 at 9:45 pm in reply to: jitter opengl optimization for syphon, jitter to vdmx, vdmxout vs syphonclient #4764
vade
Keymaster
1) Yes. Use the same texture upload path, but send the texture to Syphon and/or a video plane.
2) Beta 8 has Syphon built in. Consult the VDMX forum for how to use it. Basically, it’s a built-in source. I suggest you use that; it’s a touch more efficient than the QC plugin method needed to get beta 7 working with Syphon (beta 7 requires the separate QC Syphon plugin for it to work, beta 8 “just works”).
3) Yes. Use the Jitter Syphon Server and VDMX B8 client will see it.
4) Not applicable.
5) Syphon Recorder will take images from Jitter if you use the Jitter Syphon server. The Jitter Syphon server takes in textures. You can copy the “screen” in Jitter to a texture, I think. That should in theory do it. I’m sure you can finagle it.
vade
Keymaster
No idea. We did not write Arkaos 😉
It could be how Arkaos handles flipped textures, by flipping the modelview matrix of the OpenGL scene when finishing rendering. If it inputs a flipped image, well, it’s gonna be flipped. You can fix that in Quartz pretty easily, though. Does it do it for *every source*, every live camera input, and even for rendered sources (like Quartz sources, or built-in generative sources)? If so, un-flip it in the QTZ 🙂
vade
Keymaster
If Quartz Composer is crashing, you likely have something like QCRehab installed somewhere. I would suggest removing it. Once you get Quartz to open, you can manually set the size of the Render in Image patch to the size of the incoming image (via the Image Dimensions patch), or set the pixel width/height to 0 so it will size to the *destination* context (usually the full window). I would suggest using the input image size, as outputting at the context/destination size is kind of unnecessary, in my opinion.