Have an idea, need help choosing the pieces.

  • #19776
    DiscoStew
    Participant

I have an idea to create a web-based show co-hosted by a live actor and an animated (live mocap) character. I figure Syphon would be needed to route the animated character’s feed and correct its timing.

    Can anyone point me towards the software and hardware I would need to accomplish this?
I don’t think I can afford a Vicon system for mocap, but I haven’t really looked into the cost yet. As for software, what 3D animation system can capture a live actor and turn that into a video stream for Syphon? Unity 3D Pro can’t stream live output of the 3D actor.

    I am sure there are a million ways to accomplish this, but I am hoping to do this as a “one-man band.”

    #19779
    vade
    Keymaster

“Unity 3D Pro can’t stream live output of the 3D actor.”

    Sure it can.

We had a demo of a Unity3D rig set up at Framestore (a very large VFX company) that streams live mocap data from London to NYC into Unity3D.

    Using Syphon for Unity3D, you could capture the scene live, and send it to any compositing or other software you’d like.
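
For the receiving side, here is a minimal sketch of how another Mac app could discover the Unity feed. It assumes the Objective-C Syphon framework is linked and bridged into Swift; `SyphonServerDirectory`, the `SyphonServerDescription…Key` constants, and the announce notification are part of Syphon’s public API, though the exact bridged Swift spellings may differ slightly.

```swift
import Foundation
// Assumes Syphon.framework is linked and imported via a bridging header:
//   #import <Syphon/Syphon.h>
// Bridged Swift spellings of the Objective-C names may vary slightly.

// List every Syphon server currently published on this machine. Unity
// (running the Syphon for Unity plugin) shows up here once it publishes.
let directory = SyphonServerDirectory.shared()
for case let description as [String: Any] in directory.servers {
    let app  = description[SyphonServerDescriptionAppNameKey] as? String ?? "?"
    let name = description[SyphonServerDescriptionNameKey] as? String ?? "?"
    print("Syphon server: \(app) / \(name)")
}

// Servers come and go at runtime, so a compositor would normally watch for
// announcements and attach a SyphonClient to the Unity entry when it appears.
NotificationCenter.default.addObserver(
    forName: NSNotification.Name(SyphonServerAnnounceNotification),
    object: nil,
    queue: .main
) { note in
    print("New Syphon server announced: \(String(describing: note.object))")
}
```

From there, a `SyphonClient` created with the chosen server description delivers each published frame as an IOSurface-backed OpenGL texture, which is what lets the Unity scene show up in other apps with essentially no copy overhead.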

    #19824
    DiscoStew
    Participant

Hi Vade 🙂 Thanks for the reply.

I was under the impression that it could put out live data, but that it was basically for previs. I’d better go read up on it again.

Could you point me in the right direction for a workflow and the hardware/software needed? Obviously some type of mocap system will be required, but unfortunately they are either really expensive (minor hurdle) or only run on Windows (major hurdle, since all my computers are Macs and/or Hackintoshes), compounding the expense of getting a mocap system AND a Windows PC. It also seems tough to find a mocap system that does it all: body, facial, and fingers. I can maybe live without finger capture, and possibly without facial capture, since I can do voice sync with Papagayo through Blender. But that takes too much time and rules out going “live” with a broadcast over the internet.

(I wish I could explain it all better… but I’m a bit brain-farted at the moment)

I also thought of getting individual mocap systems and running them in line with each other, i.e. a body mocap system and a separate facial mocap system. Syphon would be most useful for keeping things synced, as long as I can adjust the output timing of everything to get around the latency issues.
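
As a rough illustration of that timing adjustment (not tied to any particular mocap product), the faster stream can be run through a fixed delay so both streams reach the compositor on the same timeline. Everything below, including the 80 ms figure, is a made-up placeholder:

```swift
import Foundation

// Hypothetical sketch: align two capture streams (say, body mocap and facial
// mocap) that arrive with different latencies, by delaying the faster one.

struct TimedFrame {
    let timestamp: TimeInterval   // capture time in seconds
    let payload: Data             // pose data, a texture handle, etc.
}

/// Holds frames back by a fixed offset, releasing each one once it is
/// at least `delay` seconds old.
final class DelayLine {
    private var queue: [TimedFrame] = []
    private let delay: TimeInterval

    init(delay: TimeInterval) { self.delay = delay }

    func push(_ frame: TimedFrame) { queue.append(frame) }

    /// Returns every queued frame whose capture time is `delay` or more
    /// in the past, in arrival order.
    func pop(now: TimeInterval) -> [TimedFrame] {
        let due = queue.prefix { now - $0.timestamp >= delay }
        queue.removeFirst(due.count)
        return Array(due)
    }
}

// If the body rig lags the facial rig by roughly 80 ms (a made-up number;
// in practice you would measure it, e.g. with a clap test), delay the
// facial stream by that difference and feed both outputs to the compositor:
let facialDelay = DelayLine(delay: 0.080)
facialDelay.push(TimedFrame(timestamp: Date().timeIntervalSince1970,
                            payload: Data()))
```

The same idea scales to more streams: measure each path’s latency once, then pad every faster path up to the slowest one.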

    If you have any suggestions or need any more info about my ideas, don’t hesitate to let me know.
