Hi Vade 🙂 Thanks for the reply.
I was under the impression that it could put out live data but was basically meant for previs. I'd better go read it again.
Could you point me in the right direction for a workflow and what hardware/software is needed? Obviously some type of mocap system will be required, but unfortunately they're either really expensive (minor hurdle) or only run on Windows (major hurdle, since all my puters are Macs and/or Hackintoshes), which compounds the expense: a mocap system AND a Windows PC. It also seems tough to find one system that does it all — body, facial, and fingers. I can maybe live without finger capture, and possibly without facial capture, since I can do voice sync with Papagayo through Blender. But that route takes too much time, and it rules out doing a "live" broadcast over the internet.
(I wish I could explain it all better, but I'm a bit brain-farted at the moment.)
I also thought of getting individual mocap systems and running them in line with each other — i.e., a body mocap system plus a separate facial mocap system. Syphon seems the most usable way to keep things synced, as long as I can adjust the output timing of each stream to work around the latency issues.
If you have any suggestions, or need more info about my ideas, don't hesitate to let me know.