Forum Replies Created
This is definitely the best way to go about things. So much better than the previous routes I was trying.
It’s difficult to get the actual image data out. Not impossible, but difficult enough that it’s taking me longer, since I can’t work on it 24/7.
If I knew more about WebRTC, I think I could do a better job of dealing with it. Right now, I’m previewing the capture in a video element, then writing that to a canvas and grabbing the pixel data from there. It’s kind of sucky, and each frame is uncompressed at about 8 MB. When I send that over a WebSocket, V8 just chokes and Node dies with some sort of segfault. I’m going to move to an Objective-C socket server in the near future, which would also be better for getting frames into OpenGL / Syphon faster.
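For anyone wondering where the ~8 MB figure comes from, here’s the back-of-the-envelope math. The 1920×1080 resolution is my assumption (it isn’t stated above), but canvas `getImageData` does hand back 4 bytes per pixel (RGBA), and at HD resolution that lands right around 8 MB per frame:

```javascript
// Uncompressed frame size: width * height * bytes-per-pixel.
// getImageData returns RGBA, so 4 bytes per pixel.
function bytesPerFrame(width, height, bytesPerPixel = 4) {
  return width * height * bytesPerPixel;
}

const frame = bytesPerFrame(1920, 1080); // 8,294,400 bytes (~7.9 MB)
console.log((frame / (1024 * 1024)).toFixed(1) + " MB per frame");

// At 60 fps that's roughly half a gigabyte per second over the socket,
// which is why pushing raw frames through a WebSocket falls over.
const perSecond = frame * 60;
console.log(Math.round(perSecond / (1024 * 1024)) + " MB/s at 60 fps");
```

So even before compression or protocol overhead, full-framerate HD is hundreds of MB/s of raw pixels, which explains the choking.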
Working on it, just slowly.
I can verify that WebGL is captured via the tab capture output. I’m trying to nail down the framerate / quality stuff before I go too much further. For me it’s full framerate or nothing.
I’m 99% sure it does. I have a chromestick so I could test it tomorrow and let you know.
It’s not going to be as fast as keeping it all GPU bound, but I’m not too worried. There might be some consistent lag, but I estimate it’ll be a few frames at most. (Early hopeful assumption)
I’m actively working on this now. I’ll be building a two-part app… one part is a Chrome extension that sends video out via the new Tab Capture API. Then, I’ll have a receiver app which can send the output to Syphon, Blackmagic cards, or Tricaster.
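For anyone curious about the extension side: the Tab Capture API requires the `tabCapture` permission in the extension manifest. A minimal sketch (the name, version, and file names are just illustrative, not from the actual project):

```json
{
  "name": "Tab Capture Sender",
  "version": "0.1",
  "manifest_version": 2,
  "permissions": ["tabCapture"],
  "browser_action": { "default_title": "Start capture" },
  "background": { "scripts": ["background.js"] }
}
```

The background script then calls `chrome.tabCapture.capture()` to get a MediaStream of the active tab, which is what gets previewed in the video element.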
The first proof of concept will work with Syphon exclusively, because Vade and company have done such a great job making it simple to publish frames / read frames from Syphon.
Anyone who’d like to beta test can email:
Oh also, license is either MIT or GPL. (Probably MIT)
June 30, 2011 at 8:42 am in reply to: V002 Movie Player beta 4 issues with MP4 containers #4237
I second the request for email updates, for the record. 🙂