It’s difficult to get the actual image data out. Not impossible, but difficult enough that it’s taking me longer, since I can’t work on it 24/7.
If I knew more about WebRTC, I think I could do a better job of dealing with it. Right now I’m previewing the capture in a video element, then drawing that to a canvas and grabbing the pixel data from there. It’s kind of sucky, and each uncompressed frame is about 8 MB. When I send that over a WebSocket, V8 just chokes and Node dies with some sort of segfault. I’m going to move to an Objective-C socket server in the near future, which would also be better for getting the frames into OpenGL / Syphon faster.
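For the curious, the video-to-canvas-to-pixels dance looks roughly like this. This is a sketch, not my exact code, and the function names are made up; it also shows where the 8 MB figure comes from, assuming a 1080p RGBA capture:

```javascript
// Raw RGBA frame size: width * height * 4 bytes per pixel.
// At 1920x1080 that's 8,294,400 bytes -- roughly the 8 MB per frame
// that's choking the WebSocket.
function frameBytes(width, height) {
  return width * height * 4;
}

// Sketch of the grab step (browser-only; assumes the video element is
// already playing the WebRTC capture stream).
function grabFrame(video, canvas) {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  ctx.drawImage(video, 0, 0);
  // Returns an ImageData whose .data is a Uint8ClampedArray of RGBA bytes.
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}
```

Shipping that Uint8ClampedArray over a socket uncompressed is exactly the part that falls over, hence the plan to hand frames to a native server instead.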
Working on it, just slowly.