Previously, Colin Burgess wrote in qdn.public.qnxrtp.advocacy:
Steve Munnings, Corman Technologies <firstname.lastname@example.org> wrote:
I read about it the other day…
Apparently, CBS has about 17 “robot” cameras spread around roughly a 170-degree arc of the stands, all trained at the same spot on the field.
They are all synchronized.
Apparently that “jerky” rotation is when they switch the viewpoint from
one camera to the next.
Looks weird, but cool…
I was a little disappointed that they didn’t “morph” between the
shots. Surely a cluster of PCs could do that in real time these days?
To make it look any good, you’d need to do some amount of feature-matching between the different viewpoints, and that’s really an AI-complete problem without some external assistance or very small deltas between viewpoints.
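To see why feature-matching matters, here’s a minimal sketch (my own illustration, not anything CBS actually does) of the naive alternative: a plain cross-dissolve between two camera frames, using NumPy with tiny synthetic images standing in for the views. Without correspondences, an object that sits at different pixel positions in the two views just ghosts as a double image instead of appearing to move.

```python
import numpy as np

def cross_dissolve(frame_a, frame_b, t):
    """Naive view interpolation: a linear blend between two camera
    frames at blend fraction t in [0, 1]. No feature matching is
    done, so mismatched features ghost rather than morph."""
    return (1.0 - t) * frame_a + t * frame_b

# Two tiny synthetic "camera views": the same bright square, but at
# a different position in each view (as if seen from another angle).
a = np.zeros((8, 8))
b = np.zeros((8, 8))
a[2:4, 2:4] = 255.0
b[2:4, 5:7] = 255.0

mid = cross_dissolve(a, b, 0.5)
# Halfway through the blend, both squares appear at half brightness:
# a ghosted double image, not a square halfway between the positions.
```

A real morph would first match features across the views, warp each frame toward the matched positions, and only then blend, which is exactly the hard correspondence problem described above.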
Then again, it might be entertaining to see the players morphing into each other, the football, the 20-yard line, etc., through the panning. “Hey, wasn’t that player on the other team in the last frame?”
It would probably be doable with about 5 times as many cameras, but by then you have enough data that a standard cluster would be completely swamped. That only leaves the option of writing a cheque with a really, really big number on it to someone like SGI for a real supercomputer.
Tony Mantler | Proud ---- Days since the last
QNX Consulting | of our | 27 |
email@example.com | Record ---- “Gerbil Incident”