Saturday, June 12, 2010

Is it just me, or does modern iterative modeling simply average out assumptions over large patched sets of pseudo-data?

The modern implementations remind me of the YouTube video that was uploaded, downloaded, and reuploaded 1000 times. The final result bears more similarity to any other source video put through the same process than to the source video actually used, and the final outcome is predictable from applying the most aggressive information-minimizing settings to the sampling function a handful of times: http://gizmodo.com/5555359/the-weirdness-of-a-youtube-video-re+uploaded-1000-times
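To make the analogy concrete, here's a minimal sketch (mine, not from the video or any particular modeling package) of repeated lossy re-encoding: blur plus coarse quantization applied over and over to two unrelated signals. The function names, kernel widths, and quantization levels are all illustrative assumptions; the point is just that the iterates drift toward each other regardless of the starting signal, and that a few very aggressive passes land roughly where many gentle ones do.

import numpy as np

def reencode(signal, levels=8, kernel_width=9):
    """One lossy pass: smooth (lose detail), then quantize (lose precision)."""
    kernel = np.ones(kernel_width) / kernel_width
    blurred = np.convolve(signal, kernel, mode="same")
    lo, hi = blurred.min(), blurred.max()
    if hi == lo:
        return blurred  # already collapsed to a constant
    q = np.round((blurred - lo) / (hi - lo) * (levels - 1)) / (levels - 1)
    return q * (hi - lo) + lo

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)
source_a = np.sin(2 * np.pi * 3 * x) + 0.3 * rng.standard_normal(x.size)
source_b = np.sign(np.sin(2 * np.pi * 7 * x)) + 0.3 * rng.standard_normal(x.size)

# 1000 gentle re-encodes of two different sources.
a, b = source_a.copy(), source_b.copy()
for _ in range(1000):
    a, b = reencode(a), reencode(b)

# A handful of aggressive passes on source A, as a cheap predictor of the limit.
aggressive = source_a.copy()
for _ in range(5):
    aggressive = reencode(aggressive, levels=3, kernel_width=51)

def rms(u, v):
    return float(np.sqrt(np.mean((u - v) ** 2)))

print("two 1000-pass results vs each other:   ", rms(a, b))
print("source A vs its own 1000-pass result:  ", rms(source_a, a))
print("5 aggressive passes vs 1000 gentle ones:", rms(aggressive, a))

Run it and the first and third distances come out small relative to the second: the re-encoded outputs resemble each other (and a cheaply computed aggressive limit) far more than they resemble the original source.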

What am I missing?
