I have this idea of tagging video with video. Tagging video with text (like in YouTube or Google Video) is extremely practical, but what if you wanted to create an association that was less about being machine readable and more about a wholly visual/human experience? Because the thing about the reduction from image to text is that you often lose information in the process. You are also being told what you see before you see it, so in a way, your mind is being made up for you as to what it is. But besides all that, if you could just move from one video to another, it would simply provide for a more fluid visual experience.
I have several ideas about how it could work; here is one versatile, functional model. When watching a video, you would be given the option to tag it. A tagger would be asked to provide the following information in some form (a rough sketch of it as a data structure follows the list):
video to be tagged (URL)
start point
end point
tag with this video (URL)
start point
end point
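To make that concrete, here's a minimal sketch of what one of these tags might look like as a data structure. This is just an illustration in TypeScript with made-up field names (times in seconds), not any existing format:

```typescript
// A hypothetical shape for a video-to-video tag: a time range in the
// source video is associated with a time range in another video.
interface VideoTag {
  sourceUrl: string;   // video to be tagged
  sourceStart: number; // start point in the source video (seconds)
  sourceEnd: number;   // end point in the source video (seconds)
  taggedUrl: string;   // the video used as the tag
  taggedStart: number; // start point in the tagged video (seconds)
  taggedEnd: number;   // end point in the tagged video (seconds)
}

// Example: tag one minute of a speech with thirty seconds of another clip.
const exampleTag: VideoTag = {
  sourceUrl: "https://example.com/speech.mp4",
  sourceStart: 120,
  sourceEnd: 180,
  taggedUrl: "https://example.com/rebuttal.mp4",
  taggedStart: 45,
  taggedEnd: 75,
};
```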
The video you were watching would appear in a main viewer and other videos that were associated with that video would appear in a lineup below. If you wanted to see an associated video, you could click on it and it would shift up into the main viewer, cued to the point specified by the tagger. Once you reached the specified end point, the associated video would slide back down into place and the main video would reappear, cued to the point where you left off.
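Here's a rough sketch of that swap-in/swap-out behavior in the browser, assuming a single main video element and the VideoTag shape sketched above (the function names are mine, not part of any existing player API):

```typescript
// Load a URL into the main viewer and seek to a given time once the
// metadata is available (seeking before that point is unreliable).
function cueVideo(main: HTMLVideoElement, url: string, time: number): void {
  main.src = url;
  main.addEventListener(
    "loadedmetadata",
    () => {
      main.currentTime = time;
      main.play();
    },
    { once: true }
  );
}

// Play an associated clip in the main viewer, then return to the main
// video at the point where the viewer left off.
function playAssociatedClip(main: HTMLVideoElement, tag: VideoTag): void {
  const resumeUrl = main.currentSrc;
  const resumeTime = main.currentTime;

  // Swap the associated video in, cued to the tagger's start point.
  cueVideo(main, tag.taggedUrl, tag.taggedStart);

  // Watch the playhead; once it passes the tagger's end point, swap back.
  const onTimeUpdate = () => {
    if (main.currentTime >= tag.taggedEnd) {
      main.removeEventListener("timeupdate", onTimeUpdate);
      cueVideo(main, resumeUrl, resumeTime);
    }
  };
  main.addEventListener("timeupdate", onTimeUpdate);
}
```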
Perhaps there could be two modes: one where associated videos only appeared below while the main video was passing through the timecode they were tagged to, and another where all associated videos appeared below and clicking one would bring you to the point in the main timecode with which it was associated.
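Either mode is straightforward to express over a list of tags; again, a hypothetical sketch building on the VideoTag shape above:

```typescript
// Mode one: show only the tags whose range in the main video contains the
// current playhead position.
function tagsAtCurrentTime(tags: VideoTag[], currentTime: number): VideoTag[] {
  return tags.filter(
    (tag) => currentTime >= tag.sourceStart && currentTime <= tag.sourceEnd
  );
}

// Mode two: show every tag; clicking one jumps the main video to the point
// in its timecode where the association was made.
function jumpToAssociation(main: HTMLVideoElement, tag: VideoTag): void {
  main.currentTime = tag.sourceStart;
}
```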
This is a simple interpretation based on a single video. Eventually it would make sense to have a whole web where you could move from one video to another to another. But I like the idea of being able to come back to where you started. It would be great if there were some sort of dynamic diagram alongside the viewing setup that spatially mapped your movement from video to video, so it would be easy to get back to your own beginning.
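The getting-back-to-your-beginning part could be as simple as keeping a trail of where you've been, which is also what the diagram would draw from. A minimal sketch, with names I've invented for illustration:

```typescript
// A breadcrumb trail of the viewer's movement from video to video, so the
// path can be drawn as a diagram and walked back to its beginning.
interface Visit {
  url: string;
  leftOffAt: number; // playhead position (seconds) when the viewer moved on
}

class ViewingTrail {
  private visits: Visit[] = [];

  // Record the video being left and where the viewer left off in it.
  push(url: string, leftOffAt: number): void {
    this.visits.push({ url, leftOffAt });
  }

  // Step back to the previous video in the trail, if there is one.
  back(): Visit | undefined {
    return this.visits.pop();
  }

  // The full path so far, e.g. for rendering the spatial map.
  path(): readonly Visit[] {
    return this.visits;
  }
}
```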
The first potential application I think of is media coverage. This setup would provide the ability to see multiple angles/sources of an event or happening. It could also be used for dissent or to provide an alternate viewpoint. If there is a point in a politician's speech that you disagree with, tag that point in the video with the point in another video that gives a different side of the story. Or if he says something that is completely contradictory to something he said in another speech, you could easily link those two moments together for comparison.
It could also be used for reference or educational purposes – a moment in a video where a certain theory is touched on could be linked to a video that provides a more in-depth look at that theory. Or it could be used in the world of spoofs – the spoof could be directly linked to that which it emulates. And it would make sense with the videoblog conversations that have been popping up – you could create a direct link in timecode between where Ze talks about Amanda and where Amanda talks about Ze so you could almost curate the conversation between them. Why you would want to do that, I don't know. But the possibilities are endless.
On a more discreet, subtle level, you could create associations between images that remind you of each other. Or the relationship between the two pieces could be completely personal and only understood by you, but others could stumble across it and create an entirely new understanding for themselves. I guess that's the part I like the most: it doesn't have to be so literal and sensible. For a more artistic interpretation, I have a vision of video clouds where associated videos pop up automatically and layer on top of each other and move around and oscillate in transparency, but I think that's more for my own work.
Anyway, this is just the start of an idea. But I would actually like to work on it and at least try to actualize some form of it on a small scale. If anyone has any advice for HOW to actually do this, I’d love to hear it.