Author: katehartman

  • USING SMIL

    This is just a basic presentation integrating video and still images. The SMIL file can be found here.

    [QUICKTIME http://itp.nyu.edu/~kh928/video/mx.mov 530 225]
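
    For anyone curious what the file itself involves: a SMIL presentation is just an XML timing wrapper around media elements. As a minimal sketch (the file names are placeholders, and the dimensions are borrowed from the embed above rather than from the actual file), it might look something like this:

    <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
      <head>
        <layout>
          <root-layout width="530" height="225"/>
          <region id="main" width="530" height="225"/>
        </layout>
      </head>
      <body>
        <!-- <seq> plays its children one after another -->
        <seq>
          <img src="still1.jpg" region="main" dur="5s"/>
          <video src="clip1.mov" region="main"/>
        </seq>
      </body>
    </smil>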

  • LOOKING AT STREAMING MEDIA

    The main forms of streaming media I came across were live webcams, television, and radio. The webcams seemed to be mostly devoted to street scenes and traffic (or lack thereof in Fort Kent, Maine) and the occasional bird nest surveillance. The television and radio that I found were exactly that: television and radio. It’s great to be able to listen to Icelandic radio and access media from all over the world, but I have to say I’m kind of disappointed. The webcams have an initial voyeuristic thrill and access to all kinds of media is obviously valuable, but I wonder, am I missing something? Are there great things going on out there with these tools that I just haven’t stumbled across? Because everything I have seen so far just seems to be a regurgitation of something that already exists. It seems like there’s a lot of potential in anyone being able to broadcast from anywhere, but I have yet to see that potential realized.

  • TAGGING VIDEO WITH VIDEO

    I have this idea of tagging video with video. Tagging video with text (like in YouTube or Google Video) is extremely practical, but what if you wanted to create an association that was less about being machine readable and more about a wholly visual/human experience? Because the thing about the reduction from image to text is that you often lose information in the process. You are also being told what you see before you see it, so in a way, your mind is being made up for you as to what it is. But besides all that, if you could just move from one video to another, it would simply provide for a more fluid visual experience.

    I have several ideas about how it could work. Here is one versatile, functional model: when watching a video, you would be given the option to tag it. A tagger would be asked to provide the following information, in some form (a rough sketch of what such a tag record might look like follows the list):
    video to be tagged (URL)
    start point
    end point
    tag with this video (URL)
    start point
    end point
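
    As a rough sketch, a tag record like that could be stored as a small chunk of XML (the element and attribute names here are hypothetical, just one way of structuring the six fields above):

    <videotag>
      <!-- the video being tagged, and the span the tag applies to -->
      <target src="http://example.com/videos/speech.mov"
              start="00:01:30" end="00:02:10"/>
      <!-- the video that serves as the tag, cued to its own span -->
      <tag src="http://example.com/videos/rebuttal.mov"
           start="00:00:05" end="00:00:45"/>
    </videotag>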

    The video you were watching would appear in a main viewer and other videos that were associated with that video would appear in a lineup below. If you wanted to see an associated video, you could click on it and it would shift up into the main viewer, cued to the point specified by the tagger. Once you reached the end point specified by the tagger, the associated video would slide back down into place and the main video would reappear, cued to the point where you left off.

    Perhaps there could be two modes: one where videos only appeared below when you were passing through the timecode of the main video with which they were associated and another where all associated videos appeared below and you could click the associated video to be brought to the point in the main timecode with which it was associated.

    This is a simple interpretation based on a single video. Eventually it would make sense to have a whole web where you could move from one video to another to another. But I like the idea of being able to come back to where you started. It would be great if there were some sort of dynamic diagram that accompanied the viewing setup, spatially mapping your movement from video to video so it would be easy to get back to your own beginning.

    The first potential application I think of is media coverage. This setup would provide the ability to see multiple angles/sources of an event or happening. It could also be used for dissent or to provide an alternate viewpoint. If there is a point in a politician’s speech that you disagree with, tag that point in the video with the point in another video that gives a different side of the story. Or if he says something that is completely contradictory to something he said in another speech, you could easily link those two moments together for comparison.

    It could also be used for reference or educational purposes – a moment in a video where a certain theory is touched on could be linked to a video that provides a more in-depth look at that theory. Or it could be used in the world of spoofs – the spoof could be directly linked to that which it emulates. And it would make sense with the videoblog conversations that have been popping up – you could create a direct link in timecode between where Ze talks about Amanda and where Amanda talks about Ze so you could almost curate the conversation between them. Why you would want to do that, I don’t know. But the possibilities are endless.

    On a more discreet, subtle level, you could create associations between images that remind you of each other. Or the relationship between the two pieces could be completely personal and only understood by you, but others could stumble across it and create an entirely new understanding for themselves. I guess that’s the part I like the most: it doesn’t have to be so literal and sensible. For a more artistic interpretation, I have a vision of video clouds where associated videos pop up automatically and layer on top of each other and move around and oscillate in transparency, but I think that’s more for my own work.

    Anyway, this is just the start of an idea. But I would actually like to work on it and at least try to actualize some form of it on a small scale. If anyone has any advice for HOW to actually do this, I’d love to hear it.

  • (re)connect gets around

    (re)connect, my final for Wearables, has made a few recent appearances.

    Physically, it took a trip to Italy for Cute Circuit’s Future Fashion Show in Pisa. The event consisted of a conference, runway show, and exhibit. Unfortunately, I was unable to make it, but my friend Megan MacMurray went and took some great video of the exhibit. Also, Cute Circuit has an extensive photo album on their site where you can see all of the incredible projects that were at the show. Some of my favorite wearables projects were there. It was definitely a privilege to have my work included.

    Virtually, (re)connect appeared on MAKE Blog in a video podcast by Tikva Morowati. She did a great job with it and this is my first time having my work blogged – very exciting!

  • GETTING STARTED WITH BLOGS

    So I just subscribed to my own RSS feed in iTunes and it wasn’t until actually doing it that I realized how incredibly easy this whole process is. I guess I just never thought through the idea that a podcast is simply audio within an RSS feed. The steps between consuming and producing are so few. It’s pretty cool.
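
    To make that concrete: a podcast feed is just an ordinary RSS feed where each item carries an <enclosure> pointing at an audio file, which is what iTunes downloads. A bare-bones sketch (the URL and file length here are placeholders) might look like:

    <rss version="2.0">
      <channel>
        <title>Apartment Sounds</title>
        <item>
          <title>Episode 1</title>
          <!-- the enclosure is what turns a plain feed item into a podcast -->
          <enclosure url="http://example.com/audio/episode1.mp3"
                     length="1234567" type="audio/mpeg"/>
        </item>
      </channel>
    </rss>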

    So now I have my blog. Last semester I technically had a blog, though I hacked up the template so that it would look as un-bloglike as possible. I wanted the ease of updating without having any blog-associated features such as comments. But it was set up with Movable Type, which I ended up having a lot of trouble with. Last week I switched to WordPress, which seems great so far. I moved over all of my previous entries without too much trouble and tweaked this theme a bit, and now I’m up and running.

  • APARTMENT SOUNDS [podcasting]

    My first ever podcast can be found here.

  • Sunday at SummerStage [videoblogging]

    I went to check out this concert on Sunday that was a part of Central Park’s SummerStage. It featured some incredible South American musicians, including Seu Jorge, who I knew from Wes Anderson’s The Life Aquatic with Steve Zissou. Good music and a great crowd. At some point towards the end, an impressive capoeira session broke out:

    [QUICKTIME http://itp.nyu.edu/~kh928/video/capoeira.mp4 160 137]

  • (RE)CONNECT DEMONSTRATION

    [QUICKTIME http://www.katehartman.com/projects/reconnect/reconnect.mp4 320 257]
    Demonstrated by Joo Youn Paek and Michael Horan.
    Video by Gabe Barcia-Colombo.

  • (UN)DRESS – FINAL PROJECT

    [QUICKTIME (un)dress 320 257]
    Inspired by issues of censorship and surveillance, (un)dress is a library of videos that examine the repetitive act of dressing and undressing. Twenty people were asked to perform this daily ritual in front of a camera. The video was processed live during shooting, causing clothing to be the only element recorded from the scene. The result is a series of anonymous yet intimate portraits ranging from the shy to the bizarre.

  • SENSOR REPORT

    My sensor report, “Soft Sensing: Using Conductive Fabric and Thread as Input”, can be found on the wiki for Tom Igoe’s Sensor Workshop class.