Author: katehartman

  • Day 1: Prototypes for Relational Devices

    A slow start, but here are some prototypes that emerged from Day 1…

    Kiss Shield: a clear signifier for awkward embraces

    Connected Soles:

    Face-to-Face Mask:

  • Coming Soon: Soft Circuit Workshop: ITP Summer 2008

    I’m excited to announce that I’ll be teaching a new class called Soft Circuit Workshop this summer at ITP! It’s all about investigating materials and techniques for creating soft, flexible, and resilient circuits that rock the ways we use and view electronics.


    Here’s the course description:

Have you ever snuggled with a circuit? Standard electronic components can be hard, brittle and unfriendly. They are often unwelcome additions to soft environments like clothing, toys, tapestries or furniture. Materials such as conductive fabric and thread open up new possibilities for tactile electronics. This class provides an in-depth investigation of materials and construction techniques for creating soft and flexible circuits. There are no prerequisites – introductory electronics and sewing techniques will be reviewed.

To begin, we survey both basic (conductive fabric and thread) and advanced (Luminex, etc.) materials that are on the market, assessing where to get them, how to use them, and where their potential lies. We then investigate fresh applications for commonly available materials, such as organza, steel wool, and metallic threads. Students choose a material to explore on a deeper level, defining its electrical and physical capabilities and documenting their research as part of an online reference project. In addition, discussion is devoted to outstanding needs, and students are encouraged to imagine and develop materials that are entirely new.

Now we’re ready to put our knowledge of materials to work. We explore construction methods and connectors for successfully integrating soft materials with standard components, learning how to create circuits that are flexible, durable, and aesthetically pleasing. Also covered are techniques for power management and insulation, wash and durability testing, laser cut fabric circuits and the Arduino LilyPad environment. Students produce a final project, creating pliable artifacts that are ground-breaking in both construction and concept.

    The class runs May 19th to June 27th – six delicious weeks spent exploring the intersection between craft and electronics, body and tech. It’s 4 credits of grad-school goodness, and since it’s a summer class, you don’t have to be an ITP student to enroll. Information about registering can be found here.

    Questions about the class? Email me at katehartman[at]nyu[dot]edu.

  • Hats in “Threads” show at Jersey City Museum

On Sunday, February 24th, come see the Muttering Hat & Talk to Yourself at the Threads wearable fashion show presented by the _gaia art collective at the Jersey City Museum. Included are projects by several ITP alumni, including my fabulous officemate Jenny Chowdhury. More information can be found here.

PS – Ever dream of being a model? Now is your chance. I need two to show off these lovely creations. If you’re interested, let me know!

  • Thesis Presentation: This Device is for You

    I, Kate Hartman, presented my Master’s thesis.

  • Screen Reader

    I’m interested in exploring ways in which we can physically move through text. This is something that is inherently a part of reading a book or newspaper – there is a tactile sense of how much you have consumed and how much you have left to go. But this is something that we get further away from as we do more and more of our reading on the screen. I started thinking about preexisting ways that we move through or break apart text for different purposes, whether it be text scrolling on teleprompters, lines broken apart on cue cards, or words that are physically assembled into magnetic poetry.

I’ve also been thinking about ways to liberate the screen – to make it into something we can touch and hold, and to make it responsive to its own physical environment. There are lots of examples of explorations being done in this direction, like the Interactive Digital Wall by Onomy Labs, but I wanted to embark on some explorations of my own.

In considering how to combine these two interests of physically exploding text and liberating the screen, I considered possible actions that could be used to reveal text with a portable screen, including scrolling, attaching, throwing, and banging. For this project I settled on attaching and made a prototype of what I call the “Screen Reader”.

The Screen Reader is a setup that allows you to physically move through text in a linear fashion by attaching a small screen to specified locations on a wall. When the screen is attached to the wall, it shows a word. When you move it to a different location, it shows a different word. When you reach the end of the line and then return to the beginning, you reveal a new line of text. The idea is that you can read and reveal a text one word at a time.

The back of the screen is covered with the hook side of conductive velcro. On what would be the wall are patches of the loop side. When the screen is attached to any of the available locations, the velcro on the screen bridges the two small pieces of velcro on the wall and in effect closes a switch – each location where the screen can be attached is a switch. All of the switches are monitored by an Arduino, which sends serial messages to Processing indicating which switch is activated. Based on which switch is activated and in what order, the Processing code determines which word to display. For this setup I hooked the LCD up to my computer via the S-video out and showed the applet on the secondary display.
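For the curious, here’s a minimal sketch of the Arduino side. The pin numbers and switch count are placeholders rather than the actual prototype wiring, and it assumes each velcro switch pulls its pin HIGH when closed (with an external pull-down resistor holding it LOW otherwise):

    // Watch a handful of velcro switches and report activations over serial.
    const int NUM_SWITCHES = 4;                        // placeholder count
    const int switchPins[NUM_SWITCHES] = {2, 3, 4, 5}; // placeholder pins

    int lastActive = -1; // most recently activated switch

    void setup() {
      Serial.begin(9600);
      for (int i = 0; i < NUM_SWITCHES; i++) {
        pinMode(switchPins[i], INPUT); // external pull-down resistors assumed
      }
    }

    void loop() {
      for (int i = 0; i < NUM_SWITCHES; i++) {
        if (digitalRead(switchPins[i]) == HIGH && i != lastActive) {
          lastActive = i;
          Serial.println(i); // Processing maps the index, and the order of
                             // activations, to the next word to display
        }
      }
      delay(20); // crude debounce
    }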

    A video of what I showed in class can be seen here:

    [QUICKTIME http://itp.nyu.edu/~kh928/a2z/screenreader320.mov 320 257]

I was pretty satisfied with the result. It’s a rough prototype, but I enjoyed turning the act of reading into a more physically engaging activity. I started with a very simple approach, but I have ideas I plan to pursue for expanding this into a larger, more dynamic system, both in terms of content and physical interface.

  • Device Testing in NYC

    Calling all New Yorkers!

I’ve been developing a new line of absurd communication devices that explore the subtle ways we relate to ourselves and to others (think “muttering hat”, if you know what that is).

From April 16th-20th, I’ll be running some rapid user testing here in NYC, which means that YOU will have the chance to try these out yourself. This is an opportunity to take these devices ANYWHERE you like in the city, try them out, and make a video of your experience. Pick-up & drop-off of the devices will take place at 721 Broadway in Manhattan, and device tests will run on a 24-hour turnaround.

    Information about the project can be found here:
    http://www.thisdeviceisforyou.com

    Information and signup for NYC device testing can be found here:
    http://www.thisdeviceisforyou.com/wiki

    Questions? Send an email to mail[at]thisdeviceisforyou[dot]com

  • Solar Panel as Power Source & Light Sensor

For the solar assignment, I worked with Rob Faludi to augment a project that we have already been working on for quite a while. Our task was to use solar cells both as a light sensor and as a power source.

    solar_rob.jpg solar_kate.jpg
The project is called Botanicalls and it is a system where plants are able to call people on the telephone to express their needs. The existing physical computing setup includes a soil moisture sensing circuit, a microcontroller, and a ZigBee radio. There were two issues that we wanted to address:

    solar_wallwart.jpg
1. In terms of the transfer of information, these plants are entirely wireless. In terms of power, they are tethered. Because we did not want to deal with the possibility of batteries dying, we have been using wall power for our circuit. This is both inconvenient and a conflict of interest. You should not have to unplug your house plant when you want to move it to a new location. One of the goals of this project is to help people develop successful relationships with their plants. We are specifically targeting people who have trouble keeping up with plant maintenance, so adding the task of changing batteries on top of taking care of the plant does not make any sense. The focus should be on maintaining the plant, not the technology. In addition, this is intended as a consciousness-raising project. We want to draw people’s attention back to the natural world and to encourage them to think about the consequences of their actions. We do not want to do this with yet another device that demands the consumption of fossil fuels. In terms of options for alternative energy sources, using solar power for a plant’s circuitry makes sense – a plant already has an inherent need for light, so why not use that light to power the circuit? Or in standard SAT format:
    photosynthesis:plants::photovoltaics:Botanicalls

    solar_solarcell.jpg
    2. We’ve started off with soil moisture as the initial need that we are sensing & responding to for the plant. But we’ve been itching to move on to light and actually had some of the code already written for it. We were originally intending to just use photocells, but when brainstorming about this assignment, Rob and I thought it might be a good idea to be efficient in terms of our components and use the solar cells as light sensors in addition to a source of power.

Our existing circuit is powered by a regulated 3.3V, which means we need a source voltage of at least 4.5V. We decided to use 3.6V 50mA solar panels. Here we can see two of them in series, held in a window with a fair amount of light, generating 5.44V and 104mA:
    solar_windowV.jpg solar_windowmA1.jpg

Here you can see the circuit being powered directly off of two sets of two panels in series, wired in parallel:
    solar_panelsstraight.jpg
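(Quick arithmetic on the nominal ratings: wiring two 3.6V panels in series adds the voltages, 3.6V + 3.6V = 7.2V nominal, while putting two of those series pairs in parallel adds the currents, 50mA + 50mA = 100mA nominal. Actual readings vary with light and load, as the window measurements above show.)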

We decided to use this setup to trickle-charge a pack of 4 AA batteries. Here you can see that the voltage supplied to the circuit when the batteries are not attached is 6.52V:
    solar_withoutbattery.jpg

    When the load of the battery pack is added, it drops to 5.24V:
    solar_withbattery.jpg

And the analog sensor value, which is measured directly off the panels before the battery charging circuit, is 5.86V:
    solar_analogsensor.jpg

Our only concern is that the sensor readings will be affected by the charge level of the battery. We are going to set up a datalogging scenario so that we can monitor the sensor values and battery charge over time to see if this is actually an issue.
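Something like the following would do for the logging itself – a minimal sketch, assuming the panel and battery voltages arrive on analog pins 0 and 1 through resistor dividers (the pins, divider ratio, and sample rate here are placeholders, not our actual Botanicalls firmware):

    // Log panel and battery voltages over serial as CSV for later graphing.
    const int PANEL_PIN = 0;          // placeholder analog pin
    const int BATTERY_PIN = 1;        // placeholder analog pin
    const float DIVIDER_RATIO = 2.0;  // equal-resistor divider halves the voltage
    const float ADC_REF = 5.0;        // assumes the default 5V analog reference

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      float panelV = analogRead(PANEL_PIN) * ADC_REF / 1023.0 * DIVIDER_RATIO;
      float battV = analogRead(BATTERY_PIN) * ADC_REF / 1023.0 * DIVIDER_RATIO;
      Serial.print(millis());  // timestamp in milliseconds
      Serial.print(",");
      Serial.print(panelV);
      Serial.print(",");
      Serial.println(battV);
      delay(60000);            // one sample per minute
    }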

    Here you can see the overall setup:
    solar_system.jpg
    In this image we are using the breadboard to swap out different resistors to see which would give us the ideal range for our analog sensor values.
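The reason for the resistor swapping: the panel voltage can sit above the analog reference of the microcontroller (5V by default on an Arduino), so a divider is needed to scale it down. The divider output is Vout = Vpanel × R2 / (R1 + R2), so with equal resistors a 7.2V nominal panel reading scales to 3.6V, safely inside the ADC range; different resistor pairs shift where that range lands. (The equal-resistor example is just an illustration, not our final values.)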

    Our circuit diagrams can be found below – both the diagram for the overall system and a detailed diagram of the part of the circuit that pertains to this project.
    Solar_Sensor_Battery_Boost.jpg solar_circuitdetail.jpg

For the real-life version, the microcontroller does value averaging, so it responds to the amount of light that it is getting over several days. But we also created a version made for demonstration purposes – when the solar panel is covered, a phone call is made requesting that the plant be moved into the light. When the solar panel is exposed to light again, a call is made to say thank you.
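In rough outline, the demo logic looks something like the sketch below. The thresholds and pin are placeholders, and the real call goes out through the ZigBee network rather than over serial – this is just the shape of it:

    const int PANEL_PIN = 0;          // placeholder analog pin for the panel divider
    const int DARK_THRESHOLD = 200;   // placeholder ADC value, tune per setup
    const int LIGHT_THRESHOLD = 300;  // higher re-light threshold adds hysteresis

    bool inTheDark = false;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // smooth the reading with a short running average
      long sum = 0;
      for (int i = 0; i < 10; i++) {
        sum += analogRead(PANEL_PIN);
        delay(50);
      }
      int level = sum / 10;

      if (!inTheDark && level < DARK_THRESHOLD) {
        inTheDark = true;
        Serial.println("CALL: please move me into the light");
      } else if (inTheDark && level > LIGHT_THRESHOLD) {
        inTheDark = false;
        Serial.println("CALL: thank you");
      }
    }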

  • Slam! Wham! (pause)

A script for performance is usually a static entity – something that is written, printed, memorized and performed. But what if it were somewhat more dynamic? What if it were pieced together live from disparate sources? For my midterm, I explored the idea of generating a script through the process of crawling the web. In this example, I crawled a movie script site while using different pattern matching techniques to determine what sort of lines are spit out. You can see in the example below that I was looking for lines with more violent words (slam, wham) and lines with pauses – an attempt to create a sort of ebb and flow in the narrative. Though the lines move along too fast to be read in full, you can get a general sense of a story.
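The filtering step itself is simple pattern matching. As a stripped-down illustration (plain C++ over an already-downloaded script file, with the word lists reduced to just the examples above):

    #include <fstream>
    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
        std::ifstream script("script.txt"); // hypothetical local copy of a crawled script
        std::regex violent("\\b(slam|wham)\\b", std::regex::icase);
        std::regex pause("\\(\\s*pause\\s*\\)", std::regex::icase); // matches "(pause)"
        std::string line;
        while (std::getline(script, line)) {
            // keep only the lines that hit either pattern
            if (std::regex_search(line, violent) || std::regex_search(line, pause)) {
                std::cout << line << "\n";
            }
        }
        return 0;
    }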

In this instance I’m returning mostly descriptive, stage direction-type stuff, but the same technique could obviously be used for generating dialogue. I’d like to work on developing more specific rules, so that it’s a bit more elegant, and also to work on having better control of the pacing. I’m interested in working from unlikely sources (how would a scene from the New York Times be performed?) and in having two actors work off of two drastically different sources to see how they can play off of each other and whether any sense could be made of it. The main thing I enjoy about this setup is the sense of anticipation and unpredictability – you never know when the next line will come and you never know ahead of time what the scene will be.

  • A2Z Midterm Pseudocode

I’ve decided to go with my idea for constructing a dialogue for two people out of language or lines from their favorite “texts”. I’d like to eventually look at all sorts of texts including books, television show scripts, etc. But for the sake of ease and clarity, I’ll be starting with movie scripts. This is inspired by the way people repeat lines back to each other from things they have read or seen. I often wonder – what if these were the only words we could use to communicate? What if our vocabulary were populated only by what we consumed? I guess, in a sense, it is, but it would be interesting to look at it through the lens of popular culture.

    Here’s how I see it working:
-Get access to my MySpace & Friendster accounts
    I’m not sure how to do crawling with anything that’s password-protected. In the end, these don’t necessarily have to be my friends, but since I want to get these people to “perform” the scripts after they are created, I’ll stick with who I know for now.
    -Scrape profiles for favorite movies and make list for each friend
    -Search for movie scripts
    -Accumulate source files
    -Analyze each script
    It seems as though I should be looking for what sort of language is significant or prominent. I need to figure out how to discern what lines are sort of iconic, representative, or memorable from the movie as a whole.
    -Select “pairings”
    In other words, decide who will be talking to who. I will probably do this manually, but it would be interesting to think of how to automate this. Should it be random? Should there be some sort of logic to why these two people should be talking to each other?
    -Pull out a set number of lines for each person
    -Arrange lines in dialogue form to form a new script
I need to figure out some sort of algorithm for this – questions followed by answers, etc. (a rough sketch of one possible rule appears after this list).
    -Send script to actors with request to generate a “performance”
    This could be done via email OR using whatever service I scraped for the information. More likely than not, I’ll have to actually arrange for these people to get together and “perform” the script. The result could be a video created in-person or an audio recording made via telephone.
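To make the “questions followed by answers” idea concrete, here’s one naive rule, sketched in plain C++ with made-up line pools standing in for the scraped scripts: lead with one speaker’s question lines so the other speaker’s lines read as answers.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // placeholder line pools; in the real system these come from the scraped scripts
        std::vector<std::string> personA = {"I never liked the rain.", "Where are we going?"};
        std::vector<std::string> personB = {"Somewhere warm.", "It suits you, though."};

        // naive rule: move A's question lines to the front so B's lines read as answers
        std::stable_partition(personA.begin(), personA.end(),
            [](const std::string& s) { return !s.empty() && s.back() == '?'; });

        // alternate speakers to form the new script
        size_t turns = std::min(personA.size(), personB.size());
        for (size_t i = 0; i < turns; i++) {
            std::cout << "A: " << personA[i] << "\n";
            std::cout << "B: " << personB[i] << "\n";
        }
        return 0;
    }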

The way I’m looking at it now, each person will be the embodiment of many characters, or channeling the language of many characters from many movies. The thing I wonder is whether I should focus on the language of one character per movie, or whether it should be a free-for-all. Also, I wonder if I should start out simple and do one movie per person. I guess I’ll just experiment and see what’s interesting & engaging, and also what I can accomplish in a short amount of time.

I imagine that I will only be able to accomplish chunks of this separately and paste them together myself. It won’t be the most elegant thing, but it’s a good way to get results quickly and see if this would be interesting enough to pursue & polish in the future.

Obviously not for the midterm, but if I could get the full system running, I’d love to create an installation-based scenario where this could be used. There would be two stations next to each other in front of a camera. People would enter their favorite movies and then hit “analyze”. While the camera was running, they would then be prompted, line by line on their individual screens, with lines that they would say aloud to each other. The fun part would be that the script would unfold as they were “performing” it – they would never know what the next line was or where the conversation was leading. The video would then be automatically uploaded to a website along with the text of the script. I like the idea because it is seemingly innocuous – listing your favorite movies does not seem particularly personal. But the dialogues might lead to some pretty bizarre places, and either way, it causes you to have a full conversation out loud that you might not have had otherwise.