Saturday, 23 May 2009

Post 11. The Composition (part 2)

Date: 23 May 2009
Time: 10:00 am
Place: Home

I have finished the composition, which is 33 bars long, or 1:06 minutes. I recorded it using the Ninjam software and two laptops. Laptop 1 is the MacBook Pro and Laptop 2 is the MacBook (white); specifications can be found in previous posts. I used a setup similar to experiment one. I recorded the source, the composition played back from the iPod, as well as the incoming signal from the network. Again the latency differed between laptops 1 and 2: for laptop 1 the latency was 2 seconds and 9 frames, whereas for laptop 2 it was 3 seconds and 13 frames. The following two videos show this.


Video 1 shows the latency as it is experienced from Laptop 1


Video 2 shows the composition as it is heard from Laptop 2. Here the source and the latent (delayed) sound are presented on two different tracks.



The recording of Laptop 1 could not be captured on two separate channels. The latency can again be seen clearly in the recording of Laptop 2. The latency noticed here, as in the previous experiments, suggests an alternative compositional approach: since the latency sits somewhere between 2 and 7 seconds, it should be included as an aspect of the composition. In the composition above (see the score in post 10) I have done something similar, including short sections that move from simple to busier playing. In this particular software, as mentioned earlier, the latency can reach up to 7 seconds. As a compositional approach it can be described as a latency fugue, with the difference that each line is repeated exactly, with no variations. The difference from other compositions that use delay-repetition, such as Jonathan Harvey's Ricercare una melodia, is that the delay/latency is unpredictable. Harvey's piece, for instance, uses delays of one, two, three and four bars, each routed to a different loudspeaker; knowing this, it is possible to plan harmonies, rhythmic patterns and so on. With network delay/latency it is impossible to predict the exact amount of delay, although there are thresholds: a minimum of about two seconds and a maximum of about seven. The following is an example using the sound material from the composition mentioned in post 10. I include rests of varying length, up to 5 seconds. This approach allows the latency to be heard more clearly, as it was not so clear in the examples above, where only two laptops were used.
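To illustrate the idea, the following is a minimal sketch (my own, outside the actual setup) of the latency fugue: the same line is repeated on several virtual laptops, each delayed by an unpredictable amount drawn between the two- and seven-second thresholds. The test line here is a plain sine tone rather than the piano material.

```python
# Minimal sketch: mix the same line delayed by an unpredictable latency on each
# virtual "laptop", staying within the observed two- to seven-second thresholds.
import numpy as np

SR = 44100                    # sample rate in Hz
MIN_LAT, MAX_LAT = 2.0, 7.0   # latency thresholds in seconds, as observed in the posts

def latency_fugue(line, n_laptops=4, seed=0):
    """Mix n_laptops delayed repetitions of `line` over the undelayed source."""
    rng = np.random.default_rng(seed)
    delays = rng.uniform(MIN_LAT, MAX_LAT, size=n_laptops)   # unpredictable latencies
    out = np.zeros(len(line) + int(MAX_LAT * SR))             # room for the longest delay
    out[:len(line)] += line                                    # the undelayed source
    for d in delays:
        start = int(d * SR)
        out[start:start + len(line)] += line                   # one latent repetition per laptop
    return out / (n_laptops + 1)                               # simple normalisation

# Placeholder material: a two-second 440 Hz tone stands in for the piano line.
t = np.linspace(0, 2, 2 * SR, endpoint=False)
result = latency_fugue(0.5 * np.sin(2 * np.pi * 440 * t))
```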



Here is a version with the rests mentioned above.






To simulate the latency effect I used four MIDI tracks, each with a different starting point. Assuming that each of the four laptops has the same sound source, the piano, the following video is the sound experienced from one of the four laptops, laptop 1. I used the latencies mentioned in Post 9, which were monitored from the Ninjam software. The four tracks represent the four laptops. Laptop 1 has no latency, laptop 2 has a latency of 4 seconds and 9 frames, laptop 3 a latency of 3 seconds and 15 frames, and laptop 4 a latency of 2 seconds and 15 frames.
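For reference, here is a small sketch of how the measured latencies translate into track offsets; the 25 fps frame rate is an assumption, since the posts do not state which frame rate the seconds-and-frames values refer to.

```python
# Sketch: convert the latencies measured in Post 9 into track offsets in samples,
# so four copies of the same piano track can be shifted like the four laptops.
SR = 44100   # audio sample rate in Hz
FPS = 25     # assumed frame rate; the posts do not state it

def latency_to_samples(seconds, frames):
    """Convert a latency given as seconds + frames into a sample offset."""
    return int((seconds + frames / FPS) * SR)

laptop_offsets = {
    "laptop 1": latency_to_samples(0, 0),    # no latency
    "laptop 2": latency_to_samples(4, 9),    # 4 s 9 f  -> about 4.36 s
    "laptop 3": latency_to_samples(3, 15),   # 3 s 15 f -> 3.6 s
    "laptop 4": latency_to_samples(2, 15),   # 2 s 15 f -> 2.6 s
}
```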






For the next video I shifted Laptop 4 further to the right to produce a latency of 7 seconds and 3 frames. Laptops 1 and 3 are heard from the left speaker, while laptops 2 and 4 come from the right speaker. There is a noticeable difference between the two videos, even though only one of the laptops has a different latency.





Conclusion



In experiment 2 we notice that latency can be used from an aesthetic point of view rather than as a dysfunctional aspect of performance. In the last two videos, even though only one of the laptops had a different latency, the difference was noticeable.

Overall Conclusion


Experiment 1.

Even though it is possible to obtain the phasing effect using an adjustable latency, it is not functional enough to be used in a composition or a performance. eJamming is indeed quality software that eliminates latency using a variety of techniques. However, the primary idea of the suggested performance, using two laptops in each location, could not be achieved here, as much of the hardware was not available. Above all, the locations were not available, since the university network would not allow the connection. This approach could still be included in future work by asking the university to provide an open Internet connection. It would be interesting to include both eJamming and Ninjam in the performance: since two laptops would be used in each location, running different software on each laptop could provide interesting results. In general, experiment 1 was not a productive one in terms of latency creativity, but it was nonetheless a good way of understanding how to approach the experiments.

Experiment 2

These experiments provide very good material for future work. First of all, the difference in sound distribution from the server to the clients is enough to work with. As mentioned in the related post, this behaviour could serve as a latency surround system that depends on, and changes with, each performance. The output of each laptop could be routed to one speaker of a surround environment. Again, this was not possible here, since I don't have a surround system (yet).
Another aspect that was mentioned but not explored has to do with the audio sample rate. Changing the output sample rate from 44100 Hz to 48000 Hz makes the sound play almost a tone above the original. This, in relation to the latency surround system, could provide a rich musical piece using only a few notes.
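As a rough check (my own calculation, not in the original post), the pitch shift implied by playing 44.1 kHz material back at 48 kHz follows directly from the ratio of the two rates:

```python
# Rough check of the pitch shift when 44.1 kHz material is played back at 48 kHz.
import math

ratio = 48000 / 44100              # playback speed-up factor, about 1.088
semitones = 12 * math.log2(ratio)  # about 1.47 semitones
print(f"speed-up: {ratio:.3f}x, pitch shift: {semitones:.2f} semitones")
```

That works out to roughly a semitone and a half, i.e. somewhat less than a whole tone, which is in the region of what was heard in the test.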

The idea of using a public server did not work so well. Since anyone is free to log in at any time to any available server, there were many people interrupting my tests. As future work I will set up my own private server to deal with this matter.
Experiment 2 shows that latency can be used creatively, and in fact from various points of view.

In general these experiments were a good start for any future work.


Friday, 22 May 2009

Post 10. Ninjam composition/performance. Experiment 2

From post 9 it was obvious that the distribution of audio through the network from the source to the clients differs. This means that if I send audio from one computer to two others, they will not receive my signal at the same time but at different times. Also, the huge latencies mentioned in post 9, up to 7 seconds, can create a very unusual sound space from the composer's and performer's point of view. The 7-second latency, however, could not be documented in the way the other latencies were.

The composition.


Four laptops use Ninjam and the Internet to communicate. The sound source for the four laptops will be the same. There are two approaches here: one is a predetermined composition and the other is a live improvised performance. For both, Logic Pro 7 and a keyboard synthesizer were used. For the composition, the synthesizer functions as a MIDI controller triggering the sound files loaded in Logic. The composition uses only piano, while the improvised performance uses sounds directly from the synthesizer. The following diagram visualises the thinking behind the performance and composition. The yellow line (sound) reaches all destinations at the same time; each sound distribution is achievable through the Internet. A, B, C and D are the four laptops running the Ninjam software. Notice that sound from A (black) needs less time to reach C than to reach D.




Another aspect of this experiment/composition has to do with the sample rates for input and output. In Utilities/Audio MIDI Setup these rates can be changed, resulting in a variation of pitch and speed in relation to the source. This approach will be discussed as future work.



The thinking behind this approach has many musical qualities. The Internet connection bandwidth, as monitored in post 9, is responsible for the outcome of the composition. The musical material, the notes, is not so much the listener's interest as the combination of them played back over the network from the other computers. The composition discussed in experiment 2 has a unique outcome, since every time it is performed the result changes due to the network traffic at that time. Here I introduce a different aesthetic musical outcome based on the actual time of performance. In the early morning, around 05:00 am, the latency will be lower than at 20:00; at 05:00 am the server is more likely to be free and therefore the audio will travel faster. This is indeed an important aspect that will be discussed in relation to the compositional approach.

The aim of experiment 2 is to show two things.

1. How different latencies can suggest different aesthetic approaches based on the same musical material. How can the musical material, the composition, have aesthetic rules in relation to latency?
2. Apart from the composition itself, is it possible to create a sound environment using latency in a surround setup?

The composition.

There was no particular method for writing the composition. As a starting point I composed a short piece in order to better understand how the setup works. Having in mind the delay effect from the latency, I approached the composition using mostly rhythmic patterns. As this composition is an experimental one, it will be developed further through the use of latency. As a first stage, here is the starting score of the composition.








This is the audible MIDI score.

Tuesday, 19 May 2009

Post 9: Experiment 1, Ninjam approach continued

Having a static latency, even if it is many seconds long, does not allow us to obtain the phasing effect. However, this amount of latency suggests an alternative approach to creativity that I will discuss in later posts. Latency in Ninjam is variable, meaning that the redistribution of sound through the network differs from client to client. This was noticeable since I control both locations A and B: sound sent from A to B has a different latency from sound that travels from B to A. This triggered more questions and thinking about the way Ninjam works. As an extension of experiment 1, or rather the transition to experiment 2, I monitored the upload and download speeds available on my home network. I did this using four wireless laptops, all connected to the same wireless router that distributes the Internet. The specifications of three of the four laptops have been mentioned in earlier posts. The fourth laptop has the following specifications:

Laptop 4:

Model Name: PowerBook G4 15"
Processor Name: PowerPC G4 (1.1)
Processor Speed: 1.5 GHz
Memory: 1 GB
Mac OS: 10.5.2


The following are screenshots from measuring the download and upload speeds at http://www.speedtest.net/. The time and date are at the upper right of each screenshot. There was a big difference between measuring at 20:00 and at 1:00 am.

Laptop 1: MacBook, White (location A)



Laptop 2: PowerBook G4 12” (location B)




Laptop 3: MacBook Pro (recording laptop used in previous posts)




Laptop 4: PowerBook G4 15”



There was more than 1 Mb/s difference in download speed between laptops 3 and 4 at 20:00. Notice the difference in download speed for laptop 4. Going back to the recordings, I found some very interesting observations as well. I did two sets of recordings: the first set was done at 12:15 and the other at 17:10, both on 18 May 2009. In the first set I recorded the sound from the source at location A and the output from location B. In other words, I recorded the signal that was fed into the laptop at location A and then the same signal as it came out of the laptop at location B. I did the same thing for the other direction: I recorded the sound from the source at location B and the output from location A. The recording was done using a separate channel for each laptop/location. The difference between the recorded sounds is the latency. To make things clearer, the recorded sounds are:

1. One channel with no latency, since it is recorded from the laptop straight to the sound card.
2. The other channel carries the audio signal that travels over the Internet from one laptop to the other and is then recorded to the sound card. (A sketch of how the latency between the two channels could be estimated follows below.)
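As an aside, this is not how the latency was measured in these posts, where it was read off the recordings in Logic, but the two-channel layout would also allow it to be estimated automatically by cross-correlating the dry channel with the networked channel. The file name and the use of SciPy below are assumptions.

```python
# Sketch: estimate the latency by cross-correlating the dry channel with the
# channel that travelled over the network.  The file name is hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

sr, stereo = wavfile.read("location_recording.wav")      # two-channel recording
dry = stereo[:, 0].astype(float)        # recorded straight into the sound card
networked = stereo[:, 1].astype(float)  # travelled over the Internet first

corr = correlate(networked, dry, mode="full")
lag = corr.argmax() - (len(dry) - 1)                      # positive lag = networked channel is late
print(f"estimated latency: {lag / sr:.2f} seconds")
```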


Recording session set 1


Time: 12:15
Date: 18 May 09
Place: Home

Latency from the laptop at location A to the laptop at location B was 4 seconds and 9 frames. The following video shows this.



The latency from B to A, however, was 3 seconds and 15 frames. The following video shows this.



Recording session set 2

Time: 17:10
Date: 18 May 09
Place: Home

The recorded material shows that there is a change in latency. Latency from the laptop at location A to the laptop at location B was 2 seconds and 15 frames. The following video shows this.



The latency from B to A, however, was 3 seconds and 15 frames. The following video shows this.

All the videos show that in Ninjam the time of performance is also a factor that could be used in compositions. As we saw with the difference in download and upload speeds at different times, using Ninjam at different times of day can give different outcomes. In the next post a new experiment/composition will be presented that uses the latency that varies between the laptops and with the time of day.

Monday, 18 May 2009

Post 8: Experiment 1 (part 4), Ninjam approach

As an extension of the experiment on whether latency could produce the phasing effect, I will use the same setup as in post 7 but with the Ninjam software instead of eJamming.

Ninjam is free software that distributes audio over the Internet via a server. In eJamming the connection is made peer-to-peer between sender and receiver, hence the lower latency. Latency in Ninjam varies depending on the bandwidth of the server as well as of the sender and receiver. However, this variable latency can be a good thing for the purpose of the experiment.
Let me go through the equipment used once more.

Location A


Hardware:
Model Name: MacBook (white)
Processor Name: Intel Core 2 Duo
Processor Speed: 2.4 GHz
Memory: 4 GB
Mac OS: 10.5.7

iPod: iPod Touch 8 GB (2nd generation)

Software:
Ejamming Audio 2.0

Location B


Hardware:
Model name: PowerBook G4 12”
Processor Name: PowerPC G4(1.5)
Processor Speed: 1.5 GHz
Memory: 1.25 GB
Mac OS: 10.5.5

iPod: iPod 40Gb (dock connector)

Software:
Ejamming Audio 2.0

For recording the following were used:

Model Name: MacBook Pro 15"
Processor Name: Intel Core 2 Duo
Processor Speed: 2.16 GHz
Memory: 2 GB
Mac OS 10.4.11

Soundcard: Edirol UA-25 (2 in-2 out)
Speakers: Yamaha SH-50
Headphones: Roland RH-50

Software:
Logic Pro 7.2.1


I tried the same approach, with the two marimba notes, low and high, coming from the two locations A and B. For location A the iPod output (low-pitched marimba sound) was connected to the MacBook (white) line input. The output from the MacBook (white) was recorded through the Edirol sound card as the left input. The same thing was done with the G4 PowerBook: the output from the iPod was connected to the line input, and the output from the G4 PowerBook went to the right channel of the sound card. This software, Ninjam, does not allow any latency manipulation as eJamming does. However, there are some extreme latencies that can go up to 7 seconds, as we will see later on. Having no control over the latency, there is no way to obtain the phasing effect, since once a connection is established through a server the latency more or less stays the same. Even though the latency is high, it is stable to a certain extent.

The beat, even though it is not synchronised, stays the same throughout the recording. The video starts from 2:22 minutes onwards. There are also some interesting features in this software. Looking at the picture below, you can mute or solo your input. You can also control the output volume that you send to the other players. The video's left channel comes from location A and the right channel from location B.




Ninjam interface


The part above tycho@88.106.4.x - Latency test 2 is the audio received from location B. You can also mute and solo each track and player. That said, in the video the variations in the beat are due to the fact that I switch the mute button on, and as a result you hear both the audio fed in from the iPod and the audio received from the opposite location. As mentioned earlier, the yellow track is the left channel coming from the output of location A. If you were in location A you would hear the same things as on the left speaker; everything from location B comes through the right speaker. This is clear from 0:29 seconds onwards, when I mute the received audio from both locations. The low marimba sound can be heard through the left speaker and the high-pitched marimba note on the right. The figure below is a representation of the approach: sound from the iPods is fed into the laptop, from there it is sent to the other location/laptop through the Internet, and the received sound as well as the sound from the iPods is then recorded via a sound card.





Sunday, 17 May 2009

Post 7: Doing the experiment 1 (part 3)

Time: 10:00-18:00
Date: 16 May 2009
Place: Home

Since the phase difference between the two locations could not be heard clearly, I tried the following approach to make it clearer. For location A I recorded a marimba note, C4, playing on every beat at a tempo of 120 bpm. For location B I recorded the same marimba sound but with the note C5, also playing on every beat at 120 bpm. This is a better way of showing, and understanding, whether latency can be added. The iPods were triggered at the same time and were therefore in unison; if the latency between the two locations changes, that change would show up as a dislocation of the beats. While documenting this there were many problems, such as sound feeding back from the sound card output to the input, low sound quality and noticeable noise. To remove these I used the two eJamming accounts with the MacBook (white) as location A and the PowerBook G4 (a third laptop) as location B. The output from each location was fed into the sound card of the MacBook Pro, and both were recorded using Logic Pro. Moreover, to distinguish the two locations, location A is routed to the left speaker and location B to the right.



The test above showed that a different amount of latency between the two locations could produce the phasing effect in the final audible outcome. However, it was indeed hard to produce this using the eJamming software, since it is built in such a way as to eliminate latency. The next thing to test with the same setup is the Ninjam software (http://ninjam.com/), which is open-source/free software that does a similar thing to eJamming. A major difference, however, is that its latency is huge in comparison with eJamming.

Two things were not included in this experiment although they are mentioned in the suggested performance:

1. Four laptops are used, two in each location.
2. With real pianos there will be microphones capturing every sound, so a feedback effect will be present.

What is next?

1. Try the same setup with Ninjam
2. Use four laptops (two accounts in eJamming and two in Ninjam)
3. Use microphones if possible.

Saturday, 16 May 2009

Post 6: Doing the experiment 1 (part 2)


Time: 10:00-18:00
Date: 16 May 2009
Place: Home

To simulate the suggested performance as well as the ON/OFF effect, I composed three different musical fragments of 2, 4 and 8 bars. After that I created a long audio file with the ON/OFF effect (music-silence-music-silence, etc.). I also created audio files with the opposite effect, OFF/ON. These files are around 8 minutes long. I then imported the music files into two different iPods, which were connected as the input source for the two locations/laptops A and B. The musical composition and the creation of the two audio files were produced with Logic Pro.
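As an illustration only, here is a minimal sketch of how such an alternating ON/OFF file could be assembled programmatically; the actual files were made in Logic Pro, and the fragment below is a synthetic placeholder rather than the composed material.

```python
# Sketch: assemble a long ON/OFF (music-silence-music-silence...) file from a
# short fragment.  A synthetic tone stands in for the composed 2-bar fragment.
import numpy as np

SR = 44100
BPM = 120
frag_len = int(2 * 4 * (60 / BPM) * SR)        # 2 bars of 4/4 at 120 bpm = 4 seconds

t = np.arange(frag_len) / SR
fragment = 0.5 * np.sin(2 * np.pi * 261.63 * t) * np.exp(-3 * t)  # placeholder material
silence = np.zeros(frag_len)

cycle_on_off = np.concatenate([fragment, silence])   # ON then OFF
cycle_off_on = np.concatenate([silence, fragment])   # the opposite effect

target_len = 8 * 60 * SR                             # roughly 8 minutes, as in the post
repeats = target_len // len(cycle_on_off) + 1
on_off_file = np.tile(cycle_on_off, repeats)[:target_len]
```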

The main aims of this experiment are:
1. To find out whether latency can be added between two places when the performers play at the same tempo with no time variations.
2. To investigate further how networked audio works in relation to latency.
3. To explore existing software that uses networked audio and therefore latency.

The experiment.


Laptop A plays the 2-bar music fragment, i.e. 4 seconds of music at 120 bpm, over the Internet using the eJamming Audio 2.0 software. The software interface has various modes that deal with the sound quality and the latency.


In the upper right is the section that allows the latency and audio-streaming quality to be changed. Before going any further, it is wise to give a short explanation of the functions in the software. There are three modes, Jam, Sync and VRS, each of which deals with latency in a different way.

In Jam mode you cannot record, but you can adjust latency: auto, manual or distance. Auto automatically translates the average ms between the two locations in order to have reasonable audio with no dropouts. Manual is the manual adjustment of the latency. Distance refers to the predetermined tempo expressed in milliseconds, so that the other player sounds one beat behind; for example, at tempo 120 the 500 ms correspond to one quarter-note beat. Sync mode does not allow any added latency, since it is mostly used for recording sessions. VRS, the virtual recording studio, allows changes in latency but you can only record one track at a time.
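To make the numbers concrete, the "distance" value is simply the duration of one beat in milliseconds at the chosen tempo; a quick sketch (my own illustration, not an eJamming feature):

```python
# One beat at a given tempo is simply 60,000 ms divided by the bpm.
def ms_per_beat(bpm):
    return 60_000 / bpm

print(ms_per_beat(120))  # 500.0 ms -> the "distance" value mentioned for tempo 120
print(ms_per_beat(90))   # about 666.7 ms for a slower tempo
```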


In the suggested performance one of the two performers starts at the moment they hear sound from the other location. It was also mentioned that when A is ON, B is OFF. However, the following video attempt did not follow that plan.
This was because the experiment was aiming to find out whether latency could be added in order to allow the phasing effect. The software used here, as mentioned above, has options to adjust latency. As you can see from the video, nothing happened in terms of adding latency. This is because the software in Sync mode does not allow any latency interference, and since the two laptops were less than a metre apart there was no noticeable latency. As you can see from the video, nothing changes when I experiment with different latencies (look at the upper right at 1:07 min).



I also tried another way to find out whether latency can be added in order to create the phasing effect. A and B start at the same time, meaning that the play buttons were pressed simultaneously, which is possible since both laptops are in the same location, my house. Since I was not able to record through eJamming, the sound quality is not good. In order to record I used another laptop with the following specifications.

Hardware:


Model name: PowerBook G4 12”
Processor Name: PowerPC G4(1.5)
Processor Speed: 1.5 GHz
Memory: 1.25 GB
Mac OS: 10.5.5

Software:

Audacity 1.3.7

This had some positive and negative results. The positive result is that the phasing effect is possible. The negative is that when the amount of latency changes during playback, clicks and scratches are noticeable. The following video is the played-back sound file from location A, transformed into a video so that it could be uploaded to the blog.


Friday, 15 May 2009

Post 5: Doing the experiment 1 (part 1)

Since the suggested performance cannot be done for technical reasons, such as being in two different locations and needing two pianos, two performers, etc., I tried to simulate the experimental performance at my place using computers and other equipment.

In the suggested performance there are two locations, Athens and Birmingham, which I will call locations A and B; one laptop is A and the other is B.

Technical issues and specifications about the laptops

Location A therefore Laptop A has the following hardware:

Laptop A

Model Name: MacBook Pro 15"
Processor Name: Intel Core 2 Duo
Processor Speed: 2.16 GHz

Memory: 2 GB
Mac OS: 10.4.11
External devices for A:

Soundcard: Edirol UA-25 (2 in-2 out)
Speakers: Yamaha SH-50

Headphones: Roland RH-50

Midi Keyboard: Roland SH-201 Synthesizer
Microphone: AKG 1000s

Software
Ejamming Audio 2.0
Logic Pro 7.2.3

Location B has the following hardware:

Laptop B

Model Name: MacBook (white)
Processor Name: Intel Core 2 Duo
Processor Speed: 2.4 GHz

Memory: 4 GB
Mac OS: 10.5.7

Software

Ejamming Audio 2.0

I installed the eJamming Audio 2.0 software on the two laptops A and B in order to establish audio communication over the network. The eJamming software sends a high-quality compressed audio signal between performers using peer-to-peer technology. The following quotation explains it better.

“First, the eJamming software decreases the file sizes sent over the network. To do this, the company's engineers developed their own compression and decompression algorithms that shrink the file size, yet still maintain an audio quality higher than MP3, a common compression scheme, says Glueckman.
Second, each musician is directly connected with the other musicians in a jam session, instead of being routed through a server. This peer-to-peer configuration "results in a lower latency by routing the audio stream directly to your jam mates rather than, on average, doubling that transport latency by directing the audio stream through a remote server," says Bill Redmann, chief technology officer of eJamming.”
From http://www.technologyreview.com/Infotech/18783/


In the experiment, hypothetically, latency can produce this effect if the milliseconds introduced by the audio communication keep being added. The same musical material played from two different locations can then shift further and further apart each time.
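To make the hypothesis concrete, here is a small sketch of the accumulation under the assumption of a constant one-way latency (in reality it varies): each exchange adds that latency to the offset between the two locations, so the material drifts apart exactly as in manual phasing.

```python
# Sketch of the phasing hypothesis: every exchange adds the one-way network
# latency to the offset between the two locations.  The latency value is assumed.
PATTERN_SECONDS = 16.0   # 8 bars at 120 bpm in 4/4, as in the suggested performance
LATENCY = 0.120          # hypothetical one-way latency of 120 ms

offset = 0.0
for exchange in range(1, 11):
    offset = (offset + LATENCY) % PATTERN_SECONDS   # accumulated shift, wrapped to the pattern
    print(f"after exchange {exchange:2d}: the two locations are {offset:.3f} s apart")
# The piece would end when the offset wraps back to (near) zero, i.e. unison again.
```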


The score above is from Piano Phase by Steve Reich.

Thursday, 14 May 2009

Post 4: Experiment One 13/05/09

INTRODUCTION

As an overview, the experiments presented in this blog seek to discover and realise ways in which latency can be used creatively. The prime intention of these approaches is based on the argument that latency can be seen as a creative rather than a dysfunctional feature in compositions and performances. This experiment, labelled Experiment One, aims to test the suggested performance/theory from post 1. Let me refresh the reasoning behind this experiment.

Hypothesis: if the latency between the two locations keeps being added, then it is possible to recreate the musical phasing technique introduced by Steve Reich. Latency can therefore be used as a compositional feature.

Suggested Performance: “Four laptops are placed in two different locations, Birmingham‐ Athens. Laptop 1 and 4 are based in Birmingham and laptop 2 and 3 in Athens. There is a performer between laptop 1 and 4 that is called Red and the performer in Athens is called Blue. Laptop 1 is connected via Internet network to laptop 2 as well as 3 to laptop 4 via Internet. Laptops 1 and 4 are not communicating together as well as the laptops in Athens. The piece is similar to Steve Reich’s phase musical approach. Instead of having the performer phasing the music in and out the network latency will phase the music in and out as natural effect. Two same instruments, pianos, are located in each location. Red and Blue will have the same score, 8 bars of music or 16 seconds of music at 120 (bpm). A clock will be needed to count the seconds and beats. There will be an ON/OFF situation between the two. When the Red is ON Blue is OFF and vice versa. The ON signal means that the performer will play the 8 bars of music. The OFF signal means the pause for 16 seconds. Red will start as ON by playing the 8 bars. As soon as Blue hears the first note from Red that is the signal to start as OFF, so counting the 16 seconds of rest before setting to ON. After this Red and Blue have to play on time and count with the watch in order to be as strict as possible. Laptop 1 is the input of sounds from Red and laptop 2 the output of Red and the input of Blue. Laptop 3 is the input of Blue and laptop 4 is the output of Blue and the input of Red. Anti-clockwise circuited starting from laptop 1.The network latency in relation with the strict timing from Red and Blue will phase the music. A close set of headphone will be needed so that latent sounds and notes coming from laptops 2 and 4 cannot affect expectations and the performances of Red and Blue. The piece will stop when the two locations are in phase playing the 8 bars in unison. The outcome of the piece is based on the amount of latency that exists between the two locations.”

Undocumented attempts: as mentioned earlier, the university's network was not available for use, so I had to do all the experiments from my home network. While trying to configure the setup and the communication between the two, I ran into many technical issues and problems that were not included or thought of in advance.


Tuesday, 12 May 2009

Post 3: Some comments


In relation to Post 2 in this blog, and after a short discussion with Greg today about the experiment, I have decided the following:

  • Instead of writing down all the equipment in advance, I will list it separately for each experiment.
  • There will be 3 experiments overall, each aiming at a different kind of latency creativity.
  • In order to present the experiments I intend to screen-record the whole process from the laptops' monitors. Written documentation will also accompany each experiment, covering the rationale behind it, the process, and the final findings and thoughts about it.
In today's discussion Greg also mentioned the idea of using the ON/OFF play/silence from Post 1 as a binary representation of 0s and 1s. Furthermore, regarding the locations with Internet availability, there were some unpleasant findings. I tried connecting with both pieces of software, eJamming Audio and Ninjam, but neither of them worked from the VRU. The university network does not allow any Internet communication outside port 80; basically, the university only opens the default port for http:// traffic. However, both programs work at my place, since there are no such restrictions on my home network.

I will not spend time trying to find a workable way around the university's network. Since these experiments do not have to be presented in a public performance to obtain a mark, I will do them from my place.

The location for the experiments is therefore my home network, and the software that will be used is eJamming Audio 2.0 (http://www.ejamming.com/) and Ninjam (http://ninjam.com/).

Post 2: Hardware and Software





As a starting point for this experiment it is wise to lay down all the hardware and software available. Working with the equipment that is already available from the VRU labs will be beneficial in three ways:

  • 1. No money will be spent.
  • 2. It will save time as well, since there will be no alternative route of thinking about the usability of different equipment. For example, an old and slow wireless router may give a different outcome from a fast, new wireless router. However, that debatable approach might be taken up in the future, since it is not within the scope of this experiment.
  • 3. Using the same hardware and software will make it easier to spot any dysfunction in the experiments. It will also be easier to monitor the outcome, since the hardware and software relationship stays the same and is not a concern.
The next stage is to write down the possible needs for the experiment and to find out whether they are available.

Location.


There are four locations so far that the experiments can take place.

  • 1. VRU lab(Margaret Street)
  • 2. VRU Studio (Digbeth)
  • 3. Conservatoire
  • 4. Tychonas House

There is some uncertainty about the suitability of these locations, since I have to make sure it is possible to undertake the experiment there. So, things to do this week are:

  • 1. Write down all the hardware and software needed.
  • 2. Check the availability of each location.

Post 1: The suggested experiment and thoughts

For creative reasons I’ve decided to create a blog about this latency experiment that will include all the necessary technical information as well as thoughts and approaches.

First I will transfer to this blog all the writing related to this experiment from my main blog.


The following is an extract from the suggested performance mentioned in the essay.


“Four laptops are placed in two different locations, Birmingham‐ Athens. Laptop 1 and 4 are based in Birmingham and laptop 2 and 3 in Athens. There is a performer between laptop 1 and 4 that is called Red and the performer in Athens is called Blue. Laptop 1 is connected via Internet network to laptop 2 as well as 3 to laptop 4 via Internet. Laptops 1 and 4 are not communicating together as well as the laptops in Athens. The piece is similar to Steve Reich’s phase musical approach. Instead of having the performer phasing the music in and out the network latency will phase the music in and out as natural effect. Two same instruments, pianos, are located in each location. Red and Blue will have the same score, 8 bars of music or 16 seconds of music at 120 (bpm). A clock will be needed to count the seconds and beats. There will be an ON/OFF situation between the two. When the Red is ON Blue is OFF and vice versa. The ON signal means that the performer will play the 8 bars of music. The OFF signal means the pause for 16 seconds. Red will start as ON by playing the 8 bars. As soon as Blue hears the first note from Red that is the signal to start as OFF, so counting the 16 seconds of rest before setting to ON. After this Red and Blue have to play on time and count with the watch in order to be as strict as possible. Laptop 1 is the input of sounds from Red and laptop 2 the output of Red and the input of Blue. Laptop 3 is the input of Blue and laptop 4 is the output of Blue and the input of Red. Anti‐clockwise circuited starting from laptop 1.The network latency in relation with the strict timing from Red and Blue will phase the music. A close set of headphone will be needed so that latent sounds and notes coming from laptops 2 and 4 cannot affect expectations and the performances of Red and Blue. The piece will stop when the two locations are in phase playing the 8 bars in unison. The outcome of the piece is based on the amount of latency that exists between the two locations.”

There are three stages in this experiment that, for now, need to be explored:
1. The unrelated facts (hardware, software, place, instruments, score, etc.)
2. The process (the way to put all this together and how the parts will work and interact with each other)
3. The outcome: the actual performance, what is expected, the hypothesis of the experiment and the desired result.


The third stage, the hypothesis and assumption, is a good starting point, since it strongly reflects back on stages one and two.


The Main outcome and the reasoning of the experiments.


A general outcome of the experiments is the following:

THE MAIN DIRECTION OF THE EXPERIMENT IS TO MAKE AVAILABLE ALL THE COMPONENTS SO THAT LATENCY IS EXPRESSED AS IT IS, SHOWING A CREATIVE PERSPECTIVE.


Experiment 1


The intended outcome of experiment 1, as mentioned earlier, was to allow latency to phase the music from two separate locations in and out, in order to experience a similar musical phasing.
The debate between Greg and me in a short chat was whether:

  • (a) this will work in the first place, and if so,
  • (b) the 16 seconds of silence will become 32 as they play along (Greg's suggestion), or
  • (c) they will move towards unison from the very beginning (Tychonas' suggestion)

There are also many other questions to be answered, but most of them will appear during the process. There is an ON/OFF situation between the two locations: when Red is ON, Blue is OFF and vice versa. The assumption, for experiment 1, is that latency between the two locations will be added every time Red or Blue is ON.

Moreover, the following questions will be addressed:

  • 1. Is there added latency between the two locations, how much is it, and how can it be measured?
  • 2. How long will it take to have a noticeable effect?
  • 3. How will Red's audience hear the piece in relation to Blue's?
  • 4. Is there a movement towards unison, or a separation of the two leaving 32 seconds of silence and 32 seconds of music?
  • 5. How can the two be synchronised so that there is a reference point from which to study the experiment?

The next step is to shape the design so that it is possible to know the equipment, place and time, instruments and performers.