Mikhail Bakhtin has a posse!
we are not making music - it may be boring and that is ok - don't have ideas about what would be "good"
I like the idea of a system where we are listening for some trigger, and then when that trigger happens, it triggers a rotation or shift in the system. Here is an instance of the famous self-playing Krell patch. It seems related: https://youtu.be/Y4hdvmix9Uo . I am enough of a Cage disciple to always be suspicious of intuitive improvisation. Aleatoric rules are a great way to break "intuitive" habits.
I personally have a hard time with the idea of no words, because words to me are sound - just as words have meaning, sound does too, so why should words be eliminated? Words from languages I do speak have more intrinsic meaning than words from languages I do not speak - but my brain will always be connecting, or trying to connect, some meaning, be it to sounds or words. While I agree that a part of improvisation is letting go of meaning, I think that it's in that letting go that we find connections we didn't know were there - hence "building" something...
In short, I feel that to go into foreignness, it could be fun to short-circuit words, but I'm not sure it'd serve us best so early on in our process. Maybe it's about building constraints around language that bind us, but do not bind language? Like, being able to use only the first or last syllable of a word, or its vowels, or its consonants?
I would say let's try to do a session with no "speaking", no "words" at all, nothing we could recognize as such. Is that possible? Can we find a foreign territory? An interesting territory? Can we give room to each other's strangeness? Can we listen? Can we also allow ourselves to be bored?
Try to have a unified background for your image.
Connection one hour before: talking together, adapting, attuning
sound tests
Listen individually to some of Derek's music ...
Set an alarm
Blindfold on

Tuesday May 5th, 21:00 Paris time.
Duration: two hours
Using a Zoom interface
Will we use triggers again?
This time we decided to turn off all noise reduction in Zoom and in our computers' microphone settings.
In the future we will try other platforms - explore the differences in how they mix streams, compress, and suppress noise.
But we will always do "with"! With algorithms, with time lags, with bandwidth cuts, with ...
Original sound allows you to preserve the sound from your microphone without using Zoom's echo cancellation and audio-enhancing features. This is ideal if your microphone or sound equipment has these features built-in and you do not need the additional enhancement.

Zoom: disable noise suppression and activate original sound? Or not? You will first need to enable the setting for yourself, a group of users, or your account in the Zoom web portal. After enabling it on the web portal, you need to check the option in your Zoom client to display Turn On Original Sound in the meeting. Once enabled in the client, you can turn this setting on and off in your meetings as needed.
Tools to research:
Soundjack https://www.soundjack.eu/ Recommended by Ximena Alarcon
Second Life also offers interesting sound possibilities???
More sophisticated and also more complicated: JackTrip - peer to peer.
Artmesh by Ken Fields.


The World-Wide Tuning Meditation https://operawire.com/q-a-raquel-acevedo-klein-ione-claire-chase-on-the-virtual-performance-of-pauline-oliveross-the-world-wide-tuning-meditation/
Zoom recommendations:
When you enter, your video and audio will be turned OFF; please leave it that way. If you click on the "up arrow" next to your camera icon (lower left-hand toolbar), your VIDEO SETTINGS can be accessed. Go to "video settings", then click the box "Hide non-video participants," which will then feature performers only, eliminating all of the non-participant video boxes in the gallery view. Select "Gallery View" in the upper right corner of the ZOOM window to see all performers at once.
Maybe it's just because I've got modular synthesizers on my mind, but it really felt like we were each modules in a modular synthesizer.

I was thinking of the whole performance as breaking up into:
1. ambient drone tones
2. percussive sounds
3. animal mouth sounds
4. human language (like phonemes and word parts and words)
5. affective/expressive human language (like sentences with emotions, even if in tongues)

But there is really a continuum between all these things. They don't divide up so easily. Even with a synthesizer, if you slow down a tone-generating oscillator, it turns into percussion. And the control voltages (triggers, gates, and bursts) that tell modules what to do when are still just electrical voltage waves, same as the waves that make the tones and the percussion sounds. It's all just electrical waves. There are even "formant" filters that can make the sound of human language phonemes (cf: https://youtu.be/SSRaGH9nu9A ).
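The tone/percussion continuum can be illustrated numerically: the same sine equation gives a pitched tone at audio rate and a slow rhythmic pulse at sub-audio rate. A minimal sketch in Python (pure standard library; the 220 Hz and 2 Hz values are arbitrary illustrations, not anything from the performance):

```python
import math

def sine_wave(freq_hz, duration_s=1.0, sample_rate=44100):
    """Samples of a sine oscillator; only the frequency differs below."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

tone = sine_wave(220.0)   # 220 Hz: heard as a pitched tone
pulse = sine_wave(2.0)    # 2 Hz: the same equation, heard as a rhythmic pulse

def upward_zero_crossings(samples):
    """Rough "events per second" measure: count negative-to-positive crossings."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a <= 0 < b)

print(upward_zero_crossings(tone))   # roughly 220 cycles per second
print(upward_zero_crossings(pulse))  # roughly 2 cycles per second
```

Nothing changes in the code between the two calls except the frequency; perception does the categorizing, exactly as with a slowed-down oscillator on a modular synth.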

Sliding from one category into another (drone into percussion, percussion into animal sounds, animal sounds into phonemes), or hanging out in the liminal spaces between, was the coolest part this time for me.

Back to modular synthesizers! Some modules specialize in doing certain specific things, but then there are other modules that can do a little bit of everything, depending on how you patch them and set them (like this module: https://frap.tools/products/falistri/ ). In the performance, it was like we were each one of these multi-function modules: able to do any one of the five functions above; able to receive, recognize, and send any trigger, gate, or burst; and able to internally reconfigure (repatch) our own functionality, improvisationally in real time. Yet each of us maintained our distinct approach (we were not all the same exact multi-function module). So it was like a giant Artificial Intelligence modular synthesizer, except the intelligence was not artificial but actual. Like a synthesizer where each module decides what she wants to do, without any single meta-"composer" at the helm.

The triggers that evolved for me this time were more complex:
1. If a drone develops (everyone doing a drone), make percussion noises (unless I initiated the drone)
2. If percussion noises develop (everyone doing percussion), make drone noises (unless I initiated the percussion)
3. If animal sounds develop (everyone doing animal sounds), listen and improvise.
4. If language develops (everyone "talking"), join in with language.
5. If more than one of the above things is happening, listen and follow what interests you.
6. If a loud burst sound happens, go to silence.

With just these rules, things became quite complex, and some duets, trios, and quartets emerged.
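The six rules above could be sketched as a small lookup function, something like the control logic of one "module" in the ensemble. A minimal sketch in Python; the category names, the `initiated_by_me` flag, and the "listen" response for a category I initiated myself are all illustrative assumptions, not part of the original rules:

```python
# One performer's trigger logic, as described in rules 1-6.
# Category names and the initiated_by_me convention are assumptions.

def react(active_categories, initiated_by_me=None, loud_burst=False):
    """Return my next action given what the whole ensemble is doing.

    active_categories: set of categories currently dominating the group,
        drawn from {"drone", "percussion", "animal", "language"}.
    initiated_by_me: the category I started myself, if any (assumed to
        mean I keep listening rather than counter it).
    loud_burst: True if a loud burst sound just happened.
    """
    if loud_burst:                                   # rule 6
        return "silence"
    if len(active_categories) > 1:                   # rule 5
        return "listen and follow what interests you"
    if active_categories == {"drone"}:               # rule 1
        return "listen" if initiated_by_me == "drone" else "percussion"
    if active_categories == {"percussion"}:          # rule 2
        return "listen" if initiated_by_me == "percussion" else "drone"
    if active_categories == {"animal"}:              # rule 3
        return "listen and improvise"
    if active_categories == {"language"}:            # rule 4
        return "language"
    return "listen"

print(react({"drone"}))                          # -> percussion
print(react({"drone", "language"}))              # -> listen and follow what interests you
print(react(set(), loud_burst=True))             # -> silence
```

Six performers each running a slightly different version of such a function is enough to produce the duets, trios, and quartets described: the complexity is in the coupling, not in any single rule set.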


This album (two sets of live improvisational modular synthesizer performance) seems very relevant:

And regarding the body motion triggers, this performance seems relevant:
The audio is all pre-recorded, and the percussion ensemble basically just "dances" to the audio, but it looks like they are making the sounds. I got to see this live and it was trippy.

Hate that I missed the end. Like distance running, the real benefit of a 30-minute run happens in the last five minutes. So I probably missed the best part.

I liked having time to let things develop, and I liked some of the things that developed. Endurance performance evolves you from:
1. trying to get all your tricks in
2. trying to fill the space with something
3. giving up "performing" and doing whatever happens when you give up performing
It was long, it was short.
Sometimes I was "looking" for contact / communication through my utterings, at other moments I was co-constructing a sound environment, and often I lingered in between, wandering through a soundspace.
Early on I already forgot about my trigger, but somehow it stayed with me for the whole two hours - like an anchor:

I could only use the word "vertal". Whenever I felt a connection, I could "talk" with other words, but should stay in the same register. When I felt I lost the connection again, I would have to go back to a free "vertal" only.

Listening, wandering, I found myself a few times lacking "voice"; I mean I was unable to make the sound I felt I needed. I viscerally understood that I missed a trained voice. I decided to go on anyway: "F, Annie, you just go".
Sound became expelled, delivered ...

Relevance theory aims to explain the well recognised fact that communicators usually convey much more information with their utterances than what is contained in their literal sense. To this end, Sperber and Wilson argue that acts of human verbal communication are ostensive in that they draw their addressees' attention to the fact that the communicator wants to convey some information. ...



What are we conveying? What do we want to convey? Trust, attention, affect, flow, life itself?
Utterings highlight video -
excerpts utterings 3 (12 min)