Chapter 1.3.1: Some Basic Definitions

It's time to talk about real randomness now – randomness that we can add to the networks we have built so far.

To get at least a whiff of a systematic approach to the surprisingly huge number of techniques and modules wearing the word “random” in their names, I'd like to make some musically useful definitions before jumping into patching and patches. On the following pages I'll call a process “**completely random**” when the probability of all technically possible next events is equally high (or low), and any prediction of what's going to happen would be nothing more than guessing, like guessing numbers in a lottery. “Random” doesn't even mean “changing”. To give an example: the successor of a pitched note event in a (completely) random process can be the same pitched note event again, as well as a note of any other pitch. “Completely random” means: I cannot exclude anything that's technically possible – not even 100 times the same event in a row. Even if the probability of hearing the same pitch 100 times in a row in a random process is quite low, I cannot exclude the possibility that it happens.

If there are borders, if I can exclude certain goings-on, then I'll call it “**limited random**”. When I use a quantizer to limit the occurring pitches (to stay with the aforementioned example) to a certain scale, but the successor of a note can still be any note of that scale (including the same note again), then we have limited randomness (you may even call it “tamed” randomness).

In probability theory the terms “random” and “stochastic” are used interchangeably, but I'm going to give them slightly different meanings in this book. I call a process “**stochastic**” when the probability of the occurrence of certain events is higher or lower than that of other events. If, e.g., the probability of the note G5 in a scale is set to double (or to 2, or any other number – depending on what the module I am using calls it), this note G5 will occur twice as often as any other note in the scale – provided I am patient enough to wait for a long time and let a high enough number of events go by, because I am still not able to predict WHEN G5 will occur.

The last example shows that a limited random process can be a stochastic process as well: I limit the possible note events to a certain scale AND influence the probability of a certain note in this scale. I think it's self-evident that a probability of 100% means no randomness at all.
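These definitions can be sketched in a few lines of code. The following Python snippet is a hypothetical illustration (not a model of any particular module): it combines limited randomness – only notes of a chosen scale can occur – with a stochastic weighting that makes G5 twice as probable as any other note.

```python
import random
from collections import Counter

# Limited randomness: only these pitches can occur
# (a C major scale, chosen arbitrarily for this sketch).
SCALE = ["C5", "D5", "E5", "F5", "G5", "A5", "B5"]

# Stochastic: G5 gets double weight, all other notes weight 1.
WEIGHTS = [2 if note == "G5" else 1 for note in SCALE]

def stochastic_notes(n, seed=None):
    """Draw n note events; each draw is independent, so WHEN G5
    occurs stays unpredictable - only HOW OFTEN is influenced."""
    rng = random.Random(seed)
    return rng.choices(SCALE, weights=WEIGHTS, k=n)

notes = stochastic_notes(100_000, seed=1)
counts = Counter(notes)
print(counts["G5"] / counts["C5"])  # close to 2.0 over a long run
```

Note that the ratio only approaches 2 over many events; in a short run of notes nothing guarantees that G5 turns up at all.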

An interesting method is using “**probability masks**”.

This means creating developing/changing probabilities. I could – just as an example – modulate the probability of the occurrence of lower or higher frequencies. To avoid a misconception: I don't mean setting developing (changing) limits, but changing probabilities. Probability masks play a huge role in granular sound processing and granular compositions (see my ebook “In the World of Grains – Part 1”). The following graphs show examples of a random process that produces different pitches. It's a stochastic process (in the sense of my aforementioned definition) with an overlaid probability mask. The pitches of the occurring notes lie more and more in higher pitch regions at first, but develop towards lower pitch regions after some time. Nevertheless, there are still a number of pitches outside the regions defined by the probability mask. How many pitches lie outside the mask depends on how high or low I have set the probability – how “strong” the mask is.

With 100% probability of the occurrence of pitches within the blue borders we have a **limited random process with changing limits**.
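A probability mask can likewise be sketched in code. In this hypothetical Python example (MIDI note numbers and all parameter values are my own choices for the sketch) the centre of a pitch distribution first rises and then falls again; pitches far from the centre remain possible, just increasingly improbable – exactly the “pitches outside the mask” effect described above.

```python
import random

def masked_pitches(n_events, seed=None):
    """Pitches (as MIDI note numbers) drawn from a normal distribution
    whose centre first rises and then falls - a simple probability mask.
    Pitches far from the centre stay possible, just increasingly unlikely."""
    rng = random.Random(seed)
    pitches = []
    for i in range(n_events):
        t = i / (n_events - 1)          # position in the piece, 0.0 .. 1.0
        # centre rises from MIDI note 48 to 84, then falls back to 48
        centre = 48 + 36 * (2 * t if t < 0.5 else 2 * (1 - t))
        # the standard deviation sets how "strong" the mask is: the
        # smaller it is, the fewer pitches fall outside the masked region
        pitches.append(rng.gauss(centre, 6.0))
    return pitches

pitches = masked_pitches(1001, seed=2)
```

Plotting such a list of pitches over time would reproduce the kind of graph described above: a cloud of events that drifts upwards and then back down.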

I'll return to these more basic (and a bit theoretical) aspects later, in the chapter about compositional aspects, but now it's time to make some sounds and do some patching.

It might not seem so to you, but there are some aspects of using sample & hold techniques that a remarkable number of people are not aware of. The “classic” setup reads: patch white noise into the S&H unit, which is then triggered by a dedicated clock generator or LFO. The values that the S&H unit puts out are completely random (the limitation to voltage levels which normally don't cross the borders of audibility is of no musical meaning – therefore I call it “completely random”).

These random voltages can be used anywhere in our networks, either as additional components or substituting for any of the aforementioned network components, e.g. to randomly change the frequency of an LFO or to randomly take a sample with a shift register etc. As long as I use unfiltered white noise as the source for the unit to take samples from, things can be quite nice and fun, but not really exciting.

Things get more interesting when I start filtering the white noise before I feed it into the S&H unit. And as even filters with quite steep filter curves (e.g. 48 dB per octave) don't cut off completely at the cut-off frequency, we get a stochastic process. With a low-pass filter, e.g., the **amplitudes** of higher frequencies get lower the farther they are above the adjusted filter frequency – and so do the randomly sampled voltage levels. Or – in other words – the probability of higher CV outputs of the S&H module declines.
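The effect can be simulated. The sketch below uses a crude one-pole low-pass filter and a take-every-Nth-value stand-in for the S&H unit (both my own simplifications): the filtered noise yields sampled voltages that cluster around lower absolute levels, without high levels being excluded entirely – a stochastic, not a limited, process.

```python
import random

def one_pole_lowpass(signal, alpha):
    """Very simple one-pole low-pass: high frequencies are attenuated,
    not removed completely - hence a stochastic, not a limited, process."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def sample_and_hold(source, every):
    """Crude S&H stand-in: keep every n-th value of the stream."""
    return source[::every]

rng = random.Random(3)
white = [rng.uniform(-1.0, 1.0) for _ in range(100_000)]

raw_cv = sample_and_hold(white, 100)
lp_cv = sample_and_hold(one_pole_lowpass(white, alpha=0.1), 100)

def rms(values):
    return (sum(v * v for v in values) / len(values)) ** 0.5

# After filtering, high voltage levels become much less probable,
# so the RMS of the sampled CVs drops markedly - but occasional
# larger values still occur.
print(rms(raw_cv), rms(lp_cv))
```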

But when I limit the amplitudes altogether by applying a VCA (either between the white noise source and the S&H unit, or between the output of the S&H module and the “rest of the world”), then I get limited random processes, as I completely exclude high voltage levels.

Let me return to filtering. When I modulate the filter's cut-off frequency, I create a probability mask.

But don't be mistaken: the reason for fewer high notes in the resulting random succession of pitches is NOT that high frequencies are missing from the spectrum of the noise. Rather, filtering certain frequencies out means there are simply fewer high amplitudes in the signal as a whole, which means the probability of sampling a high amplitude – a high voltage level – is reduced.

But white noise (or noise in general) is not the only source a sample and hold unit can take samples from. We can of course take the output of an LFO network and sample its voltage levels. Depending on the structure of the network we again get completely random sequences, limited random sequences or stochastic developments. The sampled network can serve as a modulation source (as in the last sub-chapters) and as a source to take samples from at the same time. The graphic shows the principle.

Another interesting source to let the S&H module take random samples from is recordings of anything you can think of: recordings of your own music, recordings of CV developments, field recordings etc. And if you make recordings especially for use with S&H units, you can take care of special amplitude distributions. For example, you can record rather low amplitudes with some high amplitude peaks here and there (or the other way round), or you can make the amplitudes rise and fall in certain ways. Planning your recording lets you gain either completely random sequences from your S&H unit, limited randomness or stochastic sequences. Or you can lay the foundation for probability masks.

You can even use the S&H unit to analyse your existing recordings in a very special way - if you think it might bring some insight 🙂 . By patching the output of a sample player module to the input of the S&H unit, we can make use of the full playback functionality of the sample player (looping, ping-pong playback, changing start point and endpoint, random jumps etc.) while the S&H module is taking random samples.

Using simple regular waves as sample sources brings the relation of the two participating frequencies into the centre of our attention. Let's feed a saw wave of frequency **Fsaw** into the S&H unit, which is triggered at a frequency of **Ftrig**. The relation **Fsaw** : **Ftrig** determines whether I get a regular arpeggio following the slope of the saw wave, a quite unsorted but at least somewhat regular-sounding succession of CVs, or a rather regular arpeggio with some irregular CV levels here and there.

With **Fsaw** = **Ftrig** there is no CV change at all at the output of the S&H module, because the regular sample source – e.g. a saw wave – is caught at always the same “point” in its development by the trigger.

By choosing a non-integer frequency relation and modulating the trigger frequency at a not-too-fast rate, we can make the resulting CV sequence (slowly) walk in and out of regularity.
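The relation Fsaw : Ftrig is easy to verify numerically. In this idealised sketch (my own simplification, ignoring any real module's slew or jitter) the S&H “samples” are just the saw wave's value at each trigger instant:

```python
def saw(phase):
    """Value of a saw wave (from -1 to +1) at a phase given in cycles."""
    return 2.0 * (phase % 1.0) - 1.0

def sampled_saw(f_saw, f_trig, n_triggers):
    """Idealised S&H: sample the saw at each trigger instant i / f_trig."""
    return [saw(f_saw * i / f_trig) for i in range(n_triggers)]

# Fsaw = Ftrig: the saw is caught at the same phase on every trigger,
# so the CV never changes.
constant = sampled_saw(128.0, 128.0, 8)
print(constant)  # eight identical values

# Fsaw : Ftrig = 3 : 8 - a repeating eight-note "arpeggio" of CV levels
arp = sampled_saw(48.0, 128.0, 8)
print(arp)
```

Choosing a non-integer ratio instead of 3 : 8 makes the pattern drift instead of repeating, which is exactly the walking in and out of regularity described above.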

The video below will show you some examples:

We already met the Turing Machine as well as shift registers in earlier parts of this book, and we will “meet” the Turing Machine again in detail in chapter 5, but here, in the parts about sources of completely random processes, both shall only be mentioned. The Turing Machine generates completely random CV developments with its central main knob set to the 12 o'clock position. Turning it towards the 5 o'clock position, the generated sequences get more and more regular and lose more and more of their randomness, until we get a completely regular repeating pattern at the 5 o'clock position (and with that said, I don't have to mention the Turing Machine again when I start talking about sources with adjustable amounts of randomness a bit later in this chapter).

Shift registers with feedback (see chapter 1.2.3) are as random as the source their first register is fed with, and as random as the gates which start and stop feeding them (again: see chapter 1.2.3 for details). Therefore they can generate completely random sequences as well as completely regular repeating sequences, as well as anything in between.

These modules are named after the physicist Frank Gray, who invented a binary system in which each two consecutive numbers differ in only one single bit. You will have to stand a tiny, tiny bit of mathematics here:

The binary equivalent of the decimal value “1” is “0001” (in a 4-bit representation), and the binary equivalent of “2” is “0010”. So, two bits have to be changed. But in Gray code the decimal number “2” corresponds to “0011” – only one bit has to be changed.

At first no output is high (high voltage level, gate open, trigger sent etc.), representing the number “0”. The next step makes output 1 high (= “1”). The next step (= “2”) makes output 2 high, leaving output 1 high as well.

The next step (= “3”) makes output 1 low, but leaves output 2 high. The next step (= “4”) makes output 3 high, leaving output 2 high and output 1 low. And so on, according to the Gray code binary system. The pulses or gates coming out of the 8 outputs seem perfectly random, but in fact they strictly follow the Gray code formula.

We can use each of these outputs (or some of them at the same time, or even all together) to trigger events, open ADSRs etc. They cannot be used to generate sequences of different pitches directly, because there are only two values (low and high), but the video behind the following link shows how to use them to generate random pitch sequences nevertheless.
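For the mathematically curious: Gray code is easy to compute – the Gray code of n is n XOR (n shifted right by one bit). The following sketch reproduces the states of steps 0–7 described above and verifies that consecutive states differ in exactly one bit:

```python
def gray(n):
    """Gray code of n: consecutive values differ in exactly one bit."""
    return n ^ (n >> 1)

# The states of steps 0-7 as 4-bit patterns (output 1 = rightmost bit):
states = [format(gray(n), "04b") for n in range(8)]
print(states)
# ['0000', '0001', '0011', '0010', '0110', '0111', '0101', '0100']

# Each step changes exactly one output - and yet the individual
# outputs, heard as gates, seem to fire in random order.
for a, b in zip(states, states[1:]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```

Reading the rightmost bit down the list shows output 1 going high twice during these eight steps, while the second bit goes high once and stays high for four steps.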

Please look at the last picture (steps 0–7). The first (upper) output switches to high 2 times during these 8 steps. Output 2 switches only once from low to high, but stays high for 4 steps, etc.

**The block diagram of the patch looks as follows:** the upper three modules generate gate signals in seemingly random order. Whenever one of the outputs of the Gray Code module that is patched into the upper mixer is high, a gate signal is sent to the ADSR. This gate lasts as long as the corresponding output of the Gray Code module stays high. In the block diagram three of the outputs are patched into the upper mixer, so that three different gate levels occur at the mixer's output, but each of these levels will open the ADSR.

This video may make things even clearer to you, and shall serve as a “base camp” for experiments of your own.

The idea is simple: it's all about spreading a certain number of events (e.g. triggers or gates) as evenly as possible across a given area (e.g. the length of a sequence). Let's take a sequence of 10 steps. The most even way to spread 3 triggers is at step 1, step 4 and step 7.

Spreading 5 triggers across a sequence of 13 steps following this principle would lead to triggers at steps 1, 4, 6, 9 and 11.

The mathematical algorithm used to calculate these positions goes back to the Greek mathematician Euclid – hence the name. There is absolutely nothing random about it – but there wasn't any randomness in Gray Code modules either, and nevertheless they “sounded” VERY random, didn't they?

Well, with Euclidean sequencers it's a bit different. Their output doesn't even sound random – irregular and a bit eccentric, but not random. When we use Euclidean sequencers for creating rhythms with percussion instruments, we quite soon find ourselves in the world of traditional African rhythms. So, why are these modules of any importance for the matter of this book?

The answer reads: because we can modulate the length of the sequence (= the area across which we have to spread our events) as well as the number of events to spread across this sequence. And here it starts to sound random – not as perfectly as with Gray Code modules, but random enough for our purpose.

When I wrote “number of events” I should rather have called it “percentage of steps which initiate an event”, because increasing the length of a sequence with a Euclidean sequencer increases the number of events accordingly, and reducing the length of the sequence reduces the number of events accordingly too. Otherwise modulating the length of the sequence AND the number of events would lead to impossible situations (e.g. more events than available steps etc.).

Let's say I have a sequence of 21 steps and place 9 events evenly (they will be at steps 1, 4, 6, 8, 11, 13, 15, 18 and 20). Then I reduce the length of the sequence to, let's say, 17. The sequencer then automatically reduces the number of events from 9 to 7 (at steps 1, 4, 6, 9, 11, 14 and 16).
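The spreading can be computed with Bjorklund's algorithm, which is what most Euclidean modules implement (note that different modules may rotate the resulting pattern differently, so your module's step numbers may be shifted). A minimal Python sketch that reproduces the step numbers from the examples above:

```python
def euclidean(pulses, steps):
    """Bjorklund's algorithm: spread `pulses` onsets as evenly as
    possible across `steps` steps (pulses and steps must be positive,
    pulses <= steps)."""
    a = [[1] for _ in range(pulses)]
    b = [[0] for _ in range(steps - pulses)]
    while len(b) > 1:
        pairs = min(len(a), len(b))
        a, b = [a[i] + b[i] for i in range(pairs)], (a[pairs:] or b[pairs:])
    return [bit for group in a + b for bit in group]

def onset_steps(pattern):
    """1-based step numbers that carry a trigger."""
    return [i + 1 for i, bit in enumerate(pattern) if bit]

print(onset_steps(euclidean(3, 10)))  # [1, 4, 7]
print(onset_steps(euclidean(5, 13)))  # [1, 4, 6, 9, 11]
```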

Things get quite interesting with a larger number of events (e.g. 7 events in a sequence of 11) and a slow clock.

Let's enter the realm of modules with adjustable probability and adjustable randomness. There are quite a few adjustable random trigger modules out there (e.g. Mutable Instruments' “Grids”, just to name one of them), but the principle is more or less the same with all of them: we can set the pattern length, and the module randomly chooses the steps at which a trigger impulse shall be sent to a percussion unit or any other module that reacts to trigger pulses.

The main difference lies in the definition of “amount of randomness” (and of “randomness” in general).

Some modules offer a certain (quite high) number of fixed patterns between which we can make the module choose in a random way (that's, in short, the “Grids”-like method).

Other modules let us set a pattern and the number of active steps (triggers), which the module shall randomly change from cycle to cycle.

And there are modules which define “amount of randomness” as “position **and** number of active steps”. With the module “Trigs” I use one of these last-mentioned ones here, but in chapter 5 (“Generative Potential of Certain Modules”) I also introduce, demonstrate and explain examples of the other categories of random trigger sequencers.

Trigs offers 4 independent patterns of up to 64 steps in length. I can set the patterns and leave them unchanged (= no randomness at all), or I can force the module to change my patterns after each completed cycle – or even at any time I want it to change the current patterns.

The new patterns (then set by the module itself) are always completely (= 100%) random.

The amount of randomness doesn't set the random position of triggers in relation to the existing pattern (nearer to or farther from the position of an existing “old” trigger), but the maximum number of triggers in the sequence. The positions of these new triggers are always completely random.

The module “Trigs” is just an example. What matters is the principle of how to deal with randomness and probability. The modules you own may have a somewhat different functionality, but if they belong to this family of random trigger modules, they will follow the principles which I describe here.

The next picture shows an example of what different amounts of randomness may look like. The positions of the trigger pulses in the changed (second) patterns are completely random even with a low setting of the “Random” parameter.
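The behaviour described here can be sketched as follows. All names and the exact parameter scaling are my own assumptions for illustration – the real “Trigs” module may map its “Random” parameter differently – but the principle is the same: the randomness amount caps the number of triggers, while their positions are always completely random.

```python
import random

def random_pattern(length, randomness, seed=None):
    """Generate a new trigger pattern. `randomness` (0.0 - 1.0) caps the
    NUMBER of triggers; their POSITIONS are always completely random.
    (Names and scaling are illustrative assumptions, not the real module.)"""
    rng = random.Random(seed)
    max_triggers = round(randomness * length)
    n_triggers = rng.randint(0, max_triggers)
    positions = set(rng.sample(range(length), n_triggers))
    return [1 if i in positions else 0 for i in range(length)]

pattern = random_pattern(16, randomness=0.5, seed=4)
print(pattern, "-", sum(pattern), "triggers (at most 8 with randomness = 0.5)")
```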

The video below invites you to go on a journey of your own with “Trigs”.

With stochastic sequencers we can adjust the randomness (some call it “probability”) of each and every single step (= randomness of pitch) as well as the randomness of the pattern (= randomness of steps).

For randomness of the step sequence, most stochastic sequencers offer only a few predefined kinds of randomness instead of (continuously increasing/decreasing) random knobs. We can choose between random changes of the playback direction, skipping different fixed numbers of steps, or a combination of both (changes of direction + skipping steps).

The module which I use as an example here (“bordL”) offers two different kinds of step randomness: randomly jumping to and fro, but only between neighbouring steps, and unlimited skipping and jumping to and fro across all steps of the programmed sequence.

For randomness of pitch, the module offers continuously adjustable ranges of possible frequency deviation. With narrow ranges of possible pitch deviation, the probability of sustaining the programmed pitch is high.

Stochastic sequencers not only serve as one of the basic **sources** of randomness in a patch; they are also valuable and very flexible **targets** of external modulation. Randomly modulating the clock of the sequencer while adjusting moderate pitch deviations, limiting the step randomness to neighbouring steps, and sending the result through a quantizer leads to the effect that it takes listeners some time to decide whether they are hearing a composed succession of variations of a melody, or a random process.

The video below will open the field for experiments and research of your own.

Imagine a couple of gates which all receive the same clock impulse. Some of them open and some don't – randomly. Each of the gates is equipped with a probability function that increases or decreases the probability of this gate opening. This is how probability gates work.

Some of these modules offer a choice between multimode and single mode. In multimode more than one of the gates can open at a time (but they are still randomly chosen). In single mode only one gate at a time opens (randomly chosen). “At a time” means “per incoming clock impulse”.
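A sketch of both modes in Python (a hypothetical model of the principle, not any specific module's implementation):

```python
import random

def probability_gates(probabilities, mode, rng):
    """One incoming clock impulse: return which gates open.
    'multi':  every gate tosses its own coin independently.
    'single': exactly one gate opens, chosen with weighted probability."""
    if mode == "multi":
        return [rng.random() < p for p in probabilities]
    winner = rng.choices(range(len(probabilities)), weights=probabilities, k=1)[0]
    return [i == winner for i in range(len(probabilities))]

rng = random.Random(5)
probs = [0.9, 0.5, 0.1]  # per-gate probability settings

# 1000 clock impulses in multimode: gate 1 opens far more often than gate 3
opens = [0, 0, 0]
for _ in range(1000):
    for i, is_open in enumerate(probability_gates(probs, "multi", rng)):
        opens[i] += is_open
print(opens)  # roughly [900, 500, 100]
```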

More comfortable modules offer even more, e.g. gate outputs **and** trigger outputs **and** clock outputs. Some even offer CV outputs with the CV level depending on which or how many gates are open at a time, and some very versatile modules of this kind even add sample & hold functions, logic functions and more.

But even the simplest modules let us randomly switch voices on and off, switch whole modulation chains or whole groups of modules in our patch on and off, randomly switch between different signal paths, and more.

The video below may entertain you and rouse your creativity.

Bernoulli gates could well have been a part of chapter 1.3.8, because a Bernoulli gate is nothing other than a single random gate which randomly chooses between only 2 outputs. It's like tossing a coin.

Yes, they could – if there weren't Mutable Instruments and their module “Branches”. This module is such a classic that it deserves a little chapter of its own (even if this is not the part of the book where I actually discuss specific modules).

“Branches” contains 2 Bernoulli gates; both “toss their coins” independently, but on the same clock input. One clock signal therefore generates two independent coin tosses at the same time. But when I patch in two different clocks, each of the two Bernoulli gates listens only to its own clock input.

We can adjust the probability of each coin toss more towards the left or the right output (with 100% either way there is no randomness any more), and we can even modulate this probability via CV.

There is a special mode called “toggle”. In toggle mode “Branches” keeps sending the trigger impulses to one and the same output as long as the “coin” doesn't fall on the opposite side, and in a third mode called “latch” the CV level at an output stays high as long as the “coin” doesn't fall on the opposite side. In this latch mode the outputs of “Branches” send “gate open” levels of randomly different lengths. The VCV Rack version of “Branches” does not offer toggle mode. The following image shows an example of 5 “coin tosses” (five incoming clocks).
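The behaviour of a single Bernoulli gate, including toggle mode, can be sketched like this (a hypothetical model of the principle; the names are mine, and `p` stands for the probability setting of the knob – in direct mode the probability of output B, in toggle mode the probability of switching to the other output):

```python
import random

class BernoulliGate:
    """Sketch of one 'Branches'-style Bernoulli gate with outputs A and B."""

    def __init__(self, p=0.5, seed=None):
        self.p = p                     # probability setting (the knob)
        self.rng = random.Random(seed)
        self.current = "A"

    def clock(self, mode="direct"):
        """One incoming clock impulse; returns which output fires."""
        if mode == "direct":
            # plain coin toss: p is the probability of output B
            self.current = "B" if self.rng.random() < self.p else "A"
        else:  # "toggle": p is the probability of switching outputs
            if self.rng.random() < self.p:
                self.current = "B" if self.current == "A" else "A"
        return self.current

gate = BernoulliGate(p=0.5, seed=6)
print([gate.clock("direct") for _ in range(5)])  # five coin tosses

# In toggle mode with p = 1.0 the gate switches on every clock:
toggler = BernoulliGate(p=1.0)
print([toggler.clock("toggle") for _ in range(4)])  # ['B', 'A', 'B', 'A']
```

Latch mode is then just a matter of how the chosen side is output: a sustained gate level instead of a trigger, held until the coin falls on the other side.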

The video below shows some more examples, and may serve as a “base camp” for experiments of your own.

And now go and set up your “random – pseudo-random – not random at all – different probability” network, as complex as hell, and modulate .... hm, modulate what?

That is exactly the question I'm going to answer in the next chapter – in chapter 2.