Even by just scratching the surface of the subject, it is possible to see that programming languages are a very powerful tool for developing original artefacts.

For a beginner trying to interface creativity with the machine, the main limit is of course knowledge. Learning to program is a long process that requires a great deal of theory and applied study. Using the language, a programmer can define and communicate simple and complex operations ready to be read and executed by the machine.

With advanced knowledge a programmer can transform a computer into a high-end digital audio workstation. He might also be able to spot hard limits of the language itself.

First, it is important to understand what a programming language is, what the different types of languages are, how they work and how they are used to implement audio solutions.

Finally, the research explores the computer music side of programming languages through Csound, establishing what type of programming language Csound is, how it works and how it is possible to create music or music software with it.

Programming Languages

“In informatica, insieme di parole e di regole, definite in modo formale, per consentire la programmazione di un elaboratore affinché esegua compiti predeterminati.” (Enciclopedia Treccani)

“In computer science, a set of words and rules, defined formally, to allow the programming of a computer to perform predetermined tasks.” (Translation)

There are high level and low level languages:

  • High level: it allows the programmer to work on real logical operations, leaving to a program, called a compiler, the task of translating them into processor instructions written in machine language (binary code sequences understandable by the processor).
  • Low level: machine language and assembly are low-level languages. Each CPU has its own machine language, which really is a series of 0s and 1s. Assembly language allows users to name those numeric sequences in order to create logical structures. These languages operate directly on the hardware, so they do not need to be passed through a compiler in order to work.

Finally, there are the 4GLs (fourth-generation languages), which represent the class of computer languages closest to human languages.

There are different types of High Level Languages:


Imperative languages include object-oriented languages, where the programmer actually creates objects and programs their features, but also procedural languages, where the programmer lists a series of operations for the machine to execute.


Declarative programming languages are those in which the programmer merely declares properties of the desired result, but not how to compute it.
Examples are functional, logic-based and mathematically based languages.
The programmer expresses the desired instructions as results or solutions of preset functions, or of mathematical or logic problems.
Sound is in fact measurable, and using a functional language the programmer can really focus on the creative side rather than on the computing processes.

Sound in a Computer

“The experience of sound results from our eardrum’s sympathetic response to the compressions and rarefactions of the air molecules projected outward in all directions from a vibrating source. This vibrating pattern of pressure variations is called a waveform.”

(“Theory: Sound, Signals and Sampling. The Csound Book, Richard Boulanger”)

If the sound is acoustic, a transducer is needed to convert the air pressure wave into an electrical pressure wave (an analog signal).

After that, it is necessary to use a low-pass anti-aliasing filter (to avoid misrepresenting frequencies above half the sampling rate, per the Nyquist theorem) and an analog-to-digital converter (ADC) to turn that information into numbers that can be stored on a computer. Finally, a digital-to-analog converter (DAC) is needed to output sound from the speakers.

If the sound is digitally synthesized, it is already represented in numeric form.

Sound is physical, so it can be measured. Frequency is measured in Hz (1 Hz = 1 cycle per second). The human ear can only hear frequencies between 20 Hz and 20 kHz.

The Nyquist theorem states that a digital system cannot accurately represent a signal above ½ of the sampling rate. That is why CD-quality audio takes 44,100 samples each second.
With a 16-bit linear system it is possible to sample the analog waveform with 16 bits of precision.
In essence, the analog signal gets quantized into a series of little numeric snapshots that together represent the analog sound wave.


Csound is the audio programming language used for this application, and it is a sound compiler, or also a DSL (domain-specific language).

That means it is specialized in a particular application domain (audio), unlike a GPL (general-purpose language), which is applicable across domains. It is called Csound because it is written in C, and it is a functional programming language, as it is possible to use opcodes (functions implemented at a lower programming level).

A more detailed description of Csound is:

“Csound is a sound renderer. It works by first translating a set of text-based instruments, found in the orchestra file, into a computer data-structure that is machine-resident.

Then, it performs these user-defined instruments by interpreting a list of note events and parameter data that the program reads from: a text-based score file, a sequencer-generated MIDI file, a real-time MIDI controller, real-time audio, or a non-MIDI devices such as the ASCII keyboard and mouse.”
(“The Csound Book, Richard Boulanger”)

A user's type of programming really consists in using the tools provided by a lower-level programmed structure. In fact, Csound is an open-source programming language that comes with a starting toolkit of over 450 signal processing modules, and it also allows any developer's implementations to be of benefit to other users.

Csound has a main online community, an email list and a Facebook group. It also has an instructional/inspirational manual with a CD-ROM and a full-featured documentation website, where each function of the language is explained and demonstrated.

The CsoundQt interface allows users to clearly implement lines of code, keeping logical order and orientation.

It highlights and detects system errors, reporting the cause and place of the error. It also gives different colours to the different functions of the implemented code to help orientation.

All of that makes learning quite easy and quick, though of course with a lot of study and commitment.

The figure below illustrates the process by which Csound makes sound.

As Step 1, it is necessary to define instruments. Each instrument is designed to produce a particular type of sound and has input parameters that control the different characteristics of the sound, such as amplitude, duration and pitch.

After that, it is necessary to provide input data using external input (e.g. a MIDI keyboard) or an encoded list of events.

In Step 2 the instrument definitions are translated into machine language, and the score inputs are compiled in order to convert the score into the proper format for processing.

The final step really consists of two stages: initialization and performance. Once a note has been initialized, the values that must remain fixed throughout its duration are set. During the performance of the note, the computer calculates the output corresponding to the sound.

Every research source states that the best existing method to learn how all of this works, and how it can be useful for creating music, is a lot of practice. Writing the first lines of code and listening to the output of the speakers is really the best way to get around this language and slowly become able to implement advanced synthesis processors.

That is why the study will start by understanding the basic syntax that allows the process above to happen.
Then the research will move on to synthesis syntax and techniques.

Syntax & Global Settings

The Csound .csd file is composed of 3 main sections between the <CsoundSynthesizer> and </CsoundSynthesizer> tags: Options, Orchestra and Score.

Each section is defined between an opening and a closing tag.

The <CsOptions> section is the master file area, in which it is possible to make changes to external device connections, input and output settings, custom configurations, graphic interface back-end, sample backup connections, format and filename. There is a full list of the different options in the online manual, which helps in case similar implementations are necessary.

The <CsInstruments> section consists of 2 parts: Header and Orchestra.

The Header is the part of the code in which the global settings for the session are defined; it applies to all instruments and controls the entire output. The bottom part of it is often used for channel definition and initialization. An example of that is a reverb aux. An aux is really an empty channel that happens to carry signal. Csound only creates a channel by default when audio is actually generated within the instrument. That is why, if the aux is not initialized, the computer will not know about the existence of that channel, and this is going to cause an error.

Sometimes this whole area is replaced by an “instr 0” that controls those parameters, implementing the settings of the session in instrument style.

The default orchestra header for Csound is:


sr = 44100 ; sample rate
kr = 4410 ; control rate
ksmps = 10 ; sr/kr
nchnls = 2 ; number of channels (2 = stereo)


Other extra settings might affect global amplitude, MIDI configuration and noise instrument control.
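To put the header in context, a minimal complete .csd file might look like the sketch below (the table number, amplitude and frequency values are illustrative, not taken from the project):

```csound
<CsoundSynthesizer>
<CsOptions>
-odac ; render in real time to the default audio output
</CsOptions>
<CsInstruments>
sr     = 44100 ; sample rate
kr     = 4410  ; control rate
ksmps  = 10    ; sr/kr
nchnls = 2     ; stereo

instr 1
  asig oscil p4, p5, 1 ; amplitude p4, frequency p5, wavetable 1
  outs asig, asig
endin
</CsInstruments>
<CsScore>
f 1 0 8192 10 1   ; table 1: a single sine partial (GEN10)
i 1 0 2 10000 440 ; play instr 1 at time 0 for 2 seconds
e
</CsScore>
</CsoundSynthesizer>
```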

The Orchestra section is the area in which the audio generation, synthesis, processing and manipulation are implemented.

For the Csound program, each different module implemented (MIDI instrument, reverb, mixer, synth…) is just a simple instrument delimited by the instr and endin tags.

Instruments are defined, and so designed, by interconnecting modules called opcodes (operation codes) that generate, route or modify the signal. Opcodes are a library of functions programmed at a lower programming level that allow the user to recall thousands of different audio implementations. An example is the oscil opcode, which activates a simple sine wave oscillator.

An orchestra can contain any number of instruments. Csound can provide an 8000-voice sampler, a 4000-voice FM synth, a 2000-voice multimodal waveguide synth, a 1000-band EQ, a 500-channel automated mixer, a 250-tap delay line, a fractal flanger, a convolution reverb, a vector spatializer and more.

The <CsScore> section also consists of 2 parts: Tables and Notes.

The Tables are mathematical function-drawing subroutines (GENs) that directly generate function tables (f-tables). Functions allow the programmer to generate sounds using different wave templates, which are also customizable. Some sound-generating opcodes require a particular GEN routine.

In the Notes section, we type the note statements. These note events perform the instruments and pass them performance parameters such as start time, duration, frequency settings, amplitude levels, vibrato rates and attack times.

Orchestra Syntax: Synthesis and Sound Design

The basic orchestra statement structure is as follows: the instrument implementation requires defining the instr number, generating the sound, patching the audio input to the output, and closing the instrument with endin.

Csound renders left to right, top to bottom.

The function parameter is a table lookup that provides the wave template needed for the generation of the sound. Tables are encoded in the Score area.

As underlined previously, opcodes are algorithms written in a lower-level programming language that allow the user to recall complicated lines of code in order to execute the required function.

Instr 1, for example, generates sound using an oscil opcode, which is the simplest sine wave opcode. However, it is possible to completely change the sound generated just by replacing that opcode with a more sophisticated one:

  • foscil: 2-oscillator FM synthesizer
  • buzz: additive set of harmonically related cosines
  • pluck: waveguide synthesizer based on the Karplus-Strong algorithm
  • grain: asynchronous granular synthesizer
  • loscil: sample-based wavetable synthesizer with looping

The underlying algorithms of each one of these opcodes are fundamentally different, and each one requires a unique set of parameters.
Together with oscil, these six opcodes represent the core synthesis technology behind many of today’s most popular commercial synthesizers.
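As a sketch of how interchangeable these generators are, the following hypothetical instrument swaps oscil for foscil without touching the rest of the patch (all parameter values are illustrative):

```csound
instr 1
  ; simple table-lookup oscillator: amplitude, frequency, wavetable 1
  asig oscil 10000, 440, 1
  ; the line above could be replaced by a 2-oscillator FM pair:
  ; asig foscil 10000, 440, 1, 2, 3, 1 ; carrier 1x, modulator 2x, index 3
  outs asig, asig
endin
```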

Generating sound is only the first step. The next step is controlling the sound.
There are 3 data rates at which signals can be controlled:

  • i-rate variables are changed and updated at the note rate (once per score event).
  • k-rate variables are changed and updated at the control rate (kr).
  • a-rate variables are changed and updated at the audio rate (sr).

The i-rate variables are used to set parameter values and note durations. They are evaluated at initialization time and remain the same for the whole length of the note event.
The k-rate variables are used for storing and updating envelopes and sub-audio control signals.
The a-rate variables are used for audio signals, in order to keep the quality of the sound intact (rendered at sr).
One can choose the signal control mode by using the rate letter as the first letter of the variable name (for example a1: an audio signal rendered at the sample rate).
For naming variables, Csound only requires the rate specification (i, k, a); after that, any name is good for the purpose of identifying signals (a1, asig, aout… are all the same).
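A small hypothetical instrument makes the three rates concrete (the p-field values would come from the score):

```csound
instr 1
  iamp = p4                      ; i-rate: fixed at initialization
  ifrq = p5                      ; i-rate: note frequency
  kenv linen iamp, 0.05, p3, 0.2 ; k-rate: envelope updated at the control rate
  asig oscil kenv, ifrq, 1       ; a-rate: audio signal rendered at sr
  outs asig, asig
endin
```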

Applications & Sound Sources

Every Csound expert and book highlights practice, and listening to each different implementation, as the best way to learn encoding audio.

Let’s dive straight into the process of making music within Csound.

Making the song, I designed my instruments in order to have different kinds of sound sources and to be able to stimulate the listener in different ways. I also needed control for mixing my instruments and blending them together.

For this song these are the instruments implemented:

Sample Reader

The sample reader is quite useful. A few lines of code allow me to play mono and stereo audio samples at any time. The sample reader relies on audio samples that must be in the same directory as the .csd file.

In this case the soundin opcode provides audio, using a soundin connector in the Options section of the Csound file.

The audio is then sent to the opcode inside the instrument and patched to an audio-rate statement. The lines of code detect whether the sample is stereo or mono, and they create as many channels as needed.

It then duplicates the signal and sends it to the left and right outputs if it is mono, or it places the stereo file on the stereo output.

Those outputs are finally routed to the master instruments.

This instrument is a perfect example of the connection between Csound and the C language. In fact, if and else are typical flexible statements of the C language that allow the programmer to program the same functions for different situations.
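A sketch of such a sample reader, assuming a file called sample.wav sits next to the .csd (the filename is hypothetical; filenchnls reports the channel count, and the if/else branches on mono versus stereo):

```csound
instr 1
  ichn filenchnls "sample.wav" ; 1 = mono, 2 = stereo
  if ichn == 1 then
    amono soundin "sample.wav"
    aL = amono ; duplicate the mono signal to both sides
    aR = amono
  else
    aL, aR soundin "sample.wav" ; stereo file keeps its own channels
  endif
  outs aL, aR
endin
```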

Drums Synthesis

These kinds of instruments are probably the most difficult to program. The envelopes in this case must have a very quick attack and a quick release, or a long release if we are imitating cymbals. Noise and distortion are very helpful tools for shaping the sound.

As said previously, drums are sounds with a very strong, quick attack and a short duration. Distortion and noise can also be as important as pitch in order to simulate the sound of a kick, for example.

Envelopes are really key for synthesizing drums. Some people also use extra opcodes to give the sound more detail (frequency modulators, noise modulators…).

An interesting way to shape a drum’s attack is to use wavetables to manipulate the attack. In this way it is possible to change the actual shape of the attack by referring to wave shapes.

Noise instruments can also be really useful for creating cymbal-simulation instruments. Notes are created by manipulating the event parameters according to the sound needed (duration, envelope…).
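A minimal kick drum sketch along these lines, using expon for both the fast pitch drop and the fast amplitude decay (all values are illustrative, not the project's own instrument):

```csound
instr 1
  iamp = p4
  kfrq expon 150, p3, 40     ; pitch falls quickly from 150 Hz to 40 Hz
  kenv expon iamp, p3, 0.001 ; fast exponential decay (expon cannot reach 0)
  asig oscil kenv, kfrq, 1   ; table 1 assumed to hold a sine
  outs asig, asig
endin
```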

Sound Effects

Sound effects need to take the absolute attention of the listener and stimulate his ears. One of the best ways to synthesize sound effects is to create sounds that change during the duration of a single event statement. A sweep or a tremolo is a perfect example of that.

In fact, the sweep event performance might start at a certain frequency that evolves during the performance of the note until it reaches a different end frequency. It is also possible to program logarithmic and linear evolutions, as well as melodic, rhythmic or any other kind of implementation. Wide knowledge of opcodes and their functioning is required.

One can perform logarithmic frequency modulation by using the expon opcode, setting the duration parameter between the 2 end values.

Tremolo works pretty much the same way, but it bases its modulation on the output values of an oscillator.
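Hypothetical sketches of both effects (the end frequencies, depth and LFO rate are illustrative):

```csound
instr 1 ; logarithmic sweep
  kfrq expon 200, p3, 2000 ; glide from 200 Hz to 2000 Hz over the note
  asig oscil 10000, kfrq, 1
  outs asig, asig
endin

instr 2 ; tremolo
  klfo oscil 0.5, 6, 1                  ; 6 Hz LFO, depth 0.5
  asig oscil 10000 * (1 + klfo), 440, 1 ; amplitude modulated by the LFO
  outs asig, asig
endin
```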

Potentially, it is possible to program any kind of sound effect.

Using other people’s examples, online and in the manual, is very useful in order to get one’s head around the matter.

Noise Generators

Noise-based instruments are very useful in sound synthesis for different purposes. They can be used to modulate other instruments in order to shape complex timbres, or, as said before, to add noise content to synthesized drums.

A low noise signal is also used as a frequency blender. In fact, Csound renders sound at a very high quality, and it can be difficult to melt all the frequencies together without a bit of dirt.

Noise is measured with a signal-to-noise ratio, and its amplitude is usually expressed on a linear scale between 0 and 1.

Wave-based sounds instead work on a logarithmic scale, and this creates different amplitude bugs that are going to be explored in the Limitations chapter.

The actual noise generation is accomplished by using dedicated opcodes such as rand. It is always possible to apply additive or subtractive synthesis.
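A cymbal-like noise sketch, pairing rand with a long exponential release (the values are illustrative):

```csound
instr 1
  kenv   expon p4, p3, 0.001 ; long decay shaped by the note duration p3
  anoise rand  kenv          ; white noise scaled by the envelope
  outs   anoise, anoise
endin
```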

Melodic Instruments

Encoding melodic instruments is a very exciting process. In fact, Csound offers an extremely wide palette of different sounds to start with. Then it is possible to apply filters, amplitude and frequency modulation, and dynamic and time-based effects, within the instrument and globally. It is necessary to set i-rate controls in order to support event creation and automation.

All of that signal is then input into an envlpx opcode that sets the envelope values in order to give the instrument a timbre, and finally the signal reaches its output, going into a mixer.

Pretty easy. It is like having the ultimate synthesizer.

Once the orchestra has been built and the event list is ready to be executed, it is nice to have extra control over the master output. In fact, each instrument can perfectly work as a virtual channel strip with programming features. But it is necessary to build a stage where we can manipulate and process the whole sound.

Something that came out during the research on this kind of implementation is that every single programmer uses his own way to implement similar solutions. Knowing all of them is very helpful, as it is very easy to create bugs when working on algorithms, especially for beginners. One solution can be more suitable than the others in certain situations.

The following implementation provides stereo support for each instrument, plus aux stereo sends to a reverb and a compressor. Both of these have been built to melt the sounds coming from the different instruments together even more. All of this is implemented using the signal flow graph opcodes.


To implement a mixer it is necessary to route the outputs of each instrument in the orchestra into the input of the mixer, all while keeping the stereo pan and automation intact.

There are several methods that can achieve the functionality of a mixer, starting with creating an instr 0. Another method could be using the Mixer family of opcodes. During the implementation of the Mixer opcodes I did not find many problems transmitting mono sounds, but I found many problems routing stereo, as well as with the single instruments’ amplitude controls. Also, the frequency and amplitude signals of each channel need to be sent separately, which is quite confusing and annoying, especially if there are many instruments in the orchestra.

One of the best ways is to control values using global variables. This is a very quick and effective way to implement a mixer; in fact it is heavily used, especially for live coding performances. However, this approach is very direct, and implementing complex signal flow operations requires advanced knowledge.

The best and most flexible tool is the signal flow graph opcode family. It allows you to connect the inputs and outputs of any instrument with a very simple syntax structure. Implementing reverb and compression instruments as aux channels, it is then possible to actually create a signal flow map.

In this particular configuration each instrument has stereo pan parameters. The stereo output of each instrument goes into the left and right inputs of a master output instrument. The instruments’ output also goes into the stereo inputs of a reverb and a compressor instrument, and finally their outputs flow into the master output instrument’s stereo input (sends can be controlled with mathematical operations).

All of this basically means that all instruments go through a single stereo output, together with a reverb aux and a parallel compression system.
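A reduced sketch of this routing with the signal flow graph opcodes (the instrument names, source signal and reverb settings are illustrative; the compressor branch would be wired the same way):

```csound
; connections are declared in the orchestra header
connect "Source", "outL", "Reverb", "inL"
connect "Source", "outR", "Reverb", "inR"
connect "Source", "outL", "Master", "inL"
connect "Source", "outR", "Master", "inR"
connect "Reverb", "outL", "Master", "inL"
connect "Reverb", "outR", "Master", "inR"
alwayson "Reverb" ; aux instruments run for the whole performance
alwayson "Master"

instr Source
  asig oscil 8000, 440, 1 ; stand-in sound source
  outleta "outL", asig
  outleta "outR", asig
endin

instr Reverb
  aL inleta "inL"
  aR inleta "inR"
  awL, awR reverbsc aL, aR, 0.85, 12000
  outleta "outL", awL
  outleta "outR", awR
endin

instr Master
  aL inleta "inL"
  aR inleta "inR"
  outs aL, aR
endin
```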

The whole mixer system has been implemented in order to be able to do a nice, stage-controlled mix, but also in order to melt the synthesized frequencies together better. In fact, without this system all sounds would have been much drier and more separated from one another.

Parallel Compression

The compressor used to create the parallel compression has been implemented using the dam opcode, which sets the parameters of a compressor/expander unit on the signal going through it.

This is just an example, but it is pretty useful in order to understand how straightforward encoding audio is. The parameters are:

  • icomp1 – compression ratio for the upper zone.
  • icomp2 – compression ratio for the lower zone.
  • irtime – gain rise time in seconds: the time over which the gain factor is allowed to rise by one unit.
  • iftime – gain fall time in seconds: the time over which the gain factor is allowed to decrease by one unit.
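A hedged sketch of such a parallel compressor (the threshold, ratios and stand-in source are illustrative; in the project the input would arrive from the other instruments via the signal flow graph):

```csound
instr 1
  asig  oscil 20000, 220, 1 ; stand-in source signal
  ; threshold 10000, icomp1 = 0.5 (squash above it), icomp2 = 1 (leave below),
  ; irtime = 10 ms rise, iftime = 100 ms fall
  acomp dam asig, 10000, 0.5, 1, 0.01, 0.1
  amix  = (asig + acomp) * 0.5 ; blend dry and compressed in parallel
  outs  amix, amix
endin
```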


The reverb unit used for this purpose is the reverbsc opcode, which is an 8-delay-line stereo FDN reverb, with a feedback matrix based on a physical-model scattering junction of 8 lossless waveguides of equal characteristic impedance.

In this application only the delay and cutoff parameters have been involved.
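As a sketch, the reverb aux reduces to a single opcode call (the feedback level 0.85 and the 12 kHz cutoff are illustrative values, as is the stand-in dry signal):

```csound
instr 1
  adry oscil 8000, 440, 1 ; stand-in dry signal
  awetL, awetR reverbsc adry, adry, 0.85, 12000 ; feedback level, cutoff (Hz)
  outs adry + awetL, adry + awetR
endin
```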

Score area (Event list): Event Syntax and Wavetables

As we said previously, the Score area is used to define the wavetables (f-statements) that provide the sample wave template to start the synthesis with. It is also used to define performance events.

The f-statement in the Csound score file defines a function table on a single line. Most function tables must be a power of 2 in length in order to fully provide the wave template; that is why that value is usually between 512 and 8192.
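For example, a sine wavetable built with GEN10 looks like this (the table number and size are illustrative):

```csound
f 1 0 8192 10 1 ; table 1, load at time 0, 8192 points, GEN10, 1 sine partial
```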

Note statements (i-statements) are also defined on a single line and instruct a certain instrument to play at a certain time for a certain duration. Each statement can be used an unlimited number of times, and it allows the programmer to program the audio on a per-note basis.

Regarding this, it is necessary to be aware that within Csound, if 2 notes play at the same time, their amplitudes are not mixed but added. This often leads to samples-out-of-range and clipping errors.

A note statement lists the instrument number, the start time and the duration, followed by any extra parameters. It is possible to set extra parameters such as amplitude, frequency or pan in the instrument definition in the orchestra. To do that, it is necessary to read a p# variable in the parameter control (for example, iamp = p4).
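A sketch of the pairing between a note statement and its p-fields (the p4/p5 assignments are illustrative):

```csound
; score side:
;  p1 p2 p3 p4    p5
i  1  0  2  10000 440 ; instr 1, start 0 s, duration 2 s

; orchestra side:
instr 1
  iamp = p4 ; amplitude read from the 4th p-field
  ifrq = p5 ; frequency read from the 5th p-field
  asig oscil iamp, ifrq, 1
  outs asig, asig
endin
```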

Events Creation

Creating an event list is not the most fun part of creating computer music. In fact, each single note must be treated on its own and in context, and in order to compose a 2-minute computer music piece one might be programming hundreds of note statements.

As said previously, a Csound instrument can read MIDI files, and this provides a more intuitive way to generate notes. It must be said that the programmer must program the instrument to be able to read MIDI data.

There are different ways that allow the programmer to save time, and perhaps do a more precise job, and those solutions are implemented using note-list shortcuts.

These methods are particularly effective with repetitive sounds or motifs.

The a statement in the score area provides playback controls, which is quite useful as there is no graphic interface to pinpoint the exact playback point.

The s statement is a section divider, and it is very useful when creating lists of notes, as it allows the timing of each section to be programmed starting from 0 seconds, instead of programming a sound starting at 1379.23 seconds (confusing).

However, this shortcut seems to create a little sound gap between sections, which is quite annoying if you need fluent sound throughout the whole composition.

The dummy f-statement allows the programmer to load wavetables at any time during the composition. That means that every instrument can change wavetable at any time.

The t statement allows changing the tempo of the composition at any time. Csound runs its clock at 60 beats per minute by default; obviously, this can be changed using this statement.

If the sound evolution is programmable using mathematical operations or algorithms, then it is possible to use event shortcuts to avoid useless copy and paste.
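A sketch of the t and s statements working together (tempo and note values are illustrative):

```csound
t 0 120           ; run the score at 120 BPM instead of the default 60
i 1 0 1 10000 440
i 1 1 1 10000 660
s                 ; section boundary: the following times restart from 0
i 1 0 1 10000 440 ; same motif again, without recalculating absolute times
i 1 1 1 10000 660
e
```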


The automation system is not really that complicated. In fact, to change instrument parameters over time, one can just modify the note statement parameters. For example, if over a sequence of 3 notes I want the sound to get louder and louder, I will just adjust the amplitude parameters of the 3 event statements accordingly.

It is always nice to be able to make changes along the composition if needed.

Creating the event list, each person might have a different workflow. In my personal experience, for example, I decided to use cpspch, which converts a pitch-class value to cycles per second, instead of using Hz for frequencies. I used logarithmic values to control my amplitudes, and more.
Also, the whole mixer system has been designed to keep the automation on every parameter intact, stereo pan included.
In general, each programmer uses his custom set of instruments to make his music.
The next 3 examples show how simple it is to implement audio solutions just by using simple mathematical operations.


The pan system, obviously, is only applicable if the main output is stereo. 

The 2 speakers are represented by 0 (left) and 1 (right). Once that has been set, the signal will have an ipan parameter that shifts between 0 and 1, determining the position of that sound in the stereo field.
For instance, if the output of an instrument is called aout, the line needed to apply the pan information to the instrument would be:

outs aout*(1-ipan), aout*ipan ; Left and Right

As explained previously, ipan must be between 0 and 1; centre is .5.
To keep the pan data on the instrument, it is necessary to keep the stereo order in each output-input stage.


Pad, in sound engineering, means passive attenuation device, and it is a very useful tool to make hot signals smaller.

In particular, when generating sounds with Csound, it is very difficult to bring all the different instruments to a decent mixing level. For example, noise sounds are much louder than wave-based sounds.

Applying a pad is pretty easy within Csound. It is enough to apply a subtraction to the amplitude of the instrument in order to take as many dB off as we need.

iamp = ampdb(.1-20) ; 20 dB pad needed with noise generators


Very often when mixing songs it is necessary to apply fades to a track. That means that the track volume will drop at the established time, with the established graphic shape.

With Csound, that concept must be applied to every single note statement in order to fit within the mix.

For those using DAWs with a graphic interface, this might seem very complicated. But thinking about mixing at the note level actually suggests how fades are implemented in computers, which is quite easy and cool!

In fact, fades are really just an envelope that controls the attack and release of a selected signal.


Say I want to implement a long fade at the beginning and a short fade at the end of the signal: the attack will be long, the release will be short. As easy as that.
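With linen, that long fade-in / short fade-out is a one-line envelope (the attack and release times are illustrative):

```csound
instr 1
  kenv linen p4, 1.5, p3, 0.1 ; 1.5 s fade-in, 0.1 s fade-out over the note
  asig oscil kenv, p5, 1
  outs asig, asig
endin
```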

The power of Csound: Composing and Mixing

The introduction written by Professor Richard Boulanger in The Csound Book really shows how Csound is a place where music arises, changes, develops, grows and expresses itself. The first words of the book explain how Csound is finally the ultimate expressive instrument. It shows how dedication and experience allow the creation of the richest textures and timbres. Knowledge is key; the only limitation is imagination.

As we said before, programming is really setting up a stored number of predefined actions, ready to be used.

In this project the language has been used to create a scripted musical piece. But the language can also be used to create sound engineering gear, such as mixers or audio processors, for other uses like mixing or live playing.

One can create one’s own synthesizer, patch it to a MIDI keyboard, and just play. One can create sounds that never existed, and more.

Composing a musical piece with Csound is very interesting not only on the sound generation and modulation side, but also on the mixing side.

In fact, each note is processed and mixed on its own, and that highlights many details of the sound that we generate and that we want to mix.

Also, during the mixing stage, the mixing engineer actually bends and melts together all the frequencies of the song. The Csound approach deeply highlights the characteristics of this process. It shows the evident and effective changes that sounds make in relation to the instructions programmed. That is a very useful experience for somebody who mixes music.

The community side of it is surprisingly good. Apart from thousands of documentation pages, there is a very tight community that is pleased to answer your Csound problems within the day, with multiple solutions.

This seems to be an incredible tool for a musician.


Any language is really a system of codes of some type, used to create some kind of communication between 2 or more parties.

The uses and limitations of a programming language are easily comparable to those of other languages. For instance, a literate English speaker might know the best way to express this concept, using each tool provided by the language to describe the message.

However, a newbie (an informatics newcomer) would not be able to pinpoint any definitive limitation of a language, apart from the limitation of his own knowledge.

Most of the knowledge limitations come from the fact that an audio programmer must master different disciplines, such as mathematics, synthesis, language… to be able to master the whole thing.

Plus, regarding Csound in particular, there are many user-made custom opcodes available on the completely free website. This is, of course, a very good thing! Having a community behind the language means that the language is developing, not dying.

On the other side, compatibility between custom opcodes is not 100% assured. It does not happen often, but in that case the programmer must either find a different solution or be able to modify the opcode core (advanced).

In my personal experience, I happened to find situations I could not solve:

– Creating the event list, the event shortcut s (which allows editing note section timings starting from 0 in each section) would have been very useful. Reason 1: it keeps a better order in the timing of the code. Reason 2: it allows entire sections to be copied and pasted in order to loop event statements. Unfortunately, this implementation seems to create a small audio gap between the sections during playback. That is quite unfortunate if we need to compose continuous compositions.

– During the implementation of the mixer system I came across the Mixer family of opcodes (custom opcodes). These opcodes are not the easiest to implement. A mono signal is sent to the mixer by setting 2 parameters: signal and gain. Amplitudes can also be misread on the receiving channel, probably because the whole signal is too processed at the output stage. Finally, the stereo system seems to have many issues.

– It happened quite often that the CPU got overloaded. In the Csound book there is an entire chapter dedicated to advanced instrument optimization. Optimizations not only allow the programmer to have finer control, but also allow managing the size of the files used and generated.


Csound research references and manuals:

  • http://www.csounds.com/chapter1/
  • https://csound.com/get-started.html
  • https://csound.com/docs/api/index.html
  • https://csound.com/docs/manual/index.html


Csound email list:

  • https://listserv.heanet.ie/cgi-bin/wa?A0=CSOUND-DEV


Programming Languages research references

  • Enciclopedia Treccani (Online) Linguaggi di programmazione (programming languages).

Available at: http://www.treccani.it/enciclopedia/linguaggio-di-programmazione/ (Last accessed: 24 April 2019)

  • Computerscience.org (Online) Computer Programming Languages.

Available at: https://www.computerscience.org/resources/computer-programming-languages/ (Last accessed: 24 April 2019)

  • Webopedia (2017) Programming Language.

Available at: https://www.webopedia.com/TERM/P/programming_language.html/ (Last accessed: 24 April 2019)

  • Ge Wang (2008) A History of Programming and Music.

Available at: https://www.researchgate.net/publication/265164952_A_History_of_Programming_and_Music/ (Last accessed: 24 April 2019)



  • Dodge, C. and Jerse, T.A. (1997) Computer Music.
  • Boulanger, R. (2000) The Csound Book.
  • Pierce, B.C. (2002) Types and Programming Languages (The MIT Press).