Midi Lib
Volume Number: 3
Issue Number: 7
Column Tag: Assembly Lab
A Midi Library for Pascal
By Kirk Austin, MacTutor Contributing Editor, Austin Development, San
Rafael, CA
What is MIDI?
Before I get too deep into the nuts and bolts of this whole thing, perhaps we
should take a look at what it’s all about. MIDI is an acronym for Musical Instrument
Digital Interface, and really came into being somewhere around 1983. Originally, it
was created to allow music synthesizers to communicate with each other, but there was
enough foresight in the minds of the originators to leave room for future
enhancements. As far as the scope of this article is concerned, the most important
thing about MIDI is that it allows music synthesizers to communicate with computers,
specifically the Macintosh.
The need for a standard
To really understand why MIDI came about, you have to know a little bit about the
history of music synthesizers. In the late 1960’s synthesizers were, for the most
part, voltage controlled devices. That is, you could control the frequency of an
oscillator (a tone generating device) by varying a DC voltage that was routed to one of
its control inputs. The higher the voltage, the higher the note and vice versa. The
standard that was used by companies like Moog and ARP was 1 volt/octave. This meant
that if your control voltage changed from 4 volts to 5 volts the oscillator would shift
its pitch higher by one octave.
This “voltage control” concept worked pretty well at the time, but you have to
remember that the hardware itself was pretty primitive by today’s standards. For
instance, most synthesizers in that era could only play one note at a time. Chords could
only be created by using a multitrack tape recorder and overdubbing the different
notes. This was how recordings like “Switched On Bach” were produced.
Now, when you’re only dealing with one note at a time, things aren’t too complicated.
Still, you had to make sure all of your oscillators were in tune, because typically you
would have to use more than one oscillator to produce a respectable-sounding note.
Then all of the oscillators would have to be scaled so that they would track accurately.
These last two points were no small problem, because the analog oscillators at that
time had a very large problem -- thermal drift. This meant that you could tune and
scale all of the oscillators very carefully, and 5 minutes later they would be out of
calibration because the temperature of the semiconductor junctions had changed. Ahh,
those were fun days.
But, those problems aside, there were other signals that were needed to produce
a note besides just a control voltage for the oscillator. You also needed a trigger pulse
to tell the synthesizer when to start playing a note. Then you needed a way to let the
synthesizer know that you wanted to stop a note when you lifted your finger from the
keyboard. This was usually in the form of a “gate” signal. Okay, so now we’re up to
three signals just to produce one note at a time. Then, as if that weren’t enough, some
manufacturers were using a positive-going pulse as the trigger and others were using a
negative-going pulse. You could get around this problem with special adaptor boxes and
the like, but then a much larger problem came looming over the horizon -- polyphony.
Polyphony means the ability to play more than one note at a time, and even though
it was a tremendous breakthrough for the musician, it multiplied the problems for
electronic musical instrument designers. Now, to the best of my knowledge, the
polyphonic synthesizer keyboard that we know and love today came into being around
1978 thanks to the advent of the microprocessor and the talents of a couple of guys
named Dave Rossum and Scott Wedge of Emu Systems. Their ideas led to the use of
microprocessor based keyboards by virtually all of the synthesizer manufacturers.
Oberheim was one of the first companies to bring out a polyphonic instrument. It had a
keyboard that was scanned by the microprocessor, which then converted the
information into DC control voltages and gate signals for controlling its analog
oscillators, filters, and amplifiers. The amazing thing about this instrument was that
it actually worked, and it provided a great leap forward for synthesizers in general.
But, now another problem began to appear.
Musicians wanted to have a remote keyboard controller that could be worn around
their neck and send signals down a cable to their synthesizers which might be offstage
somewhere. Or maybe they didn’t want a keyboard at all! Maybe they wanted to
control a synthesizer from a guitar or a drum set! Instrument designers were really
starting to get overwhelmed by all of the options that musicians were demanding at this
point, and it became clear that there was a need for some kind of standard way for
controllers (keyboards, guitars, drums) and synthesizers (the sound producing
electronics) to communicate with each other so that instruments made by different
manufacturers could work together.
MIDI is born
In late 1981 a paper was presented to the Audio Engineering Society suggesting a
digital, serial interface for electronic music synthesizers. This scheme was referred
to as the Universal Synthesizer Interface, and was authored by Dave Smith and Chet
Wood of Sequential Circuits. This proposal was, in fact, the precursor to MIDI, and
served as the impetus to get manufacturers of electronic musical equipment to talk
with each other about some sort of communications standard. What finally came out of
all of the discussions was the MIDI specification 1.0.
The data not the sound
Now, probably the most confusing thing about MIDI to the beginner is
understanding that MIDI is concerned with control data, and not the actual sound itself.
For instance, if we talk about a MIDI recorder that emulates many of the functions of a
traditional tape recorder, you must understand that the MIDI information being
recorded is simply the note-on and note-off signals. When a key is pressed on a
synthesizer keyboard 3 bytes of MIDI information are sent over the serial connection
telling the sound producing electronics to start playing a note (the details of these 3
bytes will be explained shortly). When that same key is finally released, another 3
bytes of information are sent over the MIDI cable telling the sound producing electronics
to stop playing that note. As you can readily conclude from this simple example, a note
of any length requires the same amount of information -- 6 bytes. This is what makes
for such compact use of memory in MIDI recorders. By comparison, actually
recording the sound itself via analog-to-digital conversion would take tens of
thousands of bytes even for a very short sound, and a longer sound would require more
memory still. But, even more importantly, the use of MIDI control signals allows
musicians to factor out the actual sound from the note choices and timing information.
This means that I can play a part on a piano-style keyboard, record it on a MIDI