Summer 91 - THREADED COMMUNICATIONS WITH FUTURES
MICHAEL GOUGH
Interprocess communication (IPC) promises to provide a solution to problems that
can't ordinarily be solved in a single-tasking, single-machine environment. But
attempts at implementing IPC with traditional programming techniques lead to
cumbersome code that doesn't come close to realizing IPC's potential. This article
shows an example of using threads and futures to do IPC in a way that allows you to
achieve concurrency with clean, robust code.
In the article "Threads on the Macintosh" in Issue 6 of develop, I identified a potential
problem with interprocess communications when you're using the client-server
model. Simply put, if you don't use threads when you're using the client-server model
to implement IPC, the result could be deadlock. The deadlock occurs because each
application is capable of only a single train of thought. The client expects an answer to
a question posed to the server but never receives that answer because the server must
receive an answer to its question before it can respond. The result is that each party is
waiting for answers to its own questions before it can proceed.
Although "Threads on the Macintosh" sounded the alarm about the communications
deadlock problem, it didn't go into detail about how threads can be applied to solve the
problem. That's the purpose of this article. Specifically, this article shows how you
can avoid client-server deadlocks by using threads and a new facility called futures.
The Futures Package has been integrated seamlessly with Apple events. In this article,
we'll use Apple events as the generic facility for implementing IPC. The sample code
presented here appears on the Developer Essentials disc for this issue.
Before discussing futures in detail, let's review some of the basics about threads.
Threads provide multiple trains of thought for your application. If your application is
doing more than one thing at a time, threads allow you to simplify your code. Instead of
juggling between multiple tasks, you start a separate thread to handle each individual
task. You then have multiple program counters, one for each thread. Of course, the
threads don't actually run simultaneously on a single CPU. They share the CPU,
cooperatively trading control by calling a special function that says, "Let the other
threads in this application have some CPU time."
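This cooperative handoff can be sketched with a toy round-robin scheduler in which each "thread" is a generator that yields the CPU. This is only an illustration of the idea, not the actual threads package; `scheduler` and `worker` are invented names.

```python
# Toy cooperative scheduler: each "thread" is a generator, and each
# yield is the "let other threads have some CPU time" call.
from collections import deque

def scheduler(threads):
    """Run generator-based threads round-robin until all finish."""
    ready = deque(threads)
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))   # run the thread until it yields
            ready.append(t)         # back into the ready queue
        except StopIteration:
            pass                    # this thread has finished
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"         # cooperatively give up the CPU

# Two trains of thought interleave on a single "CPU".
print(scheduler([worker("A", 2), worker("B", 2)]))
# → ['A:0', 'B:0', 'A:1', 'B:1']
```

Note that nothing runs simultaneously: each step executes only when the previous thread has explicitly yielded.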
HOW THREADS AND FUTURES FACILITATE IPC
Ideally, when you're writing code for IPC, you'd like things to work such that
whenever the client poses a question, it gets an immediate answer. This situation would
translate into nice linear code, such as the following:
{code that prepares the question}
answer := Ask(question);
{code that uses the answer}
The semantics would be very simple: A question is prepared, and then it's "asked." The
Ask function waits synchronously for the answer to be returned. When it's returned,
execution continues and the answer is used.
Unfortunately, this code suffers from a fatal flaw: the synchronous nature of the Ask
function will cause a deadly embrace in some situations. What if, as we saw above, the
client never receives an answer because the server needs to ask something of the client
before it can reply? This is an all-too-common situation.
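The deadly embrace can be modeled with two synchronous waiters, each blocked on an answer the other can never send. A timeout stands in for the indefinite hang so the sketch terminates; all names here are illustrative.

```python
# Toy model of the deadly embrace. Each side blocks waiting for an
# answer that never comes; a timeout stands in for the hang.
import queue
import threading

a_answers = queue.Queue()   # answers destined for application 1's client
b_answers = queue.Queue()   # answers destined for application 2's server
outcome = []

def app1_client():
    # Poses its question, then waits synchronously for the answer.
    try:
        a_answers.get(timeout=0.2)
    except queue.Empty:
        outcome.append("app1: still waiting on app2")

def app2_server():
    # Must get an answer from app1 before it can reply -- but app1's
    # only train of thought is stuck in app1_client above.
    try:
        b_answers.get(timeout=0.2)
    except queue.Empty:
        outcome.append("app2: still waiting on app1")

t1 = threading.Thread(target=app1_client)
t2 = threading.Thread(target=app2_server)
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(outcome))   # both sides timed out: deadlock
```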
Threads allow you to circumvent the deadlock problem by making each application have
a client and server portion so that both sides can ask and answer questions of each
other. In other words, the client and server portions of each application are assigned to
separately executing threads. IPC then works as follows: Application 1's client asks a
question of application 2's server, and application 2's server must ask a question
before it can answer. However, application 1 is able to field this question because even
though its client portion is waiting for an answer, its server portion is available to
answer application 2's question. Because the answering and the questioning portions of
each program are able to function independently, a hangup in the client or the server
doesn't bring the application to a halt.
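The arrangement just described can be sketched with Python threads and queues standing in for applications and Apple events. The `App` class and its queues are invented for illustration: each application's server portion runs in its own thread, so its client portion can block safely.

```python
# Sketch: each application has a client portion and a server portion,
# and the server portions run in their own threads.
import queue
import threading

class App:
    """Toy application with a server inbox and a client reply queue."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()     # questions arriving at our server
        self.replies = queue.Queue()   # answers to our client's questions

    def ask(self, other, question):
        # Client portion: pose a question, block until the answer arrives.
        other.inbox.put((self, question))
        return self.replies.get()

    def serve(self, answer_fn, count):
        # Server portion: field incoming questions independently.
        for _ in range(count):
            sender, question = self.inbox.get()
            sender.replies.put(answer_fn(question))

app1, app2 = App("app1"), App("app2")

def app2_answer(question):
    # app2's server must ask app1 something before it can reply.
    detail = app2.ask(app1, "need a detail first")
    return f"answer({question} + {detail})"

# Each application runs its server portion in a separate thread.
threading.Thread(target=app1.serve, args=(lambda q: "the detail", 1)).start()
threading.Thread(target=app2.serve, args=(app2_answer, 1)).start()

# app1's client can now safely block: app1's server thread is still
# free to field app2's nested question, so no deadly embrace occurs.
result = app1.ask(app2, "big question")
print(result)   # → answer(big question + the detail)
```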
In the "plain threads" situation just described, notice that execution of the client
thread is still held up while the client is waiting for an answer. What futures do is to
postpone or even eliminate this delay in processing, giving the thread a chance to do
related work before blocking. In this way, futures extend the capacity of threads to
maximize the efficient use of the CPU.
In the futures implementation, when a question is posed, the application never has to
wait for an answer; it can continue execution immediately. This may seem impossible:
in the above example, how can the Ask function return immediately when it must
supply an answer to the question? Mustn't it wait until the answer is received? No,
because the answer that's returned by Ask is a future. The future doesn't contain the
information that the real answer contains. Instead, it contains information that says
"this answer isn't 'real' yet." Your code keeps executing, thinking that it has the
answer, but it really doesn't. At some point later, when the real answer is received by
the Apple Event Manager, the future is automatically transformed into the real
answer, with all the information that was requested.
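The shape of such an Ask can be sketched with Python's `concurrent.futures.Future` standing in for the Futures Package, and a helper thread standing in for the reply event arriving later. The `ask` function is hypothetical, not the package's real API.

```python
# Sketch of an Ask that returns immediately with a future instead of
# the real answer.
import threading
import time
from concurrent.futures import Future

def ask(question):
    answer = Future()            # a placeholder, not the real answer yet
    def reply_arrives():
        time.sleep(0.05)         # the server takes a while to respond
        answer.set_result(f"real answer to {question!r}")
    threading.Thread(target=reply_arrives).start()
    return answer                # returns at once -- no waiting

answer = ask("What time is it?")
# Execution continues immediately; the answer isn't "real" yet.
print(answer.done())             # typically False: no reply has arrived
# Touching the contents blocks until the future becomes the real answer.
print(answer.result())           # → real answer to 'What time is it?'
```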
Note that when it comes time to get the contents out of an answer that could be a
future, you must be executing in a thread other than the main event loop thread,
or the result will be deadlock. This is because a thread that attempts to access the
contents of a future is blocked until the real answer is received. And since the real
answer to a future is received by the main event loop, you can't risk blocking the main
loop by using it to access the future. The solution is to fork a thread to access the
future. This way, your main event loop keeps running, receiving Apple events and
passing them to the Apple Event Manager.
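The division of labor described above can be sketched as follows: only a forked thread blocks on the future, while the main "event loop" stays free to receive the reply that makes the future real. The event queue and all names are illustrative stand-ins, not the real Apple Event Manager.

```python
# Sketch: a forked thread blocks on the future; the main loop keeps
# running and eventually resolves it.
import queue
import threading
from concurrent.futures import Future

events = queue.Queue()    # stands in for the main event queue
answer = Future()         # the not-yet-real answer
seen = []

def forked_thread():
    # Safe to block here: the main loop below keeps running.
    seen.append(answer.result())

worker = threading.Thread(target=forked_thread)
worker.start()

# Main event loop: still receiving events; one of them is the reply.
events.put(("reply", "the real answer"))
while True:
    what, payload = events.get()
    if what == "reply":
        answer.set_result(payload)   # the future becomes real
        break

worker.join()
print(seen)   # → ['the real answer']
```

Had the main loop itself called `answer.result()` before processing the reply event, it would have blocked forever: the very thread that must deliver the reply would be stuck waiting for it.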