Volume Number: 10
Issue Number: 6
Column Tag: Inside Information
The Difference That Makes A Difference
What’s valuable depends on your perspective. Will that be changing
soon?
By Chris Espinosa, Apple Computer, Inc., MacTech Magazine Regular
Contributing Author
Nick Negroponte of the MIT Media Lab defines information as “a difference that
makes a difference.” On Usenet, you hear about this as the “signal-to-noise ratio,”
that is, the kernels of useful wheat in the general chaff of questions, misinformation,
rumors, and flames. In most other circumstances, though, information in digital form
makes a real difference - and this is most true in developing software.
Every bit of your application makes a difference. At the basic level, each bit has
to be a non-buggy bit (as opposed to a buggy bit) or your software will crash, and that
could make a big difference to its users and purchasers. A little above that, the bits of
your program are carefully compiled to run on a specific family of microprocessor;
the system calls in your program are linked to a specific operating system API; and the
logical assumptions are based on the performance and capabilities of a certain range of
hardware platforms. All of these choices are encoded into your finished product, and
they make a substantial difference in who will buy and use it.
Above that, of course, are the features and functions of your product itself. This
is supposedly what you’re good at, and ostensibly what your customers are paying
money for. Of all the investments you make in research and development, the
information you learn about how to make your program solve the customers’ problem
should be most worthwhile to you and to them, shouldn’t it?
But as you’re probably aware, your choice of platform often makes more of a
difference to your customers than your choice of features or technologies. Everybody
in the Mac business has been told more than once that “your product is great, but if it
doesn’t run on IBMs I can’t use it.” And you spend much of your time and money simply
porting your application from one system version to the next, or from one hardware
platform to another - and recently, from one microprocessor to another. The
differences are significant, because compiler technology, hardware evolution, and new
system APIs are not simple things; but at least they make a difference to your
customers.
What will happen if these differences stop making a difference? What if, for
example, you didn’t have to worry about what instruction set to compile for? In a
small way it’s true now - if your application is not speed-sensitive, you can just
compile it for the 68K, and the emulator on the Power Macintosh products will
automatically run your software on the Power PC-based models. And while emulation
is admittedly slower than running native, you could be seeing more processor
independence in the future. Apple’s Advanced Technology Group and others in the
industry have been researching processor-independent object file formats. With
these, you compile and link your application into intermediate code which you ship to
customers; then either the Installer or the segment loader translates the code into
the correct instruction set for each machine. The hardware vendor can use different
CPUs, the users get native performance, and you can ship one program that runs on
many brands.
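
To make that concrete, here is a rough sketch in C of what such a scheme might look like. Everything in it is invented for illustration - the instruction format, the type names, all of it - so don’t mistake it for a real Apple interface; it only shows the division of labor, where you ship abstract instructions and the loader on each machine emits native code.

/* Sketch: load-time translation from a hypothetical intermediate
   code to the host CPU's instruction set. All names are made up. */
#include <stdio.h>

typedef enum { kCPU68K, kCPUPowerPC } CPUKind;

typedef struct {                /* one abstract instruction */
    unsigned char opcode;
    unsigned char operand;
} IRInstruction;

/* Emit native bytes for one abstract instruction. A real translator
   would select instructions and allocate registers; this stub only
   shows that the same input yields different bytes per CPU family. */
static size_t EmitNative(CPUKind cpu, IRInstruction ir, unsigned char *out)
{
    if (cpu == kCPUPowerPC) {           /* fixed four-byte words */
        out[0] = ir.opcode; out[1] = ir.operand; out[2] = out[3] = 0;
        return 4;
    } else {                            /* 68K: two-byte words */
        out[0] = ir.opcode; out[1] = ir.operand;
        return 2;
    }
}

/* What an Installer or segment loader would do: translate the whole
   shipped segment into a native code segment for this machine. */
static size_t TranslateSegment(CPUKind cpu, const IRInstruction *code,
                               size_t n, unsigned char *out)
{
    size_t written = 0, i;
    for (i = 0; i < n; i++)
        written += EmitNative(cpu, code[i], out + written);
    return written;
}

int main(void)
{
    IRInstruction segment[] = { { 0x2A, 1 }, { 0x4E, 0 } };
    unsigned char native[64];
    size_t len = TranslateSegment(kCPUPowerPC, segment, 2, native);
    printf("translated into %lu native bytes\n", (unsigned long)len);
    return 0;
}

The expensive parts of a real translator - instruction selection, register allocation, optimization - live where the stub comment is, but the shape is the same: one shipping binary, many native results.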
And with processors continuing to get faster and cheaper, and multiprocessor
designs starting to become available, emulators might be the big win after all. If you
can add more processors to run your emulators faster, you might be able to achieve
near-native performance through emulation. Just think: if you want to run Windows
applications faster, keep adding more Power PC chips to your Macintosh until it’s
fast enough!
Independence from hardware architecture is getting easier as well. In modern OS
architectures, a “hardware abstraction layer” separates the OS kernel from the
particular hardware implementation, making it easier to port the OS to different
hardware platforms. And developers of new platforms are trying an alternative to the
de facto standards of Macintosh (controlled by Apple) and the Intel-based PC
architecture (controlled by nobody in particular). The result is a set of “reference
platforms,” hardware designs that assure certain capabilities in different vendors’
designs. The last major reference platform, ACE, was built around Windows NT and the
MIPS chip; the current hot platform, PReP, is based on the Power PC chip and AIX. If
reference platforms dominate the landscape in the future, it should be easier to write
code that runs indifferently on multiple platforms.
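
The abstraction-layer idea is simple enough to sketch in a few lines of C. The structure and routine names below are invented, not any shipping kernel’s, but they show why the port gets easier: the kernel proper touches hardware only through a table of routines, and bringing up a new board means supplying a new table.

/* Sketch of a hardware abstraction layer. The kernel calls through a
   table of function pointers; each board vendor supplies the table.
   All names and operations here are invented for illustration. */
#include <stdio.h>

typedef struct {
    void (*InitInterrupts)(void);
    void (*SetTimerTick)(unsigned long microseconds);
    void (*PutSerialChar)(char c);
} HALVector;

/* One vendor's routines; a different board ships different code
   behind the same table, and the kernel never knows. */
static void VendorAInitInterrupts(void) { puts("vendor A: interrupts up"); }
static void VendorASetTimerTick(unsigned long us) { printf("tick = %lu us\n", us); }
static void VendorAPutSerialChar(char c) { putchar(c); }

static const HALVector gVendorA = {
    VendorAInitInterrupts, VendorASetTimerTick, VendorAPutSerialChar
};

/* The portable kernel: no hardware registers in sight, only HAL calls. */
static void KernelBoot(const HALVector *hal)
{
    hal->InitInterrupts();
    hal->SetTimerTick(10000L);   /* a 10 ms scheduling tick */
    hal->PutSerialChar('\n');
}

int main(void)
{
    KernelBoot(&gVendorA);
    return 0;
}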
Finally, APIs are crossing the hardware boundaries. Both OpenDoc and OLE 2.0
are cross-platform, though they don’t isolate you from other toolbox calls. Hosting
layers like XVT and Novell AppWare Foundation add surprisingly little overhead to run
the same API on different underlying toolboxes. And future operating systems like
Taligent’s Pink system and IBM’s Workplace OS are meant to host multiple
“personalities” on one OS kernel, so your choice of hardware vendor doesn’t dictate
your choice of API, and therefore applications software.
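
From the inside, a hosting layer looks roughly like the C sketch below. HostDrawLine and gCurrentDC are invented names, not any real product’s interface, though the QuickDraw and GDI routines inside the conditionals are the genuine underlying calls. The price the application pays is one function call per toolbox call - or none at all, if the layer defines the thin ones as macros.

/* Sketch of how a hosting layer maps one API onto two toolboxes at
   compile time. HostDrawLine and gCurrentDC are made-up names; the
   calls inside the #ifdefs are the real underlying routines. */
#include <stdio.h>

#if defined(BUILD_FOR_MAC)
#include <Quickdraw.h>
#elif defined(BUILD_FOR_WINDOWS)
#include <windows.h>
extern HDC gCurrentDC;          /* the layer tracks the active context */
#endif

typedef struct { short h, v; } HostPoint;

/* The application calls this everywhere, on every platform. */
void HostDrawLine(HostPoint from, HostPoint to)
{
#if defined(BUILD_FOR_MAC)
    MoveTo(from.h, from.v);                      /* QuickDraw */
    LineTo(to.h, to.v);
#elif defined(BUILD_FOR_WINDOWS)
    MoveToEx(gCurrentDC, from.h, from.v, NULL);  /* GDI */
    LineTo(gCurrentDC, to.h, to.v);
#else
    /* Fallback so this sketch compiles and runs standalone. */
    printf("line (%d,%d)-(%d,%d)\n", from.h, from.v, to.h, to.v);
#endif
}

int main(void)
{
    HostPoint a = { 10, 10 }, b = { 72, 10 };
    HostDrawLine(a, b);
    return 0;
}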
So five years from now, our old landmarks - the instruction set, the hardware
architecture, and the API - may have rotted and fallen. Will it be a total
mix-and-match world? Will people be running Mac code in an emulator box on
Windows NT on a Compaq Power PC platform, or x86 OLE objects wrappered by
OpenDoc running on OS/2 on a Macintosh with a Cyrix chip emulating the Pentium in
microcode?
I say: yes and no. I expect that the majority of successful commercial software
will be (more or less) compiled and built for a specific class of microprocessor,
hardware platform, and API. It’ll just be easier that way, both technically and in the
marketplace. Though the technology might be able to jump through hoops, the channels
and customers don’t get over such fundamental taboos as “incompatibility” overnight.
But while compatibility may remain a litmus test, it’ll no longer be a barrier.
In-house developers will be able to compile something once and deploy it on their Mac,
Windows, and UNIX machines, letting adapters and emulators take care of details. Or
you could take a product that’s successful on one platform, test-market it in the
emulator community on other platforms and, if it sells, then invest in the native port
to increase your market share and competitiveness. Or (for extra credit) you could
find clever ways to bridge the various environments, perhaps hooking up TAPI in
SoftWindows to the Geoport or AV capabilities on a Power Macintosh.
Old differences die hard. Even after technology has made them irrelevant, the
distinctions of architecture will color people’s thinking. Most conventional
development will probably remain the way it’s always been, but there may be some
interesting new opportunities when the gaps between platforms are bridged over.