You know, waaaaay back when computers ran MVS and UNIX, and computer makers had names like IBM, Honeywell, Univac, and Burroughs, virtual systems and thin clients were the normal way of doing business. Of course, we didn’t call them virtual systems and thin clients back then; mainframes had terminals (usually dumb terminals), and Unix hosts ran the applications while networked X-Stations handled the display.
X-Stations were a lot closer to what we think of as thin clients today: these terminals could display graphics, deliver sound (usually no more than beeps and clicks, but such was the state of technology back then), and offered advanced user interfaces. Colorful windowed screens, multiple user sessions, and multitasking made up the work environment back then. In fact, much of what you can do on today’s Linux and Unix systems you could do 10 (or more) years ago on X-Stations.
One huge advantage of working virtually, on today’s thin clients as well as the X-Stations of yore, is that when a station dies a user can simply replace it or move to another station, log back in, and start working again. X-Stations were relatively cheap, so there were plenty around to replace the occasional misbehaving terminal.
Today, if your PC dies (see the inset for Vern’s Crash Theorem), you are pretty much screwed unless you were smart enough to back up your data, and even then you might not recover everything. In a robust virtual environment you could pour root beer over your thin client an hour before your deadline and pick up where you left off on another station. The only thing you’d lose is the time it took to find another station.
Speaking of losing data: back then, losing data to malware was unheard of. Those big-iron systems had decent security and ran continuously, all day, every day, 365 days a year, and access was always available, often even during system maintenance. System uptimes were measured in months, and there were true stories of servers running, untouched and un-crashed, for years. It was a great way to run a business, and it was all made possible by virtualization.
I should explain that several definitions can be applied to the term ‘virtualization’, but in this instance it means the ability to access one or more operating systems and the applications that run on them. I’ll also add that this access can happen in parallel, where two or more OSes are available for use at the same time, or one after the other, and either locally or from a distance, such as from a thin client.
The PC nearly put an end to the virtual desktop; small, fast processors, a wealth of useful ‘personal’ applications, and the promise of reduced IT overhead made PCs the must-have systems in business, and that carried over into personal use.
Now virtualization and thin clients, after several false starts, are making a major comeback. Mactels, with their dual-core processors, seem to be the perfect platform on which to run two or more OSes. With applications like Parallels Workstation and Boot Camp, and with the rumored inclusion of built-in virtualization components in the next release of OS X (10.5, codenamed Leopard), Apple seems to be embracing the notion that OS X can, and should, run anywhere.
Consider, for instance, what the Intel versions of Apple’s pro desktop lineup and servers might be like. If 64-bit processors are used, and if the rumors are true that virtualization components are in OS X 10.5, then it is conceivable that Apple could be readying a new class of computers, one that makes use of virtualization in the sense that IBM did with its UNIX systems and X-Stations.
In other words, in conjunction with the release of the new pro desktops and servers, Apple could also offer us under-powered devices running a subset of OS X, meant to be used while away from home or the office. Upon returning, your ‘MacBook mini’ would bind itself to the Mac or Windows desktop or server you have running in a computer room somewhere. You could then log in and run an OS X or Windows virtual desktop, via the thin client, using the full power of the server. Any data or files that changed while you were away, be it on your MacBook mini or on the server, would get synced automagically.
Virtual nirvana.
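For the sake of illustration only, here is a minimal sketch of the last-writer-wins logic that the ‘synced automagically’ step might boil down to. This is pure hypothesis on my part, written in Python against the standard library; the folder names and the /Volumes/ServerHome mount point are inventions standing in for a server share the MacBook mini would mount once it is back within reach of home base. A real implementation would also have to handle deletions, conflicts, and interrupted copies.

```python
# Hypothetical sketch: naive two-way, last-writer-wins sync between the
# MacBook mini's local folder and a server share mounted over file sharing.
# Paths are illustrative; nothing here reflects an actual Apple mechanism.

import shutil
from pathlib import Path

LOCAL = Path.home() / "Documents"               # files changed while away
REMOTE = Path("/Volumes/ServerHome/Documents")  # hypothetical server mount

def sync_pair(src_root: Path, dst_root: Path) -> None:
    """Copy any file from src_root that is newer than, or missing from, dst_root."""
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps

if __name__ == "__main__":
    if not REMOTE.exists():
        raise SystemExit("Server share not mounted")
    sync_pair(LOCAL, REMOTE)  # push changes made on the road
    sync_pair(REMOTE, LOCAL)  # pull changes made on the server
```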
Apple, known for innovation and its ability to take existing ideas and breathe new life into them (USB, Bluetooth), might be readying such a system even as we speak. Of course, I’m only hypothesizing, but consider this:
- The new MacBooks are very nice, but they are not lightweights; at 5.2 pounds (2.36 kg), they are portable but can’t compete with Windows-only UMPCs in weight and compactness.
- Intel has also proposed several versions of the Ultra-Mobile PC, and one of them comes with a keyboard, which I believe is a must for any truly usable business device.
- Intel makes the graphics chipset in the Mac mini and MacBooks. That same chipset, along with the processors that power the UMPC, could be more than enough to give the MacBook mini the processing horsepower to let you work while away from your home domain, and the graphics horsepower to let it act as a fully capable thin client once you are at home or in the office.
- Just as the Mac mini provided a means of Mac ownership for those looking for a low-cost solution, this MacBook mini would become an inexpensive portable solution, and the basis for a revitalized way to use computing resources.
Such a system would be embraced by small businesses if Apple marketed it as a turnkey business system, similar to the way it markets turnkey scientific server systems. Package a server preloaded with business software licensed for multiple users along with two or three MacBook minis, and suddenly Apple is into small business in a big way.
There is a possible dark side to all of this wholesome virtual goodness, however: virtualization also means that my fictitious MacBook mini could connect to servers running Windows XP or, eventually, Vista. Owners of regular Macs and MacBooks wanting access to Windows apps could just as easily fire up XP, either from Boot Camp or in parallel alongside OS X.
The problem is that Windows is still susceptible to all manner of malware, and could be the perfect avenue for introducing digital nastiness into OS X. All it would take is one virus or Trojan horse infecting shared resources, hard drives especially, and not only could your virtual Windows die, but so might your virtual OS X. Even malicious code that has been neutralized by Windows virus scanners might leave a little something behind that could affect OS X.
After maintaining a virus-free record for so long, Apple could find that virtualization circumvents whatever security measures it has in place and makes Macs and PCs equals in the worst way. No matter how the infection happens, once a virus is loose in OS X, Apple’s reputation for providing a malware-free OS will be lost forever, and that reputation is one of the Mac’s main selling points.
Of course, all of this depends on which path Apple takes with the pro desktops and servers. It is my opinion that Apple needs to do something to ratchet up high-end sales. On the other hand, embracing virtualization in any form may be too risky.
We’ll just have to wait and see.