The PC is dead. Rising numbers of mobile, lightweight, cloud-centric devices don't merely represent a change in form factor. Rather, we're seeing an unprecedented shift of power from end users and software developers on the one hand, to operating system vendors on the other—and even those who keep their PCs are being swept along. This is a little for the better, and much for the worse.
The transformation is one from product to service. The platforms we used to purchase every few years—like operating systems—have become ongoing relationships with vendors, both for end users and software developers. I wrote about this impending shift, driven by a desire for better security and more convenience, in my 2008 book The Future of the Internet—and How to Stop It.
For decades we've enjoyed a simple way for people to create software and share or sell it to others. People bought general-purpose computers—PCs, including those that say Mac. Those computers came with operating systems that took care of the basics. Anyone could write and run software for an operating system, and up popped an endless assortment of spreadsheets, word processors, instant messengers, Web browsers, e-mail, and games. That software ranged from the sublime to the ridiculous to the dangerous—and there was no referee except the user's good taste and sense, with a little help from nearby nerds or antivirus software. (This worked so long as the antivirus software was not itself malware, a phenomenon that turned out to be distressingly common.)
Choosing an OS used to mean taking a bit of a plunge: since software was anchored to the OS, choosing, say, Windows over Mac was a long-term commitment to one collection of available software. Even if a software developer offered versions of its wares for each OS, switching from one OS to another typically meant having to buy that software all over again.
That was one reason we ended up with a single dominant OS for over two decades. People had Windows, which made software developers want to write for Windows, which made more people want to buy Windows, which made it even more appealing to software developers, and so on. In the 1990s, both the U.S. and European governments went after Microsoft in a legendary antitrust battle that today is largely forgotten. Their main complaint? That Microsoft had put a thumb on the scale in competition between its own Internet Explorer browser and its primary competitor, Netscape Navigator. Microsoft did this by telling PC makers that they had to ensure that Internet Explorer was ready and waiting on the user's Windows desktop when the user unpacked the computer and set it up, whether the PC makers wanted to or not. Netscape could still be prebundled with Windows, as far as Microsoft was concerned. Years of litigation and oceans of legal documents can thus be boiled down into an essential original sin: an OS maker had unduly favored its own applications.
When the iPhone came out in 2007, its design was far more restrictive than that of any PC. No outside code at all was allowed on the phone; all the software on it was Apple's. What made this unremarkable—and unobjectionable—was that it was a phone, not a computer, and most competing phones were equally locked down. We counted on computers to be open platforms—hard to think of them any other way—and understood phones as appliances, more akin to radios, TVs, and coffee machines.