Part 1: Retrospectus

What I got right; what I got wrong

“In computing”, I wrote, “predicting up to ten years ahead is relatively straightforward”. This was largely true in 1995, but has become much less so. In this, and in several other ways, my predictions turn out to have been more-or-less equally right and wrong at the same time, so I shall evaluate their successes and failures together.

Methodology

The rise of just-in-time manufacturing and global supply chains generally has greatly reduced the time-to-market for new products, though new technologies can take even longer than “ten years”: in January 2019, television manufacturer LG announced a retractable flat screen as a flagship product—we seem no closer to the cheap roll-up light-emitting polymer screens promised back in the ’90s. As Moore’s Law has slowed, its first noticeable “failure mode” was that CPUs stopped doubling in speed every 18 months; indeed, top speeds have now been roughly constant for a decade. Instead, we have gained performance by adding more cores, relying on the fact that many use cases are either already multi-process (for example, users running multiple applications on a laptop or phone, or several tabs in a browser) or can be decomposed without too much work (for example, running the user interface and application logic in separate threads to stop an application becoming unresponsive when it’s busy—this can improve perceived performance greatly without needing greater absolute performance; see the sketch after this paragraph). Some applications are naturally parallelisable, such as rendering the increasingly life-like graphics of so-called “triple-A” games, an industry that has overtaken Hollywood in size. Indeed, GPUs have continued to speed up as CPUs have flat-lined, and their parallel architecture has also proved to be suited to AI applications as well as to perhaps the most unexpected major development of the last ten years: cryptocurrencies (Bitcoin and its ilk). Relational database engines are now being written or rewritten to take advantage of GPU performance.
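
As a small illustration of the decomposition described above, here is a minimal sketch (in Python; the slow task and timings are illustrative stand-ins, not a real application) of pushing application logic onto a worker thread so that the “user interface”, here just a polling loop, stays responsive. Perceived performance improves even though no extra absolute performance is gained.

```python
# A minimal sketch: keep the "UI" (a polling loop) responsive by running slow
# application logic on a worker thread. slow_task is an illustrative stand-in.

import concurrent.futures
import time

def slow_task(n):
    """Pretend application logic: a deliberately slow computation."""
    total = 0
    for i in range(n):
        total += i * i
    return total

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_task, 20_000_000)
    while not future.done():
        # The "UI" keeps running (redrawing, handling input) while the work
        # proceeds in the background, instead of freezing until it finishes.
        print("still responsive...")
        time.sleep(0.2)
    print("result:", future.result())
```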

On the other hand, it currently seems easy to predict which CPU family or operating system one will be using in ten years: in all likelihood, the same as those we’ve been using for the past 20 or 30.

The diversity of CPU families in the ’90s has, outside the still-varied embedded arena, reduced to two: ARM and Intel/AMD. For most form factors, there’s one practical choice: ARM for tablets and below; Intel/AMD for anything bigger, with IBM’s POWER architecture deserving an honourable mention for use in supercomputers and by those who want their systems entirely open, without closed-source binary blobs of dubious trustworthiness. The only new entrant is RISC-V, a fully open, no-royalty architecture; it remains to be seen whether this logical development (compared with Intel/AMD’s proprietary architecture manufactured in-house, and ARM’s proprietary architecture licensed to manufacturers) takes off.

Meanwhile, Linux has become the operating system kernel of choice for most vendors other than Microsoft (Windows NT) and Apple (the Mach-derived XNU); Windows NT is the only widely-used non-embedded kernel that is not a UNIX derivative, and even it has been adding UNIX functionality. Increasingly, software is written based on the UNIX model, which has become more entrenched thanks to its being the basis for most internet technologies, and the increasing (and accelerating) complexity of software means this is unlikely to change any time soon; there’s simply too much to rewrite to escape from it.

In 1995 I based my predictions on “current products and trends that point to [my predictions’] fulfilment, and…[on] current research.” Many of the biggest disruptions since have come from outside academic or industrial research labs, whether the bolt from the blue that was Bitcoin, or the wave of economic, regulatory and political re-engineering represented in different ways by Uber, Airbnb, Amazon and Facebook.

The personal computer is dying; long live the phone!

The personal computer as it was, a mostly desk-bound, isolated, personally-maintained device, is on the way out.1 The most commonly-used device now is the mobile phone, whether “dumb” or “smart”.2 Phones are often rented rather than owned by their users, and have no user-serviceable or upgradable parts other than extra storage; the system software tends to be manufacturer-supplied and cannot be altered or replaced by the user, while user applications must be installed from “app stores” operated and controlled by the operating system vendor, in a market where two vendors account for more than 97% of units sold. These devices support a $100bn advertising industry, driven by data extracted from their users with a combination of free-service carrots, weak privacy laws, and downright subterfuge.

At the same time, as billions more of the world’s population have come online in different ways according to their different levels of wealth and income, the one constant is that whether in sub-Saharan Africa or wealthy Western metropolises, most users still have an expensive-for-them personal device that they use for most of their interactions. Individual spend on computing has gone up sharply, despite the expected collapse in price per unit of performance. The more stringent power, weight and size requirements of mobile phones compared with desktop computers have meant that, despite the slow-down that allows a 2007-model PC, suitably upgraded, to work perfectly happily as a personal computer in 2019, most phones are still replaced within two years, if they aren’t broken, lost or stolen before then.

Pronits rule; no word for them was needed

Like electric motors before them, computers have proliferated everywhere. Turing machines, at under 10¢, are now found in devices as cheap as fairy lights, while passive devices such as RFID and NFC tags infest all but the lowliest goods. All but the least of these are internet-connected, so that the Internet of Things now exceeds by orders of magnitude the internet of humans, even the “internet of shopping”, today’s titan which was newborn in the mid-’90s.

Also like electric motors, no new word was needed to describe the phenomenon, because it’s invisible and not something that users have to interact with directly.

Everything is networked, but “cheap” trumps “shared”…so far

The promise of the Desk Area Network has not been fulfilled; instead we have the trivial networking of Bluetooth at one end, and the zombifying “cloud” at the other, in which Amazon, Microsoft, Apple et al have centralised services with smart (but dependent) heavyweight clients. Airbnb, Uber, Zipcar etc. present themselves as promoting shared use of physical resources, but in reality they are a clever use of technology and venture capital to deregulate once-regulated industries (taxis, hotels, car hire) and the labour market at the same time: rather than letting people share underused capital peer-to-peer, the provision of capital has simply been pushed out to individual workers. Many Uber drivers and Airbnb hosts are effectively working at least part-time in their chosen industry, buying a car or decorating a room specially; a far cry from the touted “sharing economy”.

The convergence of different types of networks is complete with the lumbering introduction of “5G mobile”, which, unlike its predecessors (essentially a raft of increasingly sophisticated technical standards for mobile data networks of ever-increasing bandwidth), is an entire ecosystem of technologies for just about every sort of communication that isn’t intra-machine. Under the hood, of course, complexity continues to blossom: the point is not that there’s one single technology to rule them all, but that there is no longer any clear demarcation between the different sorts of networking.

However, computing devices still retain their individual identity, and it’s still largely based on ownership (whether corporate or individual) and use. While most web sites work reasonably well on both desktop and mobile browsers, good sites tend to have separate desktop and mobile versions, and many wildly popular apps have limited or absent desktop support, or, like the popular messaging app WhatsApp, require the app to be installed on a phone before it will work on a computer, which then acts merely as an appendage of the phone.

This “PC writ small” architecture is driven both by economy—under the hood, little has changed: while the emphasis has shifted, with phone operating systems much more serious about security than desktop OSes, there are few new ideas, and modern security rests on ’90s OS-kernel implementations of mechanisms developed in the ’60s and ’70s—and by its suitability for the personalised advertising industry, as devices can be used as an easy and accurate proxy for individuals: there’s less need to track an individual across multiple devices when so much of their online life is mediated by so few. Hardware trends have backed this up: while components are, as predicted, increasingly standardised, integration levels have continued to increase as various sorts of custom chip become cheaper, so that many smaller devices now consist of a single main “System on a Chip”, plus a handful of hard-to-integrate support components.

Data is more portable: one’s data and settings migrate from phone to phone, provided one stays in the same device ecosystem, and documents are even easier to move around, though a few centralised “cloud” providers such as Dropbox, Apple and Microsoft dominate the market. “[People] will start to use archive services”, and they have, though many still lose precious data. “Much information will be provided free,” but I didn’t foresee Wikipedia! The idea of universal applications that can be “teleported” from one device to another has not caught on, and this is no surprise: quite apart from the complexities of transferring the state of a running application, rather than just the checkpointed state of a saved document, nobody really needs it: our phones are always with us, and we use different devices for different things; few people want to continue to edit on their phone the article they started on their laptop. Nonetheless, web browsers, the most widely-demanded application on all devices, have for a few years provided some degree of “teleportation”, such as the ability to access one’s browsing history across multiple devices, and to “send” a browser tab from one device to another. (Your author rarely uses this for anything more complex than being able to read on the train an article he first opened at his desk.)

What price privacy and security?

To the surprise of many, users have rapidly become comfortable with handing over sensitive data to large corporations, despite a constant stream of negative publicity and security breaches. This has however offered a solution to another problem, namely that of reliable backups of valuable data.

Although in some ways smart phones are like active badges, constantly transmitting fine-grained location data (ingeniously combining GPS with coarser-grained wifi access-point triangulation, rather than requiring a special-purpose infrastructure), this data is mostly used by advertisers, with various degrees of anonymisation; it’s also wildly popular with government agencies.
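
For the curious, here is a minimal sketch of one common approach to wifi-based positioning, weighted-centroid localisation. The access-point coordinates, signal readings and path-loss constants are illustrative assumptions, not a description of any real phone’s implementation.

```python
# A minimal sketch of wifi positioning by weighted centroid: given the known
# positions of nearby access points and the received signal strength (RSSI)
# from each, estimate the device's location. All numbers are illustrative.

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exponent=2.5):
    """Rough distance estimate (metres) from a log-distance path-loss model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def weighted_centroid(readings):
    """readings: list of ((x, y), rssi_dbm) for access points at known positions."""
    weights = [((x, y), 1 / rssi_to_distance(rssi)) for (x, y), rssi in readings]
    total = sum(w for _, w in weights)
    x = sum(px * w for (px, _), w in weights) / total
    y = sum(py * w for (_, py), w in weights) / total
    return x, y

# Three access points at surveyed positions (metres), with observed RSSI.
aps = [((0, 0), -50), ((30, 0), -65), ((0, 40), -70)]
print(weighted_centroid(aps))  # approximate (x, y) of the device
```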

Paying for things

I only mentioned payments in passing; rented software has become the norm, though not in most cases “pay-per-use” as I foresaw; much commoner (and not foreseen) is “free-to-use”, ad-supported software (Google’s Gmail and G Suite; Microsoft’s Office Online). I also failed to foresee the rise of libre software such as LibreOffice and Firefox.

Interaction in word and deed

Voice recognition has improved greatly. Some users (though still a minority) use voice to dictate text and commands to their phones; others, whether for reasons of efficiency, privacy, or habit, have stuck with increasingly smart on-screen keyboards, which use personalised prediction, and can turn a swipe over the letters that make up a word into accurate text. Voice commands are most widely used at home, with always-on digital assistants such as Amazon’s Alexa listening for your command to play music, answer questions (web searches or other queries that have a short answer) or order groceries. Banks and other institutions are starting to use voice-based authentication, and making basic mistakes (such as opening themselves to replay attacks; see the sketch after this paragraph). Meanwhile, though facial recognition has advanced to the point where it’s used to unlock phones, and is beloved of law enforcement agencies and repressive tendencies in governments worldwide, and gesture is used for gross control on touch devices (basically, phones), the keyboard is still an integral part of everything larger. Paper is not going away; indeed, ambitious and imaginative initiatives such as Dynamicland are bringing the physical and tactile into the realm of computing. The desk is here to stay, and for my fortieth birthday I had my maternal grandfather’s desktop remade into the centre of a beautiful desk that I hope I’ll be using for the rest of my life. I’m really not sure what I was thinking when I wrote “desk space will no longer be needed”!
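
As an aside on the replay problem, the sketch below shows one standard mitigation: a challenge-response scheme in which the caller must speak a freshly generated phrase, so a recording of a fixed passphrase is useless. The match_voiceprint and transcribe functions are hypothetical stand-ins for a speaker-verification model and a speech recogniser, not real APIs.

```python
# A minimal sketch of replay-resistant voice authentication: issue a one-time
# random challenge phrase, then check both the speaker's voiceprint and that
# the spoken words match the challenge. Helpers are hypothetical stand-ins.

import secrets
import time

WORDS = ["amber", "copper", "delta", "ember", "falcon", "garnet", "harbour", "indigo"]
CHALLENGE_TTL_SECONDS = 60
_pending = {}  # user_id -> (challenge_phrase, issued_at)

def issue_challenge(user_id):
    phrase = " ".join(secrets.choice(WORDS) for _ in range(4))
    _pending[user_id] = (phrase, time.time())
    return phrase

def verify(user_id, audio, match_voiceprint, transcribe):
    challenge = _pending.pop(user_id, None)  # single use: a replay finds nothing
    if challenge is None:
        return False
    phrase, issued_at = challenge
    if time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False
    # The recording must both sound like the enrolled user and contain the
    # freshly issued words, so a replayed old recording fails the second check.
    return match_voiceprint(user_id, audio) and transcribe(audio).strip().lower() == phrase
```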

What has increasingly disappeared is the large screen. Even though most households still have a television, it’s no longer a must-have. “Bring your own device” is increasingly the norm, and cabled networking, even for non-portable devices, is becoming rare. So our rooms are indeed less cluttered with computers than they were; but networking has affected rooms even more profoundly: while individual machines remain recognisable, as networks have become mainly logical constructs rather than physical ones, individual data centres have been absorbed into the “cloud”, and cost savings have done the rest, leading to the rise of the virtual organisation (as noted by Steve Jobs3 as early as 1990), the stereotype of the startup in a coffee shop, and the reality of many workers spending much of their time either working isolated, if hopefully in physical comfort, at home, or on transport, “hot-desking”, or in some other permanently temporary and typically uncomfortable arrangement (especially if you do not have a cast-iron constitution and concentration—the already privileged and powerful are inevitably disproportionately advantaged by arrangements that are always more ad hoc for those without clout).

Back to phones themselves: I did not foresee that “storage devices the size of a matchbox that can store billions of characters of data and programs” might be matched by similar computing capacity (my mid-range pocket phone is more powerful than a top-end server of 20 years ago) and a screen, and hence be an all-in-one computing device, complete with geo-location, voice recognition, microphone, camera etc. rather than a passive device to be plugged into compute and I/O.

I predicted the rise of distributed computing: “Computing power will no longer be divided into discrete and qualitatively different units, but be a substance rather like electrical power…Distributed computing turns separate computers each running their own programs into pronits which each run parts of many programs. These pronits will be ubiquitous, embedded in and controlling the network, and part of everything connected to it.” We do indeed measure “compute” in CPU cores, GB of RAM and Gbit/s of bandwidth, but instead of consumption at the point of use, we got the cloud: reservoirs of compute run by a few big companies (Amazon, Microsoft and Google again) into which our data is shovelled, and without which our “smart” phones lose a large fraction of their capabilities. We are hooked into the online giants’ advertising ecosystem by a range of free services that crucially include some that we increasingly rely on to maintain social and professional contacts.

I wrote that “even the boundary between interacting with a human and with a computer will be blurred.” The context was suggesting that the help function of a word processor might be extended to enable the user to chat to a human; instead we have “artificial intelligence” that in limited contexts has become better and better at mimicking humans. But as users have become used to getting software and many services without paying, human technical support remains the byword for inaccessibility that it was in the ’90s; Stack Exchange stands out as an ingeniously designed forum system that elicits good answers, easy to find with a web search, to questions on a wide range of subjects (not all technical), though it’s mostly used by technically sophisticated users.

Finally, the use of electronic devices as “memory prostheses” is widespread, though this is mostly implemented as unstructured search across a variety of applications (email, calendar, instant messaging, location…); the general contextual data log envisaged by Xerox EuroPARC does not exist; we have a series of discrete searchable external memories, rather than an active memory assistant.
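
A toy sketch of what such a “memory prosthesis” amounts to in practice: each application exposes its own search, and the prosthesis simply queries them all and merges the hits. The stores and data below are invented for illustration.

```python
# A minimal sketch of "discrete searchable external memories": query several
# separate stores (stand-ins for email, calendar and chat archives) and merge
# the hits by date, newest first.

from datetime import date

email = [
    {"date": date(2019, 3, 2), "text": "Flight booking confirmation LHR-EDI"},
    {"date": date(2019, 5, 14), "text": "Minutes of the budget meeting"},
]
calendar = [
    {"date": date(2019, 5, 14), "text": "Budget meeting, room 3"},
]
chat = [
    {"date": date(2019, 5, 13), "text": "Can you circulate the budget figures?"},
]

def search(stores, query):
    """Query each store separately and merge the hits, newest first."""
    hits = []
    for name, items in stores.items():
        for item in items:
            if query.lower() in item["text"].lower():
                hits.append((item["date"], name, item["text"]))
    return sorted(hits, reverse=True)

for hit in search({"email": email, "calendar": calendar, "chat": chat}, "budget"):
    print(hit)
```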

“Computers, being out of sight, will be out of mind”: untrue. Our various gadgets are still a source of continuous frustration and fascination. Nonetheless, “People will think in terms of information and communication rather than in terms of computation”. When they work, our special-purpose interactive objects, from central heating thermostats to toys, focus attention on the task at hand, not the computation involved; in pathological cases, though, they focus it instead on information gathering, social media, and the privacy and security implications of unnecessary internet links.

“Computers will be able to read books, making translations and looking up commentaries on the spot, and to identify paintings and birds that the user sees”: the technology exists, and is an important part of Augmented Reality, but despite a number of impressive demos in the last ten years it is struggling to get beyond promising toy applications at the point of use. The same effects are achieved more laboriously or expensively, for example by buying an electronic edition of a book and copying and pasting text into a translation service (machine translation, at least, has advanced to being an extremely useful aid for the non-speaker wanting to get the gist of an article, or, intelligently used, for the intermediate speaker wanting to grasp a nuance—here technical and linguistic sophistication are required), or by using a phone-based bird, tree, or font identifier: no more advanced than a paper one, but much easier to summon at need. Computers do indeed support cognition, most obviously in the modern mantra that it’s no longer necessary to know anything, only how to find it with Google; but the directed support of computers “at any given moment…offer[ing] information from the totality of that available [tailored] to…[one’s] current needs” is restricted to the twisted caricature that is online targeted advertising.

We got much better at information retrieval much faster than we expected: the introduction of Google and its PageRank algorithm, plus the determined engineering put in by Google to hoover up all the data available, and digitise far more (Google Maps and Google Books being the two most visible results) made it trivial to find a huge range of information and things within a few years. Ten years later, one of Facebook’s less remarked-on effects was to make it an order of magnitude easier to seek advice and information from one’s social circle. In large areas of life, information search is practically a solved problem, though the hidden biases in the way in which it is solved are only just beginning to be acknowledged and regulated.
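
For readers who have not met it, here is a minimal sketch of the PageRank idea mentioned above: a page’s importance is the chance that a random surfer, who mostly follows links but occasionally jumps to a random page, is found there. The toy graph and parameters are illustrative, not Google’s.

```python
# Standard power iteration for PageRank with a damping factor, on a toy
# four-page web. Dangling pages (no outlinks) share their rank with everyone.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

toy_web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "orphan": ["home"],
}
print(pagerank(toy_web))  # "home" ends up with the highest rank
```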

Meanwhile, access to information and its abuse has become an enormous problem, whether in the asymmetry between governments and their citizens, between corporations and their customers, or between rich and poor. Only determined resistance has seen off various attempts to worsen the imbalance, whether in the name of state security or intellectual property “rights”; 2018 saw the end of net neutrality at the federal level in the US, even as the EU’s GDPR marked an important blow against corporate and governmental invasion of privacy. At the same time, in different ways, both American and European IP regimes are stifling the public domain, with one reprieve: in 2019, for the first time since 1998, new works (from 1923) entered the public domain.

Rather than active fabric, the panopticon

“It is not only buildings that will become active, controlled by and permeated by communications equipment; clothes will contain active badges and pronits, and screens will be built into glasses, like a head-up display on the world.” All of these technologies have been commercially introduced, but only building management has really taken off. Screens in glasses, in particular, have been negatively received, with Google Glass wearers being targeted for perceived invasion of privacy: ironic in cities that are now blanketed in public and private CCTV. Indeed “we [are] never out of touch” in the Western world; yet hard-to-surveil satellite telephony is illegal in some countries.

Swiss Army knives come in various sizes

“Telephones and televisions will cease to exist”. Both are still very much in evidence, but more as cultural artefacts than as technological ones: their raw capabilities are increasingly the same, but they are used for different things.

Conversely, while printers, cars, microwaves, washing machines and fridges are increasingly online, users seem only really to want this functionality from printers; network-connected fridges are an object of humour, televisions an object of suspicion as they monitor our intimate conversations (excused by many because our television is increasingly delivered over the internet), and cars a source of worry about remote attacks; and while manufacturers seem to be serious about wanting us to connect everything to the internet, their attitudes to security and privacy are almost uniformly woeful.

Few green shoots

I predicted that distributed computing would lead to greater efficiency, though I only touched on the implications of load-balancing and bringing compute to data. Today we worry about the energy consumption of computers far more, but mostly either because it’s become a significant part of overall energy needs, or because we always want longer battery life from our gadgets. Cloud computing data centres are extremely efficient, with the largest providers using custom designs down to the hardware level to minimise their energy budgets; but the gains have been spent on increased capacity and price competition. Even the move to virtual computing, where many customers share the same hardware as demand ebbs and flows, has done little to reduce energy consumption: extra capacity is required to cover peak demand, and both network topology and regulatory regimes restrict exactly where and in what combination virtual machines are mapped to physical ones.

Relatedly, while the global supply chain has delivered undreamed-of speed-to-market and economies of scale, it has become increasingly fragile as it integrates, so that, for example, DRAM prices have spiked owing to factories burning down (by 100% as early as 1993, and by 20% in 2013) or simply closing (5.5% of world-wide production went offline in 2017). So much of the world’s manufacturing capacity is now in China that its exchange rates and economic cycle have a disproportionate effect. Gamers were unable to buy high-end graphics cards for a while in 2017 when they were being bought up by cryptocurrency miners; and trade disputes and trade wars (currently looming) can rapidly change what makes economic sense. Meanwhile, the environmental and human costs have continued to increase: global energy demand has continued to rise, and manufacturers have reacted to fierce competition with an ever more ruthless search for cheap labour, while, with a very few exceptions (chiefly among the biggest consumer brands), turning a blind eye to exactly how their raw materials are obtained or the conditions under which their products are assembled.4

Of what I could not speak

I did not attempt to make predictions about nanotechnology, self-replicating machines, or artificial intelligence, claiming that they would not greatly affect society in the next thirty years. This has proven true: while there has been a lot of buzz about AI in the last ten years, it’s not really AI at all, but machine learning that has taken off. Powerful, impressive, society-changing and something I missed, but not AI.

The nice things we haven’t yet got

The dominant business models of the early 21st-century tech industry have been based either on “free” services paid for by advertising (Facebook, Google, Amazon), or on a disintermediated, deregulated version of an existing service (Uber, Airbnb, Amazon), or simply on massive scale (Amazon Web Services). In each case, user data is the fuel, whether to target advertising or make the service more efficient, and in the case of “platforms” such as Amazon and Uber, the “users” include those selling their products and services. As a result we have concentrated data centres rather than a distributed computing fabric, massive data silos rather than a ubiquitous commons, and a new generation of enormous corporations rather than a flourishing market of peer-to-peer exchange, all backed by increasingly ferocious intellectual property law that allows start-ups to use a web of copyright, patents and digital rights management to stifle competition, further encouraging them to grow as large as possible as quickly as possible.

Alan Kay, the visionary at the centre of Xerox PARC, never learned the importance of sharing: just as the innovations of the ’70s took 30 years to come to fruition, even with the determined efforts of Steve Jobs, so Kay’s ’00s publicly-funded think tank, VPRI, which picked up where he had left off and gave us what the ’80s might have looked like at PARC, published no more than a fraction of the code it produced, and only a few screenshots of its intriguing demos. Unfortunately, the next generation, who should know better, are in places just as bad: Bret Victor is doing wildly imaginative technical work coupled with a total failure to use new modes of dissemination.

Nice surprises

I completely overlooked the rise of free software; this was perhaps forgivable, since despite the GNU Project being already a decade old, it had made relatively little impact; freeware and shareware still thrived, and the services underlying today’s massive distributed development and delivery of software were in their infancy; not to mention that free operating systems were still at the hobbyist-only stage. Open Source could easily have led to the death of free software, but in fact it led directly to free software’s current dominance, in combination with the economic incentive of an immense and complex base of functionality that is best maintained in common while competition occurs at the edges and top of the stack.

Other nice surprises fall into two main camps. The first is the successful application of technology to primarily non-technological ends: the British charity mySociety spearheading a “civic programming” movement that now spans the world, leveraging the expertise of civic-minded technologists to open up public life and institutions, and using cost savings to work their way into partnership with many local and national governments; and the many artists and educators using technology creatively (I mention Nicky Case and Alex McLean as a couple of random examples). The second is the stunning success of the movement to put programming at the centre of the educational curriculum, catalysed in the UK by the Raspberry Pi (a success I predicted would not happen!).

Other work

On 7th July 2006 I noticed The Next Few Decades of Computing by Linas Vepstas, an occasionally revised essay, started a few years after mine, with a similar theme.

Only mature fields have almanacs; the best way to follow developments in tech changes form every few years, let alone author. Currently, the best place to sample the changing winds may be Twitter (absent the budget to locate and travel to meet those working the bellows!). Azeem Azhar’s Exponential View is a good curated feed, available as a weekly digest email.


  1. Though it’s easy to overstate its demise: as with email, which remains the baseline mode of communication everyone can access even though many people hardly use it, PCs are still the “gold standard”, and though their relative numbers are now dwarfed by mobile devices, the absolute number in use has continued to rise. The same physical machines often evolve from “fat” to “thin” clients as more work is shifted into the cloud; yesterday’s powerful workstation is still adequate as today’s “dumb” terminal (though “minion” is probably more apt—unlike the terminals of the 1970s, considerable processing power is required).
  2. I put these adjectives in scare quotes because their Western origin and bias misses the sophistication of the technical and human systems underpinned by all varieties of phone: from the incredible success of mobile-phone infrastructure and alternative finance in Africa, helping to drive improvements in the lives of the continent’s billion people despite a depressing lack of progress in governance and public infrastructure over the same period, to the extraordinarily different versions of the future currently on offer in China and the West, despite the increasing homogeneity of the underlying technologies, as hard physical limits and global capitalism continue to drive increasing consolidation of the tech industry.
  3. I can no longer find this description of virtual organisations from an interview. If any reader can, I would appreciate a pointer!
  4. The impact of Fairphone, a Dutch start-up making phones with end-to-end transparency in the supply chain, has been mainly to publicise how entrenched inequity is.

Last updated 2019/08/20