CHAPTER 1
Major Trends in Software
Software in the form of computer programs is the machinery that drives digital systems. In that narrow sense, it is computer code. But a key point is that it is indeed "soft." First, it is soft in that it can be easily changed and updated, much more easily than hardware. Second, it is soft in that it can be copied and shared without any significant cost of sharing.
The characteristic of being able to share a good indefinitely without it running out applies to data and knowledge as well as computer code. Throughout this book, I will often use the term "software" in the broadest sense—code, data, and knowledge represented in that data—when discussing its impact.
The term "data" brings up the image of tables of numbers, but it is much more than that. The human body of knowledge is largely stored and passed on in the form of text, sound, and images; increasingly, that knowledge is preserved and organized in digital form and accessed by software programs. Knowledge, and our ability to find, create, and share it, is a fundamental trait distinguishing human society. Knowledge can grow with few limits if we preserve it and find ways to discover quickly what we need to know.
The focus of this book on software does not negate the impact of hardware advances. As I have noted, an increasing number of physical products incorporate digital processors and thus software. This expansion of "digital systems" is part of the trend of software becoming more pervasive in our lives. The expansion of systems driven by software will continue.
Why does software change the nature and speed of technology evolution and its impact on society? The following subsections highlight key trends accelerating that impact.
Exponential growth in computer power
The processing power available to software grows exponentially as we periodically upgrade our hardware, buy new hardware systems, or add servers to a network. Continuing improvement in digital processor speed and memory storage allows more complex software to operate quickly enough to be useful and to be affordable. Famously, Moore's Law says that digital processor and memory chips improve exponentially in complexity (doubling in the number of core processing elements—transistors—on the chips every 18 months). Exponential growth creates amazing progress—at the Moore's Law rate, the number of transistors on a chip increases by a factor of over 32,000 in 24 years. If that continued and was translated directly into computing speed, what takes 32,000 seconds (more than 9 hours) to compute today would take only one second in 24 years.
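The arithmetic behind those figures is easy to verify. The following short Python sketch—my own illustration, not part of any cited source—compounds a doubling every 18 months over 24 years:

```python
# Back-of-the-envelope check of the Moore's Law arithmetic in the text:
# doubling every 18 months (1.5 years) compounds to a very large factor.

def growth_factor(years, doubling_period=1.5):
    """Improvement factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

factor = growth_factor(24)  # 16 doublings -> 2**16 = 65,536 (indeed "over 32,000")
print(f"Improvement over 24 years: {factor:,.0f}x")

# A 32,000-second (~9-hour) computation shrinks to well under a second:
print(f"32,000 s today -> {32_000 / factor:.2f} s")
```

Twenty-four years at that rate is sixteen doublings, a factor of 65,536—comfortably "over 32,000," as stated above.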
It appears that Moore's Law can continue for quite a while, with both improvements in basic processes and the use of multiple semiconductor layers on a chip (moving from a flat architecture to a 3D architecture). In a February 2012 talk in which he predicted the continuation of this trend, Intel CEO Paul Otellini pointed out that Intel moved from a 32-nanometer process for chips in 2009 to a smaller 22-nanometer process for its Ivy Bridge chips in 2012, allowing more transistors per chip.
Nathan Myhrvold, a physicist by training and the former Chief Technology Officer at Microsoft, put this trend somewhat differently in a chapter of Talking Back to the Machine: Computers and Human Aspiration (edited by Peter J. Denning). He claimed that computing power had increased by a factor of one million in the preceding 25 years (speaking in 1999) and that it should increase by a factor of one million in the next twenty years; he felt this growth could continue for at least 40 years. That rate of growth means that in 30 years, computers of comparable price will be able to do in 30 seconds what would take a million years to do today, he noted.
The trend may even be accelerating when translated into the support it can give software. Multiple microprocessor "cores" (processing units) are put on one chip, allowing more than one thing to be done at once on the chip—allowing more than one software program to run simultaneously. For example, your smartphone might be finding news relevant to you while you dictate a message to be sent as text. Intel researchers were working on a 48-core processor for smartphones and tablets in 2012, targeting five to ten years to market.
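That kind of simultaneity can be sketched in a few lines of Python using the standard library's thread pool; the task bodies below are hypothetical stand-ins for the smartphone example above, not real services:

```python
# Two independent tasks running at the same time, as multi-core chips allow.
# The task bodies are placeholders for real work (a news search, speech-to-text).
from concurrent.futures import ThreadPoolExecutor

def fetch_relevant_news():
    return "3 new articles found"    # placeholder for a network news search

def transcribe_dictation():
    return "message ready to send"   # placeholder for speech-to-text processing

with ThreadPoolExecutor(max_workers=2) as pool:
    news = pool.submit(fetch_relevant_news)   # runs in one worker thread
    text = pool.submit(transcribe_dictation)  # runs in another, concurrently
    print(news.result(), "|", text.result())
```

On a multi-core processor, the operating system can schedule those two workers on different cores, so neither task has to wait for the other.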
There are other methods on the horizon that could allow continuing growth in computing power if current approaches reach a limit. In October 2012, it was announced that David J. Wineland of the US National Institute of Standards and Technology would be awarded the Nobel Prize in Physics "for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems." Wineland's group has demonstrated computing operations based on the cited research that have the potential to accelerate computing power beyond today's technology. Wineland was quoted as saying, "Most of us feel that even though that is a long way off before we can realize such a computer, many of us feel it will eventually happen."
Another aspect of processing power is the increasing use of network-based processing. For example, many smartphone applications use processing within the network. Companies such as Google compute search results on huge banks of computers in the network, not limited by the processing power of the device accessing those results. (More on this point in the next subsection.)
Obviously, computers will be able to use complex models and analyze quantities of data that are out of the question today. What we see today is impressive, but the advances aren't stopping there.
The Internet and cloud-based computing
Internet connectivity, the World Wide Web, and search engines yield access to huge amounts of data—information that would not be feasible to store, or to keep updated, on a local device. These huge data repositories go well beyond what any human could remember, of course, and in effect are an important adjunct to human intelligence. That intelligence is increasingly available to us wherever we are through mobile devices such as smartphones. Web search is often the first resource we turn to for activities such as finding a business or getting directions to an address.
The trend of fast access to Internet-based information is accelerating. In November 2012, Google launched its first installation of an ultra-high-speed network to homes in Kansas City. The new technology claims speeds of up to 1 gigabit per second. Currently, most Americans have Internet connections one-twentieth of that speed.
The Internet allows the power of software running in the cloud on powerful "servers" (specialized computers) to be available to connected devices. Thus, the resources for running the software can be much greater than what the device itself can support. And connectivity means that multiple servers in the cloud can support a single service. Further, this "cloud-based" computing power is available to businesses as a paid service, even to smaller companies, allowing the concentration of expertise, backup, and security in such services to be beyond what even some large companies want to invest in. Netflix, the large provider of downloadable video, blamed an outage on Christmas Eve 2012 on the Amazon Web Services infrastructure it was using to serve customers. (Apparently, an Amazon engineer doing routine maintenance made an error that erased some data.) The largest Internet Service Provider (ISP, providing Internet access to servers for outside companies for a fee) is Level 3 Communications, which in 2012 provided services to over 2,700 corporations using over 100,000 miles of optical fiber.
Services delivered over the Internet (based in the cloud) are a major growth trend. Individuals look at such services, e.g., Google's web search and Facebook, as a natural part of the Web. The Web pages we visit are delivered by servers in the cloud.
And now companies big and small are increasingly looking at cloud-based services as an alternative to "premise-based" software, where software is purchased and loaded onto computers within the company. By using Software as a Service ("SaaS"), companies avoid the costs of hardware and of maintaining the software. They can expand the number of users without buying new hardware. What would be a capital expense becomes an operating cost spread over time. The need to hire experts in that software to keep it running properly is eliminated, since the cloud service provider takes on that function.
The cloud services provider gets the advantage of scale. The use of servers is spread over many customers, and feedback from one customer leads to fixes to the software that benefit other customers before they even encounter the problem.
In terms of the quality of the software, the cloud-based approach has many advantages. Since the software in the cloud is fully under the control of the developer, without the burden of many different versions operating in many different environments on customers' premises, one version can be supported and updates can be very frequent, improving the customer experience and making it easier to add features.
A deliverer of cloud services can even compare versions of software before making them available to all customers. A common practice now is "A/B testing," where a company provides in the cloud two different versions of a service to different users (typically by random selection). The company can collect metrics on which version gets the better response (e.g., generates more sales) and adopt that version across the service. Compare this to the old model, where a company releases a new version every year or two and ships it to customers on a CD, with no guarantee that they'll upgrade from a 10-year-old version, complete with all of its limitations. (This isn't a theoretical problem; many people, for example, are still using old versions of common programs such as Microsoft Office and old versions of operating systems such as Windows XP.)
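The mechanics of A/B testing are simple enough to sketch in a few lines of Python. The assignment rule and the simulated purchase data below are my own illustration, not any particular company's system:

```python
# Sketch of A/B testing: each user is assigned (pseudo-randomly but
# repeatably) to version "A" or "B", a metric is collected per version,
# and the better-performing version wins. The data here is simulated.
import random

def assign_version(user_id, seed=2012):
    """Repeatably assign a user to version 'A' or 'B'."""
    rng = random.Random(f"{seed}:{user_id}")
    return "A" if rng.random() < 0.5 else "B"

def better_version(outcomes):
    """Pick the version with the higher conversion rate.

    `outcomes` maps version -> list of 1 (user purchased) / 0 (did not).
    """
    rates = {v: sum(results) / len(results) for v, results in outcomes.items()}
    return max(rates, key=rates.get)

# Simulated results: version B converts 4 of 5 users, version A only 2 of 5.
outcomes = {"A": [1, 0, 0, 1, 0], "B": [1, 1, 0, 1, 1]}
print(better_version(outcomes))  # prints "B"
```

Seeding the assignment on the user's identity keeps each user on the same version across visits, which is what makes the comparison between the two groups meaningful.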
In the 2012 book How Google Tests Software, James A. Whittaker, Jason Arbon, and Jeff Carollo comment that "Google has ... benefited from being at the inflection point of software moving from massive client-side binaries with multi-year release cycles to cloud-based services that are released every few weeks, days, or hours." In an October 2012 shareholder letter posted on Microsoft's website, CEO Steve Ballmer indicated that one area of focus for Microsoft going forward was "building and running cloud services in ways that unleash incredible new experiences and opportunities for businesses and individuals."
The growth of cloud-based software has accelerated the rate of change of software beyond its intrinsic ability to evolve quickly. It is a fundamental trend that will continue, and it makes the acceleration of change driven by software even more pronounced.
Automatic software updates
Cloud services allow rapid changes in software while maintaining consistency, so that there is just one version to support, and companies that sell software running on client devices want this advantage for their software too. Desktop software and mobile apps are moving in this direction, with automatic updates ("auto-updates")—which install new software with no user intervention or only a minimal permission request—becoming more prevalent. Major software developers are almost forcing updates, often with a warning that security concerns require the upgrade. Google takes it a step further with its Chrome browser, which automatically downloads and installs updates in the background (a feature you can't disable).
This may occasionally lead to issues when the updates aren't invisible fixes to software problems. If a feature changes to be different from what the user is familiar with, it may cause resistance. There was one report of a significant feature being removed in a mobile phone software update that the vendor characterized as a maintenance update; the suspicion was that the feature was removed because of a patent suit by a competitor. If features start disappearing after one buys a product, it will cause a backlash, but, as I write this, the practice hasn't been widespread or controversial.
The upside of automatic updates is that improvements can be added quickly, and the cost of supporting many versions is eliminated. This trend has so many advantages that it will certainly continue, despite occasional problems that arise. Again, this trend allows faster evolution of software.
Connected mobile devices
Smartphones and tablet computers with wireless connections to the Internet are making computing power available to individuals wherever they are, increasing our access to network-based information and computing power (and the software running it all). Smartphones deliver many of the same services we enjoy on our PCs. A Verizon survey published in October 2012 found that 52% of all U.S. consumers surveyed say Internet service is their home's most important utility, and nearly 40% of Americans reported having an Internet-connectible device with them at all times, making them what the Verizon report called "borderless consumers." Over 800 million smartphones were sold in 2012. The trend toward "personal assistant" and "voice search" services makes information more quickly and intuitively available. In addition, the expansion of search capabilities to the exploration of personal information on a device as well as in the cloud makes our own information more easily available to us; more and more of what we do is recorded digitally (e.g., our contact info and email) and is searchable.
Wireless connections are becoming faster and more widely available. A number of wireless companies are moving to "4G" (fourth-generation) networks such as LTE (Long-Term Evolution) to provide faster data rates, although a shortage of available spectrum for supporting an increasing number of devices, along with increased use of the data channel for applications such as video, may limit the speed of this growth.
A partial solution to the constraints on long-distance wireless technologies is short-distance wireless networks such as WiFi, which can supplement the long-distance wireless networks. Their wireless reach is local, but they use wired networks to get to the Internet. Smartphones can use a local WiFi connection if the owner enables it. WiFi and similar technologies can be found today in coffee shops, other retail establishments, offices, airports, many homes, and soon on many airplanes. It's safe to predict that this trend will grow.
The growth of "apps"
"Apps" is of course shorthand for "applications," and the term evolved to describe software downloaded to mobile devices. Mobile apps are obtained over the Internet and used on devices with an Internet connection.
One way of viewing apps is as an expansion of Web browsers. Instead of using a standard browser such as Internet Explorer, Safari, Chrome, or Firefox, apps present an interface to the user that is tailored to specific tasks and thus more efficient. Some apps can run independently on the device without an Internet connection, but many are "distributed" applications, running partially on the device and partially in the network. One advantage of apps is that they can respond faster for some functions than a cloud-based service operating solely through a web browser.
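That split between device and network can be sketched simply: the app prefers the network service and falls back to local processing when the connection is unavailable. The function names and cached data below are hypothetical, purely for illustration:

```python
# Sketch of a "distributed" app: part of the work runs in the cloud,
# part on the device. When the network call fails, the app falls back
# to a simpler local answer. All names and data here are hypothetical.

def forecast_from_cloud(city):
    """Placeholder for a network request to a cloud weather service."""
    raise ConnectionError("no network")   # simulate being offline

def forecast_from_cache(city):
    """Placeholder for on-device processing over locally stored data."""
    return f"{city}: last cached forecast"

def forecast(city):
    try:
        return forecast_from_cloud(city)  # prefer the richer cloud result
    except ConnectionError:
        return forecast_from_cache(city)  # respond locally when offline

print(forecast("Boston"))  # prints "Boston: last cached forecast"
```

This is one reason apps can feel more responsive than a pure web page: the on-device half can answer immediately, with or without a connection.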
There seem to be apps for every need. In September 2012, Apple was offering about 700,000 apps for iOS devices in its online App Store. The Google Play app store offered 675,000 applications for the Android operating system at the same time. Since an app can be downloaded in minutes, it essentially allows users to customize their devices, to add software that directly suits their needs. Apps also provide a new marketing channel, as "in-app" purchases generate sales.
According to a December 2012 report from mobile app analytics company Flurry, US consumers spent an average of 127 minutes per day on mobile apps—more time than they spent on the web and almost as much as they spent watching TV. They spent 43% of that time on mobile gaming and 26% on social networking, according to the Flurry survey.
It may get more difficult to even know where software is running. As processors on mobile phones get more powerful and memory sizes increase, more of the software providing a service may be shifted to the device accessing those services, making the software even more responsive. The movement of some processing from server to client also reduces the cost of providing the services. The long-term result may be that cloud services run through apps will become increasingly powerful without proportional costs to the service provider, accelerating the trend further.