Principles for Coping with the Evolution of Computing
Computers collaborate on the Internet
much the way cells collaborate in multicellular organisms and
the way organisms compete and collaborate in ecologies. What are
the parallels and what can we learn from them?
Single-celled organisms evolved into multicellular organisms a billion
years ago. Computing is in the midst of a similar transition. Thirty-five
years ago few computers communicated directly with others. Now tens of
billions of computers exchange information at Internet speeds. The role that
interacting computers play in the world has changed dramatically as their
costs dropped and their numbers exploded. They entertain us, help us shop,
help us communicate with and befriend each other, enhance our memories,
recognize our faces, understand our speech, and talk with us. As the digital world
inexorably becomes more complex it encounters problems common to all complex
systems – problems already solved in the evolution of living systems.
This website explores the challenges encountered as computing becomes
more complex and it discusses architectural solutions for the problems
inherent in increasingly complex systems.
In the late 1960s a handful of computers in universities and research labs began
to be connected in a persistent high-speed network called the ARPANET.
A descendant of that network eventually became the Internet. In 1990
Tim Berners-Lee put the first Web page up on the open Internet. The web was born
and it grew rapidly. Today an isolated computer is an oddity. Computers surround
us: they are in our pockets and purses, on our wrists, and in our cars, houses,
and offices. Google, Amazon, Yahoo, Baidu (China's Google-like equivalent), and
many other less well-known organizations crawl the web for various purposes.
Google led the way to monetizing all the data thereby gathered by selling ads.
They stored the contents of all the websites they crawled on large numbers of
servers and developed quite sophisticated algorithms to characterize the web
pages and sites. They then used their proprietary PageRank algorithm to decide
which pages to recommend for various searches. This process required very large
numbers of servers. The number of servers Google employs isn't public, but their
advertising revenue grew 275-fold
between 2002 and 2017. The size of their server farms presumably grew comparably.
Prior to 2000, corporate data centers tended to be housed in the firm's basement.
Dedicated "server farms" didn't emerge until the late '90s during the dotcom bubble.
Today more than half a million server farms around the world house many
hundreds of thousands of servers (the photo shows part of Facebook's server farm in Luleå, Sweden).
The digital world inexorably becomes more and more complex. It records our
emails, phone calls, eCommerce purchases, searches and social media interactions.
Facebook also analyzes all this data for hints about our buying preferences, our
opinions, and even the identities of those who appear in our photos. Google's many
server farms around the world crawl virtually all web pages, cataloging their content
so that Google can recommend the pages it judges to be what we are looking for when we
search the Web. Other servers at data centers owned by the likes of Amazon,
Switch, Microsoft, Twitter, and the NSA gather and store different sorts of data for
many known and unknown purposes. At least one firm
specializes in analyzing our political viewpoints and influencing our votes. There are
millions of servers that store, catalog, and make searchable information about people,
products, businesses, real estate, government activities, universities, weather, crops,
livestock, and almost anything else one can imagine (see
History of Computing).
Server farms handle the big-data side of the Web. The
Internet of Things (IoT)
deals with the small things: smart door locks for the home, wireless cameras,
smart electric plugs, smart AC and heater vents, wearable exercise monitors,
smart refrigerators, baby-cams, and even
smart pill bottles. Individually these devices have little power.
Yet collectively they provide more compute
power and Wi-Fi capability than we could have imagined a few years ago. Most of their
processing is wasted in idle loops -- so far. But one of these days various parties
will harness the collective IoT compute power for their own use with or without
permission. According to Wikipedia, the number of IoT devices increased 31% to 8.4
billion in 2017, and 30 billion devices are expected by 2020. They are
problematic because they are
very poorly protected from hackers.
They are sold with simple default passwords, and all too many people see no reason
to change those passwords or, if they do change the defaults, the new passwords
are all too often simple and easy to guess. So the hackers who seek to create
large botnets out of IoT devices have found the task easy. On October 12,
2016, a new botnet called Mirai appeared and nearly took down the Internet,
and the code for it was put out on the net. In January 2018, a Mirai variant
called IoTroop or Reaper was used to target
three large financial institutions.
In multicellular computing terms, botnets can best be thought of as Internet cancers.
Another facet of computation that poses problems for society is the rise of
cryptocurrencies. Venezuela, among other countries, is talking about making one
their national currency. Cryptocurrencies are
one use of blockchain ledgers. IBM, State Street Bank, Accenture, Fujitsu,
Intel and other heavyweights formed
The Hyperledger Fabric project
around the end of 2015 to formalize and harden the notion of blockchains.
They recommend isolating the ledger from the general cloud computing environment,
building a security container for the ledger to prevent unauthorized access, and
offering tamper-responsive hardware that can shut itself down if it detects someone
trying to hack a ledger. That is analogous to biological apoptosis (see below).
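The shape of that tamper response can be sketched in a few lines. The following is a hypothetical illustration, not Hyperledger Fabric code: a ledger wrapper that verifies a checkpoint digest on every access and permanently shuts itself down the moment its blocks no longer match.

```python
import hashlib

def ledger_digest(blocks):
    # Fold each block into a running SHA-256 digest, so changing
    # any block changes the final digest.
    digest = b""
    for block in blocks:
        digest = hashlib.sha256(digest + block).digest()
    return digest

class TamperResponsiveLedger:
    """Hypothetical sketch: refuse all further access once tampering
    is detected -- the software analog of apoptosis."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.checkpoint = ledger_digest(self.blocks)
        self.alive = True

    def read(self, index):
        if not self.alive:
            raise RuntimeError("ledger has shut itself down")
        if ledger_digest(self.blocks) != self.checkpoint:
            self.alive = False  # permanent: the 'cell' kills itself
            raise RuntimeError("tampering detected; shutting down")
        return self.blocks[index]
```

A real tamper-responsive system would do this in hardware and would also zero its keys; the point here is only the response pattern: detect, then self-destruct rather than limp along compromised.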
Finally, machine learning is becoming increasingly popular. It has already been
adopted by many well-heeled parties, and there is already specialized
machine-learning hardware.
The evolution of computing is similar to the evolution of other
complex systems -- biological, social, ecological, and economic
systems. In each of these domains, the elements become
increasingly more specialized and sophisticated, and they
interact with each other in ever more complex ways. From that
perspective, the similarities between biology and computing
are not coincidental.
Multicellular computing is already adopting four major
organizing principles of multicellular biological systems
because they help tame the spiraling problems of complexity and
out-of-control interactions in the Internet. They are:
- Multicellular systems support much richer functionality than
single-cell and single-computer systems. They do so through
collaboration among specialized cells. There are, for
example, about 250 specialized types of cells in humans.
Unspecialized cells such as many cancer cells are dangerous
because they don't "play well with others."
Specialization in computing is important for similar
reasons. We are finding that unspecialized general-purpose
computers, especially unprotected IoT systems with ARM processors,
are increasingly dangerous to multicellular computing systems.
- Cooperating cells or computers must communicate safely with
one another. The "meaning"
of cell-to-cell messages must be determined by the receiving
cell, not the sender. To that end, cells communicate
with each other via messenger molecules, never DNA. Similarly,
communication between computers in multicellular systems
relies increasingly upon message
passing. Here again, the receiver not the sender
determines the meaning of the messages. It is especially
dangerous for computers to transfer code from one machine to
another precisely because code predetermines the resulting
behavior of the receiving machine, and all too often contains
malware. Code endangers the health of the receiving machine
and hence the larger system of which it is a part.
- Messages orchestrate cooperation in real-time. Longer term
or more persistent collaboration requires longer-lived
messages that can be deposited in some structure where they
can later be encountered by others. This sort of messaging has
become known in the field of biology as stigmergy.
Analogously, stigmergy in
computing systems is supported by persistent external
data, e.g., in databases, and network connectivity structures.
Persistent data are especially important for organizing
cooperative computing in the Internet.
- Apoptosis (programmed cell death): Despite all
precautions, cells and computers do go awry. They may also
simply outlive their usefulness. Every healthy cell in a
multicellular organism is programmed to commit suicide if
its removal is in the best interest of the organism as a
whole. Cells that are infected by viruses or become cancerous
usually kill themselves unless the programmed cell death
mechanism itself is compromised. We are only recently learning
the importance of sacrificing
compromised computers for the health of the whole
multicellular computing system. The recent appearance of
the Internet of Things (IoT) and botnets exploiting IoT
devices creates, in effect, out-of-control computing cancer cells
that do not kill themselves (activate apoptosis) when compromised.
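The second principle above (messages, not code, with meaning fixed by the receiver) can be sketched as follows. Everything here, the message types and the handlers, is hypothetical; the point is that the receiver maps incoming plain data onto its own behaviors and simply refuses anything it does not recognize, rather than executing whatever arrives.

```python
import json

class Receiver:
    """Hypothetical receiving 'cell': it interprets incoming messages
    itself, mapping message types to its own handlers. The sender
    supplies only data, never code, so it cannot predetermine the
    receiver's behavior."""

    def __init__(self):
        self.inventory = {"widgets": 10}
        self.handlers = {           # the receiver decides what each
            "order": self.on_order, # message type means to it
            "query": self.on_query,
        }

    def on_order(self, body):
        shipped = min(body["count"], self.inventory["widgets"])
        self.inventory["widgets"] -= shipped
        return {"shipped": shipped}

    def on_query(self, body):
        return {"in_stock": self.inventory["widgets"]}

    def deliver(self, message):
        msg = json.loads(message)             # messages are plain data
        handler = self.handlers.get(msg["type"])
        if handler is None:                   # unknown messages are
            return {"error": "unknown type"}  # rejected, never executed
        return handler(msg.get("body", {}))
```

Contrast this with a receiver that runs code shipped by the sender (for example via `eval`): there the sender, not the receiver, dictates what happens, which is exactly the danger the principle warns against.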
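The stigmergy principle can likewise be sketched. In this hypothetical example, crawler agents never communicate directly; each leaves persistent marks in a shared store (a dict standing in for a database), and those marks steer later agents away from work that is already done.

```python
# Hypothetical sketch of stigmergy: agents coordinate only through
# marks left in a shared, persistent store, never by direct messages.
shared_store = {}  # url -> "claimed by ..." | "done by ..."

def crawl(agent_id, urls, store):
    """Claim and process every URL not yet marked in the store.
    The marks this agent leaves behind steer every later agent
    away from duplicated work."""
    processed = []
    for url in urls:
        if url in store:                       # another agent's mark:
            continue                           # skip this URL
        store[url] = f"claimed by {agent_id}"  # leave a mark first
        processed.append(url)                  # (pretend to fetch/index)
        store[url] = f"done by {agent_id}"     # upgrade mark when done
    return processed

urls = ["a.example", "b.example", "c.example"]
first = crawl("agent-1", urls, shared_store)
second = crawl("agent-2", urls, shared_store)  # finds everything marked
```

In a real system the store would be a database or a network connectivity structure, as the text notes; the essential property is only that the marks outlive the interaction that created them.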
These principles are not independent of one another,
both in life and in computing.
This site explores these principles in considerable detail --
more detail than most readers would want to absorb in one
sitting. It presents each principle in its biological context
and describes its benefits both for multicellular life and for
multicellular computing.
If you are impatient, you might want to skip right to the end
of the story and read the conclusions.
However, as with many a mystery novel, reading the last few
pages will tell you whodunit without telling you the most
interesting part...why. The conclusions may well not make much
sense without seeing how we get there.
The site map can help you navigate to
the various pages in an order that helps make sense of the story.
Evolution of Computing -- Last revised 9/19/2018