Principles for Coping with the Evolution of
Computing
Computers collaborate in the Internet
much the way cells collaborate in multicellular organisms and
the way organisms compete and collaborate in ecologies. What are
the parallels and what can we learn from them?
Single-celled organisms evolved into multicellular organisms a
billion years ago. Computing has made a similar transition in just
a few decades. In 1975, few computers communicated directly
with others and there was no Internet. Today tens of billions of
computers around the world exchange information at Internet
speeds. The role that interacting computers play has changed
dramatically as their costs dropped and their numbers exploded.
They entertain us, help us shop, help manage the lights in our
homes, help us communicate with and befriend each other, enhance
our memories, recognize our faces, 'understand' our speech, and
'talk' to us. Yet computers know not what they do, nor for whom
they do it, nor why!
As the digital world inexorably becomes more complex it encounters
problems common to all complex systems – problems already solved
in the evolution of living systems. This website explores the
challenges of complexity and discusses architectural solutions for
the problems inherent in increasingly complex systems.
In the late 1960s a handful of computers in universities and
research labs began to be connected in a persistent
high-speed network called
ARPANET. A
descendant of that network eventually became the Internet. In late
1990 Tim Berners-Lee served the first Web page; the Web went public
on the open Internet soon after and grew rapidly -- today there are
over 1.5 billion websites to choose from. An isolated computer is
now an oddity. Computers surround us: they are in our pockets and
purses, on our wrists, and in our cars, houses, and offices. Google,
Amazon, Microsoft, Yahoo, Baidu (China's Google-like equivalent)
and many other less well-known organizations crawl ("spider") the
web for various purposes. Google led the way in monetizing the data
thus gathered by selling ads. It stored the contents of all the
websites it crawled on large numbers of servers and developed
sophisticated algorithms to characterize web pages and sites. It
then used its proprietary PageRank algorithm to decide which pages
to recommend for a given search. Google is estimated to employ over
a million servers for such tasks.
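Google's production ranking system is proprietary, but the PageRank
iteration that Brin and Page published captures the core idea: a
page is important if important pages link to it. Below is a minimal
Python sketch; the tiny link graph is invented for illustration.

```python
# Minimal PageRank power iteration over a toy link graph.
# The graph is invented for illustration; a real crawl covers
# billions of pages.
links = {                                    # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}       # start from uniform ranks
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:          # each page passes its rank
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")            # "c" and "a" rank highest
```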
Prior to 2000, corporate data centers tended to be housed in the
firm's basement. Dedicated "server farms" didn't emerge until the
late '90s during the dotcom bubble.

Today servers are located in more than half a million server farms
around the world; Facebook, for example, operates a large server
farm in Luleå, Sweden. The digital world inexorably becomes more and more
complex. It records our emails, phone calls, eCommerce purchases,
searches, and social media interactions. Facebook analyzes such
data for hints about our buying preferences, our opinions, and even
the identities of the people who appear in our photos. Google's
many server farms around the world crawl virtually all web pages,
cataloging their content so that Google can recommend the pages it
judges most relevant to our searches. Other
servers at data centers owned by the likes of Amazon, Switch,
Microsoft, Twitter, and the NSA gather and store different sorts
of data for many known and unknown purposes. At least one,
Cambridge Analytica, specialized in analyzing individuals'
political viewpoints in order to influence their votes. There are
millions of servers that
store, catalog, and make searchable information about people,
products, businesses, real estate, government activities,
universities, weather, crops, livestock, and almost anything else
one can imagine (see
History of
Computing).
Server farms handle the big-data issues in the Web. The
Internet
of Things (IoT) deals with the small things: smart door
locks for the home, wireless cameras, smart electric plugs, smart
AC and heater vents, wearable exercise monitors, smart
refrigerators, baby-cams and even
smart pill bottles.
Yet collectively these devices provide more compute power and Wi-Fi
capability than we could have imagined a few years ago. Most of
their processing is wasted in idle loops -- so far. But various
parties seek to harness the collective IoT compute power for their
own use, with or without permission. The number of IoT devices
increased 31% to 8.4 billion in 2017, reached an estimated 14.2
billion in 2019, and was forecast to exceed 20 billion by 2020.
They are
problematic because they are
very
poorly protected from hackers. They ship with simple default
passwords, and all too many people see no reason to change them;
those who do too often choose new passwords that are just as easy
to guess. So hackers seeking to build large botnets out of IoT
devices have found easy pickings. In October 2016, a new botnet
called Mirai nearly took down the Internet, and its source code was
then released on the net. In January 2018, a Mirai variant called
IoTroop (also known as Reaper) was used to target three large
financial institutions. In multicellular computing
terms, botnets can best be thought of as Internet cancers.
Another facet of computation that poses problems for society is
the rise of
cryptocurrencies.
Venezuela, among other countries, has even experimented with a
national cryptocurrency. Cryptocurrencies are one use of blockchain
ledgers. IBM, State Street Bank, Accenture, Fujitsu, Intel, and
other heavyweights formed the Hyperledger project (home of
Hyperledger Fabric) around the end of 2015 to formalize and
harden the notion of blockchains. They recommend isolating the
ledger from the general cloud computing environment, building a
security container for the ledger to prevent unauthorized access,
and offering tamper-responsive hardware that can shut itself down
if it detects someone trying to hack a ledger. That is analogous
to biological apoptosis (see below).
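Hyperledger Fabric's real architecture is far richer, but a toy
hash-chained ledger shows why blockchains are tamper-evident and
what shutting down on tampering, the apoptosis analogy, can look
like in software. This is an illustrative sketch only; the Ledger
class and its records are invented:

```python
import hashlib
import json

class Ledger:
    """Toy append-only ledger: each entry commits to the hash of its
    predecessor, so altering any past entry breaks the whole chain.
    Illustrative sketch only, not Hyperledger Fabric's actual design."""

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"record": record, "prev": prev_hash},
                          sort_keys=True)
        self.entries.append({"record": record, "prev": prev_hash,
                             "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self):
        prev_hash = "0" * 64
        for entry in self.entries:
            body = json.dumps({"record": entry["record"], "prev": prev_hash},
                              sort_keys=True)
            if (entry["prev"] != prev_hash or
                    entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
                return False
            prev_hash = entry["hash"]
        return True

ledger = Ledger()
ledger.append({"from": "alice", "to": "bob", "amount": 5})
ledger.append({"from": "bob", "to": "carol", "amount": 2})

ledger.entries[0]["record"]["amount"] = 500   # simulate tampering
if not ledger.verify():
    # Software analog of tamper-responsive hardware: refuse to operate
    # once tampering is detected.
    raise SystemExit("ledger tampered with -- shutting down")
```

Because each hash commits to everything before it, an attacker must
recompute every subsequent hash to hide a change, and replicated
copies of the ledger would expose the attempt.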
Finally, machine learning is becoming increasingly popular and has
already been adopted by many well-heeled parties; specialized
machine-learning hardware now runs 24/7.
The evolution of computing is similar to the evolution of other
complex systems -- biological, social, ecological, and economic
systems. In each of these domains, the elements become
increasingly specialized and sophisticated, and they interact
with each other in ever more complex ways. From that perspective,
the
similarities
between biology and computing are not coincidental.
Multicellular computing is already adopting four major
organizing principles of multicellular biological systems
because they help tame the spiraling problems of complexity and
out-of-control interactions in the Internet. They are:
- Specialization
- Multicellular systems support much richer functionality than
single-cell and single-computer systems. They do so through
collaboration among specialized cells. There are, for
example, about 250 specialized types of cells in humans.
Unspecialized cells such as many cancer cells are dangerous
because they don't "play well with others."
Specialization in computing is important for similar
reasons. We are finding that unspecialized general-purpose
computers, especially unprotected IoT systems with ARM
processors, are increasingly dangerous to multicellular
computing systems.
- Messaging
- Cooperating cells or computers must communicate safely with
one another. The "meaning"
of cell-to-cell messages must be determined by the receiving
cell, not the sender. To that end, cells communicate
with each other via messenger molecules, never DNA. Similarly,
communication between computers in multicellular systems
relies increasingly upon message
passing. Here again, the receiver, not the sender,
determines the meaning of the messages (see the sketch after this
list). It is especially
dangerous for computers to transfer code from one machine to
another precisely because code predetermines the resulting
behavior of the receiving machine, and all too often contains
malware. Code endangers the health of the receiving machine
and hence the larger system of which it is a part.
- Stigmergy
- Messages orchestrate cooperation in real-time. Longer term
or more persistent collaboration requires longer-lived
messages that can be deposited in some structure where they
can be later encountered by others. This sort of messaging has
become known in the field of biology as stigmergy.
Analogously, stigmergy in computing systems is supported by
persistent external data, e.g., in databases and network
connectivity structures; the sketch after this list illustrates
this as well.
Persistent data are especially important for organizing cooperative
computing in the Internet.
- Apoptosis (programmed cell death)
- Despite all precautions, cells and computers do go awry. They may
also simply outlive their usefulness. Every healthy cell in a
multicellular organism is programmed to commit suicide if
its removal is in the best interest of the organism as a
whole. Cells that are infected by viruses or become cancerous
usually kill themselves unless the programmed cell death
mechanism itself is compromised. We are only beginning to learn
the importance of sacrificing
compromised computers for the health of the whole
multicellular computing system. The recent appearance of the
Internet of Things (IoT) and botnets exploiting IoT devices
creates, in effect, out-of-control computing cancer cells that
do not kill themselves (activate apoptosis) when compromised.
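A small sketch can make the messaging and stigmergy principles
concrete. In the hypothetical receiver below, the receiving side
decides what each message means by dispatching on message types it
understands, never by executing code the sender supplies, and a
persistent "blackboard" store stands in for stigmergy: markers
deposited by one agent are later encountered by another. All names
are invented for illustration:

```python
import json

# Stigmergy: a persistent shared structure in which messages outlive
# their senders. A dict stands in for a database here.
blackboard = {}

# Messaging: the receiver owns the mapping from message type to
# behavior. A sender can request, but never dictate, what happens.
def handle_store(msg):
    blackboard[msg["key"]] = msg["value"]   # deposit a persistent marker

def handle_lookup(msg):
    return blackboard.get(msg["key"])       # a later agent finds the marker

HANDLERS = {"store": handle_store, "lookup": handle_lookup}

def receive(raw_message):
    msg = json.loads(raw_message)           # treated as data, never executed
    handler = HANDLERS.get(msg.get("type"))
    if handler is None:
        return None                         # unknown requests are ignored
    return handler(msg)

# One agent deposits a marker; a different agent encounters it later.
receive('{"type": "store", "key": "food-source", "value": "north field"}')
print(receive('{"type": "lookup", "key": "food-source"}'))  # north field

# A hostile "message" carrying code is just inert text here:
receive('{"type": "exec", "payload": "os.system(...)"}')    # ignored
```

Note how the hostile message is harmless precisely because the
receiver treats everything it is sent as data to interpret, not
code to run.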
These principles
are not independent;
they are
deeply intertwined both in life and in computing.
This site explores these principles in considerable detail -- more
detail than most readers would want to absorb in one sitting. It
presents each principle in its biological context and describes
its benefits both for multicellular life and for computing.
If you are impatient, you might want to skip right to the end of
the story and read the
conclusions.
However, as with many a mystery novel, reading the last few pages
will tell you whodunit without telling you the most interesting
part: why. The conclusions may well not make much sense without
seeing how we get there.
The site map can help navigate to
the various pages in an order that helps make sense of the
story.
Evolution of Computing -- Last revised 10/21/2019