illusion of individuality

Every new emergent layer in evolution is built out of the previous one. In other words, living entities are like matryoshka dolls, made of layers and layers of living entities inside each other. (Yes, I believe that even atoms are in some sense alive. See the post Emergence of Life for more details.) This does not mean that a newly emerging layer preserves the layer below as is. On the contrary, it modifies the entities that it is generated out of in a significant way, just like technology is modifying us today by slowly automating all the recurring external human patterns away.

Here, the qualifiers recurring and external are important, because they also happen to define exactly the domain of science. What is unique (not recurring) or subjective (not external) can not be studied by science, and therefore can not be automated by technology.

As technology unfolds, it slowly exposes our true human core (i.e. what is unique and subjective), which is actually the only thing it will need for its steady-state survival at maturity. We should not fight against this trend. On the contrary, we should embrace and accelerate it by increasing our social flexibility. True, we may be losing jobs in droves, but in the long run technology makes all of us wealthier and healthier. (There is a lot of politics involved here of course, but you get what I mean. Just compare current living standards to those of a few hundred years ago.)

Remember, we are what animates technology and makes it adaptive. In other words, artificial general intelligence is already here. It is operating at a global scale through the multi-cloud layer and is composed of a myriad of artificial (special) intelligences which communicate through us. The dynamics is no different than your own mind being a society of smaller minds and your own genome being a society of smaller genomes.

The magic glue is always in the network. Smartness is always an emergent, social affair. In other words, there is no such thing as a general intelligence composed of a single node. Yes, we will have super intelligent robots in the future, but the prime source of their intelligence will always be the global multi-cloud layer. In other words, they will continuously tap into the entirety of our accumulated wisdom, which will keep evolving in the background.

Of course, the supreme complexity will be deftly hidden away, and it will look as if the entire intelligence resides within the individual robots themselves. The irony is that the robots too will believe in this illusion, just as we tend to mistakenly equate our minds with our consciousnesses. Remember, it took us thousands of years to even notice the bare existence of the unconscious. Even today we have no clue about its structure, although deep down we all feel that it somehow links us together in a mysterious fashion.

“We are like islands in the sea, separate on the surface but connected in the deep.”
- William James

This obviously takes us beyond the reach of science and into the territory of metaphysics. But that should not stop us from asking some fun questions!

  • The robot unconscious taps into the electromagnetic field. What field does the human unconscious tap into? Is vacuum not what we think it is?

  • Information is encoded into the electromagnetic field by the collective human consciousness. Whose collective consciousness is encoding information into this other field? Are cells not what we think they are?

digital vs physical businesses

In the first part, I will analyze how digital businesses and physical businesses are complementary to each other via the following dualities:

  1. Risk of Death vs Potential for Growth

  2. Controlling Supply vs Controlling Demand

  3. Scale Effects vs Network Effects

  4. Mind vs Body

  5. Borrowing Space vs Borrowing Time

In the second part, I will analyze how the rise of digital businesses against physical businesses is triggering the following trends:

  1. Culture is Shifting from Space to Time

  2. Progress is Accelerating

  3. Science is Becoming More Data-Driven

  4. Economy is Getting Lighter

  5. Power is Shifting from West to East

Duality 1: Risk of Death vs Potential for Growth

Since information is frictionless, every digital startup has a potential for fast growth. But since the same fact holds for every other startup as well, there is also a potential for a sudden downfall. That is why defensibility (i.e. ability to survive after reaching success) is often mentioned as the number one criterion by the investors of such companies.

Physical businesses face the inverse reality: They are harder to grow but easier to defend, due to factors like high barriers to entry, limited real estate, hard-to-set-up distribution networks etc. That is why the competitive landscape is the most scrutinized issue by the investors of such companies.

Duality 2: Controlling Supply vs Controlling Demand

In the physical world, limited by scarcity, economic power comes from controlling supply; in the digital world, overwhelmed by abundance, economic power comes from controlling demand.
- Ben Thompson - Ends, Means and Antitrust

Although Ben’s point is quite clear, it is worth expanding on it a little.

In the physical world, supply is much more limited than demand and therefore whoever controls the supply wins.

  • Demand. Physical consumption is about hoarding in space, which is for all practical purposes infinite. Since money is digital in nature, I can buy any object in any part of the world at the speed of light, and that object will immediately become mine.

  • Supply. Extracting new materials and nurturing new talents take a lot of time. In other words, in the short run, supply of physical goods is severely limited.

In the digital world, demand is much more limited than supply and therefore whoever controls the demand wins:

  • Demand. Digital consumption is information based and therefore cognitive in nature. Since one can pay attention to only so many things at once, it is restricted mainly to the time dimension. For instance, for visual information, daily screen time is the limiting factor on how much can be consumed.

  • Supply. Since information travels at the speed of light, every bit in the world is only a touch away from you. Hence, in the short run, supply is literally unlimited.

Duality 3: Scale Effects vs Network Effects

The physical economy is dominated by geometric dynamics since distances matter. (Keyword here is space.) The digital economy on the other hand is information based, and information travels at the speed of light, which is for all practical purposes infinite. Hence distances do not matter, only connectivities do. In other words, the dynamics is topological, not geometric. (Keyword here is network.)

Side Note: Our memories too work topologically. We remember the order of events (i.e. temporal connectivity) easily but have a hard time situating them in absolute time. (Often we just remember the dates of significant events and then try to date everything else relative to them.) But while we are living, we focus on the continuous duration (i.e. the temporal distance), not the discrete events themselves. That is why the greater the number of things we are preoccupied with and the less we can feel the duration, the more quickly time seems to pass. In memory though, the reverse happens: Since the focus is on events (everything else is cleared out!), the greater the number of events, the less quickly time seems to have passed.

This nicely ties back to the previous discussion about defensibility. Physical businesses are harder to grow because that is precisely how they protect themselves. They reside in space and scale effects help them make better use of time through efficiency gains. Digital businesses on the other hand reside in time and network effects help them make better use of space through connectivity gains. Building protection is what is hard and also what is valuable in each case.

Side Note: Just as economic value continuously trickles down to the space owners (i.e. land owners) in the physical economy, it trickles down to “time owners” in the digital economy (i.e. companies who control your attention throughout the day).

Scale does not correlate with defensible value in the digital world, just as connectivity does not correlate with defensible value in the physical world. Investors are perennially confused about this, since scale is so easy to see and our reptilian brains are so easily impressed by it.

Of course, at the end of the day, all digital businesses thrive on physical infrastructures and all physical businesses thrive on digital infrastructures. This leads to an interesting mixture.

  • As a structure grows, it suffers from internal complexities which arise from the increased interdependencies among an increased number of parts. (A small numeric sketch of this quadratic growth follows below.)

  • Similarly, greater connectivity requires greater internal scale. In fact, scalability is a huge challenge for fast-growing digital businesses.

Hence, physical businesses thrive on scale effects but suffer from negative internal network effects (which are basically software problems), and digital businesses thrive on network effects but suffer from negative internal scale effects (which are basically hardware problems). In other words, these two types of businesses are dependent on each other to be able to generate more value.

  • As physical businesses get better at leveraging software solutions to manage their complexity issues, they will break scalability records.

  • As digital businesses get better at leveraging hardware solutions to manage their scalability issues, they will break connectivity records.
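
To make the “internal complexities” point above concrete, here is a back-of-the-envelope sketch (purely illustrative, not from the original post): among n interacting parts there are n(n-1)/2 potential pairwise interdependencies, so the internal coordination burden grows quadratically while size grows only linearly.

```python
def potential_links(n_parts: int) -> int:
    """Pairwise interdependencies among n parts grow quadratically: n * (n - 1) / 2."""
    return n_parts * (n_parts - 1) // 2

for n in (10, 100, 1000):
    # 45, 4950, 499500: a 100x larger structure has ~10000x more links to manage.
    print(n, potential_links(n))
```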

Note that we have now ventured beyond the world of economics and entered the much more general world of evolutionary dynamics. Time has two directional arrows:

  • Complexity. Correlates closely with size. Increases over time, as in plants being more complex than cells.

  • Connectivity. Manifests itself as “entropy” at the lowest complexity level (i.e. physics). Increases over time, as evolutionary entities become more interlinked.

Evolution always pushes for greater scale and connectivity.

Side Note: "The larger the brain, the larger the fraction of resources devoted to communications compared to computation." says Sejnowski. Many scientists think that evolution has already reached an efficiency limit for the size of the biological brain. A great example of a digital entity (i.e. the computing mind) whose growing size is limited by the accompanying growing internal complexity which manifests itself in the form of internal communication problems.

Duality 4: Mind vs Body

All governments desire to increase the value of their economies but also feel threatened by the evolutionary inclination of economic units to push for greater scale and connectivity. Western governments (e.g. the US) tend to be more sensitive about size. They monitor and explicitly break up physical businesses that cross a certain size threshold. Eastern governments (e.g. China) on the other hand tend to be more sensitive about connectivity. They monitor and implicitly take over digital businesses that cross a certain connectivity threshold. (Think of the strict control of social media in China versus the supreme freedom of all digital networks in the US.)

Generally speaking, the Western world falls on the right-hand side of the mind-body duality, while the Eastern world falls on the left-hand side.

  • As mentioned above, Western governments care more about the physical aspects of reality (like size) while Eastern governments care more about the mental aspects of reality (like connectivity).

  • Western sciences equate the mind with the brain, and thereby treat software as hardware. Eastern philosophies are infused with panpsychic ideas, ascribing consciousness (i.e. mind-like properties) to the entirety of the universe, and thereby treat hardware as software.

We can think of the duality between digital and physical businesses as the social version of the mind-body duality. When you die, your body gets recycled back into the ecosystem. (This is no different than the machinery inside a bankrupt factory getting recycled back into the economy.) Your mind on the other hand simply disappears. What survive are the impressions you made on other minds. Similarly, when digital businesses die, they leave behind only memories in the form of broken links and cached pages, and therefore need “tombstones” to be remembered. Physical businesses on the other hand leave behind items which continue to circulate in the second-hand markets and buildings which change hands to serve new purposes.

Duality 5: Borrowing Space vs Borrowing Time

Banking too is moving from space to time dimension, and this is happening in a very subtle way. Yes, banks are becoming increasingly more digital, but this is not what I am talking about at all. Digitalized banks are more efficient at delivering the same exact services, continuing to serve the old banking needs of the physical economy. What I am talking about is the unique banking needs of the new digital economy. What do I mean by this?

Remember, physical businesses reside in space and scale effects help them make better use of time through efficiency gains. Digital businesses on the other hand reside in time and network effects help them make better use of space through connectivity gains. Hence, their borrowing needs are polar opposite: Physical businesses need to borrow time to accelerate their defensibility in space, while digital businesses need to borrow space to accelerate their defensibility in time. (What matters in the long run is only defensibility!)

But what does it mean to borrow time or space?

  • Lending time is exactly what regular banks do. They give you money and charge you an interest rate, which can be viewed as the cost of moving (discounting) the money you will be making in the future to now. In other words, banks are in the business of creating contractions in the time dimension, not unlike creating wormholes through time. (A minimal numeric sketch follows right after this list.)

  • Definition of space for a digital company depends on the network it resides in. This could be a specific network of people, businesses etc. A digital company does not defend itself by scale effects, it defends itself by network effects. Hence its primary goal is to increase the connectivity of its network. In other words, a digital company needs creation of wormholes through space, not through time. Whatever facilitates further stitching of its network satisfies its “banking needs”.
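
To make the first bullet concrete, here is a minimal numeric sketch of what borrowing time means, i.e. discounting a future cash flow back to the present. (The 8% rate and 5-year horizon are illustrative assumptions, not figures from the post.)

```python
def present_value(future_cash: float, annual_rate: float, years: float) -> float:
    """Discount a future payment back to today; the rate is the price of the 'wormhole through time'."""
    return future_cash / (1.0 + annual_rate) ** years

# Illustrative numbers only: 1000 earned 5 years from now, discounted at 8% per year.
print(round(present_value(1000, 0.08, 5), 2))   # ~680.58 today
```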

Bankers of the digital economy are the existing deeply-penetrated networks like Alibaba, WeChat, LinkedIn, Facebook, Amazon etc. What masquerades as a marketing expense for a digital company to rent the connectivity of these platforms is actually in part a “banking” expense, not unlike the interest payments made to a regular bank.

Trend 1: Culture is Shifting from Space to Time

Culturally we are moving from geometry to topology, more often deploying topological rather than geometric language while narrating our lives. We meet our friends in online networks rather than physical spaces.

The correlation between the rise of the digital economy and the rise of the experience economy (and its associated cultural offshoots like the hipster and decluttering movements) is not a coincidence. Experiential goods (not just those that are information-based) exhibit the same dynamics as digital goods. They are completely mental and reside in the time dimension.

Our sense of privacy too is shifting from the space dimension to the time dimension. We are growing less sensitive about sharing objects and more sensitive about sharing experiences. We are participating in a myriad of sharing economies, but also becoming more ruthless about time optimization. (What is interpreted as a general decline in attention span is actually a protective measure erected by the digital natives, forcing everyone to cut their narratives short.) Increasingly we are spending less time with people, although we look more social from the outside since we share so many objects with each other.

Our sense of aesthetics has started to incorporate time rather than banish it. We leave surfaces unfinished and prefer using raw and natural-looking rather than polished and new-looking materials. Everyone has become a wabi-sabi fan, preferring to buy stuff that time has taken (or seems to have taken) its toll on.

Even physics is caught in the Zeitgeist. The latest theories all claim that time is fundamental and space is emergent. The popular opinion among physicists used to be the opposite. Einstein had put the final nail in the coffin by completely spatializing time into what is called spacetime, an unchanging four-dimensional block universe. He famously said “the distinction between past, present, and future is only a stubbornly persistent illusion.”

Trend 2: Progress is Accelerating

As economies and consumption patterns shift to the time dimension, we feel more overwhelmed by the demands on our time, and life seems to progress at a faster rate.

Let us dig deeper into this seemingly trivial observation. First recall the following two facts:

  1. In a previous blog post, I had talked about the effect of aging on the perception of time. As you accumulate more experience and your library of cognitive models grows, you become more adept at chunking experience and shifting into an automatic mode. What used to be processed consciously now starts getting processed unconsciously. (This is no different than stable software patterns eventually trickling down and hardening to become hardware patterns.)

  2. In a previous blog post, I had talked about how the goal of education is to learn how not to think, not how to think. In other words, “chunking” is the essence of learning.

Combining these two facts we deduce the following:

  • Learning accelerates perception of time.

This observation in turn is intimately related to the fact that progress itself is accelerating, which is the very trend under discussion here.

What exactly is this relation?

Remember, at the micro-level, both learning and progress suffer from the diminishing returns of S-curves. However, at the macro-level, both overcome these limits via sheer creativity and manage to stack S-curves on top of each other to form a (composite) exponential curve that shoots off to infinity.
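
Here is a minimal sketch of that stacking (the midpoints, ceilings and growth rates below are illustrative assumptions): each individual S-curve saturates on its own, yet the sum of successively larger S-curves traces a roughly exponential envelope.

```python
import math

def s_curve(t, midpoint, ceiling, rate=1.0):
    """A single S-curve (logistic): slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked(t, n_curves=6):
    """Stack S-curves: each new paradigm arrives 10 time units later with double the ceiling."""
    return sum(s_curve(t, midpoint=5 + 10 * i, ceiling=2.0 ** i) for i in range(n_curves))

for time in (10, 20, 30, 40, 50):
    # Each individual curve flattens out (diminishing returns), yet the stacked
    # total roughly doubles every 10 units: an exponential macro-level envelope.
    print(time, round(stacked(time), 2))   # ~1.0, 3.0, 7.0, 15.1, 31.1
```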

This structural similarity is not a coincidence: Progress is simply the social version of learning. However, progress happens out in the open, while learning takes place internally within each of our minds and therefore can not be seen. That is why we can not see learning in time, but nevertheless can feel its acceleration by reflecting it off time.

Side Note: For those of you who know about Ken Wilber’s Integral Theory, what we found here is that “learning” belongs to the upper-left quadrant while “progress” belongs to the lower-right quadrant. The infinitary limiting point is often called Nirvana in personal learning and Singularity in social progress.

Recall how we framed the duality between digital and physical businesses as the social version of the mind-body duality. True, from the individual’s perspective, progress seems to happen out in the open. However, from the perspective of the mind of the society (represented by the aggregation of all things digital), progress “feels” like learning.

Hence, going back to the beginning of this discussion, your perception of time accelerates for two dual reasons:

  1. Your data processing efficiency increases as you learn more.

  2. Data you need to process increases as society learns more.

Time is about change. Perception of time is about processed change, and how much change your mind can process is a function of both your data processing efficiency (which defines your bandwidth) and the speed of data flow. (You can visualize bandwidth as the diameter of a pipe.) As society learns more (i.e. progresses further), you become bombarded with more change. Thankfully, as you learn more, you also become more capable of keeping up with change.

There is an important caveat here though.

  1. Your mind loses its plasticity over time.

  2. The type of change you need to process changes over time.

The combination of these two facts is very problematic. Data processing efficiency is sustained by the cognitive models you develop through experience, based on past data sets. Hence, their continued efficiency is guaranteed only if the future is similar to the past, which of course is increasingly not the case.

As mentioned previously, the exponential character of progress stems from the stacking of S-curves on top of each other. Each new S-curve represents a discontinuous creative jump, a paradigm shift that requires a significant revision of existing cognitive models. As progress becomes faster and life expectancy increases, individuals encounter a greater number of such challenges within their lifetimes. This means that they are increasingly at risk of being left behind due to the plasticity of their minds decreasing over time.

This is exactly why the elderly enjoy nostalgia and wrap themselves inside time capsules like retirement villages. Their desire to stop time creates a demographic tension that will become increasingly more palpable in the future, as the elderly become increasingly more irrelevant while still clinging onto their positions of power and keeping the young at bay.

Trend 3: Science is Becoming More Data-Driven

The rise of the digital economy can be thought of as the maturation of the social mind. The society as a whole is aging, not just us. You can tell this also from how science is shifting from being hypothesis-driven to being data-driven, thanks to digital technologies. (Take a look at the blog post I have written on this subject.) The social mind is moving from conscious thinking to unconscious thinking, becoming more intuitive and getting wiser in the process.

Trend 4: Economy is Getting Lighter

As software is taking over the world, information is being infused into everything and our use of matter is getting smarter.

Automobiles weigh less than they once did and yet perform better. Industrial materials have been replaced by nearly weightless high-tech know-how in the form of plastics and composite fiber materials. Stationary objects are gaining information and losing mass, too. Because of improved materials, high-tech construction methods, and smarter office equipment, new buildings today weigh less than comparable ones from the 1950s. So it isn’t only your radio that is shrinking, the entire economy is losing weight too.

Kevin Kelly - New Rules for the New Economy (Pages 73-74)

Energy use in the US has stayed flat despite enormous economic growth. We now make less use of atoms, and the share of tangibles in total equity value is continuously decreasing. As R. Buckminster Fuller said, our economies are being ephemeralized thanks to the technological advances which are allowing us to do "more and more with less and less until eventually [we] can do everything with nothing."

This trend will probably, in a rather unexpected way, ease the global warming problem. (Remember, it is the sheer mass of what is being excavated and moved around that is responsible for the generation of greenhouse gases.)

Trend 5: Power is Shifting from West to East

Now I will venture even further and bring religion into the picture. There are some amazing historical dynamics at work that can be recognized only by elevating ourselves and looking at the big picture.

First, let us take a look at the Western world.

  • Becoming. The West chose a pragmatic, action-oriented attitude towards Becoming and did not directly philosophize about it.

  • Being. Western religions are built on the notion of Being. Time is deemed to be an illusion and God is thought of as a static all-encompassing Being, not too different from the entirety of Mathematics. There is believed to be an order behind the messy unfolding of Becoming, an order that is waiting to be discovered by us. It is with this deep conviction that Newton managed to discover the first mathematical formalism to predict natural phenomena. There is nothing in the history of science that is comparable to this achievement. Only a religious zeal could have generated the sort of tenacity that is needed to tackle a challenge of this magnitude.

This combination of applying intuition to Becoming and reason to Being eventually led to a meteoric rise in technology and economy.

Side Note: Although an Abrahamic religion itself, Islam did not fuel a similar meteoric rise, because it was practiced more dogmatically. Christianity on the other hand reformed itself into a myriad of sub-religions. Although not unlimited, there was enough intellectual freedom to allow people to seek unchanging patterns in reality, signs of Being within Becoming. Islam on the other hand persecuted any such aspirations. Even allegorical paintings about Being were not allowed.

The East did the opposite, applying reason to Becoming and intuition to Being.

  • Becoming. The East based its religion in Becoming, and this instilled a fundamental suspicion against any attempts to mathematically model the unfolding reality or seek absolute knowledge. Of course, reasoning about Becoming without an implicit belief in unchanging absolutes is not an easy task. In fact, it is so hard that one has no choice but to be imprecise and poetic, and of course that is exactly what Eastern religions did. (Think of Taoism.)

  • Being. How about applying intuition to Being? How can you go about experiencing Being directly, through the “heart” so to speak? Well, through non-verbal silent meditation of course! That is exactly what Eastern religions did. (Think of Buddhism.)

Why could the East not reason directly about Becoming in a formal fashion, the way the West reasoned directly about Being using mathematics? Remember Galileo saying "Mathematics is the language in which God has written the universe." What would have been the corresponding statement for the East? In other words, what is the formal language of Becoming? It is computer science of course, which was born out of Mathematics in the West around the 1930s.

Now you understand why the West was so lucky. Even if the East had managed to discover computer science first, it would have been useless for understanding Becoming, because without the actual hardware to run simulations, you can not create computational models. A model needs to be run on something. It is not like a math theory in a book, waiting for you to play with it. Historically speaking, mathematics had to come first, because it is the cheaper, more basic technology. All you need is literally a pen, a paper and a trash bin.

Side Note: Here is a nerdy joke for you… The dean asks the head of the physics department to see him. “Why are you using so many resources? All those labs and experiments and whatnot; this is getting expensive! Why can’t you be more like mathematicians – they only need pens, paper, and a trash bin. Or philosophers – they only need pens and paper!”

But now is different. We have tremendous amounts of cheap computation and storage at our disposal, allowing us to finally crack the language of Becoming. Our entire economy is shifting from physical to digital, and our entire culture is shifting from space to time. An extraordinary period indeed!

It was never a coincidence that Chinese mathematicians chose to work in (and subsequently dominated) statistics, the most practical field within mathematics. (They are culturally oriented toward Becoming.) Now all these statisticians are turning into artificial intelligence experts while the West is still being paranoid about the oncoming Singularity, the exponential rise of AI.

Why have the Japanese always loved robots while the West has always been afraid of them? Why is the adoption of digital technologies happening faster in the East? Why are the kids and their parents in the East less worried about being locked into digital screens? As we elaborated above, the answer is metaphysical. Differences in metaphysical frameworks (often inherited from religions) are akin to the hard-to-notice (but exceptionally consequential) differences in the low-level code sitting right above the hardware.

Now guess who will dominate the new digital era? Think of the big picture. Do not extrapolate from recent past, think of the vast historical patterns.

I believe that people are made equal everywhere and in the long run whoever is more zealous wins. The East is more zealous about Becoming than the West, and therefore will sooner or later dominate the digital era. Our kids will learn their languages and find their religious practices more attractive. (Meditation is already spreading like wildfire.) What is “cool” will change, and all these things will happen effortlessly in a mindless fashion, due to the fundamental shift in Zeitgeist and the strong structural forces of economics.

Side Note: Remember, in Duality 4, we had said that the East has an intrinsic tendency to regulate digital businesses rather than physical businesses. And here we just claimed that the East has an intrinsic passion for building digital businesses rather than physical businesses. Combining these two observations, we can predict that the East will unleash both greater energy and greater restraint in the digital domain. This actually makes a lot of sense, and is in line with the famous marketing slogan of the tyre manufacturing company Pirelli: “Power is Nothing Without Control”

Will the pendulum eventually swing back? Will the cover pages again feature physical businesses as they used to do a decade ago? The answer is no. Virtualization is one of the main trends in evolution. Units of evolution are getting smarter and becoming increasingly more governed by information dynamics rather than energy dynamics. (Information is substrate independent. Hence the term “virtualization”.) Nothing can stop this trend, barring some temporary setbacks here and there.

It seems like the West has only two choices in the long run:

  1. It can go through a major religious overhaul and adopt a Becoming-oriented interpretation of Christianity, like that of Teilhard de Chardin.

  2. It can continue as is, and be remembered as the civilization that dominated the short intermediary period which began with the birth of mathematical modeling and ended with the birth of computational modeling. (Equivalently, one could say that the West dominated the industrial revolution and the East will dominate the digital revolution.)


If you liked this post, you will probably enjoy the older post Innovative vs Classical Businesses as well. (Note that digital does not mean innovative and physical does not mean classical. You can have a classical digital or an innovative physical business.)

pain and learning

FAAH is a protein that breaks down anandamide, also known as the “bliss molecule,” which is a neurotransmitter that binds to cannabinoid receptors. These are some of the same receptors that are activated by marijuana. With less FAAH activity, this patient was found to have more circulating levels of anandamide, which may explain her resistance to feeling pain.

... Dr. James Cox, another author and senior lecturer at the Wolfson Institute for Biomedical Research at University College London, said, “Pain is an essential warning system to protect you from damaging and life-threatening events.” Another disadvantage to endocannabinoids and their receptor targets is that poor memory and learning may be unwanted byproducts. Researchers said the Scottish woman reported memory lapses, which mirrors what is seen in mice missing the FAAH gene.

Jacquelyn Corley - The Case of a Woman Who Feels Almost No Pain Leads Scientists to a New Gene Mutation

Pain is needed to register what is learned. As they say, no pain no gain.

You can easily tell that you are not learning much if everything is flowing too smoothly. You take notice only upon encountering the unexpected and the unexpected is painful.

I advise mature students to stay away from well-written textbooks. Reading them is like driving on a wide and empty highway. Typos keep you alert, logical gaps sharpen your mind and bad arguments force you to generate new ideas. You should generally make the reading process as hard for yourself as possible.

Educational progress can be achieved by making either the content or the environment more challenging. If you can perform well under constraints, you will perform even better when the environment normalizes.


Engagement enhances learning not because it increases focus but because it increases grit. Struggle is necessary. If the teaching is not engaging, the student will more easily give up on the struggle. The goal is not to eliminate the struggle.

The more confident a learner is of their wrong answer, the better the information sticks when they subsequently learn the right answer. Tolerating big mistakes can create the best learning opportunities.

David Epstein - Range (Page 86)

So the harder you fall the better. The more wrong you turn out to be, the more unforgettable the experience will be. As they say, never waste a good crisis.

People usually go into defensive mode when their internal reality clashes with the external reality. That is basically why persuasion is such a hard art form to master. The radicalized easily become even more radicalized when you try to lay a convincing path to moderation.

Of course, there are times when you need to close up, refuse to learn and stick with your beliefs. The world is complex, situations are multi-faceted, and refutations are never really that clear. In some sense, every principle looks stupid in certain contexts. The principled man knows this and nevertheless takes the risk, because he thinks that looking stupid sometimes is better than looking like an amorphous mass of jelly all the time. Someone who is constantly learning and therefore constantly in revision mode runs the danger of becoming jelly-like. Sometimes one may need to prefer the pain of resisting to the pain of learning.


The essence of the neuromatrix theory of pain is that chronic pain is more a perception than a raw sensation, because the brain takes many factors into account to determine the extent of danger to the tissues. Scores of studies have shown that along with assessing damage, the brain, when developing our subjective experience of pain perception, also assesses whether action can be taken to diminish the pain, and it develops expectations as to whether this damage will improve or get worse. The sum total of these assessments determines our expectation as to our future, and these expectations play a major role in the level of pain we will feel. Because the brain can so influence our perception of chronic pain, Melzack conceptualized it as more of "an output of the central nervous system.”

Norman Doidge - The Brain’s Way of Healing (Page 10)

Pain is not an objective factor. As with everything else, it is gauged in an anticipatory manner by the mind. If you implicitly or explicitly believe that the associated costs will be greater, your pain will be greater.

Since pain is necessary for learning, this means that learning too is done in an anticipatory manner. That is why proper coaching is so essential. The student needs to have some idea about what he desires for the future so that his cost function becomes more well-defined.

When one has no expectation from the future, one is essentially dead and floating, and has reverted back to basic-level survival mode. You need to make yourself susceptible to higher forms of pain. Some of the greatest minds I have met had mastered the art of getting mad and pissed-off. They were extremely passionate about some subject and had cultivated an exceptional level of emotional sensitivity in that area.

emergence of life

Cardiac rhythm is a good example of a network that includes DNA only as a source of protein templates, not as an integral part of the oscillation network. If proteins were not degraded and needing replenishment, the oscillation could continue indefinitely with no involvement of DNA...

Functional networks can therefore float free, as it were, of their DNA databases. Those databases are then used to replenish the set of proteins as they become degraded. That raises several more important questions. Which evolved first: the networks or the genomes? As we have seen, attractors, including oscillators, form naturally within networks of interacting components, even if these networks start off relatively uniform and unstructured. There is no DNA, or any equivalent, for a spiral galaxy or for a tornado. It is very likely, therefore, that networks of some kinds evolved first. They could have done so even before the evolution of DNA. Those networks could have existed by using RNA as the catalysts. Many people think there was an RNA world before the DNA-protein world. And before that? No one knows, but perhaps the first networks were without catalysts and so very slow. Catalysts speed-up reactions. They are not essential for the reaction to occur. Without catalysts, however, the processes would occur extremely slowly. It seems likely that the earliest forms of life did have very slow networks, and also likely that the earliest catalysts would have been in the rocks of the Earth. Some of the elements of those rocks are now to be found as metal atoms (trace elements) forming important parts of modern enzymes.

Noble - Dance to the Tune of Life (Pages 83, 86)

Darwin unlocked evolution by understanding its slow nature. (He was inspired by the then-recent geological discoveries indicating that water, given enough time, can carve out entire canyons.) Today we are still under the influence of a similar pre-Darwinian bias. Just as we were biased in favor of fast changes (and could not see the slow moving waves of evolution), we are biased in favor of fast entities. (Of course, what is fast or slow is defined with respect to the rate of our own metabolisms.) For instance, we get surprised when we see a fast-forwarded video of growing plants, because we equate life with motion and regard slow moving life forms as inferior.

Evolution favors the fast, and therefore life is becoming faster at an increasingly faster rate. Think of catalyzed reactions, myelinated neurons etc. Replication is another such accelerator technology. Although we tend to view it as a must-have quality of life, what is really important for the definition of life is repeating “patterns”, and such patterns can emerge without any replication mechanisms. In other words, what matters is persistence. Replication mechanisms speed up the evolution of new forms of persistence. That is all. Let me reiterate: Evolution has only two ingredients, constant variation and constant selection. (See the Evolution as a Physical Theory post.) Replication is not fundamental.

Unfortunately most people still think that replicators came first and led to the emergence of functional (metabolic) networks later, although this order is extremely unlikely since replicators have an error-correction problem and need supportive taming mechanisms (e.g. metabolic networks) right from the start.

In our present state of ignorance, we have a choice between two contrasting images to represent our view of the possible structure of a creature newly emerged at the first threshold of life. One image is the replicator model of Eigen, a molecular structure tightly linked and centrally controlled, replicating itself with considerable precision, achieving homeostasis by strict adherence to a rigid pattern. The other image is the "tangled bank" of Darwin, an image which Darwin put at the end of his Origin of Species to make vivid his answer to the question, What is Life?, an image of grasses and flowers and bees and butterflies growing in tangled profusion without any discernible pattern, achieving homeostasis by means of a web of interdependences too complicated for us to unravel.

The tangled bank is the image which I have in mind when I try to imagine what a primeval cell would look like. I imagine a collection of molecular species, tangled and interlocking like the plants and insects in Darwin's microcosm. This was the image which led me to think of error tolerance as the primary requirement for a model of a molecular population taking its first faltering steps toward life. Error tolerance is the hallmark of natural ecological communities, of free market economies and of open societies. I believe it must have been a primary quality of life from the very beginning. But replication and error tolerance are naturally antagonistic principles. That is why I like to exclude replication from the beginnings of life, to imagine the first cells as error-tolerant tangles of non-replicating molecules, and to introduce replication as an alien parasitic intrusion at a later stage. Only after the alien intruder has been tamed, the reconciliation between replication and error tolerance is achieved in a higher synthesis, through the evolution of the genetic code and the modern genetic apparatus.

The modern synthesis reconciles replication with error tolerance by establishing the division of labor between hardware and software, between the genetic apparatus and the gene. In the modern cell, the hardware of the genetic apparatus is rigidly controlled and error-intolerant. The hardware must be error-intolerant in order to maintain the accuracy of replication. But the error tolerance which I like to believe inherent in life from its earliest beginnings has not been lost. The burden of error tolerance has merely been transferred to the software. In the modern cell, with the infrastructure of hardware firmly in place and subject to a strict regime of quality control, the software is free to wander, to make mistakes and occasionally to be creative. The transfer of architectural design from hardware to software allowed the molecular architects to work with a freedom and creativity which their ancestors before the transfer could never have approached.

Dyson - Infinite in All Directions (Pages 92-93)

Notice how Dyson frames replication mechanisms as stabilizers allowing metabolic networks to take even further risks. In other words, replication not only speeds up evolution but also enlarges the configuration space for it. So we see not only more variation per second but also more variation at any given time.

Going back to our original question…

Life was probably unimaginably slow at the beginning. In fact, such life forms are probably still out there. Are spiral galaxies alive for instance? What about the entire universe? We may be just too local and too fast to see the grand patterns.

As Noble points out in the excerpt above, our bodies contain catalyst metals which are remnants of our deep past. Those metals were forged inside stars far away from us and shot across space via supernova explosions. (This is how all the heavy atoms in the universe were formed.) In other words, they used to be participants in vast-scale metabolic networks.

In some sense, life never emerged. It was always there to begin with. It is just speeding up over time, and thereby the life forms of today are becoming blind to the life forms of deep yesterdays.

It is really hard not to be mystical about all this. Have you ever felt bad about disrupting repeating patterns for instance, no matter how physical they are? You can literally hurt such patterns. They are the most embryonic forms of life, some of which are as old as those archaic animals who still hang around in the deep oceans. Perhaps we should all work a little on our artistic sensitivities which would in turn probably give rise to a general increase in our moral sensitivities.


How Fast Will Things Get?

Life is a nested hierarchy of complexity layers, and the number of these layers increases over time. We are already forming many layers above ourselves, the most dramatic of which is the entirety of our technological creations, namely what Kevin Kelly calls the Technium.

Without doubt, we will look pathetically slow to the newly emerging electronic forms of life. Just as we have a certain degree of control over the slow-moving plants, they too will harvest us for their own good (while also needing us). (This is already happening as we are becoming more and more glued to our screens.)

But how much faster will things eventually get?

According to the generally accepted theories, our universe started off with a big bang and went through a very fast evolution that resulted in a sudden expansion of space. While physics has since been slowing down, biology (including new electronic forms) is picking up speed at a phenomenal rate.

Of all the sustainable things in the universe, from a planet to a star, from a daisy to an automobile, from a brain to an eye, the thing that is able to conduct the highest density of power - the most energy flowing through a gram of matter each second - lies at the core of your laptop.

Kelly - What Technology Wants (Page 59)

Evolution seems to be taking us to a very strange end, an end populated by life forms exhibiting features very much like those of the beginning states of physics: extreme speed and density. (I had brought up this possibility at the end of the Evolution as a Physical Theory post as well.)

Of course, flipping this logic, the physical background upon which life is currently unfolding is probably alive as well. I personally believe that this indeed is the case. To understand what I mean, we will first need to make an important conceptual clarification and then dive into Quantum Mechanics.



Autonomy as the Flip-Side of Control

Autonomy and control are two sides of the same coin, just like one man's freedom fighter is always another man's terrorist. In particular, what we can not exert any control over looks completely autonomous to us.

But how do you measure autonomy?

Firstly, notice that autonomy is a relative concept. In other words, nothing can be autonomous in and of itself. Secondly, the degree of autonomy correlates with the degree of unanticipatability. For instance, something will look completely autonomous to you only if you can not model its behavior at all. But what would such behavior actually look like, any guesses? Yes, that is right, it would look completely random.

Random often means inability to predict... A random series should show no discernible pattern, and if one is perceived then the random nature of the series is denied. However, the inability to discern a pattern is no guarantee of true randomness, but only a limitation of the ability to see a pattern... A series of ones and noughts may appear quite random for use as a sequence against which to compare the tossing of a coin, head equals one, tails nought, but it also might be the binary code version of a well known song and therefore perfectly predictable and full of pattern to someone familiar with binary notation.

Shallis - On Time (Pages 122-124)

The fact that randomness is in the eye of the beholder (and that absolute randomness is an ill-defined notion) is the central tenet of the Bayesian school of probability. The spirit is also similar to how randomness is defined in algorithmic complexity theory, which I do not find surprising at all since computer scientists are empiricists at heart.

Kolmogorov randomness defines a string (usually of bits) as being random if and only if it is shorter than any computer program that can produce that string. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program whose length is shorter than the length of the string itself. A counting argument is used to show that, for any universal computer, there is at least one algorithmically random string of each length. Whether any particular string is random, however, depends on the specific universal computer that is chosen.

Wikipedia - Kolmogorov Complexity

Here a completely different terminology is used to say basically the same thing (a small compression sketch follows after these bullets):

  • “compressibility” = “explainability” = “anticipatability”

  • “randomness can only be defined relative to a specific choice of a universal computer” = “randomness is in the eye of the beholder”
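
A minimal sketch of this correspondence, using off-the-shelf zlib compression as a crude, observer-specific stand-in for the (uncomputable) shortest-program length:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed length over original length; lower means more 'pattern' was found."""
    return len(zlib.compress(data, 9)) / len(data)

patterned = b"0110" * 2500      # an obvious repeating pattern
noise     = os.urandom(10_000)  # no pattern that zlib can see

print(compression_ratio(patterned))  # near 0: compressible, hence "explainable"
print(compression_ratio(noise))      # about 1: incompressible, i.e. random to this observer
```

A different observer with a better model, like the person in Shallis’ example who recognizes the binary code of a well-known song, could compress what looks like pure noise to zlib. Randomness is relative to the compressor at hand.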



Quantum Autonomy

Quantum Mechanics has randomness built into its very foundations. Whether this randomness is absolute or the theory itself is currently incomplete is not relevant. There is a maximal degree of unanticipatability (i.e. autonomy) in Quantum Mechanics, and it is practically uncircumventable. (Even the most deterministic interpretations of Quantum Mechanics fall back on artificially introduced stochastic background fields.)

Individually, quantum collapses are completely unpredictable, but collectively they exhibit a pattern over time. (For more on such structured forms of randomness, read this older blog post.) This is actually what allows us to tame the autonomy of quantum states in practice: Although we can not exert any control over them at any point in time, we can control their behavior over a period of time. Of course, as life evolves and gets faster (as pointed out in the beginning of this post), it will be able to probe time periods at more and more frequent rates and thereby tighten its grip on quantum phenomena.
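
As a minimal illustration of this individual-versus-collective asymmetry, here is a plain coin-flip style simulation (not a real quantum model; the 0.3 probability is an arbitrary choice):

```python
import random

random.seed(0)

# Simulate repeated "collapses" of a state that yields 1 with probability 0.3.
outcomes = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]

print(outcomes[:10])                   # any individual outcome is unpredictable
print(sum(outcomes) / len(outcomes))   # ~0.30: the collective pattern we can rely on
```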

Another way to view maximal unanticipatability is to frame it as maximal complexity. Remember that every new complexity layer emerges through a complexification process. Once a functional network with a boundary becomes complex enough, it starts to behave more like an “actor” with teleological tendencies. Once it becomes ubiquitous enough, it starts to display an ensemble-behavior of its own, forming a higher layer of complexity and hiding away its own internal complexities. All fundamentally unanticipatable phenomena in nature are instances of such actors who seem to have a sense of unity (a form of consciousness?) that they “want” to preserve.

Why should quantum phenomena be an exception? Perhaps Einstein was right and God does not play dice, and that there are experimentally inaccessible deeper levels of reality from which quantum phenomena emerge? (Bohm was also thinking this way.) Perhaps it is turtles all the way down (and up)?

Universe as a Collection of Nested Autonomies

Fighting for power is the same thing as fighting for control, and gaining control of something necessitates outgrowing the complexity of that thing. That is essentially why life is becoming more complex and autonomous over time.

Although each complexity layer can accommodate a similar level of maximal complexity within itself before starting to spontaneously form a new layer above itself, due to the nested nature of these layers, total complexity rises as new layers emerge. (e.g. We are more complex than our cells since we contain their complexity as well.)

It is not surprising that social sciences are much less successful than natural sciences. Humans are not that great at modeling other humans. This is expected. You need to out-compete in complexity what you desire to anticipate. Each layer can hope to anticipate only the layers below it. Brains are not complex enough to understand themselves. (It is amazing how we equate smartness with the ability to reason about lower layers like physics, chemistry etc. Social reasoning is actually much more sophisticated, but we look down on it since we are naturally endowed with it.)

Side Note: Generally speaking, each layer can have generative effects only upwards and restrictive effects only downwards. Generative effects can be bad for you as in having cancer cells and restrictive effects can be good for you as in having a great boss. Generative effects may falsely look restrictive in the sense that what generates you locks you in form, but it is actually these effects themselves which enable the exploration of the form space in the first place. Think at a population level, not at an individual level. Truth resides there.

Notice that as you move up to higher levels, autonomy becomes harder to describe. Quantum Mechanics, which currently seems to be the lowest level of autonomy, is open to mathematical scrutiny, but higher levels can only be simulated via computational methods and are not analytically accessible.

I know, you want to ask “What about General Relativity? It describes higher level phenomena.” My answer to that would be “No, it does not.”

General Relativity does not model a higher level complexity. It may be very useful today but it will become increasingly irrelevant as life dominates the universe. As autonomy levels increase all over, trying to predict galactic dynamics with General Relativity will be as funny and futile as using Fluid Dynamics to predict the future carbon dioxide levels in the atmosphere without taking into consideration the role of human beings. General Relativity models the aggregate dynamics of quantum “decisions” made at the lowest autonomy level. (We refer to this level-zero as “physics”.) It is predictive as long as higher autonomy levels do not interfere.

God as the Highest Level of Autonomy

The universe shows evidence of the operations of mind on three levels. The first level is elementary physical processes, as we see them when we study atoms in the laboratory. The second level is our direct human experience of our own consciousness. The third level is the universe as a whole. Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind. I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind. We stand, in a manner of speaking, midway between the unpredictability of atoms and the unpredictability of God. Atoms are small pieces of our mental apparatus, and we are small pieces of God's mental apparatus. Our minds may receive inputs equally from atoms and from God.

Freeman Dyson - Progress in Religion

I remember the moment when I ran into this exhilarating paragraph of Dyson. It was so relieving to find such a high-caliber thinker who also interprets quantum randomness as choice-making. Nevertheless, with all due respect, I would like to clarify two points that I hope will help you understand Dyson’s own personal theology from the point of view of the philosophy outlined in this post.

  • There are many, many levels of autonomy. Dyson points out only the most obvious three. (He calls them “minds” rather than autonomies.)

    • Atomic. Quantum autonomy is extremely pure and in your face.

    • Human. A belief in our own autonomy comes almost by default.

    • Cosmic. Universe as a whole feels beyond our understanding.

  • Dyson defines God as “what mind becomes when it has passed beyond the scale of our comprehension” and then he refers to the entirety of the universe as God as well. I on the other hand would have defined God as the top level autonomy and not referred to human beings or the universe at all, for the following two reasons:

    • God should not be human-centric. Each level should be able to talk about its own God. (There are many things out there that would count you as part of their God.)

      • Remember that the levels below you can exert only generative effects towards you. It is only the levels above that can restrict you. In other words, God is what constrains you. Hence, striving for freedom is equivalent to striving for Godlessness. (It is no surprise that people turn more religious when they are physically weak or mentally susceptible.) Of course, complete freedom is an unachievable fantasy. What makes humans human is the nurturing (i.e. controlling) cultural texture they are born into. In fact, human babies can not even survive without a minimal degree of parental and cultural intervention. (Next time you look into your parents’ eyes, remember that part of your God resides in there.) Of course, we also have a certain degree of freedom in choosing what to be governed by. (Some let money govern them for instance.) At the end of the day, God is a social phenomenon. Every single higher level structure we create (e.g. governments selected by our votes, algorithms trained on our data) governs us back. Even the ideas and feelings we restrict ourselves by arise via our interactions with others and do not exist in a vacuum.

    • Most of the universe currently seems to exhibit only the lowest level of autonomy. Not everywhere is equally alive.

      • However, as autonomy reaches higher levels, it will expand in size as well, due to the nested and expansionary nature of complexity generation. (Atomic autonomy lacks extensiveness in the most extreme sense.) So eventually the top level autonomy should grow in size and seize the whole of reality. What happens then? How can such an unfathomable entity exercise control over the entire universe, including itself? Is not auto-control paradoxical in the sense that one can not out-compete oneself in complexity? We should not expect to be able to answer such tough questions, just like we do not expect a stomach cell to understand human consciousness. Higher forms of life will be wildly different and smarter than us. (For instance, I bet that they will be able to manipulate the spacetime fabric, which seems to be an emergent phenomenon.) In some sense, it is not surprising that there is such a proliferation of religions. God is meant to be beyond our comprehension.

Four men, who had been blind from birth, wanted to know what an elephant was like; so they asked an elephant-driver for information. He led them to an elephant, and invited them to examine it; so one man felt the elephant's leg, another its trunk, another its tail and the fourth its ear. Then they attempted to describe the elephant to one another. The first man said “The elephant is like a tree”. “No,” said the second, “the elephant is like a snake”. “Nonsense!” said the third, “the elephant is like a broom”. “You are all wrong,” said the fourth, “the elephant is like a fan”. And so they went on arguing amongst themselves, while the elephant stood watching them quietly.

- The Indian folklore story of the blind men and the elephant, as adapted from E. J. Robinson’s Tales and Poems of South India by P. T. Johnstone in the Preface of Sketches of an Elephant

pharma vs diagnostics

The bioinformatics industry is bifurcating into two categories defined by the two extreme value-generation endpoints, namely drug development and data creation.

  • Drugs come with patent protection and therefore create defensible sources of revenue. Data, on the other hand, usually suffers from diminishing returns, so data generation can not sustain value indefinitely. This is less true in biology, which is (almost by definition) the most complex subject in the universe, although the fact that biological data seems to have a shorter half-life works in the opposite direction.

  • Pharma companies develop the drugs and (the volume driven) diagnostics companies generate (the majority of) the data.

Pharma companies love to dip into data because it allows them to drive their precision medicine programs forward by enabling

  • the targeting of the right patient cohorts for existing drugs, and

  • the generation of novel drug targets.

Better precision medicine generates more knowledge about genetic variants and more drugs targeting them, which in turn render diagnostic tests respectively more accurate and more useful. In other words, more data eventually leads to an increase in the demand for diagnostic tests and therefore results in the generation of even more data. (This positive feedback cycle will greatly accelerate the maturation of the precision medicine paradigm in the near future.)

Pharma companies and diagnostics companies behave very differently (as summarized in the table below) and this creates a polarity in the product and business model configuration space for the bioinformatics industry whose primary customers (in the private domain) are these two types of companies.

Pharma vs Diagnostics.png

The last two lines are very important and worth explaining in greater detail:

  • Pharma companies do basic research and therefore want to tap into all types of data sets. (They also have a greater tendency to use all types of analytical applications, while diagnostic companies ignore the long tail.) These datasets are generally huge and may be residing in a private cloud or with some public cloud provider. So pharma companies have to be able to connect to all of these datasets and run computation-heavy analyses that seamlessly weave through them. (When you are dealing with big data, computation needs to go to the data rather than the other way around.) In other words, they naturally belong to the multi-cloud paradigm. Diagnostics companies, on the other hand, belong to the cloud paradigm since they are optimizing cost and will just choose a single cloud provider based on price and convenience. (Read this older blog post to better understand the difference and polarity between the multi-cloud and cloud paradigms.)

  • Pharma companies are looking for help to solve their complex problems. Hence they are primarily focused on solutions. This pushes the software layer behind the services layer. In other words, the software is still there, but it is the service provider who is mostly using it. Diagnostics companies, on the other hand, focus on their unit economics. They do not need much consulting since they just optimize the hell out of their production pipelines and leave them alone most of the time.

thoughts on cybersecurity business

  • Cybersecurity and drug development are similar in the sense that neither can harbor deep, long-lived productification processes. Problems are dynamical. Enemies eventually evolve protection against productified attacks.

  • Cybersecurity and number theory are similar in the sense that they contain the hardest problems of their respective fields and are not built on a generally-agreed-upon core body of knowledge. Nothing meaningful is accessible to beginner-level students since all sorts of techniques from other subfields are utilized to crack problems.

Hence, in its essence, cybersecurity is an elite services business. Anyone claiming the opposite (that it is a product business, that it does not necessitate the recruitment of the best minds of the industry) is selling a sense of security, not real security.

evolution as a physical theory

Evolution has two ingredients, constant variation and constant selection.

Two important observations:

  1. Variation in biology exhibits itself in myriad forms, but they all can be traced back to the second law of thermodynamics, which says that entropy (on average) always increases over time. (It is not a coincidence that Darwin formulated the theory of natural selection in the 1850s, around the same time Clausius formulated the second law.)

  2. If you decrease selection pressures, the fitness landscape expands. You see fewer people dying around you, but you also see more variety at any given time. As we learn to cure and cope with (physical and mental) disorders using advances in the (hard and soft) sciences, extend our societal safety nets further, and improve our parenting and teaching techniques, more and more people stay alive and functional and go on to mate and reproduce. Progress creates more elbow room for evolution so that it can try out even wilder combinations than before.

    Conversely, if you increase selection pressures, the fitness landscape contracts, but in return the shortened life cycles enable evolution to shuffle through the contracted landscape of possibilities at a higher speed.

    Hence, selection pressure acts like a lever between spatial variation and temporal variation. Decreasing it increases spatial variation and decreases temporal variation; increasing it decreases spatial variation and increases temporal variation.

These observations imply respectively the following:

  1. Evolution never stops since the second law of thermodynamics is always valid.

  2. Remember, Einstein discovered that space and time by themselves are not invariant, only spacetime as a whole is. Similarly, evolution may slow down or speed up in the space or time dimensions, but is always constant at the spacetime level. In other words, the natural setting for evolution is spacetime.

It is not surprising that thermodynamics has so far stood out as the oddball that can not be unified with the rest of physics. The principle of entropy seems to be only half the picture. It needs to be combined with the principle of selection to give rise to a spacetime invariant theory at the level of biological variations. In other words, evolution (i.e. the principles of entropy and selection combined together) is more fundamental than thermodynamics from the point of view of physics.

Side Note: The trouble is that the principle of selection is a generative, computational notion and does not lend itself to a structural, mathematical definition. However the same can also be said for the principle of entropy, which looks quite awkward in its current mathematical forms. (Recall from the older post Biology as Computation that biology is primarily driven by computational notions.)

All of our theories in physics, except for thermodynamics, are time-symmetric. (i.e. They can not distinguish the past from the future.) The second law of thermodynamics, on the other hand, states that entropy (on average) always increases over time and can therefore (obviously) detect the direction of time. This strange asymmetry actually disappears in the theory of evolution, where something emerges to counterbalance the increasing entropy, namely increasing control.

Side Note: Entropy is said to increase globally, but control can only be exercised locally. In other words, control decreases entropy locally by dumping it elsewhere, just like a leaf blower. Of course, you may be wondering how, as finite localized beings, we can formulate any global laws at all. I share the same sentiment because, empirically speaking, we can not distinguish a sufficiently large local counterbalance from a global one. Whenever I talk about the entropy of the whole universe, please take it with a grain of salt. (Formally speaking, thermodynamics is not even defined for open systems. In other words, it can not be globally applied to universes with no peripheries.) We will dig deeper into the global vs local dichotomy in Section 3. (Strictly speaking, thermodynamics can not be applied locally either, since every system is bound to be somewhat open due to our inability to completely control its environment.)


1. Increasing Control

All living beings exploit untapped energy sources to exhibit control and influence the future course of their own evolution.

Any state that is not lowest-energy can be considered semi-stable at best. Eventually, by the second law of thermodynamics, every such state evolves towards the lowest-energy configuration and emits energy as a by-product. By “untapped energy sources” I mean such extractable pockets of energy.

So, put more succinctly, all living beings harness entropy to reduce entropy.
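
In standard thermodynamic language (my gloss, not the author's wording), an “untapped energy source” is a pocket of extractable free energy locked in a metastable state, and local order can only be bought by exporting at least as much disorder to the surroundings:

    \Delta G = \Delta H - T\,\Delta S \;<\; 0 \quad \text{(free energy released as a metastable state relaxes)}

    \Delta S_{\text{organism}} < 0 \ \text{ is permitted only if } \ \Delta S_{\text{organism}} + \Delta S_{\text{environment}} \;\geq\; 0

Metabolism is exactly this bookkeeping: coupling a downhill (entropy-producing) reaction to an uphill (order-building) one.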

The cumulative effect of their efforts over long periods of time has so far been quite dramatic indeed: What basically started out as simple RNA-based structures floating uncontrollably in oceans eventually turned into human beings proposing geo-engineering solutions to the global climate problems they themselves have created.

Let us now look at two interesting internal examples.


1.1. Cognitive Example

Our brains continuously make predictions and proactively interpolate from sensory data flow. In fact, when the higher (more abstract) layers of our neural networks lose the ability to project information downwards and become solely information-receivers, we slip into a comatose state.

Our predictive mental models slowly decay due to entropy (that is why blind people gradually lose their ability to dream) and are also at constant risk of becoming irrelevant. To address these problems, our brains continuously reconstruct the models in the light of new triggers and revise them in the light of new evidence. If they did not exercise such self-control, we would be stuck in an echo chamber of slowly decaying mental creations of our own. (That is why schizophrenic people gradually lose touch with reality.)

Autism and schizophrenia can be interpreted as imbalances in this controlled hallucination mechanism and be thought of as inverses of each other, causing respectively too much control and too much hallucination:

Aspects of autism, for instance, might be characterized by an inability to ignore prediction errors relating to sensory signals at the lowest levels of the brain’s processing hierarchy. That could lead to a preoccupation with sensations, a need for repetition and predictability, sensitivity to certain illusions, and other effects. The reverse might be true in conditions that are associated with hallucinations, like schizophrenia: The brain may pay too much attention to its own predictions about what is going on and not enough to sensory information that contradicts those predictions.

Jordana Cepelewicz - To Make Sense of the Present, Brains May Predict the Future
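
To make the trade-off concrete, here is a toy sketch of a precision-weighted prediction loop (my own illustration, not from the article; the mapping to the disorders above is of course a caricature). A belief is nudged toward each noisy observation by a gain factor: too high a gain chases every sensory fluctuation, too low a gain keeps hallucinating the prior and ignores contradicting evidence, and a moderate gain tracks reality best.

    import random

    def track(gain, steps=200, true_value=1.0, noise=0.5):
        """Mean absolute tracking error of a belief updated by prediction errors."""
        belief, total_error = 0.0, 0.0          # the belief starts at its prior (0.0)
        for _ in range(steps):
            observation = true_value + random.gauss(0, noise)   # noisy sensory input
            belief += gain * (observation - belief)              # prediction-error update
            total_error += abs(belief - true_value)
        return total_error / steps

    for gain in (0.01, 0.2, 0.95):   # under-, moderately-, and over-weighted prediction errors
        print(f"gain={gain}: mean tracking error = {track(gain):.2f}")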


1.2. Genomic Example

Since only 2 percent of our DNA actually codes for proteins, the remaining 98 percent was initially called “junk DNA”, a label which later proved to be a wild misnomer. Today we know that this junk part performs myriad interesting functions.

For instance, one thing it does for sure is to insulate the precious 2 percent from genetic drift by decreasing the probability that any given mutation event causes critical damage.
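
As a back-of-the-envelope illustration (my own, with an assumed round number of new mutations per generation), the insulation argument is just this arithmetic: under a uniform point-mutation model, the chance that any single mutation lands in the coding region equals the coding fraction of the genome.

    coding_fraction = 0.02            # roughly 2 percent of the genome codes for proteins
    mutations_per_generation = 70     # assumed round figure for new point mutations per person
    p_single_hit = coding_fraction
    p_all_miss = (1 - coding_fraction) ** mutations_per_generation
    print(f"P(a given mutation hits coding DNA) = {p_single_hit:.0%}")
    print(f"P(none of {mutations_per_generation} mutations hit coding DNA) = {p_all_miss:.0%}")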

Side Note: It is amazing how evolution has managed to diminish the coding region down to 2 percent (without sacrificing any functionality) by getting more and more dexterous at exposing the right coding regions (for gene expression) at the right time. This has resulted in greater variability of gene expression rates across different cellular contexts.

Remember (from our previous remarks) that if you decrease selection pressure, spatial variation increases and temporal variation decreases. Nature achieves this feat via an important intermediary mechanism. To understand this mechanism, first observe the following:

  1. The ability to decrease selection pressure requires greater control over the environment, and decreased selection pressure entails a longer life span.

  2. Exerting greater control over the environment requires more complex beings.

  3. More complexity and longer life span entail respectively greater fragility towards and longer exposure-time to random mutation events.

  4. This increased susceptibility to randomness in turn necessitates more protective control over genomes.

Since an expansion in the fitness landscape is worthless unless you can roam around on it, greater control exerted at the phenotypic level is useless without greater control exerted at the genotypic level. In other words, as we channel the speed of evolution from the temporal to the spatial dimension, we need to drive more carefully to make it safely home. From this point of view, it is not surprising at all that the percentage of non-coding DNA of a species is generally correlated with its “complexity”.

I used quotation marks here since there is no generally-agreed-upon, well-defined notion of complexity in biology. But one thing we know for sure is that evolution generates more and more of it over time.


2. Increasing Complexity

Evolution is good at finding efficient solutions but bad at simplification. As time passes by, both ecosystems and their participants become more complex.

Currently we (as human beings) are by far the greatest complexity generators in the universe. This sounds wildly anthropocentric of course, but when it comes to complexity, we are really the king of the universe.


2.1. Positive Feedback between Control and Complexity

Control and complexity are more or less two sides of the same coin. They always coexist because of the following strong positive feedback mechanism between them:

  • Greater control for you implies more selection pressure for everyone else. In other words, at the aggregate level, greater control increases selection pressure and thereby generates more complexity. (This observation is similar to saying that greater competition makes everyone stronger.)

  • How can you assert more control in an environment that has just become more complex? You need to increase your own complexity so that you can get a handle on things again. (This observation is similar to saying that human brain will never be intelligent enough to understand itself.)


2.2. Positive Feedback between Higher and Lower Complexity Levels

All ecological networks are stratified into several levels:

  • Internally speaking, each human being is an ecology unto himself, consisting of tens of trillions of cells, coexisting with equally many cells in the human bacterial flora. This internal ecology is stratified into levels like tissues, organs and organ systems.

  • Externally speaking, each human being is part of a complex ecology that is stratified into many layers that cut across our relationships to each other and to the rest of the biosphere.

Greater complexity generated at higher levels like economics, sociology and psychology propagates all the way down to the cellular level. Conversely, greater complexity generated at a very low level affects all the levels sitting above it. This positive feedback loop accelerates total complexity generation.

Two concrete examples:

  • The notion of an ideal marriage has evolved drastically over time, along with the increasing complexity of our lives. Family as a unit is evolving for survival.

  • Successful people at the frontiers of science, technology, business and art all tend to be quirky and abnormal. (Read the older blog post Success as Abnormality for more details.) Through such people, an expansion of the fitness landscape at the cognitive level propagates up to an expansion at the societal level.


2.3. Positive Correlation between Fragility and Complexity Level

Overall fragility increases as complexity levels are piled up on top of each other. In order to ensure stability, it is necessary for each level to be more robust than the level above it. (Think of the stability of pyramid structures.)

The invention of the nucleus by biological evolution is an illustrative example. Prokaryotes (cells without a nucleus) are much more open to information (DNA) sharing than the eukaryotes (cells with a nucleus) which depend on them. This makes them simpler but also more robust.

It could take eukaryotic organisms a million years to adjust to a change on a worldwide scale that bacteria [prokaryotes] can accommodate in a few years. By constantly and rapidly adapting to environmental conditions, the organisms of the microcosm support the entire biota, their global exchange network ultimately affecting every living plant and animal.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 30)

Whenever you see a long-lasting fragility, look for a source of robustness one level below. Just as our mechanical machines and factories are maintained by us, we ourselves are maintained by even more robust networks. Each level should be grateful to the level below.

Side Note: AI singularity people are funny. They seem to be completely ignorant about the basics of ecology. Supreme AI will be the single most fragile form of life. It can not take over the world. It can merely suffer from an illusion of control, just like we do. You can not destroy or control what is below you in the ecosystem. Survival of each level depends on the freedom of the level below. Just like we depend on the stability provided by freely evolving and information exchanging prokaryotes, supreme AI will depend on the stability provided by us.


2.4. Positive Correlation between Fragility and Firmness of Identity

How limited and rigid life becomes, in a fundamental sense, as it extends down the eukaryotic path. For the macrocosmic size, energy, and complex bodies we enjoy, we trade genetic flexibility. With genetic exchange possible only during reproduction, we are locked into our species, our bodies, and our generation. As it is sometimes expressed in technical terms, we trade genes "vertically" - through the generations - whereas prokaryotes trade them "horizontally" - directly to their neighbors in the same generation. The result is that while genetically fluid bacteria are functionally immortal, in eukaryotes sex becomes linked with death.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 93)

Biological entities that are more protective of their DNA (e.g. eukaryotes whose genes are packed into chromosomes residing inside nuclei) exhibit greater structural permanence. (We had reached a similar conclusion while discussing the junk DNA example in Section 1.2.) Eukaryotes are more precisely defined than prokaryotes, so to speak. Degree of flexibility correlates inversely with firmness of identity.

The firmer the identity gets, the more necessary death becomes. In other words, death is not a destroyer of identity; it is the reason why we can have identity in the first place. I suggest you meditate on this fact for a while. (It literally changed my view on life.)

  • The reason why we are not at peace with the notion of death is that we are still not aware of how challenging it was for nature to invent the technologies necessary for maintaining identity through time.

  • Fear of death is based on the ego illusion, which Buddha rightly framed as the mother of all misrepresentations about nature. This is the story of a war between life and non-life, between biology and physics, not you against the rest of the universe or your genes against other genes.


3. Physics vs Biology

 
Physics vs Biology.png
 

Physics and biology (with chemistry as the degenerate middle ground) can be thought of as duals of each other, as forces pulling the universe in two opposite directions.

Side Note: Simple design is best done over a short period of time, in a single stroke, with the spirit of a master. Complex design is best done over a long period of time, in small steps, with the spirit of an amateur. That is essentially why physics progresses in a discontinuous manner via single-author papers by non-cooperative genius minds, while biology progresses in a continuous manner via many-author papers by cooperative social minds.


3.1. Entropy, Time and Scale

Note that entropy and time are two sides of the same coin:

  • Time is nothing but motion. Time without any motion is not something that mortals like us can fathom.

  • All motion happens basically due to the initial low-entropy state of the universe and the statistical thermodynamic evolution towards higher entropy states. (The universe somehow began in a very improbable state and now we are paying the “price” for it.) In other words, entropy is the force behind all motion. It is what makes time flow. The rest of physics just defines the degrees of freedom inside which entropy can work its magic (i.e. increase the disorder of the configuration space defined by those degrees of freedom), and specifies how the time flow takes place via least action principles, which allow one to infer the unique time evolution of a particle or a field from the knowledge of its beginning and ending states.

Side Note: It is not a coincidence that among all physics theories only thermodynamics could not be formulated in terms of a least action principle. Least action principles give you one-dimensional (path) information that is inaccessible by experimentation. Basically, each experiment we do allows us to peek at different time slices of the universe, and each least action principle we have allows us to view each pair of time slices as the beginning and ending states of a unique wholesome causal story. (We can not probe nature continuously.) Entropy, on the other hand, does not work on a causal basis. (If it did, then it could not be responsible for time flow.) It operates in a primordially acausal fashion.
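
For concreteness, here is the textbook form of a least action principle (standard material, included only as a gloss): among all paths connecting two fixed endpoint states, the realized one is the path that makes the action stationary,

    S[q] \;=\; \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0 \ \text{ with } q(t_1),\, q(t_2) \text{ held fixed}
    \;\;\Rightarrow\;\; \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0.

Given the two boundary slices, the resulting Euler-Lagrange equation singles out the unique classical path connecting them, which is the “causal story” referred to above.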

When we flip the direction of time, thermodynamics starts working backwards and the energy landscape turns upside down. Time-flipped biological entities start harnessing order to create disorder, which is exactly what physics does.

The difference between physics and time-flipped biology is that the former operates globally and harnesses the background order that originates from the initial low-entropy state of the universe, while the latter harnesses local patches of order created by itself. (This is why watching time-flipped physics videos is a lot more fun than watching time-flipped biology videos.)

Side Note: There are nano scale examples of biology harnessing order to create disorder. This is allowed by the statistical nature of the second law of thermodynamics which says that entropy increases only on average. Small divergences may occur over short intervals of time. Large divergences too may occur but they require much longer intervals of time.

The heart of the duality between physics and biology lies in this “global vs local” dichotomy, which we will dig deeper into in the next section.

It is worth reiterating here the fact that entropy breaks symmetries in the configuration space, not in the geometric one. (It may even increase local order in geometric space by creating symmetric arrangements, as in spontaneous crystallisation, which disorders the momentum component of the configuration space via energy release.) Hence, strictly speaking, the “global vs local” dichotomy should not be interpreted purely in spatial terms. What time-flipped biology does is to harness local patches of configurational order (i.e. degrees of freedom associated with those locations), not spatial order.

Side Note: Entropy also triggers the breaking of some structural symmetries along the way. According to the standard cosmological picture, as the universe cooled and expanded from its initial hot and dense state, the primordial force split into the four forces (Gravitational, Electromagnetic, Weak Nuclear and Strong Nuclear) that we have today. (Again, as mentioned before, entropy is an oddball among all physics theories and is not regarded as a force since it does not have an associated field etc.) This de-unification happened through a series of three spontaneous symmetry breakings, each of which took place at a different temperature threshold.

3.2. Entropy and Dynamical Scale Invariance

Imagine a very low-entropy universe that consists of an equal number of zeros and ones which are neatly separated into two groups. (This is a fantasy world with no forces. In other words, the only thing you can randomize is position. So the configuration space just consists of the real space since there are no other degrees of freedom.) Global uniformity of such a universe would be low, since there would be only a fifty percent probability that any two randomly chosen local patches look like each other. Local uniformity, on the other hand, would be high, since all local patches (except for those centered at the borderline separating the two groups) will either have a homogeneous set of zeros or a homogeneous set of ones.

Entropy can be seen as a local operator breaking local uniformities in the configuration space. Over time, the total configuration space starts to look the same no matter how much you zoom in or out. In other words, the universe becomes more and more dynamically scale invariant.

Note that entropy does not increase uniformity. It actually does the opposite and decreases uniformity across the board so that the discrepancy between local and global uniformity disappears. Close to heat death (maximum theoretical entropy), no two local patches in the configuration space will look like each other. (They will be random in different ways.)
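
Here is a minimal simulation of the toy universe above (my own sketch; the patch width and swap counts are arbitrary choices). Entropy is modeled as a purely local operator that keeps swapping random neighboring cells. Two statistics are tracked: how internally homogeneous the local patches are, and how closely each patch resembles the 50/50 composition of the whole, a crude proxy for dynamical scale invariance. The first starts high and falls; the second starts at zero and rises.

    import random

    def patch_homogeneity(universe, width):
        """Average majority fraction inside each patch (1.0 = perfectly uniform patches)."""
        patches = [universe[i:i + width] for i in range(0, len(universe), width)]
        return sum(max(p.count(0), p.count(1)) / width for p in patches) / len(patches)

    def resemblance_to_whole(universe, width):
        """How closely each patch matches the global 50/50 mix (1.0 = every patch looks like the whole)."""
        patches = [universe[i:i + width] for i in range(0, len(universe), width)]
        return sum(1 - 2 * abs(sum(p) / width - 0.5) for p in patches) / len(patches)

    universe = [0] * 100 + [1] * 100        # neatly separated, very low-entropy start
    swaps_done = 0
    for target in (0, 2_000, 400_000):
        while swaps_done < target:          # entropy acting as a local operator
            k = random.randrange(len(universe) - 1)
            universe[k], universe[k + 1] = universe[k + 1], universe[k]
            swaps_done += 1
        print(f"swaps={target:>7}: patch homogeneity={patch_homogeneity(universe, 10):.2f}, "
              f"resemblance to whole={resemblance_to_whole(universe, 10):.2f}")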

Side Note: Due to the statistical nature of the second law of thermodynamics, universe will keep experiencing fluctuations to the very end. It can get arbitrarily close to heat death but will never actually reach it. Complete heat death means end of physics altogether.

Now a natural question to ask is whether there could have been other ways of achieving scale invariance. The answer is no, and the blocker is an information problem. You can not have complete knowledge about the global picture without infinite energy at your disposal, and without this knowledge you can not define a local operator that can achieve scale invariance. For instance, going back to our initial example, if your region of the universe happens to have no zeros, you would not even be able to define an operator that takes zeros into consideration. All you can really do is to ask every local patch to scatter everything so that (hopefully) whatever is out there will end up proportionally in every single patch. Of course, this is exactly what entropy itself does. (It is this random, zero-knowledge mechanism which gives thermodynamics its acausal nature.)

Biology, on the other hand, creates low-entropy islands by dumping entropy elsewhere and thereby works against the trend towards dynamical scale invariance. It is exactly in this sense that biology is anti-entropic. Entropy is not neutralized or cancelled; instead it is deflected through a series of brilliant jiu jitsu strokes so that it defeats its own goal.

Physics fights for dynamical scale invariance by breaking local uniformities in the configuration space and biology fights against dynamical scale invariance by creating local uniformities in the configuration space. This is the essence of the duality between physics and biology, but there is a slight caveat: Physics works on a global scale and hails down on all local uniformities in an indiscriminate manner, while biology begins in some local patches in a discriminate manner and slowly makes its way up to global scale, conquering physics from inside out, pushing entropy to the peripheries. (Biology needs to be discriminative since only certain locations are convenient to jumpstart life, and it needs to learn since - unlike physics - it does not have the privilege of starting global.)

Let us now scroll all the way to the end of time to see what this duality means for the fate of our universe.


3.3. Ultimate Fate of the Universe

There is no current scientific consensus about the ultimate fate of the universe. Some cosmologists believe in the inexhaustible expansion and the eventual heat death, while others believe in the unavoidable collapse and the subsequent bounce. Since nobody has any idea about how dark energy, dark matter and quantum gravity actually work, everything is basically up for grabs.

Side Note: Dark energy is uniformly distributed and non-interacting. It is posited to be the driving factor behind the acceleration of the uniform expansion of space. Dark matter, on the other hand, is non-uniformly distributed and gravitationally attractive. Together, dark energy and dark matter make up around 95 percent of the total energy content of the universe. Hence the reason why some people call junk DNA, which makes up 98 percent of the human genome, the dark sector of DNA. Funnily enough, in a similar fashion, more than 90 percent of the more evolved (white matter) part of the human brain is composed of non-neuron (glial) cells. (Axons in the white matter, as opposed to those in the gray matter, are myelinated and therefore conduct electrical signals at a much higher speed.) It seems like the degree of complexity of an evolving system is directly correlated with the degree of dominance of the modulator (e.g. non-neuron cells, non-coding DNA) over the modulated (e.g. neuronal cells, coding DNA). Could the prevalence of the dark sector be interpreted as evidence that physics itself is undergoing evolution? (Note that, in all cases, the scientific discovery of the modulator occurred quite late and with a great deal of astonishment. Whenever we see a variation exhibiting substructure, we should immediately suspect that it is modulated by its complement.)

One thing that is conspicuously left out of these discussions is life itself. Everyone basically assumes that entropy will eventually win. After all even supermassive black holes will inevitably evaporate due to Hawking radiation. Who would give a chance to a phenomenon (like life) that is close to non-existent at the grand cosmological scales?

Well, I am actually super optimistic about the future of life. It is hard not to be so after one studies (in complete awe) how far evolution has progressed in just a few billion years. Life is learning at a phenomenal speed and will figure out (before it gets too late) how to do cosmic-scale engineering.

Since no one really knows anything about the dynamics of a cosmic bounce (and how it interacts with thermodynamics), let us finish this long blog post with some fun speculations:

  • The never ending war between physics and biology may be the reason why time still exists and the universe still keeps on managing to collapse on itself while also averting a heat death. Life could have learned how to engineer an early collapse before a heat death or how to prevent a heat death long enough for a collapse. Life could have even learned how to leave a local fine-tuned low-entropy quantum imprint so that it is guaranteed to reemerge after the big bounce.

  • What if life always reaches total control in the sense of Section 1 in each one of the cosmic cycles and becomes indistinguishable from its environment? Could the beginning state of this universe’s physics be the ending state of the previous universe’s biology? In other words, could our entire universe be an extremely advanced life form? Could this be the god described by Pantheists? Was Schopenhauer right in the sense that the most fundamental aspect of reality is its primordial will to live? Is the acausal nature of thermodynamics a form of pure volition?

future of pharmaceutical industry

What will the future of the pharmaceutical industry look like?

It is clear that we are reaching the end of a paradigm, but what most people still do not get is how big the oncoming changes will be. We are on the cusp of a great intellectual revolution, on par with the revolution in 20th century physics. Computer science is unlocking biology, just like mathematics unlocked physics, and the consequences will be huge. (Read this older post for a deeper look at this interesting analogy between analogies.)

For the first time in history, we are engineering solutions from scratch rather than stumbling into them or stealing them from nature. Western medicine is only now truly taking off.

Not only will this transformation be breathtaking, but it will also be unfolding at a speed much faster than we expect. As biology becomes more information-theoretical, the pharmaceutical industry will become more software-driven and will start displaying more of the typical dynamics of the software industry, like faster scaling and deeper centralization and modularization.

Of course, predicting the magnitude of change is not the same thing as predicting how things will actually unfold. (Sometimes I wonder which one is harder. Remember Paul Saffo: “We tend to mistake a clear view of the future for a short distance.”) Let us give it a try anyway.


1. Splitting and Centralization of the Quantitative Brain

Just like the risk analytics layer is slowly being peeled out of big insurance companies as it is becoming more quantitative (small companies could not harbor such analytics departments anyway), the quantitative layer of the drug development process will split out of the massive pharmaceutical companies. (Similarly, in the autonomous driving space, companies like Waymo are licensing out self-driving technologies to big car manufacturers.)

Two main drivers of this movement:

  • Soft Reason. Culturally speaking, traditional (both manufacturing and service) companies can not nurture software development within themselves. Big ones often think that they can, but without exception they all end up wasting massive resources to realize that it is not a matter of resources. Similarly, they always end up suffocating the technology companies they acquire.

  • Hard Reason. Unlike services and manufacturing, software scales perfectly. In other words, the cost of reproduction of software is close to nil. This leads to centralization and winner-takes-all effects. (Even within big pharmas bioinformatics and IT departments are centralized.) Software developed in-house can never compete with software developed outside, which serves many customers, takes as input more diverse use cases and improves faster.

The study of complex systems (of which biology is an example) is conducted from either a state centric or a process centric perspective, using either statistical (AI driven) or deterministic (algorithm driven) methods. (Read this older post for a deeper look at the divide between state and process centric perspectives.)

In other words, the quantitative brain in biology will be centralized around four different themes:

  1. Algorithm Driven + State Centric

  2. AI Driven + State Centric

  3. Algorithm Driven + Process Centric

  4. AI Driven + Process Centric

Xtalpi is a good example of the 4th category. Seven Bridges in its current form belongs to the 1st category. There are other examples out there that fit neatly into one of these categories or cut across a few. (It is tough to cut across both state centric and process centric perspectives since the latter is mostly chemistry and physics driven and taps into a very different talent pool.)


2. Democratization and Commodification of Computation

Big pharma companies could afford to buy their own HPCs to run complex computations and manage data. Most are still holding onto these powerful clusters, but they are all realizing that this is not sustainable for two main reasons:

  • They either can not accommodate bursty computations or can not keep the machines busy all the time. So it is best for the machines to be aggregated in shared spaces where they are maintained centrally.

  • Since data sizes are exploding exponentially, data is becoming harder to move and more expensive to store. (Compute needs to go where the data is generated.)

Cloud computing took off for reasons entirely unrelated to biomedical data analysis, which will soon be the biggest beneficiary of this revolution as biomedical data sizes and computation needs surpass everything else. (It is not surprising that the centralized disembodied brain is developing in the same way as our decentralized embodied brains did. It got enlarged for social reasons and deployed later for scientific purposes.) Small biotechs can now run complex computations on massive data repositories and pay for computation just like they pay for electricity, only for the amounts they use. Big pharmas too are migrating to the cloud, finally coming to terms with the fact that cloud is both safer and cheaper. They are no longer uncomfortable parting with their critical data and no longer ignorant about the hidden costs of maintaining local hardware.

Long story short, democratization of computation is complete (aside from some big players with sunk cost investments) and the industry has already moved on to its next phase. Today we are witnessing a large scale commoditization of cloud services, driven by the following two factors:

  • Supply Side. Strong rivals arriving and catching up with Amazon Web Services.

  • Demand Side. Big players preferring to be cloud agnostic and supporting multi-cloud.


3. Democratization, Uniformization and Centralization of Data

Democratization. Big pharmas are hoarding data. They are entering into pre-competitive consortiums and forming partnerships with, or outright buying, diagnostics companies. Little pharmas (startup biotechs) are left out of this game, just as they were left out of the HPC game. But just like Amazon democratized computing, the National Institutes of Health (NIH) is now trying to democratize data. (Amazon and NIH are playing parallel roles in this grand story. Interesting.) Sooner or later public data will outstrip private data simply because health is way too important from a societal point of view.

Uniformization. NIH is also trying to uniformize data structures and harmonize compliance and security standards across the board, so that data can flow around at a higher speed.

Centralization. NIH not only wants to democratize and uniformize data, but it also wants to break data silos. Data is a lot more useful when it all comes together. (The fragmentation problem is especially acute in the US.) Similarly, imagine if everyone could hold all of their health data on a blockchain that they can share with any pharma in return for compensation. This is another form of centralization, radically bringing together everything at an individual level. All pharma companies would need to do is take a cross-section of the cohorts they are interested in.

With its top-down centralized policy making and absence of incumbent (novel drug developing) big pharmas, China will skip all of the above steps just as Africa skipped grid-based centralized electricity distribution and is jumping straight into off-grid decentralized solar power technologies.


4. Streamlining and Cheapening of Clinical Trials

It is extremely time-consuming and expensive to get a drug approved. In the 2000s, only 11 percent of drugs entering phase 1 clinical trials ended up being approved by the FDA. Biotech startups that can make it to phase 3 usually end up selling themselves completely (or partially, on a milestone basis) to big pharma companies simply because they can not afford the process. In other words, the final bottleneck for these startups in getting to the market on their own is clinical trials.

This problem is much more multidimensional and thorny than the previous ones, but there is still hope:

  • Time. Regulations are being streamlined, thereby making the approval process faster.

  • Cost. Genomics and real-world data are enabling better targeting (or - in the case of already approved drugs - retargeting) of patients, resulting in better-responding cohorts and thereby driving costs down.

  • Risk. As we get better at simulating human biology on hardware and software, parallelizability of experimentation will increase and thereby the number of unnecessary (sure to fail) experiments on human beings will decrease. In other words, just as in the software world, experiments will fail faster.


5. Democratization and Decentralization of Drug Development

As some of the largest companies in the world, big pharmas are intimidating, but from an evolutionary point of view, they are actually quite primitive. The existing fatness is not due to some incredible prowess or sustained success; it is entirely structural, in the sense that the industry itself has not fully matured and modularized yet. (In fact, there is little hope that they can execute the necessary internal changes and evolve a contemporary data-driven approach to drug development. That is why they seek acquisitions, outside partnerships, etc.)

If you split open a big pharma today, you will see a centralized quantitative brain (consisting of bioinformatics and IT departments) and a constellation of independent R&D centers around this brain. This is exactly what the whole pharma industry will look like in the future.

Once the quantitative brain is split off and centralized, computation is democratized and commoditized, data is democratized, uniformized and centralized, and clinical trials are streamlined and cheaper, there will be no need for biotech startups to merge themselves into the resource-rich environments of big pharma companies. Drugs will be developed in collaboration with the brain and be co-owned. (We have already started seeing partnerships between the brain and big pharma. Such partnerships will democratize and become commonplace.)

Biology will start off in independent labs and stay independent, and the startups will not have to sell themselves to the big guys if they do not want to, just as in the software world.

Biology is way too complex to allow repeat successes. The best ideas will always come from outsiders. In this sense, the pharma industry will look more like the B2C software world than the B2B software world. Stochastic and experimental.

We have already started to see more dispersed value creation in the industry:

“Until well into the 1990s, a single drug company, Merck, was more valuable than all biotech companies combined. It probably seemed as if biotech would never arrive—until it did. Of the 10 best-selling drugs in the US during 2017, seven (including the top seller, the arthritis drug Humira) are biotech drugs based on antibodies.”

- MIT Tech Review - Look How Far Precision Medicine Has Come

(I did not say anything about the manufacturing and distribution steps since the vast majority of these physical processes is already being outsourced by pharma companies. In other words, these aspects of the industry have already been modularized.)

Future of Pharma.png

states vs processes

We think of all dynamical situations as consisting of a space of states and a set of laws codifying how these states are woven across time, and refer to the actual manifestation of these laws as processes.

Of course, one can argue whether it is sensible to split reality into states and processes, but so far it has been very fruitful to do so.


1. Interchangeability

1.1. Simplicity as Interchangeability of States and Processes

In mathematics, structures (i.e. persisting states) tend to be exactly whatever are preserved by transformations (i.e. processes). That is why Category Theory works, why you can study processes in lieu of states without losing information. (Think of continuous maps vs topological spaces) State and process centric perspectives each have their own practical benefits, but they are completely interchangeable in the sense that both Set Theory (state centric perspective) and Category Theory (process centric perspective) can be taken as the foundation of all of mathematics.

Physics is similar to mathematics. Studying laws is basically the same thing as studying properties. Properties are whatever are preserved by laws and can also be seen as whatever give rise to laws. (Think of electric charge vs electrodynamics.) This observation may sound deep, but (as with any deep observation) it is actually tautologous, since we can study only what does not change through time, and only what does not change through time allows us to study time itself. (The study of time is equivalent to the study of laws.)

A couple of side notes:

  • There are no intrinsic (as opposed to extrinsic) properties in physics since physics is an experimental subject and all experiments involve an interaction. (Even mass is an extrinsic property, manifesting itself only dynamically.) Now here is the question that gets to the heart of the above discussion: If there exist only extrinsic properties and nothing else, then what holds these properties? Nothing! This is basically the essence of Radical Ontic Structural Realism and exactly why states and processes are interchangeable in physics. There is no scaffolding.

  • You have probably heard about the vast efforts and resources being poured into the validation of certain conjectural particles. Gauge theory tells us that the search for new particles is basically the same thing as the search for new symmetries, which are of course nothing but processes.

  • The Choi–Jamiołkowski isomorphism helps us translate between quantum states and quantum processes.
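
For the curious, the dictionary behind the last item is a one-liner (standard quantum information material, quoted here only as a gloss): a quantum process Φ is encoded as a single bipartite state, its Choi matrix, and can be fully recovered from it,

    J(\Phi) \;=\; \sum_{i,j} |i\rangle\langle j| \,\otimes\, \Phi\big(|i\rangle\langle j|\big),
    \qquad
    \Phi(\rho) \;=\; \mathrm{Tr}_A\!\big[\, (\rho^{T} \otimes I)\, J(\Phi) \,\big].

Knowing the state J(Φ) is thus literally the same thing as knowing the process Φ.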

Long story short, at the foundational level, states and processes are two sides of the same coin.


1.2. Complexity as Non-Interchangeability of States and Processes

You understand that you are facing complexity exactly when you end up having to study the states themselves along with the processes. In other words, in complex subjects, the interchangeability of state and process centric perspectives no longer makes any practical sense. (That is why stating a problem in the right manner matters a lot in complex subjects. The right statement is half the solution.)

For instance, in biology, bioinformatics studies states and computational biology studies processes. (Beware that the nomenclature in the biology literature has not stabilized yet.) Similarly, in computer science, the study of databases (i.e. states) and the study of programs (i.e. processes) are completely different subjects. (You can view programs themselves as databases and study how to generate new programs out of programs. But then you are simply operating one dimension higher. The philosophy does not change.)

There is actually a deep relation between biology and computer science (similar to the one between physics and mathematics) which was discussed in an older blog post.


2. Persistence

The search for signs of persistence can be seen as the fundamental goal of science. There are two extreme views in metaphysics on this subject:

  • Heraclitus says that the only thing that persists is change. (i.e. Time is real, space is not.)

  • Parmenides says that change is illusionary and that there is just one absolute static unity. (i.e. Space is real, time is not.)

The duality of these points of view was most eloquently pointed out by the physicist John Wheeler, who said, “Explain time? Not without explaining existence. Explain existence? Not without explaining time.”

Persistences are very important because they generate other persistences. In other words, they are the building blocks of our reality. For instance, states in biology are complex simply because biology strives to resist change by building persistence upon persistence.


2.1. Invariances as State-Persistences

From a state perspective, the basic building blocks are invariances, namely whatever does not change across processes.

The study of change involves an initial stage where we give names to substates. Then we observe how these substates change with respect to time. If a substate changes to the point where it no longer fits the definition of being A, we say that substate (i.e. object) A failed to survive. In this sense, the study of survival is a subset of the study of change. The only reason they are not the same thing is that our definitions themselves are often imprecise. (From one moment to the next, we say that the river has survived although its constituents have changed, etc.)

Of course, the ambiguity here is on purpose. Otherwise without any definiens, you do not have an academic field to speak of. In physics for instance, the definitions are extremely precise, and the study of survival and the study of change completely overlap. In a complex subject like biology, states are so rich that the definitions have to be ambiguous. (You can only simulate the biological states in a formal language, not state a particular biological state. Hence the reason why computer science is a better fit for biology than mathematics.)


2.2. Cycles as Process-Persistences

Processes become state-like when they enter into cyclic behavior. That is why recurrence is so prevalent in science, especially in biology.

As an anticipatory affair, biology prefers regularities and predictabilities. Cycles are very reliable in this sense: They can be built on top of each other, and harnessed to record information about the past and to carry information to the future. (Even behaviorally we exploit this fact: It is easier to construct new habits by attaching them to old habits.) Life, in its essence, is just a perpetuation of a network of interacting ecological and chemical cycles, all of which can be traced back to the grand astronomical cycles.

Prior studies have reported that 15% of expressed genes show a circadian expression pattern in association with a specific function. A series of experimental and computational studies of gene expression in various murine tissues has led us to a different conclusion. By applying a new analysis strategy and a number of alternative algorithms, we identify baseline oscillation in almost 100% of all genes. While the phase and amplitude of oscillation vary between different tissues, circadian oscillation remains a fundamental property of every gene. Reanalysis of previously published data also reveals a greater number of oscillating genes than was previously reported. This suggests that circadian oscillation is a universal property of all mammalian genes, although phase and amplitude of oscillation are tissue-specific and remain associated with a gene’s function. (Source)

A cyclic process traces out what is called an orbital, which is like an invariance smeared across time. An invariance is a substate preserved by a process, namely a portion of a state that is mapped identically to itself. An orbital too is mapped to itself by the cyclic process, but not identically so. (Each orbital point moves forward in time to another orbital point and eventually ends up at its initial position.) Hence orbitals and process-persistency can be viewed respectively as generalizations of invariances and state-persistency.
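
In bare symbols (my own formalization of the paragraph above), an invariance is a fixed point of the process, while an orbital is a finite set that the process merely permutes:

    \text{invariance:}\ \ f(x) = x
    \qquad\qquad
    \text{orbital:}\ \ \{\,x,\, f(x),\, f^{2}(x),\, \dots,\, f^{\,n-1}(x)\,\}\ \ \text{with}\ \ f^{\,n}(x) = x.

Setting n = 1 recovers the invariance, which is the precise sense in which orbitals generalize invariances.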


3. Information

In practice, we do not have perfect knowledge of either the states or the processes. Since we can not move both feet at the same time, in our quest to understand nature, we assume that we have perfect knowledge of either the states or the processes.

  • Assumption: Perfect knowledge of all the actual processes but imperfect knowledge of the state
    Goal: Dissect the state into explainable and unexplainable parts
    Expectation: State is expected to be partially unexplainable due to experimental constraints on measuring states.

  • Assumption: Perfect knowledge of a state but no knowledge of the actual processes
    Goal: Find the actual (minimal) process that generated the state from the library of all possible processes.
    Expectation: State is expected to be completely explainable due to perfect knowledge about the state and the unbounded freedom in finding the generating process.

The reason I highlighted the expectations here is that it is quite interesting how our psychological stance against the unexplainable (which is almost always - in our typical dismissive tone - referred to as noise) differs in each case.

  • In the presence of perfect knowledge about the processes, we interpret the noisy parts of states as absence of information.

  • In the absence of perfect knowledge about the processes, we interpret the noisy parts of states as presence of information.

The flip side of the above statements is that, in our quest to understand nature, we use the word information in two opposite senses.

  • Information is what is explainable.

  • Information is what is inexplainable.


3.1. Information as the Explainable

In this case, noise is the ideal left-over product after everything else is explained away, and is considered normal and expected. (We even gave the name “normal” to the most commonly encountered noise distribution.)

This point of view is statistical and is best exemplified by the field of statistical mechanics, where massive numbers of micro degrees of freedom can be safely ignored due to their random nature and canned into highly regular noise distributions.


3.2. Information as the Inexplainable

In this case, noise is the only thing that can not be compressed further or explained away. It is surprising and unnerving. In computer speak, one would say “It is not a bug, it is a feature.”

This point of view is algorithmic and is best exemplified by the field of algorithmic complexity, which looks at the notion of complexity from a process centric perspective.
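
A quick way to feel the algorithmic sense of information is to run a general-purpose compressor over structured versus random data (a small sketch of my own): the patterned string compresses away almost entirely, while the random string is, for all practical purposes, its own shortest description.

    import random, zlib

    patterned = ("01" * 5_000).encode()                              # highly regular, easily explained away
    noisy = bytes(random.getrandbits(8) for _ in range(10_000))      # incompressible "noise"

    for name, data in (("patterned", patterned), ("random", noisy)):
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name}: compressed to {ratio:.1%} of original size")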

hubris as high mutational burden

Checkpoint inhibitors seem to work best against tumor types and cancers with lots of genetic mutations. Because it is unusual in the body, this heavy mutational load seems to be easier for the immune system to identify as not belonging to ‘self’. Lung cancers triggered by smoking are generally loaded with mutations, and smokers respond to the checkpoint-inhibition therapies better than those who have never smoked. One strategy is to use combination therapies — such as chemotherapy plus a checkpoint inhibitor — to trigger mutations that will make it easier for the immune system to recognize tumor cells.

The Quest to Extend the Reach of Checkpoint Inhibitors in Lung Cancer (Weintraub)

Stronger cancers are easier to defeat. (Who would have thought that smoking could increase the odds of survival?) Strategically speaking, this outrageously counter-intuitive conclusion is actually quite generalizable.

Making your enemy stronger makes sense in many different contexts. Once the ego inflates and hubris kicks in, your enemy inevitably starts making mistakes, just like a highly mutated cancer cell giving itself away to the immune system. The trick is to reach this state as quickly as possible so that you still have enough energy to act with fury when your enemy makes the fatal mistake. (Remember that you do not need to win every battle to become the final victor.)

Complex systems exhibit phase transitions. Making your enemy stronger can tilt the equilibrium, helping you initiate a favorable phase transition. For instance, as a young adult growing up, you need to rebel against your parents, and friendly parents make this maturation process harder. Similarly, as you dump plastic into it, nature needs to learn how to turn this waste into food, and eco-friendly policies make the adaptation process harder. Just as you can not expect to grow up via trivial adversities, you can not expect nature to come up with plastic-eating bacteria via occasional exposures.

PS: On a similar note, see the post Against Small Doses, which argues in favor of (low-frequency) high doses within the (positive) pleasure domain, whereas the current post is focused on the (negative) pain domain.