09 Mar 2018

What awaits humanity, pt I: extinction

…between true utopia, assorted dystopias and extinction

Some futurologists insist on the concept of a bifurcation or singularity point, beyond which prediction would have little meaning. Applied to a “midterm” time scale, that might indeed be plausible. But on the larger scale there are still a few constants which can be postulated a priori. Some might disagree and put to doubt every notion related to time, but i consider such an approach too limited to be fun.

So, given some constant basis for the universe, such as time and probability theory, we can speculate on the distant future.

Extinction might be inevitable, says science

While this article isn’t really about entropy and the second law of thermodynamics, or about assorted universe-end theories (such as the big rip and the big crunch), it is worth mentioning that the ultimate fate of humanity might as well be death by the cruel forces of physics.

I will, however, consider situations in which the ultimate end of the universe doesn’t matter. For the sake of this article, i’ll give humanity endless time to kill itself. Because, seriously, if exactly nothing can be done about the end of the world, discussing that single fact is boring.

Humanity vs humanities vs trans-humanity

Since i do not presuppose much, i should consider the possibility of humanity splitting into disconnected or loosely connected sub-humanities (most likely separated by cosmic space). I will not rule out the possibility of FTL travel altogether (however absurd it may be), but strictly speaking it’s a bit irrelevant. What is important about being loosely connected is that the most technically advanced sub-humanity cannot reliably affect the others. Of course, such a definition is still loose, but i hope my readers get the basic concept.

Extinction can affect one sub-humanity or all of humanity. Other fates are completely independent under hypothetical full disconnection (such as going beyond an event horizon) and mostly independent for loosely connected sub-humanities. I’ll talk about the exceptions where appropriate.

Another thing to touch upon is trans-humanism: cyborgs, mind uploading, merging and splitting. I’ll try to presuppose as little as possible here, and even if i don’t talk much about it explicitly, many of the cases i examine might also cover some of even the most bizarre trans-human ideas.

For example, a single human constantly living alone in a vast virtual space running on a distributed supercomputer would count as a (quite fragile) sub-humanity for most matters.

If you think your idea truly brings something new to the table, i’m open to considering it.

Finally, i should also mention that while i acknowledge the possibility of a “re-emerging humanity” (that is, that a life form indistinguishable from humans will be/is/was independently appearing in the universe), it goes beyond the scope of this article. I’m going to consider only current humans or their “descendants”, because otherwise only truly universe-wide catastrophes would lead to full extinction of all human forms.

Chances of probabilistic extinction

Now, after establishing some basic stuff, let’s consider the simplest future.

There are many factors that could lead to humanity’s extinction, which for your pleasure i would categorize as follows:

  • Natural catastrophe
  • Anthropogenic catastrophe (extinction by unintended involuntary death)
  • Self-extermination (extinction by intended involuntary death)
  • Voluntary extinction (extinction by natural or intended voluntary death)

Potential extinction via nuclear war (so adored by some readers) could fall under categories A or S, depending on why it was started and why it killed everyone. Or even partly under category N, if said nuclear war triggered a natural disaster (there are purely natural and purely anthropogenic catastrophes, of course).

Now, let’s consider the probabilities of these categories. I’m going to use their first letters (note to math snobs: i could use Pₓ notation instead, but it would be less readable in plain text). But the probability of Extinction at a single instant is of course 0 (if you don’t know anything about that, please just take my word for it). So i’m going to use additional notation: X₁(t) for the probability of X during one time unit at time t, X(t) for the probability of X happening before moment t, and X∞ for the probability of X happening given infinite time.
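The notation above can be sketched in code. This is a minimal illustration, assuming independent chances at each time unit (the function names are mine, not standard): X(t) is one minus the product of per-unit survival probabilities.

```python
def x_before(x1, t):
    """X(t): probability that event X happens before moment t,
    given x1(s) = X1(s), the per-time-unit probability at time s.
    Assumes independence: X(t) = 1 - product of (1 - X1(s))."""
    p_survive = 1.0
    for s in range(t):
        p_survive *= 1.0 - x1(s)
    return 1.0 - p_survive

# A constant 1% chance per time unit accumulates quickly:
print(round(x_before(lambda s: 0.01, 100), 3))  # → 0.634
```

Note that X(t) only grows with t; X∞ is its limit as t goes to infinity.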

If you know enough math, you probably already understand where i am going with this; please excuse the verbosity, intended for those less familiar with math.

So, the question then is: what is E∞? As you might guess, E(t) accumulates E₁ over time (roughly its sum, or integral). The only chance of E∞ being less than 1 (i.e. short of certainty) is if lim E₁ = 0 — and, strictly speaking, E₁ must shrink fast enough that its sum converges. Since E₁ can be roughly estimated as the sum of N₁, A₁, S₁ and V₁, lim E₁ is the sum of their limits.

Throwing away any pretense of mathematical rigor, in plain words the conclusion is the following: the only chance of humanity existing forever is if its extinction chances can either be zeroed or be constantly reduced.
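The conclusion can be checked numerically. A sketch, again assuming independent per-unit chances (the decay rate 1/t² is my arbitrary example, not a claim about real extinction risks): a constant E₁ drives survival to zero, while an E₁ that shrinks fast enough leaves a positive chance of existing forever.

```python
def survival(x1, horizon):
    """Probability of no extinction through `horizon` time units:
    the product of (1 - E1(t)) over t = 1..horizon."""
    p = 1.0
    for t in range(1, horizon + 1):
        p *= 1.0 - x1(t)
    return p

# Constant E1 = 0.001: survival decays geometrically toward 0,
# so E_inf = 1 (extinction certain given infinite time).
const = survival(lambda t: 0.001, 100_000)

# Decaying E1(t) = 0.001 / t^2: the sum of E1 converges, so survival
# stays bounded away from 0 and E_inf < 1.
decaying = survival(lambda t: 0.001 / t**2, 100_000)

print(const, decaying)  # const is vanishingly small; decaying stays near 1
```

Merely having lim E₁ = 0 is not enough: E₁(t) = 1/t also tends to zero, but its sum diverges and extinction remains certain.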

Oh well, that was kinda obvious. Why’d i spend so much time writing it out?

The next part (if i ever write it) will consider individual extinction reasons and what it would take to make all of their limits zero.

TBC
