23.02.2023

The Zero Delusion v0.99x

Zero signifies absence or an amount of no dimension and allegedly exemplifies one of humanity's most splendid insights. Nonetheless, it is a questionable number. Why did algebra embrace zero and dismiss infinity despite representing symmetric and complementary concepts? Why is zero exceptional in arithmetic? Is zero a "real" point? Does it have a geometrical meaning? Is zero naturalistic? Is it universal? Digit 0 is unnecessary in positional notation (e.g., bijective numeration). The uniform distribution is unreachable, transmitting nil bits of information is impossible, and communication is never error-free. Zero is elusive in thermodynamics, quantum field theory, and cosmology. A minimal fundamental extent is plausible but hard to accept because of our acquaintance with zero. Mathematical zeroes are semantically void (e.g., empty set, empty sum, zero vector, zero function, unknot). Because "division by zero" and "identically zero" are uncomputable, we advocate for the nonzero algebraic numbers to build new physics that reflects nature's countable character. In a linear scale, we must handle zero as the smallest possible nonzero rational or the limit of an asymptotically vanishing sequence of rationals. Instead, zero is a logarithmic scale's pointer to a being's property via log(1). The exponential function, which decodes the encoded data back to the linear scale, is crucial to understanding the Lie algebra-group correspondence, the Laplace transform, linear fractional transformations, and the notion of conformality. Ultimately, we define a "coding space" as a doubly conformal transformation realm of zero-fleeing hyperbolic geometry that keeps the structural and scaling relationships of the world.

structure, while holistic information gets imprecise. Thus, a being handles only imperfect information and struggles between the specific and the general. In particular, a living being is never wholly ignorant or aware of its environment. This argument is also why we can strictly discard neither the null nor the alternative hypothesis in statistics. The most we can do is minimize presumptions about a system, such as configurational preconditions and restrictions. The maximum entropy principle, or minimal information principle, claims that [191] "the probability distribution appropriate to a given state of information is one that maximizes entropy subject to the constraints imposed by the information given", a bet for the least informative a priori distribution possible and hence the most sensible of all "fair" random a priori distributions. This principle "affords a rule of inductive inference of the widest generality" [160], a giant step toward formalizing Occam's razor as a rule, for instance, to choose a prior probability distribution in Bayesian thinking. Note that we always believe something before considering evidence; presuming nothing is impossible. Likewise, the posterior distribution is never thoroughly informative; we cannot cancel its information entropy. These are new signs that we live in a countable and orderly world.
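The maximum entropy principle can be checked numerically in the simplest case: with no constraints beyond normalization, the uniform distribution over N outcomes maximizes Shannon entropy, and any skewed alternative carries less. A minimal Python sketch (the two example distributions are illustrative, not from the source):

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# With no constraints beyond normalization, the maximum-entropy
# distribution over N outcomes is uniform, with H = log2(N).
uniform = [1 / 4] * 4
skewed = [0.7, 0.1, 0.1, 0.1]

h_uniform = shannon_entropy(uniform)  # log2(4) = 2 bits
h_skewed = shannon_entropy(skewed)    # strictly less than 2 bits
```

Any informative constraint (a fixed mean, say) only lowers the achievable maximum, which is the sense in which the uniform prior is the "fairest" bet.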

Information interchange, discussed above, preserves heat. However, what can we learn from a dissipative process? Information loss is a physical process intricately linked with irreversibility or unpredictability [86], so a being must constantly acquire new information to postpone death. Landauer's erasure principle states that any phase-space-compressing irreversible computation, such as erasing information from memory, "would result in a minimal nonzero amount of work converted into heat dumped into the environment and a corresponding rise in entropy" [105]. Thus, a computational system continually creates entropy, and constant entropy means no computation takes place.
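Landauer's bound can be made concrete: erasing one bit dissipates at least k_B·T·ln 2 of heat. A small sketch (the room temperature T = 300 K is an illustrative assumption, not a value from the source):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)
T = 300.0           # assumed ambient temperature in kelvin

# Landauer's bound: the minimal heat dissipated by erasing one bit.
e_min = K_B * T * math.log(2)
print(f"minimum erasure cost: {e_min:.3e} J per bit")
```

The value is on the order of 10^-21 joules, tiny but strictly nonzero, which is exactly the point: no irreversible computation is thermodynamically free.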

Can a system have no entropy? Defining "entropy" as the logarithm of the number of possible microstates N of a system and assuming that all have the same occurrence probability, its entropy vanishes when the system has only one microstate, i.e., one component (N = 1) with no degrees of freedom, whence closed; it would not be a system per se, but just a beable, if any. Moreover, a system evaporates if N = 0, whence speaking of entropy is nonsense. Can a system be utterly simple? Suppose we measure the complexity of a being by the length of the computer program in a predetermined formal programming language that produces such a system as output. In that case, the only possibility of possessing no (Kolmogorov [100]) complexity is having no entropy, i.e., no information content. Thus, the representation of the void, i.e., no freedom and no complexity, is log 1.
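The two boundary cases above, N = 1 giving zero entropy and N = 0 making entropy meaningless, can be stated directly in code. A minimal sketch (working in units where k_B = 1 is an assumption for simplicity):

```python
import math

def boltzmann_entropy(n_microstates):
    """S = log N for N equiprobable microstates, in units with k_B = 1."""
    if n_microstates < 1:
        # N = 0: the "system" does not exist; entropy is undefined.
        raise ValueError("a system with no microstates has no entropy")
    return math.log(n_microstates)

s_void = boltzmann_entropy(1)  # one microstate: log 1 = 0, the void
s_bit = boltzmann_entropy(2)   # two microstates: log 2, one bit's worth
```

The N = 1 case returning exactly log 1 = 0 is the representation of the void discussed above; the N = 0 case raising an error mirrors the claim that speaking of entropy there is nonsense.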

We finish this subsection by commenting on a few paramount investigations that have consolidated the thermodynamics-information connection. Shannon's Source Coding Theorem limits the reliability of data coding [155]; an ordinary (noisy) channel cannot transfer error-free data if the codewords are finite in length, meaning that error is natural. Bell's theorem unveils that locality is a condition vaguer than classically assumed [14]; we had better regard phenomena as neither absolutely local nor global. Evans' fluctuation theorem is crucial to generalizing the 2nd law of thermodynamics to microscopic systems, with
