The Zero Delusion v0.99x

Zero signifies absence or an amount of no dimension and allegedly exemplifies one of humanity's most splendid insights. Nonetheless, it is a questionable number. Why did algebra embrace zero and dismiss infinity, despite their being symmetric and complementary concepts? Why is zero exceptional in arithmetic? Is zero a "real" point? Does it have a geometrical meaning? Is zero naturalistic? Is it universal? Digit 0 is unnecessary in positional notation (e.g., bijective numeration). The uniform distribution is unreachable, transmitting nil bits of information is impossible, and communication is never error-free. Zero is elusive in thermodynamics, quantum field theory, and cosmology. A minimal fundamental extent is plausible but hard to accept because of our acquaintance with zero. Mathematical zeroes are semantically void (e.g., empty set, empty sum, zero vector, zero function, unknot). Because "division by zero" and "identically zero" are uncomputable, we advocate for the nonzero algebraic numbers to build new physics that reflects nature's countable character. On a linear scale, we must handle zero as the smallest possible nonzero rational or the limit of an asymptotically vanishing sequence of rationals. Instead, zero is a logarithmic scale's pointer to a being's property via log(1). The exponential function, which decodes the encoded data back to the linear scale, is crucial to understanding the Lie algebra-group correspondence, the Laplace transform, linear fractional transformations, and the notion of conformality. Ultimately, we define a "coding space" as a doubly conformal transformation realm of zero-fleeing hyperbolic geometry that keeps the structural and scaling relationships of the world.

Luckily, we can resort to algebraic numbers. A is a countable and computable set, whence definable and arithmetical except at zero, that forms a field because the sum, difference, product, and quotient (presuming that the denominator is nonzero) of two algebraic numbers are also algebraic. Besides, A is algebraically closed because every root of a polynomial equation whose coefficients are algebraic numbers is algebraic, which we can consider the universal implementation of the Fundamental Theorem of Algebra [101]. With this résumé, an algebraic number represented as a "polyrational", i.e., a sequence of nonzero rational numbers, seems to match an observable's measurement requirements more realistically than a complex number does.
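
These closure properties are easy to check computationally. The sketch below is my own illustration (not part of the paper) and assumes SymPy is available; the values a and b are arbitrary choices for the demonstration. It asks minimal_polynomial for a defining polynomial with rational coefficients of each combination, certifying that the result is again algebraic.

```python
# Illustrative only: closure of the algebraic numbers under field operations,
# certified by producing a defining polynomial with rational coefficients.
from sympy import sqrt, Rational, minimal_polynomial, symbols

x = symbols('x')

a = sqrt(2)                    # algebraic: a root of x**2 - 2
b = Rational(3, 5) + sqrt(3)   # algebraic: a root of x**2 - (6/5)x - 66/25

for value in (a + b, a - b, a * b, a / b):
    # Each result is again algebraic: minimal_polynomial returns a univariate
    # polynomial over the rationals having `value` as a root.
    print(minimal_polynomial(value, x))
```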

Before adopting A as a universal number framework, we must consider its effectiveness. An algebraic number is a root of a nonzero (non-trivial) univariate (involving one indeterminate or variable) polynomial with rational coefficients (irreducible fractions). This definition comes with a pair of caveats. First, the representation is not unique. For example, consider the univariate polynomials

$$z^{8} + \tfrac{1}{12}z^{4} + \tfrac{1}{30}z^{3} + \tfrac{7}{2}z, \qquad 2z^{6} + \tfrac{1}{6}z^{2} + \tfrac{1}{15}z + \tfrac{7}{z}, \qquad 2z^{7} + \tfrac{1}{6}z^{3} + \tfrac{1}{15}z^{2} + 7,$$

$$z^{7} + \tfrac{1}{12}z^{3} + \tfrac{1}{30}z^{2} + \tfrac{7}{2}. \tag{2}$$

They are somewhat equivalent. However, suppose we require the polynomial's highest-degree (leading) coefficient to be the positive unit, i.e., a monic polynomial, and the lowest-degree (trailing) constant to be nonzero. In that case, the only valid representation is the last one, (2). Such a unique irreducible representation is the "minimal" polynomial, and the set of minimal polynomials forms a ring. Because the trailing constant of a minimal polynomial is a nonzero rational, neither rational numbers nor functions x^n, with n ∈ Ň, belong to that ring. Consequently, zero cannot be a root of a minimal polynomial and is not algebraic. Since a rational power of a nonzero algebraic number is also a nonzero algebraic number, and the non-vanishing difference (i.e., excluding expressions of the form z − z, such as 1 − 1 or 2a − (3 + a − 1 − 2) − a), sum, product, and quotient of two nonzero algebraic numbers are again nonzero algebraic numbers, Ǎ ≡ A − {0} is an algebraically closed field without exceptions!
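
As a concrete check of this normalization, the sketch below (my own, assuming SymPy; the paper provides no code) strips factors of z until the trailing constant is nonzero and then divides by the leading coefficient, so the three polynomial variants above (the variant containing the 1/z term is left out) all reduce to the same minimal representative (2).

```python
# Illustrative only: reduce a polynomial to the "minimal" form described above,
# i.e., monic with a nonzero trailing constant.
import sympy as sp

z = sp.symbols('z')

variants = [
    z**8 + sp.Rational(1, 12)*z**4 + sp.Rational(1, 30)*z**3 + sp.Rational(7, 2)*z,
    2*z**7 + sp.Rational(1, 6)*z**3 + sp.Rational(1, 15)*z**2 + 7,
    z**7 + sp.Rational(1, 12)*z**3 + sp.Rational(1, 30)*z**2 + sp.Rational(7, 2),
]

def minimal_form(expr):
    """Monic representative with a nonzero trailing (lowest-degree) constant."""
    p = sp.Poly(expr, z)
    while p.eval(0) == 0:            # trailing constant is zero: divide out one factor of z
        p = p.quo(sp.Poly(z, z))
    return p.monic()                 # force the leading coefficient to be +1

forms = [minimal_form(v).as_expr() for v in variants]
print(forms[0] == forms[1] == forms[2])   # True
print(forms[0])                           # z**7 + z**3/12 + z**2/30 + 7/2
```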

According to subsection 3.2, we can use the bijective notation to set a zero-free, unambiguous codification of a minimal polynomial; for instance, the minimal polynomial (2) in signed bijective radix-3 notation is the "canonical" expression

$$\bigl\{\, 21 + (1/33)\,3 + (1/233)\,2 + (21/2) \,\bigr\}_{3}.$$
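
For readers unfamiliar with bijective numeration, the following sketch (mine, not the paper's, and covering only unsigned positive integers rather than the signed variant used above) converts an integer to bijective radix-3, whose digit set is {1, 2, 3} with no digit 0. It reproduces the coefficients and exponents appearing in (2): 7 → 21, 12 → 33, 30 → 233.

```python
# Illustrative only: bijective (zero-free) radix-3 numeration with digit set {1, 2, 3}.
def to_bijective_base3(n: int) -> str:
    """Write a positive integer using digits 1, 2, 3 only (no digit 0)."""
    if n <= 0:
        raise ValueError("bijective numeration encodes positive integers only")
    digits = []
    while n > 0:
        r = n % 3
        if r == 0:
            r = 3            # substitute digit 3 for the remainder 0
        digits.append(str(r))
        n = (n - r) // 3
    return "".join(reversed(digits))

# Coefficients and exponents of the minimal polynomial (2):
for n in (2, 3, 7, 12, 30):
    print(n, "->", to_bijective_base3(n))   # 2 -> 2, 3 -> 3, 7 -> 21, 12 -> 33, 30 -> 233
```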
