Investigating the Logical Compression of the Primes into the Prime Primes

https://docs.google.com/spreadsheets/d/1B4uRhiu3s58vEQRdFwZXf_TAPDtuCPCn56T2VeHh7fk/edit?usp=sharing

The prime primes are the primes such that

Pp = { p_n : the difference p_(n+1) - p_n is novel, i.e. it does not appear among any of the previous differences p_i - p_j }
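
A minimal sketch of how I read this definition, treating "all previous p_i - p_j" as the gaps between earlier consecutive primes (the function name prime_primes is my own, and sympy.primerange is used only for convenience; neither comes from the spreadsheet):

```python
from sympy import primerange

def prime_primes(limit):
    """Primes p_n whose gap to the next prime has not occurred as any
    earlier consecutive gap (one reading of 'not found in all previous
    p_i - p_j')."""
    ps = list(primerange(2, limit))
    seen, out = set(), []
    for a, b in zip(ps, ps[1:]):
        gap = b - a
        if gap not in seen:        # novel difference -> a is a prime prime
            seen.add(gap)
            out.append(a)
    return out

print(prime_primes(1000))  # [2, 3, 7, 23, 89, 113, 139, 199, 523, 887]
```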

Elucidating the Prime Prime Number Machine

Changeform Characterization Tab: The first 44774 primes are listed in column C, and column D tracks the difference between successive primes. Columns G – BL keep a running count of how many times each specific prime difference has been observed up to a given prime index. Whenever a difference appears for the first time we highlight it, since it marks the start of a new 'epiphany generation': the list of prime differences that follow a novel prime difference, up to the next novel prime difference.
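
A rough sketch of what I understand this tab to compute, with the running per-column counts replaced by a dict keyed on gap value; changeform_rows is a hypothetical name of mine, not something from the spreadsheet:

```python
from collections import Counter
from sympy import primerange

def changeform_rows(limit):
    """One row per prime: the prime, its gap to the next prime, whether that
    gap is novel (i.e. it opens a new epiphany generation), and the running
    counts of every gap value seen so far."""
    counts, rows = Counter(), []
    ps = list(primerange(2, limit))
    for a, b in zip(ps, ps[1:]):
        gap = b - a
        novel = counts[gap] == 0   # first time this difference is observed
        counts[gap] += 1
        rows.append((a, gap, novel, dict(counts)))
    return rows

for p, gap, novel, running in changeform_rows(100):
    print(p, gap, "<- novel difference" if novel else "")
```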

Changeforms Without Gaps: Because 44774 lines in Excel are unwieldy, we manually pull out the 50 lines that correspond to the 50 epiphany generations produced by watching for those novel prime differences.
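
The manual extraction can be sketched as a filter on the novel-difference flag (this reuses the hypothetical changeform_rows from the previous sketch and is only my reading of the step):

```python
# Keep only the rows where a difference appears for the first time; over the
# full range of 44774 primes this filter should land on the 50 generation
# boundaries described above.
boundary_rows = [row for row in changeform_rows(1000) if row[2]]
for p, gap, _, running in boundary_rows:
    print(p, gap, running)
```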

Epiphany Coefficients: Here we take the difference from one row to the next, so we can see which changeforms belong to a specific epiphany generation.
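
If I read this tab correctly, the coefficients are the row-to-row differences of the running counts at the generation boundaries; a sketch under that assumption, again reusing the hypothetical changeform_rows:

```python
def epiphany_coefficients(limit):
    """Per-generation gap counts: the difference of the running counts between
    consecutive generation-boundary rows."""
    boundaries = [r for r in changeform_rows(limit) if r[2]]
    prev, coeffs = {}, []
    for p, gap, _, running in boundaries:
        delta = {g: running[g] - prev.get(g, 0) for g in running}
        coeffs.append((p, gap, {g: c for g, c in delta.items() if c}))
        prev = running
    return coeffs

for p, gap, delta in epiphany_coefficients(1000):
    print(p, gap, delta)   # which gap values occurred, and how often, in that generation
```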

Epiphany Characterization: Here we investigate the properties of each epiphany generation. The columns highlighted green show that we have established fundamental information about the 'entity understood as the primes' (dark green marks the specific coefficients for each changeform that appears in an epiphany generation).

Finally, in 'Instantaneous Decomposition Set' we show that the prime numbers can be 'factored' (non-uniquely) additively against a row of decision coefficients (-1, 0, 1) over a much more tightly bounded set of prime primes. This 'embedded arithmetic' has an extremely large number of properties across various series.
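
My reading of this claim is that each prime can be written, non-uniquely, as a signed sum of prime primes with coefficients drawn from (-1, 0, 1). A brute-force sketch under that assumption, with the first eight prime primes hardcoded as the basis (signed_decompositions is my own name for the search):

```python
from itertools import product

def signed_decompositions(target, basis):
    """All coefficient vectors c in {-1, 0, 1}^len(basis) whose signed sum
    over the basis equals target."""
    return [coeffs for coeffs in product((-1, 0, 1), repeat=len(basis))
            if sum(c * b for c, b in zip(coeffs, basis)) == target]

basis = [2, 3, 7, 23, 89, 113, 139, 199]     # first eight prime primes
for coeffs in signed_decompositions(31, basis):
    terms = [c * b for c, b in zip(coeffs, basis) if c]
    print("31 =", " + ".join(str(t) for t in terms))
# Prints several lines, e.g. 31 = -2 + 3 + 7 + 23 and 31 = 7 + -89 + 113,
# illustrating that the decomposition is non-unique.
```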

The important thing to understand about epiphany generations is that each one can be treated as a specific object which contains fundamental information about another object, and can therefore be used to parameterize predictions AHEAD OF TIME on a second pass through an algorithm that uses the information contained in that generation. The methodology generalizes to all series, but it matters most here in the primes because it demonstrates that there is always a terminating path through a generation as long as you have the set of changeforms which occur in that generation, which is much smaller than the set of all potential changeforms.

Who sees an issue with this structure, and if there isn’t an issue, what do we think about the results?