The Origin of the 11D World Membrane as a Pascal Conic Section of a 6D-String in 5D Projective Space

Lisa R. Parker

The biography of David Hilbert discusses a debate over whether it was an intellectual faux pas to advocate advancing the 2,000-year-old Theorem of Pappus to the status of an axiom. The issue is notable from a number of perspectives, one being his 1920 proposal establishing the Hilbert Program, which sought to place mathematics and geometry on a more solid and complete logical foundation conforming to ever more inclusive ‘meta-mathematical principles.’

Yet at that time, he proposed this could be done if: (1) all mathematics follows from a complete, correctly chosen, finite system of axioms, and (2) that axiom system is provably consistent through some means such as his epsilon calculus. Although this formalism proved influential in Hilbert’s work in algebra and functional analysis, it failed to engage in the same way with his interests in logic and physics – not to mention his axiomatization of geometry, given the sketchy issue of regarding Pappus’s theorem as an axiom. Likewise, a similar problem arose when Bertrand Russell rejected Cantor’s proof that there is no ‘greatest’ cardinal number, and went on to defend the ‘logicism’ of his and A. N. Whitehead’s thesis in Principia Mathematica that all mathematics is, in some important sense, reducible to logic. But both Hilbert’s and Russell’s support for an axiomatized mathematical system of definitive principles that could banish theoretical uncertainties “forever” would end in failure by 1931.

For Kurt Gödel demonstrated that any such non-contradictory (self-consistent) formal system comprehensive enough to at least include arithmetic could not demonstrate its own completeness (nor, conversely, its categorical consistency) by way of its own axioms. This means Hilbert’s program was impossible as stated, since there is no way the second point can be rationally combined with assumption (1) as long as the system of axioms is indeed finite – otherwise one must add an infinite series of new axioms, beginning, I suppose, with Pappus’s! Likewise, Gödel’s incompleteness theorem reveals that neither Principia Mathematica, nor any other consistent system of recursive arithmetic, could decide whether every proposition, or its negation, was provable within that system itself. Yet beyond Hilbert’s faux pas concerning Pappus, one should note that Gödel’s theorem itself, in some realistic sense, actually supports Hilbert’s basic idea of a deeper, more inclusive, ‘meta-logical’ foundation as a ‘Gödelian mapping’ that ‘covers’ all mathematics and geometry. Indeed, it was Hilbert’s student Gentzen who used a Gödel mapping of ‘orders’ of transfinite systems of numbers to prove the consistency of ordinary arithmetic – thus validating it truly meta-logically. In any case, though this conclusion also loosely conforms to Russell’s logicist ideations, it at once demonstrates a vast improvement over his criticism of Cantor’s proof of an infinite series of cardinal numbers – which, after all, is the point of Cantor’s arguments, in the sense that some ‘axiom of continuity’ like Archimedes’s is required to generate an infinite field of real numbers. That makes Hilbert’s Pappian faux pas seem almost trivial in comparison – for I would like to know how Russell expected to find some ‘greatest cardinal number,’ or how he expected to axiomatically describe continuity for an infinite range of numbers or points on a line; i.e., before, let alone after, any infinite axiomatic system became an additional issue!

In either case, had Russell or Hilbert truly taken Occam’s razor to heart, they would likely both have slit their own throats with it before revealing their biased assumptions and inconsistencies for the world to see for eternity. Which simply means: why elevate a provable theorem to the status of an axiomatic assumption, or introduce your own inconsistent system of assumptions, when one is clearly better off leaving everything as is? Yet I am elated to instead wield that razor properly in order to cut up such icons a little posthumously – as if God wrenched it from their suicidal hands, if just to reciprocally thank them for the lenient opportunity to show everyone once again why fools seem habitually to skewer themselves as absurdist fodder for us “lesser” fools or “commoners” in some exclusive or “formal” organizational hierarchy. Which is why the wiser ones just say: the higher the monkey climbs the tree, the more it gets exposed to those underneath! (But then, too, always watch out below, before something blows out that hole back into your face!!)

In any case, this brings us back to another, older, and so more pressing issue directly connected to Pappus’s ‘hexagon theorem’ – as generalized by Blaise Pascal on a projective conic section, or 6-point oval, in 1639, when he was just 16 years old. Naturally impressed by Desargues’s work on conics, he produced, as a means of proof, a short treatise on what he called the “Mystic Hexagram,” better known ever since simply as Pascal’s theorem. This basically states (as Wikipedia defines it) that if an arbitrary hexagon is inscribed in any conic section, and the opposite sides are extended until they meet, the three intersection points will lie on a straight line, the so-called ‘Pascal line’ of that configuration.
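Since the statement is easy to test numerically, here is a minimal sketch of my own (not Pascal’s construction; the helper names ‘line’ and ‘meet’ are just illustrative) using six points on the unit circle in homogeneous coordinates, with the collinearity of the three meets checked by a determinant:

```python
# Minimal numeric check of Pascal's theorem on the unit circle,
# using homogeneous coordinates. Helper names are hypothetical.
import numpy as np

def line(p, q):
    """Line through two homogeneous points = their cross product."""
    return np.cross(p, q)

def meet(l, m):
    """Intersection of two homogeneous lines = their cross product."""
    return np.cross(l, m)

# Six points on the conic x^2 + y^2 = 1, taken in hexagon order A..F.
angles = [0.3, 1.1, 1.9, 2.8, 4.0, 5.2]
A, B, C, D, E, F = (np.array([np.cos(t), np.sin(t), 1.0]) for t in angles)

# Opposite sides of hexagon ABCDEF: (AB, DE), (BC, EF), (CD, FA).
P = meet(line(A, B), line(D, E))
Q = meet(line(B, C), line(E, F))
R = meet(line(C, D), line(F, A))

# The three meets are collinear iff det[P; Q; R] = 0: the Pascal line.
print(np.linalg.det(np.vstack([P, Q, R])))  # ~0, up to rounding error
```

Run it with any six distinct angles: the determinant stays at rounding-error size, which is the Pascal line asserting itself numerically.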

Though this simple description verbally suffices, it may fail to convey the fuller, and more truly ‘mystic,’ aspects that earn the Pascal theorem and configuration the distinction of being regarded as the most fundamental construct in projective geometry. And while diagrams would certainly help clarify things, especially the following descriptions, it is hard enough re-formatting this article’s content from the preferred notebook text to accommodate the differing formats of the web’s various e-magazines and article distribution services. In any case, it is no coincidence that I not only made the Pascal conic the cover figure for my text covering projective geometry and its subgeometries, but also included a frontispiece of various 6-element conics relevant to all of them, including the Brianchon configuration, the projective dual of Pascal’s. Interested readers can go to the resource box and pull up at least the Pascal cover figure, if not the frontispiece.

Anyway, the text’s cover figure illustrates Pascal’s theorem on a simple hexagon formed by mutually inscribing a complete 6-point (15 lines) and a complete 6-line (15 points). These represent the respective plane sections of a complete six-dimensional 6-line-at-a-point and a complete three-dimensional 6-plane, derived by recurrently sectioning a complete five-dimensional 6-point – the simplest representation, as a maximal spatially-extended projective set of vertices, of this object. This description thus emphasizes the figure’s deep significance in encompassing the entire dimensional gamut of the ‘axioms of incidence’ (before one goes on to introduce the additional sets necessary to establish sense and continuity): beginning with the simplest axioms of dimensional extension and 5D closure, along with the projective dual of six lines at a point, which is then sectioned back down to the final incidence relations corresponding to six complete points on a line and six complete lines through a point. Likewise, one can add the 5-space to the 6-line dimensions to obtain an 11D manifold that serves as a covering space mapping to what amounts to a finite projective geometry that is at once both complete and categorically consistent (since it is not concerned with infinite ranges of points or numbers, it is not restricted by, but again supports, Gödel’s reasoning). So, for example, it is quite interesting that J. W. P. Hirschfeld points out, in his text on projective geometries over finite fields, that no 6-conics exist in dimensions higher than eleven.
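As a quick check on the counts just cited, a few lines of Python (my own counting sketch, assuming nothing beyond the standard library) recover the 15 lines of the complete 6-point, and also the 60 distinct hexagons – hence 60 Pascal lines, the classical hexagrammum mysticum – that the same six conic points support:

```python
# Counting sketch for the 'complete 6-point' figures described above.
from itertools import permutations
from math import comb

# A complete 6-point: every pair of its 6 vertices spans a line.
print(comb(6, 2))  # 15 lines; dually, 15 points for the complete 6-line

def canonical(seq):
    """Canonical form of a cyclic sequence, up to rotation and reflection."""
    rots = [seq[i:] + seq[:i] for i in range(len(seq))]
    rots += [tuple(reversed(r)) for r in rots]
    return min(rots)

# Distinct hexagons on the same six points: orderings modulo the 12
# symmetries of the hexagon. Each hexagon yields its own Pascal line.
hexes = {canonical(p) for p in permutations(range(6))}
print(len(hexes))  # 60 = 6!/12 hexagons, hence 60 Pascal lines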

Which brings us to the crux of this article as it relates to the mathematical physics of (super)string, and ‘M’ or membrane, theory – which I have yet to see pared down by Occam’s blade to its essence in the foundations of geometry, thus prompting this summary, comprehensible to a wider segment of intelligent folks. Superstring theory is based on a four-dimensional space-time, or physical metric, that together with an internal six-dimensional (Kähler) manifold (or compact Calabi-Yau space) – for what can now be thought of as the 1D strings of the 6-line-at-a-point – forms a total 10-dimensional system. But it became apparent by the 1980s that a promising unification of physics within one quantum-gravitational theory of superstrings was impossible, since the theories branched into five distinct 10D mathematical groups (which led to a situation where a raft of mathematical eggheads, most with little interest in physics per se, began dominating the theoretical physics departments). This led to a second ‘superstring revolution’ in the mid-1990s, when Ed Witten concluded that each of the 10D superstring theories is a different aspect of what was originally called a single ‘Membrane theory’ (see http://en.wikipedia.org/wiki/Membrane_Theory), whose totality is naturally eleven-dimensional and establishes interrelations between the different superstring group theories as described by various ‘dualities.’ For just as 1D strings are more manageable, finite extensions of singular points, groups of strings on a plane form ‘world-sheets’ as literal ‘2D membranes,’ where these so-called ‘branes’ can be defined in any dimension, starting with a 0-brane, or point.

So though the total system can correctly be called an ‘11D World Membrane,’ Witten generically prefers to call it ‘M-theory,’ where M can stand for membrane, mother, mathematical, matrix, master, mystery, magic – or then, as Pascal would forcefully add, Mystic! In any case, there is little doubt that someday a full rendering of six dimensions of compacted 1D strings in a 5D space-time will fulfill Einstein’s dream of a fully unified physical theory. But personally I am far less concerned with ‘theoretical’ unifications than with a comprehensive rendering of physics and cosmology, replete with a raft of confirmable data. For I have developed the first dimensionless or ‘pure’ system of (Planck) scaling, which I call ‘Mumbers,’ or ‘membrane numbers,’ and which precisely covers the entire spectrum of particle and space-time physics. And while I do not have the IQ, propensity, or patience to follow or pursue higher mathematics for theoretical purposes, I have, on the other hand, tested many people, yet have still to find anyone, physicist or mathematician, who can successfully write a pure numeric equation for even one relation between distinct physical states. Likewise, standard superstring or M-theory has yet to make any confirmed, or even confirmable, predictions – no more than I, at least, have seen anyone point out the geometric foundations of M-theory as a Pascal 11-dimensional section of a dual projective 5-space and a 6D six-string-at-a-point. So regardless of my mathematical inadequacies, I can guarantee that no workable ‘unified M-theory of physics’ can possibly be developed until the intellectual community accepts the meta-logical tautology of both Pascal’s mysterious 6-conic underlying the foundations of geometry, and the accompanying unified ‘dimensionless’ scaling that already represents a proven system spanning the whole of physics.
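My ‘Mumber’ system itself is beyond this article’s scope, so to be clear, the sketch below is not that system – it is only the standard textbook construction of dimensionless Planck scaling, offered as a generic illustration of what it means to express a physical quantity as a pure number:

```python
# Generic illustration of dimensionless (Planck) scaling.
# NOTE: this is NOT the author's 'Mumber' system, which is not
# specified in this article; it is the standard construction.
import math

hbar = 1.054571817e-34  # J*s   (reduced Planck constant)
G    = 6.67430e-11      # m^3 kg^-1 s^-2 (gravitational constant)
c    = 2.99792458e8     # m/s   (speed of light)

# Planck mass: m_P = sqrt(hbar * c / G)
m_planck = math.sqrt(hbar * c / G)   # ~2.18e-8 kg

# Any mass becomes a pure number once divided by its Planck unit:
m_electron = 9.1093837015e-31        # kg
print(m_electron / m_planck)         # ~4.19e-23, dimensionless
```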
