[138] **viXra:1212.0173 [pdf]**
*replaced on 2013-01-03 10:08:33*

**Authors:** Armando V.D.B. Assis

**Comments:** 15 Pages. Footnote [3] added.

This paper is intended to show that the Schrodinger equation, within its structure, allows the manifestation of the wave function collapse in a very natural way. In fact, as we will see, nothing new must be inserted into classical quantum mechanics; only the dialectics of the physical world must be interpreted in the correct manner. We know the nature of a physical system turns out to be quantum or classical and, once the Schrodinger equation is valid to provide the evolution of this physical system, the dialectics, quantum or classical, mutually exclusive, must also be in context through the Schrodinger equation; these issues are within the main scope of this paper. We will show that a classical measurement, the obtaining of a classical result, emerges from the structure of the Schrodinger equation, once one demands the possibility that, over a chronological domain, the system begins to exhibit a classical dialectic, showing that the collapse may be understood both from the structure of the Schrodinger equation and from the general solution to this equation. The general solution, even with a dialectical change of description, leads to the conservation of probability, obeying the Schrodinger equation. These issues turn out to be a consequence of a general potential-energy operator, obtained in this paper, which includes the possibility of a classical description of the physical system and of interpreting the collapse of the quantum mechanical state vector within the scope of the Schrodinger equation.

**Category:** Quantum Physics

[137] **viXra:1212.0172 [pdf]**
*submitted on 2012-12-31 19:53:03*

**Authors:** Belolipetsky P.V., Bartsev S.I.

**Comments:** 14 Pages.

We performed linear multivariate regression analysis using available estimates of natural and anthropogenic influences and the observed surface temperature records from 1900 to 2012. We considered four parts of the Earth's surface: the tropics (30S-30N), the northern middle latitudes (30N-60N), the Arctic (60N-75N) and the southern latitudes (60S-30S). For each part (except the southern latitudes) we developed very simple linear regression models representing temperature dynamics without continuous anthropogenic influence. The monthly average tropical SST temperature anomaly dynamics could be adequately reproduced by only three factors: ENSO variability (Nino 3.4 index), volcanic aerosols in the stratosphere, and two climate shifts in the years 1925/1926 and 1987/1988. The northern middle latitude SST temperature anomaly could be reproduced in general by the same factors, except that ENSO is replaced here by the Pacific Decadal Oscillation (PDO). Continents in these parts have the same dynamics but with much more variability. Arctic temperature anomalies have in general the same dynamics as the SST temperature anomalies of the Atlantic Ocean in the northern middle latitudes (30N-60N). We did not manage to build any adequate regression model for the southern latitudes with or without anthropogenic influences, but it does not look like temperatures there are determined by continuous anthropogenic influence. The results enable us to suggest a quantitative hypothesis, alternative to the IPCC view, about the mechanism of the climate change observed in the past century.
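
The kind of regression model described in this abstract can be sketched as follows. The predictor and temperature series below are synthetic stand-ins (the actual Nino 3.4, aerosol and temperature records are not reproduced here), so this only illustrates the model structure — ENSO index, volcanic aerosols, and two step-function climate shifts — not the paper's results.

```python
# Minimal sketch of a linear multivariate regression with two step-shift
# regressors, on synthetic monthly data (assumption: all series invented).
import numpy as np

rng = np.random.default_rng(0)
months = np.arange("1900-01", "2013-01", dtype="datetime64[M]")
n = len(months)

# Hypothetical predictor series standing in for the real records.
nino34 = rng.normal(0.0, 1.0, n)           # ENSO index proxy
aerosol = np.abs(rng.normal(0.0, 0.1, n))  # stratospheric aerosol proxy
shift_1926 = (months >= np.datetime64("1926-01")).astype(float)
shift_1988 = (months >= np.datetime64("1988-01")).astype(float)

# Synthetic "observed" anomaly generated from known coefficients plus noise.
temp = (0.1 * nino34 - 2.0 * aerosol
        + 0.15 * shift_1926 + 0.25 * shift_1988
        + rng.normal(0.0, 0.05, n))

# Ordinary least squares with an intercept column; coef recovers the
# contribution of each factor to the temperature anomaly.
X = np.column_stack([np.ones(n), nino34, aerosol, shift_1926, shift_1988])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
```

With real records substituted for the synthetic series, the fitted residuals would show how much of the anomaly the three natural factors leave unexplained.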

**Category:** Climate Research

[136] **viXra:1212.0171 [pdf]**
*submitted on 2012-12-31 21:28:19*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 7 Pages.

Potential time trends for water levels in Lake Athabasca, Canada, were investigated, with particular emphasis on a critical examination of the available hydrometric record and other confounding factors militating against reliable trend detection on this system. Four hydrometric stations are available on Lake Athabasca, but only the Lake Athabasca near Crackingstone Point (07MC003) site has suitable, albeit temporally limited (1960-2010), records for a rigorous time series analysis of annual water levels. The examination presented herein provides evidence that the 2010 lake level dataset at 07MC003 is flawed and should not be included in any trend analyses. With the conclusion that 2010 lake levels on Lake Athabasca at station 07MC003 are erroneous, lake level time series regressions over various timeframes between 1960 and 2009 yield widely varying degrees of non-significance and slope magnitude/direction. As a further confounding factor against mechanistic time trend analyses of water levels on Lake Athabasca, a dam and rockfill weirs were constructed on the lake outlets during the 1970s in order to maintain elevated lake levels. Thus, the entire time series of lake levels on Lake Athabasca since the filling of the reservoir behind the W.A.C. Bennett Dam (Lake Williston) began in 1968 can be described as experiencing substantial anthropogenic modification. Collectively, these influences, including problems in the hydrometric record, appear to sufficiently impact the annual lake level record as to prevent reliable trend analyses that unequivocally isolate natural factors such as climate change or any other anthropogenic factors that may be operative in the source watersheds.

**Category:** Climate Research

[135] **viXra:1212.0170 [pdf]**
*submitted on 2012-12-31 23:03:15*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 1 diagram

It is hypothesized that mathematics is not a language but a series of symbols used to make measurements which either combine or subtract them. It does not possess the ability to communicate any idea effectively because there are no concrete objects in mathematics. In real languages, in which people can communicate ideas to one another, both abstractions and concrete ideas are needed for meaning. The use of abstractions only, as in the case of mathematics, means that communication is avoided altogether. This is why quantum mechanics, general relativity, string theory and other purely mathematical theories are not scientific theories but unfalsifiable conjecture without meaning. This is why they are still taught in schools and why they will never be falsified: they are unfalsifiable because they contain no meaning.

**Category:** Linguistics

[134] **viXra:1212.0169 [pdf]**
*replaced on 2013-01-06 10:13:34*

**Authors:** Hans Detlef Hüttenbach

**Comments:** 7 Pages.

In this article I pick up from [4] and [3] and show that the mathematical relations of quantum mechanics derive from classical electrodynamics, albeit without the use of the principle of indeterminacy.

**Category:** Relativity and Cosmology

[133] **viXra:1212.0168 [pdf]**
*submitted on 2012-12-31 08:22:43*

**Authors:** Hans Detlef Hüttenbach

**Comments:** 3 Pages. (It's really more than 10 years old.)

It is proven that every complete, metrizable locally convex space (a.k.a. F-space) is reflexive. This in particular disproves an old conjecture that L^\infty was the dual of L^1. It is shown that, indeed, L^\infty contains a subspace of uncountable dimension not contained in the dual of L^1.

**Category:** Functions and Analysis

[132] **viXra:1212.0167 [pdf]**
*submitted on 2012-12-31 09:45:17*

**Authors:** Steven Kenneth Kauffmann

**Comments:** 8 Pages.

The inherently homogeneous stationary-state and time-dependent Schroedinger equations are often recast into inhomogeneous form in order to resolve their solution nonuniqueness. The inhomogeneous term can impose an initial condition or, for scattering, the preferred permitted asymptotic behavior. For bound states it provides sufficient focus to exclude all but one of the homogeneous version's solutions. Because of their unique solutions, such inhomogeneous versions of Schroedinger equations have long been the indispensable basis for a solution scheme of successive perturbational corrections anchored by their inhomogeneous term. Here it is noted that every such perturbational solution scheme for an inhomogeneous linear vector equation spins off a nonperturbational continued-fraction scheme. Unlike its representation-independent antecedent, the spin-off scheme only works in representations where all components of the equation's inhomogeneous term are nonzero. But that requirement seems to confer a theoretical-physics robustness heretofore unknown: whereas for quantum fields the order of the perturbation places a bound on the unperturbed particle number, the spin-off scheme has only basis elements of unbounded unperturbed particle number. It is furthermore difficult to visualize such a continued-fraction spin-off scheme generating infinities, since its successive iterations always go into denominators.

**Category:** Mathematical Physics

[131] **viXra:1212.0166 [pdf]**
*replaced on 2014-02-26 16:13:45*

**Authors:** Keith D. Foote

**Comments:** 10 Pages. V2-minor typo corrections

The characteristics and behavior patterns of dark matter are examined and described as a support mechanism for the electromagnetic field. Flaws in Einstein’s models are examined and compared with an updated version of the aether as dark matter.

**Category:** Astrophysics

[130] **viXra:1212.0165 [pdf]**
*replaced on 2013-04-06 11:06:13*

**Authors:** Jay R. Yablon

**Comments:** 22 Pages. Version 4 has been accepted for publication by the Journal of Modern Physics, and will appear in their April 2013 "Special Issue on High Energy Physics."

In an earlier paper, the author employed the thesis that baryons are Yang-Mills magnetic monopoles and that proton and neutron binding energies are determined by their up and down current quark masses to predict a relationship among the electron and up and down quark masses within experimental errors, and to obtain a very accurate relationship for nuclear binding energies generally and for the binding of 56Fe in particular. The free proton and neutron were understood to each contain intrinsic binding energies which confine their quarks, wherein some or most (never all) of this energy is released for binding when they are fused into composite nuclides. The purpose of this paper is to further advance this thesis by seeing whether it can explain the specific empirical binding energies of the light 1s nuclides, namely 2H, 3H, 3He and 4He, with high precision. As the method to achieve this, we show how these 1s binding energies are in fact the components of inner and outer tensor products of Yang-Mills matrices which are implicit in the expressions for these intrinsic binding energies. The result is that the binding energies for the 4He, 3He and 3H nuclides are respectively, independently, explained to less than four parts in one million, four parts in 100,000, and seven parts in one million, all in AMU. Further, we are able to exactly relate the neutron minus proton mass difference to a function of the up and down quark masses, which in turn enables us to explain the 2H binding energy most precisely of all, to just over 8 parts in ten million. These energies have never before been theoretically explained with such accuracy, which leads to the conclusion that the underlying thesis provides the strongest theoretical explanation to date of what baryons are, and of how protons and neutrons confine their quarks and bind together into composite nuclides. As is also reviewed in Section 9, these results may lay the foundation for more easily catalyzing nuclear fusion energy release.

**Category:** High Energy Particle Physics

[129] **viXra:1212.0164 [pdf]**
*submitted on 2012-12-31 04:19:39*

**Authors:** Nicola D'Alfonso

**Comments:** 4 Pages.

In this paper, I introduce a particular discrete spacetime that should be seriously considered as part of physics, because it makes it possible to explain the characteristics of motion properly, contrary to what happens with the continuous spacetime of the common conception.

**Category:** Mathematical Physics

[128] **viXra:1212.0163 [pdf]**
*submitted on 2012-12-31 05:34:55*

**Authors:** John R Ramsden

**Comments:** 10 Pages. First cut - comments welcome

We indicate how pursuing a realist interpretation of Quantum Mechanics, starting from a simple and plausible physical principle and from established Quantum Mechanics, leads to a physical picture that is almost as counter-intuitive but which, among other things, would if true confirm that the quest for a deterministic model of Quantum Mechanics is doomed to failure.

**Category:** Quantum Physics

[127] **viXra:1212.0162 [pdf]**
*submitted on 2012-12-30 16:06:38*

**Authors:** Diego Lucio Rapoport

**Comments:** 131 Pages. Accepted for publication in Analecta Husserliana; copyrighted in IntellectArchive.com, Canada

We present HyperKleinBottle surfaces and their logics; the latter incorporate interrelations and hyper-contextualizations within a heterarchy of Otherness. We introduce the associated logo-physics as a basis for the unification of science and phenomenology, by surmounting the Cartesian Cut. Dualism is found to be a projection of the former logic, not an independent primeval ontoepistemology. We present the phenomenology of these logics with regard to the geometries and topologies of space and time; of thought and language; of semiosis and its geological, cosmological and astronomical signs linked to the Myth of Eternal Return; of perception and cognition; of the common ontopoiesis of life and the inanimate realms; and of biological shape, departing from embryological development and its unfolding as body-plans and their anatomy-physiology, and discuss its bearing on evolutionary theory, all of which we present as embodiments of this non-dual ontoepistemology. We contrast this paradigm with: 1) the dualism of the theory of autopoiesis and the purported interior/exterior divide as a general principle, which these logics subvert by self- and mutual reentrances of the heterarchies; 2) the dual membrane of cell biology; 3) evolutionary theory associated with the metabolic versus genomic dualism; 4) the mereological fallacies of the neurosciences and the hypercontextuality of metaphors and anthropomorphisms; 5) the dualisms of Newtonian physics, Einstein's relativity and quantum mechanics, which are found to be epistemic theories, and the assumption of non-contextualization in physics, chemistry and geology, which we show not to be the case; 6) the psychophysics of visual, aural and musical spaces; 7) the anatomy-physiology of the sensorium and the healing reconstitution of integrity; 8) the division of epistemology and ontology, of language and process, and the top-down and bottom-up systemic; and finally 9) the issue of design related to the turning-inside-out of a sphere (the ovum), yet transcending creationism. We present their surmountal through the ontoepistemologies of the Klein and Hyper-Klein Bottle surfaces, of hyper-contextuality and complexity. We discuss teleological causation and the design of processes/structures, in particular in paleogeology, physics, chemistry and biology, in terms of the latter ontoepistemologies and of the Golden Mean, generated by time waves and their guidance by the Fibonacci Algorithm. We apply this ontoepistemology to the interpretation of religious texts and discuss the relations with the evolution of science. We discuss the two-dimensionality of biology and the lifeworld, novelty, and the time operators.

**Category:** General Science and Philosophy

[126] **viXra:1212.0161 [pdf]**
*replaced on 2018-07-20 03:54:11*

**Authors:** Yongfeng Yang

**Comments:** 38 Pages.

Tide (the daily cycle of high and low water) has been known for thousands of years. The widely accepted theory for this movement is the attractive mechanism, that is to say, the Moon's gravitational attraction yields a pair of water bulges on the Earth, and an Earthly site will periodically pass through these bulges and thus undergo an alternation of high and low water as the Earth spins. However, an in-depth investigation of global tide-gauge data shows that these bulges of water do not exist. This absence suggests that tide cannot be explained by the attractive mechanism. Here we propose that the Earth's rotations about the centre of mass of the Earth-Moon system and around the Sun create centrifugal effects that stretch the Earth's body; the deformed solid Earth generates oscillations of the ocean basins as the Earth rotates, forming water movement between all the parts of a basin and hence the rise and fall of water level around the globe. A modelling test shows that the RMS (Root Mean Square) deviations of the amplitudes calculated against observation for the deep ocean (34 sites) and for shelf-coastal regions (42 sites) are approximately 6.16 and 10.59 cm, respectively.
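
The RMS deviation quoted in this abstract is a standard goodness-of-fit measure and can be computed as follows; the site amplitudes below are invented for illustration (the paper uses 34 deep-ocean and 42 shelf-coastal sites).

```python
# Root-mean-square deviation between modelled and observed tidal amplitudes.
# The example values are hypothetical, not the paper's data.
import numpy as np

def rms_deviation(modelled_cm, observed_cm):
    """RMS deviation (in cm) of modelled amplitudes against observations."""
    m = np.asarray(modelled_cm, dtype=float)
    o = np.asarray(observed_cm, dtype=float)
    return float(np.sqrt(np.mean((m - o) ** 2)))

# Hypothetical example: three sites with small model/observation mismatch.
print(rms_deviation([50.0, 32.0, 71.0], [46.0, 35.0, 66.0]))  # → ~4.08 cm
```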

**Category:** Classical Physics

[125] **viXra:1212.0160 [pdf]**
*submitted on 2012-12-29 19:31:00*

**Authors:** Bo He, Jin He

**Comments:** 7 Pages. 4 figures, 1 table. A critical test of Dr. Jin He's idea on Heaven

It may be true that mankind's hope is the identification of the living meaning of natural structures. However, scientists, including physicists, chemists, and biologists, have not found any evidence of this meaning. In the natural world there exists one kind of structure which is beyond the scope of human laboratory experiment: the structure of galaxies. Spiral galaxies are flat and disk-shaped. There are two types of spiral galaxies: the ones with some bar-shaped pattern are called barred spirals, and the ones without the pattern are called ordinary spirals. Longer-wavelength galaxy images (infrared, for example) show that ordinary spiral galaxies are basically an axisymmetric disk called an exponential disk. For a planar distribution of matter, Jin He and Bo He defined Darwin curves on the plane such that the ratio of the matter densities at the two sides of the curve is constant along the curve. Therefore, the arms of ordinary spiral galaxies are Darwin curves. Now an important question facing humans is: are the arms of barred spiral galaxies Darwin curves too? Fortunately, Dr. Jin He made a piece of galaxy anatomy graphic software (www.galaxyanatomy.com). With the software, people can not only simulate the stellar density distribution of barred spiral galaxies but also draw the Darwin curves of the simulated galaxy structure. Therefore, if Dr. Jin He's idea is true, then people all over the world will witness the evidence that the arms of barred spiral galaxies are identical to the corresponding Darwin curves. This paper shows partial evidence that the arms of galaxy NGC 3275 follow Darwin curves. Note: Ammar Sakaji and Ignazio Licata are the founder and editor-in-chief of the Electronic Journal of Theoretical Physics. Over fifty journals of astronomy and physics had rejected Dr. Jin He's core article on galaxy structure before 2010. It is Sakaji and Licata's journal that accepted the article.

**Category:** Astrophysics

[124] **viXra:1212.0159 [pdf]**
*submitted on 2012-12-29 08:34:36*

**Authors:** Dan Visser

**Comments:** 4 Pages.

This article summarizes theory-based evidence, related to practice, for the prediction that the universe did not originate from a Big Bang. Instead, cosmology could be based on a Double Torus Universe, as published in my papers in the viXra archive. In a few website articles I also express my vision on the revision of physics and cosmology within this framework. This paper in particular highlights how gravity could violate General Relativity by a (new) dark energy force in the new cosmology. This framework connects the Newton gravity force for tiny matter particles to a dark matter force, producing "+" and "–" mass-generation, both at scales of about 10^-22 meter. This can cause repulsive gravity in nature and can open up a new energy source for travelling through space by non-relativistic scaling.

**Category:** Mathematical Physics

[123] **viXra:1212.0158 [pdf]**
*submitted on 2012-12-29 08:49:50*

**Authors:** Hu Wang

**Comments:** 5 Pages.

Start from denying universal gravitation…

**Category:** Astrophysics

[122] **viXra:1212.0157 [pdf]**
*submitted on 2012-12-27 16:40:31*

**Authors:** Colin Naturman

**Comments:** 20 Pages.

Certain topological countability properties are generalized to interior algebras and basic results concerning these properties are investigated. The preservation of these properties under the formation of principal quotients, and under a new construction called a join of interior algebras, is investigated.

**Category:** Algebra

[121] **viXra:1212.0155 [pdf]**
*replaced on 2013-07-30 22:27:36*

**Authors:** Khalid M Ibrahim

**Comments:** 62 Pages.

In this paper, we introduce a method to modify the Dirichlet series over the Mobius function by progressively eliminating the numbers whose first prime factor is 2, then 3, then 5, and so on up to the prime p_r. The properties of the new series are analyzed as p_r approaches infinity, and its relationship to the function and the partial Euler product is established and then used to examine the validity of the Riemann Hypothesis.
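
The elimination step described here can be illustrated numerically. The sketch below is an assumption-laden stand-in, not the paper's exact construction: it restricts the truncated series sum mu(n)/n^s to n coprime to the first few primes, which (for real s > 1) divides the corresponding Euler factors out of 1/zeta(s).

```python
# Illustrative sketch: Dirichlet series over the Mobius function with the
# multiples of the first few primes progressively eliminated.

def mobius_sieve(n):
    """Linear sieve returning mu[0..n], the Mobius function values."""
    mu = [1] * (n + 1)
    primes = []
    is_comp = [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0   # p^2 divides i*p
                break
            mu[i * p] = -mu[i]
    return mu

def restricted_series(s, N, excluded_primes):
    """Truncated sum of mu(n)/n^s over n <= N coprime to the excluded primes."""
    mu = mobius_sieve(N)
    total = 0.0
    for n in range(1, N + 1):
        if any(n % p == 0 for p in excluded_primes):
            continue
        total += mu[n] / n**s
    return total

# Eliminating multiples of 2, 3, 5 from sum mu(n)/n^s = 1/zeta(s) divides
# out their Euler factors (1 - p^-s); here evaluated at s = 2.
approx = restricted_series(2.0, 100000, [2, 3, 5])
```

For s = 2 this converges to (6/pi^2) / ((1 - 1/4)(1 - 1/9)(1 - 1/25)), i.e. the partial Euler product over the remaining primes.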

**Category:** Number Theory

[120] **viXra:1212.0154 [pdf]**
*submitted on 2012-12-27 21:30:35*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 16 Pages.

Potential annual (January-December) and summertime (June-August) regional time trends and increasingly extreme and / or variable values of Palmer-based drought indices were investigated over the contiguous United States (US) between 1895 and the present. Although there has been no significant change in the annual or summertime Palmer Drought Severity Index (PDSI), Palmer Hydrological Drought Index (PHDI), or Palmer Modified Drought Index (PMDI) for the contiguous US over this time frame, there is clear evidence of decreasing drought conditions in the eastern US (northeast, east north central, central, and southeast climate zones) and increasing drought conditions in the west climate region (California and Nevada). No significant time trends were found in the annual or summertime PDSI, PHDI, and PMDI for the spring and winter wheat belts and the cotton belt. The corn and soybean belts have significant increasing trends in both the annual and summertime PDSI, PHDI, and PMDI, indicating a tendency towards reduced drought conditions over time. Clear trends exist toward increasingly extreme (dry or wet) annual PDSI, PHDI, and PMDI values in the northeast, east north central, central, northwest, and west climate regions. The northeast, northwest, and west climate zones display significant temporal trends for increasingly extreme PDSI, PHDI, and PMDI values during the summertime. Trends toward increasingly variable annual and summertime drought index values are also apparent in the northeast, southwest, northwest, and west climate zones.
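
The time-trend regressions described in this abstract can be sketched as a least-squares slope of an annual index against year. The series below is synthetic (the real PDSI/PHDI/PMDI records are not reproduced), with an imposed wetting trend, so this only shows the form of the analysis.

```python
# Minimal sketch of an annual time-trend regression on a drought index;
# the series is synthetic, with a known trend of +0.01 index units/year.
import numpy as np

def annual_trend(years, index_values):
    """Least-squares slope and intercept of an index against year."""
    slope, intercept = np.polyfit(years, index_values, 1)
    return slope, intercept

years = np.arange(1895, 2013)
rng = np.random.default_rng(1)
pdsi = 0.01 * (years - years[0]) + rng.normal(0.0, 0.5, len(years))

slope, _ = annual_trend(years, pdsi)  # positive slope => reduced drought
```

A positive slope in PDSI-type indices corresponds to a tendency toward wetter (reduced drought) conditions, matching the sign convention used in the abstract.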

**Category:** Climate Research

[119] **viXra:1212.0153 [pdf]**
*submitted on 2012-12-27 09:53:53*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 3 Pages. 4 references, 2 illustrations

It is empirically known that all cooling older stars that possess a global magnetic field have rings. This includes the Earth, regardless of whether the rings are observed with the naked eye or not. The reasoning is presented below.

**Category:** Geophysics

[118] **viXra:1212.0152 [pdf]**
*replaced on 2012-12-29 03:57:17*

**Authors:** Chun-Xuan Jiang

**Comments:** 8 Pages.

Using the stable number theory we prove that there are only 92 stable elements in nature and obtain the correct valence electron configurations of the elements. In the Mendeleev periodic table, the elements (1-18, 29-36 and 46) have correct valence electron configurations and the elements (19-28, 37-45 and 46-92) have wrong valence electron configurations. The elements have no periods. The appendix is the wrong Mendeleev periodic table of electron configurations of the elements.

**Category:** Chemistry

[117] **viXra:1212.0151 [pdf]**
*submitted on 2012-12-27 03:05:04*

**Authors:** Colin Naturman

**Comments:** 26 Pages. Results published in "Naturman, C.A., 1991, Interior Algebras and Topology, PhD Thesis, University of Cape Town Department of Mathematics" and this paper presented as "Research Reports, Department of Mathematics, The University of Cape Town, Volume 139".

Interior algebras are Boolean algebras enriched with an interior operator and corresponding closure operator. Alternative descriptions of interior algebras in terms of generalized topologies in Boolean algebras and neighbourhood functions on Boolean algebras are found. The topological concepts of convergence and accumulation of systems and nets are generalized to interior algebras. Relationships between different forms of convergence and accumulation are found.

**Category:** Algebra

[116] **viXra:1212.0148 [pdf]**
*submitted on 2012-12-26 04:38:39*

**Authors:** Colin Naturman

**Comments:** 17 Pages. Results published in "Naturman, C.A., 1991, Interior Algebras and Topology, PhD Thesis, University of Cape Town Department of Mathematics" and this paper presented as "Research Reports, Department of Mathematics, The University of Cape Town, Volume 144".

The intervals in an interior algebra A can be turned into interior algebras called interval algebras. Generalizations of homomorphisms, called topomorphisms, are introduced and certain quotient structures of A in the category of interior algebras and topomorphisms (the principal quotients) are shown to be (up to isomorphism) precisely the interval algebras of A.

**Category:** Algebra

[115] **viXra:1212.0147 [pdf]**
*submitted on 2012-12-25 15:49:22*

**Authors:** Zafar Turakulov

**Comments:** 9 Pages. no comments

The study of magnetic fields produced by steady currents is a full-valued physical theory which, like any other physical theory, employs a certain mathematics. This theory has two limiting cases in which the source of the field is confined to a surface or to a curve. It turns out that the mathematical methods to be used in these cases are completely different, and differ from that of the main part of this theory, so magnetostatics actually consists of three distinct theories. In this work, these three theories are discussed with special attention to the case of a current carried by a curve. In this case the source serves as a model of a thin wire carrying direct current, and therefore this theory can be termed the magnetostatics of thin wires. The only mathematical method used in this theory till now is the method of Green's functions. A critical analysis of this method, completed in this work, shows that its application to the equation for the vector potential of a given current density has no foundation and yields erroneous results.

**Category:** Mathematical Physics

[114] **viXra:1212.0146 [pdf]**
*submitted on 2012-12-25 11:02:30*

**Authors:** Peter Bruyns, Colin Naturman, Henry Rose

**Comments:** 17 Pages.

The amalgamation class Amal (N) of a lattice variety generated by a pentagon is considered. It is shown that Amal (N) is closed under reduced products and therefore is an elementary class determined by Horn sentences. The above result is based on a new characterization of Amal (N). The lattice varieties whose amalgamation classes contain Amal (N) as a subclass are considered.

**Category:** Algebra

[113] **viXra:1212.0145 [pdf]**
*replaced on 2015-09-26 11:10:45*

**Authors:** Felix M. Lev

**Comments:** 68 Pages. A version published in Physics of Particles and Nuclei has been considerably revised in view of our discussions with Anatoly Kamchatnov and his constructive criticism.

The postulate that coordinate and momentum representations are related to each other by the Fourier transform has been accepted from the beginning of quantum theory by analogy with classical electrodynamics. As a consequence, an inevitable effect in standard theory is the wave packet spreading (WPS) of the photon coordinate wave function in directions perpendicular to the photon momentum. This leads to several paradoxes. The most striking of them is that coordinate wave functions of photons emitted by stars have cosmic sizes and strong arguments indicate that this contradicts observational data. We argue that the above postulate is based neither on strong theoretical arguments nor on experimental data and propose a new consistent definition of the position operator. Then WPS in directions perpendicular to the particle momentum is absent and the paradoxes are resolved. Different components of the new position operator do not commute with each other and, as a consequence, there is no wave function in coordinate representation. Implications of the results for entanglement, quantum locality and the problem of time in quantum theory are discussed.

**Category:** Quantum Physics

[112] **viXra:1212.0144 [pdf]**
*replaced on 2013-01-02 05:15:04*

**Authors:** Giorgio Fabretti (Id.no.: FBRGRG51E18H501B)

**Comments:** 20 Pages. An updated definition of "Ethical Materialism", radically new after DNA discoveries, abstracted by the original philosophical essays of Giorgio Fabretti, plus 2 appendixes reasonably needed after the first comments to the 1st submission on Vixra

"What ETHICAL MATERIALISM Means in the 3rd Millennium" is the title of a synthesis of the original philosophy of Giorgio Fabretti, Doctor in Philosophy and Anthropology since 1973 at the University of Rome "La Sapienza", Italy, edited and translated by the editors of the foundation "Fondo Fabretti".
The originality of "Ethical Materialism" is the unifying conception of reality as a relational 'continuum' of ancient and new dichotomies such as logic versus matter, axioms vs. systems, ethical vs. realistic, subjectivity vs. objectivity, conceived through a sort of 'Copernican cognitive evolution' due to his scientific studies on quantum physics, cybernetics and DNA discoveries, and leading to a 'Weltanschauung', a vision of reality, as an 'updated map of the universe'.
In such a new Copernican map of reality, the language of 'matter' comes directly from logic: it is made of logic, it never loses its logic, and it is driven by the logic that operates in what humans of the 3rd millennium represent as stringent stochastic fractal logarithms; the 'stringent' factor is what human common language and self-consciousness translate and define as "the material properties of reality" (Man being the human factor within the logic operations).
With reality conceived here as a time-set logic system generating matter, it is possible to conceive its axiomatic foundations as the ethical premises and the instructional guides and glossary of logic operations, leading to an evolutionary design.
That is a balanced synthesis of ethical and material reality, of free fundamental choices (both in the 'Big Bang' universe and in the self-conscious 'Weltanschauung'), and of material (stochastic, non-linear, fractal) complexity.
The synthetic definition of "ETHICAL MATERIALISM" (as explained in the attached short 6 pages) was a needed reference to update its meaning in the panorama of the new conceptions in GENERAL SCIENCE, radically evolved after the new scientific discoveries in Physics, Computer Science, DNA Biology and Anthropology.
Since it appears worldwide that all the epistemological conceptions of the past millennium have suddenly become obsolete, headed straight into the science museums, this radically original systematic approach was a 'due attempt' to finally break the many conceptual 'ancient walls' between ethical and material reality, and between the human social sciences and the natural mathematical sciences.
It also includes an appendix 2 on the biographical origin of the new conception of Ethical Materialism related to DNA discoveries, and an appendix 3 on ETHICAL MATERIALISM as a return trip from empirical science to religious faith and vice versa, made easier by the recent scientific and technological discoveries in physics, genetics and computer science.

**Category:** General Science and Philosophy

[111] **viXra:1212.0143 [pdf]**
*submitted on 2012-12-24 11:17:22*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 4 Pages.

Sovereign wealth funds (SWFs) are receiving significant attention from nations with substantial and sustained foreign reserves derived via natural resource development and/or manufacturing based export-led economies as a means of achieving intergenerational equity, government savings, and stable currency exchange rates. Based on an analysis of currency variability for representative export-led nations with and without SWFs between 1999 and 2012, the case for SWF-based currency sterilization requires further investigation. Furthermore, several nations undergoing active policy debates regarding the possible implementation of SWFs may not have current account balances suitable for accruing all perceived SWF benefits.

**Category:** Economics and Finance

[110] **viXra:1212.0139 [pdf]**
*submitted on 2012-12-23 15:18:45*

**Authors:** Fernando Loup, Daniel Rocha

**Comments:** 10 Pages.

Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. The first of these solutions was discovered by the Mexican mathematician Miguel Alcubierre in 1994. The Alcubierre warp drive seems very attractive because it allows interstellar space travel at arbitrarily large speeds while avoiding the time dilation and mass increase paradoxes of Special Relativity. However, it suffers from a very serious drawback: interstellar space is not empty. It is filled with photons and dust particles, and a ship at superluminal speed would hit these obstacles in highly energetic collisions, disrupting the warp field and endangering the astronauts. This was pointed out by a great number of authors, among them Clark, Hiscock, Larson, McMonigal, Lewis, O'Byrne, Barcelo, Finazzi and Liberati.
In order to cover significant interstellar distances in reasonable amounts of time, a ship would need to attain 200 times the speed of light, but according to Clark, Hiscock and Larson the impact between the ship and a single photon of the Cosmic Microwave Background (CMB) would release an amount of energy comparable to the photosphere of a star like the Sun. And how many CMB photons are there per cubic centimetre of space? At first sight this serious problem seems to have no solution.
However, some years ago Harold White of NASA put forward an idea that may well solve it: according to him, the ship itself never surpasses the speed of light; instead the warp field generates a Lorentz boost, producing an apparent superluminal speed as seen by the astronauts on board the ship and on the Earth, while the warp bubble always remains below light speed and retains the ability to manoeuvre around these obstacles, avoiding the lethal collisions.
In this work we examine the feasibility of White's idea using clear mathematical arguments, and we arrive at the conclusion that Harold White is correct.

**Category:** Relativity and Cosmology

[109] **viXra:1212.0138 [pdf]**
*submitted on 2012-12-23 08:11:59*

**Authors:** Colin Naturman, Henry Rose

**Comments:** 26 Pages.

The elementary equivalence of two full relation algebras, partition lattices or function monoids is shown to be equivalent to the second-order equivalence of the cardinalities of the corresponding sets. This is shown to be related to the elementary equivalence of permutation groups and ordinals. Infinite function monoids are shown to be ultrauniversal.

**Category:** Algebra

[108] **viXra:1212.0137 [pdf]**
*submitted on 2012-12-23 13:38:03*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

For ten long years these two problems have remained unsolved despite the prize offered for them. Solving the Riemann hypothesis would bring dimensional analysis into mathematics and physics. Solving P vs NP would increase our knowledge of programming and provide a wide expansion of mathematical understanding and industrialization.

**Category:** Functions and Analysis

[107] **viXra:1212.0136 [pdf]**
*submitted on 2012-12-22 16:30:46*

**Authors:** Vaclav Kosar

**Comments:** 6 Pages. my name with special characters in latex form: V\' aclav Ko\v sa\v r

This article should be easy to understand for anybody and is meant to argue that the newly proposed kind of operating system, based on a wiki-like or graph-like structure, is:
1. more natural, and thus easier to learn;
2. more efficient on existing tasks in terms of human time spent;
3. able to handle new kinds of tasks.
A computer task is a data transformation. It is improbable that the current paper-like handling of data is the best way. I would like to show that current computers can provide a much more natural and useful way of handling all-purpose data. The main idea is that information should be stored in a structure as natural as possible, so that the user does not have to make an effort to transform it (express more, search naturally, write once then just reference, and so on).
I am not sure whether I can claim authorship of the following ideas, since one can never be sure whether an idea existed before, or what actually helped one to arrive at it. The only purpose of this paper is thus the pure desire to advance thought, by starting the discussion and construction of a crowd-sourced operating system based entirely on the idea of graph databases.
I cannot provide the reader with infinite detail and precision, so I leave some uncertainties to be cleared up by the reader, for pleasure.
My main inspirations for this more natural operating system were: graph databases, Wikipedia, the brain, Lisp, mind-mapping, the QED manifesto, CSS 3, and WikiOS.
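The wiki-like, graph-like storage idea described in this abstract can be illustrated with a minimal toy structure. This is my own hypothetical sketch, not the author's design; the names (`GraphStore`, `link`, `neighbors`) are assumptions made for illustration only.

```python
# A minimal, hypothetical sketch of "graph-like" data handling: every piece of
# information is a node, relations are labeled edges, and data is referenced
# rather than copied ("write once, then just reference").
class GraphStore:
    def __init__(self):
        self.nodes = {}   # node id -> stored value
        self.edges = {}   # node id -> {edge label: set of target ids}

    def add(self, node_id, value):
        self.nodes[node_id] = value
        self.edges.setdefault(node_id, {})

    def link(self, src, label, dst):
        # store a reference to existing data instead of duplicating it
        self.edges[src].setdefault(label, set()).add(dst)

    def neighbors(self, node_id, label):
        # follow labeled edges and return the referenced values
        return {self.nodes[t] for t in self.edges[node_id].get(label, set())}

store = GraphStore()
store.add("note1", "Graph databases")
store.add("note2", "Wikipedia")
store.link("note1", "related", "note2")
print(store.neighbors("note1", "related"))  # {'Wikipedia'}
```

The point of the sketch is only that navigation happens by following labeled references, not by copying data between paper-like documents.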

**Category:** Data Structures and Algorithms

[106] **viXra:1212.0135 [pdf]**
*submitted on 2012-12-22 14:39:45*

**Authors:** R. Wayte

**Comments:** 7 Pages.

The numerical value of G is derived in terms of electron sub-structure and the Coulombic field, using action principles. The theoretical values are within experimental error:
G = 6.6737846×10^-11 m^3 kg^-1 s^-2, and (e/m)^2/G = 4.1659308×10^42.
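The quoted ratio can be reproduced numerically if the quantities are taken in Gaussian/CGS units (statcoulombs, grams, cm^3 g^-1 s^-2); the check below is my own arithmetic with standard electron constants, not the paper's derivation.

```python
# Arithmetic check (mine) of the abstract's ratio (e/m)^2 / G in CGS units.
e_esu = 4.80320425e-10    # electron charge, statcoulomb
m_e   = 9.1093837015e-28  # electron mass, gram
G_cgs = 6.6737846e-8      # the paper's G value, converted to cm^3 g^-1 s^-2

ratio = (e_esu / m_e) ** 2 / G_cgs
print(ratio)  # ~4.166e42, close to the abstract's 4.1659308e42
```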

**Category:** Relativity and Cosmology

[105] **viXra:1212.0134 [pdf]**
*submitted on 2012-12-22 08:31:48*

**Authors:** Zafar Turakulov

**Comments:** 6 Pages. Rejected from the Journal of Mathematical Physics.

Maxwell's equations for electromagnetic waves propagating in dispersive media are studied as they stand, without the commonplace substitution of a scalar function for the electromagnetic field. A method of separation of variables for the original system of equations is proposed. It is shown that in the case of planar symmetry the variables separate in Cartesian and cylindrical coordinate systems, and the Maxwell equations reduce to a one-dimensional Schrödinger equation. Complete solutions are obtained for waves in a medium with electric permittivity and magnetic permeability given by ε = e^(-κz), µ = c^(-2) e^(-λz).
Keywords: Maxwell equations, dispersive media, complete solutions
PACS numbers: 41.20.Jb, 42.25.Bs

**Category:** Condensed Matter

[104] **viXra:1212.0133 [pdf]**
*submitted on 2012-12-21 12:55:50*

**Authors:** Colin Naturman, Henry Rose

**Comments:** 25 Pages. Published in Journal of the Korean Mathematical Society, 30(1), 1993, pp. 1-23. Content is free for download, but the PDFs distributed by the publisher are missing the diagrams and/or abstract and errata.

An interior algebra is a Boolean algebra enriched with an interior operator. Congruences on interior algebras are investigated. Simple, subdirectly irreducible, finitely subdirectly irreducible and directly indecomposable interior algebras are characterized and the classes of these are shown to be finitely axiomatizable elementary classes. Quotients by open elements, dissectable and openly decomposable interior algebras are investigated. Basic results concerning interior algebras and their connection to topology are discussed.

**Category:** Algebra

[103] **viXra:1212.0132 [pdf]**
*submitted on 2012-12-21 21:36:25*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 1 picture, 7 references

Redshift as a measurement of cosmological distance has been falsified by Mr. Halton Arp’s discovery.

**Category:** Astrophysics

[102] **viXra:1212.0131 [pdf]**
*submitted on 2012-12-21 03:39:27*

**Authors:** Colin Naturman, Henry Rose

**Comments:** 6 Pages. Published in Ordered Set and Lattices, 11, 1995 pp. 39-44, publisher does not provide offprints

An interior algebra is a Boolean algebra enriched with an interior operator. Given an interior algebra there is a natural way of forming interior algebras from its principal ideals. Basic results concerning these ideal algebras, Stone spaces of ideal algebras and preservation properties of ideal algebras are investigated.

**Category:** Algebra

[101] **viXra:1212.0130 [pdf]**
*submitted on 2012-12-21 10:06:28*

**Authors:** Alan M. Kadin

**Comments:** 12 Pages. Submitted to Foundational Questions Institute Essay Contest, June 25, 2012, http://fqxi.org/community/forum/topic/1296

A retrospective is presented of the rise and fall of Wave-Particle Duality as the central doctrine of quantum mechanics, from the viewpoint of the 2024 centennial of the matter wave. This is contrasted with the recent New Quantum Paradigm, in which there are no point particles or entangled probability waves, and classical trajectories follow directly from coherent quantum dynamics.

**Category:** Quantum Physics

[100] **viXra:1212.0129 [pdf]**
*replaced on 2012-12-21 17:02:16*

**Authors:** Seshavatharam U.V.S, Lakshminarayana S

**Comments:** 2 Pages. Look forward to receive your kind and valuable comments

During cosmic evolution, the magnitude of Planck's constant increases with increasing cosmic time. This may be the root cause of the observed cosmic redshifts: the observed redshift is directly proportional to the age difference between our galaxy and the distant galaxy. Hubble's linear law can be derived from these new ideas.

**Category:** Relativity and Cosmology

[99] **viXra:1212.0128 [pdf]**
*submitted on 2012-12-21 00:50:41*

**Authors:** Steven Kenneth Kauffmann

**Comments:** 5 Pages.

It has recently been shown that self-gravitation reduces static spherically-symmetric cumulative energy distributions below the value of their radii times the "Planck force", i.e. the fourth power of c divided by G. In this article the quantitative treatment of self-gravitation is extended to any static energy density that is nonnegative, smooth and globally integrable. The resulting dimensionless local gravitational energy-reduction factor (namely the inverse of the local gravitational time-dilation factor) is shown to satisfy the zero-momentum nonrelativistic Lippmann-Schwinger quantum scattering equation for a repulsive potential which is proportional (with a known coefficient) to that static energy density. Standard perturbative Born-type iteration of Lippmann-Schwinger equations can diverge for sufficiently strong potentials, which in the gravitational case correspond to sufficiently large static energy densities. We have been able, however, to devise an alternate, completely nonperturbative iteration method for Lippmann-Schwinger equations in coordinate representation. Every one of this nonperturbative method's successive approximations to the local gravitational energy-reduction factor turns out to be positive and less than or equal to unity. In consequence, the self-gravitationally corrected static energy contained in any sphere is bounded by that sphere's diameter times the "Planck force".
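For orientation, the "Planck force" c^4/G invoked in the bound can be evaluated numerically; this is my own arithmetic with standard SI values, added only to give the scale of the bound.

```python
# Numeric evaluation (mine) of the "Planck force" c^4 / G in SI units.
c = 2.99792458e8   # speed of light, m/s
G = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2

planck_force = c**4 / G
print(planck_force)  # ~1.21e44 N
```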

**Category:** Mathematical Physics

[98] **viXra:1212.0127 [pdf]**
*submitted on 2012-12-20 14:29:35*

**Authors:** Colin Naturman, Henry Rose

**Comments:** 10 Pages.

The concept of ultra-universal algebras in varieties is generalized to models of first-order theories. Characterizations of theories which have ultra-universal models are found, and general examples of ultra-universal models are investigated. In particular, we show that a theory has an ultra-universal model if and only if it is consistent and its class of models satisfies the joint embedding property.

**Category:** Set Theory and Logic

[97] **viXra:1212.0126 [pdf]**
*replaced on 2018-03-25 20:32:45*

**Authors:** Antony Van der Mude

**Comments:** 75 Pages.

Philosophers have long pondered the Problem of Universals. Socrates and Plato hypothesized that Universals exist independent of the real world in a universe of their own. The Doctrine of the Forms was criticized by Aristotle, who stated that the Universals do not exist apart from things - a theory known as Hylomorphism. This paper postulates that Measurement in Quantum Mechanics is the process that gives rise to the instantiation of Universals as Properties, a process we refer to as Hylomorphic Functions. This combines substance metaphysics and process metaphysics into a metaphysical realism that identifies the instantiation of Universals as causally active processes and recognizes the dualism of both substance and information. Measurements of fundamental properties of matter are the Atomic Universals of metaphysics, which combine to form the whole range of Universals. We look at this hypothesis in relation to two different interpretations of Quantum Mechanics: the Copenhagen Interpretation, a version of Platonic Realism based on wave function collapse, and the Pilot Wave Theory of Bohm and de Broglie, where particle-particle interactions lead to an Aristotelian metaphysics. This view of Universals explains the distinction between pure information and the medium that transmits it and establishes the arrow of time. It also provides a distinction between Universals and Tropes based on whether a given Property is a physical process or is based on the qualia of an individual organism. Since the Hylomorphic Functions are causally active, it is possible to suggest experimental tests that can verify this viewpoint of metaphysics.

**Category:** History and Philosophy of Physics

[96] **viXra:1212.0125 [pdf]**
*submitted on 2012-12-20 10:46:44*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 12 references

It is hypothesized that the mechanism for allowing the universe to operate with zero resistance (indefinitely/forever) is superconductivity.

**Category:** Relativity and Cosmology

[95] **viXra:1212.0124 [pdf]**
*replaced on 2012-12-21 04:13:24*

**Authors:** Deepak Ponvel Chermakani

**Comments:** 6 Pages, 6 Theorems, 7 Figures. A small correction: in Theorem-1 the correct term is "NP-Hard", not "NP-Complete".

We convert, within polynomial time and sequential processing, an NP-Complete problem into a real-variable problem of minimizing a sum of rational linear functions constrained by an asymptotic linear program. The coefficients and constants in the real-variable problem are 0, 1, -1, K, or -K, where K is a time parameter that tends to positive infinity. The number of variables, constraints, and rational linear functions in the objective of the real-variable problem is bounded by a polynomial function of the size of the NP-Complete problem. The NP-Complete problem has a feasible solution if and only if the real-variable problem has a feasible optimal objective equal to zero. We thus show the strong NP-hardness of this real-variable optimization problem.

**Category:** Algebra

[94] **viXra:1212.0123 [pdf]**
*replaced on 2018-05-09 11:15:28*

**Authors:** Kasibhatla Surya Narayana

**Comments:** 50 Pages. Published by IJSDR - International Journal of Scientific Development Research

This theory is an attempt to describe universal phenomena such as space, time, matter and energy as an inter-relationship bound by a newly discovered force, named the universal force. The universal force is shown to be the force behind gravitation, electricity, magnetism, and the strong and weak nuclear forces. I believe that any other force, hitherto undiscovered, can also be explained in terms of this universal force.
Liberal use of wave-particle duality, relativity and quantum concepts is made to achieve a harmonious and comprehensive synthesis of all the existing beliefs in physics into a new theory, with some new concepts added here and there. While adding new concepts, enormous care has been taken to ensure that the existing beliefs are not contradicted. Moreover, the new concepts are shown to be correct theoretically by deriving constants such as G, σ, etc.

**Category:** Quantum Gravity and String Theory

[93] **viXra:1212.0122 [pdf]**
*submitted on 2012-12-19 13:55:38*

**Authors:** Dhananjay P. Mehendale

**Comments:** 7 Pages

We discuss a new, simple method of solving linear programming (LP) problems, based on duality theory and the nonnegative least-squares method. The efficiency of the method depends on the success of further research in finding an efficient way to obtain a nonnegative solution of a system of linear equations; the suggested method thus points to the need to devise better methods, where possible, for finding nonnegative solutions of linear systems. Indeed, it is shown here that the linear programming problem reduces to finding a nonnegative solution, if and when it exists, of a certain system of linear equations, consisting of 1) the equation representing the duality condition; 2) the equations representing the constraints imposed by the given primal problem; and 3) the equations representing the constraints imposed by its corresponding dual problem. In this paper we use the well-known method of nonnegative least squares (NNLS) [1] as a primary means of finding a nonnegative solution of a system of linear equations. Two simple MATLAB codes testing the method on some simple problems are provided at the end.
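The reduction described in this abstract (duality condition plus primal and dual constraints, solved as one nonnegative linear system) can be sketched in a few lines. The following is my own illustrative Python/SciPy version of the idea on a toy LP, not the paper's MATLAB code; the toy problem and variable names are assumptions.

```python
# Sketch (mine): solve a tiny LP by stacking the strong-duality equation,
# the primal constraints, and the dual constraints (with slacks) into one
# linear system, then asking NNLS for a nonnegative solution.
import numpy as np
from scipy.optimize import nnls

# Primal: min c^T x  subject to  A x = b, x >= 0.
# Toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x = (1, 0)).
# Unknown vector z = [x1, x2, s1, s2, y+, y-], all >= 0, with the free dual
# variable written as y = y+ - y- and dual slacks s so that A^T y + s = c.
M = np.array([
    [1.0, 1.0, 0.0, 0.0,  0.0,  0.0],  # primal:   x1 + x2 = 1
    [0.0, 0.0, 1.0, 0.0,  1.0, -1.0],  # dual:     y + s1  = 1
    [0.0, 0.0, 0.0, 1.0,  1.0, -1.0],  # dual:     y + s2  = 2
    [1.0, 2.0, 0.0, 0.0, -1.0,  1.0],  # duality:  c^T x - y = 0
])
rhs = np.array([1.0, 1.0, 2.0, 0.0])

z, residual = nnls(M, rhs)   # nonnegative least-squares solve
x = z[:2]                    # recovered primal solution
print(x, residual)           # the LP optimum is x = (1, 0), residual ~ 0
```

A zero residual certifies that the stacked system has a nonnegative solution, which by duality is exactly an optimal primal-dual pair.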

**Category:** General Mathematics

[92] **viXra:1212.0121 [pdf]**
*submitted on 2012-12-19 10:32:13*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 5 references

An alternative cause for the Huronian Glaciation and the subsequent extinction is hypothesized via stellar metamorphosis.

**Category:** Geophysics

[91] **viXra:1212.0120 [pdf]**
*submitted on 2012-12-19 11:43:51*

**Authors:** James J Keene

**Comments:** 4 Pages.

Quantitatively large effects of lunar surface temperature on the apparent gravitational force measured by lunar laser ranging (LLR) and on lunar perigee may challenge widely accepted theories of gravity. LLR data grouped by days from full moon show the moon to be about 5 percent closer to Earth at full moon than 8 days before or after full moon. In a second, related result, moon perigees were least distant on days closer to full moon. Moon phase was used as a proxy independent variable for lunar surface temperature. The results support the prediction of binary mechanics that gravitational force increases with an object's surface temperature.

**Category:** Relativity and Cosmology

[90] **viXra:1212.0119 [pdf]**
*replaced on 2012-12-21 10:51:48*

**Authors:** William Dungan Jr.

**Comments:** 16 Pages. Bohr was wrong and Einstein was right, “God doesn’t play dice with the world.”

Peer review is no panacea; every generation must reevaluate empirical evidence in the context of its own time. For the past century, quantum mechanics has defied common sense. Consistent in every detail with holographic virtual images, Young’s double-slit experiment generates diffraction patterns with coherent light, even one photon at a time, while incoherent light does not. Hence, interference pattern analogies are flagrant misrepresentations of the facts. Bohr’s quantum leap scenario violates the second law of thermodynamics and contradicts phase transition temperatures. Common sense dictates that the stability of molecular bonds contradicts probabilistic, leaping electrons. Bell’s inequality, a specious proof of quantum mechanics, derives from a false premise whose revelation by Joy Christian went widely unnoticed. Misinterpreting the facts, the blind-leading-the-blind faith of quantum mechanics twisted inductive speculation into a Gordian knot of mass delusions. As long as peer review science chooses to legitimize theoretical speculation, the demarcation between science and pseudoscience will remain indefensible.

**Category:** Quantum Physics

[89] **viXra:1212.0118 [pdf]**
*submitted on 2012-12-19 08:00:00*

**Authors:** Hu Wang

**Comments:** 18 Pages.

Studies on nautiloids, coral fossils, the rotation of the Earth, and Earth-Moon distance variation may lead to the conclusion that Kepler's constant is decreasing in the system with the Earth as the central body.
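As a point of reference for this claim, the present-day value of Kepler's constant a^3/T^2 for the Earth-Moon system can be computed in two independent ways; the sketch below is my own arithmetic with standard present-day values, not the paper's fossil data.

```python
# Arithmetic check (mine): Kepler's "constant" for the Earth-Moon system today,
# compared with the dynamical value G(M_earth + M_moon) / (4 pi^2).
import math

a = 3.84399e8                             # mean Earth-Moon distance, m
T = 27.321661 * 86400.0                   # sidereal month, s
GM_total = 3.986004e14 * (1.0 + 0.0123)   # G*M_earth scaled by (1 + M_moon/M_earth)

k_observed = a**3 / T**2                  # from the Moon's present orbit
k_theory = GM_total / (4.0 * math.pi**2)  # from Newtonian two-body dynamics
print(k_observed, k_theory)               # both ~1.02e13 m^3/s^2
```

The two values agree to a few tenths of a percent (the small residual reflects solar perturbation of the lunar orbit and rounding).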

**Category:** Astrophysics

[88] **viXra:1212.0115 [pdf]**
*submitted on 2012-12-18 22:15:52*

**Authors:** Richard A Peters, Walter A Johnson

**Comments:** 15 Pages.

The fundamental forces of nature are mediated by the exchange of particles: photons mediate the electromagnetic force, gluons the strong force, and the W and Z particles the weak force. It is shown that the duration of the exchange of a force or messenger particle between two interacting particles is a function of the velocity of the two particles through space, and also of their orientation relative to their common velocity. The greater the velocity of the two particles, the longer the particle exchange between them takes. The interaction between two particles is thus slowed by their motion through space. This slowing of the interaction with velocity underlies the fundamental process occurring in any physical entity (proton, nucleus, atom, or macroscopic object). It is the essence of time dilation. The dependence of time dilation on the orientation of two interacting particles relative to their common velocity is an important finding of this study and may appear to contradict the 'known' dependence of time dilation on velocity alone. This orientation dependence is overcome when a large number of particles are aggregated and their orientations relative to the velocity of the aggregate body are randomized. Alternatively, the rapidly changing orientation of two particles in the turbulence of space will itself randomize their orientation and cause their time dilation to average out. We can look at time dilation in two ways: 1) time actually slows for the fast-moving entity, or 2) the basic process that governs the entity takes longer as the entity moves through space, but time itself does not slow down. This may be more a philosophical question than a scientific one; the momentum of current thought is clearly with time slowing down for the moving entity.
The conclusion of this paper is that time dilation is a slowing of the fundamental process of an entity when the entity is moving through space, not a slowing of time itself.

**Category:** Relativity and Cosmology

[87] **viXra:1212.0114 [pdf]**
*submitted on 2012-12-18 12:48:56*

**Authors:** Martin Saturka

**Comments:** cc-by-sa; 64 pages; 60 figures

The geometrical structures presented in this work naturally provide the standard model and quantum gravity, explaining the important features of quantum physics and cosmology.
Even though this endeavor started by exploiting the intriguing spin of quantum mechanics, the geometry arising from it has led directly to a comprehensive unified theory.

Together with that, a key distinction between two notions of time, the apparent time and the manifold time, is introduced.
It is fundamental for understanding both relativistic dynamics and quantum phenomena.

The geometry of extended tangloids is developed step by step, so that it is relatively easy to understand and apprehend.

A notable point is that a way to create exotic, i.e. anti-gravitating, matter was found along the way.
Some applications of this exotic matter are discussed as well.

**Category:** Quantum Gravity and String Theory

[86] **viXra:1212.0113 [pdf]**
*replaced on 2013-01-30 08:41:43*

**Authors:** R.A.Isasi

**Comments:** 11 pages, 3 tables and 1 figure

In this article we analyze some unspecified details which are significant in certain experiments related to the Casimir effect. At the "point of closest approach", where the Casimir force equals the Coulombic force, we can calculate the static energy density. Identical phenomena occur in the cosmological H and HeI Rydberg atoms. In spite of the marked contrast between the two scales, by extrapolation, using a dynamical expression for these microscopic magnitudes, we can obtain the cosmological constant. These findings are fascinating because, starting from a specific microscopic empty cavity, we can equate its expansive energy density with the cosmological energy density.

**Category:** Relativity and Cosmology

[85] **viXra:1212.0110 [pdf]**
*submitted on 2012-12-17 07:46:38*

**Authors:** Andrej Rehak

**Comments:** 8 Pages.

A simple mathematical demonstration solves one of the problems of lunar motion, the regression of the lunar nodes, observed more than 2000 years ago. Although their motions draw similar traces (such as the retrograde motion in Ptolemy's and Copernicus's models of the universe), we show that celestial bodies do not rotate around their common centre of mass but, by a unified law, one around the other. Due to the rigidity of the principle, and despite calculation with rounded values, the discrepancy between the predicted and observed cycle of regression is at a level of only 3.4×10^-5 (due to lunar trajectory perturbations, the observed values also vary cyclically on a small scale). Besides the constant π and the terrestrial measure of time, the only variables used in the solution of this dual orbiting system problem are the radii and surface accelerations of the observed bodies.

**Category:** Astrophysics

[84] **viXra:1212.0109 [pdf]**
*replaced on 2018-04-14 16:53:02*

**Authors:** Matthias Mueller

**Comments:** 43 Pages.

Four **different** polynomial 3-SAT algorithms, named A, B, C and D, are provided:

v1: "Algorithm A": | Obsolete, please ignore. |

v2: "Algorithm B": | Obsolete. Published in December 2013. Never failed for millions of test runs. Proof of correctness needs to be improved. Mr. M. Prunescu's paper 'About a surprizing computer program of Matthias Mueller' is about this Algorithm B. |

v3: "Algorithm C": | Obsolete, please ignore. |

v4: "Algorithm D‑1.0": | Newest and best algorithm. Never failed in tests. Related paper v4 contains a detailed description of this polynomial 3-SAT solving algorithm, an extensive proof of correctness and a link where you can download my compiled demo C++ implementation (with source code) for Windows and Linux, an alternative polynomial solver version, and an additional tool program. |

v5: "Algorithm D‑1.1": | Very same algorithm as v4, but better explained and with a re-written, completely new part of the proof of correctness. |

v6: "Algorithm D‑1.1": | Some helpful improvements (compared to v5). |

v7: "Algorithm D‑1.2": | Paper from May 22nd, 2016. |

v8: "Algorithm D‑1.3": | Parts of the proof of correctness have been replaced by a completely re-written, more detailed variant. |

v9: "Algorithm D‑1.3": | Please read this version for a figurative, linguistic description of the algorithm. Another part of the proof of correctness has been made more detailed. |

vA: "Algorithm DM‑1.0": | Completely re-written document. Still describes algorithm D, but as short as possible and in mathematical notation. |

vB: "Algorithm DM‑2.0": | Please read this version for a precise, mathematical description of the algorithm. Best paper of all. Heavily revised and extended vA document. A much more precise notation is used (compared to vA) and most formulas are now comprehensively commented and explained. Might be easier to understand for learned readers, while others prefer v9 (D-1.3). |

[83] **viXra:1212.0108 [pdf]**
*submitted on 2012-12-17 08:04:56*

**Authors:** V.A. Etkin

**Comments:** 9 Pages.

It is shown that the creation of machines demonstrating the production of excess power at the expense of force-field energy does not contradict the laws of physics. The specifics of such devices are analyzed, and the foundations of their theory are set out.

**Category:** Classical Physics

[82] **viXra:1212.0107 [pdf]**
*submitted on 2012-12-16 23:38:26*

**Authors:** Golden Gadzirayi Nyambuya

**Comments:** 5 Pages.

Exactly 100 years ago, the German scientist Alfred Lothar Wegener sailed against the prevailing wisdom of his day when he posited that not only have the Earth's continental plates receded from each other over the course of the Earth's history, but that they are currently in motion relative to one another. To explain this, Wegener set forth the hypothesis that the Earth must be expanding as a whole. Wegener's inability to provide an adequate explanation of the forces and energy source responsible for continental drift, together with the prevailing belief that the Earth was a rigid solid body, resulted in the acrimonious dismissal of his theories. Today, that the continents are receding from each other is no longer a point of debate but a sacrosanct pillar of modern geology and geophysics. What is debatable is the energy source driving this phenomenon. The expanding Earth hypothesis is currently not accepted on a general consensus level, and its opponents mercilessly dismiss it as a pseudo- or fringe science. Be that as it may, we show herein that, from the well-accepted law of conservation of spin angular momentum and from Stephenson and Morrison's (1995) result that over the last 2700 years or so the length of the Earth's day has changed by about +17.00 microseconds per year, it follows that the Earth must be expanding radially at a paltry rate of about +0.60 mm/yr. This simple fact automatically moves the expanding Earth hypothesis from the realm of pseudo- or fringe science to that of real and ponderable science.
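The angular-momentum argument in this abstract can be checked with one line of arithmetic. The following is my own back-of-envelope sketch, assuming a rigid Earth whose moment of inertia scales as R^2 at fixed mass, so that conservation of L = Iω gives dR/R = (1/2) dT/T.

```python
# Back-of-envelope check (mine) of the abstract's expansion rate:
# L = I*omega with I ~ R^2 (fixed mass) implies dR = R * dT / (2*T).
R_earth = 6.371e6      # mean Earth radius, m
T_day   = 86400.0      # length of day, s
dT_per_year = 17.0e-6  # day lengthening, s per year (Stephenson and Morrison 1995)

dR_per_year = R_earth * dT_per_year / (2.0 * T_day)
print(dR_per_year * 1e3)  # ~0.63 mm/yr, close to the abstract's +0.60 mm/yr
```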

**Category:** Geophysics

[81] **viXra:1212.0106 [pdf]**
*submitted on 2012-12-16 17:19:41*

**Authors:** James J Keene

**Comments:** 22 Pages.

Binary mechanics (BM) uses a pair of relativistic Dirac equations of opposite handedness to guide the quantization of space and time into binary bit loci in a cubic lattice restricted to zero or one states. The exact time development of this BM state vector is determined by four bit operations (unconditional, scalar, vector and strong) applied sequentially, one in each quantized time unit.

**Category:** Quantum Physics

[80] **viXra:1212.0105 [pdf]**
*replaced on 2015-12-08 12:08:44*

**Authors:** Sylwester Kornowski

**Comments:** 5 Pages.

Within the Scale-Symmetric Theory we calculated the running coupling for the nuclear strong interactions using three different methods, which lead to very close theoretical results. At very high energy an asymptote appears at 0.1139. When we add to the strong running coupling calculated within the Scale-Symmetric Theory the correction that follows from the weak interactions associated with parton-shower production, we obtain theoretical results consistent with experimental data for the "strong" interactions. The Scale-Symmetric Theory shows that the strong running coupling originates from the law of conservation of spin: this law forces the absolute mass of the virtual pions responsible for the strong interactions to decrease with increasing collision energy of the baryons. On the other hand, the asymptotic freedom described within QCD is consistent with experimental data only because of free parameters.

**Category:** High Energy Particle Physics

[79] **viXra:1212.0104 [pdf]**
*replaced on 2015-12-07 13:13:01*

**Authors:** Sylwester Kornowski

**Comments:** 7 Pages.

Within the Scale-Symmetric Theory we described the mass spectrum of the composite Higgs boson with a mass of 125.00 GeV. Due to the quadrupole symmetry characteristic of the weak interactions and to the interactions of the Higgs-boson pairs with the dominant 3.30 GeV gluon balls, there appear two masses, 126.65 ± 0.73 GeV and 123.35 ± 0.73 GeV. Due to the confinement characteristic of the weak interactions, pairs of Higgs bosons arise. In their decays appear groups of photons composed of two photon pairs, i.e. of four photons, or quadrupoles of leptons. The decays of the Higgs-boson pairs into 4 photons lead to a mean central Higgs-boson mass of 126.65 GeV, whereas the decays into the quadrupoles of leptons lead to a mean central mass of 123.35 GeV or 125.00 GeV. The reformulated Theory of Branching Ratios leads to the conclusion that the relative signal strength of the decays into two photons to the decays into two Z bosons should be approximately 1.87 times higher than predicted within the Standard Model. Since the Higgs bosons are paired, the relative signal strength in relation to the Standard Model is 1.732 for the decays into two photons, whereas for the ZZ channel it is 0.926.

**Category:** High Energy Particle Physics

[78] **viXra:1212.0103 [pdf]**
*replaced on 2014-05-02 15:10:13*

**Authors:** Branko Zivlak

**Comments:** 3 Pages. 13 formulas

The aim of this article is to ask: is it possible to express the fine structure constant only through 2 dimensionless physical constants?

**Category:** Classical Physics

[77] **viXra:1212.0102 [pdf]**
*replaced on 2013-01-12 07:43:14*

**Authors:** Peter Hickman

**Comments:** 15 Pages.

In this paper solutions to the nature of Dark matter, Dark energy, Matter, Inflation and the Matter-Antimatter asymmetry are proposed. The real spin representations of a 7d complex space are assumed to be the source of a chiral gauge group SU(8)xU(1) and a spin 2 quaternion field. The integral of the probability density of the spin 2 field results in a lower bound for r, and consequently the Schwarzschild physical singularity is non-existent. Fermion mass is bounded by a lower and an upper limit. The cosmology of the universe is cyclic, with no past or future singularities, and the cosmological density ratios are in agreement with the WMAP 7-year data.

**Category:** Quantum Physics

[76] **viXra:1212.0101 [pdf]**
*submitted on 2012-12-16 11:13:29*

**Authors:** Ervin Goldfain

**Comments:** 3 Pages.

We give a concise but incomplete list of reasons why these theories are likely to point in the wrong direction. For the sake of clarity and due to the large volume of research on these topics, no references are included. The interested reader can look for key words describing these references using Google Scholar or similar search engines.

**Category:** High Energy Particle Physics

[75] **viXra:1212.0100 [pdf]**
*submitted on 2012-12-16 11:54:32*

**Authors:** Alexander Unzicker, Sheilla Jones

**Comments:** 2 Pages.

2012 seems set to become a year to be celebrated in the high energy physics community. ``As a layman, I would say we have it!'' said CERN director general Rolf-Dieter Heuer at the press conference on July 4, 2012, announcing the discovery of a footprint of `something' in the LHC proton collision data. Evidently, such a short statement was necessary because the experts' account of the discovery is a long story to tell. As physicists, we are seeking something in between. We would be curious to know whether there are discussions in the community along the lines of our questions; in any case, they do not seem to have reached the outside so far. Therefore, we would like to invite a broader communication between the particle physics community and the rest of physics.

**Category:** High Energy Particle Physics

[74] **viXra:1212.0099 [pdf]**
*submitted on 2012-12-16 13:56:18*

**Authors:** Jose D Perezgonzalez

**Comments:** 3 pages, Journal of Knowledge Advancement & Integration (ISSN 1177-4576), Wiki of Science, Creative Commons Attribution-ShareAlike 3.0 License

Perezgonzalez assessed the nutritional balance of potato crisps in 2012, as part of research on the nutritional composition of snacks in New Zealand. Potato crisps had, on average, a nutritional unbalance of BNI 44.98-fb, being particularly biased towards a deficiency in fiber. They were adequate in carbohydrate and sugar, high in fat, saturated fat and sodium, and low in protein.

**Category:** General Science and Philosophy

[73] **viXra:1212.0097 [pdf]**
*submitted on 2012-12-16 14:44:57*

**Authors:** Sierra Rayne

**Comments:** 5 Pages.

In an earlier study (Kelly et al., PNAS, 2009, 106, 22346-22351), spatial patterns for the concentrations of particulate matter, particulate polycyclic aromatic compounds (PAC), and dissolved PAC in the snowpack around the Syncrude and Suncor upgrader facilities near the oil sands development at Fort McMurray, Alberta, Canada were determined. A reassessment of the datasets employed in that work yields deposition rates that differ significantly (by up to an order of magnitude) from those reported, and reveals substantial sensitivity of the deposition rate estimates to the choice among a range of equally valid regression types. A high degree of uncertainty remains with regard to the quantities of particulate matter and PAC being deposited in the Athabasca River watershed from oil sands related activities.

**Category:** Chemistry
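The abstract above reports that deposition rate estimates depend strongly on the regression type chosen. A minimal illustration of that sensitivity, on synthetic data (not the Kelly et al. dataset), is to compare ordinary least squares with a regression forced through the origin; the fitted slopes can differ wildly, even in sign:

```python
# Illustrative sketch on synthetic data: slope estimates from two equally
# "valid" regression choices can diverge dramatically.

xs = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]      # e.g. distance from source, km
ys = [50.0, 30.0, 18.0, 10.0, 6.0, 5.0]    # e.g. deposition, arbitrary units

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Ordinary least squares with intercept: slope = Sxy / Sxx
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
slope_ols = sxy / sxx

# Regression forced through the origin: slope = sum(x*y) / sum(x^2)
slope_origin = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(slope_ols, slope_origin)  # the two slopes even differ in sign here
```

The point of the sketch is only that the modelling choice, not the data, can dominate the estimate, which is the kind of sensitivity the reassessment describes.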

[72] **viXra:1212.0096 [pdf]**
*replaced on 2012-12-21 23:13:25*

**Authors:** Rodney Bartlett

**Comments:** 18 Pages. Intriguing! I support the Big Bang but have arrived at calculations the Steady State theory proposed. Could reality be a "big bang-steady state" hybrid?

How the "Pioneer anomaly" refines Einstein's gravitation / space-time; and how equations he developed in 1919 show that the space warping in General Relativity extends to subatomic particles (with related topics: deflection of starlight, Optical Effect, electromagnetism, intergalactic and time travel, teleportation, the nuclear strong and weak forces, Theory of Everything or Unified Field Theory, quantum entanglement, retrocausality, dark matter, dark energy, Mobius strip, Klein bottle, Poincare conjecture, planet Mercury, precession, General Relativity, gravitation, dark flow, infinity, hidden variables, virtual particles, binary digits, wormholes, cosmic strings, quantum fluctuation, tides, origin of life, science-based eternal life, 5th-dimensional hyperspace [I think this can be called “prespacetime” which is a non-temporal and non-spatial domain theorized to be the foundation of spacetime], the Law of Conservation of Energy, and how data from both the Big Bang and Steady State theories is essential).

**Category:** General Science and Philosophy

[71] **viXra:1212.0095 [pdf]**
*submitted on 2012-12-14 20:19:32*

**Authors:** U.V.S. Seshavatharam, S. Lakshminarayana

**Comments:** 2 Pages.

Within the expanding cosmic Hubble volume, the Hubble length can be considered as the gravitational or electromagnetic interaction range. The product of the ‘Hubble volume’ and the ‘cosmic critical density’ can be called the ‘Hubble mass’. Based on this cosmic mass unit, the authors noticed four peculiar semi-empirical relations. With the observed relations it is possible to say that, in atomic and nuclear physics, there exists a cosmological physical variable. The fine structure ratio may be considered an index of the present cosmic expansion. By observing its cosmological rate of change, the future cosmic acceleration can be verified.

**Category:** Relativity and Cosmology
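The ‘Hubble mass’ defined in the abstract above (Hubble volume times critical density) can be evaluated directly with standard constants; algebraically it reduces to c³/(2GH₀). A sketch, using a conventional H₀ ≈ 70 km/s/Mpc (the paper's own numerical inputs are not given in the abstract):

```python
import math

# 'Hubble mass' = (4/3)*pi*(c/H0)^3 * rho_c, with rho_c = 3*H0^2/(8*pi*G),
# which simplifies to M_H = c^3 / (2*G*H0).

c = 2.998e8                    # speed of light, m/s
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1e3 / 3.086e22     # ~70 km/s/Mpc converted to 1/s

R_H = c / H0                                   # Hubble length, m
rho_c = 3 * H0 ** 2 / (8 * math.pi * G)        # critical density, kg/m^3
M_H = (4.0 / 3.0) * math.pi * R_H ** 3 * rho_c # Hubble mass, kg

# the closed form agrees with the volume-times-density definition
assert abs(M_H - c ** 3 / (2 * G * H0)) / M_H < 1e-12
print(f"M_H = {M_H:.2e} kg")   # on the order of 1e53 kg
</imports>```

This gives roughly 9 × 10⁵² kg, the order of magnitude usually quoted for the mass content of the observable universe.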

[70] **viXra:1212.0094 [pdf]**
*replaced on 2015-10-19 18:08:28*

**Authors:** Michael J. Burns

**Comments:** Pages. mburns92003@yahoo.com

Be sure to use proper tensor rank and orientation. The recent efforts by Professor Kip S. Thorne do not meet this need. The utility of a correct method is remarkably strong. Here's how!

**Category:** Relativity and Cosmology

[69] **viXra:1212.0092 [pdf]**
*submitted on 2012-12-14 07:49:37*

**Authors:** E.C. Kunft, L. Vinagre

**Comments:** 5 Pages.

We grow a forest in a pot! Have you ever seen it before?! It's incredible, we're good!!

**Category:** High Energy Particle Physics

[68] **viXra:1212.0091 [pdf]**
*submitted on 2012-12-14 10:15:55*

**Authors:** L. F. Zagonel, J. Bettini, R. L. O. Basso, P. Paredez, H. Pinto, C. M. Lepienski, F. Alvarez

**Comments:** Surface and Coatings Technology Volume 207, 25 August 2012, Pages 72–78 ; http://dx.doi.org/10.1016/j.surfcoat.2012.05.081

A comprehensive study of pulsed nitriding of AISI H13 tool steel at low temperature (400°C) is reported for several durations. X-ray diffraction results reveal that a nitrogen-enriched compound (Epsilon-Fe2-3N, iron nitride) builds up on the surface within the first process hour despite the low process temperature. Beneath the surface, X-ray Wavelength Dispersive Spectroscopy (WDS) in a Scanning Electron Microscope (SEM) indicates relatively high nitrogen concentrations (up to 12 at.%) within the diffusion layer, while microscopic nitrides are not formed and existing carbides are not dissolved. Moreover, in the diffusion layer, nitrogen is found to be dispersed in the matrix and to form nanosized precipitates. The small coherent precipitates are observed by High-Resolution Transmission Electron Microscopy (HR-TEM), while the presence of nitrogen is confirmed by electron energy loss spectroscopy (EELS). Hardness tests show that the material hardness increases linearly with the nitrogen concentration, reaching up to 14.5 GPa at the surface, while Young's modulus remains essentially unaffected. Indeed, the original steel microstructure is well preserved even in the nitrogen diffusion layer. Nitrogen profiles show a case depth of about 43 microns after nine hours of the nitriding process. These results indicate that pulsed plasma nitriding is highly efficient even at such low temperatures and that at this process temperature it is possible to form thick and hard nitrided layers with satisfactory mechanical properties. This process can be particularly interesting for enhancing the surface hardness of tool steels without exposing the workpiece to high temperatures and altering its bulk microstructure.

**Category:** Condensed Matter

[67] **viXra:1212.0089 [pdf]**
*submitted on 2012-12-13 03:26:41*

**Authors:** Giorgio Fabretti

**Comments:** 17 Pages. Resume by the Fruitarian Association (Associazione FRUIT in Italy) (Fruitarian Society of Gandhi re-founded in Italy 1972)

Translated resume from Giorgio Fabretti’s essays on the following subjects:
WHAT MEANS HAVING REFOUNDED FRUITARIANISM SINCE OVER 40 YEARS (1972-2012);
ETHICAL MATERIALISM APPLIED TO COMPLEXITY OF NUTRITION;
MANAGING HUMAN EATING IN TECHNOLOGICAL CIVILIZATION;
Bioethical foundations of fruit eating;
“FRUIT = GIFT = SYNCHRONIZER
THINK GLOBAL = SAVE THE PLANET
SAVE YOUR HEALTH = SAVE YOUR MONEY
… THROUGH EATING FRUITS
(at least 90-95% of the calories in your diet)”;
… resumed according to the ethical science of the anthropologist Giorgio Fabretti, founder of Fruitarianism, that is, the philosophy of the gift (of synchronized reciprocal just opportunities rather than predatory quantitative redistribution of goods), which re-adapts humans to their moral nature of respecting people, animals, plants and the environment.

**Category:** General Science and Philosophy

[66] **viXra:1212.0088 [pdf]**
*replaced on 2013-06-04 05:22:15*

**Authors:** Qiu Kui Zhang

**Comments:** 10 Pages.

In this article some difficulties are deduced from the set of natural numbers. The demonstrated difficulties suggest that if the set of natural numbers exists, it conflicts with the axiom of regularity. As a result, we conclude that the class of natural numbers is not a set but a proper class.

**Category:** Set Theory and Logic

[65] **viXra:1212.0087 [pdf]**
*replaced on 2012-12-31 01:30:16*

**Authors:** James M. Chappell, Derek Abbott

**Comments:** 4 Pages.

The idealized Kish-Sethuraman (KS) cipher is theoretically known to offer perfect security through a classical information channel. However, realization of the protocol is hitherto an open problem, as the required mathematical operators have not been identified in the previous literature. A mechanical analogy of this protocol can be seen as sending a message in a box using two padlocks: one locked by the Sender and the other locked by the Receiver, so that theoretically the message remains secure at all times. We seek a mathematical representation of this process, considering that it would be very unusual if there were a physical process with no mathematical description, and indeed we find a solution within a four-dimensional Clifford algebra. The significance of finding a mathematical description of the protocol is that it is a possible step toward a physical realization, with the benefits of increased security and reduced complexity.

**Category:** Digital Signal Processing
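The two-padlock mechanics described in the abstract above has a well-known arithmetic analogue, Shamir's three-pass protocol, in which the "locks" are modular exponentiations that commute: (xᵃ)ᵇ = (xᵇ)ᵃ mod p. This is not the Clifford-algebra representation the paper constructs, only a toy sketch of the same lock-lock-unlock-unlock sequence:

```python
import math

# Toy two-padlock (three-pass) protocol using commuting modular
# exponentiation. Not the paper's Clifford-algebra construction.

p = 2 ** 61 - 1        # a Mersenne prime
a, b = 65537, 257      # Sender's and Receiver's secret exponents
assert math.gcd(a, p - 1) == 1 and math.gcd(b, p - 1) == 1

msg = 123456789        # message encoded as an integer 0 < msg < p

step1 = pow(msg, a, p)                       # Sender locks the box
step2 = pow(step1, b, p)                     # Receiver adds the second lock
step3 = pow(step2, pow(a, -1, p - 1), p)     # Sender removes their lock
plain = pow(step3, pow(b, -1, p - 1), p)     # Receiver removes theirs

assert plain == msg
print("message recovered:", plain)
```

Because the two exponentiations commute, the message is never transmitted under fewer than one lock, which is exactly the mechanical padlock picture.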

[64] **viXra:1212.0085 [pdf]**
*submitted on 2012-12-12 20:48:24*

**Authors:** Victor Christianto

**Comments:** 13 Pages. this short paper is not yet submitted to any journal

There are a number of articles published recently with the intention of soothing the anxiety of many people in recent years. This anxiety concerns the coming doomsday claimed to happen on December 21, 2012 [1][2][3][4]. These articles support the argument, put forth by many scientists and the majority of governments, that the claimed prediction by the ancient Mayas of the end of the world is a false prediction.
This short note supports this kind of argument. I go even further to explore the notion of Hunab Ku, the Supreme Creator according to the ancient Mayan people, and see if that name can be related to The Unknown God (Agnostos Theos) according to St. Paul in his speech in Athens (Acts 17:23). I then conclude that it is difficult to relate the name of Hunab Ku with the Agnostos Theos of St. Paul, even though there are many similarities between them.

**Category:** Religion and Spiritualism

[63] **viXra:1212.0084 [pdf]**
*submitted on 2012-12-12 13:28:26*

**Authors:** Jose D Perezgonzalez

**Comments:** 4 pages, Journal of Knowledge Advancement & Integration (ISSN 1177-4576), Wiki of Science, Creative Commons Attribution-ShareAlike 3.0 License

Perezgonzalez assessed the nutritional balance of potato crisps in 2012, as part of research on the nutritional composition of snacks in New Zealand. The distribution of nutritional balance clustered into two groups. The median was located at BNI 67.85, and the middle 68% of products ranged between BNI 33 (P16) and BNI 78 (P84). There was a slight negative skewness (mean=61.02, zSkew=-1.29), as most of the products were grouped towards the unbalanced end.

**Category:** General Science and Philosophy
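The zSkew statistic quoted in the abstract above is presumably the sample skewness divided by its standard error. A sketch on synthetic data (not the potato-crisp BNI sample), using the adjusted Fisher-Pearson skewness and the usual standard-error formula:

```python
import math

# Sample skewness and its z-score on synthetic, left-skewed data.
# se_skew uses the common formula sqrt(6n(n-1) / ((n-2)(n+1)(n+3))).

data = [12.0, 33.0, 55.0, 60.0, 64.0, 68.0, 70.0, 72.0, 74.0, 78.0]
n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# adjusted Fisher-Pearson sample skewness
skew = (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3 for x in data)
se_skew = math.sqrt(6.0 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
z_skew = skew / se_skew

print(round(skew, 2), round(z_skew, 2))  # both negative for this sample
```

A long left tail, as in this sample, yields a negative skewness and a negative zSkew, matching the sign pattern reported in the abstract.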

[62] **viXra:1212.0083 [pdf]**
*replaced on 2012-12-14 23:26:46*

**Authors:** Frank Dodd Tony Smith Jr

**Comments:** 4 Pages.

Protons from Hydrogen infalling into Sgr A* acquire enough energy and density to produce proton-proton collisions similar to those at the LHC. The 135 GeV line observed by Fermi LAT is due to proton-proton collisions producing Higgs in the diphoton channel. The 125 GeV Higgs-like evidence observed by ATLAS and CMS is also due to proton-proton collisions producing Higgs in the diphoton channel. The difference between 135 GeV at Fermi LAT and 125 GeV at the LHC can be accounted for by comparing details of the experimental setups and analysis-related assumptions. V2 adds Fermi LAT Earth Limb observations.

**Category:** High Energy Particle Physics

[61] **viXra:1212.0082 [pdf]**
*submitted on 2012-12-12 07:49:27*

**Authors:** José Francisco García Juliá

**Comments:** 4 Pages.

A note in favor of the correctness of relativity, special and general.

**Category:** Relativity and Cosmology

[60] **viXra:1212.0081 [pdf]**
*replaced on 2013-01-06 01:38:58*

**Authors:** Hosein Nasrolahpour

**Comments:** 3 Pages. Another short report on " Fractional Classical Mechanics" :Prespacetime Journal| November 2012 | Volume 3| Issue 13 | pp. 1247-1250

In this paper we discuss some important consequences of the application of fractional operators in physics. We also present a unified integro-differential equation for relaxation and oscillation. We focus on the time-fractional formalism whose derivative is in the Caputo sense.

**Category:** Mathematical Physics

[59] **viXra:1212.0078 [pdf]**
*submitted on 2012-12-11 15:26:26*

**Authors:** Jose D Perezgonzalez, Kam HP Yiu

**Comments:** 3 pages, Journal of Knowledge Advancement & Integration (ISSN 1177-4576), Wiki of Science, Creative Commons Attribution-ShareAlike 3.0 License

Perezgonzalez, Gilbey & Diaz Vilela (2010) examined the need for new technologies by single pilot operators in the general aviation industry on a global scale. This was achieved by using an online survey requesting participants to rank the importance of various flight management features. These were 22 technological features in total, grouped into five distinctive categories. Overall, results showed that cost factors were regarded as the most important feature by the group of general aviation pilots, followed by flight support. The results also indicated that instructors valued new flight technologies the most, while female pilots were less concerned with new flight technologies.

**Category:** Social Science

[58] **viXra:1212.0077 [pdf]**
*submitted on 2012-12-11 09:05:35*

**Authors:** Dhananjay P. Mehendale

**Comments:** 6 Pages. Presented and Published in the Proceedings of International Conference on Perspectives of Computer Confluence with Sciences 2012, ICPCCS 12.

In this paper we propose a new algorithm for linear programming, based on treating the objective function as a parameter. We transform the matrix of coefficients representing the system of equations into reduced row echelon form containing only one variable, namely the objective function itself, as a parameter whose optimal value is to be determined. We analyze this matrix and develop a clear method to find the optimal value of the objective function treated as a parameter. The entire optimization process evolves through the proper analysis of the said matrix in reduced row echelon form. The optimal value can be obtained (1) by solving a certain subsystem of this system of equations, with a proper justification for this act, or (2) by making appropriate and legal row transformations on the matrix in reduced row echelon form so that all the entries become nonnegative in the submatrix obtained by collecting the rows containing the so-called unknown parameter d whose optimal value is to be determined; this new matrix must be equivalent to the original matrix in the sense that the solution sets of the matrix equations with the original and with the transformed matrix are the same. We then proceed to show that this idea naturally extends to deal with nonlinear and integer programming problems, using the technique of Gröbner bases (a Gröbner basis being the equivalent of reduced row echelon form for a system of nonlinear equations) and the methods of solving linear Diophantine equations, respectively.

**Category:** Data Structures and Algorithms
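The central tool named in the abstract above is the reduced row echelon form (RREF) of the coefficient matrix. A minimal exact-arithmetic sketch of that reduction (the paper's parametric-objective optimization step on top of the RREF is not reproduced here):

```python
from fractions import Fraction

# Gauss-Jordan reduction to reduced row echelon form, using exact
# rational arithmetic so no pivots are lost to rounding.

def rref(rows):
    m = [[Fraction(v) for v in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # normalize the pivot row, then eliminate the column everywhere else
        piv = m[pivot_row][col]
        m[pivot_row] = [v / piv for v in m[pivot_row]]
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == len(m):
            break
    return m

# augmented matrix for  x + y = 3,  x - y = 1  ->  x = 2, y = 1
result = rref([[1, 1, 3], [1, -1, 1]])
assert result == [[1, 0, 2], [0, 1, 1]]
```

In the paper's scheme the augmented system would also carry the objective value as an extra parametric column; the reduction mechanics are the same.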

[57] **viXra:1212.0076 [pdf]**
*submitted on 2012-12-11 08:19:59*

**Authors:** Anatoly V. Belyakov

**Comments:** 11 pages, including 3 figures and 1 table. Published: Progress in Physics, 2012, v.2, p.47-57

The proposed model is based on Wheeler’s geometrodynamics of fluctuating topology and its further elaboration based on new macro-analogies. Micro-particles are considered here as particular oscillating deformations or turbulent structures in non-unitary coherent two-dimensional surfaces. The model uses analogies of the macro-world, includes gravitational forces into consideration, and surmises the existence of closed structures based on the equilibrium of magnetic and gravitational forces, thereby supplementing the Standard Model. This model has perfect inner logic. The following phenomena and notions are thus explained or interpreted: the existence of three generations of elementary particles, quark confinement, “Zitterbewegung”, and supersymmetry. The masses of leptons and quarks are expressed through fundamental constants and calculated in the first approximation. Other parameters, such as the ratios among the masses of the proton, neutron and electron, the size of the proton, its magnetic moment, the gravitational constant, the half-life of the neutron, and the boundary energy of beta decay, are determined with adequate precision.

**Category:** Nuclear and Atomic Physics

[56] **viXra:1212.0073 [pdf]**
*submitted on 2012-12-10 14:35:54*

**Authors:** Jose D Perezgonzalez

**Comments:** 3 pages, Journal of Knowledge Advancement & Integration (ISSN 1177-4576), Wiki of Science, Creative Commons Attribution-ShareAlike 3.0 License

Some corn chips are sold under generic brands (e.g., a supermarket brand) while others are sold under proprietary brands. It is thus of interest to test whether this characteristic is informative about overall nutritional balance (BNI) and, thus, whether it may help choose more balanced products. As part of research on the nutritional balance of corn chips (2012a), Perezgonzalez (2012b) also assessed whether generic and proprietary brands differed with regard to overall nutritional balance. This article summarizes that research.

**Category:** General Science and Philosophy

[55] **viXra:1212.0072 [pdf]**
*submitted on 2012-12-10 15:20:14*

**Authors:** John A. Gowan

**Comments:** 9 Pages. part 2 of 2

I had been blocked from understanding the Higgs role and mechanism through thinking there was only one Higgs boson; the dam burst when I realized there could be more than one Higgs. Suddenly I saw how the various Higgs bosons could serve as a selection mechanism to define, organize, and "gauge" the energy levels or symmetric energy states of several other processes I had known about for some time, such as the compression of the quarks by the "X" IVBs to produce "proton decay", and the creation of leptoquarks by an even higher energy process involving the splitting of primordial charged leptons by "Y" IVBs to produce both electrically charged and neutral leptoquarks. It all fell into place once my mind was opened to the possibility of multiple Higgs bosons, one each to "gauge" or scale the stages of the decay sequences of the cascade. Here was the natural conservation role for the Higgs I was seeking. The quantization of the Higgs and IVBs is necessary to ensure the invariance of the single elementary particles they produce. No matter if this was not the exact same role posited for the Higgs in other sources; given the ambiguity in the technical jargon and explanations I had encountered, it was close enough to satisfy.

**Category:** High Energy Particle Physics

[54] **viXra:1212.0071 [pdf]**
*submitted on 2012-12-10 10:41:57*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 2 pictures

It is hypothesized that a very dangerous assumption has been made in the science of astrophysics: that interstellar space is a pure vacuum and that stars are seen as they are. The author regards this as a very dangerous assumption for astronomy. If it is not true that outer space is a pure vacuum, then this realization sets entire educational establishments back to zero. An explanation is provided.

**Category:** Astrophysics

[53] **viXra:1212.0070 [pdf]**
*submitted on 2012-12-10 08:13:56*

**Authors:** sangwha Yi

**Comments:** 9 Pages.

A universe in which light has a velocity different from ours, and which is likely a parallel universe, is named an alpha-parallel universe. The theory is the relativity theory in the alpha-parallel universe, a universe in which inertial systems can be treated. In this universe, light can be considered to have a different velocity, together with a different permittivity constant and a different permeability constant. Hence, in this theory, each alpha-parallel universe has its own light velocity, and each light velocity, or each pair of permittivity and permeability constants, distinguishes each alpha-parallel universe.

**Category:** Relativity and Cosmology

[52] **viXra:1212.0069 [pdf]**
*submitted on 2012-12-10 04:50:53*

**Authors:** Andrej Rehak

**Comments:** 6 Pages.

Respecting the mechanism of simple machines (in the described case, the lever in balance), the application of the universal principle (g=cd) is demonstrated by calculating the radius and velocity of the geostationary orbit. The ratio between the geostationary and equatorial radius, specific to each celestial body, is derived. Implicitly, the law of geostationary orbits, symmetrical to Kepler's third law of planetary motion, is formulated. Because the derivation of these equations does not use the gravitational constant G and calculates the corrected celestial-body masses, the presented equalities, due to their mathematical equivalence, give absolutely accurate results. The elegance, precision and simplicity of the presented model indicate a misinterpretation of Newton's arbitrary masses and of the gravitational constant G, inevitable in conventional physics, the so-called "Universal constant of nature".

**Category:** Astrophysics
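For comparison with the abstract above, the conventional computation of the geostationary radius and velocity, which does use the gravitational parameter GM that the paper's derivation avoids, follows directly from Kepler's third law, r³ = μT²/(4π²):

```python
import math

# Conventional geostationary-orbit check (uses mu = G*M, which the
# paper's G-free derivation deliberately avoids).

mu = 3.986004418e14        # Earth's standard gravitational parameter, m^3/s^2
T = 86164.1                # sidereal day, s
R_equatorial = 6.378137e6  # Earth's equatorial radius, m

r_geo = (mu * T ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
v_geo = 2 * math.pi * r_geo / T

print(f"r_geo = {r_geo / 1e3:.0f} km")                        # ~42164 km
print(f"v_geo = {v_geo / 1e3:.3f} km/s")                      # ~3.075 km/s
print(f"r_geo / R_equatorial = {r_geo / R_equatorial:.3f}")   # ~6.611
```

The geostationary-to-equatorial radius ratio the abstract refers to comes out at about 6.61 for Earth; any alternative derivation should reproduce these standard values.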

[51] **viXra:1212.0068 [pdf]**
*submitted on 2012-12-10 05:06:49*

**Authors:** Otto G. Piringer

**Comments:** 12 Pages

Recent publications discussed a possible change with time of Sommerfeld's fine structure constant alpha, in which several of the fundamental constants of Nature are combined. The problem of a changing alpha raises the question of whether its value is ultimately a result of chance or reveals an objective law of nature. If the value of alpha is independent of human reason, a derivation of it may be possible from basic numbers, like e and pi, which appear in the logical development of mathematics [1]. In the following investigation a purely mathematical derivation of the fine structure constant is described, starting from a fundamental property of the natural numbers. The constant alpha results as a limit value in an algorithm with exponential structures.

**Category:** Mathematical Physics

[50] **viXra:1212.0067 [pdf]**
*replaced on 2013-01-06 01:37:03*

**Authors:** Hosein Nasrolahpour

**Comments:** 2 Pages. A short report on " Fractional Classical Mechanics": Prespacetime Journal| October 2012 | Volume 3| Issue 12 | pp. 1194-1196

The time-fractional formalism is a very useful tool for describing systems with memory and delay. In this article, we reformulate the equations presented in [F. Mainardi, Chaos Sol. Frac. 7(9) (1996) 1461-1477] and show that the solutions of the relaxation and oscillation equations exhibit algebraically decaying behavior at asymptotically long times, depending on the order of the time derivative.

**Category:** Classical Physics

[49] **viXra:1212.0066 [pdf]**
*submitted on 2012-12-10 03:23:49*

**Authors:** Golden Gadzirayi Nyambuya

**Comments:** 12 Pages.

Exactly 100 years ago, the German scientist Alfred Lothar Wegener (1880-1930) sailed against the prevailing wisdom of his day when he posited that not only have the Earth's continental plates receded from each other over the course of the Earth's history, but that they are currently in a state of motion relative to one another. To explain this, Wegener set forth the hypothesis that the Earth must be expanding as a whole. Wegener's inability to provide an adequate explanation of the forces and energy source responsible for continental drift, together with the prevailing belief that the Earth was a rigid solid body, resulted in the acrimonious dismissal of his theories. Today, that the continents are receding from each other is no longer a point of debate but a sacrosanct pillar of modern geophysics. What is debatable is the energy source driving this phenomenon. Herein, we hold that continental drift is a result of the Earth undergoing a secular radial expansion. An expanding Earth hypothesis is currently an idea that is not accepted on a general consensus level. Be that as it may, we show herein that the laws of conservation of angular momentum and energy entail that the Earth must not only expand as a consequence of the secular recession of the Earth-Moon system from the Sun, but invariably, that the Moon must contract as well. As a result, the much-sought-for energy source driving plate tectonics can (hypothetically) be identified with the energy transfers occurring between the orbital and rotational kinetic energy of the Earth. If our calculations are to be believed -- as we do -- then the Earth must be expanding radially at a paltry rate of about 1.50+/- mm/yr while the Moon is contracting radially at a relatively high rate of about -410+/- mm/yr.

**Category:** Astrophysics

[48] **viXra:1212.0063 [pdf]**
*submitted on 2012-12-10 00:03:50*

**Authors:** Dirk Pons, Arion Pons, Aiden Pons

**Comments:** 14 Pages. Submission to Foundational Questions Institute: Essay Contest 2012

The conventional conceptual framework for fundamental physics is built on a tacit construct: the premise of particles being zero-dimensional (0-D) points. There has never been a viable alternative to this, and the Bell-type inequalities preclude large classes of alternative designs with hidden variables. Although they do not absolutely preclude the possibility of particles having non-local hidden-variable (NLHV) designs, there is the additional difficulty of finding a solution within the very small freedom permitted by the constraints. Nonetheless we show that it is possible to find such a design. We propose the internal structures and discrete field structures of this ‘cordus’ particule, and the causal relationships for the behaviour of the system. This design is shown to have high conceptual fitness to explain a variety of fundamental phenomena in a logically consistent way. It provides insights into the fundamentals of matter, force, energy and time. It offers novel explanations to long-standing enigmas and suggests that a reconceptualisation of fundamental physics is feasible. We thus show that the 0-D point premise can be challenged, and is likely to have profound consequences for physics when it falls.

**Category:** Classical Physics

[47] **viXra:1212.0062 [pdf]**
*submitted on 2012-12-09 14:47:49*

**Authors:** Jose D Perezgonzalez

**Comments:** 4 pages, Journal of Knowledge Advancement & Integration (ISSN 1177-4576), Wiki of Science, Creative Commons Attribution-ShareAlike 3.0 License

As part of a research on the nutritional balance of corn chips (2012b), Perezgonzalez assessed whether generic and proprietary brands differed in regards to overall nutritional balance. This article provides descriptive information both about the sample of products under research (foodBNI) as well as about hypothetical diets based on those products (dietBNI).

**Category:** General Science and Philosophy

[46] **viXra:1212.0060 [pdf]**
*submitted on 2012-12-09 08:49:15*

**Authors:** Stephen J. Crothers

**Comments:** 5 Pages.

Professor Martin Rees, Astronomer Royal, gave a public lecture in the Great Hall at the University of Sydney on 9th November 2012, 6:30pm to 8:00pm. The lecture was titled ‘BIG BANGS, BIOSPHERES AND THE LIMITS OF SCIENCE’. The website announcing the lecture is:
http://sydney.edu.au/sydney_ideas/lectures/2012/professor_martin_rees.shtml
In the preamble on the aforementioned website we find the words, “But there are intimations that physical reality is hugely more extensive than the domain our telescopes can probe. Indeed we may inhabit a 'multiverse' – living in the aftermath of one among an infinity of 'big bangs'.” This short Letter proves that the black hole does not exist because it violates the physical principles of General Relativity, and that General Relativity violates the usual conservation of energy and momentum and is therefore invalid. Many of Professor Rees’ arguments in his public lecture are based on the assumption of the validity of General Relativity and are therefore untenable. Professor Rees has also written much in various papers on the existence and properties of black holes. What follows is the content of a letter sent to Professor Rees in advance of his public lecture in Sydney.

**Category:** Relativity and Cosmology

[45] **viXra:1212.0059 [pdf]**
*submitted on 2012-12-09 07:34:57*

**Authors:** Adam G. Freeman, Policarpo Y. Ulianov

**Comments:** 20 Pages.

The Small Bang model receives this name because it holds that all the energy and matter in the Universe arose through the process of cosmic inflation, so that at the initial moment of creation space is essentially an empty bubble.
It is important to note that after cosmic inflation the energy density of the Universe would be basically the same as that proposed by the Big Bang theory; the two models therefore differ only with respect to the behavior of the Universe and its galaxies before the end of cosmic inflation.
Besides eliminating the problem of infinite energy densities and temperatures, the Small Bang model also solves the missing-antimatter problem, proposing that the supermassive antimatter black hole lying at the center of each galaxy allows a better understanding of the process of galaxy formation.
The Small Bang model also suggests that the effects attributed to dark matter today are in fact due to the high angular momentum of supermassive antimatter black holes and to their drag on space-time, which extends beyond the limits of the galaxy and creates the region of constant rotation currently attributed to some type of “dark matter.”
The most basic principle of the Big Bang model, that of a "primeval atom" or "cosmic egg" concentrating all the matter and energy in the Universe, was first proposed in 1927 by Georges Lemaître. The Small Bang model eliminates the notion of the "cosmic egg" and thus innovates on an idea that is almost a century old.

**Category:** Astrophysics

[44] **viXra:1212.0056 [pdf]**
*replaced on 2013-02-11 01:02:27*

**Authors:** Henok Tadesse

**Comments:** 10 Pages.

This paper presents a new theory of the radiation and propagation of electromagnetic (EM) waves: EM waves arising from the same cause will never cross each other. It discloses a fundamental mistake in the established assumptions and methods of analysis of radiation and propagation problems of electromagnetic waves and in the application of the superposition principle. This implies the invalidation of the established solutions to, and implications of, Maxwell’s equations, including the invalidation of the “radiation” concept of EM waves (light). The 1/r-dependent fields implied by the established methods of analysis of Maxwell’s equations do not exist in reality; there is no wave “detached” from its source. The possibility of one EM wave being bent on its way by another, dependent EM wave will also be presented, and the analogy between the propagation of EM waves and the flow of fluids will be discussed.

**Category:** Classical Physics

[43] **viXra:1212.0055 [pdf]**
*submitted on 2012-12-08 08:19:07*

**Authors:** Chun-Xuan Jiang

**Comments:** 6 Pages.

Using the complex trigonometric functions of order n with (n-1) variables, where n is an odd number, we prove FLT for exponents 3p and p, where p is an odd prime. The proof of FLT must be direct; an indirect proof of FLT is unconvincing.

**Category:** Number Theory

[42] **viXra:1212.0053 [pdf]**
*submitted on 2012-12-08 08:26:27*

**Authors:** Chun-Xuan Jiang

**Comments:** 4 Pages.

Using the complex trigonometric functions of order 2n with (2n-1) variables, where n is an odd number, we prove FLT for exponents 6p and 2p, where p is an odd prime. The proof of FLT must be direct; an indirect proof of FLT is unconvincing.

**Category:** Number Theory

[41] **viXra:1212.0052 [pdf]**
*submitted on 2012-12-08 08:32:20*

**Authors:** Chun-Xuan Jiang

**Comments:** 4 Pages.

Using the complex trigonometric functions of order 4m with (4m-1) variables, where m=1,2,3,..., we prove FLT for exponents 12p and 4p, where p is an odd prime. The proof of FLT must be direct; an indirect proof of FLT is unconvincing.

**Category:** Number Theory

[40] **viXra:1212.0050 [pdf]**
*replaced on 2015-05-26 11:11:21*

**Authors:** Fran De Aquino

**Comments:** 6 Pages.

It is shown that, under certain circumstances, the sunlight incident on Earth, or on a planet in similar conditions, can make the gravitational mass of water-droplet clouds negative. Then, by means of gravitational repulsion, the clouds are ejected from the atmosphere of the planet, stopping the hydrologic cycle. Thus, the water evaporated from the planet will be progressively ejected into outer space together with the air contained in the clouds. If the phenomenon persists for a long time, the water of rivers, lakes and oceans will disappear totally from the planet, and its atmosphere will become rarefied.

**Category:** Climate Research

[39] **viXra:1212.0049 [pdf]**
*submitted on 2012-12-07 11:38:20*

**Authors:** Michael A. Sherbon

**Comments:** 15 Pages. Journal of Science 11/2012; 2(3):148-154. DOI:10.2139/ssrn.1934553 Creative Commons Attribution 3.0 License.

Wolfgang Pauli was influenced by Carl Jung and the Platonism of Arnold Sommerfeld, who introduced the fine-structure constant. Pauli's vision of a World Clock is related to the symbolic form of the Emerald Tablet of Hermes and Plato's geometric allegory otherwise known as the Cosmological Circle attributed to ancient tradition. With this vision Pauli revealed geometric clues to the mystery of the fine-structure constant that determines the strength of the electromagnetic interaction. A Platonic interpretation of the World Clock and the Cosmological Circle provides an explanation that includes the geometric structure of the pineal gland described by the golden ratio. In his experience of archetypal images Pauli encounters the synchronicity of events that contribute to his quest for physical symmetry relevant to the development of quantum electrodynamics.

**Category:** History and Philosophy of Physics

[38] **viXra:1212.0047 [pdf]**
*submitted on 2012-12-06 20:44:51*

**Authors:** Jose D Perezgonzalez

**Comments:** 5 pages, Journal of Knowledge Advancement & Integration (ISSN 1177-4576), Wiki of Science, Creative Commons Attribution-ShareAlike 3.0 License

As part of research on the nutritional balance of corn chips (2012b), Perezgonzalez assessed whether generic and proprietary brands differed with regard to overall nutritional balance. This article provides inferential information both about the population of products under research (foodBNI) and about hypothetical diets based on those products (dietBNI).

**Category:** General Science and Philosophy

[37] **viXra:1212.0046 [pdf]**
*replaced on 2012-12-14 23:31:50*

**Authors:** Frank Dodd Tony Smith Jr

**Comments:** 11 Pages.

The Conformal structure and Casimir Operators of the Conformal Gravity/Higgs Sector of E8 Physics produce 3 Generations of Fermions. V2 adds material about Dark Energy, Dark Matter, and Ordinary Matter and about Fermi LAT observations.

**Category:** High Energy Particle Physics

[36] **viXra:1212.0045 [pdf]**
*submitted on 2012-12-07 01:20:09*

**Authors:** U.V.S. Seshavatharam, S. Lakshminarayana

**Comments:** 4 Pages.

With reference to the previously proposed SUSY fermion-boson mass ratio 2.2627, based on the muon and proton rest masses, the charged pion rest energy can be expressed as (1/2.2627)(mp.mm)^(1/2) = 139.25 MeV. In a similar way a new boson related to the electron and proton can be predicted as (1/2.2627)(mp.me)^(1/2) = 9.677 MeV = mex. It can be called the “EPION”. It is suggested that the nuclear binding force is mediated by this hidden boson and that the charged pion is its excited state. In support of the existence of the epion, the rest mass of the neutral electroweak boson can be expressed as mZ = (mn^2/mex), where mn is the rest mass of the neutron. In this new direction the fitted semi-empirical mass formula energy constants are 16.29, 19.354, 0.766, 23.76 and 11.88 MeV respectively.
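The mass relations in this abstract are simple arithmetic and can be checked numerically. The sketch below is ours, not the paper's; the rest-mass values are standard PDG figures, which the abstract does not state:

```python
# Numerical check of the abstract's mass relations (values in MeV).
# The PDG rest masses below are our inputs, not taken from the paper.
m_p  = 938.272   # proton
m_mu = 105.658   # muon
m_e  = 0.511     # electron
m_n  = 939.565   # neutron

ratio = 2.2627   # the proposed SUSY fermion-boson mass ratio

m_pion  = (m_p * m_mu) ** 0.5 / ratio   # charged pion estimate
m_epion = (m_p * m_e) ** 0.5 / ratio    # the proposed "epion"
m_Z     = m_n ** 2 / m_epion            # neutral electroweak boson

print(round(m_pion, 2), round(m_epion, 3), round(m_Z / 1000, 2))
```

With these inputs the pion estimate comes out near 139.2 MeV, the epion at about 9.68 MeV, and mn^2/mex at roughly 91 GeV, close to the measured Z mass.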

**Category:** Nuclear and Atomic Physics

[35] **viXra:1212.0043 [pdf]**
*replaced on 2013-06-01 04:33:12*

**Authors:** Justin Lee

**Comments:** 12 Pages. This is a preprint of an article published in International Review of Physics, Vol. 7 No. 1, p1-6.

This paper introduces a new theory, called "the theory of absolutivity", which challenges the two postulates of the special theory of relativity (special relativity). The paper analyzes how special relativity derives time dilation, and then explains how this leads to a disagreement with the first postulate: the principle of relativity. Next, it explains a disagreement with the second postulate: the universal speed of light. Furthermore, the paper by Gjurchinovski on the reflection of light from a uniformly moving mirror is interpreted to give a classical explanation for the null result of the Michelson-Morley experiment, by justifying Lorentz contraction of length without reference to special relativity.

**Category:** Relativity and Cosmology

[34] **viXra:1212.0042 [pdf]**
*submitted on 2012-12-06 10:28:38*

**Authors:** Anatoly V. Belyakov

**Comments:** 2 pages, including 2 figures

In this paper another explanation of the Cosmic Microwave Background Radiation is proposed.

**Category:** Astrophysics

[33] **viXra:1212.0041 [pdf]**
*submitted on 2012-12-06 09:38:20*

**Authors:** Aristotelis Kittas, Carito Guziolowski, Niels Grabe

**Comments:** 20 Pages.

We present a method to discover signaling pathways, quantify the relationship of preselected source/target nodes, and extract relevant subgraphs in large-scale biological networks. This is demonstrated on hepatocyte growth factor (HGF)-stimulated cell migration and proliferation in a keratinocyte-fibroblast co-culture. The algorithm (MCWalk) is implemented with random walks using Monte Carlo simulations. We extract a master network by overlaying case-specific microarray data on the NCI Pathway Interaction Database (PID) using a fully automatic pipeline without any manual network construction, and uncover the association of HGF receptor c-Met nodes, differentially expressed (DE) protein nodes and cellular states. We show that the network has a scale-free structure and identify key regulator nodes based on their random-walk traversal frequency. This property is shown to be very weakly correlated with node degree, contrary to what is expected from similar centrality measures. The differences from standard methods commonly used in the analysis of such networks, such as shortest-path, are discussed and compared with this approach, highlighting important pathways which are obtained exclusively with our random-walk algorithm.
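As a generic illustration of the technique the abstract names (not the authors' MCWalk implementation; the toy graph, walk length and walk count below are invented), ranking nodes by random-walk traversal frequency between a source and a target can be sketched as:

```python
import random

# Toy sketch: Monte Carlo random walks from a source node toward a
# target node, counting how often each node is traversed. Graph and
# parameters are hypothetical, for illustration only.
graph = {
    "src": ["a", "b"],
    "a":   ["src", "hub"],
    "b":   ["src", "hub"],
    "hub": ["a", "b", "tgt"],
    "tgt": ["hub"],
}

def walk_frequencies(graph, start="src", target="tgt", walks=2000, seed=1):
    rng = random.Random(seed)
    visits = {node: 0 for node in graph}
    for _ in range(walks):
        node = start
        for _ in range(50):            # cap each walk's length
            visits[node] += 1
            if node == target:         # absorb the walk at the target
                break
            node = rng.choice(graph[node])
    return visits

freq = walk_frequencies(graph)
# every path from src to tgt passes through hub, so hub is traversed
# at least as often as tgt itself
ranked = sorted(freq, key=freq.get, reverse=True)
print(ranked)
```

Nodes with high traversal frequency are then read as candidate key regulators; in the paper this ranking is applied to a much larger signaling network.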

**Category:** Quantitative Biology

[32] **viXra:1212.0040 [pdf]**
*submitted on 2012-12-06 09:40:58*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 3 Pages. 2 diagrams, 14 references

In this paper an actual cause for the formation of mountains is put forward, given that plate tectonics has been extensively falsified.

**Category:** Geophysics

[31] **viXra:1212.0037 [pdf]**
*submitted on 2012-12-06 02:39:16*

**Authors:** Chun-Xuan Jiang

**Comments:** 11 Pages.

f(R) gravity generalizes Einstein's general relativity; therefore, f(R) gravity is wrong.

**Category:** Relativity and Cosmology

[30] **viXra:1212.0036 [pdf]**
*submitted on 2012-12-05 10:46:34*

**Authors:** Andrej Rehak

**Comments:** 12 Pages.

Through a link with the speed of light, the nature of gravity, ubiquitous yet one of the most secretive and controversial phenomena in nature, is explained. Implicitly, the nature of the constantly measured speed of light and of the vector of time is also explained. From this simple, rigid and tautological relation, without the use of physical constants, universal formulas for motion are derived.
The basic concept of the concisely presented Universal Principle is compared with the conventional, far more complex model, indicating the crucial relativistic problem of understanding the indivisible scalar nature of the space-time entity.

**Category:** Relativity and Cosmology

[29] **viXra:1212.0035 [pdf]**
*submitted on 2012-12-05 10:54:33*

**Authors:** Andrej Rehak

**Comments:** 9 Pages.

Applying the principle of the tautological relationship between gravity and the speed of light (g=cd), the nature of the Pound-Rebka experiment, misinterpreted by conventional physics, is solved and explained. Also presented is an accurate universal formula for calculating the spectral shift between measured points at any altitude difference, equivalent to the special and general relativistic one. Finally, the link between gravitational, transversal and radial dilatations as a result of speed is demonstrated. Implicitly, evidence is presented of the infinitely variable nature of the universal scalar, the speed of light.

**Category:** Relativity and Cosmology

[28] **viXra:1212.0034 [pdf]**
*submitted on 2012-12-05 10:59:19*

**Authors:** Andrej Rehak

**Comments:** 17 Pages.

Application of the universal space-time principle (g=cd) explains the Pioneer anomaly, whose simple solution also proves the principle. Owing to the principle's rigid nature, and despite the high degree of approximation of the calculated values, the plotted curves of the annual trajectory shortfall, as well as of the accumulation of differences between the predicted and registered positions of the probe, are equivalent to those recorded.

**Category:** Relativity and Cosmology

[27] **viXra:1212.0033 [pdf]**
*submitted on 2012-12-05 11:04:27*

**Authors:** Andrej Rehak

**Comments:** 7 Pages.

Implementation of the tautological relation between the speed of light and gravity (g=cd) solves and explains the anomaly observed in the OPERA experiment. The observed disagreement with the predicted time has been calculated to within a tolerance of +3.375 ns, which is 4.236 times less than the specified tolerance. The only data used from the experiment was the length of the specific baseline. Because of the rigidity of the principle, the degree of approximation introduced by using mean values of the Earth's radius and acceleration is irrelevant to the solution of the anomaly.

**Category:** Relativity and Cosmology

[26] **viXra:1212.0032 [pdf]**
*submitted on 2012-12-04 19:34:37*

**Authors:** YIU Kam HP [ed]

**Comments:** 4 pages, Journal of Knowledge Advancement & Integration (ISSN 1177-4576), Wiki of Science, Creative Commons Attribution-ShareAlike 3.0 License

Lin, Qiu and Perezgonzalez (2010) presented the results of a study by Qiu (2010) examining the sleep pattern disruption suffered by a group of flight attendants working on an Asia-Pacific route. Results indicated that the rapid time zone transitions of about four hours affected the participants’ sleeping pattern, and that a longer duration of their sleep did not necessarily indicate better sleep quality. Furthermore, on the first day of arrival, some participants elected to adopt the local time to cue their sleep. These participants had a shorter duration of sleep and found it harder to wake up the following day. However, after that first day, all participants showed similar sleep attributes despite the different sleep strategy.

**Category:** General Science and Philosophy

[25] **viXra:1212.0030 [pdf]**
*submitted on 2012-12-04 12:07:14*

**Authors:** Yuri Danoyan

**Comments:** 5 Pages.

Examples from Nature supporting the ratio 3:1 are given. The concept of metasymmetry and broken metasymmetry (BM) is introduced. The 3:1 ratio has been found as a numerical measure of BM. An attempt has been made to explain BM as the total effect of a mixture of Bose (symmetric wave functions) and Fermi (antisymmetric wave functions) statistics. Together they create a 2-dimensional non-Euclidean foam.

**Category:** High Energy Particle Physics

[24] **viXra:1212.0029 [pdf]**
*submitted on 2012-12-04 12:24:27*

**Authors:** John A. Gowan

**Comments:** 7 Pages. part 3 of 3

The large mass of the Higgs and IVBs actually recreates the energy density of the primordial environment in which the elementary particles whose transformations they now mediate were originally created. A weak force transformation is in effect a "mini-Big Bang", reproducing locally the conditions of the global "macro-Big Bang", so that the elementary particles produced by each are the same in every respect. This is the only way such a replication could be accomplished after eons of entropic evolution by the Cosmos, because the mass of the Higgs and IVBs (or of particles generally) is not affected by the entropic expansion, spatial or historic, of the Cosmos; this is the fundamental reason why the weak force transformation mechanism employs massive bosons. The role of the Higgs is to select and gauge the appropriate unified-force symmetric energy-density state (usually the electroweak (EW) force-unification energy level) for the transformation at hand; IVBs appropriate for that particular symmetric energy state (the "W" family of IVBs in the electroweak case) then access (energize) the state and perform the requisite transformation. (See: "The 'W' IVB and the Weak Force Mechanism".)

**Category:** High Energy Particle Physics

[23] **viXra:1212.0028 [pdf]**
*submitted on 2012-12-04 09:37:02*

**Authors:** Mahgoub. A. Salih, I. B.I. Tomsah, Tayssir. M. Al Mahdi

**Comments:** 6 Pages.

We use the modified Reissner-Nordström solution to Einstein's field equation for a static charged spherical black hole on the microscopic scale, by redefining the potential. We thereby obtain circular orbits and horizons in the atomic domain. A generalization of special relativity in the sense of the model predicts the proton radius, the mass defect, the bending of light and the precession of the perihelion. The mass defect is used to estimate the electron, positron, neutrino and anti-neutrino masses, in addition to the proton decay energy and a mass-charge equivalence relation.

**Category:** Relativity and Cosmology

[22] **viXra:1212.0026 [pdf]**
*submitted on 2012-12-04 06:34:06*

**Authors:** Anatoly V. Belyakov

**Comments:** 5 pages, including 4 figures. Published: Progress in Physics, 2010, v.4, p.90-94.

This study suggests a mechanical interpretation of Wheeler's model of the charge. According to the suggested interpretation, the oppositely charged particles are connected through the vortical lines of the current, thus creating a closed "input-output" contour whose parameters determine the properties of the charge and spin. Depending on the energetic state of the system, the contour can be structured into units of the second and third order (photons). It is found that, in the framework of this interpretation, the charge is equivalent to the momentum. The numerical value of the unit charge has also been calculated on this basis. A system of relations connecting the charge to the constants of radiation (the Boltzmann, Wien, and Stefan-Boltzmann constants, and the fine structure constant) has been obtained; this gives a possibility of calculating all these constants from the unit charge.

**Category:** Classical Physics

[21] **viXra:1212.0025 [pdf]**
*submitted on 2012-12-04 04:48:37*

**Authors:** Daniele Sasso

**Comments:** 10 Pages.

In the Standard Model (SM), mesons are considered hadrons like baryons and therefore have a quark structure. Moreover, they are considered bosons because of the exclusive property, in the world of massive elementary particles, of having integer, and even zero, spin. In this paper we propose both a critical reading of these properties and a new classification of mesons based on the Non-Standard Model (NSM), in which they have a leptonic and electrodynamic nature. The introduction of the Principle of Decay in the new NSM then involves a different classification of elementary particles and, at the same time, different fundamental physical properties such as spin. In this treatise the complex structure of mesons is not considered, because their hadron nature falls away; because of their instability we introduce a new energy particle called the meson neutrino. Moreover, we identify a different physical behaviour between charged mesons and neutral mesons, and we propose for neutral mesons a physical structure compatible with the positronium.

**Category:** High Energy Particle Physics

[20] **viXra:1212.0024 [pdf]**
*submitted on 2012-12-03 17:04:29*

**Authors:** John A. Gowan

**Comments:** 6 Pages. part 2 of 3

It should be easier to understand and appreciate the functional activity and role of the weak force (and its associated Higgs bosons) when seen in its full-spectrum array than when glimpsed, as usual, only in its partial, low energy, electroweak domain. At the electroweak energy level the "W" IVB creates/destroys/transforms single leptons and quarks (and transforms, but does not create or destroy, single baryons). The "X" IVB at the GUT energy level creates/destroys single baryons and transforms/destroys but does not create leptoquarks. The "Y" IVB at the TOE energy level creates/transforms/destroys leptoquarks (including the crucially important electrically neutral leptoquarks). Without the "X" and "Y" IVBs, we have no source for either single baryons or electrically neutral leptoquarks, so we need them both (or their analogs). The primordial heavy leptons or "Ylem" (Gamow's term) are evidently created during the "Big Bang" by a group effort involving all four forces.

**Category:** High Energy Particle Physics

[19] **viXra:1212.0021 [pdf]**
*submitted on 2012-12-03 15:44:50*

**Authors:** Sari Haj Hussein

**Comments:** 15 Pages.

This is a slide presentation of the paper entitled "Scandinavian SD - The SAFE Way", which can be found at: http://vixra.org/abs/1207.0045.

**Category:** General Science and Philosophy

[18] **viXra:1212.0020 [pdf]**
*submitted on 2012-12-03 16:42:35*

**Authors:** Fernando Loup, Daniel Rocha

**Comments:** 26 Pages.

Warp drives are solutions of the Einstein field equations that allow superluminal travel within the framework of General Relativity. There are at the present moment two known solutions: the Alcubierre warp drive, discovered in 1994, and the Natario warp drive, discovered in 2001. However, as stated by both Alcubierre and Natario themselves, the warp drive violates all the known energy conditions, because the stress-energy-momentum tensor (the right side of the Einstein field equations) for the Einstein tensor G_{00} is negative, implying a negative energy density. While from a classical point of view negative energy is forbidden, quantum field theory allows the existence of very small amounts of it, the Casimir effect being a good example, as stated by Alcubierre himself. The major drawback concerning negative energies for the warp drive is the huge negative energy density required to sustain a stable warp bubble configuration. Ford and Pfenning computed this negative energy and concluded that at least 10 times the mass of the Universe is required to sustain a warp bubble configuration. However, both the Alcubierre and Natario warp drives, as members of the same family of solutions of the Einstein field equations, require so-called shape functions in order to be mathematically defined. We present in this work two new shape functions for the Natario warp drive spacetime based on the Heaviside step function, one of which allows arbitrary superluminal speeds while keeping the negative energy density at "low" and "affordable" levels. We do not violate any known law of quantum physics and we maintain the original geometry of the Natario warp drive spacetime. We also discuss briefly horizons and infinite Doppler blueshifts.
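As a generic illustration of the kind of object the abstract refers to (not the two functions actually proposed in the paper, which we do not have), a Heaviside-based candidate must interpolate between the limiting values a Natario shape function takes inside and outside the bubble of radius R:

```latex
% Hypothetical Heaviside-based shape function: n = 0 inside the bubble
% (the interior rides at rest) and n = 1/2 outside, the boundary values
% required of a Natario shape function n(r_s).
n(r_s) \;=\; \tfrac{1}{2}\,\Theta(r_s - R),
\qquad
\Theta(x) \;=\;
\begin{cases}
0, & x < 0,\\[2pt]
1, & x \ge 0.
\end{cases}
```

In practice a smoothed version of the step would be used so that the derivatives entering the energy density remain finite.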

**Category:** Relativity and Cosmology

[17] **viXra:1212.0019 [pdf]**
*replaced on 2013-04-14 08:36:13*

**Authors:** Florentin Smarandache

**Comments:** 183 Pages.

Following the Special Theory of Relativity, Florentin Smarandache generalizes the Lorentz Contraction Factor to an Oblique-Contraction Factor, which gives the contraction factor of the lengths moving at an oblique angle with respect to the motion direction. He also proves that relativistic moving bodies are distorted, and he computes the Angle-Distortion Equations.
He then shows several paradoxes, inconsistencies, contradictions, and anomalies in the Theory of Relativity.
According to the author, not all physical laws are the same in all inertial reference frames, and he gives several counter-examples. He also supports superluminal speeds, and he considers that the speed of light in vacuum is variable.
The author explains that the redshift and blueshift are not entirely due to the Doppler Effect, but also to the medium composition (i.e. its physical elements, fields, density, heterogeneity, properties, etc.).
He considers that the space is not curved and the light near massive cosmic bodies bends not because of the gravity only as the General Theory of Relativity asserts (Gravitational Lensing), but because of the Medium Lensing.
In order to make the distinction between “clock” and “time”, he suggests a first experiment using a different clock type for the GPS clocks, to prove that the resulting dilation and contraction factors differ from those obtained with the cesium atomic clock; and a second experiment with different medium compositions, to prove that different degrees of redshift/blueshift would result.

**Category:** Relativity and Cosmology

[16] **viXra:1212.0018 [pdf]**
*submitted on 2012-12-03 12:30:39*

**Authors:** W. B. Vasantha Kandasamy, Florentin Smarandache

**Comments:** 200 Pages.

The authors introduce the concept of neutrosophic super matrices and the new notion of quasi super matrices. This new notion of quasi super matrices contains the class of super matrices. The larger class contains more partitions of the usual simple matrices. Studies in this direction are interesting and find more applications in fuzzy models. The authors also suggest in this book some open problems.

**Category:** Algebra

[15] **viXra:1212.0017 [pdf]**
*submitted on 2012-12-03 11:47:28*

**Authors:** E. Koorambas, G. Kakavas, G.Lampousis, A.Alambeis, A.Aggelopoulos

**Comments:** 8 Pages.

We report here a new approach to harvesting human energy with piezoelectric generators for mass energy production. Nanogenerators capable of converting energy from mechanical sources into electricity with high effective efficiency are attractive for many applications, including energy harvesters. For the purpose of massive energy production, the concept of piezo-generators detached from the human body and the idea of viewing population dynamics as a source of mechanical stress were tested. Instead of modeling the dynamics of traffic states on individuals, we used spatial configurations of traffic states and temporal dynamics in the metro network of Athens. PMS (Piezoelectric Metro Seats), which contain PZT-stack piezoelectric generators, are considered to be placed in all the EMUs (electric multiple units) of the Athens metro network. The results, whether taking into account the correlation between the number of passengers and the number of train departures or considering the frequency of the departures constant, are promising, and the energy production can reach 40 MWh annually.
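As a quick sanity check of scale (our unit arithmetic, not the paper's model), the quoted 40 MWh per year corresponds to a modest continuous power draw:

```python
# Convert the abstract's 40 MWh/year figure into an average power.
annual_energy_wh = 40e6          # 40 MWh expressed in Wh
hours_per_year = 365 * 24        # 8760 h
avg_power_w = annual_energy_wh / hours_per_year
print(round(avg_power_w))        # average power in watts
```

That works out to roughly 4.6 kW averaged over the year for the whole network.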

**Category:** Classical Physics

[14] **viXra:1212.0016 [pdf]**
*replaced on 2012-12-29 04:50:05*

**Authors:** Demjanov V.V.

**Comments:** 14 Pages. corrected typos and added Russian text

It is shown that the measured speed of light (both phase and group), everywhere on Earth, in the near-Earth air atmosphere, and in cosmic vacuum, has the same variable character as many natural phenomena. To describe the constant tempo of relativistic processes an entirely different measure is needed; it is named here the "tempo of aether-permeability" of electromagnetic waves (EMW) through a light-carrying medium.
Because of its refusal of the aether, SRT has for more than 100 years ignored the "mechanism of aether-permeability" of EMW through space and taken no interest in it. Instead of this mechanism, the erroneous declaration of the "sameness" of the phenomena and velocities of EMW in the "vacuum" of still and moving inertial reference systems (IRS) was imposed. The aether-dynamic theory of EMW created in the 1870s (Maxwell's) revealed only the unity of the electric and wave nature of light phenomena, not the "sameness" of the speeds and procedural characteristics of their implementation, strongly emphasizing the variety of the speeds at which they are implemented across the world.
The aether-dynamic theory of relativity (ADTR), formed on the basis of Maxwell's theory in the period 1890-1904 by the efforts of Lorentz and Poincaré, was based on a more accurate wording of the "postulates of relativity of motion". According to them: 1) the speed of light in different IRS is not constant, but only finite and unattainable by infinitely accelerated objects, since particles were considered "clusters" of the aether; 2) only the formulas of the laws in differently moving IRS are identical (Lorentz-invariant), not the processes implementing them. It is ADTR, not its "aether-free kinematic copy" SRT, that, through the efforts of the genius relativists of the 20th century, has given the successful development of industrial applications. On this worthy basis we explain how the momentum of EMW is maintained at different propagation speeds (c*=c/n) in a medium with refraction n≠1. This resolves the long-standing Abraham-Minkowski dilemma and proves the invariance of the radical of the Lorentz transformations not only for n=1, but also in media with n≠1.

**Category:** Relativity and Cosmology

[13] **viXra:1212.0015 [pdf]**
*replaced on 2014-01-25 19:30:33*

**Authors:** Steven Kenneth Kauffmann

**Comments:** 11 Pages.

The self-gravitational correction to a localized spherically-symmetric static energy distribution is obtained from energetically self-consistent upgraded Newtonian gravitational theory. The result is a gravitational redshift factor that is everywhere finite and positive, which both rules out gravitational horizons and implies that the self-gravitationally corrected static energy contained in a sphere of radius r is bounded by r times the fourth power of c divided by G. Even in the absence of spherical symmetry this energy bound still applies to within a factor of two, and it cuts off the mass deviation of any quantum virtual particle at about a Planck mass. Because quantum uncertainty makes the minimum possible energy of a quantum field infinite, such a field's self-gravitationally corrected energy attains the value of that field's containing radius r times the fourth power of c divided by G. Roughly estimating any quantum field's containing radius r as c times the age of the universe yields a "dark energy" density of 1.7 joules per cubic kilometer. But if r is put to the Planck length appropriate to the birth of the universe, that energy density becomes the enormous Planck unit value, which could conceivably drive primordial "inflation". The density of "dark energy" decreases as the universe expands, but more slowly than the density of ordinary matter decreases. Such evolution suggests that "dark energy" has inhomogeneities, which may be "dark matter".
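The 1.7 J/km^3 figure can be reproduced with elementary arithmetic. The sketch below is ours; the age of the universe (13.8 Gyr) is our assumption, since the abstract does not state the value it used:

```python
import math

# Reproduce the abstract's "dark energy" density estimate:
# bounded energy E = r*c^4/G with r = c * (age of the universe),
# spread over the volume (4/3)*pi*r^3.
c = 2.998e8              # speed of light, m/s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
age = 13.8e9 * 3.156e7   # age of the universe in seconds (assumed)

r = c * age                                 # containing radius, ~1.3e26 m
energy = r * c**4 / G                       # the abstract's energy bound, J
density = energy / (4/3 * math.pi * r**3)   # J per cubic metre
print(density * 1e9)                        # J per cubic kilometre
```

The result lands close to the quoted 1.7 joules per cubic kilometer.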

**Category:** Relativity and Cosmology

[12] **viXra:1212.0014 [pdf]**
*submitted on 2012-12-02 23:30:36*

**Authors:** Rajan Dogra

**Comments:** 26 Pages.

In this paper, it is shown that for the hard gluon emitted in a 3-jet event, the Hamiltonian H exists on a particular gauge orbit only for an infinitesimal time period. During this infinitesimal time period, the uncertainty principle implies that there must be a certain minimum amount of uncertainty, or quantum fluctuation, in the eigenvalue of the Hamiltonian H of the hard gluon emitted in the 3-jet event. One can think of these quantum fluctuations as Gribov copies that appear at some time, move along with the real hard gluon and then are annihilated. Like virtual particles, Gribov copies cannot be observed directly with particle detectors, but their indirect effects, such as anomalous scaling, can be observed and measured.

**Category:** High Energy Particle Physics

[11] **viXra:1212.0013 [pdf]**
*submitted on 2012-12-02 15:39:55*

**Authors:** Janko Kokosar

**Comments:** 12 Pages.

Relativistic mass is not incorrect. The main argument against it is that it tells us nothing more than the relativistic energy does. This paper shows that this is not true, because new aspects of special relativity (SR) can be presented. One reason for this definition is to exhibit a relation between time dilation and relativistic mass. This relation can then be used to present the connection between space-time and matter more clearly, and to show that space-time does not exist without matter. This gives an even simpler presentation than Einstein's general covariance. It therefore opposes the view that SR is only a theory of space-time geometry; it also needs rest mass. The phenomenon of relativistic mass increasing with speed can be used for a gradual transition from Newtonian mechanics to SR. It also shows how relativistic energy can have properties of matter. The postulates used to define SR thereby become still clearer, as does the whole derivation of the Lorentz transformation. Such a derivation also gives a more realistic example for the confirmation of Duff's claims.
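The relation alluded to here can be illustrated numerically. A minimal sketch with illustrative values (not taken from the paper), showing that time dilation and relativistic mass are governed by the same Lorentz factor γ = 1/√(1 − v²/c²):

```python
# Illustrative only: time dilation and relativistic mass both scale
# with the same Lorentz factor gamma = 1/sqrt(1 - v^2/c^2).
import math

c = 2.998e8      # speed of light, m/s
v = 0.8 * c      # an illustrative speed
m0 = 9.109e-31   # electron rest mass, kg

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
m_rel = gamma * m0               # relativistic mass
dt_proper = 1.0                  # 1 s of proper time
dt_observed = gamma * dt_proper  # dilated interval in the lab frame

# Both effects are controlled by the same factor:
assert math.isclose(m_rel / m0, dt_observed / dt_proper)
print(f"gamma = {gamma:.4f}")  # 1.6667 at v = 0.8 c
```

At v = 0.8c both the mass ratio and the time ratio equal 5/3, which is the kind of one-to-one correspondence the abstract invokes.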

**Category:** Relativity and Cosmology

[10] **viXra:1212.0012 [pdf]**
*replaced on 2013-03-08 16:21:21*

**Authors:** Henok Tadesse

**Comments:** 4 Pages. This paper is not consistent with my latest paper.Posted for a different reason.

This paper presents a ‘paradox’ meant to challenge the firmly established belief that the speed of EM waves (light) is an absolute constant, C. An electromagnetic wave travelling on a transmission line (terminated by an antenna) that is itself moving relative to an observer is investigated, to show the contradictions arising from the assumption of a source-independent speed of electromagnetic waves (light) and to invalidate the postulate that the speed of light is a universal constant and the highest possible speed.

**Category:** Relativity and Cosmology

[9] **viXra:1212.0011 [pdf]**
*submitted on 2012-12-02 11:57:18*

**Authors:** Reginald B. Little I

**Comments:** 51 Pages. Previously submitted to sciprint.org in 2005

The Little Rule and Effect describe the cause of phenomena of physical and chemical transformations on the basis of spin antisymmetry and the consequent magnetism of the most fundamental elements, leptons and quarks, and in particular electrons, protons, and neutrons, whose orbital motions and mutual revolutionary motions (the spinrevorbital) determine the structure and dynamics of nucleons, nuclei, atoms, molecules, bulk structures, and even stellar structures. By considering the Little Effect in multi-body, confined, pressured, dense, temperate, and physicochemically open systems, new mechanisms and processes will be discovered, and explanations are given for the stability of multi-fermionic systems: a continuum of unstable perturbatory states settles into stable discontinuum states (in accord with the quantum approximation) to avoid chaos, in ways that have not previously been known or understood. On the basis of the Little Effect, the higher-order terms of the Hamiltonian provide Einstein’s missing link between quantum mechanics and relativity for a continuum of unstable states.

**Category:** Relativity and Cosmology

[8] **viXra:1212.0010 [pdf]**
*submitted on 2012-12-02 12:05:04*

**Authors:** Stephen J. Crothers

**Comments:** 9 Pages.

This document is the transcript of an interview with me conducted by American scientists who asked me to explain, in as simple terms as possible, why the black hole does not exist. I provide five proofs: four of which each prove that General Relativity does not predict the black hole, and one which proves that the theoretical Michell-Laplace Dark Body of Newton’s theory of gravitation is not a black hole. The interview is located at this URL: http://www.youtube.com/watch?v=fsWKlNfQwJU

**Category:** Relativity and Cosmology

[7] **viXra:1212.0009 [pdf]**
*submitted on 2012-12-02 09:55:53*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 2 illustrations

A simple diagram of the voltage-current characteristics of a low-pressure DC discharge tube can illustrate the similarities between many plasma-oriented atmospheric phenomena on Earth.

**Category:** Geophysics

[6] **viXra:1212.0008 [pdf]**
*submitted on 2012-12-02 07:12:32*

**Authors:** Xianzhao Zhong

**Comments:** 10 Pages.

For the free electromagnetic field there are two kinds of wave equation: the Maxwell wave equation and a generalized wave equation. In this paper the author uses a matrix transformation to bring the general quadratic form into a diagonal matrix, which yields both forms of the wave equation: the Maxwell wave equation and the second form of the wave equation. In the latter half of the paper the author establishes two further vibrator differential equations.

**Category:** Statistics

[5] **viXra:1212.0007 [pdf]**
*submitted on 2012-12-02 09:03:58*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 1 Page.

It will be experimentally shown why gravity is not a force. It will be understood that gravitation is probably a secondary process, similar to heat and light, or even a tertiary process, such as buoyancy or wind.

**Category:** Relativity and Cosmology

[4] **viXra:1212.0006 [pdf]**
*submitted on 2012-12-01 20:06:13*

**Authors:** Alexander Bolonkin

**Comments:** 7 Pages.

To protect the Earth from asteroids we need methods for changing an asteroid's trajectory, and a theory for estimating or computing the impulse these methods produce. The author develops several such computations, for: impact of a spacecraft on the asteroid, explosion of conventional explosive shaped as a plate or a ball on the asteroid's surface, explosion of a small nuclear bomb on the asteroid's surface, entry of the asteroid into the Earth's atmosphere, and braking of the asteroid by parachute. The offered method may also be used for braking a spacecraft reentering the Earth's atmosphere from a space flight. The offered theory may also be used to protect the Earth from the impact of a big asteroid.
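For the first listed method, a spacecraft impact, the delivered velocity change follows from momentum conservation. A minimal sketch with illustrative numbers (not taken from the paper), assuming a perfectly inelastic impact on a spherical asteroid:

```python
# Illustrative estimate of the velocity change an asteroid receives from
# a kinetic impactor, assuming a perfectly inelastic (momentum-absorbing) hit.
import math

m_craft = 500.0   # spacecraft mass, kg (illustrative)
v_rel = 10_000.0  # impact speed relative to the asteroid, m/s (illustrative)
density = 2_000.0 # asteroid bulk density, kg/m^3 (illustrative)
radius = 100.0    # asteroid radius, m (illustrative)

m_ast = density * 4.0 / 3.0 * math.pi * radius ** 3  # asteroid mass, kg
impulse = m_craft * v_rel                            # delivered impulse, N*s
dv = impulse / m_ast                                 # velocity change, m/s

print(f"asteroid mass ~ {m_ast:.2e} kg, delta-v ~ {dv:.2e} m/s")
```

Even a delta-v of fractions of a millimeter per second, applied years before a predicted impact, shifts the arrival point by many kilometers, which is why impulse estimates of this kind matter for deflection planning.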

**Category:** General Science and Philosophy

[3] **viXra:1212.0003 [pdf]**
*submitted on 2012-12-01 21:02:56*

**Authors:** Alexander Bolonkin

**Comments:** 8 Pages.

Currently, reentry of the USA Space Shuttle and the Command Module of lunar ships burns a great deal of fuel to reduce reentry speed, because the temperatures are too high for atmospheric braking by conventional fiber parachutes. Recently, high-temperature fibers and whiskers have been produced which could be employed in a new controlled rectangular parachute to create the required negative lift force. Though it is not large, a light parachute decreases the spacecraft speed from 8 km/s (Shuttle) or 11 km/s (Apollo Command Module) to 1 km/s, and reduces the spacecraft heat flow by 3–4 times (not exceeding the given temperature). The parachute surface is open on its back side so that it can efficiently emit heat radiation toward the Earth's atmosphere. The temperature of the parachute is about 600–1500 °C. Carbon fiber is able to keep its functionality up to a temperature of 1500–2000 °C. There is no conceivable problem in manufacturing the parachute from carbon fiber. The proposed new method of braking may be applied to old spaceships as well as to newer spacecraft designs.

**Category:** Classical Physics

[2] **viXra:1212.0002 [pdf]**
*submitted on 2012-12-02 04:00:28*

**Authors:** Anatoly V. Belyakov

**Comments:** 4 pages, including 9 figures. Published: Progress in Physics, 2010, v.4, p.36-39.

The author suggests that frequency distributions can be applied to modelling the influence of stochastically perturbing factors on physical processes and situations, in order to look for the most probable numerical values of the parameters of complicated systems. In this way, very visual spectra of partially undetermined complex problems have been obtained. These spectra allow one to predict the probabilistic behaviour of the system.

**Category:** General Science and Philosophy

[1] **viXra:1212.0001 [pdf]**
*submitted on 2012-12-01 03:51:37*

**Authors:** s Orlov

**Comments:** 5 Pages.

A new type of trajectory for distant space flights is investigated. Unlike the usual trajectory of direct flight to a celestial object (the Moon), it is proposed to exploit the asymmetry of the gravitational field and to carry out the flight bypassing the strongest gravitational impact on the spacecraft. This leads to an energy saving of 20-30%.

**Category:** Relativity and Cosmology