By Douglas Vogt and Gary Sultan
Vector Associates, 11250 Old St Augustine Rd., #15, Suite 133, Jacksonville, FL 32257; Sales@vectorpub.com
Light
We have all marveled at the spectrum of color produced by a prism or a rainbow. We have all turned on light bulbs that produce every color imaginable. Even scientists still wonder what light really is; so far it has defied understanding. Is it a particle, as Newton and quantum mechanics envisioned, or is it an electromagnetic wave, as theorized by Maxwell?
Quantum Mechanics Viewpoint
The theory of quantum mechanics was first developed by Max Planck in 1900. Later contributors included Albert Einstein. Planck was trying to find the energy distribution of light emitted from a heated object. He concluded from his experiments that the energy could only be radiated in bundles, which he called quantums (Einstein later called these bundles photons). Planck calculated that the energy of each quantum was Planck's constant (h = 6.626 × 10⁻²⁷ erg·sec) times the frequency of radiation. Planck's conclusion was that solid matter can only radiate quantums of energy in the form of light.
Einstein added to Planck's conclusion by trying to explain the photoelectric effect, in which electrons are ejected from a metal when light strikes it. It was found through experimentation that the intensity of the light has little to do with the velocity of the electrons "produced"; rather, the velocity is in direct proportion to the frequency of the light. It was found that the electrons "freed" had a constant velocity. Einstein theorized that each element had a given number of electrons. The electrons were held in place by magnetic forces. When light with sufficient energy strikes the element, the energy overcomes the attractive forces holding the electrons to the atom. If there is any excess energy left over, it is imparted to the electron as kinetic energy. The quantity of energy needed to release an electron varies from element to element. The surface electrons receive the greatest amount of energy; therefore, they have the greatest amount of kinetic energy. Less energy is necessary to overcome the binding force on the surface of the metal than in its interior. In summary, the Einstein-Planck theory considers light as particles called photons, each photon having a certain amount of energy depending on its frequency (color). The momentum of each photon is equal to Planck's constant times the frequency, divided by the speed of light (hf/c).
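For readers who want to see the arithmetic, here is a minimal sketch in Python of the Einstein-Planck relations quoted above, using the modern SI values for h and c; the green-light frequency is just an illustrative choice.

```python
# Photon energy and momentum from the relations discussed above:
# E = h*f and p = h*f/c.
h = 6.626e-34   # Planck's constant, joule*sec
c = 3.0e8       # speed of light, meters/sec

def photon_energy(f):
    """Energy of one photon of frequency f (hertz), in joules."""
    return h * f

def photon_momentum(f):
    """Momentum of one photon, p = h*f/c, in kg*m/s."""
    return h * f / c

f_green = 5.5e14                 # green light, ~5.5e14 Hz
print(photon_energy(f_green))    # ~3.6e-19 J
print(photon_momentum(f_green))  # ~1.2e-27 kg*m/s
```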
The quantum theory is not considered perfect because it cannot explain the phenomena of interference lines and the diffraction spectrum formed by a prism. Maxwell's electromagnetic theory of light was able to explain those phenomena but could not explain the photoelectric effect.
Multidimensional Reality Explanation
As far as we can see, this theory is the only one that can logically explain all the observable phenomena of light. The first and most important question to be asked is: what is light? The answer is simple when analyzed using our theory. Light is the demodulated information of an element passing us at the speed of approximately 3 × 10⁸ meters/sec. The reason it travels at this "speed" is because the head device is passing over the information that makes up the elements at that particular speed. The information is in turn transmitted and modulated into our dimensional existence at the rate of approximately 3 × 10⁸ meters/sec. This is how light is produced. First let's consider a stable element like sodium. When no potential is added to sodium, it gives off no light. The sodium atom has its own group of frequencies that make it up. These frequencies include a carrier wave frequency, several frequencies that make up the physical information of sodium, and some clocking and synchronizing frequencies. This could amount to ten or more frequencies bundled together. Let's say these frequencies normally modulate at 1,500 GHz (1,500,000,000,000 cycles per sec.); as potential is added, the sodium frequencies start to produce higher harmonics of their information (Figures 4.1 and 4.2). Each series of frequencies is produced by additional potential. In other words, series six needs six times more potential than series one. This is not to say that the potential follows a linear relationship; we use this example merely to simplify the explanation. The relationship might very well be exponential. When enough potential is added and these higher harmonic frequencies are produced in the visible light spectrum, we see the sodium as incoherent light (white light). We are able to separate this incoherent light into its unique spectral lines (frequencies). The device used to separate the incoherent light is called a prism, and the phenomenon is called dispersion.
Figure 4.1 The light spectral line series of sodium
Figure 4.2 Higher harmonic frequencies of an element
The Phenomena of a Prism
A prism can be made of any clear, hard material. When light is passed through the prism at an angle to the surface (Figure 4.3), the incoherent light is immediately divided into separate color (frequency) lines. This is called dispersion. Each element in the universe has its own unique spectral frequency lines. Another phenomenon happens at this time: the speed of light slows down when it enters the prism. In fact, light slows down when it passes through anything denser than a vacuum. This decrease in velocity is directly related to the refractive index of the material of which the prism is made. Denser elements have higher refractive indexes than do less dense elements (Figure 4.4). The phenomena of diffraction, dispersion, and the decrease in velocity are directly related. The traditional explanation for dispersion is that short wave lengths are bent more than longer wave lengths of light. This means that the violet colors are bent more than the red colors. The problem is that scientists never explain what is so unique about the wave lengths. The next important point to remember is that as the wave length becomes smaller, the potential of the frequency increases.
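The traditional account of refraction summarized above is usually stated quantitatively as Snell's law, n1 sin A = n2 sin B. A short sketch follows, with illustrative refractive indexes for crown glass; the index values are assumptions for the example, not figures from the text.

```python
# Snell's law: n1*sin(A) = n2*sin(B). The refraction angle B is
# smaller (the ray is bent more toward the normal) when n2 is larger,
# and n is slightly larger for shorter wavelengths, which produces
# the dispersion described in the text.
import math

def refraction_angle(incidence_deg, n1=1.0, n2=1.52):
    """Angle of refraction (degrees) for light entering medium n2 from n1."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Violet is bent slightly more than red because its index is higher
# (e.g., ~1.53 vs ~1.51 in a typical crown glass).
print(refraction_angle(45.0, n2=1.53))  # violet, ~27.5 degrees
print(refraction_angle(45.0, n2=1.51))  # red,    ~27.9 degrees
```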
The reason why the light bends and dispersion appears is the most difficult concept explained in this book, but ours is the only theory that can explain it logically.
First you must realize that the white incoherent light you direct toward the prism represents the information of one or more elements. We now refer you back to the tape analogy. When you raise the potential of an element high enough (in the light spectrum
Figure 4.3 Two examples of dispersion
and above), you have actually caused some of the information of that element not to exist (demodulate) in this time and space. We see it as light. That bit of information is no longer in this dimension. It is back in the first dimension as a small domain of information on the tape. Its velocity on the tape is zero, whereas the velocity of our information being modulated is 3 × 10⁸ meters/sec. To us it appears that the light is traveling faster, but in reality it is stopped; we are the ones that are moving faster on the tape. The reason why the light bends (refraction) in a prism is exactly the same reason why it bends and appears to slow down while passing through a strong gravitational field. The gravitational field is really
Example of refraction. A is the angle of incidence, B is the angle of refraction. Angle B is different for each element used.
Figure 4.4 Diagram showing refraction and reflection
strong concentrations of information going to a planet or a star. Usually this gravitational field is not as strong as the modulated information of a prism, as exemplified by its shape in this dimension. Also, a gravitational field does not have an immediate effect on the light beam but rather its force follows the inverse square law. The result is that the light beam gradually curves in, toward the modulation point of the gravitational field (see Figure 4.5).
When the light beam enters the prism, it is immediately bent and changes velocity. This is much different from the effect in a strong gravitational field, because the light is not just passing through concentrations of information but is literally passing through the domains of information of the prism in the diehold. The reason why the velocity changes is because the modulation velocity of the prism is 3 × 10⁸ meters/sec., and the modulation velocity of the light is 0. The result is that the information of the light increases its velocity slightly by being "pushed," or affected, by the much denser information of the prism. Once the light beam passes through the domains of information of the prism, the light resumes its zero velocity. To us in this dimension, the light appears to speed up. The immediate dispersion of the light when it enters the prism is due to the different energy levels of the frequencies of the light,
the violet and ultraviolet colors possessing much more potential than the infrared or red colors. The analogy for the bending and the potential is exactly like the example of the effects on measuring rods and clocks as described in Chapter 3 (Figure 4.6). As mentioned in that chapter, saying that energy is necessary to accelerate an object is equivalent to saying that the object has that much more potential. An equivalent statement can be made for an ultraviolet domain of information (color). It possesses a great deal more potential than the infrared, so you can look at it as possessing a greater mass, which is the result of a greater amount of information than the infrared end of the spectrum. If it possesses a greater mass and gravitational field, then it will be more attracted to the domains of the prism. This is why the violet light is bent in more toward the prism than the red light. This analogy holds equally true for much lower levels of wave form energies, like microwaves. Using Figure 4.3, the microwave information would pass straight through the prism more or less undeflected by the information of the prism.
Figure 4.5 Diagram of a light beam passing a gravitational source
Electromagnetic Properties of Light
Traditional Theory. The electromagnetic theory of light was first introduced by James C. Maxwell. He theorized that light consists of transverse waves, similar to the lower frequency electromagnetic waves (Figure 4.7). The electrostatic field vector (E) and the magnetic field vector (H) that make up the wave are perpendicular to each other and to the direction of the wave. The E and H waves are said to oscillate in phase: E and H both reach maximum value at the same time.
Figure 4.6 A visualization of dispersion in the diehold
E = electrostatic field vector; H = magnetic field vector. The diagrams show that E and H are pulsing in phase with each other.
Figure 4.7 "Traditional" diagram of the electromagnetic waves of light
Multidimensional Reality Explanation
We agree with Maxwell’s concept of light being an electromagnetic wave except for one point. The idea that the E and H wave pulsate
in phase is incorrect because it goes against all observations in electronics, from the back electromotive force in a coil to the fact that a current lags the magnetic field by 90°. To look at it logically (Figure 4.8), the magnetic information must be present before the information of the electrostatic field. We perceive these oscillations as happening simultaneously only because the frequency at which they oscillate is too rapid for our detection.
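The 90° lag appealed to here is standard circuit theory: for an ideal inductor, v = L di/dt, so a sinusoidal current lags the inductor voltage by a quarter cycle. A quick numerical illustration follows; L and the frequency are arbitrary example values.

```python
# The inductor voltage v = L * di/dt, approximated with a central
# finite difference. The voltage peaks a quarter period before the
# current does: a 90-degree phase lag of current behind voltage.
import math

L = 0.1          # henries (example value)
f = 60.0         # hertz (example value)
w = 2 * math.pi * f
dt = 1e-6

def i(t):                     # sinusoidal current
    return math.sin(w * t)

def v(t):                     # v = L * di/dt
    return L * (i(t + dt) - i(t - dt)) / (2 * dt)

quarter = 1 / (4 * f)
print(round(v(0.0), 3))      # ~w*L: voltage at maximum...
print(round(i(0.0), 3))      # ...while current crosses zero,
print(round(i(quarter), 3))  # and current peaks T/4 later (= 1.0)
```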
Figure 4.8 MDR diagram of the electromagnetic waves of light
Polarization of Light
Incoherent light can be polarized if it is passed perpendicular to the surface of a crystal. The two parts of the information for the crystal are lined up perpendicular to each other (Figure 4.9). The crystal is said to be vertically polarized if the magnetic information is lined up horizontally. The light will pass along only the same plane of direction as the electrical field. The electrical field is 90° out of phase from the magnetic information. As the randomly polarized light strikes the surface of the crystal, any light waves that are not vertically polarized will be absorbed by the magnetic information of the crystal.
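The absorption described above is quantified in conventional optics by Malus's law, I = I0 cos²θ. A minimal sketch:

```python
# Malus's law: the component of the light not aligned with the
# polarizer's axis is absorbed, leaving intensity I0 * cos^2(theta).
import math

def transmitted_intensity(i0, theta_deg):
    """Intensity passed by an ideal polarizer at angle theta_deg
    to the light's plane of polarization."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(transmitted_intensity(1.0, 0))   # 1.0  (aligned: all passes)
print(transmitted_intensity(1.0, 60))  # 0.25
print(transmitted_intensity(1.0, 90))  # ~0.0 (crossed: all absorbed)
```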
Figure 4.9 Polarization of light
In conclusion, the other directional wave lengths are cancelled out because their signals are being grounded out.
Interference Lines
This phenomenon occurs when incoherent light is passed through a very narrow slit. The interference lines appear as light and dark fringes near the edges. The traditional theory explains this phenomenon by saying that half of the light wave is being distorted.
Multidimensional Reality Explanation
The reason the interference lines appear when light passes near the edge of any object is because, as mentioned in Chapter 3, the edge of any surface is where the maximum modulation area is to be found; therefore, the greatest amount of surface potential will be located along this plane. As light passes through this plane, or modulation zone, the light is bent in the same manner as the effects previously mentioned in prisms.
Ultraviolet Light
The human eye can only detect light waves between 7,600 angstrom units (Å) on the red end of the spectrum and about 3,130 Å at the ultraviolet end. The most fascinating part of the light spectrum is the ultraviolet end. The visible light rays cover only a small
portion of the light spectrum (a 2-fold range of wave lengths), but the ultraviolet spectrum covers a 100-fold range of wave lengths. (1-p4) This part of the light spectrum represents a greater amount of energy than any other part of the spectrum below it.
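A quick check of the range figures quoted here; the band edges are approximate, taking the visible band as roughly 7,600 Å down to 3,800 Å and the ultraviolet band as roughly 4,000 Å down to 40 Å.

```python
# Ratio of longest to shortest wavelength in each band.
visible = 7600 / 3800       # roughly a 2-fold span of wavelengths
ultraviolet = 4000 / 40     # roughly a 100-fold span
print(visible)      # 2.0
print(ultraviolet)  # 100.0
```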
Fluorescence
Fluorescence is the phenomenon of a substance giving off a particular color (frequency) when it is exposed to a higher frequency color. The glowing stops when the exposure to the higher frequency light stops. There is no traditional explanation for this phenomenon. Some of the elements and minerals that will exhibit fluorescence are fluorite, some diamonds, rubies, and calcite. Each element will give off its own distinct color frequency.
Multidimensional Reality Explanation
Fluorescence can be produced in three different ways: by low-frequency electric discharge, by heat from the infrared spectrum, or by the higher, more energetic frequencies of ultraviolet light. The first and second methods will produce the effect but take much longer compared to ultraviolet light. When ultraviolet light strikes a fluorescent material, it instantly glows. The reason for this effect is that the ultraviolet has so much potential along a broad band of frequencies that some of those frequencies are bound to be higher energy level harmonic frequencies of the element. The result is that the information of the element is immediately raised to the visible light spectrum.
Phosphorescence
Phosphorescence is like fluorescence except that the illumination continues after the higher frequency light is turned off. This illumination could continue for a considerable time afterward. In addition to ultraviolet light, X-ray radiation can also produce this effect. Some of the substances that phosphoresce are some diamonds (carbon), willemite, kunzite, phosphorus, and radium.
The reason these and other substances phosphoresce is that the higher frequency harmonics of the element are being raised across its spectral frequencies. None of its ultraviolet spectral lines are being missed. The result is that the information for the element at lower frequencies is also having its potential raised simultaneously. Since the ultraviolet light possesses a tremendous amount of potential, much of this potential is transferred to the frequencies of the element. The effect is similar to a capacitor: after the capacitor has been charged up, it will gradually release its energy. The light emitted by the element is always of lower frequency (color) than the ultraviolet light, which is usually invisible. The light we see represents the strong light spectral lines of that element. Since the element gives off no heat from being exposed to ultraviolet light, it seems possible that the increased potential of the element is actually happening in the first dimension and not in this dimension at all. We merely see the results of the increase in potential by observing light in this dimension.
The Light Spectrum and Iron
As mentioned in the chapter on magnetism, iron is the only element in the universe that can be magnetized. Our theory is that the iron element must be very close to, or a first harmonic of, the carrier wave frequency of all information. This hypothesis holds if one examines the light spectral lines of iron. Iron has the second greatest number of light spectral lines (4,612). (2) The only element that has more spectral lines is cerium, with 5,739, but there is a big difference between these two elements. Iron has a total of 275 strong spectral lines. These are lines whose light intensity (measured on a scale from one to 1,000) measures over 200. Cerium has only five such strong light spectral lines. We theorize that many of the spectral lines we associate with iron are really the spectral lines of the carrier wave. If we examine the strong spectral lines of the next most numerous elements, such as cobalt and nickel, we find that many of those elements' spectral lines are found no more than 1 Å away from a spectral line of iron. The same observation holds true for many other elements, especially ones whose crystal shapes are also octahedral. We do not believe it is mere coincidence that iron has more than twice the
number of strong spectral lines of most of the other elements. If some of these spectral lines are in fact frequency representations of various carrier waves, then we could say that this is why iron seems to have enough spectral lines to represent two elements.
Laser Holograms
As mentioned in the introduction to this book, there is a strong possibility that many of man’s inventions (such as television, tape recorders, radios, etc.) may be mirror images of the technology which makes up his own existence. This would mean that many of man’s inventions are excellent clues and analogies to his own existence. One of the best clues to dramatically prove our theory of existence is man’s invention of the hologram. A hologram is a three-dimensional image produced by coherent light. The object observed appears to have three-dimensional qualities. In fact, some advanced laser holograms produce images that make it impossible to tell the difference between the image and the actual object.
As an example, Figure 4.10 shows a patent by William C. Jakes, Jr. (Patent No. 3,566,021). The patent is for a real-time, three-dimensional television system. Other laser television systems have been patented; we merely use this one as an example to show the parallel similarities between our own existence, as theorized by us, and the image created by a hologram television.
"This disclosure relates to a television system that utilizes wave front reconstruction techniques to provide a real time three-dimensional image at the receiving end of the system, with the image changing in perspective as the object and/or observer moves. The coherent light from a laser is first modulated at a frequency in the microwave range and one sideband of the coherent light is filtered out and used to illuminate an object scene. The light reflected from the object scene impinges on a photodetector while a narrow reference beam of coherent light raster scans the photodetector to thereby generate a signal which is modulated in phase and amplitude in accordance with the interference pattern formed on the photodetector. The signal carrying the modulated phase and amplitude information is then transmitted to a remote receiver. At the receiving end, the phase and amplitude modulated information is recovered and stored, a frame at a time, in respective storage devices. At the end of a complete frame the stored
Figure 4.10 Diagram of a laser holographic TV
information is read out and respectively applied to an array of phase and amplitude optical modulators. Also, at the end of a complete frame of received information, a second laser at the receiver is pulsed with the light therefrom directed toward said array. In this manner, an image of the original object is obtained at the receiver. The described operation is continued a frame at a time." (3-p168)
There are direct analogies between this type of invention and our own existence. In Figure 4.10, Object 15 could be analogized as being the information in the diehold. Items 13 and 23 can be considered the tapehead. In this invention it is actually the two parts of the laser beam directed at the object that pick up the information that makes up the image of the object. The microwave oscillator (Item 18) and the optical modulator (Item 16) can be considered the carrier wave and synchronizing frequencies which we have been talking about. This information is directed to Items 25 and 26, which convert the light information to electromagnetic
waves. This is similar to what we define as the second dimension, or the transmission dimension. Items 32 and 34 are the phase detector and frequency modulator for the vertical lines of information of the image. Items 31 and 33 are the amplitude detector and frequency modulator of the horizontal information. In our existence there are no physical phase or amplitude detectors in this dimension. The diehold somehow uses the phase angles and potentials from its eight different transmitting sides in such a manner that the signal modulates itself into existence. Item 30 can be considered a continuation of the carrier wave frequencies.
It is within man’s grasp to have a computer produce the images that we see without the necessity of photographing any object. This would be even closer to the way of our own existence. Sometimes an analogy is so obvious and so simple that it is difficult to comprehend its application to man’s own existence. Maybe this is a result of man’s fear of knowing the truth of his own existence.
REFERENCES
1. Koller, L. R., Ultraviolet Radiation (London: John Wiley & Sons, 1952).
2. Zaidel, A. N., Prokof'ev, V. K., and Raiskii, S. M., Tables of Spectrum Lines (London: Pergamon Press, 1961).
3. Kallard, T. (ed.), Holography, State of the Art Review, 1971-72 (N.Y.: Optosonic Press, 1972).
The Atom
Over 2,000 years ago the Greek philosophers used the word atom to describe the smallest bit of matter. They taught that the atom was indivisible, the most perfect particle of matter.
In the 20th century, scientists believe they have proved the atom has an internal structure and is not indivisible. They believe they have discovered that the atom is made up of many stable and unstable particles, some of which have internal structure. We will attempt to show that the Greeks may have been more correct than our present-day scientists give them credit for. Even though we have all types of measuring devices and other sophisticated equipment today, all the ancient Greeks had was a scientific philosophy they had received from the Egyptians; and who knows for sure how and from whom the Egyptians received it.
Almost all the important basic theories of physics were written before 1915. It is unimportant whether we feel these theories were right or wrong. What is important is that they were thought out and developed before this date. There has been very little in the way of revolutionary breakthroughs of thought in physics since then. There are a few exceptions, as in the fields of astronomy
and geophysics. The only correlation with 1915 is the development of sophisticated testing equipment. Before 1915, electronics was in its infancy. Testing and measuring equipment was gross compared to what was to be developed from the 1920's through to the present day. What seems to have happened after about 1915 is that scientists began to rely more and more on the results of their equipment. They did less and less reasoning about what they were observing, depending on the artificial senses of their sophisticated equipment rather than on their minds. Many scientists have forgotten that their intelligence is the best tool they have. Many PhDs in physics today have become nothing more than highly skilled technicians of the few pieces of equipment they use. Their whole existence is centered around a cyclotron, electron microscope, etc. They usually can't relate and apply an observation from their field of physics to another field of physics. They have become too specialized. They can see the tree in front of them, but they don't see the forest. People have forgotten that everything in the universe is related to one idea, and the purpose of science was to discover this single idea, not to be buried in a pile of useless, unrelated information produced by all types of sophisticated electronic instrumentation.
However, in the fields of astronomy and geophysics, there have been advances. The astronomer has only two main tools: the telescope and the radio telescope. Since he is not actually able to travel to the far distant stars, black holes, and planets, he must instead theorize about the conditions on those celestial bodies. He must rely more on his inductive reasoning power than on any of his equipment. The same can be said for the geophysicist, since he has not been able to explore the interior of the earth. He must rely on new theories to try to explain the movements of the continents, the earth's magnetic field, and the heat source at the center of the earth.
Particles or Waves?
The first insights as to how the atom worked were glimpsed by using radioactive elements such as radium and uranium. When these elements decay, they give off alpha, beta, and gamma ray
particles. These particles were later found to be the fundamental parts of the atom. The alpha rays were found to be helium nuclei (protons), the beta rays were electrons, and the gamma rays were associated with X-rays, but of a shorter wave length. The gamma rays were electromagnetic forms of radiation which were undeflected by magnetic or electrostatic fields. It was found that the alpha particles could be used very effectively to probe the structure of the atom. This was because their mass is about 8,000 times greater than an electron's. (2-p402) The problem was finding a way to accelerate them fast enough to produce enough potential to "split" the atom. The answer was finally found by using the radioactive element polonium. The alpha particles emitted from this element had a velocity of 10,000 miles per second (1.6 × 10⁷ meters/second). This was the fastest speed available before the invention of the cyclotron. Scientists had theorized that the atom had an internal structure. They hoped to prove their theory by striking the nucleus of an atom with an alpha particle to see if the atom would break up.
In 1910 cosmic rays were found to be highly penetrating particles that could also be used to bombard atoms. Scientists were able to show that cosmic rays originated in deep space; many seemed to come from our own sun. They found that these rays consisted of "hard" and "soft" components. The soft particles could be stopped by four inches of lead, whereas the hard particles needed 80 inches. (1-p215-16) It was also discovered that cosmic rays were very energetic protons, usually possessing a potential greater than 500 MeV. Some solar eruptions cause potentials in excess of 1020 MeV. (1-p216) These discoveries all confirmed what Nikola Tesla theorized in the early 1890s but for which he received no credit. Scientists use a device called a "bubble chamber" to detect the trails of cosmic rays striking other atoms. A bubble chamber is a chamber containing a superheated liquid; when an ionized cosmic ray particle passes through the chamber, it forms a trail of small vapor bubbles, thereby leaving a "track" of the path where the cosmic ray passed.
A scientist by the name of Louis de Broglie was working on a theory in 1924 to explain the wave-like properties of matter. He wanted to try to explain the wave-like and quantum (particle) characteristics of light. He felt that since light waves appeared to
have particle-like characteristics, it might be possible that electrons (assumed to be particles) also had wave-like characteristics. He felt that these wave properties were undetectable because the wave length of their frequencies was so short that our instruments could not detect them. He deduced that short wave lengths (high frequency) do not bend or diffract as easily as long wave lengths (lower frequency). Short wave-length particles which travel in straight lines wouldn't spread out after hitting other particles; they would be reflected from other particles as a bullet is reflected when it strikes a hard surface. He concluded that electrons would exhibit these properties, as would X-rays, gamma rays, and cosmic rays. This means that their wave lengths were even shorter than ultraviolet light. He concluded that since their wave length was so much shorter than light's, they would not exhibit the phenomena of diffraction, dispersion, and interference lines. It was also naturally assumed that an electron had a certain given mass. De Broglie was further encouraged that his theory was correct by the diffraction pattern formed by X-rays passing through a crystal. The diffraction patterns did not look like the patterns formed by light, but scientists used this as adequate evidence that the X-rays had a higher frequency than light. From the work done by de Broglie, scientists extended the electromagnetic spectrum (Figure 5.1) to include X-rays, gamma rays, and cosmic rays above the ultraviolet light waves, and in that order.
Multidimensional Reality Explanation
What is the real nature of these alpha, beta, gamma, and cosmic rays? Are they particles, or are they wave forms, and what is their real frequency? Answering these questions determines their actual placement on the electromagnetic spectrum.
Since great numbers of atoms collectively have mass and can be seen in this dimension, it is logical to say that the primary part of the atom, the proton, has a mass and exists in this dimension. As mentioned earlier, the alpha particle and the cosmic rays are mostly made up of protons; so we can conclude that they are in this dimension and have a given mass. In other words, each is a particle. Let us now consider the beta rays or electrons. Scientists do know that the beta particles do not exist in the nucleus of the atom but
Figure 5.1 The traditional electromagnetic spectrum (gamma rays, X-rays, ultraviolet, visible light)
rather are created at the surface at the instant of emission. As mentioned in Chapter 3, we theorize that the electron is actually a small domain of potential in the first dimension. We will explain how an electron is created later in the chapter, but for now we will say that the electron is not really in this dimension and, therefore, has no mass, only potential. It can affect mass in this dimension by adding potential to an atom. If an electron does possess a great deal of potential for its size, it could behave like a particle bouncing around between the atoms or be accelerated to great velocities. Eventually the electron will be grounded by an atom. The main factor which seems to determine whether an electron will be grounded by an atom or reflected from it is the frequencies of each atom. The extent to which the electron will be absorbed by the atom depends on whether the frequency of the atom is similar to the frequency of the electron, be it a first, second, or third harmonic. Atoms which "absorb" electrons can be considered conductors. Atoms whose frequencies are dissimilar from the frequencies of the electrons will not absorb the electrons as well, and act as insulators.
X-rays are formed when streams of electrons strike atoms. The X-rays seem to have particle-like characteristics. When two streams of X-rays are directed at each other at a certain angle, these particles will deflect from each other. This is totally unlike light, which has virtually no effect on another beam of light. This seems to indicate that, unlike light, X-ray particles are at least partially in this dimension. Scientists use diffracted beams of X-rays passing through a crystal as evidence that X-rays are just like light and that they possess more energy and are therefore of a higher frequency than light. There is one very big problem with this idea: the diffraction pattern formed by X-rays is totally different from dispersion or refraction of light through a crystal. As illustrated in Figure 5.2, this photograph of a diffraction pattern of a copper crystal shows that the X-rays are being deflected 360° around their point of impact. This tends to prove only one thing: the X-rays seem to leave particle-like tracks on the photographic plate; they are not like light, nor do they have the velocity of light. These X-ray diffractions also indicate something very interesting about the atoms. They seem to indicate that atoms have geometrical shapes and are not really round spheres, as envisioned by the Bohr model of the atom. You will also notice in Figure 5.3 that the
Kossel lines from a copper crystal stimulated by X-rays; plate parallel to [100] (Borrmann).
Figure 5.2 Photo of X-ray diffraction of a cube crystal
diffraction pattern is exactly like a stereographic projection of a cubic crystal used by crystallographers to describe the shapes and angles of crystals.
The last are the gamma rays. These rays are also given off by decaying radioactive elements. They are defined as being the same as X-rays, except that they have a shorter wave length; in other words, they possess a higher potential. It is further theorized that these gamma rays are emitted in quantums of energy called photons. This description is also used to describe light; but as discussed in the previous chapter, light is not a particle, nor should it be considered as quantums of energy, as envisioned in quantum mechanics. Since these gamma rays do not travel at the speed of light nor behave like light, they are not truly light.
Going back to the electromagnetic spectrum, you will notice that by using de Broglie’s wave theory, scientists concluded that the X-rays, gamma rays, and cosmic rays were of shorter wave
A. Stereographic projection of isometric forms: cube (100), octahedron (111), dodecahedron (110), tetrahexahedron (210), trisoctahedron (221), trapezohedron (211), hexoctahedron (321). B. Spherical projection (after Penfield).
Figure 5.3 Stereographic projection of a cube crystal
C. Relation between spherical and stereographic projections.
Figure 5.3 (continued)
lengths than light. We believe this is quite wrong; logically they belong below the infrared light spectrum.
This is why: according to de Broglie's theory, an electron has a mass; our theory is that the electron has no mass. One of the first formulas used by de Broglie is momentum = mass × velocity (p = mv). This formula comes from classical mechanics. It works well for things that are in this dimension; but when things are on the hairy edge of our dimension, these formulas just do not work. The next formula in de Broglie's theory is the calculation for wave length, λ = h/mv, where h = 6.626 × 10⁻³⁴ joule·sec is Planck's constant. From this formula, as you can see, if m is 0, the equation means nothing. According to our theory of existence, Planck's hypothesis of quantums of energy seems highly doubtful. Also, since it is basic to his theory that only matter could emit quantums of energy, it seems possible that the value of Planck's constant may be wrong. De Broglie's final formula is
wave length λ = h/√(2Vem), where e is the charge of an electron, m is the mass of an electron in kilograms, and V is the potential difference measured in volts. As you can see from his formula, as the voltage increases, the wave length of the particle becomes extremely small. Since Planck's constant (6.626 × 10⁻³⁴ joule·sec) is such a small number in the numerator, no matter what voltage is in the denominator, the result is still going to be a wave length smaller than visible light. But considering the fact that m (mass) in the formula is 0, the equation comes out to 0. In other words, a mathematical formula was created on several premises which we believe to be wrong; it was designed to produce the desired results. Whether the results fit reality and observations seems to be irrelevant to what has been taught. We get very suspicious when we see formulas or constants such as E = mc², which will produce a large value no matter what number is plugged in, or a number like Planck's constant, which is so incredibly small that it cannot be accurately measured.
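Whatever one makes of the authors' objection, the textbook formula they quote can be evaluated numerically. A sketch using the standard values for e and m:

```python
# The accelerated-electron wavelength quoted above:
# lambda = h / sqrt(2*V*e*m). The arithmetic does give wavelengths
# far below visible light for ordinary accelerating voltages.
import math

h = 6.626e-34    # Planck's constant, joule*sec
e = 1.602e-19    # electron charge, coulombs
m = 9.109e-31    # electron rest mass, kilograms

def de_broglie_wavelength(volts):
    """Wavelength in meters of an electron accelerated through `volts`."""
    return h / math.sqrt(2 * volts * e * m)

print(de_broglie_wavelength(100))     # ~1.2e-10 m (about 1.2 angstroms)
print(de_broglie_wavelength(10000))   # ~1.2e-11 m
```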
From the above discussion, you can see that they have not really calculated the frequency of these particles. This is not to say that these particles do not have a frequency. They do, and this will be explained later.
The other fact that seems to make their theory about the frequencies of those particles wrong is the logic of their electromagnetic spectrum. The lower frequencies, up through the microwave range, can be produced by oscillating matter in this dimension at various frequencies. At some higher point in these frequencies, the matter no longer appears in this dimension; it appears to us as light. What de Broglie is trying to convince us of is that after this piece of matter has left this dimension and is strictly a wave form, it then comes back again as a piece of matter called an X-ray, gamma ray, or cosmic ray particle. How do they explain how an object can first be here at a lower frequency, disappear to become light, then come back in this dimension at an even higher frequency as a particle? Even to the layman this sounds illogical. Scientists are going to have to decide that the X-rays, gamma rays, and cosmic rays are either only wave forms or only particles. According to our Theory of Multidimensional Reality, these rays really belong just below the infrared spectrum. They are just below the stage where an atom has so much potential that it is able to leave this dimension and go to the first dimension (appear as light).
The Bohr Model of the Atom
Most of us have been taught this theory in school at one time or another. It is simple and works well for chemists, so they can understand what they are doing. This does not mean that it is correct. It just means that it works in a small frame of reference. Several changes have been made by scientists since its introduction, but the theory remains fundamentally the same. The following is a brief description of the theory including some of the changes made to it. This description is to familiarize you with current theory before we explain our theory, which is completely different. With our theory we were able to explain all phenomena of nuclear physics, simply by using the one basic theory.
The Bohr model of the atom starts as a solar-system type conceptualization of the atom. The center of the atom, which possesses all the mass, is called the nucleus; it is made up of protons (positive charge) and neutrons (no charge). The electrons (negative charge) circle the nucleus, balancing the charge of the protons. The atomic number of the element is equal to the value of the charge. Bohr's theory had to be able to describe the light spectrums of different elements. He did this by borrowing from Planck's theory that energy is emitted in quantums. Bohr theorized that the electrons were confined to orbits a given distance from the nucleus. As the electrons "jump" from orbit to orbit, they emit or absorb energy only in single quantum units (h × f). This he felt would explain the discrete spectral lines of each element, but it also meant that the electron would jump instantaneously from one orbit to another and would never occupy any position in between. In other words, the instant it disappeared from one orbit, it would appear in the other orbit. This implies another dimension. His theory was considered a success because it was able to explain the spectral lines of hydrogen. It could also be used to predict the spectral lines of other elements up to lithium (atomic number 3).
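The quantitative success with hydrogen mentioned here can be reproduced in a few lines. A sketch using the standard Bohr energy levels, Eₙ = −13.6 eV/n²; the constants are textbook values, not figures from this book.

```python
# A jump between Bohr levels emits a photon with h*f = E_upper - E_lower.
# Jumps down to n = 2 reproduce the visible (Balmer) lines of hydrogen.
h = 4.136e-15    # Planck's constant in eV*sec
c = 3.0e8        # speed of light, m/s

def energy_level(n):
    """Bohr energy of hydrogen level n, in electron volts."""
    return -13.6 / n**2

def emitted_wavelength_angstroms(n_upper, n_lower):
    """Wavelength of the photon emitted in a jump n_upper -> n_lower."""
    delta_e = energy_level(n_upper) - energy_level(n_lower)
    f = delta_e / h                  # frequency from h*f = delta E
    return (c / f) * 1e10            # meters -> angstroms

print(round(emitted_wavelength_angstroms(3, 2)))  # ~6570 (red H-alpha)
print(round(emitted_wavelength_angstroms(4, 2)))  # ~4870 (H-beta)
```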
The original Bohr model was later revised because it had three failings: 1) it could not account for the intensity of the spectral lines or the occurrence of some spectral lines that were actually two lines very close together; 2) it could not be used to deal quantitatively with elements with more electrons than lithium, and
3) other scientists considered his theory awkward and “ad hoc” because it could not be related to other basic theories of physics.
The next improvement of the Bohr model was made by de Broglie. He applied the idea that the electron traveled around the nucleus in a wave-like path, similar to the concept of a standing wave (Figure 5.4). This idea was supposed to explain the different quantum states of the electron, since the circumference of the orbit would automatically correlate with the energy level of the electron. A standing wave cannot collapse into a smaller orbit, because a fraction of a standing wave is impossible.
Figure 5.4 Standing wave conception of an electron ring
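The standing-wave condition de Broglie imposed is usually written 2πr = nλ: an orbit is allowed only if its circumference holds a whole number of wavelengths. A minimal sketch of that whole-number test, in arbitrary units:

```python
# An orbit is "allowed" only if its circumference fits an integer
# number of electron wavelengths; a fractional count is rejected,
# which is why intermediate orbits cannot exist in this picture.
import math

def allowed(radius, wavelength):
    """True if 2*pi*radius is a whole number of wavelengths
    (within a small tolerance)."""
    count = 2 * math.pi * radius / wavelength
    return abs(count - round(count)) < 1e-9

lam = 1.0                                 # arbitrary unit wavelength
print(allowed(3 / (2 * math.pi), lam))    # True:  exactly 3 wavelengths
print(allowed(3.5 / (2 * math.pi), lam))  # False: 3.5 wavelengths
```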
There is one very big problem with this theory. As you will notice in Figure 5.5, at Point A, the electron has a greater attraction to the nucleus than at Point B. You must ask yourself the question: what is causing the electron “particle” to form this wave form? Since acceptable scientific theory holds that the electron is a particle, what then is acting on the electron to increase or decrease its velocity around the atom, or what is varying its attraction to the nucleus? What is increasing or decreasing its potential? When you try to analyze these questions, you come to the realization that the nucleus is the only thing that could be affecting this orbit. So now we have the problem of explaining why the nucleus is oscillating, thereby increasing and decreasing its attraction on the electron “particle.” At this point, our current theories of physics fall apart because there is no way of explaining what external force could be making the electron oscillate.
The mathematician-physicist Erwin Schroedinger elaborated on de Broglie's standing wave idea by coming up with his psi-function
Figure 5.5 Vector analysis of the standing wave concept
ψ = √(2/L) × e^(−(2πi/h)Eₙt) × sin(nπx/L), where i is the imaginary quantity √−1
The one problem with the equation was that it had no counterpart in physical reality. He did succeed in describing a "matter wave," as called for in de Broglie's theory; but in order to accomplish this, his equation had to contain the imaginary value √−1. Usually in math equations with such imaginary numbers, the imaginaries disappear toward the end of the calculations. But in Schroedinger's equation, the √−1 enters as an integral part of the expression and cannot be eliminated. The conclusion of his equation is that the electron must be in the first dimension. (5-p64) Mathematicians later squared the imaginary quantity, thereby giving the resultant as the probability of finding the electron at any position x. You will see next that Schroedinger's original equation best describes what is really going on.
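To see why squaring removes the imaginary quantity, note that the time factor e^(−(2πi/h)Eₙt) always has magnitude 1. A sketch of the psi-function as reconstructed above, for a particle in a box; the energy and times are arbitrary example values.

```python
# psi = sqrt(2/L) * exp(-(2*pi*i/h)*E_n*t) * sin(n*pi*x/L).
# The complex time factor has magnitude 1, so |psi|^2, the quantity
# later interpreted as a probability density, is real and constant
# in time.
import cmath, math

h = 6.626e-34   # Planck's constant, joule*sec

def psi(x, t, n, L, E_n):
    phase = cmath.exp(-2j * math.pi * E_n * t / h)
    return math.sqrt(2 / L) * phase * math.sin(n * math.pi * x / L)

L, n, E = 1.0, 2, 1.0e-20
p0 = abs(psi(0.25, 0.0, n, L, E)) ** 2
p1 = abs(psi(0.25, 1.0e-15, n, L, E)) ** 2
print(round(p0, 6), round(p1, 6))   # equal: the imaginary phase drops out
```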
Per our theory of existence, the information for an atom exists in the first dimension. The information is made up of a variety of
frequencies, varying in number from approximately 10 (for hydrogen) to possibly as many as 100 different frequencies making up the heavier elements. This idea of multiple frequencies is borne out by the series of spectral lines produced by all the elements. Each element has its own distinct set of frequencies that can be easily observed in light spectrum analysis. This topic was mentioned in the previous chapter on light. This is not to say that the frequencies we observe in the light spectrums are the frequencies at which these elements are being modulated into our existence. The spectral series we see are higher frequency harmonics of the initial modulated frequency. Each series, as mentioned in the last chapter, represents a higher potential state of that element. We do not know at what frequency the elements are originally being modulated into our existence, but we feel it would be found above 1,000 to 2,500 GHz.
When the initial series of frequencies modulate into this dimension, they will form a modulation point similar to the point described in the third chapter. The analogy is exactly the same. Whether the atom has a surface is almost unimportant. We do know from electron photographs of the atom, taken by the University of Chicago, that when even a small number of atoms collect together they start forming geometrical shapes. (6) These shapes represent their crystal forms. This subject will be covered in the chapter on crystals.
In our theory, the problem of deciding whether and how the electrons around the atom take certain specific orbits, or what their energy levels are, becomes irrelevant. The electron cloud, if we are to call it an electron, has been observed in the most recent electron photographs of the atom. This subject will be covered later, but for now we will say that the electron clouds observed do not resemble anything close to the Bohr model of the atom. What we theorize this "electron cloud" to be is wave groups formed by the different frequencies that make up the proton. This wave group forms 360° around the surface of the atom. It is exactly like the D, E, F1, and F2 ionospheric layers above the earth. This means that the wave group will automatically adjust itself for the energy level at which the atom is to be found. It is also unimportant to think of it as any type of orbiting particle. It exists because it is a function of the frequencies making up that element. It would have the equivalent of a negative charge; but since it is a
wave group that is produced by the atom, the wave group itself really never changes. It doesn't give up what scientists call electrons. The electron "particle" is produced when the wave groups of two atoms cross each other. When this happens, a standing wave is produced, thereby causing a voltage difference between the two wave group frequencies. This, we theorize, will form a small domain of potential, which in turn we call an electron. We will go on to explain some other conditions of the atoms using our theory.
Radioactivity
One of the laws which we believe is present in our reality is that the diehold will not permit too much information to enter a certain given space and time. This principle seems to hold true for atoms as well as for large celestial bodies. It seems that as the information for an object increases, it becomes more and more unstable. This instability can be further enhanced if some of the frequencies that make up that element are dissonant to each other.
As most people know, U²³⁵ will eventually degrade to more stable elements, always of much less combined atomic weight. These elements are barium and krypton. Their combined atomic weights are 221.14. The atomic weight difference between the U²³⁵ and the barium and krypton is 13.86, the difference being made up by 1 to 3 neutrons and various photons. This is the more traditional explanation of what is happening during the decay of U²³⁵. Our explanation is that when the potential of U²³⁵ is increased sufficiently over the binding forces of the nucleus (7.5 MeV), the uranium atom can no longer exist in this dimension. The result is that the diehold replaces the information of the uranium with the information of the barium and krypton. The neutrons that are produced will be discussed a little later, when we cover subatomic particles.
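The bookkeeping in this paragraph can be checked directly; the following sketch simply reproduces the text's own numbers.

```python
# Combined atomic weights of barium and krypton, and the difference
# from uranium-235's mass number, as stated in the passage.
barium = 137.34
krypton = 83.80
combined = barium + krypton
print(round(combined, 2))        # 221.14, as stated
print(round(235 - combined, 2))  # 13.86, the difference the text accounts for
```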
A brief description of the binding forces is necessary now. As mentioned earlier, iron, cobalt, and nickel have the greatest binding forces of their nuclei (8.8 MeV). This means that it takes more energy to break up these elements than any of the other elements with greater atomic weight. This binding force goes down to 7.5 MeV for uranium and other very large unstable elements. The
potential could be added in two ways: the easiest way is by using a great number of electrons. These domains of potential would be absorbed by the atom, thereby increasing the atom’s potential. The other method used is by accelerating an alpha particle at the atom. The key to the amount of potential these “particles” possess is in their velocity. As discussed in the chapter on light, if the velocity increases, you are actually increasing the potential of that object. This will be true for the alpha particles, which are protons. Regarding the neutron, this is not completely accurate, since the neutron, we theorize, is not really in this dimension. The atom will increase its potential by absorbing the frequencies of the particle or wave form that strikes it. This is proven by two types of collision phenomena that have been observed. One type is called an elastic-type collision, where the particle does not lose any of its potential energy when it comes in close proximity with a nucleus. But a considerable number of collisions are inelastic, which means the energy represented in that particle or wave form is absorbed by the nucleus. (2-p505) These inelastic-type “collisions” are the type that increase the potential of the atom. They come about because the frequency of the wave form or “particle” is either the same frequency or a close harmonic of it. This means that the nucleus absorbs the other frequency and amplifies its own. The result is that the atom starts giving off a higher series of frequencies. If its potential is raised high enough, we see it merely as a light spectrum. In the same line of thought, it seems possible that if the atom only takes in a small amount of energy and produces a first or second harmonic above its original frequency, this could account for unstable radioactive isotopes of various elements.
The idea that an electron has a frequency may not seem logical, because electrons can be produced by any number of different elements passing each other. How then can an electron have a unique frequency related to it? Per our theory, if everything in the universe exists in a computer-like structure, the electrons (domains of potential) would have the same frequency as the carrier wave of the diehold. This is not to exclude the possibility of the electron having other frequencies. This idea was proved by Willis Lamb and Robert Retherford. Their experiment was to see if there was a resonant frequency to a flow of electrons being created by a stream of atoms. They found that the resonant frequency of the electrons was 1.05777 GHz. (1-p147) Another resonant frequency was
detected at 3.095 GeV. (4-p56) Scientists attribute this to what they call "vacuum polarization" (whatever that is). Actually, what they have discovered is one of the lower harmonics of the frequency of the electron.
Subatomic Particles
The field of subatomic particles is one of the most complex, complicated, and confusing of all the fields of physics. It is even confusing to the physicists who are attempting to make some sense of the over 300 subatomic particles they have discovered. The confusion stems from the fact that they are pursuing an incorrect philosophy. The scientists even have great difficulty in trying to incorporate the Theory of Relativity with their observations of these subatomic particles. To quote Professor Sir Harrie Massey from the University of London:
"The underlying significance of the four types of interaction still escapes us but a great deal of thought is being devoted to these basic questions, particularly in relation to the new conservation laws which seem to be valid. Conservation of energy, momentum and angular momentum can be related to the properties of the space-time of special relativity but it is difficult to see how to include baryon number, strangeness and lepton number as well. The existence of these further laws indicates a deeper underlying symmetry in Nature which we have not yet appreciated. We are at a most interesting stage: major clarification with deeper understanding may come at any time." (1-p269)
Our opinion is that with their present theories of existence, his "deeper understanding" will never come. At the time Professor Massey wrote his book in 1966, about 35 of these "particles" had been discovered. Today there are over 300. We will now attempt to make some sense out of all these subatomic "particles." We will cover only the major particles, but it would not be difficult to apply our theory further to understand the rest of these subatomic "particles."
The reason scientists pursued the field of subatomic particles is because they felt they would be getting some insight into the material that made up each individual atom. They should have realized, after they started discovering so many of these little
"particles," that they were being led down a dead-end, primrose path. One of the first observations we have to make is that all subatomic particles decay to more stable entities, such as the proton, or to light. As you know from the previous chapter, when you see light, you are seeing the information of an object leaving this dimension. We will now go through some of the particles that are listed in Figure 5.6. The first one is the proton (the atom). It is probably the only one that does exist in this dimension. The information for its existence could be visualized as being transmitted in the form of a sine wave, or it could be in the form of pulse modulation. In Figure 5.7, you can observe a sinusoidal representation of this frequency. H represents the magnetic information of the proton entering this dimension; E represents the electrostatic information of this dimension. The electrostatic part is what we perceive. This means that E would really be the image of the atom existing in this dimension. H represents the neutron, which naturally has no charge. As you can see by the sinusoidal wave, at certain times it is possible to observe a negative proton or a positive neutron. The reason it appears to us that the proton and the neutron are separate entities is that the frequency that makes up the atom is oscillating so fast we see the E and H vectors simultaneously. If we perceive just the peaks of these sinusoidal curves, it would appear that they are two separate entities; but in reality we are looking at the information making up only one proton. This idea is further proved by the mass differences between the proton and the neutron. The proton weighs 2.53 electron masses less than the neutron. This difference, we theorize, is due either to the carrier wave frequency or to one of the clocking frequencies. The 2.53 units is greater than the corresponding weight of the electron, which is supposed to balance the proton and neutron. This has always been a phenomenon in physics. The other fact that proves this point is that the mean lifetime of a neutron is about 1,010 seconds before it decays to a proton with its corresponding electron and one neutrino. The reason the mean lifetime is so long for a neutron is because it is just the magnetic information of the atom. But its potential has been raised so high that it has caused its frequencies to produce higher harmonics. It will not modulate back into our existence, as an atom or proton, until it has lost this excess potential. The neutrino is defined in physics as having no mass and no charge. In other words, it isn't in this dimension,
Symbol   Particle          Mass (MeV)   Lifetime (secs)   Most Probable Decay Products
p        Proton            938.26       Stable            (mass in electron masses = 1836.12)
n        Neutron           939.55       1,010             p + e- + ν̄  (mass in electron masses = 1838.65)
e-       Electron          0.511        Stable            (mass in electron masses = 1)
e+       Positron          0.511        Stable
ν        Neutrino          0            Stable
γ        Photon            0            Stable
μ+       Muon              105.66       2.22 x 10^-6      e+ + ν + ν̄
μ-       Muon              105.66       2.22 x 10^-6      e- + ν + ν̄
π+       Pion              139.6        2.54 x 10^-8      μ+ + ν
π-       Pion              139.6        2.54 x 10^-8      μ- + ν̄
π0       Pion              135.0        1 x 10^-16        2γ, or γ + e+ + e-
K+       Kaon              493.8        1.2 x 10^-8       μ+ + ν (63%); π+ + π0 (21%); 2π+ + π- (5%)
K-       Kaon              493.8        1.2 x 10^-8       μ- + ν̄ (63%); π- + π0 (21%); 2π- + π+
K1^0     Kaon              497.8        10^-10            π+ + π- (69%); 2π0 (31%)
K2^0     Kaon              497.8        5.6 x 10^-8       π± + e∓ + ν (33%); π± + μ∓ + ν (27%); 3π0 (27%)
η        Eta-meson         548.8        very short?       2γ (35%); 3π0 or π0 + 2γ (32%); π+ + π- + π0 (27%)
Λ0       Lambda-particle   1115.5       2.6 x 10^-10      p + π- (68%); n + π0 (32%)
Σ+       Sigma-particle    1189.5       8.1 x 10^-11      p + π0 (51%); n + π+ (49%)
Σ-       Sigma-particle    1197.4       1.6 x 10^-10      n + π-
Σ0       Sigma-particle    1192.5       <10^-14           Λ0 + γ
Ξ0       Xi-particle       1314.9       3.1 x 10^-10      Λ0 + π0
Ξ-       Xi-particle       1321.3       1.7 x 10^-10      Λ0 + π-
Ω-       Omega-particle    1672         1.1 x 10^-10      Ξ0 + π-; Ξ- + π0; Λ0 + K-

Figure 5.6 List of atomic “particles”
Figure 5.7 Sinusoidal representation of the information that makes up the elements
In other words, it isn’t in this dimension, which again proves our theory. You will notice from the rest of the subatomic particles that their mean life spans are extremely short. The only reason they last as long as they do is that they are produced from highly accelerated particles, which in turn gives them great velocity. When an object is accelerated toward the speed of light, its time slows down. If the particles had no velocity, they would decay and disappear from this dimension instantaneously. Per our theory, all the subatomic particles, including neutrinos, are not particles at all. They have no mass in this dimension. They are in reality the separate frequencies that make up the element.
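The time-dilation argument here can be checked numerically. The following is a minimal sketch using the standard special-relativity formula and the muon lifetime from Figure 5.6; the 0.999c velocity is our own illustrative choice, not a figure from the text.

    import math

    def dilated_lifetime(proper_lifetime_s, speed_fraction_of_c):
        # Standard special-relativity time dilation: t = t0 / sqrt(1 - v^2/c^2)
        gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)
        return gamma * proper_lifetime_s

    # Muon rest-frame mean lifetime from Figure 5.6; 0.999c is an assumed
    # illustrative speed for a particle emerging from an accelerator collision.
    tau = 2.22e-6
    print(dilated_lifetime(tau, 0.999))   # ~4.97e-5 sec, about 22 times longer

At 0.999 of the speed of light the observed lifetime stretches by a factor of about 22, which is why such fast-moving “particles” survive long enough to be detected at all.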
At the modulation frequency, where let’s say a nitrogen atom is being modulated, all the frequencies are being modulated to one specific point in time and space. In other words, all their vectors are directed to one point; but as potential is applied to this modulation point, these vectors change and begin to spread out. What they are doing in these cyclotrons is taking an alpha particle or other “particle” and imparting to it a tremendous equivalent
voltage, sometimes well into the billions of volts. This voltage is imparted to the atom, thereby causing its information vectors to start demodulating and spreading out. Since an element like nitrogen might have as many as 15 or 20 different frequencies making it up, each frequency will start demodulating in a different direction. Generally, these vectors will be in the direction of the original force of impact. We are actually observing the various frequencies that make up the atom leaving this dimension. You will notice in Figure 5.8 that all the subatomic “particles” listed eventually decay to protons, neutrinos, electrons, or photons; the last three being states of existence not in this dimension.
The electron-volt differences between the particles listed in Figure 5.6 would be represented in the light spectrum as strong and weak line spectrums. From the figure, you will also notice that particles have been observed having positive, negative, and neutral charges. If you refer to the sinusoidal wave diagram, Figure 5.7, you will notice that one wave form will have positive, negative, and neutral characteristics depending on where on the sine wave it is observed. In reality it is only one wave form. We theorize that if scientists continue on the track they are taking, they will come up with literally thousands upon thousands of different subatomic “particles,” because there are literally thousands of different frequencies that make up the elements in our universe. This is easily proved by picking up a book of spectral-line tables of the different elements.
Professor Massey also said: “Common sense is a wholly inadequate and misleading guide to the world of antiparticles and the strange particles, but the phenomena which we are now going to discuss, and particularly their interpretation, are even further beyond the experiences and ideas of everyday life.” (1-p250) Professor Massey is correct in saying that common sense has no place in understanding these subatomic particles, especially when you use the old theories of existence. But using our theories of existence, these “particles” and “antiparticles” can be understood, since they finally have a place of reference and don’t fit in a philosophy “off in left field.”
The latest theory by scientists to explain the elementary particles is called Dual-Resonance Models. To quote: “In this new theoretical approach, the strongly interacting particles classified as hadrons are viewed mathematically as massless strings whose ends move with the speed of light in multidimensional space.” (5-p61)
Figure 5.8 Table of life tracks of subatomic “particles”
This comes much closer to our Theory of Multidimensional Reality. In other words, the only way they were able to explain resonance and the behavior of these subatomic particles was by building a mathematical model of one-dimensional strings to which these subatomic particles attach. (5)
Another phenomenon of subatomic “particles” has also proved our theory. There are five laws of conservation at the level of nuclear reactions that have been considered fundamental. They are: 1) the law of conservation of mass-energy; 2) the law of conservation of momentum (linear and angular); 3) conservation of charge; 4) conservation of particles; 5) conservation of parity. We will not comment on the first three within the scope of this book. Number four, we consider, does not hold at the level of subatomic “particles,” since they are not particles. Number five, the conservation of parity (mirror symmetry), means that nature has no preference between right-handedness and left-handedness. This means that the laws of physics are the same in a right-handed system of coordinates as they are in a left-handed system. This principle was found to be invalid for the behavior of subatomic “particles,” which prefer spinning in certain directions. The only way you can logically understand why a subatomic “particle” prefers one direction over another is by coming to the realization that there is a consciousness behind its actions. In other words, the diehold at this particular level of existence functions in very specific ways, and it is only in the macrocosmic domains that the laws of conservation will be valid.
The last point we wish to cover in subatomic particles is the topic of annihilation. Scientists describe this as occurring when matter and “antimatter” come together (such as an electron and positron). They annihilate each other, forming electromagnetic energy. What they are describing here is easily visualized in Figure 5.7. The positive and negative charged fields ground each other out and leave this dimension as the original wave form that made up the stable element.
Radioactive Isotopes
After considering our theory of the atom, you might conclude that every element appearing in our dimension
would be of only one atomic weight. For instance, all the hydrogen found in our dimension would have an atomic weight of one, and all uranium found would have an atomic weight of 238. It is obvious, however, that many elements in our dimension have radioactive isotopes. The definition of an isotope is an atom of the same element having the same atomic number but a different atomic weight. Isotopes of an element are identical in their chemical properties; the only difference between them is their atomic weight. Nearly all of the elements found in our dimension are mixtures of several isotopes. This is a standing puzzle in nuclear physics: no one has ever figured out why isotopes should even exist.
Let’s take the example of nickel. The vast majority of nickel found has an atomic weight of 58.7. Why should we find a small percentage of nickel having atomic weights of 59 and 63 and still be nickel? With an atomic weight of 63, it should be copper; but why is it still nickel? We don’t know if anyone has ever thought about this phenomenon or wondered why it should occur. The only way we can see to explain the occurrence of isotopes is by using our Theory of Multidimensional Reality.
The analogy we will use best describes why the phenomenon shows up. Assuming we have the diehold transmitting the information for the element nickel, the information consists of a variety of frequencies. The big question is: how does the diehold know that it is always transmitting the information for nickel at the correct frequencies? It could do so by using the same method that is used in radio and television transmitters. The device is called a phase detection circuit. Briefly, this circuit ensures that the transmitter is producing the correct frequencies. It does this by comparing the correct frequency with any higher or lower frequency produced by the circuit. If the frequency produced goes too high, the phase detector senses it and corrects it by lowering the transmitted frequency. The same thing occurs if the transmitter frequency goes below its acceptable parameters. The point to be made here is that without the error, the transmitter would not know whether it is producing the correct frequencies. The same would be true for the diehold. Ninety-nine percent of the signal information is transmitted at the correct frequencies, which in turn modulate into the correct atomic weight for an element. The isotopes are the error factor found in our dimension. We would further conclude that all elements in our universe have isotopes
and that these isotopes are necessary for the transmission of information into our dimension.
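The correction principle of a phase detection circuit can be sketched as a simple feedback loop. This is only an illustration of the comparison-and-correction idea described above, not a model of any actual transmitter; the function names, gain, and frequencies are our own assumptions.

    def correct_frequency(target_hz, produced_hz, gain=0.5):
        # Compare the produced frequency against the reference and steer
        # it back toward the target; the residual is the "error factor."
        error = produced_hz - target_hz
        return produced_hz - gain * error

    freq = 1.02e9      # transmitter has drifted 2 percent high (illustrative)
    target = 1.00e9    # the "correct" frequency
    for _ in range(5):
        freq = correct_frequency(target, freq)
        print(freq)    # converges toward 1.0e9; some error always remains

The loop only knows which way to correct because an error exists; by this analogy, the isotopes are the visible residue of that error.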
Atomic Images
We mentioned in the beginning of Chapter 3 that eventually man’s technology catches up with his theories and philosophies of existence. One of the best examples of this is the tremendous advances made in the technology of electron microscopes. Scientists have been able to take pictures of atoms using a method of X-ray holography. The hologram of the atom is a reconstructed image of the central nucleus and the electron rings around it. (3-p1164) The electron “shells” observed are accurately reproduced except for the last shell, which is sometimes too light to be observed (Figure 5.9). They have discovered that these electron shells look much different from what was originally envisioned by the Bohr model of the atom. (7-p415) Some holographic images are so good that the electron shells have been seen to oscillate. (7-p413) These pictures seem to indicate that the electron shells are really wave groups formed around the center of the atom. There is no indication from any of these photographs that there is actually an electron particle orbiting around the central nucleus.
One technique of producing these images is to unfocus the image slightly, placing the focal length a little above the actual object. The blurred image is then reconstructed by a computer, which enhances the true shells of the atom. This process is called “deblurring.” (8) The shells that are seen are much wider than the Bohr model would tend to indicate. When the image is focused perfectly, the image of the atom looks very much like a star (Figure 5.10). It may be that there are great similarities between the atom and a star.
A very interesting technique of three-dimensional holography was developed by George Stroke and Maurice Halioua at the State University of New York at Stony Brook, and Friedrich Thon and Dieter Willasch of Siemens AG. They developed a method of photographing a crystal of magnesium bromide tetrahydrofuran. In Figure 5.11 you will see the geometric arrangement of the magnesium, oxygen, and carbon atoms. You will notice that there is a very specific vector angular relationship between the magnesium, oxygen, and carbon. (8-p59)
COURTESY OF DR. STROKE AND DR. HALIOUA, STONY BROOK
Figure 5.9 An atom showing the electron shells
This seems to illustrate the idea that elements are made up of frequencies that manifest themselves in this dimension as specific angles. This idea will be further demonstrated in the chapter on crystals. These geometric arrangements
Figure 5.10 A focused atom which looks similar to a star
you are observing are quite different from what is theorized by most chemists and physicists. Whether they realize that these fantastic photographs tend to indicate that their theories of atomic structure and behavior are wrong, we don’t know. Some of them must realize there is something very wrong with their present ideas.
Atomic Jumping Beans
During a study conducted by M. S. Isaacson and some of his colleagues at the University of Chicago, they discovered that the atoms observed in their electron microscope jumped around from place to place. For their experiments they used several heavy atoms such as uranyl chloride, silver, and uranium.
Figure 5.11 A crystal of magnesium bromide showing the orientation of the atoms
Images of atoms in a section of the crystal “magnesium bromide tetrahydrofuran complex” obtained by a scientific team headed by Dr. George W. Stroke and including Dr. M. Halioua, Dr. V. Srinivasan and Dr. R. Sarma using the new “X-ray microscopy” opto-digital computing method.
The images of the large atoms in the unit cell shown are magnesium atoms; the smaller symmetrical pair around each magnesium atom are oxygen atoms, and the still smaller pair farthest away from the magnesium atom are carbon atom images. The x-dimension of the unit cell shown between magnesium atoms is 9.26 Å.
The new “X-ray microscopy” opto-digital computing method makes use of the principles of holography (3-D photography) and of optical computing. It permits one to reconstruct 3-D models of molecules automatically and thereby provides the scientist with a new tool to help unlock the mysteries of chemical and biological functions of molecules, for example the functions of antibiotics and the body’s natural immunological defenses. Applications range from geology and material sciences to medicine, pharmaceutical chemistry and molecular biology.
They placed these atoms on a thin film of carbon. This enabled them to see the atomic images. They took a series of photographs from a minute to five minutes apart and made two important observations. One is that the heavy atoms seem to have a preferred spacing between them. The majority of the heavy atoms were spaced approximately 4 to 5 Å apart. There is no explanation of why these atoms would have a preferred spacing. (6-p373) They also observed that some of these atoms were grouped in clusters, for which there is no explanation. We would explain it by saying that these atoms were beginning to form crystals along specific angles or vectors caused by their information.
The second observation is that the atoms “jumped around” on the carbon film. At first the scientists thought this was caused by mechanical or electromagnetic instabilities of the equipment; but when they calculated for this possibility, they concluded that instabilities of this type could cause motion of only up to 1 Å. (6-p372) The other possibility was that the electron beam from the microscope caused the atoms to move around. But this idea was also discounted, because it was calculated that the electrons would cause only a very small atomic movement. Therefore, the movement could not be caused by the electron microscope. They calculated the average frequency of the movement for uranyl chloride molecules to be between 1,400 and 3,300 jumps per second. There is no explanation for such rapid movement of the atoms. It is a shame that there were not more observations of this phenomenon so that more could be learned from this rapid movement.
We would explain this phenomenon by saying that these atoms move because they are constantly being phased in and out of our existence; they don’t actually move from Point A to Point B, but disappear at Point A and reappear at Point B. This demodulation and modulation of their information could be caused by the “electron” stream used to produce the image. Since the electrons are really domains of potential, they would be raising the potential of these atoms sufficiently high to cause their demodulation. Their reappearance would be due to the fact that the diehold will not permit too little information to be present in our dimension; so it retransmits the information for the same element back down to approximately the same time and space, thereby
causing the reappearance of the element, but at a slightly different location. The reason a tremendous rush of energy is not produced, as in a nuclear reaction, is that the potential used (the electron) is not in this dimension. Its effects apply only to the information of the element in the diehold and are not really applied in this dimension. This is different from kinetic-type energy, which is applied from this dimension. Kinetic energy could be looked at as the mechanical way of adding potential to something, rather than the way used in this electron microscope.
The possibility of raising an object’s potential using “electrons” at specific frequencies suggests that an object could be moved in time and space, as demonstrated by the images observed from these electron microscopes.
REFERENCES
1. Massey, Sir H., “The New Age in Physics” (N.Y., Basic Books, 1966).
2. Ripley, J. A., “The Elements and Structure of the Physical Science” (N.Y., John Wiley & Sons, 1964).
3. Bartell, L. S., Ritz, C. L., Atomic Images by Electron Holography: Science, vol. 185, p. 1163-4, Sept. 27, 1974.
4. Drell, S. D., Electron-Positron Annihilation and the New Particles: Scientific American, vol. 232, p. 50-62, Jan. 1975.
5. Schwarz, J. H., Dual-Resonance Models of Elementary Particles: Scientific American, vol. 232, p. 61-67, Feb. 1975.
6. Isaacson, M., et al., The Study of the Adsorption and Diffusion of Heavy Atoms on Light Element Substrates by Means of the Atomic Resolution STEM: Ultramicroscopy, vol. 1, p. 359-376, 1976.
7. Bartell, L. S., Images of Atoms by Electron Holography, II: Experiment and Comparison with Theory: Optik, vol. 43, no. 5, p. 403-418, 1975.
8. Stroke, G., et. al., Image Improvement and Three-Dimensional Reconstruction using Holographic Image Processing: Proceedings of the IEEE, vol. 65, no. 1, Jan. 1977.
9. Beck, V. and Crewe, A., High Resolution Imaging Properties of the STEM: Ultramicroscopy, vol. 1, p. 137-144, 1975.
Astronomy
It is difficult for man to understand existence and how it works unless, one, he stumbles on the correct theory of existence and is then able to relate all the phenomena of his existence to one idea; or, two, he begins observing the extremities of existence. The two extremes are the microcosmic world of atoms and the macrocosmic world of quasars, galaxies, and stars. At these extremes, man’s false ideas of how existence functions begin immediately to break down. As demonstrated in the chapter on atoms, man’s idea of what atoms look like and how they work has changed totally since man first saw an actual picture of the atom. The same applies to the field of astronomy. Astronomers are seeing things in the universe and recording energy levels far beyond anything that can be explained by currently acceptable theories. There is one big difference between the astronomers and the nuclear physicists: the astronomers are not as attached to traditional ideas as are the nuclear physicists. Nevertheless, the field of astronomy has been kept from evolving a theory that can explain observations in an organized, coherent manner, because astronomers still apply some nuclear theories that were shown in previous chapters to be wrong.
The only way to understand what is going on in the universe is to scrap almost all of the old ideas of existence and start again from scratch. Our Theory of Multidimensional Reality is able to explain quite logically all the phenomena observed in astronomy. Of all the fields of physics, astronomy will be the most affected by our theory.
By applying the Theory of Multidimensional Reality to these different phenomena, we found a very important secret of the universe which affects all of us on this planet. Normally we think of astronomy as a field totally unrelated to the average man. But we discovered that one of the most important phenomena of the universe has an immediate effect on all men. No matter what one does on this planet or how he lives, be it moral or immoral, when this phenomenon occurs it will affect all men. There is no way to ignore it, and maybe this phenomenon is the real reason why man must evolve to higher thoughts.
When studying the universe and the galaxies in it, we are immediately impressed by the mind-boggling distances between any two galaxies, let alone the distances from one side of the universe to the other. When we examine the subject of time relative to the galaxies, time seems infinite or even irrelevant to us. But time does matter to the galaxies and the universe, because important events occur after certain intervals of time have passed, from the beginning of a galaxy’s creation to its end, whenever that is. For poor mortals, who live only about 70 years on this planet, conceiving of time in billions of years makes one feel extremely insignificant in relation to what is happening in the universe. Since man is part of that universe and is intricately involved in its evolution, we can assume that we can learn a great deal about our own evolution; more important, we can learn what we are to evolve to by closely examining the events happening in the universe.
Time and Distance
For the following discussion, it is necessary to understand briefly some of the astronomical units of measure used. We will start with the basics. In astronomy, the metric system is used. The speed of light is 299792.5 kilometers per second. A light year is the distance traveled by a beam of light in 365.26 days.
Since the distances between stars and galaxies are so huge, astronomers use much larger units of measure. One is called the astronomical unit (AU). One AU is equal to 149,598,500 kilometers ±5 kilometers. There are 206,265 astronomical units in one parsec (pc). One parsec is equal to 3.0855 x 10^13 kilometers or 3.262 light years. The distances measured to other galaxies can have a 10-to-20 percent error factor. (1-p617)
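The conversions just quoted can be collected in a few lines of Python, using the values given in the text (older measured figures):

    # Constants as quoted in the text.
    C_KM_PER_SEC  = 299792.5        # speed of light
    AU_KM         = 149_598_500.0   # one astronomical unit in kilometers
    AU_PER_PARSEC = 206_265.0
    LY_DAYS       = 365.26          # days of light travel in one light year

    light_year_km = C_KM_PER_SEC * 86_400 * LY_DAYS
    parsec_km = AU_KM * AU_PER_PARSEC

    print(light_year_km)              # ~9.46e12 km
    print(parsec_km)                  # ~3.0855e13 km, as quoted
    print(parsec_km / light_year_km)  # ~3.262 light years per parsec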
Our Universe
What is the age of the universe? What is the age of our galaxy? What is the age of our sun? What formed the galaxies? What process was necessary to produce all the energy that makes up the galaxies? Why is the universe expanding, and what is the reason for its expansion? These are some of the questions we are going to try to answer in this chapter.
On a clear night if you look into the sky with a good pair of binoculars or a telescope, you will be looking at literally billions upon billions of stars from our own galaxy and unknown billions of galaxies all over the universe. What you are looking at is truth, untouched and uncorrupted by man’s ignorance. It is man’s purpose, for the short time he is on the planet, to understand why and how the universe works.
There is incredible order and balance throughout the entire universe. The galaxies are homogeneously spaced throughout the universe. There is a theory about the universe which is called the “Big Bang Theory.” Basically, this theory states that the entire universe started at one finite point. After the initial “big bang,” the galaxies spread out in all directions from the point of the explosion. To understand the Big Bang Theory better, we must first explain how astronomers determined that the universe is expanding. One of the ways astronomers calculate the distance between other galaxies and ours is by measuring the red shift of the light spectrum produced by a galaxy or star. As mentioned in earlier chapters, each element produces its own specific line spectrums of light. The more potential added to an element, the higher the light spectrum series that will be produced. This process will continue all the way up into the high ultraviolet range. As the velocity of a galaxy increases away from us, these line spectrums are shifted
down to the infrared end of the spectrum. Edwin Hubble discovered in 1929 that this red shift follows a linear relationship to the distance the galaxy is from us. This linear relationship is expressed as a constant for a particular unit of distance. The unit of distance is one million parsecs (1 Mpc) or 3.262 million light years. The most commonly used value for this receding rate is 50 km/sec per 1 Mpc. At 2 Mpc the rate would be 100 km/sec. As the distance doubles, the rate doubles, because the relationship is linear. This rate of expansion is called the Hubble Constant. In recent years there has been work done on determining the actual value of the Hubble Constant. One group of astronomers took a weighted average over 12 different methods of determining distance and came up with a value for the Hubble Constant of 93 ±7 km/sec/Mpc. (2-p256) Somewhere between these two values lies the actual value of the constant. There is no disagreement among astronomers about the universe expanding; they just don’t know why.
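The linearity is the whole content of the relation: velocity is the constant times the distance. A minimal sketch, using the two H0 values quoted above:

    def recession_velocity_km_s(distance_mpc, hubble_constant=50.0):
        # Hubble's relation: velocity is directly proportional to distance,
        # so doubling the distance doubles the receding rate.
        return hubble_constant * distance_mpc

    print(recession_velocity_km_s(1))        # 50 km/sec at 1 Mpc
    print(recession_velocity_km_s(2))        # 100 km/sec at 2 Mpc
    print(recession_velocity_km_s(2, 93.0))  # 186 km/sec with the higher estimate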
Once the Hubble Constant was determined, it was then possible to determine an approximate age for the universe. It was reasoned that since nothing can travel faster than the speed of light, a galaxy could not be receding from us faster than 299792.5 km/sec. Dividing a value slightly less than the speed of light by the Hubble Constant of 50 km/sec per Mpc, we get
299792 km/sec ÷ 50 km/sec per Mpc = 5995.84 Mpc
5995.84 x 10^6 pc x 3.262 light years per pc = 19,558,430,000 light years
as an age for the universe of about 19.6 billion years, since a light year of distance corresponds to a year of light travel. (3-p19)
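The same arithmetic in code form, assuming the 50 km/sec per Mpc value:

    C_KM_S = 299792.0      # value slightly under the speed of light, as above
    H0 = 50.0              # Hubble Constant, km/sec per Mpc
    LY_PER_PC = 3.262

    distance_mpc = C_KM_S / H0                  # 5995.84 Mpc
    age_years = distance_mpc * 1e6 * LY_PER_PC  # light years of distance equal
    print(age_years)                            # years of light travel:
                                                # ~1.9558e10, about 19.6 billion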
The question is, can we verify this age by observation or by experimentation? Yes, on both counts. Scientists have observed many quasars receding from us at 91 percent of the speed of light (272,811 km/sec). This converts to a distance of 17.8 billion light years. Another way of looking at it is that we are looking 17.8 billion years back into time. This is quite close to the estimated value of 20 billion years. The atomic way of dating the universe also confirms the 20-billion-year figure. The method used is to measure the time it takes the radioactive element rhenium-187 to decay to the stable element osmium-187. This method proves to be much more reliable than the other methods of dating elements
because its half life is 44 billion years, much longer than the age of the universe. Using this method, astronomers calculated that nucleosynthesis began about 18 billion years ago. Cosmologists theorize that it was two billion years after the big bang before any type of nucleosynthesis could have started. The total comes to 20 billion years, agreeing with the calculated age of the universe. (3-p19)
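The dating principle can be sketched as follows. Given the 44-billion-year half-life, the measured ratio of daughter osmium-187 to surviving rhenium-187 fixes the elapsed time. The ratio used below is back-solved to reproduce the roughly 18-billion-year figure, since the actual measured ratio is not given in the text.

    import math

    HALF_LIFE_YR = 44e9   # rhenium-187 half-life, as quoted

    def age_from_ratio(daughter_per_parent):
        # Standard radiometric dating: N(t) = N0 * 2^(-t / half-life), so
        # t = half-life * log2(1 + D/P), with D the osmium-187 produced
        # and P the rhenium-187 remaining.
        return HALF_LIFE_YR * math.log2(1.0 + daughter_per_parent)

    print(age_from_ratio(0.328))   # ~1.8e10 years, the nucleosynthesis age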
After you have obtained a rough estimate of the age of the universe, you may ask yourself: where in the universe did the big bang occur? If it happened at a particular location, then all the matter in the universe should be traveling away from that point. The problem astronomers have with this very logical premise is that no matter where they look in the sky, they see numerous quasars and other types of radio galaxies moving away from us at velocities greater than 80 percent of the speed of light. They can observe this phenomenon 360º around our galaxy (except in the direction of the Milky Way, where there is too much intergalactic dust for distant light sources to pass through). It is generally accepted that a quasar is the first stage in the evolution of a galaxy. Since we are always looking back into time when we look at these objects, our only conclusion must be that the moment of creation happened at the same time everywhere in the universe. Creation was all around us, including our own galaxy, at the same moment of time. This also implies that the oldest galaxy we can observe is our own.
More evidence of the big bang was discovered by Bell Telephone Laboratories in 1965. One of their very sensitive radio receivers was detecting weak radio noise at 4.08 GHz. The only way they could account for its source was to accept it as being extraterrestrial. Astronomers later confirmed these observations and found other frequencies as well. They also discovered that the radio noise came uniformly from all directions in the universe. No direction toward a center for the big bang was indicated. The only conclusion they could reach, and rightfully so, is that the universe originated all around us with equal intensity. There have been some attempts to explain why the universe is expanding, but all have failed because they cannot explain all the phenomena using a single frame of reference. None of them try to explain where the initial matter for the big bang came from, and what caused the incredible gravitational field that must have been present to hold all this matter together. It is easy to see why they can’t figure out the last question. How do they expect to know where the gravity
came from originally if they don’t know what gravity is?
Regarding the first question, the acceptable theory is that the matter came from dust particles in space that collected at one point, heated up, blew apart, and formed the universe. They apply the same theory to the creation of a galaxy. The observation that makes this theory highly improbable is that if there were that many dust particles between the existing galaxies, then we should not be able to see anywhere near the distances we presently do. It is considered quite amazing that the space outside the galaxies is as clear as it is. If we accept the traditional explanation of the initial big bang, we would expect to detect some dust clouds between the galaxies as remnants of this primal explosion. No such dust clouds have been detected. Astronomers describe models of this type as “canonical” models. A good summary of the predicament that astronomers face regarding the expansion of the universe was stated by Professor M. J. Rees of the Institute of Astronomy, Cambridge, England. He said:
“The ‘canonical’ models, therefore, necessitate the unpalatable postulate that all parts of the universe are accurately synchronized, and start expanding at the same time and with the same entropy and curvature, even though there was then no communication or causal connection between neighboring regions. In these Friedmann models, therefore, the observed overall uniformity is postulated, never explained; and is even unexplainable. Matter at one point, however willing to live and evolve in conformity with matter at another point in the universe, has no means whatever of knowing ‘the conditions of life’ beyond its own horizon.” (2-p320)
Multidimensional Reality Explanation
If we accept the idea that all matter in the universe is formed from information that exists in the first dimension, we have a tremendous advantage in explaining where the initial matter for the universe came from. The only problem would be to try to determine the shape of the structure of the computer (the diehold) that holds all this information. Ironically, the answer to this problem is found in the smallest groups of matter, the crystals. An in-depth study of crystals to further prove this idea is covered in Chapter 9; but for
now we will say that the perfect crystal shape and, in turn, the shape of the diehold, is an octahedron. (Figure 6.1)
The next step is to understand why this shape would have an effect on the expansion of the universe, and how a cosmological model can be based on this structure and the Theory of Multidimensional Reality. We will do this by first stating the theoretical conditions that would be present in the model and then comparing them with the actual astronomical observations. This tests each part of the theory, step by step.
The first thing we must do is state some basic postulates of the diehold. The first concerns time. Time is measured vertically down through the diehold (Figure 6.2). Each horizontal layer of the diehold represents all the information in the universe at one moment of time. This dividing of time is used merely as a convenience to explain changes as time progresses. In actual operation there would be a continuous flow of information from layer to layer. The distance between layers one and two could represent one second, a month, a decade, or a billion years.
Figure 6.1 An octahedron crystal
The big bang, or more aptly put, the beginning of the universe, occurs at Point A. This is also where time starts. The next postulate is that the information for the new galaxies is present at Point A. At Point A + 1 the big bang has occurred; at level one in the diehold, the information for all the galaxies exists as complete galaxies. Since the distances between the newly formed galaxies are very small, the volume of our universe toward which the information is directed is in turn very small. The problem is that the information for these new galaxies is for complete galaxies. This means a tremendous amount of information will be forced through a very small modulation point in our time and space.
Figure 6.2 Diagram of the diehold showing what time is and how it passes through the information
As time progresses, the distance between the domains of information for the individual galaxies becomes greater and greater. This is why the universe is expanding. It is revealed in the very shape of the diehold: because of its shape, the distance between Points B and C at time level three will increase at a linear rate by time level four. The rate of change is always the same at any time; it is, therefore, a linear function, just as the Hubble Constant has been proven to be.
A tremendous amount of information is being directed toward a very small point; the result of this, as mentioned in the magnetism chapter, is that the potential of the magnetic field will go up proportionately. Gravity, or magnetism, is nothing more than information from another dimension being modulated into our dimension. If we were to see this new galaxy, we should see a tremendous amount of ultraviolet light, along with an incredibly strong gravitational field around one relatively small modulation point. The reason only ultraviolet light would be present is that, as mentioned earlier, when the potential of the information for an element has been raised incredibly high, it will produce mostly ultraviolet wave lengths of light. As the distances increase between these newly formed galaxies, more information will be able to be directed through those modulation points, because the modulation points will also expand. As the modulation points become larger, the potential created by the force of so much information passing through a small opening will decrease; therefore, we would see the temperature and other forms of radiated energy decrease.
There is a great deal of evidence collected from observations of quasars that tends to agree with our theoretical model of the universe. The evidence is as follows:
Astronomers do realize that the evolution of a galaxy is more dependent on forces outside the galaxy than on the evolutionary forces of its component stars. The first thing we should try to find in our observations is: what do the farthest celestial objects look like, and do they give off great amounts of energy? The reason we look for the most distant objects is that we are trying to look as far back into time as we can. The farthest objects that have been observed are quasars and radio “galaxies.” Quasars are observed as very blue points of light. They appear more like individual stars, but sometimes appear with small wisps of matter sprouting from them. One of their distinctive characteristics is that they emit an abnormal amount of ultraviolet radiation. They are also variable
points of light; this means they flare up in light magnitude by as much as 2.5. They give off a tremendous amount of energy quite quickly and in short bursts. (1-p645) Some outbursts of energy have been calculated to be 10^60 ergs/sec. (1-p640) This is a material percentage of the total energy available to a whole galaxy, if we assume that nuclear energy is the source of galactic energy. The average energy output of a quasar is at least 10^46 ergs/sec over the life-time of the quasar stage. The initial active life-time of a quasar is estimated to be 10 million to 1 billion years. (2-p291)
The other astronomical objects detected at these far distances are called radio galaxies. These are radio sources produced by a celestial body and detected by using large antenna systems. This is a rather new type of astronomy, but it has proven to be very effective for extremely distant galaxies. Many of these radio sources have not been confirmed optically because the light produced by the radio source is too faint and too far away. Since 1974, 300 such radio sources have been discovered, and about one third of them have been optically confirmed as being quasars. They also have been found to have red shifts indicating receding velocities of at least 91 percent of the speed of light (272,811 km/sec). This confirms that they are very old objects (at least 17.7 billion years). For the rest of this section, we will consider radio galaxies to be quasars, because that is what they are when we talk of distances that far away. The centers of quasars are believed to be only light months across or less. Two examples of these radio galaxies are Cygnus A and 3C-236. They have received a great deal of attention from astronomers because of the amount of energy they produce. It has been calculated that the internal energy content is 10^58 ergs for Cygnus A and 10^60 ergs for 3C-236. This amounts to consuming five hundred million of our suns per second to produce the energies emitted throughout the electromagnetic spectrum. (2-p288) This energy amount is calculated by using the acceptable theories of thermal nuclear reactions (fusion and fission). As mentioned in previous chapters, our theory of existence does not accept these theories at all; and this proves we are correct in our belief. The above estimates of the internal energy are probably underestimated. Astronomers know that if Cygnus A were really powered by thermal nuclear reactions, it should have expanded much farther than it has. With this observation, they know something may be very wrong with the accepted atomic theory.
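As an order-of-magnitude check on the “suns per second” claim, the quoted output can be converted into solar rest masses with E = mc^2. The constants are standard; the fusion efficiency is our own assumed round figure, and different efficiency assumptions move the count between hundreds of thousands and hundreds of millions of suns per second.

    SOLAR_MASS_G = 1.989e33                   # grams
    C_CM_S = 2.9979e10                        # speed of light, cm/sec
    ERG_PER_SUN = SOLAR_MASS_G * C_CM_S**2    # E = mc^2, ~1.79e54 ergs

    output_erg_s = 1e60                       # quoted output for 3C-236

    # Complete rest-mass conversion (the most generous possible assumption):
    print(output_erg_s / ERG_PER_SUN)         # ~5.6e5 suns per second

    # Thermonuclear fusion releases well under one percent of rest mass,
    # so the required consumption climbs by two or more orders of magnitude:
    fusion_efficiency = 0.007                 # ~0.7 percent (assumed)
    print(output_erg_s / (ERG_PER_SUN * fusion_efficiency))  # ~8e7 suns/sec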
Another theory about Cygnus A is that the energy may be produced by a low-pressure plasma. A strong magnetic field would hold the plasma together with other highly excited particles. The result would be a much slower expansion of Cygnus A, which would agree better with observations; but this would mean that it consumes 10 billion solar masses every second. This process would generate one thousand times more energy than is observed for Cygnus A. (2-p289) To sum up the problem of the energy output of quasars, the following is from the noted professor of astronomy, George O. Abell:
“We have then the perplexing picture of a quasar: an extremely luminous object of small size displaying enormous changes in energy output over intervals of months or less from regions less than a few light months across; 100 times the luminosity of our entire galaxy is released from a volume more than 10^17 times smaller than the galaxy.” (1-p645)
The point we wish to make is that the idea that energy comes from matter (E = mc^2) is wrong. We will go even further and say that the entire theory of nuclear energy needs a great deal of revision. As Tesla said: “Nuclear energy is an illusion.” The reason it is incorrect is simple and blatant: if our galaxy, like all galaxies, went through the quasar stage of evolution some 17 to 20 billion years ago and had consumed matter for energy to produce the power quantities mentioned above, we believe our galaxy would have had to use up all its matter (certainly all of its hydrogen and helium) billions of years ago. To put it in simpler terms, this galaxy shouldn’t exist today. After all that, it is now time to safely state several more postulates of the diehold.
Energy is not the product of matter but rather is a result of the amount of information modulated into our dimension by the diehold. The amount of information that can be modulated into our dimension is not limited by time or space. This means that if a tear, or a lack of information, developed in the universe, the diehold would instantly fill that area with as much information as necessary to repair the tear. This also means that measuring energy levels in ergs over short or long periods of time is pointless, because the process we are observing is not matter being consumed but rather matter (a whole galaxy) entering this
dimension. When the quasar first forms in this dimension, it is spewing forth so much information and creating so much potential that the matter-part of the information is most likely too unstable to be modulated into existence. Also, since its potential is so great that it is producing only ultraviolet light, it is unlikely that we would be able to see it for several billion years. Not until the modulation point has expanded enough to reduce the resultant energy will lower frequencies of light be produced. It’s like water pressure through a valve: if the pressure is too high, you cannot divert that stream of water. The information coming out of a quasar is similar: the pressure of the information is so great that it has, in turn, imparted this potential to the information. The information is now at such a high state of potential that it cannot actually modulate into three-dimensional objects (gases and other forms of matter). Instead, it just remains as ultraviolet light. Only when the pressure is relieved, by the modulation point growing large enough, is the information reduced to lower levels of potential. We only “see” the quasar and its radio frequencies after many years of expansion. This theory would explain why quasars and other radio sources have not spread out as far as we might have thought possible. In other words, their period of producing matter in the form of gases has been only a relatively recent occurrence.

Strong magnetic fields from quasars are the last observational proof we must find. The first question we should ask ourselves is what effects are produced by a strong magnetic field. A well-known effect produced by strong magnetic fields is called the Zeeman Effect. This is when a magnetic field splits the spectral lines coming from a light source (Figure 6.3). Quasars do exhibit this effect. Radio sources that have been identified with quasars exhibit another phenomenon that has not been fully explained before. We will describe and explain the phenomenon and relate its cause to a strong magnetic field. Our example is the galaxy Cygnus A. It is estimated to be at least ten billion light years away. Its energy level has been observed to be about 10^48 ergs/sec. Cygnus A is detected at a frequency of about 5 GHz. The galaxy produces two radio sources, about two seconds of arc, on either side of the optical image (Figure 6.4). We believe these two separate radio sources are produced in a manner similar to the Zeeman Effect. The actual point of source is the optical image at the center. The strong magnetic field from the center causes the radio frequencies to separate in a similar manner.
Figure 6.3 Drawing showing the effect of a strong magnetic field on a light spectrum (the Zeeman Effect): the same spectrum is shown with no magnetic field and under the influence of a strong magnetic field, which splits the lines
Figure 6.4 Drawing of the radio galaxy Cygnus A: two radio images lie equidistant, about 1,000,000 light years on either side of the visual image
The Zeeman Effect and the dual radio sources have been observed in quasars and in later stages of galaxies. This observational proof seems adequate to prove our point that a strong magnetic field is indeed present at the center of these quasars.
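The size of the splitting follows a standard relation: for the normal Zeeman Effect, a field of B gauss shifts a line by roughly 1.4 MHz per gauss. A minimal sketch, using field strengths of the order quoted for sun spots later in this chapter:

    # Normal Zeeman effect: the frequency shift is e*B/(4*pi*m_e),
    # about 1.4 MHz per gauss.
    MHZ_PER_GAUSS = 1.3996

    def zeeman_splitting_mhz(field_gauss):
        return MHZ_PER_GAUSS * field_gauss

    for b in (100, 1_000, 4_000):
        print(b, "gauss ->", zeeman_splitting_mhz(b), "MHz")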
There is one more observational bit of evidence that indicates the same thing. There is evidence that the line spectrums of iron are emitted from these early galactic stages of evolution. It has been found that 20 percent of the iron present in a mature galaxy is
produced within the first 500 million years of its inception. (2-p208) Scientists have been at a loss to understand how heavy metals could be produced in such a percentage, considering the nuclear activity they theorize is going on. Per our theory of existence, the light spectrum of iron we are observing does not really represent the physical metal iron. As mentioned in the light chapter, many of the spectral lines of iron are really various carrier, synchronizing, and resynchronizing frequencies. We would expect to see these light spectrums present where there is such a tremendous amount of information being directed toward such a small area. We are actually looking at the carrier wave frequencies of the diehold.
Later Evolution of the Universe
Most of the quasars we observe are over 16 billion years old. From our initial description of the diehold, we would expect to see them that old; but what about some quasars that have been observed much closer, such as 3C-273, which is 800 million parsecs (2.6 billion light years) away? Per the initial explanation of the theory of the diehold, we would expect all the quasars to be of roughly equal age. How does a quasar show up that appears to be only 2.6 billion years old? It seems to be about 15 billion years behind the older quasars. To explain this, we must state another postulate about the diehold and existence. The diehold will never permit, on a permanent basis, too much information to be directed toward one particular volume in time and space. It will also not permit too little information in a certain volume of space. The reason we observe younger quasars in space becomes apparent when we look at what happens to the information after it passes down through levels of time in the diehold (Figure 6.5). As you can see from the diagram, at Level one, Points A and B, which represent galaxies, are a proportionate distance from each other and from the sides of the diehold. At time level two, Points A’ and B’ are separated in the same proportion as they were at Level one; but notice that the distance between galaxies A and B has increased threefold. This is evidence of the universe expanding. As Points A and B have grown farther apart, there has been less information occupying that volume of space.
Figure 6.5 Drawing explaining why new quasars are formed
As mentioned in the postulate, we would expect the diehold to rush information in somewhere between Points A and B to fill up that void, thereby creating quasar C, of a much younger age. This process would hypothetically continue until the mid-point (1/2 infinity) is reached. At that point the oldest galaxies will have completely evolved back to pure information, and the newer galaxies will be the only ones present.
There is another important observation to be made from this
Picture of the giant elliptical galaxy M87, in the Virgo cluster. Short exposure.
diagram, which is that the information on Level one would decrease by the square of the distance down through time. In other words, this may explain why the Inverse Square Law is present in our universe. The Inverse Square Law states that the intensity of an effect decreases as the square of the distance between the transmitting source and the receiver increases. For example, the light intensity striking a surface one foot away from a light source is 16 times greater than if the surface were four feet away. This law holds for all forms of information transfer, such as light, magnetism, heat, electricity, and gravity. There has never been an explanation of why the Inverse Square Law even shows up. There is no logical reason why energy should divide like that, except if it were an intrinsic byproduct of the information that makes up all matter. We know of no other cosmological model that even attempts to answer that phenomenon.
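The law itself is one line of arithmetic, reproducing the one-foot versus four-foot example above:

    def relative_intensity(distance, reference_distance=1.0):
        # Inverse Square Law: intensity falls off as 1 / distance^2.
        return (reference_distance / distance) ** 2

    print(relative_intensity(1.0) / relative_intensity(4.0))   # 16.0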
Per our theory, the stage after the initial quasar stage is when the galaxy produces matter in the form of gas and begins forming stars. At this stage we would assume that the galaxy is much larger than the initial quasar. Its modulation point has expanded sufficiently to produce all forms of electromagnetic radiation, from ultraviolet light down to radio noise. There is a great deal of observational evidence proving all of the points of our theoretical model. Scientists call this type of galaxy a Seyfert galaxy. Seyfert galaxies produce 10 times more radio luminosity than normal spiral galaxies and a thousand times more light. (2-p236) The Seyfert galaxy NGC 1068 emits energy at 10^45 ergs/sec, and gas is seen being ejected from the nucleus at velocities of at least 600 km/sec. They also produce a great deal of ultraviolet radiation, as does the Orion galaxy, which produces great amounts of ultraviolet light between 1,250 and 2,000 Å. What is very interesting about this galaxy is that we can see stars coming from the center. (4-p39) The same unexplainable phenomenon is observed in the giant elliptical galaxy M87, in the Virgo cluster (Illustration), which emits thousands of times more radio energy than a typical galaxy. It is also observed, in photographs of short exposure, ejecting a stream of matter (gases or stars)
away from the center. (1-p638) The last observational fact is that typically 70 percent of the energy coming from a Seyfert galaxy originates from an area no more than 50 parsecs in diameter. (2-p280) With these observations we have proved all the points laid down by our galactic model. The observations fit logically into one theory of existence; no part of the theory disagrees with any other part. There is a logical flow from one idea to the next without having to invent any new theories.
Before leaving this section, we must make some mention of the jets of matter or stars that are observed coming from these types of galaxies. There is no logical explanation in the acceptable theories of physics for the way these galaxies form their stars. Normally, one would think that a revolving, round mass of gas and energy would spread out evenly, and that stars would form uniformly, perpendicular to its axis. When you see photographs of M87 or the Orion galaxy, you have to add into the equation of the universe the obvious possibility of a conscious intelligence directing creation. We see the same type of phenomena in young stars called T Tauri stars (to be covered later). Our conclusion has to be that creation happens on all levels of existence. We can see, from the smallest forms of life to the largest forms of existence, a galaxy, that matter directed by some conscious intelligence is creating other conscious intelligences.
Elliptical galaxies are the next stage of development for a galaxy. These galaxies start forming the more familiar ring of stars around a large bright center mass of more densely packed stars. The center is still quite bright and produces varying amounts of radio noise.
The last stage of a galaxy that we know of is the spiral-type galaxy. Our galaxy (the Milky Way) is this type of galaxy. All spiral galaxies have active, bright centers one to two percent of the time. The centers of these galaxies give off gravitational waves and radio waves from 10 to 300 MHz. (1-p503) The galactic center (2 degrees from the middle) is observed ejecting hydrogen gas, as well as all types of radiation. The energy is estimated to be about 10^42 ergs. Our galaxy is estimated to be one hundred thousand light years in diameter, and our sun is estimated to be about thirty thousand light years from the center.
Our Sun
The most important recurring event in the universe is revealed in the reason why a sun novas. The purpose of this section is to give you a brief introduction to the evolution of a star, and then a greater understanding of what makes up a star and what causes it to nova. This is the most important event in man’s evolution.
The traditional description of a star is a mass of hot gases tightly compressed due to a strong gravitational field. The stronger the gravity, the greater the gas density and, in turn, the greater the temperature.
The surface of the sun is called the photosphere. The temperature of the photosphere is estimated to be about 6,000º C. The surface gravity is 27.9 times greater than the earth’s gravity. Gravity is measured in “gausses.” The surface gravity on the earth, measured at sea level, is a little less than one gauss. It is estimated that the photosphere is 260 km thick. The pressure at the surface is said to be only a fraction of the earth’s atmospheric pressure at sea level. The photosphere consists mostly of hydrogen (60-80 percent) and helium. Most of the light produced by the sun comes from this layer.
The next layer of the sun’s atmosphere is called the chromosphere. It is 2,000-3,000 km thick and has a temperature of 100,000º C. Notice that the temperature increases while the pressure of the gas decreases. This is opposite to what was stated previously about the pressure-temperature relationship. This temperature increase will be explained later in the chapter. The same is true for the third and highest known layer of the sun, the corona. Scientists have been able to calculate the temperature of this layer by observing the very ionized spectral lines of iron, calcium, nickel, argon, and other elements. They estimate the temperature to be one to 10 million degrees centigrade. During some solar eruptions, 20 million degrees centigrade have been recorded.
The rotation of the sun is very unusual. The equatorial region revolves faster than the higher latitudes. For instance, the equator revolves on its axis every 25 days; but at around 30º latitude, north or south, the rotation takes 27.5 days. Still further from the equator, around 70º latitude, the sun rotates in 33 days. The sun can have these differential rotation rates because its surface is like a fluid rather than a solid, unlike the surfaces of the planets. No one knows what causes this differential rotation rate, but we will take a crack at it. In our description of the interior of the earth, we said that the heat source was the modulation point of the information for the earth. This modulation point, formed from the information that makes up the planet, is responsible for the magnetic and gravitational fields. The same would be true for the sun (Figure 6.6). The only difference between the sun and the earth is that the sun has much more information and potential than does the earth. This also means that the sun has no matter in its core. What we would expect to find there, instead, is a tremendous magnetic and gravitational field. This is the information for the sun entering this dimension and being directed toward the modulation point. We can make an important assumption here: the modulation point must be rather small relative to the amount of information that must pass through it. We say this because of the tremendous energy being emitted, the large amount of ultraviolet light, and the small amount of matter or gas present on the
Figure 6.6 Drawing of the cross section of the sun, showing the corona, chromosphere, photosphere, and central modulation point
surface. We would theorize that most of the matter part of the information leaves this dimension almost as fast as it enters. We would see it leave as ultraviolet light of very short wave lengths. A very small amount of the information would modulate into matter around the “surface” of the sun.
This idea seems to be proven by observations of sun spots. No one has ever figured out why sun spots appear to be black. Astronomers assume them to be approximately 1,300º C cooler than the surrounding surface. These spots last from a few hours to a few months. The only reason astronomers think they are cooler is because they appear as black spots on photographs taken with infrared filters. (1-p527) There are two faults with their theory. One is that since the sun spot appears as a black spot using either no filter or an infrared filter, it is assumed that the true wave length of the spot is lower than the infrared spectrum. The surface around the sun spot shows up in the infrared wave lengths. We would say that all this observation proves is that the wave length of the spot is higher than visible light and infrared radiation. This can only mean that the spot is emitting ultraviolet light of very short wave lengths. The appearance of these sun spots means we are looking into the center of the sun. When sun spots occur, magnetic fields of 100 to 4,000 gausses have been detected. This further proves that we are literally looking into the center of the sun. Usually the magnetic field lingers on for a short time after the sun spot disappears visually. This is because the surface of the sun is still too thin to totally veil the magnetic field.
If our model of the sun is correct, then the question is: what causes the light? The answer is easy. Nikola Tesla said it back in the 1890s. The sun's light is produced by the process called fluorescence. Fluorescence occurs when light of a shorter (higher-energy) wave length strikes an element and is absorbed; the element, in turn, gives off light of a longer (lower-energy) wave length. In the sun, the ultraviolet light strikes the hydrogen and helium gases in the photosphere. Most of the ultraviolet light is absorbed. The hydrogen and helium, in turn, give off light of a longer wave length, which is in the visible spectrum. The reason no light is present at sun spots is because the hydrogen and helium have been evaporated or pushed away, enabling the ultraviolet light to pass straight out into the upper atmosphere of the sun. It is a well-known fact that the sun gives off vast quantities of ultraviolet light. Today, environmentalists are complaining that fluorocarbons are destroying the ozone layer in our upper atmosphere. This ozone layer is responsible for shielding us from most of the ultraviolet light rays from the sun.
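The energy bookkeeping behind fluorescence follows from the Planck relation (E = hf, or equivalently E = hc divided by the wave length). The sketch below is a minimal illustration in Python; the 200 nm ultraviolet and 500 nm visible wave lengths are assumed for the example and are not figures from the text.

```python
# Energy carried by a photon, from the Planck relation E = h*c/wavelength.
# CGS units throughout, matching the erg-based figures used in this book.

H = 6.626e-27   # Planck's constant, erg*sec
C = 3.0e10      # speed of light, cm/sec

def photon_energy_erg(wavelength_cm):
    """Energy of a single photon of the given wave length, in ergs."""
    return H * C / wavelength_cm

e_uv = photon_energy_erg(200e-7)   # 200 nm ultraviolet photon (1 nm = 1e-7 cm)
e_vis = photon_energy_erg(500e-7)  # 500 nm visible photon

print(f"Ultraviolet photon: {e_uv:.2e} erg")
print(f"Visible photon:     {e_vis:.2e} erg")
# The absorbed ultraviolet photon carries more energy than the re-emitted
# visible one, which is why fluorescence always shifts light toward longer
# wave lengths, never shorter.
```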
We feel it is only fair to describe how traditional physics explains the energy source of the sun. It is necessary to compare the two views so you can see the discrepancy between the energy the sun has actually produced and the energy that could be produced by the thermonuclear principle. It has been calculated that the energy leaving the sun amounts to 3.8 × 10³³ ergs/sec. They theorize this energy is produced by the principle called fusion. The fusion is caused by a tremendous gravitational field which raises the pressure and, in turn, the temperature of the gases to as much as 18 million degrees centigrade. In the fusion reaction, some 600 million tons of hydrogen are converted into approximately 596 million tons of helium. The missing four million tons of matter is said to produce the energy we see. The energy emitted per year is 10⁴¹ ergs, and it is estimated that the total energy available to the sun amounts to 6 × 10⁵¹ ergs. This is the amount of energy the sun will emit over its lifetime, if it is powered by a fusion-type process. This means that the sun should emit energy over a lifetime of a little more than ten billion years. They estimate that at the present rate of consumption, the sun should shine for another 100 million years. (1-p546) It is estimated that the earth is at least 4.6 billion years old, and chances are the sun existed many billions of years before the earth was formed. Some sources say that the sun was formed at the same time as the earth, but this does not hold up against observations of young stars, such as the T Tauri stars mentioned earlier. These stars are observed giving off a wisp of matter from the side (Figure 6.7). At first this increase in light was considered to be a nova, but it did not fit the characteristics of a nova: its brightness slowly increased, remained high over a long period of time, and then only gradually diminished. No one has offered a hypothesis of what this phenomenon is. We theorize that it is similar to the observations of early galaxies: this type of star is “giving birth” to its planets. Such a star is already many billions of years old, and it is just forming its planets. After a planet is formed, it would take time for the matter to cool down and finally start solidifying to form the crust.
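These figures can be cross-checked against the mass-energy relation E = mc². The sketch below is a minimal check in Python; it assumes, as in the standard account, that the 600-to-596-million-ton conversion quoted above takes place every second.

```python
# Cross-check of the quoted solar-energy figures using E = m*c^2.
# CGS units: grams, cm/sec, ergs.

C = 3.0e10                       # speed of light, cm/sec
SECONDS_PER_YEAR = 3.156e7

mass_deficit_g = 4.0e6 * 1.0e6   # 4 million metric tons per second, in grams
power_erg_per_sec = mass_deficit_g * C**2

print(f"Energy from 4 million tons/sec: {power_erg_per_sec:.1e} erg/sec")
# ~3.6e33 erg/sec, in good agreement with the quoted 3.8 x 10^33 erg/sec.

yearly_output = 3.8e33 * SECONDS_PER_YEAR
print(f"Energy emitted per year:        {yearly_output:.1e} erg")
# ~1.2e41 erg/year, matching the 10^41 ergs-per-year figure in the text.
```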
Figure 6.7 Pictures of a T Tauri star
The problem with current dating methods is that they only measure the time since rock crystals first formed; the period when the matter was in a liquid or gaseous state is not measured. What we are trying to say is that it is impossible to connect the age of the sun with the age of its planets, since we do not know how many years after the sun was formed it produced its own planets. We would have a better idea of the age of the sun by assuming that the matter of the sun was produced two billion years after the galaxy was first formed. This assumes, of course, that you accept the idea of nuclear fusion. The logic of this reasoning is that the hydrogen gases the sun is allegedly consuming should be the same gases that originally formed it. Hydrogen is, by the way, one of the gases seen spewing forth from the centers of galaxies. So the question really is: how old is our galaxy? Since we are looking back in time when we look at other galaxies, our own galaxy must be relatively older and more mature than most of the ones we are observing. Since we are observing quasars that are 17 to 19 billion years old, we can assume that our own galaxy is at least 17 billion years old. If we assume that our sun was formed two to three billion years after our galaxy, our sun would be 14 to 15 billion years old. As mentioned earlier, if the sun's generating process were nuclear, all of its nuclear fuel should have been used up four to five billion years ago. There does seem to be a major problem with the accepted nuclear theory!
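Reduced to arithmetic, the argument runs as follows; every input in this sketch is one of the estimates quoted in this section.

```python
# The age argument above, reduced to its arithmetic.

quasar_age = 17e9        # lower bound on the oldest observed quasars, years
galaxy_age = quasar_age  # our galaxy is assumed to be at least this old
sun_delay = 2.5e9        # sun formed 2-3 billion years after the galaxy

sun_age = galaxy_age - sun_delay
fusion_lifetime = 10e9   # lifetime claimed for a fusion-powered sun

print(f"Implied age of the sun:  {sun_age / 1e9:.1f} billion years")
print(f"Fuel exhausted roughly:  {(sun_age - fusion_lifetime) / 1e9:.1f} "
      "billion years ago")
# ~14.5 billion years old against a ~10-billion-year fuel supply: the
# four-to-five-billion-year shortfall is the discrepancy pointed out above.
```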
Astronomers also know that the gravitational energy of the sun is grossly inadequate to produce the luminosity it has produced over its lifetime. So even they know that something is very wrong with their theory. Physicists have tried to test their theory of the source of the sun's energy with an ingenious experiment done at Brookhaven National Laboratory. If the sun does produce its energy by nuclear fusion, three percent of the energy released should be in the form of particles called neutrinos. The experiment was to see if they could detect a certain percentage of neutrinos striking the earth. They would detect the neutrinos by using an isotope of chlorine (Cl37). When chlorine-37 is struck by a neutrino, it is supposed to be transmuted into argon-37 and an electron. The idea was to detect how many radioactive argon atoms would be produced over a period of time. Theoretically, the experiment should have detected quite accurately the quantity of neutrinos striking the earth. The result was that only one-fifth as many neutrinos were detected as originally expected. There has been no explanation for this discrepancy. To quote again from George O. Abell:
“It is conceivable that there is a fundamental oversight in our nuclear physics theory, or as yet unexplained defects in (the) experimental techniques; otherwise we are left with the possibility that the sun, at present, is not deriving energy from the proton-proton chain. The latter conclusion, however, would truly send us back to the drawing board...”
Per our theory, the discrepancy comes from the fact that the sun does not derive its energy from thermonuclear reactions. The neutrinos being detected are probably produced in the corona, where the temperatures necessary for thermonuclear reactions could occur.
As you will remember from our discussion of the earth's magnetism, there are various synchronizing and resynchronizing frequencies that manifest themselves as increases and decreases in the earth's magnetic field. It appears that the sun also has these resynchronizing frequencies of short duration. The 11-year sun spot cycle and the alternating 22-year sun spot cycle seem to be caused by two types of synchronizing frequencies. For instance, during the 22-year sun spot cycle, the polarity of the sun spots seems to change. You will see from Figure 6.8 that as the magnetic field reaches zero, the potential increases considerably. This is not to say that the entire magnetic field of the sun changes polarity, just one or two frequencies. The reason the sun spots appear only around the equatorial region is that the electrostatic field created by the collapsing field is radiated perpendicular to the magnetic field (Figure 6.9). This is similar to the phenomenon found on the earth.
The next point to be explained is what causes the chromosphere and the corona. We would analyze these upper atmospheric layers of the sun as being similar to the D, E, F1, and F2 layers above the earth. On the earth, the temperature of the F2 layer is 40 times greater than at the surface. These highly ionized layers are caused by wave groups of frequencies that are formed by the modulation point at the center of the sun. A simpler way of saying this is that these upper layers are ionized by the information that makes up the sun.
Figure 6.8 Graph of a theoretical synchronizing frequency of the sun (curve H, the controlling frequencies, plotted against the X axis)
Figure 6.9 Drawing of the sun showing the resulting potential caused by a collapsing field (labels: information; direction of potential)
The Evolution of Stars
Our galaxy is estimated to contain over 100 billion stars in varying stages of development. All these stars revolve around the nucleus of our galaxy. It has always been a mystery to astronomers how a star is created and what course it runs before its purpose is finished. The following is a brief description of what scientists theorize about stars. Figure 6.10 is the Hertzsprung-Russell diagram. This diagram is accepted as a good model for the stages all stars go through. Keep in mind that this is their theory; it is not our interpretation of stellar evolution. Astronomers do admit that since they are only looking at a “snapshot” of many stars in various stages of evolution, the diagram is just an educated guess based on a theory of nuclear physics, which we have shown is not correct.
Astronomers have attempted to describe the evolution of a star by assuming that stars are formed from dust particles in the galaxy. They purport that as the density of particles becomes great enough, their mutual gravitation will draw them together. As more and more particles collect, their collective gravitational field increases. Eventually their collective mass becomes great enough to raise the kinetic energy at the center of the mass. This is what they theorize forms a star. The one problem with their theory is that it does not explain what gravity is, nor why a particle has a gravitational field. As explained in Chapter 3, unless you can define what gravity is and why it is present, even on the particle level, it makes no sense to guess how a star is formed. As the heat increases, the light produced increases. Referring to Figure 6.10, the area between the dotted lines is this hypothetical infant stage of a star. The reason it is vertical is that the size of the star depends on the number of particles attracted toward a specific spot. The area above zero absolute bolometric magnitude represents the energy released by very large stars, such as super red giants and red giants. The area marked one solar mass is the approximate location of our sun. As you will notice, all measurements of stars are made relative to the mass of our star; the area above one solar mass covers all stars larger than ours, and the area below covers those of lesser mass. The zero-age line is most often called the main sequence, because it is at this stage that the star is relatively stable, whether it is of large or small size. The distance from A to B represents the time it takes a star to go from the loose-particle state to the main sequence. Once stars reach the main sequence, it is theorized that they will lose temperature and light magnitude over time. The cycle will proceed down and to the right of the main sequence.

Figure 6.10 The Hertzsprung-Russell diagram
The first group of stars to be discussed is the red super giants and the red giants. They are at the top half of the graph. The surfaces of these red giants are calculated to be 2,700º C and less. It is estimated that less than one percent of the stars are giants and super giants. The red giants have low density and, therefore, produce less light per unit of surface area than stars of smaller mass. It is also theorized that red giants last only about 100 million years. Because of their tremendous size and mass, they tend to collapse quite quickly when their nuclear fuel is used up.
The next group of stars is of the type like our own sun, with masses ranging from slightly greater than ours to a little less.
The final stage for normal-size stars, those of less than three solar masses, is called a white dwarf. A white dwarf cannot be greater than 1.4 solar masses, or its mass will be so great that it will collapse to zero diameter. Its density will be one million times greater than water. It is theorized that most stars do reach this white dwarf stage by somehow losing part of their mass over a long period of time. The larger stars (above 1.4 solar masses) eventually lose most of their mass by ejecting it during novas or other processes unknown to astronomers. Large stars that have not been able to lose sufficient mass, and that end their evolution between 1.4 and 3 solar masses, are fated to become neutron stars. A neutron star of one solar mass, for instance, would be only 20 km in diameter and would possess a magnetic field of 10¹⁰ to 10¹² gausses. (1-p590)
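To see what such figures imply, the density can be computed directly. A minimal sketch; the solar-mass constant is a standard value, not taken from the text.

```python
import math

# Density implied by the neutron-star figures above: one solar mass
# packed into a sphere 20 km across.

M_SUN = 1.989e30   # kg, standard solar mass
radius_m = 10e3    # 20 km diameter -> 10 km radius

volume = (4.0 / 3.0) * math.pi * radius_m**3
density = M_SUN / volume

print(f"Neutron star density: {density:.1e} kg/m^3")
print(f"Relative to water:    {density / 1000.0:.1e} times denser")
# ~5e17 kg/m^3 -- hundreds of trillions of times the density of water,
# far beyond even the white dwarf's quoted factor of one million.
```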
If the star has a mass greater than three solar masses at the end of its evolution, it will collapse to become a black hole.
A more detailed description of black holes is appropriate now, because they are one of the most interesting subjects in astronomy. The conclusions about black holes derived by noted scientists like Oppenheimer, Schwarzschild, Einstein, and Rosen imply that a first dimension exists, and this helps prove our idea.
The main characteristic of a black hole is that it has an extreme gravitational field directed toward an extremely small area in time and space. Since most people associate gravitational fields with matter, we will use the accepted scientific explanation of black holes to simplify the description of the effects. Later, we will give our theory of the cause and effects. A black hole is formed if a star's mass is three times greater than our own sun's. Stars with masses greater than our sun's (in fact, many with 20 solar masses and more) are quite common in our galaxy. So astronomers assume that many such stars do eventually become black holes.
When a star has finally used up its atomic fuel, it begins to contract. There is no longer any energy to maintain the star's size and stability. So the star continues to collapse to a finite point possessing an infinite gravitational field but no matter. This point is called the singularity (Figure 6.11). Before reaching the singularity, there are two important zones surrounding it that must be mentioned. The first is the photon sphere. The photon sphere is the distance at which a beam of light is able to circle the black hole, if directed at the correct angle. Below the photon sphere, all light directed toward the black hole will curve in toward the center.
The next zone is called the event horizon. Any light directed straight out from the center of the black hole will reach no further than the event horizon. This is what distinguishes it from the photon sphere. The event horizon represents the distance from the center at which the gravitational field is so intense that even light cannot escape the gravitational pull. This means that the original star has disappeared from our universe. What is more interesting is what happens to time. Let's say we send a spacecraft with a clock aboard toward the center of a black hole. As the spacecraft approaches the black hole, time would begin to slow down. As the spacecraft approaches the event horizon, time would become infinitely long. After it passes through the event horizon, time would stop completely. What is interesting is that if there were occupants aboard, they would not notice that time had slowed down, because their nervous systems would slow down along with time.

Figure 6.11 Cross section of a black hole
If we were observing the spacecraft from a vantage point outside the photon sphere, we would see the spacecraft traveling slower and slower; eventually it would appear to stop and then disappear; but we would never see it go past the event horizon.
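The sizes of these two zones can be estimated from the conventional Schwarzschild formulas of general relativity (standard physics, not derived in this book): the event horizon lies at r = 2GM/c² and the photon sphere at 1.5 times that radius. A minimal sketch for the three-solar-mass minimum quoted above:

```python
# Event-horizon and photon-sphere radii for a collapsed star, from the
# standard Schwarzschild relations. SI units.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def event_horizon_m(mass_kg):
    """Schwarzschild radius: the event horizon of a non-rotating black hole."""
    return 2.0 * G * mass_kg / C**2

mass = 3 * M_SUN   # the minimum black-hole mass quoted in the text
r_horizon = event_horizon_m(mass)

print(f"Event horizon: {r_horizon / 1000:.1f} km")
print(f"Photon sphere: {1.5 * r_horizon / 1000:.1f} km")
# ~8.9 km and ~13.3 km: the entire star vanishes behind a surface
# smaller than a city.
```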
The big question scientists ask is: where did all the matter of the star go when it collapsed to the singularity point? Einstein and Rosen came up with the theory that the black hole creates a hole in our universe which leads to another universe. This theory is called the Einstein-Rosen Bridge. The matter would enter the other universe through a “white hole,” thought to be a quasar. This description of a black hole was a conclusion mandated by Einstein's Theory of Relativity. The funny thing is that the idea that matter comes from, or can go to, another dimension has no place in the Theory of Relativity per se. It became a conclusion to the problem of black holes only because the mathematics of super gravitational fields led to it. Even though we feel the Theory of Relativity is incorrect regarding energy and the cause of gravity, the scientists came to this conclusion because they were forced to explain gravity at its extremes. This forced them to forget their old ideas of existence.
Multidimensional Reality Explanation
We theorize that a sun first forms from the center of a galaxy as a small modulation point, similar to the evolution of a galaxy; in many respects the two parallel each other. We would conclude that when a star is first formed, we would observe it as a very strong ultraviolet light source. It would appear very bluish, with a very high surface temperature above 25,000º C. For larger stars, the temperature could go as high as 60,000º C. The idea of a star being formed from intergalactic dust particles does not enter into the picture. When the modulation point representing that star comes into being, it creates its own gas cloud around the center of the modulation point. In the early stellar stages, the potential produced by the modulating information is too high for matter to actually form in this dimension. This is because the opening, or modulation point, that all the information passes through is so small that it raises the pressure of information through that opening so high that the information is unstable when it enters this dimension. Per our theory, the star never leaves the main sequence line. All stars begin as very bright, high-temperature stars. The temperature and brightness are a function of the amount of information that makes up the star: if a star is originally made up of more information than another star, it will be hotter and possess greater luminosity. As time goes on, the modulation point grows larger due to the expansion of the universe. This, in turn, causes the temperature to be reduced and permits the star to produce matter of low atomic weight. When the star reaches a temperature of around 7,300º C, it begins to lose its bluish appearance and becomes white. This is also the stage where it begins to produce heavier elements such as iron and chromium. (1-p419) At this stage of evolution, we theorize, the star produces its planets.
In the next stage of stellar evolution, the star's surface temperature is approximately 5,700º C. It is white to yellow in appearance. Our sun falls in this category.
In the next stage of stellar evolution, the star's surface temperature is further lowered to 3,200º-4,700º C, and the star takes on an orange to red appearance.
In the next stage, the surface temperature is further lowered to less than 3,200º C. Its surface color has become red by now. As you can see, as the potential of the star went down, the color of the star went from the high end of the light spectrum (ultraviolet) to the lower (infrared) end. We believe that as the modulation point becomes larger, the potential, or temperature, falls in inverse proportion. As the potential goes down, the star produces more matter around the modulation point. This would be observed as a thicker sea of liquid gas surrounding the modulation point.
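The color sequence just described matches what the standard Wien displacement law (conventional physics, not part of this theory) predicts for a radiating surface at the quoted temperatures. A minimal sketch:

```python
# Peak emission wavelength for each quoted surface temperature, from the
# Wien displacement law: wavelength_peak = b / T.

WIEN_B = 2.898e-3   # Wien's constant, metre-kelvins

def peak_wavelength_nm(temp_celsius):
    return WIEN_B / (temp_celsius + 273.0) * 1e9

stages = [
    ("young star (bluish)",    25_000),
    ("white stage",             7_300),
    ("yellow-white (our sun)",  5_700),
    ("red stage",               3_200),
]
for label, temp_c in stages:
    print(f"{label:24s} {temp_c:6,d} C -> peak near "
          f"{peak_wavelength_nm(temp_c):4.0f} nm")
# The peak slides from the far ultraviolet (~115 nm) down through the
# visible band toward the infrared (~830 nm) as the temperature falls,
# reproducing the blue-to-white-to-red progression described above.
```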
As mentioned in Chapter 3, we also look at a star as possibly being an intelligence, a form of existence on a level we cannot approach. The following is another way of explaining the tremendous gravitational field created toward the end of a star's existence. It seems that intelligence is the accumulation of potential. We, as fourth-dimensional beings, have more potential than do three-dimensional objects. Sixth-dimensional beings would have far more potential than we, and so on all the way up to a star, which has more potential than anything we know. It may seem strange to look at a star as a form of existence, but we should attempt to explain the existence of everything in the universe. It is not good enough to say that something exists solely for our benefit or strictly because it is there. One of the basic premises of our theory is that there is a continuity through all forms of existence. There is one basic theme for existence, and that is to evolve. We make this assumption because everything we see in the universe, from all the forms of life on this planet to the evolution of planets, stars, and even galaxies, indicates one thing and one thing only: that everything is in a state of evolution, a constant state of “becoming.”
When we take the example of a star, we make the assumption that it perceives most of its existence from its information in the diehold. When it is first created, its gravitational field is balanced, because the information being directed toward it is in equilibrium with the unstable information leaving it. As time goes on, its gravitational field becomes greater and greater. Our first explanation is that as the star creates more three-dimensional matter, its gravitational information becomes greater than the unstable information leaving the star in the form of light. As the star becomes older, it modulates more matter into existence; as more matter is created, less light is given off. If we wanted to make a simple equation of the idea, it would be: total information = modulated matter + light (information leaving this dimension). Using this theory, at the last stage all the information that makes up the star will be modulated into matter. At this stage, the star becomes a neutron star or black hole.
This is the other way of looking at the increase of a star's gravitational field. If we accept the premise that the star exists to evolve, then that evolution must be intricately involved in the gathering of information. Since we say a star perceives its surroundings more from the first dimension (the diehold) than from any of the other dimensions, we would observe this gathering as its gravitational field. Gravity and magnetism are really information being directed toward a certain area in time and space. The important question to ask now is: at what rate does the star gather information?
As mentioned in previous chapters, the maximum speed obtainable is the speed of the tapehead across the information, which we more commonly refer to as the speed of light. So now to answer the question: the star gathers information at the speed of light. It gathers information 360º around its domain of information in the diehold, and it perceives this information entirely from the diehold. We could say that if we can see a star, it is, in turn, also perceiving us. As we observe it and collect information on its appearance and behavior, it perceives the information of our existence. By this analogy, as time goes on, more and more information is directed toward the star. This information does not modulate itself as additional matter, because it is actually the information of the objects around the star. At some point it has gathered so much information, and in turn has produced such a tremendous gravitational and magnetic field around itself, that it can no longer exist in this dimension or in any other dimension but the first. It disappears from our universe and goes back to the diehold, where time stands still, as domains of information on the “tape.” It has, in essence, totally evolved. It has gathered information from one end of the universe to the other, covering a period of maybe some 25 billion years, and it has finally reached the point where it is able to remove its information entirely from our universe. The Einstein-Rosen idea that through a black hole you could enter another universe is not very far wrong. We would not say you would be entering another universe; rather, the matter and information would be sent out through a quasar to form another galaxy within this universe, and the cycle would start all over again.
Novas
As a star goes through its various stages of evolution, it also goes through fluctuations in its appearance and energy levels. These fluctuations are known as novae. The old theory about a star that had novaed was that it had blown itself out of existence. Today, astronomers realize that stars can nova more than once and that only the outer layer of gas is ejected from the star. What happens is merely that the gas shell expands very rapidly away from the star. Gas shells have been observed expanding at about 4,600 km/sec. by the 40th day of observation. Unfortunately, we only begin to notice a nova well after the process has begun, so we really don't know how fast the gas shell starts out. The light increase happens very fast; within a day, a star can become tens of thousands of times brighter than before. When the star is very large, it will supernova. In this case, the light could eventually become hundreds of millions of times greater than before. The light will decrease over a period of years to decades, and eventually the star will return to a normal appearance. The temperatures of the gas observed during a nova range from 4,700º C to 20,000º C. When the gas shell expands, the star no longer gives off a continuous spectrum of light; the spectrum immediately shifts to the ultraviolet. After the gas shell has been ejected, the only light the gas gives off is due to fluorescence caused by the ultraviolet coming from the star. Besides large amounts of ultraviolet light, large amounts of X-rays and radio waves are also emitted. The X-rays produced travel slower than light; therefore, we detect the X-rays some time after the light has reached us. Astronomers have calculated that the heat necessary to produce some of these X-ray sources is as high as 50 million degrees centigrade.
Some time after the nova, a nebula cloud will form around the star. This nebula often appears as a ring, such as the famous Ring Nebula. All types of stars have been observed in the centers of these nebulae, from large stars to small white dwarfs. Many of the stars give off no more light than our own sun and are believed to be not much larger.
The bottom line of all these tidbits of information is that astronomers don’t know what causes a nova, nor what mechanism is involved in the sudden eruption of a star.
Multidimensional Reality Explanation
Using our theory of existence, it becomes relatively easy to understand what a nova is and why it occurs. To understand what happens in a star when it novas, you will have to recall our discussion of polar reversals on the earth. As you will remember, the magnetic field is caused by various clocking, carrier-wave, and synchronizing frequencies being directed toward the earth. The same is true for a star, but even more so. Since there is much more information making up a star than a planet, we can assume that there are many more of these clocking (controlling) frequencies. The reason for this is that there must be more controls in the diehold to keep all that information in place (in phase for its time and space). If there were not these controls, wide swings in potential would occur. The effect would be that the star would be a very unstable object, so unstable that if it had planets, they would not be able to support any type of life.

Picture of the Ring Nebula
The next question to ask is: what period or length of time do these various types of controlling frequencies have, and is there evidence of some of them? The obvious cycles that come to mind are examples from our own sun: the 11-year and 22-year sun spot cycles. More examples have been observed in other stars, called long-period variable stars and pulsating variable stars. Below is a list of these types of stars, with the counts given by the Soviet General Catalog of Variable Stars (1968).
Type of Star                      Number of Stars Known     Pulsation Period
                                  in Our Galaxy             (Days)

Cepheids (type 1)                 706                       3 to 50
Cepheids (type 2)                 (about 50)                5 to 30
RV Tauri stars                    104                       30 to 150
Long-period Mira-type             4566                      80 to 600
Semiregular                       2221                      30 to 2000
Irregular stars                   1687                      Irregular
RR Lyrae/cluster-types            4433                      1
Beta Cephei/Beta Canis Major      23                        0.1 to 0.3
Delta Scuti                       17                        1
Spectrum variables                28                        1 to 25
The pulsating light is caused by the star expanding and contracting. For example, one of these stars, Delta Cephei, expands and contracts over a distance of 3 million km, which amounts to 7 to 8 percent of its mean radius. It is thought that most of the long-period variables are red giants or super red giants. The reason for this will be explained later.
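Working backward from those two figures gives the scale of such a star; the solar radius used for comparison is a standard value, not taken from the text.

```python
# Mean radius implied by the Delta Cephei figures: a 3 million km swing
# equal to 7-8 percent of the mean radius.

SWING_KM = 3.0e6
SUN_RADIUS_KM = 6.96e5   # standard solar radius

for fraction in (0.07, 0.08):
    radius_km = SWING_KM / fraction
    print(f"At {fraction:.0%}: mean radius ~ {radius_km:.1e} km "
          f"(~{radius_km / SUN_RADIUS_KM:.0f} solar radii)")
# Roughly 37-43 million km, or some 54-62 solar radii -- consistent with
# the remark that most long-period variables are giants or super giants.
```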
In summary, there appears to be enough evidence to support the idea of various cycles affecting the appearance of a star. In fact, when we give the subject more thought, we come to the obvious conclusion that stars are much more affected by the clocking and synchronizing frequencies than is any planet. This is because a star is in a very delicate state of equilibrium between matter being modulated into existence and information (light) leaving this dimension. So if a minor clocking frequency crosses the X axis (Figure 6.8), the result would be an additional amount of potential being created. This could have an appreciable effect on a star. Examples would be sun spots or a star expanding.
We theorize that there is a moment in time when all these controlling frequencies cross the nil point (the X axis). We further theorize that the time interval between these reversals is a little less than 12,000 years. Since these controlling frequencies are a function of the diehold, the reversal should happen all over the universe at the same exact moment. We would observe it at scattered times because the stars are at various distances from us, and light does not travel instantaneously. Figure 6.12 best illustrates this idea.
The illustration is a top view of our galaxy. Point E is the approximate location of the earth. The concentric circles represent time lines 12,000 years apart. Any star that falls on one of these time lines would be observed in nova. It is possible that the only reason we see distant galaxies is because some of the stars within them fall on a number of these time lines. The possible reason why the centers of spiral galaxies are active one to two percent of the time may be that one of these time lines is passing through a high concentration of stars; we would be observing a continuous eruption of novas as the time line passes through the galaxy's center. The question is, could we find any evidence to correlate distance with the appearance of a nova? There is one example that seems to indicate such a relationship: the star Vela X. It is estimated to be approximately 1,500 light years away, and it is believed to have novaed some 11,000 to 15,000 years ago. Since time is very difficult to determine, astronomically speaking, the dates obtained are amazingly close to our 12,000-year time cycle. Considering the visual occurrence of novas and supernovas in our galaxy, we have to conclude that these are not rare occurrences. It has been estimated that there is at least one supernova in our galaxy every 11 ± 2 years. In an average year, as many as ten regular novas have been observed.
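The Vela X correlation can be put into numbers. Below is a minimal sketch of the time-line idea, assuming that the last universal nova occurred one full cycle of 12,000 years ago; the 1,500-light-year distance is the estimate quoted above.

```python
# The "time line" arithmetic of Figure 6.12: if every star novas together
# every 12,000 years, the light from the event n cycles ago reaches us
# (n * 12000 - distance) years ago.

CYCLE_YEARS = 12_000

def light_arrival_years_ago(distance_ly, cycles_ago=1):
    return cycles_ago * CYCLE_YEARS - distance_ly

distance_vela = 1_500   # light years, the quoted estimate for Vela X
print(f"Event itself:  {CYCLE_YEARS:,} years ago (the last universal nova)")
print(f"Light arrived: {light_arrival_years_ago(distance_vela):,} years ago")
# A 12,000-year-old event falls squarely inside the astronomers'
# 11,000-to-15,000-year estimate for when Vela X novaed.
```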
Figure 6.12 Drawing depicting the 12,000-year time lines as they travel through the galaxy

Figure 6.13 is a graph showing what this 12,000-year cycle might look like. Line H represents the information of the various clocking and synchronizing frequencies; we would observe them as the magnetic field of the star. P represents the potential created by these frequencies. This idea follows from well-proven electronic theory: the current is 90º out of phase with the magnetic field. We will arbitrarily start where the magnetic field is at its greatest.
Figure 6.13 Graph depicting the 12,000-year cycle and the resulting potential created
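That 90º phase relation is easy to verify numerically: if the field follows a cosine, the potential it induces is proportional to the rate at which the field collapses, so it peaks exactly at the field's nil point, which is the behavior drawn in Figures 6.8 and 6.13. A small sketch (the unit amplitude is arbitrary):

```python
import numpy as np

# Field H and induced potential P ~ -dH/dt over one 12,000-year cycle.

PERIOD = 12_000.0
t = np.linspace(0.0, PERIOD, 100_001)

field = np.cos(2 * np.pi * t / PERIOD)   # the controlling-frequency field, H
potential = np.gradient(-field, t)       # induced potential, 90 degrees behind

zero_crossing = t[np.argmin(np.abs(field[: len(t) // 2]))]
potential_peak = t[np.argmax(potential)]

print(f"Field crosses zero at t = {zero_crossing:,.0f} years")
print(f"Potential peaks at    t = {potential_peak:,.0f} years")
# Both land at one quarter of the cycle (3,000 years): the spike of
# potential coincides with the field's nil point.
```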
Step 1: At this stage, the potential caused by these controlling frequencies is at its minimum. We would therefore expect to see the star highly compressed and also quite bright. Do we see anything in our observations that would indicate stars of this category? We believe the stars that fit this category are the white dwarfs. White dwarfs have a high surface temperature, are of small radius, and are therefore assumed to be of much greater density than larger stars. It has been estimated that 10 percent of the stars observed are white dwarfs. (1-p458) On our diagram, 10 percent of one of these cycles would represent 1,200 years. So it appears that our observation of white dwarfs fits the diagram.
In Step 2, the magnetic field is approximately equal to the potential created by its collapsing field. We would expect to see the star cooling down and expanding very slowly. This condition would continue almost up until the time the star novas.
In Step 3, the magnetic field of the star would be decreasing more rapidly; therefore, we would expect to see the potential created increase. This would cause the star to expand slightly, and the rotation of the star would slow slightly. The color would change, appearing more yellow.
In Step 4 (the reversal), the magnetic field would be zero. At this point, the collapsing magnetic field would create an instantaneous spike of potential that would immediately expand the star to many times its original size. Couple this with the fact that with no magnetic field containing what little matter the star is made of, there would be less force helping to contain the hot gases. Our diagram accurately describes what is observed during a nova. After the gases have expanded, all we would see of the star is a tremendous amount of ultraviolet light coming from a small spot. It is unknown how long this nil point lasts; there are some indications that it lasts only about 12 hours. After some time, the star would begin to reduce in potential and would begin to create matter in the form of gases around its edge; but this time the star would appear much larger and quite red. It would appear larger because the magnetic field is still quite weak and the potential is still far greater than it was prior to the nova, so the radius at which matter could modulate into existence would be farther out. The color of the star should be red (Step 5) because the outside of the star would be cooler, its radius from the modulation point being greater. As the magnetic field begins to build up again, the appearance should change from red to orange.
Now the question is, is there any observational evidence that would lend support to Step 5? Astronomers do know that red giants and red super giants exist. They have a much cooler surface temperature than do most stars. What is most interesting is that less than one percent of the stars observed are red giants or red super giants. In our graph, that would represent a period of time less than 120 years. The cycle would continue until the star appeared as a white dwarf.
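The population arithmetic used in Steps 1 and 5 is simple enough to tabulate. A minimal sketch, using only the observed fractions quoted in the text:

```python
# If every star runs through the same 12,000-year cycle, the fraction of
# stars caught in a given appearance should equal the fraction of the
# cycle spent in that state.

CYCLE_YEARS = 12_000

observed = [
    ("white dwarf",             0.10),   # "10 percent of the stars observed"
    ("red giant / super giant", 0.01),   # "less than one percent"
]
for appearance, fraction in observed:
    duration = fraction * CYCLE_YEARS
    print(f"{appearance:25s} {fraction:4.0%} of stars "
          f"-> about {duration:,.0f} years of the cycle")
# 10 percent -> ~1,200 years as a white dwarf; under 1 percent -> less
# than ~120 years near the red-giant extreme, the figures quoted above.
```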
What this illustration implies is that a star not only changes in general appearance over a long period of time but that it also temporarily changes its appearance as the magnetic field increases and decreases. A star could appear as a white dwarf temporarily without actually being in the white dwarf stage. The same could be said when it appears as a red giant. The star’s appearance changes as a result of temporary fluctuations in its potential.
Did our own sun nova some time in the recent past? Our answer is yes, because all stars in the universe nova at the exact same moment in time. This is also the exact moment that the polar reversal occurred on the earth. As mentioned in Chapter 3, there is a great deal of evidence showing that the polar reversal occurred at about the same time as the ice age, and that both occurrences were about 12,000 years ago. The big question in meteorology and geology has been what caused the ice age.
Ice Age
A brief description of what scientists think causes an ice age can be summed up simply. One school of thought holds that the ice age was caused by an increased period of volcanism. What they cannot explain is what would cause a recurring cycle of volcanic activity. These cycles are estimated to be from 10,000 to 20,000 years apart. How can these periods of increased volcanism be caused by an earth that is supposed to be continuously cooling off? They cannot even explain the heat source at the center of the earth, let alone why it would increase and decrease its potential. Supposedly, enough volcanic ash is produced to filter out a good deal of the sun's rays, thereby cooling down the earth over a long period of time and causing increased snowfall. The problem with this theory is that a drop in temperature does not cause evaporation of water, which is necessary to have snow.
The other school of thought says the ice age was caused by a tilting of the earth's axis. This tilting moved the existing ice fields further south. Neither theory can explain why the ice fields were 10,000 to 12,000 feet thick from latitudes 40º north and 40º south. It has been calculated that to create an ice field this thick, covering that much area, our oceans must have been lowered by 400 to 1,000 feet.
Multidimensional Reality Explanation
When you try to understand what causes an ice field to be that thick, you have to ask the most basic question: how is snow formed? To have snow or any precipitation, there must be clouds. Clouds are formed from water vapor. Water vapor, in turn, is released into the atmosphere through the process called evaporation. To have evaporation, there must be heat. For the oceans of our earth to be lowered as much as 1,000 feet to create an ice field more than two miles thick, there must be a tremendous amount of heat. The only source of heat within this parsec of space is our sun. It is probably now becoming clear to many of you what caused all of the ice ages this planet has seen. The scenario would go something like this.
At the moment the sun novas, the hot gases would expand, possibly as fast as 10,000 km/sec. This blast of hot gases would reach the earth in about five hours. At that time, the temperature of these gases would be between 2,000º and 4,700º C. We derive these temperatures from the evidence of vitrified clays and shales found all over the world, and from temperatures observed in other stellar novae. It takes at least 1,100º C to turn clay to glass. At the moment of the flip-flop, the earth would stop rotating. This is explained in the footnote below.¹ When the hot gases reach the earth, only one half of the earth is facing the sun; and considering the angle of the edges, only one-third of the planet would get the full treatment. Any water facing the sun at this time would be evaporated extremely quickly. All streams and rivers would dry up immediately. The oceans facing the sun could be considerably evaporated, depending on the water's depth. Since there would be a tremendous evacuation of atmosphere on the sun side of the earth, this would cause a refrigeration effect on the opposite side, the result of which would be a severe hail storm in that area. This blast of heat would probably remain for as much as a month after the nova. At the time the earth stops, the oceans on the back side of the earth would continue moving at the previous velocity of rotation (approximately 1,000 miles an hour). The question now is, is there any evidence on the earth that the oceans did rush across the surface of the land? Twelve thousand years ago, we theorize, the earth turned from east to west, opposite to the direction in which it turns now. This means that when the earth stopped, the oceans would have continued traveling in a westwardly direction. There is evidence of great deposits of sand, gravel, and clay located all over the earth's surface. These deposits have usually been identified with glacial moraines and drift. The glacial drift, or till as it is sometimes called, is believed to have been deposited by glaciers; but geologists have discovered that the entire Amazon basin region is just like the glacial drift. (8-p488)

1. The reason the earth would stop rotating when the nil point is reached is that the controlling frequencies (the magnetic field) directed toward the earth would cease for a short duration. After the nil point is passed, we would expect to see the earth rotate in the opposite direction; we would then see the sun rise in the west and set in the east. The scientific basis for this conclusion is a phenomenon observed when high voltage is passed through a wire firmly connected at only one end (Figure 6.14): when the current is passed down through the wire, the wire twists counterclockwise by itself. This has always been an unexplained phenomenon in science. The reason for the twisting is that it is a function of the information passing through our dimension. The earth is like the wire. The information for the earth's various controlling frequencies passes from the north pole to the south pole; the result is that the earth turns, or twists, from west to east. But if the information were reversed, the earth would turn in the opposite direction. In Chapter 11 we will present various ancient mythologies that tell the same story: that at one time the sun rose in the west and set in the east.

Figure 6.14 Drawing showing the twisting of a wire and the rotation of the earth
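Two figures in this scenario can be checked with simple kinematics. A minimal sketch; the mean earth-sun distance and the earth's radius are standard values, not taken from the text.

```python
import math

# (1) Travel time of the nova gases across the earth-sun distance.
AU_KM = 1.496e8            # mean earth-sun distance, km (standard value)
GAS_SPEED_KM_S = 10_000    # expansion speed assumed in the text

travel_hours = AU_KM / GAS_SPEED_KM_S / 3600.0
print(f"Blast travel time: {travel_hours:.1f} hours")
# ~4.2 hours, consistent with the "about five hours" above.

# (2) Speed the oceans would keep if the planet itself suddenly stopped.
EARTH_RADIUS_MI = 3_959    # standard value
equator_speed_mph = 2.0 * math.pi * EARTH_RADIUS_MI / 24.0
print(f"Equatorial rotation speed: {equator_speed_mph:,.0f} mph")
# Close to the "approximately 1,000 miles an hour" quoted above.
```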
Geological formations that strongly resemble glacial drift have been discovered in many tropical areas. We believe that a great deal of this sedimentary material was formed by processes other than glacial activity. Glacial activity is a principal cause of the till in higher latitudes; but in the lower latitudes, near the equator, we think it is safe to rule out the possibility of glaciers. To have a glacier covering the tropical areas would imply that all the water on the earth had evaporated and was then deposited back on the earth as snowfall over the entire planet. Since this is highly improbable, it seems to make more sense that the oceans deposited a great deal of this material on the continents.
After the nova, the earth will lose a certain percentage of its water. Most of the water vapor will be caught in the upper atmosphere and will not escape the earth’s gravitational field, but still a small percentage of the water will be evaporated and will leave this planet. This may be the cause of comets. There will be other occurrences that will happen on the planet at this reversal time. These were mentioned at the end of Chapter 3.
The question now is, is there any evidence on this planet that the sun did nova? There are clues in Chapter 10 in the section on mound builders, and in Chapter 11 we will go extensively through the ancient mythologies which clearly describe such an event.
There are two more major pieces of evidence that are extraterrestrial in nature but imply the same thing. One is Mars. The pictures of Mars sent back by Mariner 9 and the Viking flights clearly show evidence that large amounts of water once existed on the face of Mars. Nobody knows where the water came from or where it disappeared to. There are some theories that the water is now in the form of permafrost in the ground; this may be possible at the two polar regions. What the pictures of Mars show is definite evidence of streams, rivers, ocean beds, and sea shores, all covered with a layer of very fine dust particles (to be explained later). What we theorize is this: as mentioned earlier, a star becomes cooler as it grows older. We would say that our sun was originally much hotter and maybe much larger. This could have been a hundred million years ago or two billion years ago, but it was at some time in the past. The evidence is overwhelming that Mars had a climate similar to that of the earth. What we think happened is that eventually, over a period of many novas, the water on Mars evaporated completely. Mars got cooler because the sun got cooler. We say that Mars definitely did have life on it, and probably a highly advanced intelligence. We say this because one of the photographs from Mariner 9 clearly showed large pyramids on the face of Mars (to be covered in Chapter 10). We would further theorize that the former occupants of Mars probably moved to the earth. It is very possible that this cycle has been repeated through most of the planets in our solar system. We theorize that the dust found on the face of Mars consists of the particles carried by the nova blast when the sun erupted.
To test this theory, we had to figure out where to look for this type of dust on the earth. As mentioned earlier, an ice age comes after the sun novas. So we would expect to find some type of fine-grained material just below the glacial till. We would also expect the rock below the glacial till to exhibit some sort of chemical change due to heat.
In our investigations we did find that the lowest level of the glacial till is composed of very fine-grained clay. The clay in many of the locations had a reddish appearance and, therefore, a high iron content. There was always a fine line between this clay and the underlying rock. The underlying rock in many parts of the world shows definite indications of having been affected by great heat; such deposits as slate, shale, and conglomerate make up these lower levels of rock. (8-p60) The clay deposits and the till show no signs of stratification. In other words, it appears that it was all deposited in one short period of time. These layers of clay have also been generally devoid of fossils. The rock underneath the till is almost always smooth and is sometimes polished. To further test the idea that the till may not have been caused entirely by glaciers, we wanted to see if there was evidence of glacial till in Siberia, which is cold enough to have glaciers but is also very far from any ocean. (7-p461) There are no glacial drifts found in Siberia. This seems to prove our point that glaciers are not the major cause of the till or drift.
The next evidence that the sun novas is the very existence of the moon. From the Apollo flights, we know several things about the moon. First, it did not form from part of the earth. Second, the side of the moon that always faces the earth is much smoother than the back side. There is no explanation for this. It has always been thought that this smooth face was caused by volcanic activity. In that case, why shouldn't the same type of volcanic activity exist on the back side of the moon? The back side of the moon is very mountainous and full of craters. All of the rocks brought back by the Apollo flights are igneous rocks, rocks formed from heat (1,100º to 1,200º C). It has been theorized that the heat came from volcanoes or from the impact heat of many meteors. The rocks on the back side of the moon show some chemical differences from the rocks on our side. (5-p164) The moon is also covered with a large amount of dust, the origin of which is difficult to ascertain. (Also, 20 percent of the samples brought back by Apollo 12 were glass.) (6) We would say that this dust is similar to the dust on Mars.
The next step in our investigation was to see how the chemical composition of the dust found on the moon compared with the chemical composition of the clays found in the drift (Figure 6.15). We found that the compositions were very close. As you can see from Figure 6.16, the compounds found on the moon are the same ones that make up the clays. There are several exceptions. The iron oxide content of the lunar soil is higher, but this is because many micrometeorites have impacted on the lunar surface. On the earth, the silica (SiO2) is greater because of the sand that leaches through the soil into the clay layers. Some of the other minor elements have also seeped into these clay layers from the organic material above. What is particularly important is that the major ingredients that make up clay are present in the same proportions as in the dust found on the moon.
The last thing we must explain is why the dust found on the moon is gray, while the dust found on Mars and the clays on the earth are reddish. This is relatively simple to explain. On Mars and on the earth, free oxygen molecules are present which can produce iron oxide, which has a reddish color. On the moon there is no free oxygen with which the iron can associate; therefore, the dust keeps its steel-gray appearance. We are not saying that all clays found on the earth originate from the sun, merely that some of it probably originated there.
Type of Clay   Chemical Formula

Allophane      Al2SiO5 · 5H2O
Kaolinite      (OH)8 Si4 Al4 O10, or SiO2; Al2O3
Smectite       (OH)4 Si8 Al4 O20, or SiO2; Al2O3
Illite         (Mg, Fe, or Ca)(OH)2 Al2 Si4 K O10
Chlorite       (OH)4 (Si, Al)8 (Mg, Fe)6 O20, or (Mg, Al)6 (OH)12

Figure 6.15 The chemical composition of the most commonly found clays on the earth
Atomic Composition of the Lunar Surface

Element         Apollo 14   Apollo 15   Apollo 16   Apollo 17   Earth's Surface
O  Oxygen         60.8%       60.4%       61.1%       61.1%       49.13%
Si Silicon        17.5        17.3        16.3        16.3        26.00
Al Aluminum        7.7         6.5        11.6        10.1         7.45
Fe Iron            3.1         4.5         1.6         1.8         4.20
Ca Calcium         4.3         4.4         6.1         6.1         3.25
Mg Magnesium       5.4         5.9         3.0         4.0         2.35

Molecular Composition of the Lunar Surface

Molecule        Apollo 14   Apollo 15   Apollo 16   Apollo 17   Earth's Surface
SiO2              48.0%       46.07%      45.0%       47.0%       48.5%
FeO               10.5        21.19        7.5         8.6        10.5
CaO               10.7        10.21       13.0        12.1        10.0
Al2O3             17.1         8.95       23.0        21.2        16.0
MgO                8.7         9.51        8.5         9.9         7.0
K2O                0.5         0.034       0.20        0.15        1.2

Figure 6.16 Table of the chemical composition of the lunar surface
In Brazil a form of this red clay, called ironstone, has been found on the tops of many mountains. The clay covers the mountains like a mantle and is often from six to nine feet thick. (8-p560) The thickness can be explained by the fact that Brazil is located on the equator; it would therefore have received the full impact of the heat blast when the sun novaed.
There are two unusual facts about the moon that we do not consider coincidence. First, why does the moon have an almost perfectly circular orbit around the earth? Why should this be, if the moon was allegedly captured by the earth's gravity? If the capture happened on a chance basis, the orbit should be more elliptical, because the moon had a forward velocity. How do we explain the “straightening out” of the orbit? The second fact, which haunts us the most, is this: how is it that during an eclipse of the sun, the moon perfectly covers the photosphere of the sun, thereby revealing only the sun's corona? We do not think this is coincidence.
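That eclipse coincidence can be quantified with the standard angular-size relation (apparent size = diameter divided by distance); all four values below are conventional astronomical figures, not taken from the text.

```python
import math

# Apparent (angular) sizes of the sun and moon as seen from the earth.

SUN_DIAMETER_KM = 1.392e6    # standard values throughout
SUN_DISTANCE_KM = 1.496e8
MOON_DIAMETER_KM = 3.474e3
MOON_DISTANCE_KM = 3.844e5

sun_angle = SUN_DIAMETER_KM / SUN_DISTANCE_KM     # radians
moon_angle = MOON_DIAMETER_KM / MOON_DISTANCE_KM

print(f"Sun:  {math.degrees(sun_angle):.3f} degrees")
print(f"Moon: {math.degrees(moon_angle):.3f} degrees")
# Both come out near half a degree (~0.52-0.53), so the moon's disk can
# just cover the photosphere, leaving only the corona visible.
```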
For a moment, accept the possibility that a highly advanced civilization, maybe 100 or 200 million years ago, knew what existence was and the ideas toward which man was to evolve. They also knew that after so many years the sun novas, and most or all of civilization is destroyed; and along with it, knowledge of existence and evolution could also be destroyed. This would mean that the following generation of survivors would not know what is to happen to their descendants in the future. This advanced civilization, we theorize, placed in orbit a satellite which we call the moon. This moon would act like a time clock, not so much to divide the year into lunar months as to give man a clue to what would happen in the future. In other words, take a civilization that has not evolved to the idea of existence but is somewhat technologically advanced. As the time draws closer to the reversal, the sun will gradually expand. At some point, during one of the solar eclipses, the moon will no longer cover the sun; from that time on, there will no longer be a total eclipse of the sun. No matter how primitive the civilization is at this time, even they would begin to realize that something very wrong is about to happen. Since one side of the moon always faces the earth, and that is the side that is relatively smooth, we must also conclude that the time between cycles is always exactly the same length. We say this because the same side of the moon is always exposed to the sun when it novas. This is why our side of the moon is relatively smooth: the surface is periodically melted by the heat waves of the sun. It is also very possible that the back side of the moon holds a depository of information from past advanced civilizations.
In the 11th and 12th chapters, we will further prove our theory that the sun novaed. The evidence from ancient mythologies from all over the world, plus many sections in the Bible, overwhelmingly proves our theory.
REFERENCES
1. Abell, G. O., “Exploration of the Universe,” 3rd ed. (Holt, Rinehart & Winston, N.Y., 1975).
2. Setti, G. (ed.), “Structure and Evolution of the Galaxies” (Holland, D. Reidel Publ., 1975).
3. The Aging Universe: Science News, vol. 110, p. 19, July 10, 1976.
4. Orion's Star Nursery in Far Ultraviolet: Science News, vol. 109, p. 39, Jan. 17, 1975.
5. Kopal, Z., “The Moon in the Post-Apollo Era” (Holland, D. Reidel Publishing Co., 1974).
6. Fielder, G. (ed.), “Geology and Physics of the Moon” (Elsevier Publ. Co., N.Y., 1971).
7. Geikie, J., “The Great Ice Age” (N.Y., D. Appleton & Co., 1888).
8. Agassiz, L., “Geology and Physical Geography of Brazil” (Fields, Osgood & Co., 1870).
Bibliography
Agassiz, L., Geological Sketches (N.Y., Houghton, Mifflin & Co., 1890).
Alfven, H., Antimatter and Cosmology, Scientific American, vol. 4, p. 106-114, Apr. 1967.
Dyson, F. J., Energy in the Universe, Scientific American, vol. 225, no. 3, p. 50-59, Sept. 1971.
Hopkins, J., Supernovae, Astronomy, vol. 5, no. 4, p. 7, April 1977.
Kaufmann, W. J., III, Relativity and Cosmology (N.Y., Harper & Row, 1973).
Lyell, Sir C., The Antiquity of Man (Philadelphia, J. B. Lippincott, 1873).
Post-Apollo Lunar Science, a report in July 1973, Lunar Science Institute, Texas, 1972.
Ries, H., Clays, 3rd ed. (N.Y., John Wiley & Sons, 1927).
Runcorn, S. K., and Urey, H. C. (eds.), The Moon: Symposium No. 47 of the International Astronomical Union (Holland, D. Reidel Publ., 1972).
Sandage, A. R., The Red-Shift, Scientific American, vol. 195, no. 3, p. 170-182.
Shapley, H., Galaxies (Mass., Harvard University Press, 1972).
Thomsen, D. E., A Hole in the Middle of the Galaxy, Science News, vol. 111, p. 124, Feb. 19, 1977.
Thomsen, D. E., Supernovas, Science News, vol. 111, p. 76, 1977.
Weinberg, S., Gravitation and Cosmology (N.Y., John Wiley & Sons, 1972).