## General Objections
There are in fact periodicities as well as redshift quantisation effects. The periodicities are genuine galaxy-distribution effects. However, they all involve large redshift differences, such as repeats at z = 0.0125 and z = 0.0565. The latter value involves 6,200 quantum jumps of Tifft's basic value and reflects the large-scale structuring of the cosmos at around 850 million light-years. The smaller value corresponds to around 190 million light-years, which is the approximate distance between superclusters. The point is that Tifft's basic quantum states still occur within these large-scale structures and have nothing to do with the size of galaxies or the distances between them. The lowest observed redshift quantisation that can reasonably be attributed to an average distance between galaxies is the interval of 37.6 km/s that Guthrie and Napier picked up in our local supercluster. This comprises a block of 13 or 14 quantum jumps and a distance of around 1.85 million light-years. It serves to show that basic quantum states below the interval of 13 quantum jumps have nothing to do with galaxy size or distribution. Finally, Tifft has noted that there are redshift quantum jumps within individual galaxies. This indicates that the effect has nothing to do with clustering. (November 16, 1999.)
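The arithmetic behind these figures can be checked directly. A minimal sketch, using the low-redshift approximation v = zc and the basic interval implied by the 6,200-jump figure quoted above:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def recession_velocity(z):
    """Low-redshift approximation: v = z * c."""
    return z * C_KM_S

# Basic quantum interval implied by 6,200 jumps at z = 0.0565
basic_interval = recession_velocity(0.0565) / 6200   # ~2.73 km/s

# Guthrie and Napier's 37.6 km/s interval expressed in basic jumps
jumps = 37.6 / basic_interval                        # ~13.8, i.e. a block of 13 or 14
```

Running these two lines recovers both the roughly 2.7 km/s basic interval associated with Tifft's work and the "13 or 14 quantum jumps" stated for the Guthrie-Napier interval.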
In response to criticism, it was obvious the data list was beyond contention - we had included everything in our Report. Furthermore, the theoretical approach withstood scrutiny, except on the two issues of the redshift and gravitation. The main point of contention with the Report has been the statistical treatment of the data, and whether or not these data show a statistically significant decay in c over the last 300 years. Interestingly, all professional statistical comment agreed that a decay in c had occurred, while many less qualified statisticians claimed it had not! At that point, a Canadian statistician, Alan Montgomery, liaised with Lambert Dolphin and me, and argued the case well against all comers. He presented a series of papers which have withstood the criticism of both the Creationist community and others. From his treatment of the data it can be stated that c decay (cDK) [note: this designation has since been changed to 'variable c' or Vc] has at least formal statistical significance. However, Zero Point Energy and the Redshift takes the available data right back beyond the last 300 years. In so doing, a complete theory of how cDK occurred (and why) has been developed in a way that is consistent with the observational data from astronomy and atomic physics. In simple terms, the light from distant galaxies is redshifted by progressively greater amounts the further out into space we look. This is also equivalent to looking back in time. As it turns out, the redshift of light includes a signature as to what the value of c was at the moment of emission. Using this signature, we then know precisely how c (and other c-related atomic constants) has behaved with time. In essence, we now have a data set that goes right back to the origin of the cosmos. This has allowed a definitive cDK curve to be constructed from the data and ultimate causes to be uncovered. 
It also allows all radiometric and other atomic dates to be corrected to read actual orbital time, since theory shows that cDK affects the run-rate of these clocks. A very recent development on the cDK front has been the London Press
announcement on November 15th, 1998, of the possibility of a significantly
higher light-speed at the origin of the cosmos. I have been privileged to
receive a 13-page pre-print of the Albrecht-Magueijo paper (A-M paper), which
is entitled "A time varying speed of light as a solution to cosmological
puzzles". From this fascinating paper, one can see that a very high initial c
value really does answer a number of problems with Big Bang cosmology. My main
reservation is that it is entirely theoretically based. It may be difficult to
obtain observational support. As I read it, the A-M paper requires an extremely high initial value of c.

There have been claims that I 'cooked' or mishandled the data by selecting figures that fit the theory. This can hardly apply to the 1987 Report, as all the data are included. Even the Skeptics admitted that "it is much harder to accuse Setterfield of data selection in this Report". The accusation may have had some validity for the early, incomplete data sets of the preliminary work, but I was reporting what I had at the time. The rigorous data analyses in Montgomery's papers subsequent to the 1987 Report have withstood all scrutiny on this point and positively support cDK. However, the redshift data in the forthcoming paper overcome all such objections, as the trend is quite specific and follows a natural decay form unequivocally.

Finally, there is Professor Skiff's critique in Douglas Kelly's book. Professor Skiff makes several comments there. He suggests that cDK may be acceptable if "Planck's constant is also changing in such a way as to keep the fine structure 'constant' constant." This is in fact the case, as the 1987 Report makes clear. Professor Skiff then addresses the accuracy of the measurements of c over the last 300 years. He rightly points out that there are a number of curves which fit the data. Even though the same comments still apply to the 1987 Report, I would point out that the curves and data he is discussing are those offered in 1983, rather than those of 1987. It is unfortunate that the outcome of Montgomery's more recent analyses is not even mentioned in Douglas Kelly's book. Professor Skiff is also correct in pointing out that the extrapolation from 300 years of data is "very speculative". Nevertheless, geochronologists extrapolate by factors of up to 50 million to obtain dates of 5 billion years on the basis of less than a century's observations of half-lives.
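The extrapolation factor cited here is simple arithmetic: a 5-billion-year date inferred from roughly a century of half-life observations is an extrapolation by a factor of 50 million.

```python
observed_years = 100          # roughly a century of half-life measurements
inferred_age_years = 5e9      # radiometric dates of about 5 billion years

extrapolation_factor = inferred_age_years / observed_years  # 50 million
```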
However, the Professor's legitimate concern here should be largely dissipated by the redshift results, which take us back essentially to the origin of the curve and define its form unambiguously. The other issue the Professor spends some time on is the theoretical derivation for cDK, and a basic photon idea which was used to support the preferred equation in the 1983 publication. Both that equation and the theoretical derivation were short-lived; the 1987 Report presented the revised scenario. The upcoming redshift paper has a completely defined curve that has a solid observational basis throughout. The theory of why c decayed, along with the associated changes in the related atomic constants, is rooted firmly in modern physics, with only one very reasonable basic assumption needed. I trust that this forthcoming paper will be accepted as contributing something to our knowledge of the cosmos.

Professor Skiff also refers to the comments by Dr. Wytse Van Dijk, who said that "If (t)his model is correct, then atomic clocks should be slowing compared to dynamical clocks." This has indeed been observed; in fact it is mentioned in our 1987 Report. There we point out that the lunar and planetary orbital periods, which comprise the dynamical clock, had been compared with atomic clocks from 1955 to 1981 by Van Flandern and others. Assessing the evidence in 1984, Dr. T. C. Van Flandern concluded that "the number of atomic seconds in a dynamical interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing with respect to dynamical phenomena ..." This is the observational evidence that Dr. Wytse Van Dijk and Professor Skiff required. Further details of this assessment by Van Flandern can be found in "Precision Measurements and Fundamental Constants II", pp. 625-627, National Bureau of Standards (US) Special Publication 617 (1984), B. N. Taylor and W. D. Phillips, editors.
In conclusion, I would like to thank Fred Skiff for his very gracious handling of the cDK situation as presented in Douglas Kelly's book. Even though the information on which it is based is outdated, Professor Skiff's critique is very gentlemanly and is deeply appreciated. If this example were to be followed by others, it would be to everyone's advantage.
Setterfield: The main thrust of these
comments is that if new work does not build on the bases established by general
and special relativity, then it must be an "improper" theory. I find this
attitude interesting as a similar view has been expressed by a quantum
physicist. This physicist has accepted and taught quantum electrodynamics (QED)
for many years and has been faithful in his proclamation of QED physics.
However, when he was presented with the results of the rapidly developing new
interpretation of atomic phenomena based on more classical lines called SED
physics, he effectively called it an improper theory since it did not build on
the QED interpretation. It did not matter to him that the results were
mathematically the same, though the interpretation of those results was much
more easily comprehensible. It did not matter to him that the basis of SED was
anchored firmly in the work of Planck, Einstein and Nernst in the early 1900's,
and that many important physicists today are seriously examining this new
approach. It had to be incorrect because it did not build on the prevailing
paradigm.
I feel that the above comments may perhaps be in a similar category. The referenced quote implies that this lightspeed work does "not have roots and branches reaching out, securing them into the other, more firmly established, theories of physics." However, I have gone back to the basics of physics and built from there. But if by the basics of physics one means general and special relativity, I admit guilt.

There is a good reason that I do not build on special or general relativity and use the types of equations those formalisms employ. In most of the work using those equations, the authors put lightspeed, c, and Planck's constant, h, equal to unity. Thus at the top of their papers, or implied in the text, is the equation c = ħ = 1. Obviously, in a scenario where both c and h are changing, such equations are inappropriate. Instead, what the lightspeed work has done is to go back to first principles and basic physics, such as Maxwell's equations, and build on that rather than on special or general relativity. This also makes for much simpler equations. Why complicate the issue when it can be done simply?

There is a further reason. With significant changes to c and h, it may mean that general relativity should be re-examined. A number of serious scientists have thought this way. For example, SED physics is providing a theory of gravity which is already unified with the rest of physics, and this approach employs very different equations from those of general relativity. A second example is Robert Dicke, who, in 1960, formulated his scalar-tensor theory of gravity as a result of observational anomalies. This Brans-Dicke theory became a serious contender against general relativity up until 1976, when it was disproved on the basis of a prediction. Note, however, that the original anomalous data that led to the theory still stand; the anomaly still exists in measurements today, and it is not accounted for by general relativity. A third illustration comes from 2002.
In this last year, over 50 papers addressing the matter of lightspeed and relativity have been accepted for publication by serious journals. These facts alone indicate that the last word has not been spoken on this matter. It is true that the 2002 papers have been tinkering around the edges of relativity, but the whole issue probably needs an infusion of new thinking in view of the changing values of c and the other anomalous data, despite the comments of Misner. For these three reasons I am reluctant to dance with the existing paradigm and utilize equations which may be an incomplete description of reality. I plead guilty, then, to not following the path dictated by relativity. But this does not necessarily prove that I am wrong, any more than SED physics is wrong, or that Brans and Dicke were wrong in trying to find a theory to account for the observational anomalies. On the basis of common sense and the history of scientific endeavor, I therefore feel that the "requirement" presented above may legitimately be ignored.
By neglecting the anomalies associated with the dropping speed of light values, and neglecting the anomalies associated with mass measurements of various sorts and the problems with quantum mechanics, those adhering to relativistic theory have left themselves open to the charge that relativity has become theory-based rather than observationally-based.

Thanks [to the second correspondent] for your summation of the situation, which is largely correct. The evidence does indeed suggest that h tends to zero and c tends to infinity as one approaches earlier epochs astronomically. At the same time, the product hc has been experimentally shown to be invariant over astronomical time, just as you indicated. These experimental results, the theoretical approach that incorporated them, and their effects on the atom were thrashed out in the 1987 Report. These ideas were subsequently developed further in Exploring the Vacuum, and later refined in Reviewing the Zero Point Energy. In this way the correspondence principle has been upheld from its inception, and I had thought that this part of the debate was substantially over. However, those unfamiliar with the 1987 Report would not be expected to know this and may wonder about its validity as a result.

As far as intra-system energy conservation is concerned, that issue was partly addressed in the 1987 Report, where it was shown that atomic processes were such that energy was conserved in the system over time, and in more detail in the main 2001 paper. The outcome was hinted at in the Vacuum paper in the context of a changing zero-point energy and a quantized redshift. However, another paper dealing with these specific matters is proposed. Finally, I, too, am dismayed by the hijacking of SED work by the new-agers, but that should not cloud the valid physics involved.
It was noted that the results of the variable c (Vc) research applied to the early universe might “indicate that high energy physics is governed by classical rather than quantum mechanics at extreme temperatures and densities.” In response, it is fair to say that I have not investigated that possibility. What this research is showing is that the basic cause of all the changes in atomic constants can be traced to an increase with time in the energy density of the Zero-Point Energy (ZPE). Thus the ZPE was lower at earlier epochs. This has a variety of consequences which are being followed through in this series of papers, of which the Vacuum paper is the first.

One consequence of a lower energy density for the ZPE is a correspondingly higher value for c, varying in inverse proportion, since the permittivity and permeability of space are directly linked with the ZPE. Another consequence concerns Planck’s constant h. Planck, Einstein, Nernst and others have shown mathematically that the value of h is a measure of the strength of the ZPE. Therefore, any change in the strength of the ZPE with time also means a proportional change in the value of h. The systematic increase in h which has been measured over the last century, as outlined in the 1987 Report, implies an increase in the strength of the ZPE. Thus the invariance of hc is also explicable. But a lower value for h means that quantum uncertainty was also less in those epochs. This in turn means that atomic particle behaviour was more classical in the early days of the cosmos. This result seems to be independent of the temperature and density of matter, but does not deny the possibility of other effects.

The final matter that the Vacuum paper addresses is the cause of the increasing strength of the ZPE. The work of Gibson allows it to be traced to the turbulence in the ‘fabric of space’ at the Planck length level induced by the expansion of the cosmos.
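The stated proportionalities can be put in a few lines of code. This is a toy illustration only, under the model's own claims (h scales with ZPE strength, c scales inversely), showing why the product hc comes out invariant:

```python
H_NOW = 6.62607015e-34   # J*s, present value of Planck's constant
C_NOW = 299_792_458.0    # m/s, present value of lightspeed

def constants_at(zpe_ratio):
    """zpe_ratio = ZPE strength at some epoch / ZPE strength now.
    The model's claim: h scales with the ZPE, c scales inversely."""
    h = H_NOW * zpe_ratio
    c = C_NOW / zpe_ratio
    return h, c

# At a hypothetical early epoch with one-millionth the present ZPE strength:
h_early, c_early = constants_at(1e-6)
# h is smaller and c is larger, but the product hc is unchanged:
invariant = h_early * c_early   # equals H_NOW * C_NOW
```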
In their paper, Lieu and Hillman “place stringent limits on the
graininess of time.” Elucidation of the rationale behind their methodology
comes in the first sentence, namely “It is widely believed that time ceases
to be exact at intervals [less than or equal to the Planck time] where quantum
fluctuations in the vacuum metric tensor renders General Relativity an
inadequate theory.” They then go on to demonstrate that if time is ‘grainy’
or quantised, then the frequencies of light must also be quantised since
frequency is a time-dependent quantity. Furthermore, they point out that quantum
gravity demands that “the time t of an event cannot be determined more
accurately than a standard deviation of [a specific form]…” This form is
then plugged into their frequency equations which indicate that light photons
from a suitably distant optical source will have their phases changed randomly.
But interferometers take two light rays from a distant astronomical source along
different paths and converge them to form interference fringes. They then
conclude “By Equ. (11), however, we see that if the time quantum exists the
phase of light from a sufficiently distant source will appear random – when
[astronomical distance] is large enough to satisfy Equ. (12) the fringes will
disappear.” This paper, and their subsequent one, both point out that the
fringes still exist even with very distant objects. The conclusion is that time
is not ‘grainy’, in contradiction to quantum gravity theories. This result is a
serious blow to all quantum gravity theories and a major re-appraisal of their
validity is needed as a consequence. Insofar as these results also call into
question the very existence of space-time, upon which all metric theories of
gravity are based, then considerable doubt must be expressed as to the reality
of this entity. However, this is not detrimental to the SED approach, since gravity is already a unified force in that theory. It was in an attempt to unify gravity with the other forces of nature, including quantum phenomena, that quantum gravity was introduced. By contrast, SED physics presents a whole new view of quantum phenomena and gravity, pointing out that both arise simply as a result of the “jiggling” of subatomic particles by the electromagnetic waves of the Zero-Point Energy (ZPE). Since it is this ZPE jiggling that gives rise to uncertainty in atomic events, the uncertainty is not traceable either to uncertainty in other systems or to an intrinsic property of space or time. This point was made towards the close of my Journal of Theoretics article “Exploring the Vacuum”. As a consequence, it becomes apparent that time is not quantised on this Vc approach.

Ragazzoni, Turatto and Gaessler use more recent data to
reinforce the original conclusions of Lieu and Hillman. These latter two then
expand on their 2002 approach in their 2003 paper. Here is the key point: in order to obtain an uncertainty
in distance, they multiply the uncertainty in time by the speed of light. If
there is no uncertainty in time, as the Vc model indicates, then the equations
used by Lieu and Hillman cannot be employed to discover if there is any
uncertainty in distance at the Planck length. Furthermore, the final statement
in their 2003 Abstract is open to the same objection. However, there are other ways of determining whether or
not space itself is ‘grainy’ at the Planck length level. If metric theories of
gravity have any validity at all, and the work of Lieu and Hillman has cast
serious doubt on this, then an approach suggested by Y. Jack Ng and H. van Dam
may soon provide observational evidence for the existence of the graininess of
space-time; they outline this in their Abstract. A different approach has been adopted by Baez and Olson,
which suggests that wave fluctuations the size of the Planck length are the only
ones expected to exist if the fabric of space exhibits graininess at that scale.
As a result, such graininess will be undetectable to gravitational wave detectors.

The outcome from this discussion is that the granular structure of space is still a very viable option when the SED approach is followed through, as it is in the Vc model. The anonymous reader’s final two paragraphs therefore draw incorrect conclusions. However, if, as on the Vc model, a decrease in the value of Planck's constant can also be construed as a decrease in the uncertainty of time, then part of the problem raised by these Hubble telescope observations may be overcome. If the decrease in the uncertainty of time at the inception of the cosmos is followed through, this may provide an answer to the problem that these observations pose for theories of quantum gravity. Thus the graininess of space is not called into question in the Vc approach. (April 3, 2003 updated)

The theoretical basis for these experiments is the expected fuzziness or granularity of space and time that emerges from theories attempting to meld general relativistic concepts of gravity with those of quantum mechanics. The respective papers by Lieu and Hillman, and by Ragazzoni et al., have concentrated on the expected fuzziness or graininess of time. They deduced that if such a quantum granularity of time really exists, there will be a smearing of light photons from a sufficiently distant source, giving slightly blurry pictures of very distant astronomical objects, the blurriness increasing with distance. As it turns out, the Hubble Space Telescope pictures of the most distant objects are sharp, not blurry. This may call into question the whole concept of quantum gravity. However, the newly developing branch of SED physics has a completely consistent approach to gravity that is already unified with the rest of physics, and therefore does not need “unifying” in the way that quantum gravity theories attempt to do.
On this basis, the HST images of distant objects support the SED approach rather than the quantum gravity approach. Furthermore, these results are not detrimental to the
variable speed of light (Vc) model. On this approach, quantum uncertainty was
getting less the further back into the past we go. This uncertainty is given by
Planck’s constant, h. At the inception of the cosmos, h was very much smaller
than it is now. Since the units of Planck’s constant are energy multiplied by
time, this means that the uncertainty in time was very much less (of the order
of 1/10
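The scaling being invoked here can be sketched with the energy-time uncertainty relation. The relation Δt ≥ ħ/(2ΔE) is standard physics, but applying it with an epoch-dependent h is this model's assumption, so the figures below are purely illustrative:

```python
import math

H_NOW = 6.62607015e-34  # J*s, present value of Planck's constant

def min_time_uncertainty(delta_E_joules, h=H_NOW):
    """Energy-time uncertainty: dt >= hbar / (2 * dE), with hbar = h / (2*pi)."""
    hbar = h / (2 * math.pi)
    return hbar / (2 * delta_E_joules)

# For a fixed energy uncertainty, halving h halves the minimum time uncertainty:
dt_now = min_time_uncertainty(1e-19)
dt_early = min_time_uncertainty(1e-19, h=H_NOW / 2)
# dt_early is exactly half of dt_now, so a much smaller early h
# would imply a proportionately smaller uncertainty in time.
```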
This situation does not apply to emitted light traveling through a cosmos where the ZPE is changing. In this case, an infinitely long beam or a photon wavetrain is traveling through the vacuum while the energy density of the vacuum is smoothly increasing everywhere at once. This means that all parts of the beam or wavetrain are slowing SIMULTANEOUSLY, so there is no bunching-up effect: every part of the beam or wavetrain is traveling with the same velocity. A similar situation would exist with cars on a highway if all cars were simultaneously slowing at the same rate. The distance between the cars would remain constant, but the number of cars passing any given point per unit of time would lessen in proportion to the speed of the traffic stream. For that reason, in the lightspeed case, wavelengths remain fixed in transit, while the frequency, the number of waves passing a given point per unit time, drops in proportion to the rate of travel. Therefore, in a situation with a cosmologically changing ZPE, the frequency of light is lightspeed dependent, while the wavelength remains fixed. It was the experimental proof of this very fact that was being seriously discussed by Raymond T. Birge in Nature in the 1930’s.

Another consideration applies here also. The equation for the energy E of a photon of light is E = hc/λ. Since the product hc is invariant and the wavelength λ remains fixed in transit, the energy of a photon in flight is unchanged even as c varies.
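The traffic analogy can be put in symbols. A sketch under the model's assumptions (wavelength fixed in transit, frequency f = c/λ, and h varying as 1/c so that hc is unchanged), with hypothetical epoch values:

```python
H_NOW = 6.62607015e-34   # J*s, present value of Planck's constant
C_NOW = 299_792_458.0    # m/s, present value of lightspeed

def photon_in_transit(wavelength_m, c_factor):
    """At an epoch where c = c_factor * C_NOW, the model takes h = H_NOW / c_factor.
    The wavelength stays fixed; frequency f = c / wavelength scales with c;
    energy E = h * f = (h * c) / wavelength is unchanged."""
    c = C_NOW * c_factor
    h = H_NOW / c_factor
    f = c / wavelength_m
    E = h * f
    return f, E

# 500 nm light at present c and at a hypothetical epoch with c ten times higher:
f_now, E_now = photon_in_transit(500e-9, 1.0)
f_early, E_early = photon_in_transit(500e-9, 10.0)
# The frequency is ten times higher at the early epoch, but the photon energy
# is identical, since both hc and the wavelength are unchanged.
```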
However, the applicability of Maxwell’s equations is
also called into question here. It is occasionally mentioned that these
equations imply a constant speed of light in the vacuum, and any variation
elsewhere is treated on the basis of a changing refractive index of the
medium concerned. As noted above, this approach is inappropriate for the
situation considered in this paper. Since Maxwell’s equations seem to imply
a constant value for c, those equations also need re-examination when c varies. Since all atomic processes are linked to the behaviour of the ZPE, as is lightspeed, they all change in step as the ZPE changes.

There is a problem which needs to be mentioned in closing; a problem which underlies much of the difficulty some are having with the work presented on these pages. Physics currently seems to have reversed a sequence which should not have been reversed, and in doing so has made several wrong choices in the latter part of the twentieth century. The choices underlying the reviewer's criticisms have to do with the permeability of space, a mistaken idea about frequency in the behavior of light, and the equations of Lorentz and Maxwell.

As mentioned in point 1, permeability was related to the speed of light early in the twentieth century, but was later divorced from it and declared invariant. It was invariant by declaration, not by data, and this is the first backwards move which has influenced the reviewer's thinking here.

Secondly, it has become accepted that the frequency of light is the basic quantity and that wavelength is subsidiary. Until about 1960 it was the wavelength that was considered the basic quantity for measurement. However, since it had become easier to measure frequency with greater accuracy, the focus shifted from wavelength to frequency, relegating wavelength to a subsidiary role. The data dictate something else, however: it is the wavelength which remains constant and the frequency which varies when the speed of light changes. This point was made plain by experimental data from the 1930’s, and was commented on by Birge himself.

In a similar way, although both Lorentz and Maxwell formulated their equations before Einstein adopted and worked with them, it has become almost required to derive the formulas of both in terms of Einstein’s work. Properly done, it should be the other way around, and the work of both earlier men should be allowed to stand alone without Einstein’s imposed conditions.
One final note: in the long run, it is the data which must determine the theory, and not the other way around. There are five anomalies cosmology cannot currently deal with in terms of the reigning paradigm. These are easily dealt with, however, when one lets the data go where it will. The original data are in the Report. As given in my lectures, the anomalies concern measured changes in Planck's constant, the speed of light, changes in atomic masses, the slowing of atomic clocks, and the quantized redshift. Modern physics seems to show a preference for ignoring much of this in favor of current theories. That is not the way I wish to approach the subject. The common factor for solving all five anomalies is an increase through time of the Zero Point Energy, for reasons outlined in "Exploring the Vacuum." The material has also been updated in Reviewing the Zero Point Energy.