Hey everyone. I know I haven’t updated on here in quite a while. I’ve been meaning to, but I’ve been carried away in my studies. Recently I ordered a large shipment of books on quantum mechanics, general relativity, cosmology, biological anthropology, history, and economic history, so I’ve been busy reading them.

I also recently acquired an encyclopedia-ish set called “The World of Physics”, a collection of papers and excerpts written by the “founding fathers” of various topics in physics. The first volume starts off with myths and early legends about the world: excerpts taken from religious texts and other myths and legends, explaining how early cultures viewed the world.

Next we get to the early astronomers, such as Copernicus and Kepler, and their journey of charting the heavens. We move on to Galileo and Newton and early thought on mechanics, then to early theories of atoms and energy, with authors like Robert Hooke, Daniel Bernoulli, John Dalton, Amedeo Avogadro, Michael Faraday, and others. Next we find thermodynamics, heat, and entropy, and later, electromagnetism.

It’s so neat hearing how they discovered these things and what they were thinking. A lot of them had various details wrong but had the general idea correct. It’s a lot more detailed than what you find in any textbook on physics or chemistry, or even what you find online. There’s nothing like hearing Leibniz debate Descartes on various aspects of motion, such as which quantity is more important: Descartes’ mass × velocity (momentum), or Leibniz’s mass × velocity², which he labeled vis viva, or “life force”.

They were debating what kept the universe from “running down” and wondering if the motion of various things was conserved or not.

The next volume contains famous physicists’ perspectives on Radioactivity, Special Relativity, General Relativity, Quanta, Space-Time Symmetry, and Particle Physics. Some of the featured authors include Pierre and Marie Curie, Ernest Rutherford, Enrico Fermi, Otto Hahn, Henri Poincaré, Albert Einstein, Hermann Minkowski, Ernst Mach, Max Planck, Erwin Schrödinger, Werner Heisenberg, Wolfgang Pauli, Paul Dirac, Richard Feynman, and others.

The last volume pertains entirely to the cosmos and the limits of science. It has papers written by Einstein, Steven Weinberg, George Gamow, Arthur Eddington, Stephen Hawking, and others, discussing the origin of matter, stars, black holes, and more.

To say the least, I’ve never been more thrilled in my life. I can’t think of anything better than sitting here reading these guys’ works. It’s all absolutely amazing. Most of the excerpts are from the authors’ nontechnical writings, so it’s not mathematically intense. It’s them trying to help you grasp the big picture. It’s like having the best of the best sit there personally in your bedroom and explain everything to you.

Just to give you guys a taste of the quality of this material, I’m going to take a little time and type out an excerpt written by Percy Williams Bridgman on statistical mechanics and the second law of thermodynamics. The article covers a lot of things, including entropy, the heat death of the universe, and the arrow of time. Unfortunately I’m too lazy to transcribe the whole thing, but I will take 30 minutes or so and type out a section of it.

I should probably give a brief primer on what you’re about to read, considering you won’t be able to read the entire article.

We all know that if you place a pot of water on a stove’s hot burner, the vibrations of the atoms in the burner start to cause vibrations in the pot, which then transfer to the water, and the water begins to heat up and eventually boil. But wouldn’t it be strange to put a pot of water on a red hot burner, then watch the water freeze to a solid block of ice? According to some, that’s actually possible: temperature is simply random vibration of atoms, and the random vibrations could, if everything were situated just right, transfer energy from the cold body to the hot body instead of the other way around. That heat tends to flow from a hot body to a cold body, until they balance out to an equilibrium temperature, is only the most probable outcome.
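You can actually see this idea in a few lines of code. Here’s a little toy model I sketched in Python (my own illustration, not from the book): energy quanta hop at random between the oscillators of a “hot” block and a “cold” block, with no rule anywhere forcing heat to flow downhill.

```python
import random

def equilibrate(n_hot, n_cold, quanta_hot, quanta_cold, steps, seed=1):
    """Toy statistical model: energy quanta hop at random between
    oscillators of a hot block and a cold block.  Nothing in the update
    rule prefers either direction; energy drifts hot -> cold only because
    shared-out configurations vastly outnumber lopsided ones."""
    rng = random.Random(seed)
    cells = ([quanta_hot // n_hot] * n_hot +      # hot block oscillators
             [quanta_cold // n_cold] * n_cold)    # cold block oscillators
    for _ in range(steps):
        donor = rng.randrange(len(cells))
        while cells[donor] == 0:                  # donor must hold a quantum
            donor = rng.randrange(len(cells))
        receiver = rng.randrange(len(cells))
        cells[donor] -= 1                         # move one quantum at random
        cells[receiver] += 1
    return sum(cells[:n_hot])                     # energy left in the hot block

# Hot block starts with all 500 quanta; after many random exchanges it
# holds roughly its fair share (~250).  A spontaneous reversal is allowed
# by the rules -- it's just wildly improbable.
final_hot = equilibrate(50, 50, 500, 0, steps=20_000)
```

Run it and the hot block ends up near half the energy, even though a run where every quantum happens to hop back into it is perfectly legal; it just essentially never happens.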

Physicists model all the little molecules and atoms mathematically, assume they exist in some random configuration and move in random ways, and use statistics and probability to work out the most probable outcome consistent with the laws of physics. But when we look into this issue closely there are a lot of questions to be asked, and who better to walk us through them than a Nobel laureate whose research specialized in these areas? This is probably the most fascinating thing I’ve read in a long time. Here it is. The bolded emphases are my own. Please pardon any typos you encounter; it’s 1 AM and I’m tired:

“Now it is a consequence of the fundamental assumptions which have gone into the usual statistical model, namely that all elementary configurations are entirely independent of each other, so that the probability of any configuration is to be calculated by purely combinatorial methods from the relative number of ways in which the configuration can be realized, that there is some chance of the occurrence of any configuration, no matter how unusual its properties. This would mean that in the corresponding physical system any configuration whatever, compatible with the fixed conditions, would occur occasionally, as for example, the gas in a box will occasionally automatically all collect itself into one end. This conclusion is indeed taken literally by many experts in statistical mechanics, and in the literature statements are not uncommon, such, for example, as that of Bertrand Russell in a recent magazine article that if we put a pail of water on the fire and watch it for an indefinite time, we shall eventually be rewarded by seeing it freeze. It seems to me that there are a couple of objections that can be made to the conventional treatment of rare occurrences, which I shall now examine.

The first difficulty is with the technical method of calculating the chances of observing a rare configuration, and is concerned only with the model itself, and not with the physical application of the results of the calculations. In computing the chance of any configuration, it is always assumed that the elements of the statistical model are without influence on each other, so that the chance of a given configuration is given merely by enumerating the number of complexions corresponding to the given configuration. For example, in the kinetic theory of gases it is assumed that the location of any molecule and its velocity is, except for the restriction on the total energy and the total volume, independent of the location or the velocity of any or all the other molecules. It may be proper enough to postulate this for the model, but we know that it cannot rigorously correspond to the physical system, for the molecules of gas do interact with each other, as shown by the mere fact that they conserve their total energy, and the transmission of energy from one molecule to another takes place only at a finite rate, so that if, for example, at one instant all the velocity were in a single molecule, we would find that immediately afterward only molecules in the immediate vicinity had any velocity. This means that the assumption of complete independence must be recognized to be only an approximation, and some way of handling this approximation must be devised. The method usually adopted is to cast the problem in the form of inquiring how many observations must be made in order that the chance of observing the desired rare configuration may be one-half, for example, choosing the time between observations so long that at each observation all appreciable trace of the previous configuration shall have been obliterated, so that the assumption of independence may apply.
The point now is this: the time that one has to wait for the probable obliteration of all traces of a previous configuration becomes longer the rarer the previous configuration; obviously it takes longer for a gas to efface all trace of having been all concentrated in one-half of its available volume than to efface the traces of a small local concentration. The situation is, therefore, that not only must we make an increasingly large number of observations in order to hope to witness a rare configuration, but the interval between our observations must also get longer. It is merely the first factor which is usually considered; when both factors are considered it is not at all obvious that the process is even convergent. This point should be subjected to further examination.

There is another difficulty connected with the mere calculation of the probability of rare occurrences presented by the quantum theory. All classical calculations assume that the molecules have identity. But the uncertainty principle sets a limit to the physical meaning of identity. It is not possible to observe the position and velocity of any molecule with unlimited precision, but there is a mutual restriction. After an observation has been made, the domain of uncertainty in which the molecule is located expands as time goes on. If the domains of uncertainty of two molecules overlap, then the identity of the molecules is lost, and a subsequent observation will not be competent to decide which molecule is which. The only way of maintaining the identity of the molecules is by making observations at intervals so frequent that the domains of uncertainty have not had time to overlap. But this time is obviously much shorter than the time between observations demanded by the requirement that all trace of the previous configuration shall have been wiped out. Furthermore, the act of observation, by which the concept of identity acquires meaning, alters in an uncontrollable and unpredictable manner the motion of the molecules, whereas the statistical treatment requires that the molecules be undisturbed between successive observations. It seems, therefore, that the physical properties of actual molecules as suggested by quantum theory are different from those of the molecules of the model, and this would seem to demand at least designing new models for calculating the chances of rare occurrences.

Apart from these objections, which may be met by the discovery of new theoretical methods of attack, it seems to me that the most serious difficulty with this question of rare states is met in the process of transferring to any actual physical system conclusions based on a study of the corresponding model. Suppose, for example, that we are discussing the problem of the tossing of some particular coin. If the coin is a fair coin, that is, if the chances of heads and tails are even, then our statistical model consists merely of a sequence of one or the other of two events, each of which is as likely to occur at any time as the other, absolutely independently of what may have happened elsewhere in the sequence. The theoretical discussion of this model is very easy, and we are convinced that conclusions drawn from a discussion of the model will apply to the tossing of the coin, always provided that the coin is a fair coin. As a particular problem we may consider the chance of throwing heads ten consecutive times. The chance of this is (1/2)^10 or 1/1024, which means that in every 1,000 consecutive throws the chances will be roughly even that there will be somewhere a sequence of 10 heads. But 1,000 throws are a good many, and it may be that we have never made so many throws, and are content merely to make the prediction that if some one else should make so many throws it would be found to be as we say. But suppose that some one questions the fairness of the coin, and says that he has reason to think that there is a bias of 10% in favor of throwing tails, so that the chance of a head at a single throw is only 0.45 instead of 0.50. We find now on making the calculation that we shall have to make roughly 10,000 throws in order to have an even chance of getting a sequence of 10 heads; and, in general, that slight imperfections in the fairness of the coin make very large differences in the chance of rare occurrences.
In view of this, we feel that it behooves us to make some objective test of the fairness of the coin before we venture to publish our prediction that we are likely to get a sequence of 10 heads in 1,000 throws. We make the most direct test possible by appealing to the fundamental definition of fairness, which is that in a large number of throws the ratio of the number of heads to tails tends to equality. But how many throws are necessary to establish such an equality with satisfactory assurance? There is another theorem here, namely that in n throws the chances are even that we shall have an excess either of heads over tails or of tails over heads of 0.6745n^(1/2). Neglecting the numerical factor for our rough purposes, this means that if we make a hundred throws the chances are nearly even that the number of heads is somewhere between 46 and 54. To establish the fairness of the coin we would have to make a considerable number of sets of 100 throws at a time and observe whether or not the number of heads clusters between 46 and 54. If, on the other hand, there is a 10% bias in favor of tails, the number of heads will cluster between 40 and 50. The precise number of sequences of 100 throws at a time necessary to convince us that there is no 10% bias in favor of tails obeys no definite criterion, but it is certainly of the order of ten or more, which makes 1,000 or more throws altogether. But this was the number of throws necessary to obtain one of the rare sequences of ten heads.

The conclusion from all this is plain; in order to establish with sufficient probability that the actual physical system has those properties which are assumed in estimating the frequency of rare occurrences it is necessary to make a number of observations so great that the probability is good that the rare occurrence has already been observed. In other words, purely logical statistical considerations never can justify us in predicting events so rare that they have never yet been observed. A pail of water has never been observed to freeze on the fire; statistical considerations give us no warrant whatever for expecting that it ever will.”

Impressed? I was. Imagine this same guy discussing the heat death of the universe, and much more! And no, you can’t have my set. Get your own!
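By the way, Bridgman’s coin numbers are easy to check today. Here’s a short Python sketch of mine (not from the article) that computes exactly, by dynamic programming, the chance of seeing a run of ten heads somewhere in n tosses, for a fair coin and for his 10%-biased one:

```python
def prob_run(n, run_len, p):
    """Exact probability of at least one run of `run_len` consecutive
    heads in `n` tosses of a coin that lands heads with probability p."""
    # state[k] = probability that no run has occurred yet and the
    # current streak of heads has length k (k < run_len)
    state = [0.0] * run_len
    state[0] = 1.0
    hit = 0.0  # probability the run has already appeared
    for _ in range(n):
        new = [0.0] * run_len
        for k, pr in enumerate(state):
            if pr == 0.0:
                continue
            if k + 1 == run_len:
                hit += pr * p            # this head completes the run
            else:
                new[k + 1] += pr * p     # this head extends the streak
            new[0] += pr * (1 - p)       # a tail resets the streak
        state = new
    return hit

fair = prob_run(1000, 10, 0.50)    # chance with a fair coin
biased = prob_run(1000, 10, 0.45)  # chance with Bridgman's 10% tails bias
```

The fair-coin figure comes out a bit under one half (the naive 1000/1024 estimate double-counts overlapping runs), but Bridgman’s real point holds up: the small bias slashes the probability of the rare sequence, so certifying the coin fair enough takes about as many throws as witnessing the rare run itself.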

The most sober topic, as this article refers, in the philosophy of science is the correspondence model of atoms, waves, force, and energy. The true scope of the unifying functions must include relativity and symmetry to reflect the modern concept of physics. That all depends on the atomic topological function, which has now been developed as the picoyoctometric, 3D, interactive video atomic model imaging function. It is designed in terms of chronons and spacons for exact, quantized, relativistic animation.

The atom’s RQT (relative quantum topological) data point imaging function is built by combination of the relativistic Einstein-Lorenz transform functions for time, mass, and energy with the workon quantized electromagnetic wave equations for frequency and wavelength. The atom labeled psi (Z) pulsates at the frequency {Nhu=e/h} by cycles of {e=m(c^2)} transformation of nuclear surface mass to forcons with joule values, followed by nuclear force absorption. This radiation process is limited only by spacetime boundaries of {Gravity-Time}, where gravity is the force binding space to psi, forming the GT integral atomic wavefunction. The expression is defined as the series expansion differential of nuclear output rates with quantum symmetry numbers assigned along the progression to give topology to the solutions.

Next, the correlation function for the manifold of internal heat capacity particle 3D functions is extracted by rearranging the total internal momentum function to the photon gain rule and integrating it for GT limits. This produces a series of 26 topological waveparticle functions of the five classes; {+Positron, Workon, Thermon, -Electromagneton, Magnemedon}, each the 3D data image of a type of energy intermedon of the 5/2 kT J internal energy cloud, accounting for all of them.

Those energy data values intersect the sizes of the fundamental physical constants: h, h-bar, delta, nuclear magneton, beta magneton, k (series). They quantize nuclear dynamics by acting as fulcrum particles. The result is the picoyoctometric, 3D, interactive video atomic model data point imaging function, responsive to keyboard input of virtual photon gain events by relativistic, quantized shifts of electron, force, and energy field states and positions.

Images of the h-bar magnetic energy waveparticle of ~175 picoyoctometers are available online at http://www.symmecon.com with the complete RQT atomic modeling guide titled The Crystalon Door, copyright TXu1-266-788. TCD conforms to the unopposed motion of disclosure in U.S. District (NM) Court of 04/02/2001 titled The Solution to the Equation of Schrodinger.

Thanks for your comment. Your group’s research looks fascinating. Right now my physics skill level is entry-level quantum mechanics, though with my recent shipment of books that should change within the next year or so 🙂 Hopefully by then I’ll be able to fully understand your model and software. Maybe we could share some correspondence on physics-related subjects, or anything else that’s interesting!