Metrication in England: An Analysis

Disadvantages of Metric

Metrication is used by businesses, through sharp practice, as a means of cheating the consumer by supplying goods under-weight: the consumer does not understand the system of weights and measures being used, which is inherently confusing and obscure, with its profusion of similarly named units and its division into parts per thousand.

Traditional units of weight are never subdivided more finely than into sixteenths (not thousandths), and the traditional units do not have confusingly similar names that are easily mistaken for one another.

Metrication as a cloak for fraud is commonly encountered: there is widespread abuse of the regulations, which in practice are used (wherever there is no comparison with traditional units) to deceive the consumer about the quantities being sold, rather than to provide the supposed clarity.

This is also a fundamental breach of human rights: a deprivation of freedom of choice. Genuine freedom of choice must include freedom of contract: the freedom of the parties to the contract to choose for themselves the units of weights and measures used in buying and selling the goods or services involved. Then the parties would be free to choose units which they understand.

There is no justification for imposing a system of weights and measures that one party to the contract does not understand: the regulations never stipulate what language the contract must be in, so the French consumer is not hamstrung by being required to do business in English; he is free to choose to use the French language in the contract. Yet the English consumer is forced to do business in a foreign language, by being compelled to use a French system of weights and measures which he does not understand.

The principle of freedom of contract is already accepted by the European Commission: it agrees that the parties shall be free to choose for themselves the terms of the contract, on the question of whether to use English or some other language in writing the contract. Yet the parties are currently not free to choose the units of weights and measures to use in it. The logic of the Commission’s argument is that French must be banned, and all contracts must henceforth be made in English-only, because only one exclusive system can be permitted. If the units of trade must be exclusive, so must the language be: that is the logical conclusion to the metric-only argument, so far as it relates to trade.

Further, because British law wrongfully imposes criminal penalties for non-use of the metric system – wrongfully, in that the European Directive, which addresses only matters of contract law, not criminal law, does not require this – it becomes of paramount importance that the consumer genuinely understands the units used.

A consumer, facing a normal criminal charge in a court of law, must, by European law, be charged in, and questioned in, a language which he understands. But an English court, applying English law, on a European charge of non-use of metric units, is apparently under no such obligation: an English consumer can be prosecuted merely because he does not understand the metric language, and commits an offence merely by not understanding it. The offending units are not translated into units which he can understand: which, by European law, is a fundamental breach of his right not to be charged or questioned except in a language which he in fact understands.

There is no justification for imposing a European system of weights and measures on the consumer, where there is no element of cross-border trade involved. All regulation of trade by the Commission must be ended, wherever it relates to a sale of goods or services, by a seller, to a purchaser in the same country. Such matters are exclusively for the individual countries to decide: the Commission has no role. No cross-border trade occurs, so no barrier to such trade is involved in the transaction. There is therefore no objective of trade policy to be met, if trade policy is genuinely about reducing barriers to cross-border trade.

In practice, the existence of a trade policy is commonly used as a pretext for banning any practice the Commission disapproves of, regardless of whether that practice has any bearing on cross-border trade. That, too, is sharp practice.

Posted in Weights and Measures

Science – Time and Gravity

Time slows down as gravity increases (i.e. the rate at which time passes depends upon the local strength of the gravity field, with time passing more slowly at greater field strength).

Time slowing as one approaches the center of mass is a peculiar phenomenon: it implies that motion is reduced, which in turn implies that inertia is increased, ultimately becoming so strong that all motion is impossible.

This seems to contradict my existing theory that inertia is reduced in the direction of the mass. Hence it is necessary to consider whether the two theories can be reconciled.

Is the slowing of motion compatible with the greater compression of spacetime nearer to the mass? Can an effect other than inertia account for the decreased freedom of motion? Can the compression of the field lines account for a reduction in rotational motion, i.e. spin?

Velocity increases on approach to the mass, because inertial resistance to (linear) motion reduces. Yet the resistance to (rotational) motion increases in that circumstance. Is there any merit in the idea that the altered configuration/geometry of spacetime is obeying Newton’s law of conservation of momentum, by converting the particle’s rotational motion into linear motion? In other words, is rotational motion reducing because it is being bled off to fuel the increase in linear motion?

Is it possible that rotational motion is being hampered by the fact that inertia increases in the direction away from the mass? As the particle rotates, it spins toward the mass for half its spin, but spins away from the mass for the other half of its spin. Is its rotation being impeded in the 2nd half of its spin by the same forces which are easing that spin in the 1st half? Is it losing the ability to spin, because it has to fight the inertial gradient when spinning? That would account logically for the different responses of its linear and rotational motions to identical structural conditions.

Indeed, it (the slowing) suggests that time is a product of the particle’s rotational motion (more simply, that time is a consequence of motion). The implication is that spin causes time, if we view time as simply the existence of cause-and-effect. If a particle has no motion, it has no capacity to interact with other particles, i.e. in chemical or atomic reactions, without which capacity it cannot change its state, hence events cannot occur.

On Earth, time runs (slightly) faster as altitude (i.e. distance from the Earth’s centre of mass) increases. In a house, time runs (slightly) slower on the ground floor than on the upper floors.
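
For scale, the standard weak-field formula for gravitational time dilation (quoted for reference, independently of the argument above) puts the fractional rate difference between two clocks separated by a height difference Δh at roughly gΔh/c². A minimal sketch:

```python
# Fractional rate difference between two clocks at different heights in Earth's
# gravity, using the standard weak-field approximation delta_f / f ~ g * dh / c^2.
g = 9.81        # surface gravity, m/s^2
c = 2.998e8     # speed of light, m/s

def rate_difference(delta_h_m):
    """Fractional amount by which the higher clock runs fast."""
    return g * delta_h_m / c**2

print(rate_difference(3.0))      # one storey (~3 m): about 3e-16
print(rate_difference(8848.0))   # height of Everest: about 1e-12
```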

As you approach a Black Hole, time runs slower. This is because gravitational strength increases as you approach. At the event horizon, time stands still.

Your point-of-view may literally depend upon whether you are observing the event from the event horizon, or from a distance. In the former case you will not observe the effect, because you will be participating in it.

This implies that no physics is occurring within a Black Hole, because within the Event Horizon time has ceased to be: cause-and-effect has been suspended by the strength of the local gravity, which prevents the normal sub-atomic processes (that govern cause-and-effect) from running.

An analogy is a deep-freeze: time has been frozen, because all motion has been frozen. Motion on a sub-atomic level ceases entirely: presumably being impossible due to the immense strength of the gravitational attraction, which has the effect of binding the particles present to one another.

Alternatively, it may be that the sub-atomic spaces in which the processes of cause-and-effect normally occur are filled with the collapsed debris of the super-compressed particles, making it impossible for those processes to happen.

In other words, the impossibility of any motion is a simple consequence of the density of matter/energy within the event horizon: the space required for normal interactions, which cause-and-effect involves, is occupied by particle debris.

In theory, mathematics implies that density reaches infinity within the event horizon: this may simply mean that all the spaces (those tiny cells of which spacetime is composed at the Planck-length) are occupied by the particle debris: such that all space is completely filled — that it is literally impossible to compress matter/energy further.
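
For a sense of the scale such a "completely filled" state would imply, the standard Planck density (one Planck mass per cubic Planck length) is, purely for reference:

\[
\rho_P = \frac{m_P}{l_P^{\,3}} \approx \frac{2.18\times10^{-8}\ \mathrm{kg}}{(1.62\times10^{-35}\ \mathrm{m})^{3}} \approx 5\times10^{96}\ \mathrm{kg\,m^{-3}}.
\]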

Time may have ceased to exist only in the sense that the particles which give rise to cause-and-effect have ceased to exist.

Motion, in a sea of undifferentiated energy, might still exist: with energy continuing to propagate at the speed of light, but trapped within the event horizon by the fact that space has been curved to so great an extent that it has folded back upon itself, forming a complete circle.

Gravity at this strength overwhelms all other fundamental forces. Because the density of matter present at the sub-atomic level has exceeded a critical value, the fundamental forces which normally operate to maintain a minimum separation between particles (e.g. Pauli’s exclusion principle) are overwhelmed: their ‘push’ force pushing outwards is less strong than the ‘pull’ attraction of gravity pulling inwards.

This implies that final collapse into a Black Hole will be swift, perhaps instantaneous, once the effect of Pauli’s exclusion principle is negated (at a critical mass density): there is then nothing to prevent a complete collapse to what we term a ‘singularity’ (possibly a misnomer, since it is well established that the radius of a black hole is rarely so small, but a singularity may be simply the initial state from which black holes grow).

But for Pauli’s principle, millions of times the normal number of particles could be fitted into the space which one atom ordinarily occupies. Where gravitational collapse occurs, and Pauli’s exclusion principle is overwhelmed, the particles most likely cease to be particles in any real sense, but are reduced to the energy out of which the former particles were built.

At the immensely tiny scale of the Planck length, the resulting sea of energy (now undifferentiated energy) still exists: it does not all disappear into “a mathematical point, with no dimensions”, not really, although to our limited senses this might appear to be what’s happening.

In actuality, ordinary matter normally comprises vast empty spaces: the nucleus of each atom in a seemingly solid object occupies only a minute fraction of the atom’s volume (a degree of emptiness often likened to the spacing of the stars in a galaxy), such that if you took away the electromagnetic forces and exclusion effects which normally keep the atoms apart, there would be room for billions of nucleons to be packed in where only one can exist under normal conditions.

The gravitational forces within the Black Hole in effect do exactly that: the gravity negates all the ordinary forces, and permits billions of nucleons to be stacked up (just as there is room between one star and the next to pack in billions of stars, if the normal laws of physics were to be suspended).

As additional mass is attracted by gravity and falls into the black hole, the amount of mass present increases, the strength of local gravity increases, the radius of the event horizon increases, and the radius of the “singularity” also increases.

Logic dictates that the singularity must grow in size, albeit slowly, as additional mass is added to it. Matter is in effect being compressed until it occupies only a tiny fraction of its normal volume of space, but it must still go somewhere. It cannot cease to exist: it must add to the volume of the singularity.

In ordinary matter, the gravitational force between particles is vastly weaker than the electromagnetic force: for a pair of protons the ratio is roughly one part in 10^36. One implication of this is that, for gravity to overcome the electromagnetic force, the concentration of matter present must be increased by a correspondingly enormous factor.
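
The standard constants give that ratio directly; for two protons (a textbook figure, quoted only for scale):

\[
\frac{F_{\mathrm{grav}}}{F_{\mathrm{EM}}} = \frac{G\,m_p^{2}}{k_e\,e^{2}} = \frac{(6.67\times10^{-11})\,(1.67\times10^{-27})^{2}}{(8.99\times10^{9})\,(1.60\times10^{-19})^{2}} \approx 8\times10^{-37}.
\]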

The fact that gravitational field strength varies with mass (i.e. with the number of particles present) implies that gravity is a force whose properties are additive: that is, the amount of gravitational attraction per cubic inch of space depends upon the number of particles (i.e. quarks) within that volume.

This implies that gravity is a property of quarks: that the strength of gravity is derived by multiplying a fundamental gravitational constant by the number of quarks present.

Electromagnetism, on the other hand, is a force which is NOT additive: its strength is not affected by the number of quarks present. This implies that electromagnetism is not a property of quarks.

Accordingly, logic implies that it is a property of the electron, not the quark (that being the only other particle present): and the further implication (of its not being additive) is that it is generated by the _motion_ of the electron (not by the number of electrons present).

We understand why the electron may be moving at a fixed speed, i.e. the speed of light, being in theory an electromagnetic wave (rather than a particle). And we would not expect a wave to have an additive effect, after the manner of a particle, given that a wave and a particle are quite different phenomena.

The two matters, taken together, imply that gravity is capable of overwhelming electromagnetism, if a sufficient density of quarks can be brought together.

.

Time

“Oh to ride on a beam of light, so that time would stand still forever.” (James Follett)

Einstein’s theory says that if you could travel at the speed of light time would cease to exist for you. You would be ageless, while the rest of the universe would grow old around you.
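
In the standard formulation this is the statement that the proper time elapsing on a moving clock shrinks with speed (quoted here only for reference):

\[
\mathrm{d}\tau = \mathrm{d}t\,\sqrt{1 - \frac{v^{2}}{c^{2}}}\,,
\]

a factor which tends to zero as \(v\) approaches \(c\).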

This implies that, for a photon, time does not exist. As a photon travels at the speed of light, time must be standing still on it. So if Einstein is correct, logic implies that the photon can experience no change, no evolution, no cause-and-effect, since for it time has stopped.

If Einstein is wrong, one possibility is that time might slow to a degree, but not entirely.

But Einstein could be correct, as it is possible that time dilation has no meaning in relation to a mere packet of energy, which cannot undergo cause-and-effect in any meaningful sense under any conditions, because its entire existence is as a mere vibration in the cosmic structure termed “spacetime”. A particle experiences cause and effect in many ways (exhibited as its temperature, its orbiting electrons, its spin, its field of quarks combining and recombining: an endless series of events occurring); but a mere vibration involves none of this activity.

An electromagnetic wave has properties: it has a frequency, and thus a wavelength, and an amplitude. But these are all static properties, unchanging as the wave propagates; they are fixed properties that *define* the wave; they do not interact with each other, nor with external forces. A wave might be absorbed by a particle, at least in part, but it is thereby destroyed: this type of interaction – seemingly the only type possible – does not amount to the wave experiencing cause-and-effect, which would require the wave to change its state. Instead it merely ceases to exist, converted into (heat) energy, modifying the temperature of the particle by adding a tiny amount of additional energy to it.

.

Time and Motion

Time comes to a stop at the event horizon, because the passing of time is merely a way of describing the rate at which cause-and-effect occurs. There can be no cause-and-effect where the motion (i.e. the spin) of particles ceases, because cause-and-effect is only a description of that motion: the happening of event B depends on the prior occurrence of event A.

The effect of gravity, a property of mass, is to slow down the motion (i.e. spin) of particles. This slowing causes time to pass more slowly, simply because events happen more slowly where the particles which cause them spin more slowly (i.e. interact more slowly with other particles).

A distant observer, not being caught in the same gravity field, is unaffected by it: therefore he can see that, at the event horizon, time is passing more slowly, relative to the rate at which it passes for him.

An observer at the event horizon, for whom time is being slowed by the strong gravity, does not notice that slowing, because he is caught in the effect. But because his own time is slowing, he notices instead that the rate at which time is passing for the distant observer appears to be speeding up.

.

Properties of a Black Hole

If the spacetime field lines curve, one consequence must be to create a hole at the precise center of the effect/field.

(Perhaps the apparent curvature is an illusion: what is actually happening is that the strength of the field is describing a curve, due to the spherical nature of the field, which has a center and propagates outwards from that point equally in all directions.

Because the field strength is curving, an unpowered object caught in the field will – because it is not accelerating, but maintains a constant velocity – follow a path between points of equal strength: since that path follows a curve, the object does too.)

A black hole must represent a zone in which the strength of inertia is zero at all points within the event horizon, implying that there is no resistance to movement in any direction.

However, no events can occur, as all motion (i.e. spin) is gravity-locked. Cause-and-effect is frozen, so time is not passing in any meaningful sense. At least, events are not occurring, so if there is movement there can be no outcome, because particles (if they still exist) cannot interact.

It is possible that within the zone, the energy which was trapped within the particles by their spin (perhaps, indeed, all the energy within the vacuum field) is released by the cessation of that spin (similar to the release of energy in an annihilation reaction), and so exists as an energy plasma, not as individual particles.

The fact that the black hole emits no light implies that nuclear processes are not occurring within the hole. This accords with the theory that cause-and-effect has ceased.

Rapid rotation of the hole might induce a rotating current in the plasma, which might manifest an effect outside the event horizon as a magnetic field, in rapid rotation.

Nothing can emerge from the black hole because the particles falling into it have reached a lowest-energy state: in all directions, energy needs to be injected/supplied to an object to cause it to move outward again, because it faces increased inertia if it seeks to move in any direction (there is no longer a direction which requires less energy than all others); but there is no available energy to inject into it (all energy has been used-up).

Theoretically, the particles have zero energy: even Pauli’s exclusion field has failed for want of the necessary energy.

.

How is Time related to Gravity?

As you approach a Black Hole, time runs slower. This is because gravitational strength increases as you approach. At the event horizon, time stands still.

Your point-of-view may literally depend upon whether you are observing the event from the event horizon, or from a distance. In the former case you will not observe the effect, because you will be participating in it.

An analogy is a deep-freeze: time has been frozen, because all motion has been frozen. Motion on a sub-atomic level ceases entirely, being impossible due to the immense strength of the gravitational attraction, which has the effect of binding the particles to one another.

On Earth, time runs (slightly) faster as altitude (i.e. distance from the Earth’s centre of mass) increases. In a house, time runs (slightly) slower on the ground floor than on the upper floors.

Rotational motion of a particle near the event horizon – quark, electron, neutron – is being hampered by the fact that inertia increases in the direction away from the mass. As the particle rotates, it spins toward the mass for half its spin, but spins away from the mass for the other half of its spin. Its rotation is being impeded in the 2nd half of its spin, by the same forces which are easing that spin in the 1st half. Eventually, spin becomes impossible, at the event horizon. Atomic processes therefore slow, and finally stop.

Nothing has really happened to time, but cause-and-effect, on which mechanical and biological processes depend, has been halted, suspended: thus “time” has been suspended.

This effect, interestingly, only occurs where the compression of spacetime is so great that a significant difference in inertia exists across the tiny distance which is the diameter of the nucleus of an atom.

The quarks which comprise the nucleus are spinning: theoretically their spin occurs in-place, i.e. like spinning on a dime. But they must be larger than the fundamental Planck-scale cells, else any type of rotation would be impossible.

Logically, they can only cease to spin when the inertial gradient across the diameter of the quark – a tiny distance – exceeds the critical value. This implies the presence of a tremendously great gravitational gradient. Accordingly, this effect must logically occur at the event horizon, the point at which cause-and-effect ceases.

This type of analysis of the effects of gravity helps us to understand that “time” is just an artificial concept invented by us, a purely man-made concept, rather than a genuine physical state. Cause-and-effect is real enough, but we measure the passage of time by counting the oscillation cycles of an atom (in practice, a caesium atom in an atomic clock); so if those oscillations are suspended, we lose our ability to count them.

Our measuring device has broken; but cause-and-effect is continuing to obey the laws of nature, notwithstanding that those laws are now suspending it. If the particles can’t rotate, they can’t interact: so nuclear and chemical processes can’t occur. We normally measure time by the occurrence of those processes, so we perceive time to have halted, when really it is only those processes which have halted.

We tend to imagine that time is a dimension of its own, separate from the 3 dimensions of height, width and length. But time is an illusion: a convenient way of recording the movement of a group of particles (known as “the universe”) from one state to the next. We typically call that movement cause-and-effect. Usually, the quarks involved will spin, maybe a billion times a second in free space. But if they can’t spin, at the event horizon, that doesn’t mean that “time” is being modified. What is being altered is our ability, as an observer, to measure time — and only because we use atomic clocks to measure time.

If we had a clockwork clock, and we forgot to wind it, its mainspring would gradually run down until it stopped; but we would not then declaim that something had gone wrong with time. Well, the event horizon is just another means of making a slightly more complicated clock – an atomic clock – run down and stop. As with a clockwork clock, we can’t say that time has stopped, merely because our clock has stopped.

This tends to show that “time” is only a convenient construct, invented by us as a convenience for us, not something which has an objective reality. The 4th dimension is an illusion, albeit a convenient illusion. Physical processes can only move as fast as gravity permits them to move: processes can only occur as fast as gravity permits them to occur. The strength of gravity fluctuates across the universe, from almost nil in free space (i.e. inter-galactic space) to its maximum at an event horizon, and the rate at which nuclear and chemical processes can happen is governed by the local strength of the gravity, hence varies slightly (because gravity does) from one point in space to the next.

This doesn’t mean that time is varying from point to point. “Time” is only an illusion. What we mean by the term “time” is certainly varying from one point to the next, but time is not real in any objective sense, because we can’t point to some field (such as a magnetic field) and say “that’s the field which governs time”.

This is not quite true. We might point to the gravitational field and say “that’s the field which governs time”. If there actually is such a thing as a field that governs time, that’s the logical choice.

Time is just a means of recording the number of times a caesium atom pulsates (i.e. oscillates). We can understand the physical processes as to why it pulsates. But if the conditions it pulsates in are modified, i.e. by a change in gravitational field-strength, the only field present (objectively speaking) is the gravitational field. The atom’s behaviour is clearly NOT being modified by some change in a hypothetical time-field: that field is only an illusion.

We actually define a second (i.e. we define time) as being that period in which a caesium atom pulsates a certain number of times (9,192,631,770 oscillations, under the current definition). If its rate of pulsation changes, the duration of a second accordingly also changes.

If conditions are modified such that the rate of those pulsations is modified, compared with a similar atom at a separate location where the conditions have not been modified, then we are observing time running at two different rates. Thus time seems to be relative, not absolute.

If pulsation becomes impossible, at the event horizon, such that the next pulsation never occurs, then in a real sense time has become infinite, in that the period in which any chosen number of pulsations will occur has become infinity (because the period of a single pulsation has lengthened to infinity).
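
A minimal sketch of the standard (Schwarzschild) factor for a clock held static at radius r outside a mass M, quoted only to show the factor falling to zero at the horizon, where the interval between ticks, as seen from far away, grows without limit:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def clock_rate(r, mass):
    """Rate of a static clock at radius r relative to a distant clock:
    sqrt(1 - r_s / r), where r_s = 2GM/c^2 is the Schwarzschild radius."""
    r_s = 2 * G * mass / c**2
    return math.sqrt(1 - r_s / r)

mass = 10 * M_SUN                      # a 10-solar-mass black hole
r_s = 2 * G * mass / c**2              # horizon radius, roughly 30 km
for r in (10 * r_s, 2 * r_s, 1.1 * r_s, 1.001 * r_s):
    print(f"r = {r / r_s:6.3f} r_s   relative clock rate = {clock_rate(r, mass):.4f}")
```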

If the observer is stationed too close to the event horizon (albeit, we will assume, not close enough for its tidal forces to disrupt him!), his own time will be slowed down, and he will experience the relative nature of time: in that time at any point more distant from the event horizon will appear to him to be running faster than his personal time.

If his distance from the event horizon is reducing, his personal time will run ever more slowly, so events in the distance will appear to run ever more rapidly. Such an observed effect would prove that the observer was in fact falling into the black hole.

.

Questions with no Answers?

Q: Is the effect of gravity on observed “time” a different phenomenon from the effect of high relative speed; or is the “time effect” due to the high relative speed achieved as strong gravity pulls the object ever faster?

A: “Time” is a term which we use in recording the rate at which events occur (such as the rate at which quarks spin, which is one measure of the rate at which they move), and “gravity” is a measurement of the resistance to motion (i.e. inertia) that space imposes on particle spin.

So “time” is thus a measurement of the spin-rate of a particle, such as a quark, a rate which is imposed by the structure of space, that “structure” being space as modified by “gravity”.

In theory, since gravity varies with distance from the nearest massive object (such as a star), what inertia is really measuring is the time taken by a particle to move from one point in space to the next. Gravity reduces the distance between adjacent points, by compression, thus reducing the time required to move between them. This we perceive as a reduction in inertia, in the direction toward the mass.

This is not related to Einsteinian time “dilation” — the apparent slowing of time, as perceived by a distant observer, in the vicinity of an object moving at near the speed of light. High gravity has a real physical effect: it slows down particle spin rate. High speed, by contrast, has no real physical effect on the moving object itself: the relativity effect (on this argument) is an illusion, caused by the fact that the light which carries the images cannot travel to the observer instantaneously, but has a speed limit. So if the object which is emitting the light is travelling close to that speed limit, whilst also moving away from the observer, the light being emitted is being delayed.

It’s like watching a movie where the images, instead of reaching the eye of the beholder at the emitted 24 frames a second, are only arriving at (say) 4 frames a second: the movement, as perceived by the observer, appears to be greatly slowed down.

In actuality, the images are still being emitted at 24 frames a second, but the emitter is moving away by a significant distance in the period between frames. The faster the emission source is moving, the greater is the distance it moves between each adjacent pair of frames. As it approaches the speed of light, it begins to move ever closer to a state in which the distance it is moving away between frames is the same as the distance covered by light in a complete second, such that only one frame per second arrives (from the point of view of the distant observer).
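
For reference, the standard formula for a receding source gives the arrival rate of the “frames”; it combines the light-delay effect described above with the Lorentz factor, and the observed numbers are the same whichever interpretation of that factor one prefers:

```python
import math

def observed_fps(emitted_fps, beta):
    """Frame rate seen by a distant observer from a source receding at
    speed beta = v/c (relativistic Doppler factor for recession)."""
    return emitted_fps * math.sqrt((1 - beta) / (1 + beta))

for beta in (0.0, 0.5, 0.9, 0.99, 0.9999):
    print(f"v = {beta:.4f} c   frames arriving per second = {observed_fps(24, beta):5.2f}")
```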

If it was possible to travel faster than light, the light emitted by the rapidly moving object would never reach the distant observer, because the object would be moving away from him faster than the light emitted by it could travel towards him. [In effect, only part of a frame would reach the observer each second.]

In theory, the image might appear to freeze, as the final frame arrived, and then fade to black since images would thereafter no longer be received.

If the rapidly moving object gradually slows down, to eventually match the (slow) speed of the observer, the images gradually return to normal: they gradually speed up – in their arrival – from 4 frames a second back to the normal 24 frames a second. Of course, they were always being emitted at 24 frames a second: it was only the physical limitation on the speed of transmission which made it appear otherwise.

Thus the entire effect was an illusion, caused by the limited physical properties of light waves, which cannot travel infinitely fast (which is perhaps better expressed as the limited physical properties of the structure of space). Whereas the effects of high gravity are actual and real.

.

Q: If time moves slower in places where gravity is stronger, why does it not go faster in low or zero G?

A: The question tries to imply a falsehood, by suggesting that time is identical with the devices we employ to measure it.

The truth is that the devices we call ‘clocks’ do not measure time. Not in a real sense. We design a clock to tick 60 times a minute, hence a clock is a sort of ‘recorder’: it is, by design, recording the elapsing of a period which we have arbitrarily defined as 1 minute; it is not in any sense measuring anything.

We have merely programmed it to perform a simple mechanical function every second: the clock itself does not probe spacetime in order to measure any property of the time field. Therefore, the clock is not truly a measuring device, of any sort, merely a metronome, marking out an arbitrary duration pre-programmed into it.

This is certainly true of any mechanical clock in your home. Physicists try to overcome the limitations of simple mechanical devices by using atomic clocks, which count the oscillations associated with a transition in caesium atoms, for instance, but the basic objection remains: the clock is performing a programmed sequence in which it ‘ticks’ 60 times a minute, rather than actually probing or measuring the properties of space or time. It is doing a sophisticated job of counting the number of oscillations per second of the caesium atom, but it is still only a more sophisticated type of counter.

The idea is that in different environments the number of natural oscillations will be greater or fewer. It is still only a measurement of an effect, rather than of time itself; but it implies that we expect mechanical – even quantum mechanical – processes to slow down if the environment’s gravity (i.e. its inertial resistance to motion) is greater. And that is what we in fact observe.

When we move that atomic clock further from the center of the Earth, into high altitude or into space, the mechanical – or quantum mechanical – processes of the clock encounter less resistance from gravity, so operate more quickly. Time is thus not an absolute: it must vary with every variation in gravity; and gravity varies continuously with the distance of the observer from the local center of mass. This implies that time, too, varies continuously, with every variation in gravity.
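
A concrete, well-measured instance of this is the GPS constellation, whose on-board clocks run fast relative to clocks on the ground. A rough sketch of the two standard contributions (the figures below are approximations, offered only as an illustration):

```python
# Approximate daily clock offset of a GPS satellite relative to a ground clock:
# altitude makes the orbiting clock run fast, orbital speed makes it run slow,
# and the net effect is on the order of +38 microseconds per day.
GM = 3.986e14            # Earth's gravitational parameter, m^3/s^2
c = 2.998e8              # speed of light, m/s
R_EARTH = 6.371e6        # mean Earth radius, m
R_ORBIT = 2.657e7        # GPS orbital radius (about 20,200 km altitude), m
SECONDS_PER_DAY = 86400

gravity_gain = (GM / R_EARTH - GM / R_ORBIT) / c**2 * SECONDS_PER_DAY
orbital_speed_sq = GM / R_ORBIT                       # v^2 for a circular orbit
speed_loss = orbital_speed_sq / (2 * c**2) * SECONDS_PER_DAY

print(f"gravitational gain : {gravity_gain * 1e6:5.1f} microseconds/day")
print(f"velocity loss      : {speed_loss * 1e6:5.1f} microseconds/day")
print(f"net offset         : {(gravity_gain - speed_loss) * 1e6:5.1f} microseconds/day")
```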

All physical processes accordingly occur at a speed which is relative to the local gravitational field strength. Whether this means that time is really continuously variable, or whether it means time is simply an illusion, is unclear. If time can have no absolute value, only a rate that varies everywhere, does it really have an objective existence?

The rate at which events occur is what we are actually measuring (or at least recording), but only relative to some other clock for which we have arbitrarily chosen other values (such as the rate of passage of events at sea level). However, variations in the rate are notoriously difficult to detect or record on a small planet such as Earth, which has such a weak gravitational field that variations in the rate are very slight.

Posted in Science

Science — Some notes on Gravity phenomena


Newton’s law of gravity

Not only does Newton give us the earliest and most readily understandable theory of gravitation, he also discovered the inverse square law  —

Every particle attracts every other particle in the universe, with a force proportional to the product of their masses, and inversely proportional to the square of the distance between their centers.
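
In symbols, with \(G\) the gravitational constant:

\[
F = G\,\frac{m_1 m_2}{r^{2}}, \qquad G \approx 6.674\times10^{-11}\ \mathrm{N\,m^{2}\,kg^{-2}}.
\]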

.

Einstein’s theory of Relativity

Einstein’s theory of general relativity can be summarised in two statements:

Matter tells space and time how to curve. And (curved) space and time tells matter (and energy) how to move.

This implies that space and time are properties of matter. And that gravity is too.


The Mass of the Universe

Astronomers have re-estimated the total number of galaxies in the observable universe.

The University of Nottingham, in the UK, now estimates that 2 million million (2,000,000,000,000) exist, based on a re-examination in 2016 of deep exposure images from the Hubble Space Telescope.

This might equally be expressed as two thousand billion.

The total mass of ordinary matter (i.e. baryons) represents only about 5% of the total mass and energy in the universe, and roughly 15% of all matter, the remainder being dark matter. Galaxies were once thought to be solely composed of baryons (protons, neutrons), until it became evident that their visible mass does not account for the strength of their gravity.


What is a gravity field?

As the amount of mass increases, the resistance to the movement of particles in its vicinity declines.

The proportionate decline suggests that what we term “gravity” is actually merely a measurement of the spacetime field’s resistance to particle motion (a measurement of the change in that resistance).

The notion of a “gravity field” may be an illusion: gravity may be just one property of the quantum field, i.e. of the spacetime field.

As the field’s resistance declines, particles move in that direction (i.e. the direction in which it declines). But they are not really being attracted to one another, nor even attracted to the local mass causing the effect. Their (inherent) energy is unmodified; but they are acquiring momentum, gained from an increase in their velocity due to the declining resistance of the field. They are merely “clumping together”, a very loose form of association, due to the absence of that resistance or, initially, due to the presence of a resistance gradient; not because of the presence (or formation) of a bond between the particles.

On the assumption that the presence of mass causes the spacetime interval (the gap between the field lines) to decrease as the distance from the centre of that mass decreases, the value of inertia (which is really only the time taken to cross that gap) decreases as well; and this causes the linear motion of the particle to increase, without any injection or addition of energy into the particle: the existing (invariant) energy comprising the particle is simply encountering less linear resistance.

The time taken (to move between adjacent field lines) is less, so the particle’s motion (i.e. its velocity) has increased, since it is now crossing that distance in less time.

Newton theorised that an object in motion (e.g. a particle) will continue that motion unless acted upon by an outside force: his first law of motion, closely tied to the conservation of momentum. That principle nevertheless conflicts with his theory of gravitation, in which a particle accelerates in a gravitational field without any application of force (if by force we understand him to mean an injection of energy). Even so, conservation is a valid expression of part of what is occurring, since the particle’s inherent energy (its mass) is unchanged (albeit that its momentum, which is its mass multiplied by its velocity, is not).

The velocity is varying in exact agreement with the variation in the resistance of the medium through which the particle is moving. It therefore seems that only mass is genuinely invariant, as velocity (and thus momentum) is not. The velocity of the object is increasing (as it falls): so its momentum is increasing too. But its mass (which is thought to be simply energy in a bound state) remains invariant. It is the medium’s response which changes.

Newton’s theory of the conservation of momentum seems too simplistic, as it fails to take his theory of gravity into account. Momentum is varying as the particle’s position within the gravitational field varies, even though no application of force is occurring, which Newton’s theory claims is impossible.

Einstein rejected Newton’s theory as too simplistic, and we should be wary of rejecting Einstein’s deeper insight into the principles of gravitation.

Momentum ceases to be conserved, because in a gravitational field velocity is a variable, dependent upon the particle’s location within the field. It is only mass which is invariant. The field’s response varies with the distance from the centre of the mass generating it, and with the angular motion of the particle. Momentum is thus variable, varying with the velocity, which in turn is varying with the condition (the “response”) of the field.

If you merely drop a rock (off a 100ft high cliff), so that it falls, can such an action subsequently become a “downward force”? If the rock is hurled downwards, one can speak of an application of force; but not if it is merely dropped. The rock, logically, merely follows the path of least resistance (when released). Gravity might superficially resemble a force, but no force in the usual sense is being applied, only a reduction (at the quantum level) in the resistance of the medium in a specific direction, caused by the presence of (planetary) mass.


Gravity (Speculation)

At what point is the value of inertia zero?

Einstein postulates that gravity is a structural effect: a consequence of a reduction in inertia (the resistance of the spacetime field to particle movement), in the presence of mass.

 That is not how Einstein expressed it, but it is a logical consequence of his theory (i.e. the theory that the cause of gravity is structural), since that implies a gradual reduction in resistance to motion (which is only a way of describing inertia).

If the value of inertia decreases because of the presence of mass, reducing in the direction of the local center of mass, then there must logically come a point at which the value of inertia declines to zero.

Logic implies that this must occur only at (or within) the event horizon, as that is the point where we observe the attraction terminating. If attraction terminated prior to that, the mass would not fall onto the event horizon.

Inertia (resistance to motion) varies according to how much mass is present; and a decline in resistance to movement gives an illusion of that mass “attracting” the particle towards it, thereby causing what we traditionally think of as the gravity field strengthening.

This illusion of a pull-force tends to blind us to the more simple truth, that an object in motion will tend to follow the path of least resistance: that, therefore, gravity is no more than a structural effect, by which resistance to motion is reduced in a specific direction, thereby giving rise to a path having lesser resistance, which a particle in motion must inevitably follow.


Inertia

Every particle has inertia, because inertia is a property of the particle’s mass. Inertia is that field which holds the particle in place (binding it to the fabric of spacetime). A particle will not respond to gravitational attraction (it might feel it, but will not move in response to it), unless the strength of the gravitational field exceeds the strength of the inertial field which is holding it in place. At the macro level, an object’s inertia is proportional to its mass (i.e. to the number of particles it contains).

Theoretically, it is at least possible that both gravitational attraction and inertia are two aspects of a single effect (or property). Gravitation seeks to cause a movement, which inertia seeks to resist. At a fundamental level, it is logical to suppose that motion is purely a result of an imbalance between the two effects.

But it is equally possible that gravitational attraction is merely an absence of inertia — that a simple reduction in inertia, in one specific direction (the direction of the greatest concentration of mass present), permits a particle to move in that direction (a particle in motion is governed as to its direction-of-motion by whatever direction offers the least resistance at the quantum level).

The beauty of this argument, from the perspective of logic, is its simplicity. A reduction — or (in the opposite direction) an increase — in inertia (the delay in moving the particle a specific distance) creates the effect we think of as gravity. Applying Occam’s Razor (the rule of thumb that the simplest adequate explanation is to be preferred), this simplicity counts in the argument’s favour.

At the quantum level, energy may be required for tunnelling through the vacuum field. If so, a particle must tend to move in the direction of least resistance, which must equate to the direction in which least energy is required: the path of least resistance.

The structural effect which allows a particle’s motion to be influenced by gravity may also be the effect which causes the particle to possess inertia (as both fields, taken together, govern motion). Structurally, in one direction the quantity of energy required falls (the effect we term “gravitation”), and in the opposite direction the quantity of energy required rises (the effect we term “inertia”).

Logic implies that if these are both structural effects, the most likely cause is that in one direction the distance involved is reducing, and in the other is lengthening: this would account for the energy needed for each “jump” (i.e. involved in the quantum tunnelling), and the time taken by the jump, to vary with direction.

The two effects are two sides of the same coin.

This is based on the logic which requires the presence of a granular structure within spacetime, in which the granules are bound together by tensors into a structure capable of vibrating: one in which the tensors are capable of varying in length (in order to permit the structure to vibrate).

Such variation in length might account for more than the transitory variations which allow electromagnetic energy to be transferred (by the vibrations we perceive as, for instance, light): the variation might be semi-permanent in nature, shortening in the presence of mass for so long as the mass is present. Thus in a single concept we combine an explanation for both electromagnetism and gravity: the presence of tensors, linking adjacent quantum fields together, which are capable of varying in length (to permit both vibration of the structure and reduction in the field separation).


Gravity

Gravity is an effect (a consequence) of mass.

It is a property of mass, and its strength depends upon the distance from the mass, reducing as the distance increases. The degree of reduction is in accordance  with Newton’s inverse square law.

It is a natural (or inherent) property, one which is always present; it does not depend upon the motion of or energy-state of the mass, nor does it require the presence of any stimulus.

It is a field effect, in that it propagates equally in all directions, three-dimensionally, forming a sphere — or shell — entirely surrounding the mass (a shell in the sense that it comprises a series of layers, like an onion, each layer being of uniform strength, the layers decreasing in strength with increasing distance from the centre of the mass).

It is generated by all mass; but its strength depends on (i.e. is proportional to) the quantity of mass present. For example, a single electron has a tiny mass, whereas a black hole has a vast mass.

Even though both are small objects, the greater quantity of mass in a black hole, compared to that of an electron, gives rise to a greater field strength.

A given quantity of mass has different effects, in relation to local objects, depending on its local density.

The local field strength, i.e. the effect of the gravitational field on an object nearby, depends on the density of the mass. If concentrated into a black hole, at such a density it will have much greater local effects than if dispersed across, say, half a cubic light year as a gas cloud or nebula.

Yet at non-local distances (i.e. interstellar distances), the effects of a mass of a given quantity will be identical, whether that mass is concentrated into a black hole or dispersed as a nebula.

In local space, a free floating object will be pulled in different directions by the mass if that mass is dispersed (e.g. dispersed as a gas cloud): hence the different effects will tend to cancel each other out; but the object will be pulled all in a single direction if the mass is concentrated at a single point (e.g. as a black hole), whereby the effects all reinforce each other.

Local effects always predominate, since if the distance from the mass is doubled (i.e. multiplied by 2) the field’s strength falls to 1/4. And because each time the distance is doubled the strength falls to 1/4 (25%), if the distance is multiplied by 8 (three doublings: to 2, 4, 8) the field strength falls to just 1/64, a little over 1.5% (a reduction, to 25% of the previous value, being applied at each stage) —

100%  >  25%  >  6.25%  >  1.5625%

This is also seen by applying the inverse square law (one over the square of the distance), applied in the following simplified form (i.e. its application to the change in the distance):

(a) Where the distance is multiplied by 2, the effect on the field strength is to reduce it to 1 over 2² –

1/2²  =  1/4  =  25%

(b) Where the distance is multiplied by 8, the effect on the field strength is to reduce it to 1 over 8² –

1/8²  =  1/64  =  1.5625%
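
The same arithmetic can be checked for any distance ratio; a minimal sketch, purely for illustration:

```python
def relative_strength(distance_ratio):
    """Field strength, relative to its starting value, after the distance
    from the mass is multiplied by distance_ratio (inverse square law)."""
    return 1.0 / distance_ratio**2

for ratio in (2, 4, 8, 100):
    print(f"distance x {ratio:<3}  strength = {relative_strength(ratio):.4%}")
```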


Posted in Science

Science — The Structure of Spacetime

We start from an assumption that space/time has a structure, which is solid in the sense that it is composed of a 3-dimensional grid (or lattice) of forces, that simulate a solid, but flexible, 3-dimensional honeycomb; and that these forces mimic the solid bonds which exist in (say) a sprung mattress: namely a bond which joins two defined locations but which has the flexibility of a coiled spring. These forces, joining one location to the next, represent the field lines of the grid (or lattice).

Einstein based his General Theory of Relativity on such an assumption: namely that space/time is an underlying structure, which the 4 fundamental forces (such as gravity) can modify.

We can create a theory of mass, in which the elements of the theory are self-consistent, if we assume that the presence of mass (e.g. a particle such as a proton) causes the distance between the field lines to reduce. We can think of the lattice as having a flexibility that permits adjacent field lines to behave as though they were joined by tiny steel springs, whereby the presence of mass puts pressure on these springs, and that pressure draws the field lines closer together. We will assume also that the degree of the reduction (i.e. compression) is proportional to the amount of mass present.

It may well be that the degree of the compression-effect is determined by multiplying a standard single value (a constant) by the number of nucleons (i.e. protons and neutrons) that are present in the aggregation of mass.

A related assumption is that the reason why mass has inertia is that there is a resistance to the movement of a particle from one location to another within the space/time structure. If we further assume that a particle can move only by moving along the individual field lines, we might reasonably conclude that any reduction in the distance between the field lines (due to the presence of mass) reduces that resistance. We can then envisage an individual particle, when moving past an aggregation of mass, as following a curved path (a path curving toward the mass) due to the lessening of resistance in the direction of the mass: lower resistance in that direction causes it to move toward the mass, since, by definition, that direction offers less resistance to the particle’s passage, and a particle always follows the line of least resistance.
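
A toy numerical sketch of the picture in the preceding paragraph. Everything here (the form of the resistance function, the step size, the deflection rule) is an arbitrary illustrative choice, not a formal model; the point is only that a drifting particle, deflected toward wherever the assumed resistance falls fastest, traces a path that curves toward the mass:

```python
import numpy as np

def resistance(pos, mass=50.0):
    """Hypothetical resistance of the medium: near 1 in open space,
    dropping as the distance r to a mass at the origin shrinks."""
    r = np.linalg.norm(pos)
    return 1.0 - mass / (mass + r**2)

def gradient(pos, eps=1e-3):
    """Numerical gradient of the resistance field at pos."""
    gx = (resistance(pos + [eps, 0.0]) - resistance(pos - [eps, 0.0])) / (2 * eps)
    gy = (resistance(pos + [0.0, eps]) - resistance(pos - [0.0, eps])) / (2 * eps)
    return np.array([gx, gy])

pos = np.array([-30.0, 10.0])   # start to the left of, and above, the mass
vel = np.array([1.0, 0.0])      # initial drift: straight along x

for step in range(61):
    vel = vel - 0.5 * gradient(pos)     # deflect toward lower resistance
    vel = vel / np.linalg.norm(vel)     # the sketch tracks only the path's shape
    pos = pos + vel
    if step % 15 == 0:
        print(f"step {step:2d}   x = {pos[0]:7.2f}   y = {pos[1]:7.2f}")
```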

At the quantum level, in travelling between two adjacent field lines a particle is probably tunnelling through the intervening space. Thus the resistance to its passage is less if the distance it has to tunnel through is less.

The standard approach of physicists is to take the view that space/time is curved. That is only an analogy, but it is in one respect an unhelpful analogy: there being no obvious way for space — which by definition already fills all three possible directions (i.e. dimensions) — to curve. The idea of curvature is thus a useful analogy, but a misleading one.

At best, it might be said that whilst space itself is not curved, it can be made to seem so by adding in a time element: a curvature can, after all, only be observed by following the path of a particle over some period of time — some duration.

It is less misleading to think of spacetime in terms of a map of varying resistance. In this concept, open space represents the greatest resistance to a particle’s motion: an area where the spacetime structure’s natural resistance to motion has no ameliorating factors. Any aggregation of mass will reduce that resistance, in the direction of the mass, proportionately to the amount of mass present. The degree of the reduction, and the distance from the mass over which the effect has an influence, are proportional to the density of the mass (not merely the total amount present): a mass of 1 billion tons will presumably exhibit differing properties if (a) distributed over a volume of one cubic light year, or (b) compressed into a volume of one cubic inch.

We must recognise that any theory owes much to analogy. The actual mechanics taking place at the Planck level are not important in developing the initial theory. In the absence of direct observation of that level, many possibilities must exist; and it does not make any difference to the theory which of the many possible explanations of the mechanics is the correct one.

The function of an analogy is to simplify the underlying detail. It is helpful to comprehension of the theory to avoid loading it down with technical terminology (or multiple alternative mechanisms), as that may have the effect of concealing, rather than clarifying, its meaning. The likelihood is that, at the Planck length, the fabric of spacetime is a complex balance between matter and antimatter, whereby the two generally cancel out, and only the occasional area of imbalance between the two is perceptible to us: we term it a “particle” (or, more accurately, a quark).

It is not of fundamental significance to our theory how a particle disturbs the spacing between the field lines, nor what those field lines are composed of. The theory seeks to reconcile all the known effects caused by the existence of the field, and perhaps to deduce the structure of the field; it cannot assist with probing the nature of the field, since that exists at the Planck level, which cannot be directly observed.

We can deduce that the field must exist; that it must be fine-grained (possibly akin to a foam); that it must be rigid, yet slightly elastic, allowing the structure to vibrate (for the propagation of energy), and to compress (in the presence of mass); that it must cause the existence of particles; that its resistance to movement by particles must cause their inertia; that variations in that resistance must cause the effect we perceive as gravity; that it must govern the strong force and the electro-weak force.

.

Logic tells us that it is impossible for an event at Point A to cause an effect to occur at Point B, unless there is some contact occurring between the two points.

One possibility is that a particle (or perhaps a quantum of energy) travels from Point A to Point B, and causes the effect to occur at Point B.

Another possibility is that Point A and Point B are permanently connected in some manner, such that an event occurring at A is transmitted to B by a movement in the structure connecting them.

The first of these possibilities seems unlikely: electromagnetic events at Point A, for example, seem to radiate in a spherical pattern in 3-Dimensions, implying that an almost infinite number of particles would have to be emitted, in order to cause the radiation to be detectable from all directions simultaneously.

Also, the strength of the detected effect falls with increased distance, and it is difficult to construct even a theoretical model of how, realistically, this reduction might be caused, if the vector is some (irreducibly tiny) carrier. How do you infinitely subdivide an irreducibly small object?

The second of these possibilities seems much more plausible. If we imagine spacetime to be like a mattress, with each focal point connected to the next by a spring under tension (i.e. a tensor), then it is fairly easy to visualise how a disturbance at Point A could cause multiple effects simultaneously in every direction.

The fact that we cannot see nor perceive the tensor web is not a negating factor. It is so small that we cannot logically expect to be able to see it. We cannot see the oxygen and nitrogen atoms of the air we breathe, because they are so small; but we do not therefore conclude they do not exist.

Open space (interplanetary space) is apparently empty. Yet the air around us is also apparently empty: but we know that the air is comprised of transparent gases. The presence of wind tells us that, logically, this gas is present. Thus the presence of sunlight tells us, logically, that the tensor web is present — carrying (i.e. transmitting) electromagnetic waves from point A to B.

The wind represents a disturbance in the gas which comprises the air. And light (and other electromagnetic frequencies) represent a disturbance in the field which comprises spacetime.

Logically, there must be some possibility that the tensor web is itself an electromagnetic structure, given that what it is carrying is a wave which has electromagnetic properties, and thus there must be some possibility that such a wave is simply a disturbance in an electromagnetic field.

Once we begin to think of spacetime as an electromagnetic field, or as any kind of field, we inevitably require the presence of a web of tensors: for a field to exist, each point in spacetime must be linked to at least one other point; but, more likely, each is linked to several other points. In a field with a 3-dimensional structure, each point must be linked to at least 6 other points (2 for each dimension) — possibly more.

The logic of a disturbance at Point A (a star) being visible at Point B (the Earth) is that a set of links must exist which are being disturbed all along the path between those two points. If this intermediate chain was incomplete, the disturbance would be interrupted in its passage, so would not be visible from Point B.

.

Wave Theory – Electromagnetic Vibration

Theory suggests that all transmission of electromagnetic energy occurs as a vibration in the fabric of spacetime.

This vibration, because it is a 4th dimensional effect (i.e. because it can only be perceived as a cycle, requiring some duration of time), is perceived as a wave — more accurately, as a series of waves.

Each vibration represents the wave’s peak, and is followed by a trough which is merely an absence of such vibration; then a further vibration arrives – at the observer – representing the next peak.

We measure this vibration by either the distance or the time between adjacent peaks. We employ an analogy with the movement of water in an ocean, describing this type of motion as waves, because the energy passing the observer appears to rise and fall in amount, in a cyclical manner, similar in appearance to a wave in an ocean.
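
For electromagnetic waves in a vacuum, those two measures are connected by the standard textbook relation: the speed of light equals the wavelength multiplied by the frequency. A minimal Python sketch of the conversion (ordinary physics, nothing specific to the tensor-web picture):

# Converting between the two ways of measuring the vibration: the distance
# between adjacent peaks (wavelength) and the time between adjacent peaks
# (the period, i.e. the reciprocal of the frequency).

C = 299_792_458.0  # speed of light in a vacuum, metres per second

def wavelength_from_frequency(frequency_hz):
    """Distance between adjacent peaks, in metres."""
    return C / frequency_hz

def period_from_frequency(frequency_hz):
    """Time between adjacent peaks arriving at the observer, in seconds."""
    return 1.0 / frequency_hz

f = 100e6  # a 100 MHz radio signal, as an example
print(wavelength_from_frequency(f))  # about 3 metres between peaks
print(period_from_frequency(f))      # about 1e-8 seconds between peaks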

There is a true connection with ocean waves: the vibration is passed on by one element in the medium (i.e. water or spacetime) knocking against the next, but the medium itself is not in motion. The only motion is the disturbance in the medium: this gives the appearance (from a distance) that the medium is moving, but when the process is examined in close-up that is found to be an illusion.

A snapshot of one instant in time will tend to disguise the effect, because it cannot include the motion involved. However rapid the vibration, it inevitably requires time in which to occur, however brief that period of time is.

This is the reason why Einstein describes reality by the term spacetime: a recognition that you cannot describe real events without considering their effect (which requires an examination of the next following instant of time). This is the basis of the (crucial) concept that cause and effect is the key to understanding reality.

In that sense, the expression ‘clockwork universe’ has some meaning: the concept that each event causes the next following event.

Time is bound up with the concept that, if molecule A is to collide with molecule B and thereby transfer its energy to B, then molecule A must be in motion: motion is incompatible with the ‘frozen’ state implied by taking a snapshot of the collision event. For the two to collide, we must provide for motion, i.e. a sequence of such snapshots.

We might, quite validly, define ‘time’ as being the motion of the molecule, in moving the shortest distance which exists in nature. ‘Time’ is thus actually a measurement of distance: hence there is a blurring of the concepts of ‘distance’ and ‘time’, because we are defining time in terms of movement — really in terms of the distance moved. We use an arbitrary period termed a second, but we actually ought to think in terms of how many movements through that shortest distance will occur in that arbitrary period.
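
Purely as illustrative arithmetic, and assuming only for the sake of the example that the ‘shortest distance in nature’ is something like the Planck length, the number of such minimal movements that fit into one arbitrary second, for motion at the speed of light, can be counted:

# Illustrative arithmetic only: how many traversals of a "shortest distance"
# fit into one second, if that distance is assumed to be the Planck length
# and the motion is at the speed of light.

C = 299_792_458.0          # speed of light, metres per second
PLANCK_LENGTH = 1.616e-35  # metres (approximate)

traversals_per_second = C / PLANCK_LENGTH
print(f"{traversals_per_second:.3e}")  # roughly 1.9e43 minimal steps per second

The result is simply the reciprocal of the Planck time; the point is only that the arbitrary period termed a second would then contain a vast, but finite, number of fundamental movements.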

One implication is that time is divided into a sequence of instants, or moments: the notion that there is a fundamental unit of time, just as there is for distance. However small (brief) this unit is, it is genuinely fundamental in that it cannot be subdivided.

It is presumably the period in which a quark or electron or gluon (or one of their constituent parts) — the components which make up a molecule — transitions from one state to another.

If we are defining the fundamental unit of time by reference to the fundamental unit of distance, such that the two are related, this is a further proof that time must have a fundamental unit, because distance does. Spacetime is formed, in mathematical terms at least, by combining these two fundamental units.

If motion is assumed to be continuous, the energy confined within the quark or electron or gluon is presumably never in a ‘frozen’ (stationary) state, but, rather, is in a state of continuous flow.

Posted in Science

Alias Smith and Jones : Episode log

Season 1
1. Alias Smith and Jones [Pilot, TVM, 90 mins]
2. The McCreedy Bust
3. Exit from Wickenburg
4. Wrong Train to Brimstone
5. The Girl in Boxcar 3
6. The Great Shell Game
7. Return to Devil’s Hole
8. A Fistful of Diamonds
9. Stagecoach Seven
10. The Man Who Murdered Himself
11. The Root of It All
12. The Fifth Victim
13. Journey from San Juan
14. Never Trust an Honest Man
15. The Legacy of Charlie O’Rourke

http://digiguide.tv/programme/Drama/Alias-Smith-and-Jones/81042/season-1/

 

Season 2
1. The Day They Hanged Kid Curry
2. How to Rob a Bank in One Hard Lesson
3. Jailbreak at Junction City
4. Smiler with a Gun
5. The Posse That Wouldn’t Quit
6. Something to Get Hung About
7. Six Strangers at Apache Springs
8. Night of the Red Dog
9. The Reformation of Harry Briscoe
10. Dreadful Sorry, Clementine
11. Shootout at Diablo Station
12. The Bounty Hunter
13. Everything Else You Can Steal
14. Miracle at Santa Marta
15. 21 Days to Tenstrike
16. The McCreedy Bust: Going, Going, Gone!
17. The Man Who Broke the Bank at Red Gap
18. The Men That Corrupted Hadleyburg
19. The Biggest Game in the West
20. Which Way to the OK Corral?
21. Don’t Get Mad, Get Even
22. What’s in It for Mia?
23. Bad Night in Big Butte

NB: Roger Davis from Episode 19

http://digiguide.tv/programme/Drama/Alias-Smith-and-Jones/81042/season-2/

 

Season 3
1. The Long Chase
2. High Lonesome Country
3. The McCreedy Feud
4. The Clementine Ingredient
5. Bushwack!
6. What Happened at the XST?
7. The Ten Days That Shook Kid Curry
8. The Day the Amnesty Came Through
9. The Strange Fate of Conrad Meyer Zulick
10. McGuffin
11. Witness to a Lynching
12. Only Three to a Bed

http://digiguide.tv/programme/Drama/Alias-Smith-and-Jones/81042/season-3/

 

Alias Smith and Jones

US drama series

Alias Smith and Jones (TV Movie)
Series 1 Episode 1
A feature-length episode of the Western drama series about two likeable outlaws trying to make good in 1880s America.

Cast (unconfirmed)
Pete Duel … Hannibal Heyes (alias Joshua Smith)
Ben Murphy … Jed ‘Kid’ Curry (alias Thaddeus Jones)
James Drury … Sheriff Lom Trevors
Earl Holliman … Wheat
Dennis Fimple … Kyle

 

The McCreedy Bust
Series 1 Episode 2
When a rich rancher (Burl Ives) hires Heyes and Curry to retrieve a bust of Caesar, they are convinced the job will be a simple one – until they discover just who the statue really belongs to.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Burl Ives … McCreedy
Cesar Romero … Armendariz

 

Exit from Wickenburg
Series 1 Episode 3
Heyes and Curry come to the aid of a lovely widow who asks them to manage her saloon. After all, it is decent, honest work that should keep them out of trouble. So why is someone trying to run them out of town?

Cast (unconfirmed)
Pete Duel
Ben Murphy
Susan Strasberg
Pernell Roberts

 

Wrong Train to Brimstone
Series 1 Episode 4
Heyes and Curry (alias Smith and Jones) pose as special agents hired to foil a train robbery plotted by the Devil’s Hole gang

Cast (unconfirmed)
Pete Duel
Ben Murphy
William Windom
J.D. Cannon
William Mims
J. Pat O’Malley

 

The Girl in Boxcar 3
Series 1 Episode 5
Smith and Jones agree to take on a job to earn some money and, at the same time, please a friend of the Governor. All they have to do is transport $50,000 some 400 miles. But they hadn’t anticipated meeting a girl called Annabelle and the money mysteriously disappearing.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Heather Menzies
Alan Hale Jr.
John Larch

 

The Great Shell Game
Series 1 Episode 6
Heyes thinks he has discovered a foolproof way of backing the winning horse at the races. But is it just a confidence trick?

Cast (unconfirmed)
Pete Duel
Ben Murphy
Diana Muldaur

 

Return to Devil’s Hole
Series 1 Episode 7
A beautiful woman cons Heyes into revealing his old gang’s hideaway.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Diana Hyland
Fernando Lamas

 

A Fistful of Diamonds
Series 1 Episode 8
Heyes and Curry are framed for a bank robbery in Kingsburg which went wrong. Could this be the end of their bid for amnesty?

Cast (unconfirmed)
Pete Duel
Ben Murphy
John McGiver
Michele Carey
Sam Jaffe

 

Stagecoach Seven
Series 1 Episode 9
Heyes and Curry, roped and helpless, watch while two groups have a shootout over the reward offered for turning them in.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Keenan Wynn

 

The Man Who Murdered Himself
Series 1 Episode 10
While Curry drives a wagonload of dynamite across rugged country, Heyes volunteers as a guide for two Englishmen.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Patrick Macnee
Juliet Mills

 

The Root of It All
Series 1 Episode 11
Heyes and Curry come to the aid of a fellow traveller when their stagecoach is robbed and a letter, revealing the burial place of $100,000, is stolen.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Judy Carne
Tom Ewell

 

The Fifth Victim
Series 1 Episode 12
When the participants in a poker game are killed one by one, Kid Curry decides it is time to act before his partner becomes the next victim.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Joseph Campanella

 

Journey from San Juan
Series 1 Episode 13
Heyes and Curry are used as bait to bring husband murderer Blanche Graham to justice.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Claudine Longet
Susan Oliver
Nico Minardos

 

Never Trust an Honest Man
Series 1 Episode 14
Smith and Jones mistakenly take a bag containing ten million dollars-worth of diamonds and return it to the railroad magnate Oscar Harlingen. When he examines them and finds they are fakes, he sends his personal posse after the outlaws.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Robert Donner
Marj Dusay
Severn Darden

 

The Legacy of Charlie O’Rourke
Series 1 Episode 15
When an old friend of Heyes and Curry is hanged, he takes the secret of where he hid $100,000 in gold to his grave. Or does he?

Cast (unconfirmed)
Pete Duel
Ben Murphy
Joan Hackett
J.D. Cannon

 

The Day They Hanged Kid Curry
Series 2 Episode 1
A feature-length episode of the adventure series about two outlaws trying to make good. Fred Philpotts is sick of being a nobody, until he hits on the idea of impersonating Kid Curry to get himself noticed.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Robert Morse
Belinda Montgomery
Sam Jaffe

 

How to Rob a Bank in One Hard Lesson
Series 2 Episode 2
Heyes is forced to engineer a bank robbery to save Curry, who is being held captive by two women.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Jack Cassidy
Joanna Barnes
Karen Machon

 

Jailbreak at Junction City
Series 2 Episode 3
Heyes and Curry are deputised to bring in two hold-up men.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Jack Albertson
George Montgomery
James Wainright

 

Smiler with a Gun
Series 2 Episode 4
Heyes and Curry vow to get even with a swindler, but will they blow their cover in the process?

Cast (unconfirmed)
Pete Duel
Ben Murphy
Roger Davis
Will Geer

 

The Posse That Wouldn’t Quit
Series 2 Episode 5
Is it the end for Heyes and Curry when a posse tracking them refuses to give up?

Cast (unconfirmed)
Pete Duel
Ben Murphy

 

Something to Get Hung About
Series 2 Episode 6
Smith and Jones are hired by a rich rancher to bring back his runaway wife.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Monte Markham
Meredith MacRae
Paul Carr

 

Six Strangers at Apache Springs
Series 2 Episode 7
Smith and Jones are hired by the tough-talking widow of a prospector to go into the hills occupied by unfriendly Indians.

Cast (unconfirmed)
Pete Duel
Ben Murphy

 

Night of the Red Dog
Series 2 Episode 8
Cooped up in a cabin with the thief who took their stash, Heyes and Curry devise a cunning plan to claw back some of their stolen riches.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Jack Kelly
Rory Calhoun
Joe Flynn
Robert Pratt

 

The Reformation of Harry Briscoe
Series 2 Episode 9
When Smith and Jones help two nuns whose wagon has broken down, little do they realise that their simple act of kindness will lead to trouble.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Jane Wyatt
Jane Merrow
J.D. Cannon

 

Dreadful Sorry, Clementine (a.k.a. Dreadfully Sorry, Clementine)
Series 2 Episode 10
Smith and Jones are adamant they are not going to help their old friend Clementine steal $50,000 – until she shows them she has a photograph of Hannibal Heyes and Kid Curry.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Sally Field … Clementine Hale
Don Ameche
Rudy Vallee

 

Shootout at Diablo Station
Series 2 Episode 11
Smith and Jones are ambushed and learn that their old friend, Sheriff Lom Trevors, is to be murdered.

Heyes and Curry are on their way by stagecoach to see Sheriff Lom Trevors about the state of their amnesty. The stagecoach pulls into Diablo Station so that the horses can be rested, and the passengers can get cups of coffee.
Into this scenario walk four gunmen. They tie up all the passengers, including Heyes and Curry, and then settle down to wait. Curry asks them what they’re waiting for, and the leader of the outlaws tells them that they are waiting for Sheriff Lom Trevors to come. According to Chuck, the head of the outlaws, his brother had been killed by Trevors some time earlier, and he is burning with revenge and wishes to kill Trevors.
Heyes and Curry are extremely worried. Not only is Sheriff Lom Trevors a good friend, but the success of their amnesty depends on him. They must find a way of letting him know that he will be walking into an ambush. But how?
Then Heyes has one of his ideas.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Howard Duff
Anne Archer
Neville Brand
Pat O’Brien
Elizabeth Lane
Mike Road … Lom Trevors

 

The Bounty Hunter
Series 2 Episode 12
Smith and Jones are captured by a determined bounty hunter.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Louis Gossett Jr.
Robert Donner
R.G. Armstrong
Robert Middleton

 

Everything Else You Can Steal
Series 2 Episode 13
Heyes and Curry are falsely accused of bank robbery and, unless they can find the real culprit, their chances of an amnesty look bleak.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Ann Sothern
Patrick O’Neal
Jessica Walter
Kermit Murdock

 

Miracle at Santa Marta
Series 2 Episode 14
When his wealthy employer is murdered, the Kid finds that he is the number one suspect. Can Heyes come to his rescue?

Cast (unconfirmed)
Pete Duel
Ben Murphy
Craig Stevens … Rolf Handley
Nico Minardos … Alcalde
Joanna Barnes … Meg Parker
Ina Balin … Margaret Carruthers

 

21 Days to Tenstrike
Series 2 Episode 15
Heyes and Curry find themselves embroiled in murder when they join a cattle drive. Has their identity been blown at last?

Cast (unconfirmed)
Pete Duel
Ben Murphy
Dick Cavett
Walter Brennan
Steve Forrest
Pernell Roberts

 

The McCreedy Bust: Going, Going, Gone!
Series 2 Episode 16
Pat ‘Big Mac’ McCreedy wants to sell the bust of Caesar – but he must first recover it from wealthy Mexican rancher Senor Armendariz.

Cast (unconfirmed)
Pete Duel
Ben Murphy
Burl Ives
Cesar Romero

 

The Man Who Broke the Bank at Red Gap
Series 2 Episode 17
Saved from the grasp of a bounty hunter, Heyes and Curry find themselves in even deeper trouble. Could this be the beginning of the end for the outlaws?

Cast (unconfirmed)
Pete Duel
Ben Murphy
Broderick Crawford
Rudy Vallee
Dennis Fimple
Bill Toomey

 

The Men That Corrupted Hadleyburg
Series 2 Episode 18
Captured by a prospecting family, Heyes and Curry face prison – unless they can come up with a plan.

Cast (unconfirmed)
Pete Duel
Ben Murphy

 

The Biggest Game in the West
Series 2 Episode 19
Jim Backus guest stars in a story about counterfeit bills and high-stakes poker. Roger Davis takes over the role of Hannibal Heyes in this episode, alongside Ben Murphy. Sheriff: Rod Cameron. Bixby: Chill Wills. Halberstam: Donald Woods. Kyle: Dennis Fimple.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

Which Way to the OK Corral?
Series 2 Episode 20
Smith and Jones find that their latest assignment leads them back into the arms of Georgette Sinclair.

Cast (unconfirmed)
Ben Murphy
Roger Davis
Michele Lee
Cameron Mitchell

 

Don’t Get Mad, Get Even
Series 2 Episode 21
Heyes and Curry are cheated out of a small fortune at the poker table.

Cast (unconfirmed)
Ben Murphy
Roger Davis
Walter Brennan

 

What’s in It for Mia?
Series 2 Episode 22
Thrown out of King City after tangling with a crooked local saloon owner, Mia Bronson, Heyes and Curry end up bruised and penniless. Can Heyes come up with a cunning plan to retrieve their dignity and their money?

Cast (unconfirmed)
Roger Davis
Ben Murphy
Ida Lupino
Buddy Ebsen
Sallie Shockley
George Robotham

 

Bad Night in Big Butte
Series 2 Episode 23
Convinced that a bounty hunter is on their tail, Heyes and Curry accompany their old friend Georgette to Big Butte. But that is when their problems really begin.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

The Long Chase
Series 3 Episode 1
The duo cover a lot of ground while trying to escape from a relentless sheriff.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

High Lonesome Country
Series 3 Episode 2
An elderly couple send a bounty hunter after the duo.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

The McCreedy Feud
Series 3 Episode 3
Smith and Jones try to end the feud between Pat ‘Big Mac’ McCreedy and a Mexican land baron.

Cast (unconfirmed)
Ben Murphy
Burl Ives
Cesar Romero
Roger Davis

 

The Clementine Ingredient
Series 3 Episode 4
Smith and Jones’s plans to retire peacefully in Mexico are interrupted by Clementine, who blackmails them into helping her in one of her schemes.

Cast (unconfirmed)
Ben Murphy
Sally Field
Roger Davis

 

Bushwack!
Series 3 Episode 5
Heyes and Curry are set up as witnesses for a man who kills two bushwhackers.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

What Happened at the XST?
Series 3 Episode 6
The duo meet up with an old friend who asks for their help in digging up money from an old robbery.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

The Ten Days That Shook Kid Curry
Series 3 Episode 7
Kid Curry falls on hard times and has to be bailed out by a pretty schoolteacher, who has an ulterior motive for freeing him.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

The Day the Amnesty Came Through
Series 3 Episode 8
Smith and Jones try to rescue a woman from her outlaw lover.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

The Strange Fate of Conrad Meyer Zulick
Series 3 Episode 9
Smith and Jones risk their freedom in a foray into Mexico.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

McGuffin
Series 3 Episode 10
Smith and Jones help a man lying injured by the roadside, who then asks them to deliver a package containing counterfeit $20 plates for him.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

Witness to a Lynching
Series 3 Episode 11
Smith and Jones are persuaded to protect a key witness who is to give evidence against a threatening murderer.

Cast (unconfirmed)
Ben Murphy
Roger Davis

 

Only Three to a Bed
Series 3 Episode 12
Smith and Jones help to round up wild horses.

Cast (unconfirmed)
Ben Murphy
Roger Davis

Posted in Television

Science – Quantum Uncertainty

Quantum theory, at the level of explanation, suffers from being entirely devoid of real facts: it consists of a bunch of competing theories, the so-called interpretations.

Schroedinger developed a perfectly valid and hugely successful equation, which accurately handles all the practical aspects of quantum mechanics. Then a whole lot of other people tried to theorise about why the equation was so successful.

All the theories violently disagree with each other.

Einstein never agreed with any of these theories, and was particularly scathing about the so-called Copenhagen interpretation, which he viewed as a load of rubbish. And he was a lot smarter than everyone else working in this field – then and now.

So good luck with trying to second-guess Einstein.

Schroedinger realised that at the heart of quantum mechanics there is a random factor, which can’t be precisely quantified, but which must be handled statistically: that is, it can be assigned a probability. The implication of this is that what is being measured is not a single event, but many events: so many, that even given a certain amount of freedom (i.e. randomness) within the system being measured, when viewing a sufficiently large sample – presumably millions of events – it is possible to measure the average response of the system with an impressive degree of certainty.

At the heart of statistics lies a grain of truth: that what to us, here at the macroscopic level, appears to be a single event (we call it, out of ignorance, a particle), is really many events. Statistics give us a picture of a quark, or an electron, or a neutrino: we assume, on no evidence, that it is a single spacetime event; but Schroedinger assures us that it is not, and that what we are seeing is merely the tip of the iceberg: an iceberg built out of the statistics of thousands, perhaps millions, of underlying events.

Schroedinger’s work is the only solid piece in the quagmire termed quantum mechanics. What one ought to do in this field is pay more attention to him, because the rest is all theory, based largely on speculation.

If a particle is not a statistical illusion, why does its behaviour conform so closely with Schroedinger’s equation, an equation which requires one to accept – in its math – that the behaviour it is modelling is based on a series of statistical probabilities?

Certainly one can understand why a particle might not be capable of being assigned a precise spacetime location, if what one is “observing” is not a single spacetime event but is, rather, the statistical outcome of a million underlying events.

Even if (which seems unlikely) there are only a dozen underlying events, it is still a case of the “particle” having a “position” which is derived from averaging the positions of those 12 actual events. How much less precise does its position become if the “position” is averaged from the locations of a million actual events? Which of those million is its “real” location? Are they not all equally valid?

When we measure a property, we are measuring the average of a large number of events, not, as we have previously supposed, a single event. Classical physics believed that a particle is a single spacetime event, whereas quantum mechanics is trying to tell us that a particle is the average value of many separate events.
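
A toy illustration of that statistical point (my own sketch in Python, not Schroedinger’s formalism): if the recorded ‘position’ is really the average of many scattered underlying events, the recorded value becomes sharp and repeatable even though no single underlying event sits exactly at that value.

# Toy sketch: a "measured position" treated as the average of many underlying
# events. This is not Schroedinger's formalism, merely an illustration of how
# averaging a large number of scattered events yields a stable value.
import random

random.seed(1)

def measured_position(n_events, centre=0.0, spread=1.0):
    """Average the positions of n_events scattered randomly about a centre."""
    events = [random.gauss(centre, spread) for _ in range(n_events)]
    return sum(events) / len(events)

for n in (12, 1_000, 1_000_000):
    print(n, measured_position(n))
# The more underlying events, the more sharply the average settles on the
# centre, even though the individual events remain scattered.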

Quantum interpretations tell us nothing: we simply do not have the technology capable of magnifying events at the sub-atomic level to see what is really occurring there. But Schroedinger has already given us the clearest road-map: we must expect to see a large number of individual events, which are to some degree chaotic, but which are predictable when treated in groups, using statistics, and which when so treated will obey the probabilities he sets down.

His math gives the clearest possible explanation of what is occurring, and all the theorists do is ignore him. They persist in treating a particle as a single event, and thereby they mislead themselves into ignoring the statistical nature of Schroedinger’s work.

Accordingly, the answer is that none of the so-called interpretations are valid. A true understanding of quantum events must wait on the development of techniques for magnifying the quantum level, such that we can study what is actually occurring there (instead of theorising about what might be).

 

Posted in Science

Science – Wave Theory: The Inverse Square Law

The propagation of all forms of electromagnetic waves (including visible light) is governed by the inverse square law. As is the propagation of gravitation (including gravity waves).

To understand the universe, it’s necessary to understand how these fundamental forces propagate: as an expanding spherical shell, radiating outwards from a star or (in the case of gravity) any other massive body.

In relation to electromagnetic radiation, there are a limited number of governing principles:

1. The inverse square law only applies to point sources. For extended sources, it only applies at distances that are large compared to the diameter of the source: i.e. at distances from the source which are so great that the source looks like a point.

2. The inverse square law is only valid on scales where light can be modeled purely as a wave, i.e. macroscopic scales. At the microscopic scale, the assumptions break down; you instead have to think about the statistical expectation of photons, which follows a statistical analogue of the inverse square law. Even smaller, and you enter the world of quantum mechanics, where you have to account for the actual waveform of the object under study.

3. The energy radiates outward in a spherical pattern, as though the energy is being emitted as a series of shells or spheres (or at least some portion of a sphere, typically measured in steradians).

Assuming the source is constantly emitting the same amount of energy (a key assumption), the amount of energy at any given distance from the source – totalled across the entire surface area of a hypothetical sphere having that radius – is always the same (assuming – another key assumption – that no energy has been absorbed, i.e. that the energy has travelled through a perfect vacuum).

The inverse square law merely expresses a simple relationship, long known to mathematics: that the surface area of a sphere is proportional to the square of its radius. In accordance with that principle, the surface area of a sphere always quadruples if the sphere’s radius is doubled.

Thus, because the total energy at any distance from the source (a star) is constant, but the surface area of our hypothetical sphere varies in proportion to the square of the distance from the source, what we observe is a spreading out of the emitted energy over an ever-increasing surface area.

At distance 1, and at distance 2 (double the distance), the same amount of energy is present in our imaginary sphere; but at distance 2 it is spread over an area 4 times greater than at distance 1 (say, an area of 4 million square feet at 2, compared to only 1 million square feet at 1): hence, per square foot, the energy at 2 is only one-quarter of the value at 1.

Radio transmission

A radio transmission is an electromagnetic wave, so it obeys the inverse square law. The power (strength) of the signal falls to one-quarter when the distance from the source doubles.

Thus, if we arbitrarily assign the reception strength at a distance (from the source) of 1 mile as being 100%, its strength at a distance of 2 miles will be 25%.

At a distance of 4 miles the strength will be one-quarter of the strength at 2 miles:

0.25 x 25% = 6.25% (i.e. 6.25% of its strength at 1 mile)

In terms of light years, a radio frequency signal, or a light source, will have a strength of (say) 100% at a distance from the source of 1 light year, so will have a strength of 6.25% at 4 light years.

At 8 light years, the strength will be 6.25% x 0.25, namely 1.5625%.

At 8 light years, 1 over the square of the distance means 1 over 8 x 8, or 1 over 64. 1/64 = 0.015625 (or 1.5625%). Thus at 8 times the distance, the signal strength has fallen to only a fraction over 1%.
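
The whole calculation reduces to a few lines of Python (a minimal sketch, with the strength at the reference distance defined as 100%):

# Relative signal strength under the inverse square law, taking the strength
# at a chosen reference distance as 100%.

def relative_strength(distance, reference_distance=1.0):
    """Strength at `distance`, as a percentage of the strength at the reference."""
    return 100.0 * (reference_distance / distance) ** 2

for d in (1, 2, 4, 8):
    print(d, relative_strength(d))
# 1 -> 100%, 2 -> 25%, 4 -> 6.25%, 8 -> 1.5625% (a fraction over 1%)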

Classical Geometry

The inverse square law (which governs the propagation of light and other forms of electro-magnetic energy) is related to the principles of classical geometry.

The inverse square law holds that the energy received from a star falls to one-quarter if the distance from the star is doubled, in accordance with the formula:

One over the square of the distance

Thus, if the energy per square foot at a distance of 1 light hour from the star is measured as having a value of y, at a distance of 2 light hours the value will be y multiplied by one-quarter:

1 / 2²  =  1 / 4

a. A Circle

If we consider the classical geometry of a Circle, the ancient Greek mathematicians proved that the circumference of a circle is related to its radius, expressed in the formula:-

The circumference of a circle = 2 x pi x r

i.e. the circumference is equal to twice the radius, multiplied by a constant, Pi, which is approximately 3.142 (often approximated as 22 over 7).

In terms of electromagnetic radiation, the source star represents the centre of the circle, and the energy wave-front represents its circumference, as measured at two distances from the star, which represent two different radius lengths, one double the other.

The area of a circle is equal to pi r² (pi times r squared). Simple algebra demonstrates that where the radius is 1, this formula gives a result of 3.142 multiplied by 1; and when the radius doubles to 2, the formula yields 3.142 multiplied by 4 (i.e. by 2 squared).

Thus we see that the area increases by a factor of 4 when the radius (the distance from the centre of the circle to its circumference) is doubled. The fact that this exactly matches the inverse square law implies there is a real connection with the decrease in electromagnetic energy to one-quarter when the area is quadrupled.

There is a further implication, one which supports the notion that electromagnetic energy is a vibration or waveform: the reduction in the energy level is related to the area of the circle, rather than to its circumference. This implies that the vibration is being absorbed (“damped down”) by the spacetime field, as it propagates through it, rather than simply thinning-out in proportion to the expanding circumference.

b. A Sphere

If we take our circle (a 2-dimensional figure) and expand it into a sphere (a 3-dimensional figure), classical geometry shows the surface area of the sphere is:

4pi r² (4 multiplied by Pi, multiplied by the radius squared)

The volume of the sphere is:

4/3 x pi x r³

With a 2-dimensional circle, the area is determined by the square of the radius (a circle having only 2 dimensions, height and width). With a 3-dimensional sphere (which adds depth), the surface area is likewise determined by the square of the radius, while it is the volume that is determined by the cube of the radius.
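
A quick numerical check of the geometry just described, using the standard formulas (this adds nothing new to the argument; it simply confirms that doubling the radius quadruples the surface area, while the volume increases eightfold):

# Surface area and volume of a sphere, checked for a doubling of the radius.
import math

def surface_area(r):
    return 4.0 * math.pi * r ** 2

def volume(r):
    return (4.0 / 3.0) * math.pi * r ** 3

r = 1.0
print(surface_area(2 * r) / surface_area(r))  # 4.0 : the area quadruples
print(volume(2 * r) / volume(r))              # 8.0 : the volume increases eightfold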

Observational Evidence and Gravity

The inverse-square law is simply a statement of an observed fact: the strength of a gravity field falls with distance. Having made a series of detailed measurements, Newton realised that if you take any mass, measure its gravitational field strength at a distance x, then double the distance and measure again at the new distance, 2x, the field strength is only one-quarter of its strength at distance x.

Newton then formulated his theory: that the field strength is inversely proportional to the square of the distance, i.e. that it is always one over the square of the distance. Thus at a distance of 1 million miles it is 1/1² (i.e. 1), and at 2 million miles it is 1/2² (i.e. 1/4). Thus, by doubling the distance the field’s strength falls to a quarter of its former value.
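
A minimal sketch of the same law in Newton’s form, field strength = GM/r² (G and the mass of the Sun are standard values; the two distances merely echo the 1-million-mile example above):

# Newton's inverse square law of gravitation: field strength g = G*M / r^2.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg
MILE = 1_609.344     # metres in a mile

def field_strength(mass_kg, distance_m):
    """Gravitational field strength (acceleration) at the given distance."""
    return G * mass_kg / distance_m ** 2

g1 = field_strength(M_SUN, 1_000_000 * MILE)
g2 = field_strength(M_SUN, 2_000_000 * MILE)
print(g1, g2, g2 / g1)  # doubling the distance gives exactly one-quarter the strength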

Einstein’s math makes clear that he understood the relationship: it is the relationship which arises from the circumference of a circle, when projected into 3-dimensions, as a sphere: in other words, what Newton was measuring, Einstein says, is a relationship based upon the surface area of a sphere.

With a sphere, if you take a central point, such as a planet or a star, and measure its gravitational strength at a distance of 1 million miles, then double the distance and measure again, the strength falls to a quarter. But the field strength is not the only quantity changing by a factor of four: if you measure the surface area of a sphere of radius 1 million miles (in other words, the surface area of a sphere of which the planet or star is the centre), you obtain a value of x for that surface area (in, say, square miles); when you repeat the measurement at double the distance, the surface area is exactly 4 times greater.

What Einstein had realised (perhaps even Newton, too) is that a beam of sunlight emitted by the star, which illuminates an area of 1 square mile on the surface of our hypothetical sphere of radius 1 million miles, will, when it has travelled double the distance, be illuminating an area of 4 square miles.

This simple geometric fact tells us quite a lot: the strength of gravity falls to 1/4 if the distance is doubled, but so does the strength of sunlight. The same amount of light now illuminates 4 times the area, so each square mile is receiving only a quarter of the total. This rather striking coincidence is not really a coincidence at all. If we think of gravity as a wave, and light as an electro-magnetic wave, we can begin to see that they might have some common properties.

We can see, for one thing, that they are both obeying the inverse-square law. What does this imply? Well, one implication is that both gravity and electromagnetic radiation are propagating in a spherical pattern, radiating out 3-dimensionally from a central point, as a sphere (or shell) of energy.

This mechanism causes the inverse-square effect, since a given quantity of energy, q, emitted from a central point, a star, will diminish in strength with distance as predicted by Newton: if we project a set of imaginary spheres around the star, set at intervals of 1 million miles, the strength of the gravitation and of the emitted light (measured at any two of our imaginary spheres) falls in proportion to how much the surface area of the sphere has increased. As the distance from the star (the radius) exactly doubles, the surface area of the imaginary sphere exactly quadruples, and the strength of the gravity wave and of the electromagnetic wave fall to exactly a quarter.

Newton’s math thus gives us the striking fact that, for both types of wave, doubling the radius of the sphere, thus quadrupling the surface area with which the wave must interact, causes the measured strength of the wave to fall to a quarter. The logic of the math is that the spacetime medium which is transmitting the wave is spreading it out over four times the surface area, and it is thereby having only one-fourth of the effect per unit of area.

The same mechanism which allows spacetime to transmit energy, as a wave, also causes spacetime to curve. Logic demands that spacetime must be flexible: it cannot vibrate if it is not, and this vibration is what is permitting the energy transmission.

Newton tells us that action and reaction are equal and opposite. What this implies is that which we would logically expect: as in an ocean, where the water molecules bump one against the next to pass on the motion which we perceive as a wave, the granules of spacetime are in collision with one another, but they do not go anywhere: they pass on the motion, but then return to their starting point. The re-action (to the action of passing on the motion) is equal, and is opposite, putting them back where they started.

Elastic deformation is a result of the tensor force, which separates the individual granules of spacetime, being compressed: as one granule is impacted from the direction of the centre of the sphere, i.e. the star, the tensor pushes against the next adjacent granule, forcing it in the opposite direction; but, like a tiny spring, it also recoils after doing so, returning to a stationary state. Hence the deformation is temporary, i.e. elastic (rather than plastic, i.e. permanent).

Note: Theory implies that the tensor force might be the so-called gluon “flux tube”, a string-like object created from a quark’s colour field. It is this string’s tension which is of significance.

This flexibility allows a gravity wave to be passed on, or an electromagnetic wave. The strength of the wave is one-quarter at distance 2x, compared to distance x, simply because at distance 2x the expanding nature of the wave (i.e. its spherical expansion pattern) means that each unit of energy must displace four times as many of the granular units of spacetime (put another way, there are only one-fourth the number of energy units/quanta arriving per square mile of surface area).

It suggests that ordinary gravity is imposed by a more permanent deformation of the tensor. Logic suggests that gravity is most likely simply a reduction in inertia (the resistance holding a particle in one place) offered by the granular structure of spacetime. If the granules offer less resistance in one direction, a particle in motion, which follows the path of least resistance, will inevitably tend to move in that direction.

If the tensor (by reason of the presence of a central mass) is shorter in the direction toward the star or planet, and so the distance between the granules is less in that direction, this offers a logical basis (a mathematical reason) for a particle in motion to move in that direction: where the energy requirements are lower for moving toward the central mass, as contrasted with every other direction, the particle will tend to move toward that mass.

Note: The tensor might be shorter because of an exchange of messenger particles with the adjoining granule. If so, it might be shorter because there are *more* messenger particles arriving, being closer to the mass: an inevitable consequence of the inverse square law (the field strength falls with increasing distance, because the arriving energy has to affect a greater surface area).

What I’m tentatively suggesting Einstein means is that where the distance is less, the energy required to cross that distance is also less, on the basis that the quantum tunneling effect thereby requires less energy.

Perhaps the granules represent the stepping stones in a swamp, and one must leap from stone to stone in order to move, and a leap across a greater distance requires greater energy.

However, a logical case could be made out for arguing that the tensor may be elongated (rather than shortened) in the direction of the central mass. Inertia must be lessening in that direction, but the mechanism is uncertain.

I tend toward supporting compression, i.e. shortening, as otherwise the tensor has to become infinitely long at the event horizon.

If we think of a particle in terms of quantum mechanics, if the granules of spacetime are closer together in one direction the particle necessarily requires less energy to “tunnel” in that direction. Again, the math implies this effect.

Note: An analogy can only take you so far. If the energy required to move toward the mass obeys the laws Einstein predicts, the logic of his math says the energy requirement must eventually fall so much that it becomes *negative*: that is to say, it no longer requires energy to move in that direction; rather, the object *gains* energy with each further step (observed to be an increase in velocity and momentum).

At a threshold distance, the inertia holding it in place will exactly match the gravitational attraction; at closer distances the inertia is insufficient, and the “energy cost” has become negative.

Ordinarily, for a particle to move requires energy, e.g. momentum. The particle must, in order to move, overcome the inertia which binds it to its current location in spacetime. Where there is a direction (i.e. toward the mass) in which inertia is less, the particle will, firstly, tend to move in that direction; but will, secondly, tend to accelerate, since the restraining force acting on its (unchanged) momentum is reducing.

Note: This implies that the particle must do more jumps to cover the same distance, as the jumps are shorter. That would seem to lead to the energy cost for moving any given distance being unchanged overall. But this is not so if the field strength is not constant.

It appears as if doing more jumps to cover a given distance, since the jumps are shorter, would have no overall benefit. However, this is only so if the field strength is constant at all points. Newton’s inverse square law tells us that in fact the gravitational field strength is varying continuously.

Note: Gravity is a consequence of a semi-permanent deformation of the tensor, persisting so long as the mass is present. Hence gravity is a genuinely structural effect, caused by the shortening of the tensor in the direction of the mass.

One implication is that gravity is caused by one granule on its immediate neighbour (by the alteration in the length of the tensor), not by a long-range force.

Note: Quantum tunneling – As the universe is composed of only 5 percent ordinary matter, but contains 5 times as much “dark” matter (matter which does not react to electromagnetism), it is feasible to take the view that the distance between granules in spacetime might be variable, since the normal granules may be separated by the “dark” ones: a normal granule might have no “dark” ones between it and the next normal one; or it might have 1,2,3,4 or 5 “dark” ones to “tunnel” through, before it reaches a normal one.

Where the space surrounding the star (or planet) is composed of a multitude of such granules, each having its tiny tensor(s) compressed more greatly in the direction of that mass, then to our perception, at the macro-level, it might appear as though (i.e. there could be an illusion that) space is curving, since an object of low mass injected into such a system (but with some motion/momentum of its own, and given a suitable angular momentum) might behave as though it were being exposed to a curved surface. The math seems to be similar.

Note: The “illusion of curvature” – Logic implies that if the wave is expanding in a spherical pattern, then the wave-front must inevitably take a curved form. If we equate each point on the surface of our imaginary sphere, of radius 1 million miles, with a particular field-strength (an equal value, for inertia or perhaps for resistance), and view the field from that perspective, the pattern of the field strength must inevitably appear curved, as it must be uniform in strength at every point that is equidistant from the star.

The fact that Einstein says that space must curve in the presence of mass is completely understandable if gravity propagates in a spherical pattern, just like an electromagnetic wave, since a wave-front (which represents points of equal field strength, equal because all points on a spherical surface are equidistant from the star) is a *curved* surface.

The overall implication of the math, both Newtonian and Einsteinian, is that waves of gravitation and electromagnetism obey the same physical laws, and for the same reasons: that both are a wave motion in a granular medium; a medium which responds to the ordinary, well-understood geometric principles associated with a spherical type of wave propagation based on vibration; that the tensors which bind spacetime into a cohesive whole allow the transmission of this type of energy; and that ordinary principles of motion and inertia, and of quantum tunneling, explain gravity.

There are many implications in the foregoing for the likely composition of the granular structure of spacetime, but as they don’t follow from the actual math involved in the foregoing I won’t complicate this discussion any further here.

To clarify one point, this is a scenario in which electromagnetic radiation (including light) is behaving in the same manner as gravity: both effects are propagating as a spherical field, reducing in strength in proportion to the increase in the surface area of the sphere as distance from the source increases. Both effects are 3-dimensional, being spherical; but they are also 4-dimensional, since the sphere (the wave-front) expands from moment to moment at the speed of light. Ordinary gravity behaves in this way; my remarks are not directed solely at so-called “gravity waves”.

Note: It is possible that although normal gravity apparently propagates in the manner of a wave, what might actually be occurring is merely an effect that mimics a wave. If the tensors are contracting in the direction of the centre of mass, and the effect is (as would be expected) propagating as a sphere, i.e. in 3 dimensions, and any changes propagate at the speed of light, then the effect will behave in exactly the same way as the propagation of an electromagnetic wave, while being a structural effect, not a genuine wave.

It is possible, therefore, that the only true wave-effect which gravity displays is the phenomenon termed “gravity waves”, predicted by Einstein, which were first detected by LIGO in September 2015 (the detection being announced in 2016).

Posted in Science

Humour – An Analysis

Humour: some reflections on the technique of role-reversal in English situation comedy, on television and radio.

 

Radio : Advantages over TV & Cinema

Radio comedy – and radio drama – involves a degree of participation by the audience. The events which are presented occur partly in the mind of the listener, who, because he can’t see what is happening, has to imagine it.

Television does not do this. Because it provides both sound and pictures, it leaves nothing to the imagination.

Listening to the radio is more akin to reading a book than to watching television; because, as with a book, the audience is asked to imagine the scenery and the settings, and even the appearance of the characters.

For this reason, audiobooks (the reading of a book onto tape) come across well on radio.

The medium involves the audience in the creative process, in a way that television does not.

 

Comedy Characters

Ineffectual:

Perhaps the best description of a certain type of comedy character is “ineffectual”. This describes characters as diverse as Captain Mainwaring, Basil Fawlty, Manuel, Frank Spencer, Gordon Brittas, and Arthur Dent.

Not all comedy characters are ineffectual. Some are overbearing instead, e.g. Margo Leadbetter in The Good Life, Mrs Bucket in Keeping Up Appearances, and Sybil Fawlty in Fawlty Towers. These can also be termed ‘battleaxe’ roles.

Notice that it is only male characters who are ineffectual, and only female ones who are domineering: this is comedic role-reversal (since all comedy is reversal), because in reality it is typically men who are dominant and women who are ineffectual.

 

Naive:

Some comedy characters (traditionally the principal male character in a series) can best be described as ‘naive’. An alternate description is “an innocent abroad”.

This includes Frank Spencer, in Some Mothers Do ‘Ave ‘Em, who is the most obvious example of the type; but also Manuel in Fawlty Towers; possibly even Basil Fawlty himself; and Bertie Wooster.

It also describes most characters played by Richard Briers, such as Roger Thursby (the trainee barrister on radio in Brothers in Law); Roger Sparrow (the trainee doctor on radio in Doctor in the House); the newly married young husband (in Marriage Lines on tv, with Prunella Scales); the character he played on tv in Ever Decreasing Circles; Tom Good (in The Good Life on tv); and (on the radio) Bertie Wooster in Jeeves.

 

Amateur versus Professional

The basis of humour is often the amateur versus the professional: for instance, in the political comedy Yes Minister. The amateur, in this case the Minister, is inevitably at the mercy of the professional, embodied in the shape of the Permanent Secretary.

In the field of government the professionals are the Civil Servants, who make a career out of it, and who have all the benefits – i.e. experience – which flow from being employed full-time in the business of government.

Politicians, in contrast, are the amateurs, for they are only involved in government part-time: when they lose their parliamentary majority, and are forced into Opposition, they can spend many years gaining no experience of governing; and individual MPs can lose their Seats and thus be forced out of the field altogether.

The amateur versus the professional is also at the heart of the wartime comedy Dad’s Army, where Captain Mainwaring’s Home Guard platoon are amateur soldiers, pitted against the professional soldiers of the German Wehrmacht and Luftwaffe.

This is another way of looking at that form of comedy which the BBC categorises as ‘innocence versus experience’.

The concept of humour arising from the incompetence of an amateur goes back all the way to the films of Will Hay in the 1930s. However, Yes Minister and Dad’s Army employ the idea in a more sophisticated manner, in that the amateur is up against a professional opponent.

The humour might arise out of the amateur’s mistake being revealed by the professional. Or the professional might trip the amateur up.

In the case of Will Hay, the character he portrays in his films is rarely up against a more competent opponent. He stumbles frequently, but normally this is his own doing (simple incompetence). Occasionally he falls foul of the malice of Charles Hawtrey, or of his two stooges: the fat boy (played by Graham Moffatt), and the old man (played by Moore Marriott); but he has no equivalent of Sir Humphrey – there is no Moriarty to his Sherlock Holmes.

In Yes Minister this type of humour is based upon reversal, in that the Minister is supposedly in charge of his Department; but his subordinate, Sir Humphrey, the senior Civil Servant (a man whose very job description explicitly embodies the notion that he’s the Minister’s servant), is in reality the one in charge.

This is the inevitable consequence of Humphrey being the professional and Jim Hacker being the amateur, and hence at Humphrey’s mercy.

Role reversal humour is also present in Dad’s Army, where the roles of Officer and Sergeant are switched about-face. The upper class Wilson, a public school man, is the Sergeant; while the firmly middle class Mainwaring is made the Officer. Thus their social standing is reversed in the Platoon.

Their social standing is also reversed in private life (i.e. in their business relationship), where Mainwaring is Manager of the Bank in which Wilson is merely the Chief Clerk.

This reversal is particularly to the fore in the episode The Honourable Man, in which a death in the family results in Wilson’s side moving up one rung in the pecking order of the aristocracy, so that he acquires a title, becoming ‘The Honourable Arthur Wilson’.

Hence, despite being Wilson’s social inferior, Mainwaring is in authority over him both in the Home Guard and in civilian life. Mainwaring’s obvious discomfiture at this situation gives rise to a lot of humour, as he frequently bristles at supposed slights cast on his authority by Wilson. The audience can see that Wilson is not at fault; Mainwaring is overly sensitive, due to his inferiority complex: he realises he is socially inferior to Wilson, but can never bring himself to admit it.

It is the comedy of frustration: Mainwaring’s frustration and discomfiture, beautifully acted by Arthur Lowe, is what gives rise to the laughter. Usually, it is expressed through Mainwaring’s pomposity, which Wilson gently pricks, thereby bursting Mainwaring’s “bubble” (the illusion – also his self delusion? – that Mainwaring is in charge). The laughter comes from the bursting of the bubble: the shattering of the illusion.

Both Sir Humphrey and Sgt Wilson generate humour by subtly undermining the authority of their notional superiors, i.e. Jim Hacker and Captain Mainwaring. In Wilson’s case this is often unintentional: frequently, it’s merely the result of his making a sensible suggestion to balance the lunatic schemes of Corporal Jones. Thereupon Mainwaring gets a laugh, by immediately claiming he was just about to make the very same suggestion, even though it’s evident to the audience that he was really all-at-sea.

The reality is that it’s Humphrey and Wilson who exercise the real authority. By contrast, Hacker and Mainwaring are well intentioned, but muddle-headed; and if they win out it is only by muddling through.

In essence, Yes Minister is about innocence versus experience. Jim Hacker is an innocent abroad, and Sir Humphrey is like a hungry piranha, waiting to gobble him up; and this is a direct consequence of one being an amateur and the other a professional.

Most comedies can be viewed on this basis. In Jeeves and Wooster, for example, Jeeves is the voice of experience and Bertie is the innocent at large.

In Dad’s Army, Mainwaring is an innocent, by reason of being an amateur soldier. The experienced soldier is, depending upon the needs of each episode, either Hitler (memorably represented on one occasion by Philip Madoc as a German submarine captain), or Captain Square of the Eastgate Platoon, who was formerly a regular soldier. On other occasions, regular soldiers have guest roles, in order to contrast their level-headed efficiency with Mainwaring’s bungling; for example, Fulton MacKay in the episode We Know Our Onions.

An interesting illustration of how to ring-the-changes on this theme is Corporal Jones, who, like Mainwaring, is a muddler. But Jones is not an innocent abroad; the innocent is young Private Pike. Neither is Jones an amateur: of all the platoon, Jones has the greatest military experience.

However, Jones is demonstrably muddled by old age, as in his tendency to panic whenever a cool head is needed (i.e. he is muddle-headed due to age, rather than due to being an amateur). Jones approximates the woolly thinking of Will Hay, and of Captain Mainwaring, without having their excuse of a lack of professional training. But Jones’s excuse is his age.

Comedy arising out of the contrast between innocence and experience is also at the root of Tristan’s character in All Creatures Great and Small, where, as a trainee Vet, he’s allowed to make mistakes that would be unacceptable if coming from the experienced, fully trained, James Herriot or Siegfried Farnon.

So a character needs a legitimate reason for being muddle headed: the inexperience of an amateur; the inexperience of a trainee; or the effects of old age. This gives the necessary element of reality; there is nothing funny about a professional making mistakes, because that is just not credible. In situation comedy, that need for credibility is usually expressed as being a need for reality in the situation.

So Dad’s Army would not have been funny if Mainwaring had been a professional soldier.

On the other hand, Captain Square of the Eastgate platoon had once been a professional soldier, but in his case he gets away with being muddle-headed by reason of being cast in the role of Colonel Blimp, with all the old-fashioned attitudes which that (i.e. being out-of-date) implies. Blimp is invariably behind-the-times: in fact, he’s in effect turned back into an amateur, because his knowledge is so badly out-of-date.

Similar pretexts which have been used successfully in tv comedies include psychiatric derangement, in the case of Frank Spencer, in Some Mothers Do ‘Ave ‘Em. In One Foot in the Grave, it is again old age that afflicts Victor Meldrew, as with Corporal Jones.

Thus a “comedy of errors” needs a basis in reality: a credible reason for the character to get it wrong.

 

In The Good Life, where Tom and Barbara drop-out of ordinary life to become self-sufficient, the Goods are amateurs at farming, trying to overcome their amateurism. They are as amateur in their chosen roles as Will Hay is in his role as a Schoolmaster.

In Porridge, a comedy set in a prison, the Prisoners are nominally the amateurs, and the Warders the professionals (since, also, amongst criminals the true professionals are the ones who don’t get caught!)

But Porridge is more subtle than it at first appears. It subtly makes the point that, within the prison, the prisoners are the true professionals, and thus are the ones really in charge, while the warders, in a classic case of reversal, are (like Jim Hacker) notionally in charge, but really at the mercy of the Cons. In an early episode, Fletcher remarks that “this resort is notionally run by a Governor, Mr Venables, who is appointed by the Home Office. But we know that, really, ‘Genial’ Harry Grout could bring this place to a standstill if he so wished.”

This point is subtly reinforced frequently, such as in the Christmas episode: when Harry Grout drops in on Fletcher and Godber unexpectedly, he explains that he had to get out of his cell for a few minutes to allow a couple of warders to put up his Christmas decorations!

 

 


 

Posted in Comedy

The Noble Years : Radio Documentary

The Noble Years is a radio documentary:
http://www.bbc.co.uk/programmes/b08y6wtz

“You can’t have suspense without information” – Alfred Hitchcock on making films.

“At first I thought they’re going to need subtitles in the picture” – Shelley Winters on Michael Caine’s cockney accent in ‘Alfie’.

“Milligan and I are both manic depressives” – Peter Sellers.

Alfred Hitchcock, Shelley Winters, Peter Sellers, Sammy Davis Jr, Richard Burton, David Niven, Vincent Price, Sean Connery, Shirley MacLaine, Joan Greenwood, Paul McCartney and John Lennon. These are just some of the big-name interviewees featuring in this hugely entertaining review of the work of interviewer Peter Noble.

Long before BBC 1’s Film review show, film fans’ main port of call was Movie-Go-Round which ran on the BBC Light Programme / Radio 2 on Sunday afternoons from 1956 to 1969.

Travelling around the globe, the programme’s film location reporter Peter Noble chatted to the superstars and directors of the day. Tragically none of the original programmes were saved in the BBC archive, but luckily Peter held onto all his irreplaceable taped interviews.

Not heard since 1995, this look back with Movie-Go-Round‘s original host Peter Haigh showcases film-fan Peter Noble’s love of cinema with the best of his vast personal collection of tapes.

Producer: Barry Littlechild

First broadcast on BBC Radio 2 in 1995, to celebrate the 50th anniversary of the Light Programme.

[http://www.bbc.co.uk/programmes/b08y6wtz]

 
Barry Norman does not appear.

This programme still exists in the BBC archives (repeated 15th July ’17, on Radio 4 Extra).

“Movie-Go-Round” is the series which was frequently spoofed by Kenneth Horne on the 1960s BBC radio comedy “Round the Horne“, in a regular feature entitled “Movie-Go-Wrong“. (“Round the Horne” was usually broadcast on Sunday lunchtime, on the Light Programme, just prior to “Movie-Go-Round“‘s Sunday teatime slot on the same station.)

Unlike the impression given in ‘The Noble Years‘, Peter Noble was not the sole reporter on “Movie-Go-Round” (Donovan Pedelty and Bernard Mayes, for example, reported from Hollywood), nor was Peter ever-present for the 13 years that the series aired.

 

I should like to hear in full some of the interviews from which clips were used in ‘The Noble Years’: perhaps these could be broadcast on Radio 4 Extra? There could be a lot of mileage in using other surviving interviews (not included in the 1995 programme), either as fillers or as a complete series, on 4 Extra.

That station regularly airs short, 15-minute features: the large collection of interviews in Peter’s surviving recordings could be broadcast in 15-minute segments across a series of 6 or so programmes.

Posted in Films

Science – Towards a Theory of Gravity

A Default State

Gravity is a default state: mass causes gravity, so in the presence of mass gravity is the default state. Where mass is entirely absent, the absence of gravity is the default state.

Gravity, on the face of it, appears to be an attraction, because it causes two particles to move towards each other. Probably, however, it is not a force generated by one particle which pulls on the other: more reasonably, logic suggests that the particle is having some effect on the structure of spacetime, and that it is this which is, in turn, having an effect on the other particle.

 

At what point is the value of inertia zero?

Einstein postulates that gravity is a structural effect: a consequence of a reduction in inertia (i.e. a reduction in the resistance of the spacetime field to particle movement), in the presence of mass.

That is not how Einstein expressed it, but it is a logical consequence of his theory (i.e. the theory that the cause of gravity is structural), since that implies a gradual reduction in resistance to motion (which is only a way of describing inertia).

If the value of inertia decreases because of the presence of mass, reducing in the direction of that mass, then there must logically come a point at which the value of inertia declines to zero.

Logic implies that this must occur only at (or within) the event horizon (of a black hole), as that is the point where we observe the attraction terminating. If attraction terminated prior to that, the mass would not fall onto the event horizon.

If inertia falls to zero, momentum might cause the object to continue moving in the same direction. This point might occur at the event horizon, or above it; but logic implies that the event horizon itself is the most likely point (because we must bear in mind that inertia begins, not ends, at this point: we are looking for a point at which resistance to motion starts).

The more difficult question is: why does this resistance occur?

Gravity binds particles together, but not in those extremely strong bonds formed by electromagnetic attraction or by the strong nuclear force: it is weaker by many orders of magnitude. This weakness may arise from the supposed “attraction” of gravity not being an attraction at all: perhaps it is simply an absence, i.e. the lack of that resistance which ordinarily prevents particles moving freely.

The quantum field (i.e. “spacetime”), in a low-mass hence low-gravity environment, has a resistance to motion which we term ‘inertia’. This inertia, or rather the lack of it (in a high gravity environment), causes gravity by allowing a particle in motion to move toward the local centre of mass. Inertia, or resistance to motion, varies according to how much mass is present; and a decline in resistance to movement gives an illusion of that mass “attracting” the particle towards it, thereby causing what we traditionally think of as the gravity field strengthening.

This illusion of a pull-force tends to blind us to the simpler truth: that an object in motion will tend to follow the path of least resistance, and that gravity is therefore no more than a structural effect, by which resistance to motion is reduced in a specific direction, thereby giving rise to a path of lesser resistance.
 

 
What is a gravity field?

As the amount of mass increases, the resistance of quantum spacetime to the movement of the individual particles in that mass declines.

This proportionate decline suggests that what we term “gravity” is actually merely a measurement of the field’s resistance to particle motion (a measurement of the change in that resistance).

The notion of a “gravity field” may be an illusion: gravity may be just one property of the quantum field, i.e. of the spacetime field.

As the field’s resistance declines, particles move in that direction (i.e. the direction in which it declines); but they are not really being attracted to one another, nor even attracted to the local mass causing the effect. Their (inherent) energy is unmodified; but they are acquiring momentum, gained from an increase in their velocity caused by the declining resistance of the field. They are merely “clumping together”, a very loose form of association, due to the absence of that resistance (or, initially, due to the presence of a resistance gradient), not because of the presence or formation of a bond between the particles.
 

 
The illusion that gravity is a constant

We live our lives under the unrealistic assumption that gravity is a constant, because the Earth is spherical. That is to say, we exist exclusively and perpetually in a state of constant gravity, because we spend our entire existence on a spherical surface, hence at a constant distance from the planetary centre of mass.

This experience colours (and prejudices) our entire outlook toward gravity. We instinctively treat gravity as a constant, because our environment conditions us to expect that its strength never changes – something which is, in reality, an illusion.

In fact, if we thought about it, we would realise that it’s an invalid assumption, hence only an illusion: if we observe someone falling off a cliff, what we are actually observing is that gravity has a different value at the top of the cliff from its value at the foot of the cliff.

The fall demonstrates the existence of a gravitational gradient: of a difference in the value of something (some physical state, which we term ‘gravity’) between the top and the base of the cliff.
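NB: As a rough numerical illustration of that gradient, here is a short Python sketch using the conventional inverse-square formula; the figures for the Earth’s mass and radius, and the 100 m cliff, are my own choices, not part of the original argument.

    # Sketch: how much weaker gravity is 100 m up a cliff than at its base,
    # using the standard Newtonian approximation g = G*M / r^2.
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24     # mass of the Earth, kg
    R_EARTH = 6.371e6      # mean radius of the Earth, m

    def g_at_height(h_metres):
        """Acceleration due to gravity at h metres above the surface."""
        r = R_EARTH + h_metres
        return G * M_EARTH / (r * r)

    g_base = g_at_height(0.0)      # roughly 9.82 m/s^2
    g_top = g_at_height(100.0)     # very slightly less
    print(g_base, g_top, g_base - g_top)   # difference is about 3e-4 m/s^2

The difference is tiny, but it is a difference: the value at the top of the cliff is not the value at its base.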

The resistance to movement lessens in the direction of the centre of mass, hence it increases in all other directions. Without an injection of additional energy, a particle or other object can only move in a single direction: the direction in which resistance to movement is least, i.e. directly toward the centre of mass.

But, in practice, we are standing on the surface of a planetary mass, hence that direction is straight down. Accordingly, we are impeded from moving in that direction by the surface upon which we are standing. Thus, to us, our experience is that gravity is always constant, because we can never approach its source more closely.
 

 
Gravity as a Mutual Force

Much erroneous thinking may result, if one begins from an incorrect assumption which implies that gravity is a one-way process.

Objects within the gravitational field of the Earth, for example, do NOT fall toward the Earth: in principle, the object and the Earth fall toward each other.

Unless the object has a substantial fraction of the Earth’s mass, the movement of the Earth will in practice be non-existent or too small to be measurable.

Nevertheless, it would be wrong in principle to treat gravitational attraction between two bodies as acting in a single direction only: it is (in principle) a mutual attraction.
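NB: A minimal sketch, in conventional Newtonian terms, of why the Earth’s own movement is far too small to measure in everyday cases; the 1 kg example object is my own illustration, not part of the original text.

    # Sketch: mutual attraction in Newtonian terms. The same force acts on
    # both bodies, so their accelerations are in inverse proportion to their
    # masses, and the Earth's share is negligible.
    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_EARTH = 5.972e24     # kg
    R_EARTH = 6.371e6      # m

    m_object = 1.0                                   # a 1 kg object at the surface
    force = G * M_EARTH * m_object / R_EARTH**2      # about 9.8 N, acting on both bodies
    a_object = force / m_object                      # about 9.8 m/s^2
    a_earth = force / M_EARTH                        # about 1.6e-24 m/s^2 - unmeasurable
    print(a_object, a_earth)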

Posted in Science

Comic Art – Harry Mendryk

Harry Mendryk was moderator of the Yahoo Group ‘Digitizing Comics‘. The Group no longer exists, as Yahoo closed all their Groups in 2019.

Group Description

This was a group for discussions about using scanners and computers to save and restore comic book art. With the continuing deterioration of Golden and Silver Age comics, and the high cost of those comics, some are turning them into digital format. This list allowed amateurs to discuss what they were doing, exchange scanning and restoration techniques, request and receive advice, and develop a community of like-minded individuals.

 

This deals with the following scanning topics:

  • Golden Age printing methods
  • Scanning Resolution (300 dpi vs 600 dpi)
  • Digital Bleaching to extract line art
    (a) Harry Mendryk’s method
    (b) Kris Brownlow’s method
    (c) David’s method
  • Digital Colour Correction
    (a) Harry Mendryk’s method
    (b) Rand’s method
  • Colour Correction : Conversion to CMYK alters colour
  • Colour Correction : Yellow & Magenta – Edit as CMYK
  • Colour Correction : Avoid the Red halo
  • Colour Correction : Greys
  • Colour Correction : Colour Noise
  • Colour Correction : Limit Colour to 8 bit
  • Resizing : Moire Patterns
  • High Resolution scanning : Advantages
  • LAB Color Mode
  • Modern Reprints : Colour Techniques
  • Note on other Methods : Destructive & Non-destructive

 

Background : Golden Age Printing
Darci (2007/09/04) [#147]:

Bob Rozakis said (reported from a magazine interview) (discussing professional comics printing at DC Comics) –

Once all the art and colouring was done, the pages were sent to Chemical Color Plate in Bridgeport, CT, where the colour separations were done by painting acetates for each of the 25%, 50% and 100% screens of red, yellow, and blue. This changed with the advent of computerised colouring and separations.

 

Q: You’ve doubtless seen the piecemeal auctioning of the fabled “Jack Adler Collection”. I have an approval cover (“Adventure Comics” #374) I received as a gift. How did he get hold of those?

A: From what I know, Jack Adler took the proofs home with his original colour guides, and now they’re being sold off. The proof was created at Chemical, using the separations they’d generated. If it was okayed, the film negatives were shipped out to Spartan Printing in Sparta, Illinois, for printing.

Background : Chemical Bleaching

Harry Mendryk (2006/01/18) [#2]:

When I decided to try restoring line art, I was already comfortable working in Photoshop. I bought an HP scanner, and came up with a technique to “digitally bleach” colour scans.

But there are problems, due to the poor Golden Age printing techniques, yellowing of the comic paper with age, the limitations of the scanner, and the limitations of the technique itself.

Over the years I have become adept at squeezing the most out of this method. But there will always be problems, due to things like the inability to distinguish a black pixel made from the overlapping of CMY inks from a black pixel on the K plate.

Or the use of C ink under black in the old printing to improve its look. When you remove the C, the black channel suddenly has lots of little holes in it.

After many years of experimentation, research and thought, I have come to believe that it simply is not possible to digitally bleach a comic as well as can be done chemically.

Although digitally bleaching saves a lot of time compared to using Photoshop tools on an unbleached scan, it still requires a lot of effort to make a really nice line art restoration.

I spent years digitally bleaching the line art for all the Simon and Kirby covers (something like 386 covers). I was quite pleased with the results. During that project, I showed what I was doing to Joe Simon. This led to frequent visits to Joe’s place. I learned a lot of the techniques Joe has used over the years, and still uses.

He showed me how to chemically bleach a comic. From what I understand, he had also shown this to Greg Theakston. Greg apparently added his own processes to improve the results. Bleaching by Joe’s method pretty much removes the magenta (red) and yellow inks, but only partially affects the cyan (blue). When I tried it, I did not do as well with the blue.

But one time Greg showed me bleached pages he made for DC’s “Spirit” archives. I can attest that he does something different: his process truly left only the black ink.

Of course, with the poor techniques of the original comic book printing, even the bleached pages still needed touching up.

When I finished the S&K cover project, I wanted to do something with the actual S&K stories. But knowing how many S&K pages that was, and how much time was required to fix up digitally bleached pages, I knew that there was no way I was going to do that.

But affordable photo printers were now available, so I decided to work on colour restoration. To that end I have developed Photoshop methods to remove yellowing that the pages have undergone and to improve the often poor inking quality of the original printing.

Harry Mendryk (2006/02/01) [#89]:

There is no perfect bleaching process. Chemical Bleaching, when not faced with the horrible clay paper, produces the best results. But as an amateur it is not affordable for me, even with low grade comics.

I am not one of those who criticise chemical bleaching because of the loss of the comic. One comic is destroyed, but when the restoration is published more copies are created.

And how long will the original comics last? The paper is low grade and very acidic. I am amazed they have lasted as long as they have. I am sure in 100 years time all the Golden Age comics will be dust. Do libraries still keep original old newspapers anymore? Most have switched to microfilm.

There are chemical processes to remove the acid from newsprint. But the cheaper ones do more harm than good. The only process that libraries are willing to do is place the material in a sealed chamber with gas. But that is probably too expensive a process for comics?

 

Scanning Resolution (300 dpi vs 600 dpi)

 

Tom Kraft (2006/01/23) [#15]:

These settings were specified on the Kirby list for scanning original art (scanner settings):

– 100%
– 300 dpi
– RGB colour
– Scan front and back.
– No unsharp masking or auto adjust settings.
– Include space in between edges of paper and scanned image (don’t crop the scan to the edge of the paper, let us see the actual paper edge and some non-paper space).
– Save as JPG at 99% or “maximum”.

Does this group feel these specs are best for archiving a record of the original art? At 300 dpi you should be able to print the file and get very close to the same quality as the original.

Should it be 600 dpi (although the file size would be too large to e-mail)?

 

Randolph Hoppe (2006/01/23) [#16]:

600dpi would be better in the long term.

Harry Mendryk (2006/01/23) [#17]:

There is no completely correct answer to this question. It boils down to finding the best compromise for the intended use.

Let’s consider 300 vs 600 dpi based on e-mail capability. When Jack started penciling, the industry standard was to work twice-up [double the size of the printed page]. However, sometime later (after the Silver Age?) the industry switched to 1.5 times up.

Obviously, the paper size would affect the image size. Twice-up scanned images would be about 81 MB at 300 dpi, and 325 MB at 600 dpi. 1.5-up scanned images would be about 48 MB for 300 and 192 MB for 600. However, these are not the actual JPEG file sizes. With JPEG set to Maximum there is still data compression; the loss at that setting is very slight, though not strictly zero. The amount of actual compression depends on the image. It is not unusual to see JPEG files compressed to 1/4 of the original size: with twice-up, files are 20 MB at 300 dpi and 80 MB at 600 dpi. With 1.5-up, files will be 12 MB at 300 dpi and 48 MB at 600 dpi. Better compression ratios frequently occur, but even 300 dpi images are too large to e-mail, so e-mail capacity cannot be used to decide scanning resolution.

So now let’s turn to printing the image. Here, much depends on how the image is to be printed. Let’s assume the image will be printed life size, but with the quality found in the better magazines. Those types of magazine use 150 lines/inch printing. LPI is not the same thing as DPI. When I started in computer graphics, I was told the rule is that DPI should be twice the LPI. Nowadays I hear that the rule should be 1.5. Using the x2 rule, magazine quality printing would require 300 dpi image resolution. If you use the x1.5 rule you need even less resolution. This suggests that 600 dpi would be overkill. Personally I really do not see the need to print original art better than a magazine’s quality.

One other possibility comes to mind. What if you wanted to print the original art in bitmap format? That is, convert the image to just black and white, with no grey tones. This is effectively what was done originally, in making a stat from the original art, to be used to make the actual comic book. My experience when making the Simon & Kirby covers was that with bitmap at 300 dpi I could barely see the little digital steps; at 600 dpi I could not. But keep in mind that my book was of covers at comic book size. Original art is larger than the comic book, and would be viewed from further away. I suspect the small steps at 300 dpi would then be unnoticeable.

So my suggestion is to remain with 300 dpi. The benefits for 600 dpi do not seem to be worth the larger file size.
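NB: The Python sketch below reproduces the uncompressed file-size arithmetic in Harry’s message; the paper dimensions are my own assumptions, chosen to match his figures (roughly 15 x 20 inches for twice-up art and 11.5 x 15.5 inches for 1.5-up), and are not stated in the original posts.

    # Sketch: rough uncompressed sizes for RGB scans of original comic art.
    def scan_megabytes(width_in, height_in, dpi, bytes_per_pixel=3):
        """Uncompressed 24-bit RGB size in MB (1 MB = 1,000,000 bytes)."""
        pixels = (width_in * dpi) * (height_in * dpi)
        return pixels * bytes_per_pixel / 1e6

    for label, w, h in [("twice-up (approx. 15 x 20 in)", 15.0, 20.0),
                        ("1.5-up (approx. 11.5 x 15.5 in)", 11.5, 15.5)]:
        for dpi in (300, 400, 600):
            print(f"{label} at {dpi} dpi: {scan_megabytes(w, h, dpi):.0f} MB")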

Randolph Hoppe (2006/01/23) [#29]:

After a little web research that took me to some Museum/Archive websites, I’m as keen on 600 dpi as I was in my last post.

But I was wrong with my 99% jpg recommendation. Any lossy compression is to be avoided when building a Museum-quality digital archive. For web-posting and e-mailing, a jpg would be fine; but the non-lossy compression available in a TIFF file is preferable for a digital archive.

So I’d like to see:

– 100%
– 600 dpi
– RGB colour (24 bit)
– Scan front and back.
– No unsharp masking or auto adjust settings.
– Include space in between edges of paper and scanned image (don’t crop the scan to the edge of the paper, let us see the actual paper edge and some non-paper space).
– Save as TIFF with LZW compression.
– If you want to stitch the pieces together, go ahead, but send the pieces, as well as the result of your stitching, for safety’s sake.

Greg T [Greg Theakston] (2006/01/24) [#35]:

400 dpi is the industry standard. I used 600 dpi at Pure Imagination for a long time, but I found that 400 dpi works just fine, with fewer MBs eaten-up.

Nobody has mentioned the Median filter (Filter > Noise > Median), in greyscale, for use with line art. I find it indispensable. A fast cure for poorly printed lettering, and large areas of black which are breaking up.

Used with the marquee tool in Photoshop, the Median filter cuts my work time by at least 25%.

Consequently, I retouch most of my pages in greyscale, so that I can use the Median filter.

Harry Mendryk (2006/01/24) [#36]:

It’s true that whenever I supply Marvel with a file they request 400 dpi. And I do not believe there are many people, if any, that can actually see a pixel at that size.

For my restoration work, however, I do not find that 400 dpi cuts it. The lowest resolution that works for me is 600 dpi. Here it is not the case of the eye seeing, but of the ability of Photoshop tools to distinguish the comic book screen dots from the paper background. Having done my work at 600 dpi, and with CD disk writers available, I find no need to re-size it. Besides, re-sizing can create Moiré. My printer handles 600 dpi nicely.

When doing colour restorations, I do not use the Median filter. But when doing line art restoration I frequently do. A lot depends on the quality of the image. If it has a lot of noise (small black or white dots) I will use the Median filter with a setting that is a compromise between cleaning up the noise and the loss of details in the real line art. I then use the Pencil and Eraser tools to fix those dots that were too large for the Median filter to remove, and for re-sharpening those areas of the Line Art that were lost.

Marquee tools are useful to restrict my adjustments to a particular area. I also use the Magic Wand tool for the same purpose.

Matthew Moring [m.moring@comcast.net] (2006/01/24) [#38]:

400 dpi is what Marvel wants. However it certainly is not the industry standard. Every other company doing Golden Age reprints which I’ve done work for has been using 600 dpi for some time.

In this age of huge hard drives, the difference in file sizes is of minimal significance.

Harry Mendryk (2006/02/02) [#95]:

Low resolution not only makes digital bleaching more difficult, but makes the manual editing a problem. I prefer to work at 600 dpi.

Darci [darci386] (2007/03/08) [#143]:

Golden Age comics were probably printed at between 65 to 85 LPI (lines per inch). The general formula for dpi is 1.5 x to 2.0 x the LPI. As such, 150 dpi should be plenty for reproduction, unless you are going to scale up.

Someone mentioned that modern comics use higher LPI settings. However, I thought you might be more interested in Golden Age comics.

Harry Mendryk (2007/03/08) [#144]:

There are two problems with scanning Silver and Golden Age comics at 150 dpi. The first is, it is easy to encounter Moire problems. The second is the line art of comics. At 150 dpi the line art will develop easily seen pixel steps. The formula the fellow gives is an old one, developed when disk space was expensive. I would advise you to continue scanning at 300 or 600 dpi.

The eye begins to detect image deterioration below 300 dpi. High quality images require 400 dpi. 600 dpi is used for convenience, simply because most scanners can do that.

Further, the restoration techniques I’ve described work best when the scan resolution is much higher than the screening resolution used in the original printing process.

Harry Mendryk (2007/03/08) [#145]:

No sooner had I sent my last response than I remembered what the quoted formula was originally used for. It was meant only to calculate the scanning resolution required when scanning an unscreened image, such as a photograph, that will be screened for printing.

It does not apply when scanning printed images that are already screened.

NB: Screening is the technique used in printing to simulate tints or continuous-tone images (such as photographs) using dots. Almost all printing technologies – such as offset, gravure or inkjet printing – simulate shades of colours using dots. See the technical note, next.

AM Screening [Half Tone]

 

AM screening (Amplitude Modulation) uses a fixed linear dot pattern, of various sized dots, to emulate the tonal range in photographic images.

Standard AM line screens vary in resolution depending on the reproduction process and equipment quality. In commercial offset printing, these line screens are typically 100, 133, 150, 175 and 200 lines per inch.

The larger the dot, the darker the image area; and the smaller the dot, the lighter the image area.

Colour images use a separate AM screen for each of the primary printing colours: Cyan, Magenta, Yellow and Black (CMYK). These screens are printed on top of one another, which gives the range of colour we see on paper. The colour we see in a printed image is an illusion, caused because our eyes can only discern so much detail at a given distance. If we use high magnification to enlarge an area of a printed photo, the image becomes unrecognisable.

Problems With AM Screening

a. Limited Minimum Dot Size: In printing, we are limited to a minimum dot size for ink to adhere and transfer back to the sheet of paper printed on. We’re also limited at the other end of the tonal spectrum, because we can only go so large with the dot before the printed area becomes a solid. This results in an inherent flaw in the process called posterisation, and we have to adjust the photographic image before printing to reduce the problems it creates on the press. When we make these adjustments, we are actually degrading the quality of the original image slightly; so we lose detail, colour, and contrast.

 

b. Size of the Dots: AM screening uses a fixed dot pattern, and the tonal range is achieved by varying the size of the dot within that fixed pattern. Printing presses can only print so small a dot, so are limited to a printing range between the smallest dot possible and the largest dot possible in achieving a tonal range. Thus the peak resolution in an AM Screen is set by the largest (coarsest) dot, not by the smallest one. For a 175 line screen, the smallest possible dot is approximately 10 microns, and the largest dot is approximately 200 microns.

 

c. Visible Patterns in the Image: Sometimes such patterns conflict with the actual subject matter of the photograph, so amplify the negative visual effects of the printing process. The human mind recognizes patterns easily, so anytime we incorporate a fixed pattern into the process we naturally detect that pattern. Colour images are built on a series of screens, printed over the top of one another, and these screens are turned at specific angles to reduce the inherent negative effects. The flaws are still present, such as moire patterns and rosette patterns, but it is possible to reduce their more obvious effects.

For all these reasons, customers demand FM screening instead, especially in the clothing industry, where subject matter is all about patterns, as there can be a plethora of adverse pattern conflicts from using AM screening.
NB: A detailed note then follows, regarding the benefits of FM screening (omitted, because comics printed in the period 1940-1980 didn’t use FM screening, as it has the drawback of being very expensive).
[Source: http://thedivision.co.uk/everything-need-know-print-screening/]

 

 

Digital Bleaching (Generate Line Art)

Bleaching is a chemical process applied to printed comics pages to remove the cyan (i.e. blue), magenta (i.e. red), and yellow inks, to leave only the black line art. Digital bleaching is a computer process which simulates chemical bleaching for digital images.

 

 

Harry Mendryk (2006/01/19) [#8]:

When I colour-correct a Simon & Kirby cover, I first go through a digital bleaching process, so that I have line art which exactly matches the colour plates used.

I work in Photoshop 5 and (for some features) Photoshop 7.

 

 

Step 1 (CMYK colour setup) –

 

Harry Mendryk (2006/01/31) [#65]:

Digital Bleaching is not as effective as a Chemical Bleach. After you Digitally Bleach an image you will have to spend a lot of time editing the image to get it correct.

But if you are willing to spend that time, you can get really nice results without destroying the original comic.

There are even cases where Chemical Bleaching will not work. This is so with the Joe Simon cover for “Silver Streak Comic” #2. That cover was not printed with a Black plate: instead, the fourth plate was for a special silver ink used in the title.

The black on the cover is actually caused by overlapping Cyan, Magenta and Yellow. If you were to Chemically Bleach this cover all the line art would disappear.

Like my colour restoration technique, my Digital Bleaching technique requires the correct CMYK colour setup. In Photoshop 7, go to menu item: Edit > Color Settings. In ‘Working Spaces’ click on ‘CMYK’, then select (from the list of options): “Custom CMYK”.

When the CMYK dialog appears, in “Separation Options” select ‘GCR’, in “Black Generation” select ‘Maximum’, then click “OK” twice.

 

 

Step 2 (Level Adjustment) –

 

Harry Mendryk (2006/01/31) [#66]:

Digital Bleaching begins in the same way as Colour Correction, using the Level tool (Image > Adjustments > Levels). The purpose of this is to adjust for strong Black, and for the paper to become near White.

The dialog box settings I used for each colour channel are:

Adjustment    Channel    Input Levels     Output Levels

Scan levels   RGB        0   1.00  255    0  255

R Adjust      Red        60  1.00  239    0  255
G Adjust      Green      76  1.00  206    0  255
B Adjust      Blue       52  1.00  152    0  255

C Adjust      Cyan       31  0.64  209    0  255
M Adjust      Magenta    32  0.84  235    0  255
Y Adjust      Yellow      2  0.53  185    0  255
K Adjust      blacK       7  0.77  217    0  255
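NB: For anyone working outside Photoshop, the following Python/NumPy sketch approximates what the Levels tool does to a single 8-bit channel (input black point, gamma, input white point, then output range). It is my own approximation of the tool’s behaviour, using the “R Adjust” row above as the example; it is not part of Harry’s posts.

    import numpy as np

    def levels(channel, in_lo, gamma, in_hi, out_lo=0, out_hi=255):
        """Approximate Photoshop's Levels adjustment on one 8-bit channel."""
        x = channel.astype(np.float64)
        x = np.clip((x - in_lo) / float(in_hi - in_lo), 0.0, 1.0)  # input range
        x = x ** (1.0 / gamma)                                     # gamma (midtone)
        x = x * (out_hi - out_lo) + out_lo                         # output range
        return np.clip(np.rint(x), 0, 255).astype(np.uint8)

    # Example: the "R Adjust" row from the table above (Red channel).
    red = np.arange(256, dtype=np.uint8)    # a ramp standing in for a real scan
    print(levels(red, 60, 1.00, 239))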

 

 

Step 3 (Level Adjustment, per channel) –

 

Harry Mendryk (2006/01/31) [#67]:

Firstly, convert the image to CMYK mode: Image > Mode > CMYK color

NB: An essential step for an image printed originally using CMYK (such as American 4-colour comics), this conversion should be omitted for ordinary photographs or other images which were created using RGB.

Secondly, open the Level tool (Image > Adjustments > Levels). The adjustments for Digital Bleaching are somewhat different to those for Colour Correction: I’m not concerned with making the image look correct, colour-wise. I adjust each channel so that the left input level is at the point where the histogram starts to climb; this provides a deep colour in that channel. I then adjust the right input level to just past the right peak; this converts light tones to white. Some of these light tones may be under the black line art, and if so need to be removed.

Thirdly, examine each of the image’s colour channels separately (Cyan, Magenta and Yellow), by selecting them one at a time in the Channel Window. I have set up Photoshop to display single colour channels as greys, not as colours. Notice that over most of the image there does not appear to be any Cyan (blue) where there would be line art. An exception is in the steering wheel. If a channel looked like it had colour in the line art area, I could go back to the Level tool and push the right input level more towards the left to remove it. But sometimes getting rid of colour in the line art degrades the colour outside of the line art too much. In the Cyan of my example, that is the case: to get rid of the Cyan from the steering wheel line art, I pretty much lose Cyan everywhere. So I decided not to adjust the Cyan channel any further. In fact, I did no further level adjustments to Magenta or Yellow either. And Black is always exempt from these secondary adjustments.

 

 

Step 3(a) (Color Dodge) –

 

Harry Mendryk (2006/02/01) [#84]:

I found a use for the Color Dodge “Apply”: a new step, between my original steps 3 and 4. Selecting the Black channel, I run: Image > Apply Image. Instead of choosing a colour channel (as in Step 4), I choose the Black channel and Color Dodge, but do not tick ‘Invert’.

Doing this seems to have some bleaching effect. I tried my usual practical and theoretical tests. This time the theoretical tests indicate similar, but not identical, results compared to my original Digital Bleaching sequence. But the practical tests do show some positive results: some line art disappeared using my original steps, but did not when the new step was added.

From this, I can’t say definitively that this new step should be added to my Digital Bleaching. But I do plan to try it when I generate line art from a scan.

 

 

Step 4 (Apply Image) –

 

Harry Mendryk (2006/01/31) [#68]:

Having adjusted the levels for the channels, I now select the Black channel by clicking it in the Channel window. Viewing should also be for Black only.

Now I use the menu item Image > Apply Image on the Black channel (it was selected above), using each of the colour channels (Cyan, Magenta and Yellow) in turn. I am going to use Screen blending, with Invert ticked.

Attached is an example of the settings I used for Cyan. Before I accept them with “Apply”, I click the preview on and off to see the effect on the Black channel. If the preview is visibly better than the non-preview I will accept that particular “Apply”. By “better”, I mean that some of the tones outside of the line art disappear or diminish. I also do not want to see the line art deteriorate much. In this particular example, the “Apply” of Cyan to Screen the Black clearly helps, so I accepted it.

I repeat the “Apply” operation for Magenta, and after that for Yellow. In each case I use the preview to ensure the change gives a benefit. In my example, the Black channel improved with the change to Magenta and to Yellow.

After all the “Applies”, bleaching progress has been made.
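NB: The arithmetic behind this “Apply” can be sketched as follows (Python/NumPy). This is my own illustration, not something posted to the group, and it assumes Photoshop’s usual 8-bit CMYK channel representation, in which 255 means no ink and 0 means full ink.

    import numpy as np

    def apply_screen_inverted(black, colour):
        """Screen the inverted colour channel onto the Black channel.
        Where the colour channel carries heavy ink (values near 0), Black is
        pushed toward 255 (no ink); where the colour channel is empty (255),
        Black is left untouched."""
        k = black.astype(np.float64)
        c = colour.astype(np.float64)
        inverted = 255.0 - c                                    # 'Invert' ticked
        screened = 255.0 - (255.0 - k) * (255.0 - inverted) / 255.0
        return np.clip(np.rint(screened), 0, 255).astype(np.uint8)

    # Tiny example: black line art (0) over areas without and with cyan ink.
    black = np.array([[0, 0], [128, 128]], dtype=np.uint8)
    cyan = np.array([[255, 0], [255, 0]], dtype=np.uint8)   # right column = full cyan
    print(apply_screen_inverted(black, cyan))               # [[0 255] [128 255]]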

David [betroot] (2006/01/31) [#79]:

At the “Apply Image” stage, I accidentally chose “Color Dodge Mode” and it seemed to have a stronger bleaching effect.

 

Harry Mendryk (2006/02/01) [#84] (see also Step 3a):

NB: In summary, Harry will not modify his step 4, but will add a new step (named by me as Step 3a), between his original steps 3 and 4.

There does seem to be a stronger bleaching effect using Color Dodge instead of Screen in the Apply step (step 4). However I also observe that the “bleaching effect” was stronger in light tones of the Black channel than in the darker tones. Remember that the Apply step is followed by a Threshold adjustment (step 5). What is important is the combined effect of the two steps.

Frankly, I do not understand Photoshop’s description of what Color Dodge does, although I do understand what the Screen is doing.

So I decided to investigate further, using practical and theoretical examples. For the practical test I used the “Journey Into Mystery” cover David posted and also my “Young Romance” high-resolution panel.

For theoretical testing, I created new grayscale images in Photoshop with two channels. I used the Gradient Tool horizontally in one channel, and vertically in the other. I could then use this image file to run the Apply and Threshold, selecting one channel and applying the other.

In the end I did not see much difference in my practical examples between using Color Dodge and using Screen. However, I used only two samples. Perhaps other comic scans would show a difference.

But the theoretical examples showed very different results. Here the use of Screen did exactly what I wanted, but results from the use of Color Dodge were not satisfactory. My conclusion from these tests is that personally I will continue using “Apply Screen”, as outlined in my Step 4.

However, I did find another use for the Color Dodge “Apply”: a new step between my original steps 3 and 4, as described under Step 3(a) above.

 

 

Step 5 (Median filter in Greyscale) –

 

Harry Mendryk (2006/01/31) [#69]:

At this point I no longer need the colour channels. There are lots of ways to get rid of them. What I usually do is use menu item Select > All. Then, in the Channels window, I select each colour channel (Cyan, Magenta, and Yellow) and press the delete key. With all the colours gone, and only Black remaining, I use Image > Mode > Grayscale to convert the image from CMYK.

Next, click the little triangle in the Channels window and select the “Duplicate Channel” option. I do this because I am going to perform some operations that may cause the line art to lose detail. I will not perform these operations on the duplicate copy, so it will be a reference when I manually edit. I select the Black channel in the Channels window.

Next I use the Median filter (menu item: Filter > Noise > Median). I usually set this to the lowest Radius, that is 1. The Median will help reduce the noise that can occur in the image. Unfortunately it also affects the line art itself. The larger the Radius used, the less noise, but the more line art detail is lost. But I know I am going to have to manually edit some of the noise out, as well as manually restore some of the line art detail. Like I said, I generally use a Radius of 1; others might choose a higher value.

After the Median filter, more bleaching progress has been made.
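NB: Outside Photoshop, the same kind of noise reduction can be sketched with Pillow’s median filter (Python); a Radius of 1 corresponds roughly to a 3 x 3 window, and the file names are only placeholders, not anything from the original posts.

    from PIL import Image, ImageFilter

    # Sketch: median-filter a greyscale Black channel to knock out isolated
    # specks, at the cost of some fine line-art detail (as described above).
    black_channel = Image.open("black_channel.png").convert("L")       # placeholder file
    filtered = black_channel.filter(ImageFilter.MedianFilter(size=3))  # roughly Radius 1
    filtered.save("black_channel_median.png")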

 

 

Step 6 (Threshold adjustment) –

 

Harry Mendryk (2006/01/31) [#70]:

Before proceeding, I usually pick some appropriate section and magnify it to 100% or 200%. Still leaving the Black channel selected in the Channel window, I click the view for the duplicate copy.

This sets me up for the next step, which is the use of the Image > Adjust > Threshold tool. The Threshold tool turns the image to just pure Black and White. I can select where the Threshold point should be. Anything below the Threshold level will turn Black, everything above will be White.

By setting things as I did above, I can judge what might be a good Threshold setting. Moving the setting to the left will remove some of the unwanted non-line art. But it will also remove some of the wanted line art. Moving the adjustment to the right will have the opposite effect.

In the magnified view, areas with white are areas that will be pure white in the image. Areas of light red are those removed from the image: often we don’t want them, but there might be line art that we would like to have. Areas that show up as dark red are those that are not removed from the image but we wish they were. Areas with a mid-red tone are the line art that will be included as desired in the image. I can tell you right now, you are not likely to be able to find a perfect point. You will always lose some line art and get some non-line art. You have to pick a good compromise point.

Attached is a copy of what the magnified image looks like while I am making the Threshold adjustment. Once I achieve an acceptable adjustment, I click the “OK” button. Also attached is what the image looks like at this point.
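NB: A minimal sketch of the same Threshold operation in Python with Pillow; the cut-off value of 128 is only an example, since the whole point of the step above is that the value is chosen by eye, and the file names are placeholders.

    from PIL import Image

    # Sketch: turn the greyscale Black channel into pure black and white.
    # Pixels darker than the threshold become black, the rest become white.
    THRESHOLD = 128                                                  # example value only
    channel = Image.open("black_channel_median.png").convert("L")    # placeholder file
    line_art = channel.point(lambda v: 0 if v < THRESHOLD else 255, mode="L")
    line_art.save("line_art_bw.png")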

 

Step 7 (second Median filtering) –

 

Harry Mendryk (2006/01/31) [#71]:

This step is optional. I look at a magnified view (200%) of the Black channel. If it is noisy, I apply the Median filter again. It is the same sort of compromise as before. The higher the radius the less noise, but the less detail in the line art. In this particular case I decided to use the Median filter again, with Radius of 1.

NB: For the Median filter (Filter > Noise > Median), applied in Greyscale, see messages #35 and #69 (above).

Now I have finished the easy part. From now on, I have to manually edit the black Line Art image, using the Pencil and Eraser tools. I also have a Black Copy channel, that hasn’t been filtered or threshold adjusted, to use as a reference to aid my editing. To completely clean up the image takes some further effort.

Matt Moring [m.moring@comcast.net] (2006/02/01) [#91]:

Right now I’m trying to finish off a story for an upcoming DC Archive book.

To do this right, you need to be an artist yourself. There’s no copy machine that will take a colour page and spit out a finished page of black & white line art. There’s a lot of effort that goes into doing a page right.

Conclusions

 

Harry Mendryk (2006/01/31) [#74]:

I use Digital Bleaching to restore the line art from scans of Simon & Kirby covers. Ideally it would remove all colours and leave just the line art. But it is not 100% effective.

Depending on a particular scanner, and the settings used in RGB Level Tool adjustment, it may not remove Purples (Cyan + Magenta) very well. But even under the best of circumstances it cannot remove grey tones, such as found in colours such as Brown.

But it doesn’t destroy the original comic, which for an amateur restorer like me is a paramount concern.

Chemical Bleaching is destructive, but you get very clean line art. The only retouching that would be required would be to fix up creases and original printing errors.

Harry Mendryk (2006/01/31) [#83]:

A cover will be a problem if it contains a lot of purple and brown.

Harry Mendryk (2006/02/01) [#85]:

Some observations on using Digital Bleaching on the low resolution “Journey Into Mystery” cover David provided.

Because of the low resolution of that scan, the line art is often just one pixel wide. Using the Median filter, as I describe in Steps 5 and 7, has the effect of wiping out a lot of line art: so I suggest you skip using the Median filter when Digitally Bleaching low resolution images.

You will end up with a lot of noise to clean up. But that will be better than all the line art that would need to be edited back in.

The other observation concerns Apply Screen (step 4). Certainly Apply Screen should be used as I described using the Cyan channel. But when I did the same thing using Magenta or Yellow, the effect was to drop some of the line art without any other benefits. So for the “Journey Into Mystery” cover, I would suggest to do Apply Screen using only the Cyan.

This has nothing to do with the low resolution of the scan. Actually the “Journey Into Mystery” cover is more typical with respect to step 4 than my high resolution example.

My attempt on this cover did a nice job cleaning up the background “grey”, but, as I expected, there has to be a lot of manual editing of those areas which were originally purple or brown.

Little Bumps in CMYK histograms

NB: These notes relate to Step 3 in Harry’s above procedure for Digital Bleaching of the image.

 

Dario [vulcaniano99] (2006/03/22) [#118]:

I see some “little bumps” in the histograms, in CMYK colour mode, in scans of my old comics. I was wondering what they are. Are they an artifact of the ageing of the inks on paper, or a feature of the colour printing, even for new comics?

Harry Mendryk (2006/03/22) [#119]:

For the most part it is due to the aging process. Aging adds black to areas where there is no black ink. This is due to dirt and grime on the page, changing of ink with age, and the yellowing of the paper. With the increase in the K channel, the level for the other channels typically decreases. But the effect is not uniform. Areas with ink from one channel will change the least, areas with colour overlaying one another will change more. Hence the extra bumps.

I would like to think that this effect would not happen as much with recent comics. But I have done little work with the more recent stuff. One problem though is recent comics have a screen density much higher than that in the Golden or Silver Age. Scanning at 300 to 600 dpi works very well with those comics. But for modern comics it is nowhere near enough. I am not saying you couldn’t do it, just that I suspect the extra bumps will be there.

 

Mix Channels : Apply Image

NB: These notes relate to Step 4 in Harry’s above procedure for Digital Bleaching of the image.

 

 

David [betroot] (2006/01/24) [#42]:

Apply Image: In the past I would’ve put the Cyan channel in a new document, and used the Black channel set to Screen mode as the Layer mode. Apply Image saves the bother of creating a new document.

You can mix channels with any layer Mode, using Apply Image.

“Calculations” allows you to generate a new channel by “mixing any 2 channels” – useful for extracting masks for photo retouching (like say a model’s hair in a blue sky and you want to composite against a different background – calculations would help in creating a silhouette mask).

 

 

Harry Mendryk (2006/01/31) [#60]:

The purpose of the Apply step is to remove some Black from areas of Cyan. I’ve attached an image of the Apply dialog to make the settings I used clear.

This apply is done on the Black channel only. But when I worked I clicked the CMYK view. I also attach an image of the Channel Window to show this. I select the CMYK view because I want to see the effect of the Apply in order to adjust its strength (opacity).

If you try this Apply you can tick and untick ‘Preview’, to see the effect of this operation.

Although the effect of the Apply step is not dramatic for this particular image, it is an operation I use in the Digital Bleaching process.

 

 

David [betroot] (2006/01/31) [#72]:

I hadn’t used Apply Image before. It puzzled me, then I realized it was similar to what I’ve done in the past: copy the K channel and paste (so it’s a Layer), and set the Mode to ‘Screen’.

The Apply Image step does it for me. I don’t have to make the Layer, etc.

Then I realized it’s like Calculations – where you can do a similar change to 2 separate channels (screen, Multiply, etc) to generate a new channel – this is useful for extracting masks in photography.

Example: A girl with windblown hair, and you want to make a silhouette mask including the hair strands.

 

Digital Bleaching (Kris Brownlow’s method)

 

 

Kris Brownlow (2006/02/01) [#92]:

I tried to “bleach” a colour scan on my Epson scanner, to see if it could be done. The scanner software does not have a traditional “black and white” function, so I used the “old photo” function.

The process:

1. Once an image is scanned, go to EFFECTS and select “Old Photo”.
2. Go to ENHANCE and move “Highlight” and “Midtone” to 89.

 

 

Harry Mendryk (2006/02/02) [#94]:

I am impressed with the results you got using just the Epson scanner adjustment. It was hard to judge from your image, because of how faint it was, so in Photoshop I converted it from RGB to greyscale, and then used the Level tool to adjust the lower end.

Most of the colour has indeed been bleached.

I then opened the original scan file, created a new channel, and copied my adjusted version of your file into it. This allows me to better compare the two by either switching views or viewing both at the same time (the bleached image channel acts as a red mask).

Initially I observed that the blacks in the Epson bleached image are pretty noisy. But when I examined the combined file, I found that the noise was in the original comic printing. This is not surprising, as the comics were printed with a rather crude printing method on rather poor paper. No matter what bleaching technique is used, retouching of some kind is required to correct for this.

I then magnified the face of the foreground woman. I found the Epson bleached line art a little narrower than the original line art. I also found some of the finer line art had disappeared in the Epson version.

So the Epson bleaching is not perfect, but none of the digital bleaching processes are. Perhaps with a little tweaking of the settings in the Epson bleaching, you could achieve better results.

But using Gaussian Blur and Threshold in Photoshop would easily get the line width back to what it should be. And following that, one could manually edit back in any details that had been lost.

All that matters to me is that the results are accurate. I may not understand exactly what your Epson software is doing, but it seems to me it could be a viable tool.

Digital Bleaching (David’s method)

 

David [betroot] (2006/02/01) [#87] (use “Color Dodge” to make Line Art):

“Color Dodge” is a technique for obtaining Line Art (better than the “Find Edges” filter): a method to turn a scan into Line Art –

1. Duplicate the background layer (making a second layer).
2. Invert (negative) the duplicated layer (looks like a negative photo).
3. Set mode of the layer to ‘Color Dodge’ (the image will appear to disappear!)
4. Gaussian Blur the duplicated layer (the one which has been inverted, and set to Color Dodge) with a very small setting, like 0.8 (you will see in the preview the effect and can adjust it).
5. Flatten the Layers.
6. If the line work is light, you can duplicate and set the new layer to ‘Multiply’ (and you can duplicate the Multiply layer more times as required).

Optionally, you can use Threshold if you want black-and-white Lines.

You can save the steps as an Action in Photoshop.
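NB: A sketch of the same sequence in Python (Pillow and NumPy), working in greyscale for simplicity; the Color Dodge formula used is the standard one, the 0.8 blur matches the suggestion above, and the file names are placeholders. It is my own rendering of David’s steps, not code he posted.

    import numpy as np
    from PIL import Image, ImageFilter

    def colour_dodge(base, blend):
        """Standard Color Dodge blend: base / (1 - blend), in 0..255 terms."""
        base = base.astype(np.float64)
        blend = blend.astype(np.float64)
        result = base * 255.0 / np.maximum(255.0 - blend, 1e-6)
        return np.clip(result, 0, 255).astype(np.uint8)

    original = Image.open("scan.png").convert("L")                 # placeholder file
    inverted = Image.eval(original, lambda v: 255 - v)             # step 2: invert the copy
    blurred = inverted.filter(ImageFilter.GaussianBlur(0.8))       # step 4: small blur
    dodged = colour_dodge(np.array(original), np.array(blurred))   # steps 3 and 5
    Image.fromarray(dodged).save("line_art.png")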

 

 

 

Colour Correction : Harry Mendryk’s method

 

Harry Mendryk (2006/01/24) [#32] (CMYK Settings):

NB: This was originally posted as part of Harry’s discussion about High Resolution scanning (see also #33 and #34). But it’s of general applicability, so needs to be here.

Before doing the actual colour correction, make sure that CMYK conversion is set properly. In Photoshop 5 the setting dialog can be found using: File > Colour Settings > CMYK Setup

In Photoshop 7 getting the dialog is a little more involved. First bring up menu item Edit/Colour Settings. At the end of the CMYK field is a checkmark, hitting it causes a list of options to be displayed. Choose “Custom CMYK”.

Once you get the CMYK Setup dialog, in the Separation Options select GCR, and in the Black Generation select Maximum. Hit OK (twice in Photoshop 7).

 

1. Level Tool –

Red Input Levels : 32, 1.00, 255
Green Input Levels: 32, 1.00, 222
Blue Input Levels : 0, 1.00, 182

Note that I am actually moving the left and right triangles to where the particular histogram curves up. The left side sets the Black point, the right side sets the White point. The exact point of a setting is based on the histogram, not the image.

2. Select from menu Image/Mode/CMYK colours

3. Level Tool –

Cyan Input Levels : 19, 0.88, 233
Magenta Input Levels: 14, 0.66, 204
Yellow Input Levels : 21, 0.65, 205
Black Input Levels : 5, 0.54, 185

Note that here I adjust using the left triangle first, remember the position of the centre triangle, adjust the right triangle, and finally reposition the middle triangle back to where I remembered it. Although the histogram gives clues to what might be good settings, ultimately it is the preview of the image itself that is most important. And there may not be a perfect setting. With this example the Cyan adjustments are a compromise between getting the Cyan out of the whites and not losing the light green background. I suspect this adjustment might have been easier at 600 dpi.

4. Select the Black channel, but click CMYK for viewing. I am going to remove some of the black undertones to the Cyan. The actual change must only be done to the Black channel, but I want to view what is happening to the colour version of the image.

5. Image/Apply Image; Channel: Cyan; check Invert;
Blending: Screen; Opacity: 50%.

The 50% is my personal judgement: I like to leave some undertone Black in the Cyan. (A sketch of this opacity blend follows these steps.)

6. Colour conversion completed. I would then switch to manual editing. In particular I would want to use the Eraser tool, selecting just the Cyan channel. Selecting just the Cyan makes Erasing the Cyan out of the word balloon area easy, because Black is not affected.

I might further want to fix up the Green sidewalk in the background, remove a little more of the Black from it, and boost the Yellow a bit. I did not do that here, because in one respect my example panel is no better than David’s eBay examples. In order to make the file smaller for email purposes, I set JPEG compression to 3. This leaves rather severe patterns in the colour channels. Normally I use JPEG compression of 7. That level avoids that sort of pattern.

7. Select from Image > Mode > RGB colours. Put the image back to RGB. Although JPEG is happy with CMYK, most browsers and many printers are not.
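NB: The 50%-opacity “Apply” in step 5 amounts to mixing the screened result back with the untouched Black channel. A rough Python/NumPy sketch, again assuming the 8-bit CMYK channel representation in which 255 means no ink; this is my own illustration, not part of Harry’s post.

    import numpy as np

    def apply_screen_inverted_opacity(black, cyan, opacity=0.5):
        """Screen the inverted Cyan channel onto Black, then blend the result
        with the original Black channel at the given opacity (0.0 to 1.0)."""
        k = black.astype(np.float64)
        c = cyan.astype(np.float64)
        screened = 255.0 - (255.0 - k) * c / 255.0    # Screen, with Cyan inverted
        mixed = (1.0 - opacity) * k + opacity * screened
        return np.clip(np.rint(mixed), 0, 255).astype(np.uint8)

    # Example: half of an unwanted black undertone is lifted where cyan ink is heavy.
    black = np.array([[60]], dtype=np.uint8)    # some black undertone
    cyan = np.array([[0]], dtype=np.uint8)      # full cyan ink in the same spot
    print(apply_screen_inverted_opacity(black, cyan))   # [[158]]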

Harry Mendryk (2006/01/19) [#7]:

This is my method for colour correcting comics scans. I work in Photoshop 5 and (for some features) Photoshop 7.

I have added photos of some of the steps and tool dialogs used in a folder, “HM’s color adj”, in the Photos section. If you do not find what I’m saying clear, check them out:

https://groups.yahoo.com/neo/groups/digitizing_comics/photos/albums

 

Step 1: Scan –

I scan at 600 dpi with auto colour correction or auto levels features turned off. My last scanner had really poor gain, so I had to set the manual level adjustments to get more out of it. But I was careful not to push the levels too far up. Once the gain is too high you lose data that I need for my process. If the gain is low, my method will fix that up with no apparent loss in quality of the final result. You have to use really low gain to harm the corrected scan. My present scanner does a nice job without any manual adjustments.

You can use either the Level tool or the Histogram tool to judge if the scan has all the data needed. The histogram I want to see in a scan is the one for the combined RGB. Going from right to left (dark to light) the curve I want to see is flat at the start, starts to turn up, can go through any number of peaks, but eventually turns back down, and is flat by the time it gets to the right side (255). I do not care about the actual values. What I care about is the flat start and end to the curve. If that is missing and either 0 or 255 has a curve off the bottom axis, then data has been lost. In the photo section I have a photo of an original scan, and the RGB histogram (using the Level tool) for that scan, before I have done any adjustments.
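NB: A small Python sketch of the kind of check Harry describes: make sure the combined histogram is flat (empty) at both extremes, i.e. that nothing has been clipped to 0 or 255. The file name is a placeholder; this is my own illustration, not part of the post.

    import numpy as np
    from PIL import Image

    # Sketch: flag a scan whose histogram runs off either end (clipped data).
    scan = np.array(Image.open("raw_scan.png").convert("RGB"))   # placeholder file
    hist, _ = np.histogram(scan, bins=256, range=(0, 255))
    if hist[0] > 0 or hist[255] > 0:
        print("Warning: pixels at 0 or 255 - detail may have been clipped.")
    else:
        print("Histogram is flat at both ends - nothing has been clipped.")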

 

Step 2: Level tool –

I do my colour correction in two steps. Scanners use RGB sensors, so I do my initial rough adjustment in RGB. I do not touch the combined RGB level; all adjustment is done on the individual Red, Green and Blue channels. I adjust each Red, Green and Blue channel separately, but how I adjust them is the same. In each case I adjust the low input level by dragging the little black triangle from 0 up to the point in the curve where that channel starts to ascend. I also adjust the high level, by dragging the little white triangle from 255 to the point on the curve where again it goes up. With my previous scanner I would also have to set the middle input value: I had figured what seemed to be a good setting for it and would effectively just load that number into the box. My present scanner does not need that sort of fix, so the middle box always remains 1.00.

The photos section has the settings for each of the channels that I used on the scan. After the Red, Green and Blue channels have been set, and the OK button clicked, the image should have richer blacks and the paper should be closer to white. The photos section has an image of what the page looks like after the adjustment.

In this particular case, the original scan was very yellow, but the yellow was pretty even across the page. After the adjustment, the white in the word balloons looks pretty good, but the paper edges are rather splotchy. Other pages may have browning toward the edges, and the end result may be a pretty good white on the interior, but bands of yellow or brown along some edges. This adjustment increases the contrast of the original scan, so these effects can look worse than on the original. I can’t afford pedigree comics with pure white pages, so this is something I just live with. Although I have seen better, I am pleased with the results so far for this particular scan.

 

Step 3: Convert to CMYK –

Comics are printed using CMYK inks, so it makes sense to me to do the final colour adjustments in CMYK mode. The only important thing is that the Photoshop CMYK setup be set to GCR Separation Type, with Maximum Black Generation. I leave it that way all the time.

 

Step 4: Level tool –

I now return to the Level, but now the image is in CMYK mode. Here I adjust using not just the histogram, but also watching the image, and using the Info tool window. I do not touch the combined CMYK channel, but work on the individual Cyan, Magenta, Yellow and Black channels. For any channel I generally start with the Low Input Level, dragging the black triangle from its right corner. I do not try to push the C, M or Y channels to achieve the strongest possible colour. Rather I prefer to leave it somewhere where near where the actual data starts. Precisely where is a judgement call based on the preview of the image. If you want to push a channel to the max the histogram will sometimes show where that is as the first small peak going from right to left. Unfortunately, depending on the image, that peak may not show up. In my example I can see the peak for the Cyan and Yellow channels, but I can’t make it out for the Magenta. In any case my advice is to move the cursor over an image area that appears to have a solid example of the colour of the channel of interest and look at the Info window. If the Info window shows that particular channel has reached 100%, you have reach or exceeded the peak. Go past the peak, and final results are bound to be pretty bad. In fact when I adjust the magenta that way to find the peak, the girl’s dress has a nice red, but the flesh tones are terrible. My tastes are to find some good point near where the data starts. Then I note the position of the Middle Input Level triangle. I use landmarks on the curve to judge that position. If the curve fails to provide a good landmark, I fall back to landmarks in the dialog box itself. I do not more the middle triangle yet, just remember its location. I then move the High Level Input (white triangle) from right corner to the left. Now I am trying to remove those tones that make the paper off white. Some cases the tones maybe under some other colour. The histogram will invariably show some peaking on the the right side. The peaking may have an abrupt beginning. That often is the a clue to how far I have to push the white triangle to remove unwanted tone. But even if there is an abrupt edge, and especially if there is none, I keep and eye on what is going on in the image. And I stop periodically to use the Cursor and Info window to judge the effect of the area having the unwanted tone (usually the white). But like I said I watch the image. It’s no good to remove a coloured edge due to paper browning if you remove all the yellow inks also. I use my judgement on how far to push the High Levels before it damages the final results. In this example in the Cyan I moved it past the abrupt edge. The Magenta had a secondary peak right on the edge; I moved the white triangle past the abrupt edge but not into the secondary peak. The yellow channel had a gradual climb as it approached the right. For yellow I moved the adjustment to about where this curve starts. This was a pretty extreme adjustment and in the end did not get rid of all the yellow, but judging from theimage it was as far as I was confortable with. Up to now I have not mentioned the Black channel. Actually it is adjusted pretty much like the others. However it almost always shows a peak on the left. I prefer to push the black further then I do the other colours. I usually put the Low setting at the where the left peak starts. After setting both the Low and High Level adjustments I then move the Middle adjustment to the landmark I remembered from before. 
Moving the other adjustments moves the Middle one automatically. But I want it back at the location I remembered from before (based on where it sits when the Middle is at 1.00, the Low setting where I want it, and the High Level still at 255). The final adjustment is not as cut and dried as my initial RGB adjustment: once I get the channels where I think they should be, I do not hit the OK button right away, but look at the image and decide if I might want to try other adjustments. But for any channel I do adjust, I re-determine where the Middle Level triangle should be. Once again the photo section has pictures of the settings I used for each of the channels, as well as a copy of the image on which the adjustment was actually made.
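
NB: For readers who want to see what the Levels input sliders are doing numerically, here is a minimal Python/numpy sketch of the low/middle/high input mapping. It is only an illustration of the arithmetic, not Photoshop’s actual implementation, and the channel array and slider values are hypothetical.

    import numpy as np

    def input_levels(channel, low, high, gamma):
        """Approximate a Levels input adjustment on one 8-bit channel.

        channel : 2-D uint8 array (one CMYK or RGB channel)
        low, high : black / white input sliders (0-255)
        gamma : middle slider (1.00 = unchanged)
        """
        c = channel.astype(np.float64)
        c = np.clip((c - low) / float(high - low), 0.0, 1.0)  # stretch between the sliders
        c = c ** (1.0 / gamma)                                # midtone adjustment
        return np.rint(c * 255).astype(np.uint8)

    # hypothetical example: black point 12, white point 238, middle left at 1.00
    # cyan = input_levels(cyan, low=12, high=238, gamma=1.0)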

The final results of colour correction in this case are somewhat mixed. One problem is that this was obviously a poorly printed page to begin with. With my minimally adjusted Magenta channel the dress is not a great red, and yet the flesh tones are a bit strong. When restoring interior pages, I personally do not try to make them perfect. And if you keep the imperfections, you are going to have to make some compromises. What those compromises are will vary based on personal taste. Another problem with this particular final correction is that there is still a bit of yellow in the white above the printed image. From experience I know that yellow will be even more noticeable when printed on white paper. In this case I will do some editing with the Eraser tool on the Yellow channel before I switch the mode from CMYK back to RGB. I find it easier to edit the image in CMYK.

When I do interior restorations, I typically scan all the pages I want at one time. I then use the Automation feature to do all the file opens, RGB to CMYK conversions, and final CMYK to RGB conversions, leaving me to do the individual Level adjustments manually.
Harry Mendryk (2006/01/22) [#14]:

I mentioned in an earlier post that when I do colour correction I make use of Line Art that I had made previously. The Line Art is generated using digital bleaching of the same scan files, and therefore matches them perfectly.

I just finished restoring the Young Romance #7 cover. I decided to use de-screening on it. Rand’s “bulletin board” procedure indicates resizing in the middle of the de-screening steps. I can offer no firm reason why this sequence should be followed. But I can say that resizing should never precede the blurring step. And in general it is best to do filtering at the final resolution. On that basis, the bulletin board makes sense.

But I decided against following that sequence. The reason is that although I know the resize that will need to be done for my book, I may have other uses for the cover and I do not know what resize they will require. If the image is properly de-screened at the original scan resolution, it can then be reduced to any size without fear of Moire occurring.

I also did not follow Rand’s bulletin board in respect to blurring the channels individually. Here my reason is different. I have only recently started experimenting with de-screening. I would first like to get more experience with my present technique. That way when I do try individual channel de-screening, I will be better able to evaluate the results.

I have added to the photo sections, the original scan, the original line art, the cover after colour correction and touch-up, and the cover after de-screening and touch-up. Below I outline the steps I take.
Step 1: Colour correction

Perform colour correction as described in a previous post. I leave the cover in CMYK mode.

 

Step 2: Add Line Art Layer

I create a new channel and paste into it the Line Art that I created when working on the Simon & Kirby covers. This Line Art was made using this particular colour scan, so it perfectly lines up. I will always keep this as the top layer. And none of the editing, colour correction, or descreening steps defined below are done on the Line Art layer.

 

Step 3: Create special selection channels from Line Art

The Line Art from the book includes black letters which on the colour cover are Yellow (upper right) and White (in box on the right side). I use Duplicate Channel on the Line Art to make three copies. I edit these copies so one is just the Yellow letters, another channel is just the White letters, and the third copy is that part of the Line Art that truly will be Black. I do this so when I am editing I can switch from one selection to another quickly, as needed.

 

Step 4: Create Line Art Layer

I create a new empty layer. I use Load Selection to load the inversion of the Black part of the Line Art. I fill the selection with 100% Black. This layer remains the top layer so that when I am done, the Line Art is fully Black, which is how I like it.

 

Step 5: Use Erase tool to remove unwanted undertones

Now I use the Eraser tool on each channel to remove colour undertones. These undertones are artifacts left over from the scanning and colour correction processes. They have a tendency to give the correct colours a muddy look. I do not remove the black undertones from under the cyan areas; they will be handled later in a separate step. Each channel has its own screen angle. I use this to help recognize the undertones that I want to remove. Using the previously made selections helps me to do things like remove the Cyan undertone to the Yellow letters without affecting the Green background. During this editing step I also remove registration problems, ink smudges, and uncorrected paper browning.

 

Step 6: Use Screen in Apply Tool to reduce Cyan’s Black undertone

Now for the black undertone to cyan. My experience is that Cyan seems to have stronger black undertones than Yellow or Magenta. I have several explanations for why that is. But the important thing is that I personally do not like these comics restored with all of the Cyan’s Black undertone removed. So I handle them differently. I start by making a duplicate of the Cyan channel. I use the Level tool, bring the left triangle to the start of Cyan’s left peak, and hit the OK button. I then select the Black channel but display all the CMYK channels. I then open the Apply Tool from the Image menu. I set the layer to Background, the channel to Cyan, turn on Invert, and set blending to Screen. Doing this allows me to selectively mask out only the Black under the Cyan. I try various values of Opacity for a value that looks correct to me, judging by the preview. In this case I accepted 50% and hit OK.
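
NB: Worked out arithmetically in ink fractions (0 = no ink, 1 = solid ink), my reading of that combination (Screen blend, inverted Cyan source, 50% opacity) is that it reduces Black in proportion to the Cyan coverage on top of it: new_K = K × (1 − opacity × C). The sketch below only illustrates that arithmetic; it is not a claim about Photoshop’s internals, and the arrays are hypothetical.

    import numpy as np

    def reduce_black_under_cyan(black, cyan, opacity=0.5):
        """Thin the Black channel where Cyan coverage is heavy.

        black, cyan : float arrays of ink fractions in [0, 1].
        Screening Black with the inverted Cyan at a given opacity works out to
        black * (1 - opacity * cyan).
        """
        return black * (1.0 - opacity * cyan)

    # hypothetical usage at the 50% opacity accepted in the example above
    # k = reduce_black_under_cyan(k, c, opacity=0.5)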

 

Step 7: Use Curve Tool to adjust hair and hat colours

I was pretty happy with the cover’s colour at this point, except for the woman’s hair and the man’s hat. I am always a little uncertain about my CRT’s calibration, so I print just those sections.

The printout looks better, but on the print the hair and hat are more grey compared to the comic’s dark brown. So I duplicate the Line Art channel twice and edit to make one a selection of the hair and the other a selection of the hat. I then adjust with the Curves tool on each section to get the brown I want, and I save a copy of the curve before I accept the changes. I proof my changes, and if I am still not happy I go back in the History palette to try again. But I still have my selection channels and stored curves from which I can tweak. After a few iterations I am finally happy.

 

Step 8: Create De-Screen and Trap Layers

I duplicate the Background Layer (the one I’ve been working on) to make a De-Screen Layer. (I do not want to de-screen the Background Layer directly, in case I want to go back and change it some day.) I then duplicate the Image, because I am going to perform an operation that can only be done on a flattened image; I call this copy the Trap Image.

I flatten the Trap Image (remember the Black Line Art is the topmost layer). I use the Trap tool set to maximum (10). I copy the Trap Image and paste it back on the original image and name it the Trap Layer (I discard the Trap Image). I make sure that the Trap Layer is below the Line Art Layer and above the De-Screen Layer.

 

Step 9: Limit Trap Layer to colours that were trapped under Line Art

I clear the entire contents of the Black channel for the Trap Layer. I use Load Selection with the Black Line Art channel. With delete, all that remains is the trapping under the Line Art. This will help to reduce the halo effect from the de-screening procedure later.

 

Step 10: Merge the Trap Layer into the De-Screen Layer

I then select the De-Screen Layer. I duplicate the Line Art channel to make a De-Screen channel. The De-Screen channel already has the Line Art for the Yellow and White lettering, but I also edit it to include the whites inside the title characters. On the CMYK channels of the De-Screen Layer I use Load Selection with the De-Screen channel (invert set on) and then Fill with 100% White. The Trap Layer is merged down into the De-Screen Layer. The De-Screen Layer now has the CMY colours trapped under the Line Art, but the Line Art itself removed.

 

Step 11: Use Gaussian Blur on the De-Screen Layer

On the De-Screen layer I use the Gaussian Blur tool with the Radius set to 4.0 pixels.

 

Step 12: Use Unsharp Mask tool on the De-Screen Layer

On the De-Screen Layer I used the Unsharp Mask tool. I set the Radius to 4.0 pixels (to match what I used for the Gaussian Blur) and initially set the Threshold to 0. I then adjusted the Amount until I thought it was sharpened enough (in my case 130%). I scrolled to a part of the image that had flesh tones, then adjusted the Threshold up to where the tones looked good (15).
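
NB: As an outside-Photoshop illustration of Steps 11 and 12, Pillow happens to provide GaussianBlur and UnsharpMask filters with comparable controls. The snippet below is only a sketch using the same settings; Pillow’s filters will not match Photoshop’s results pixel for pixel, and the file names are hypothetical.

    from PIL import Image, ImageFilter

    # the De-Screen Layer exported as a flat image (hypothetical file name)
    img = Image.open("descreen_layer.tif")

    # Step 11: blur away the printing screen
    img = img.filter(ImageFilter.GaussianBlur(radius=4))

    # Step 12: sharpen the blurred result back up
    # (radius matches the blur, percent ~ Amount, threshold protects flesh tones)
    img = img.filter(ImageFilter.UnsharpMask(radius=4, percent=130, threshold=15))

    img.save("descreen_layer_sharpened.tif")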

 

Step 13: Delete from De-Screen Layer those parts not to be de-screened

I use Load Selection with the De-Screen channel (inverted) and then delete the parts of the De-Screen Layer that do not require de-screening.

 

Step 14: Correct remaining defects with Paintbrush

Using the Paintbrush tool, I then corrected the defects caused by the de-screening as well as those on the original comic.

 

Harry Mendryk (2006/08/07) [#121]:

I have posted my technique for colour correction. I still use my method. You can see some of my results on my Simon and Kirby blog:

http://kirbymuseum.org/blogs/simonandkirby/

 

 

Colour Correction : Rand’s method

 

Randolph Hoppe (2006/01/18) [#3]:

Rand’s “bulletin board” method (so-called because Rand pinned a note of these steps to the bulletin board behind his monitor):

1. Rotate
2. Crop
3. Convert to CMYK
4. Auto Levels
5. Curves to Blacken
6. Gaussian Blur each channel separately
7. Re-size
8. Curves on K channel only
9. Sharpen all

 

1. Rotate
2. Crop

Harry Mendryk (2006/01/19) [#4]:
Partly self-explanatory. But lately I have been using the Crop tool to rotate also. I used to try to be very precise and get one edge of the comic perfectly vertical. But, frankly, I have found that when they inked the original panel layout they were pretty sloppy. When you get one edge perfect the others might look pretty bad. Rotating using the Crop tool allows me to better visualize, and to find a compromise for all edges.

Randolph replied (2006/01/19) [#11]:
I start with the measure tool and “image|rotate canvas|arbitrary” to get something either horizontal or vertical. Then compromise.

 

3. Convert to CMYK

Harry Mendryk (2006/01/19) [#4]:
Again self-explanatory. Scanners usually have RGB sensors, so it is natural to import scans as RGB. But comics are printed with CMYK inks. So at some point in colour restoration it makes sense to work in CMYK. But it is important to have your Photoshop CMYK set up properly. You should be using GCR, with “Black Generation” set to Maximum. The purpose of this is that greys are thereby generated only in the black (K) channel, not by various combinations of all the channels.

NB: In Photoshop 7, go to menu item: Edit > Color Settings. At the end of the CMYK field is a checkmark, clicking it causes a list of options to be displayed. Choose “Custom CMYK”.

In the CMYK Setup dialog, in “Separation Options” select “GCR” and in “Black Generation” select Maximum. Click “OK” (twice).
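
NB: To make the effect of GCR with maximum black generation concrete, here is a minimal numpy sketch of the textbook RGB-to-CMYK formula with the grey component moved entirely into K. Photoshop’s profile-driven conversion is more sophisticated (dot gain, ink limits, and so on), so treat this only as an illustration of why greys end up in the Black channel alone.

    import numpy as np

    def rgb_to_cmyk_max_gcr(rgb):
        """Naive RGB -> CMYK with maximum grey component replacement.

        rgb : float array of shape (..., 3), values in [0, 1].
        Neutral greys come out with C = M = Y = 0 and everything in K.
        """
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        k = 1.0 - np.maximum.reduce([r, g, b])       # black takes the full grey component
        denom = np.where(k < 1.0, 1.0 - k, 1.0)      # avoid divide-by-zero on pure black
        c = (1.0 - r - k) / denom
        m = (1.0 - g - k) / denom
        y = (1.0 - b - k) / denom
        return c, m, y, k

    # a mid grey comes out as roughly (0, 0, 0, 0.5) -- all in the Black channel
    print(rgb_to_cmyk_max_gcr(np.array([0.5, 0.5, 0.5])))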

 

4. Auto Levels

Harry Mendryk (2006/01/19) [#4]:
“Auto levels” is a quick and dirty tool. I have tried it and compared the before and after. It seems to maximize the ranges of the CMY tones, but does not seem to do much to the K channel. This has the effect of giving stronger colours to the image. This is important because the tone range of the CMY channels is generally low and uneven. This is due to the original poor printing that comics received, the fading of the inks with age, and the limitations of the scanning. It seems to do a good job on the CMY channels, but I prefer the full control I get from working with the Level tool.
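
NB: Roughly speaking, an automatic levels pass amounts to picking black and white points from each channel’s histogram and stretching between them. The sketch below is only an approximation of that idea (the clipping percentage is a hypothetical choice), not Adobe’s algorithm.

    import numpy as np

    def auto_stretch(channel, clip=0.5):
        """Stretch one 8-bit channel between its low and high percentiles."""
        low, high = np.percentile(channel, [clip, 100.0 - clip])
        c = (channel.astype(np.float64) - low) / max(high - low, 1.0)
        return np.rint(np.clip(c, 0.0, 1.0) * 255).astype(np.uint8)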

 

5. Curves to Blacken

Harry Mendryk (2006/01/19) [#4]:
Since the auto level did not do much to the K channel, I suspect you are using it to enrich your blacks. I remember an article I once read that recommended using the Curves tool to do colour adjustment. Curves does seem to provide the maximum flexibility. But again I prefer the Level tool, because it provides a histogram. I find this gives me a better insight into what is going on in the image, and what I should do to correct some of the problems.

 

6. Gaussian Blur each channel separately

Harry Mendryk (2006/01/19) [#4]:
This is step one of the de-screening process. I mainly de-screen to remove the Moire patterns that often show up when resizing an image. I do not resize all my work, and only use de-screening when actually needed. On the matter of blurring the channels separately, it shouldn’t matter. Certainly there is no harm in doing it. It might be possible to use different settings for the blurring of each channel, but I am not sure what the benefit would be.

Randolph replied (2006/01/19) [#11]:
It is *all* about using different settings on the blurring of each channel. I may have picked this up from Dan Margulis, a Photoshop expert, in a magazine column or website. Or from some other Photoshop experts’ forum.

Harry Mendryk replied (2006/01/20) [#12] [#13]:
I recall one guy who generally worked using the Curves tool. I followed that approach back then, but have since switched to using the Level tool. You get more control over how a channel is adjusted from Curves, but I find the histograms help by giving me better insight into the image itself. As for working on channels separately when blurring during de-screening, now that you mention it, perhaps I can see some advantages. In a recent cover restoration I had to do some severe blurring to get rid of some Moire. Maybe working on the channels separately would have allowed a less severe blur, or required less post-blur re-touching.

 

7. Re-size

Harry Mendryk (2006/01/19) [#4]:
I’ve been re-sizing *after* de-screening, not in the middle. But, to be honest, I never thought about it. Since you did not get your de-screening technique from me, did the source give any reason for doing re-sizing here? This is something I definitely want to experiment with.

Randolph replied (2006/01/19) [#11]:
I came up with the order of these steps after considerable testing. I think it is about making sure that the blacks are where I want them to be after the bi-cubic resampling that takes place when resizing.

Harry Mendryk replied (2006/01/20) [#12]:
It’s always a good idea to keep track of what has happened with the channels as you work. I have only begun using de-screening for restoration recently. I’ll keep this tip in mind.

 

8. Curves on K channel only

Harry Mendryk (2006/01/19) [#4]:
Again I am not sure what to say about this step. Does your de-screening process somehow affect the black channel? Like I said, I’ve just started to experiment with de-screening.

Randolph replied (2006/01/19) [#11]:
As I noted above, this is just doing a little more adjustment after the resizing.

 

9. Sharpen all

Harry Mendryk (2006/01/19) [#4]:
I presume you are using the unsharp mask filter to do this, the last step in the de-screening process.

Randolph replied (2006/01/19) [#11]:
No, just plain old “sharpen” and “sharpen more”. I know the unsharp mask filter is a powerful tool, but have never put the time in to figure it out.

Harry Mendryk replied (2006/01/20) [#12]:
I admit I don’t really understand the Unsharp Mask tool. And having three adjustments to use makes it hard to just twiddle until you get the results you want. But I have found settings for two of the adjustment bars that seem to work pretty well. This leaves adjusting to just one (Radius). At that point it just becomes a “sharpen” adjustment. The re-sizing I do for covers is not much. I think that is the reason why I find that the “Sharpen” and “Sharpen More” tools don’t do much.
Colour Correction : Conversion to CMYK alters colour

Topic: Loss of “out of gamut” colours
Dario [vulcaniano99] (2006/03/06) [#105]:

The colour adjustment suggested in this list involves, firstly, a rough level adjustment in RGB, then a finer one after a conversion to CMYK.

I am puzzled that when I convert colour mode from RGB to CMYK, the colours change a little (sometimes quite a bit).
Harry Mendryk (2006/03/06) [#106]:

I have seen a slight change when converting to CMYK. This is expected, because the “colour space” for CMYK is smaller than that of RGB.

But I am surprised if you see a significant change, except for purples in RGB changing to more of a grey in CMYK. I fix that by adjusting the middle triangle when I am working on the RGB correction.

Most browsers won’t display a CMYK .jpg file, but Photoshop will have no problem doing so.
Dario [vulcaniano99] (2006/03/08) [#107]:

That was the problem I observed, mainly with purples. Other colours seem to be okay.
David [betroot] (2006/03/09) [#108]:

A computer can display some colours that can’t be printed.

The exclamation mark in the colour palette (the big one) in Photoshop demonstrates this: it is a means of indicating out-of-gamut colours.

When you scan a comic, you convert a CMYK printed image to RGB, but Photoshop then converts the RGB image back to CMYK. The purple region of the spectrum is the part most likely to be altered by this process, because of incompatibilities in the filters used.
Harry Mendryk (2006/03/09) [#109]:

What you are saying is true, but it is not the explanation for the problem I had with purples. In my case, when I went from RGB to CMYK the purples really did become greys. Converted back to RGB, they were still grey.

Once I made the proper adjustments, that was no longer true. The purple might change slightly, but it was still a purple.

 

David [betroot] (2006/03/09) [#110]:

When you scan the colours that are on the page, the scanner takes the data and records it as RGB colours for screen display. When you make a conversion to CMYK (a smaller colour space than RGB), the RGB colour number of every single pixel ‘jumps’ to the value the Photoshop algorithm thinks is the closest CMYK equivalent: it just converts it to the nearest value it was programmed to use.

Select a colour, then check its compatibility with CMYK by clicking the ForeGround swatch in the Toolbar (that brings up the large window for colour choice, with shades of colour and numeric info).

Chances are the purple will force a ! (exclamation point) prompt, meaning “this colour is unprintable in CMYK”, and when clicked it ‘leaps’ to the nearest colour which DOES have a CMYK equivalent.

Often, the purple goes ‘grey’ (dull).

You can cheat this ‘leap’, by selecting an alternate purple colour that has a rich saturation; but by doing so you’ve probably shifted the red or blue component.

That’s why the current comic colourists work in CMYK and avoid RGB if possible (some filters only work in RGB).

I’m not suggesting a method to avoid the problem, just stating the basic facts about RGB/CMYK.
Harry Mendryk (2006/03/10) [#111]:

You are writing about colour space in general. But my discussion with Dario about purple has nothing to do with that colour space issue.

If you change the Foreground colour, not by using the swatch, but instead by entering in the CMYK boxes 100 for C, 100 for M, 0 for Y and 0 for K, you will get a purple. There will be no exclamation mark to indicate any colour space problem. Make a new RGB file and Fill it with the Foreground colour, and you will get a purple. Convert the file to CMYK and it will still be purple. Convert it again to RGB and it remains a purple. If you are observant, you might have seen that the Info box shows the CMYK is not exactly 100,100,0,0; it has shifted a little. But it is still purple.

Colour space difference may explain slight changes in colour when changing to/from CMYK and RGB. It does not explain the big shift to grey that I once had. That shift is an artifact of the scanning process and the settings used. And, at least in my case, that problem was correctable by proper adjustments during my RGB level adjustment step. Hopefully that will be the case for Dario also.

I wish that I could remain in CMYK mode for my restoration. But, unfortunately, scanners actually read the image using RGB detectors, most browsers will not display a CMYK jpeg, and my printer uses RGB.

I do most of my actual work in CMYK. I just have to return to RGB in order to do anything with it.
Dario [vulcaniano99] (2006/03/10) [#112]:

I found a problem even with blue.

When I convert to CMYK, Reed’s costume changes a bit, to a less bright blue.
David [betroot] (2006/03/10) [#113]:

FFblues.jpg shows what happens when you sample the blue. The Color Picker says it’s an “out of gamut” (unprintable CMYK) colour – the exclamation mark shows this. If you click on the exclamation mark it will jump to what Photoshop decides is the closest colour numerically, trying to preserve the Hue, Saturation and Value. It’s a grey.

If you just do a CMYK mode conversion, that blue will shift to that grey.

Marked in FFBlues.jpg is an area of blue where, if you click, you WON’T get an exclamation mark warning: a nicer blue.

You are making a judgement here, and saying “colour saturation is most important, I don’t care about the value shift”. Photoshop can’t do that, as it’s a subjective judgement.

So, by trial and error, you can find a nicer CMYK blue, and then substitute that. Basically, you would record the number of the ‘nice’ blue. Then, using one of the colour controls (such as ‘selective color’), shift the blues to your replacement colour. Well, that’s how you would evolve a ‘method’ — I can’t give you a step-by-step.

Remember: a comic is a CMYK entity. The scanner (nothing to do with Photoshop) scans the picture and converts it (using the scanning software’s algorithm) to RGB: so it’s a scanner problem. It makes the scan for on-screen representation in RGB. It looks good on the monitor. BUT when you convert it to CMYK, in Photoshop, it isn’t going “back” to CMYK – it’s a first time conversion for Photoshop, which uses its own algorithm to do the CMYK conversion.
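
NB: One way the colour-substitution idea above could be sketched in code (purely an illustration, not David’s procedure; the colour values and tolerance are hypothetical and would come from sampling with the Color Picker yourself):

    import numpy as np
    from PIL import Image

    def shift_colour(img, target, replacement, tolerance=40):
        """Replace pixels near `target` RGB with `replacement` RGB."""
        arr = np.asarray(img.convert("RGB"), dtype=np.float64)
        dist = np.linalg.norm(arr - np.array(target, dtype=np.float64), axis=-1)
        mask = (dist < tolerance)[..., None]          # pixels close to the problem blue
        arr = np.where(mask, np.array(replacement, dtype=np.float64), arr)
        return Image.fromarray(arr.astype(np.uint8))

    # hypothetical usage: swap the out-of-gamut costume blue for a printable one
    # fixed = shift_colour(Image.open("ff_cover.tif"), (40, 70, 230), (50, 80, 190))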

 

David [betroot] (2006/03/10) [#114]:

[This section is valuable ONLY if you intend to print the comic to paper, not if it will only be viewed on a computer monitor.]

One other thing you have to consider. It may look grey on screen, but it’s not until you PRINT it that you can be sure if there is colour loss: it may look dull on screen, yet print perfectly well.

You are converting an RGB screen display to a CMYK printed image, and the concern is for the PRINTING, not for its representation on the screen. It’s the PRINTING that is now important.

He then gets lost in meaningless rambling. The only points he seems to be trying to make are:

 

a. Do two printouts, one as RGB and the other as CMYK.

b. Ignore entirely what you see on the computer screen, and instead compare the RGB printout with the CMYK printout.

c. Do the CMYK conversion in Photoshop, don’t leave it to the printer, because consumer printers do a crap job of this type of conversion.
David [betroot] (2006/03/10) [#116]:

If you scan a comic, then convert it to CMYK in Photoshop, although it may look duller on the screen it should print like the original!

It’s only the difference on-screen between RGB and CMYK that you’ve been noticing.

The diagram shows how different media react to colour (it’s from an old book, and newsprint, like colour photocopiers, has improved in the last few years). But, in the diagram, see how newsprint can’t handle rich purples, and which colours fall in problem areas.
Harry Mendryk (2006/03/10) [#117]:

What David has said is all true, but I have a different suggestion. The faces have way too much magenta. Was the comic really like that? If not, you have pushed the initial RGB level adjustment too far.

If the comic was heavy in magenta, you can try what David suggested.

Despite what David says, the Photoshop conversion in this case does not push to grey, it pushes to magenta (when I try it). After conversion there is 4% of black in Mr Fantastic’s suit, but 25% of magenta. It is the magenta that is causing the problem.

You can use “Apply Image” to correct that. First select only the magenta channel, and I would advise clicking the little box for combined CMYK. This allows operations to be done on magenta, but shows you how they will look. Then in the menu select Image/Apply Image. Select Cyan for the channel, select invert, and Screen for the Blending. This will remove magenta under any cyan. But there may be a slight shift in the background purples.

 

 

Colour Correction : Yellow & Magenta – Edit as CMYK

 

Harry Mendryk (2006/02/17) [#102]:

I generally do not discuss manual editing of scans. Photoshop provides the tools, but no magic solutions. You have to do a lot of tedious work. But working in the proper colour mode can make some corrections a lot less painful.

Since I work with scans of golden age comics (generally low grade ones), I often work with pages that have a browning problem. My colour correction technique can correct much of this problem.

But sometimes the browning is uneven, so after colour correction part of the page will have white paper, other parts of the paper will have yellow to magenta tones.

I recently scanned a Boy Commandos story from “Detective Comics” that had this type of problem. After colour correction some of the paper was a pretty good white, mostly in the center of the page. But other areas, particularly the left side, were still pretty ugly. Getting the yellow/magenta out of word balloons etc would take a lot of effort in RGB mode.

But I converted the image to CMYK mode using the GCR at Maximum setting. Then I worked first on the Yellow channel, and then on the Magenta channel. Erasing unwanted tones out of word balloons becomes an easy task, as work done in the Yellow or Magenta channels does not affect the black lettering, which is in the Black channel. Browned paper like in this example will leave unwanted Yellow tones under some of the Magenta, along with some unwanted Magenta under some Cyan. I use low tone values, screen angles and subject matter to indicate what should be removed and what should be left. Low values of Yellow probably need to be removed. Higher values of Yellow that exactly match the Magenta screen angle and pattern are also likely to be undesired. But a low Yellow associated with a strong Cyan in Brooklyn’s shirt makes it Green and should not be removed. Most skies are made using Cyan alone, so the presence of Magenta in skies probably needs to be removed. That sort of reasoning. With practice, it becomes pretty much second nature.

If further cleaning up was needed, I would also work on the Cyan and Black channels. But in this case it looks pretty good with just the Yellow and Magenta work. I really don’t want to spend too much time on a non-Kirby work.

 

Colour Correction : Avoid the Red Halo

 

Tom Kraft (2006/01/28) [#55]:

I own a Microtech ScanMaker 9700XL.

I’m having a problem with scanning original art. Some of the finer black lines have a blue or red halo, usually 2 or 3 pixels above the black line, or in some cases the entire line has a blue or red tint.

I tried scanning at a higher resolution. This diminished the halo but did not eliminate it. I recalibrated the scanner with the included Kodak calibration reference but observed little difference.

Is there something I can do to eliminate the halo?
Randolph Hoppe (2006/01/28):

What software are you using to scan? If it’s not VueScan or SilverFast, it might be worth trying their demos, although the IT8 colour calibration is part of the paid versions:

VueScan http://hamrick.com

SilverFast http://silverfast.com
Harry Mendryk (2006/01/28):

I recently got a Microtek 9800XL, and do not have the problem you are reporting.

The problem sounds like one of two things:

1. Calibration – I know you said you re-calibrated it. But the Kodak calibration reference is probably smaller than the art page. If possible, try calibrating with the Kodak reference placed midway on the glass.

2. Filter & Descreen – Make sure you have both of these set to none. The description of a halo sounds like a sharpen filter is in effect.

This does not include hardware issues. I used to take original art to a digital service in the city. They could scan art much larger than I could, for a relatively low fee. But one time, scans I got from them had the problem you describe. I tried to talk to them about it, and they re-scanned for me two more times. But the problem never went away. In the end they told me they did not want my business anymore.

Colour Correction : Greys

 

David [betroot] (2006/02/02) [#93]:

I used Harry’s method of digital bleaching, then some ideas of my own to try and get rid of the grey.

There was a post in the Kirby Group about the greys on covers.

Whoever did the colouring at early Marvel, in the transition to the Silver Age, was fond of using grey as a colour – presumably he/she thought it made the colours “pop” more.

I assume that a grey tone was added to a copy of the original art, either with Benday stick-on screens, or in some cases with watercolour, so that the photographed black plate had greys added.

He may of course have done it on the original art.
Harry Mendryk (2006/02/02) [#95]:

I tried to follow the discussion in the Kirby list about the use of grey on Atlas/Marvel covers. But I was never completely clear on exactly what was meant by using grey. In this particular cover, Journey Into Mystery #52, are you talking about the grey in the giant’s costume? If so, the low resolution of the scan makes it hard for me to give a definitive answer.

My experience with original art is that if the grey was added using Benday, the dot size would be different from the screen dot used in printing the colours. Generally, Benday dots are larger. And, since Benday is manually applied, there often are differences in the dot row/column angles from place to place on the image. Differences in dot size (but not angles) would also be expected with the special pre-treated boards that were sometimes used to achieve the greys. Water colour was also mentioned, but I have never seen it on original Golden or Silver Age art.

However, the JIM #52 scan’s resolution is too low to make such comparisons with confidence. But I will hazard a guess that in this case the greys were achieved just like the rest of the colours. That is, by the comic colourists, based on colour guides. They were not on the original art.
Greg T [Greg Theakston] (2006/02/02) [#96]:

Ben-Day, in my experience, is a treated board with two sets of lines at 45 degree angles, left and right. One set of lines is 30%, the other is 50%, so if both are used in an area, the result is an 80% tone.

There may have been a dot-pattern Ben-Day, but I don’t recall seeing it. Usually, the dot pattern grey is accomplished with Zip-A-Tone: plastic sheets with a sticky back, cut with an Exacto-knife.

The water-colour you are talking about was three shades of blue ink which were translated at the engraver’s into a dot pattern. The Marvel cover greys were produced by ink-toning a blue-line board: a fifth colour-separation.

Jack Adler and Jerry Serpe did the grey tones at DC. I suspect Sol Brodsky did them at Marvel.

 

 

Colour Correction : Colour Noise

 

Dario [vulcaniano99] (2006/02/17) [#100]:

Using Photoshop CS2, the filter “Surface blur” will remove colour noise.

 

 

Colour Correction : Limit Colour to 8 bit
Harry Mendryk (2006/01/25) [#52]:

Subtle colour differentiation has nothing to do with scanning resolution.

Rather, it is governed by the bit depth. Most people scan with 8 bits per colour channel. But some scanners allow 12 or even 16 bits per channel. 8 bits provides 256 tone levels for a channel, 12 bits provides 4,096 levels, and 16 bits 65,536.

Personally, I think 8-bit depth is sufficient.
Darci (2007/09/04) [#147]:

How many colours should a comic’s palette contain? It seems to me there’s no point in scanning for 16-bit colour, for example, if there are only 1,024 possible colours. What do three colours, times three screen sizes, plus one (for solid black) work out to be?
Harry Mendryk (2007/09/04) [#148]:

It seems to me you have reached the right answer for the wrong reason. If in fact we were trying to use a computer to produce new comic book art that uses a silver age palette, then you don’t need a lot of bits for each colour channel. In fact to minimize file size you would probably be better off using an Indexed Colour file format.

But that is not what we are trying to do. I am trying to restore, as closely as possible, the original colours from scans of old comics. Primarily the problem is that the page has yellowed, affecting the colours scanned. You need more bits per colour channel to make the distinctions; you simply are not dealing with just 1,024 different colours. Having said that, you don’t need to distinguish millions of colours either.

In my restoration techniques, I work with the individual colour channels. What matters to me is how many levels I can get from each colour channel. With 8 bits you get 256 different levels, with 16 bits you get 65,536. I find 256 levels is more than enough. 65K is overkill, and such overkill results in file sizes that are difficult to handle.

 

Darci (2007/10/26) [#149]:

Comics have 63 colours (plus black and white).
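
NB: The arithmetic usually given for that figure: each of the three process colours could be printed at a 25%, 50% or 100% screen, or left out, which gives 4 × 4 × 4 = 64 combinations, i.e. 63 colours plus paper white (black being overprinted separately). The exact screen percentages here are the commonly cited ones, not something established in this thread; a small Python check of the count:

    from itertools import product

    levels = [0, 25, 50, 100]                  # commonly cited screen options per process colour
    palette = list(product(levels, repeat=3))  # every (C, M, Y) combination
    print(len(palette))                        # 64, i.e. 63 colours plus paper white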

 

Steven [webster2000] (2009/05/09) [#152]:

There were no colour matching standards before Pantone. Individual printers provided designers with numbered swatch books, but these would vary from place to place.

Resizing : Moire Patterns

 

Harry Mendryk (2006/01/19) [#5]:

My restoration of Young Romance #6. The front cover is surprisingly well preserved. There were relatively few tears or creases.

I needed to slightly reduce its size. Moire patterns occurred when I did. I had particular problems with patterns in the man’s brown jacket. So I had to do a special job on it.

I ended up with 3 layers: one with severe de-screen of the man’s jacket, another for the lesser de-screening of much of the figures, and a final layer for the solid colours that required no de-screening (mostly the background).

 

Harry Mendryk (2006/01/23) [#24]:

David wrote:
> surely in the “Tomorrow Man” cover
> there’s ways of getting out the grey
> other than the Eraser tool

The grey in the word balloon is close in tone to some of the grey in the background. Removing the grey from the balloon using the “Level” tool will adversely affect the cover as a whole.

If for some reason I really did not want to use an Eraser tool, I would probably create a selection of just the word balloon. Lots of ways to make such a selection, perhaps the Lasso tool would do. That way I could use my Level tool on the grey without affecting the rest of the cover.
David [betroot] (2006/01/24) [#30]:

Your mention of moire in the “Tomorrow Man” restoration was of interest. My scanner has built-in filters to get rid of “dots” in printing — using ‘Magazine’, ‘Newspaper’ print (it doesn’t have an ‘Art magazine’ filter that I’ve seen in other scanners). I tried to scan an art picture from a library book and none of the filters (de-screeners) in the scanner was perfect, and left a diagonal line moire (they were very small pictures and I was enlarging them). Do you have an idea for getting rid of moire?
Harry Mendryk (2006/01/24) [#37]:

Moire problems are a recurring headache. The most general solution is to scan at high resolution. Usually the further the scanning resolution is from the comic’s screening density the better. When I work with 600 dpi scanning (and even at times 1200 dpi) I generally do not have any Moire problems. That is unless it becomes necessary for me to re-size. Then it may show up. Both Rand and I have discussed de-screening techniques, and some scanners already come with their own de-screening utilities. If you like we can go over that more carefully. But there is no magic bullet that prevents Moire at all times.

Rand once mentioned getting some of his procedures from some columns by Dan Margulis. When I dug out some old magazine articles that helped me when I first got into doing image manipulation in Photoshop, it turned out that they were also written by Margulis. Dan’s articles are well written and contain valuable info. But his writings are generally for images ultimately used in commercial printing. I do have some articles by Margulis that talk about how to prevent Moire. But I want to experiment with some of his techniques to see if they are truly useful for our type of work.

 

Harry Mendryk (2006/01/26) [#54]:

Reducing Moire from scanning –

Previously I was asked about how to prevent Moire patterns when scanning from printed material. The short answer is that there is no way that is guaranteed to work in all cases.

However Dan Margulis wrote some articles on the subject that I recently re-read. He provides a shortlist of practices to follow. I’ve reordered them slightly, and added some notes in brackets.

1. Always scan printed material at the highest possible resolution. These scans can be resized down later. (Although a high-resolution scan is moire-free, this does not mean that the resized image will not have Moire.)

2. Don’t use a sharpening filter. (Most consumer scanners use automatic sharpening filters when scanning. To avoid this you would have to get into the setup for your scanner and turn off sharpening.)

3. Don’t use an automated descreening package. (Some scanners have descreening capabilities. Some are better than others. But even when they work they destroy detail. Dan advocates a manual approach in Photoshop. But Dan’s approach is complicated, and I have not actually used it. So I would say if your scanner has descreening, first try scanning without using it. If that does not work out well, try again with descreening.)

4. Learn to read the screen angles of the original. (For black and white, this is pretty easy. For colour there is a different angle for each CMYK ink. Generally it takes some effort to determine these individual angles. Converting the file from RGB to CMYK in Photoshop helps. But it still takes some practice.)

Dan Margulis also provides a 30-degree rule: to minimize Moire, scan the original at an angle 30 degrees from the original’s screen angle. For black and white prints this is not difficult to determine: most B&W images use a 45 degree screen angle. Using the 30 degree rule would mean scanning with the original at a 15 degree angle. Occasionally some B&W are screened at 0 degrees. That would mean scanning the original at 30 degrees. I have never come across B&W screened at any other angle; but if you read the screen angle, you can determine the best scanning angle in all cases.

Or if you cannot read the screen angles, scan a B&W image first at 15 degrees, and if that doesn’t work try 30 degrees.

After the image is scanned, you can use Photoshop to rotate the image back to the original vertical.
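
NB: The rotation back to vertical does not have to be done in Photoshop; any image library can do it. A minimal Pillow sketch (file names hypothetical; the sign of the angle depends on which way the original was tilted on the glass):

    from PIL import Image

    # scan made with the original placed at 15 degrees on the glass
    scan = Image.open("angle_15_raw.tif")

    # rotate back by the same amount in the opposite direction
    straight = scan.rotate(-15, resample=Image.BICUBIC, expand=True)
    straight.save("angle_15_straightened.tif")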

Attached are two versions of the same image, cropped to keep the files relatively small. The original was from a movie ad in a newspaper. View them at 100%, or at Actual Pixels: viewing at other than 100% makes it harder to see the Moire; viewing at a reduced size may show Moire on the monitor that is not really present in the original file. The first (angle_0.jpg) was scanned normally. If you look at the forehead of the actress, or in the background, you will see the Moire pattern. The second image (angle_15.jpg) was scanned at a 15 degree angle, then rotated back using Photoshop (Edit > Transform > Numeric). This second image, scanned at an angle, has no Moire.

I can’t say if following the 30 degree rule will always work perfectly. But it should always minimise the Moire pattern.

But things get messy when scanning colour prints. These prints have a different angle for each ink, and they attempt to follow the 30 degree rule themselves. But although a screen is said to have some particular angle, it really is composed of rows at that angle and columns 90 degrees to the angle. This means that only three colours can follow the 30 degree rule in CMYK; the fourth ink must be at some other, non-optimal angle. The eye is less sensitive to Yellow, so that is the ink that normally gets the poor angle.

For CMYK prints, the screen angles normally are Cyan (15º), Black (45º), Magenta (75º) and Yellow (0º). There simply is no perfect scanning angle available; the best that can be done is to be 30 degrees from two of the ink colours. Which two can vary depending on the particular image. But Margulis suggests that the best scanning angle is 45 degrees. This is best for Cyan and Magenta, but not so good for Black and Yellow. In fact it is the absolute worst for Black, so I am a little surprised by his suggestion. So I would say try his 45 degrees first, then also scan at 15, 75 and 0 degrees. Use whichever one is best.

Years ago when I first started doing comic scans, I re-read Dan’s articles. But I never tried following them. One of the reasons is a practical one. Most consumer scanners scan up to about 8.5 by 11 inch images. This is fine for comics, until you try scanning them at an angle. Even at 15 degrees, a comic will not fit on this size of scanner. Dan Margulis’s advice is only useful if you have a large scanner, or for scanning prints smaller than comic books.

 

 

High Resolution scanning : Advantages
Harry Mendryk (2006/01/24) [#33]:

Scans obtained from eBay have severe limitations with respect to the colour correction method I use.

One major shortcoming is their low resolution (typically only 100 dpi). Golden and Silver Age comics typically are printed with a screen pattern of 85 lpi (lines per inch). At 100 dpi a screen dot pattern on the comic page does not sample well. In Photoshop, first view the attached file at 100% (x1) magnification. The dot pattern is readily seen. Now use Image > Image Size, making sure “Resample Image” is set; then set the Resolution to 100 dpi (at this point DO NOT SAVE). Look at the image again: you no longer can see the screening. (After doing this test, discard the Image without saving it).

My method works best when the scan is fine enough that the comic’s screening dots can clearly be distinguished from the paper background. I usually work at 600 dpi, the 300 dpi of this example is a compromise for email purposes.

The other limitation of files on eBay is that they are generally adjusted to look good. This is usually an auto-adjust. But when an image is adjusted it often loses data that would have been useful to my colour correction technique. Actually my example just barely escapes losing data.

 

Harry Mendryk (2006/01/24) [#34]:

If you have Photoshop and are going to try to use my technique on the Hi-Res scan I posted, you must have CMYK conversion set up properly in Photoshop. I have two versions of Photoshop.

In Photoshop 5, the setting dialog can be found using menu item File > Color Settings > CMYK Setup.

In Photoshop 7, getting the dialog is a little more involved. First bring up menu item Edit > Color Settings. At the end of the CMYK field is a checkmark; clicking it causes a list of options to be displayed. Choose “Custom CMYK”.

Once you get the CMYK Setup dialog, in the Separation Options select GCR, and in the Black Generation select Maximum. Click OK (twice in Photoshop 7).

NB: The purpose is that now the black channel will have a better black, and there will be less black in the colours.

This CMYK setup is important in that it defines how greys are converted. Commercial printers often want part or all of the greys to be made with CMY inks. For colour correction we want greys to only be in the Black channel. This setup provides that.
David [betroot] (2006/01/24) [#39]:

The first image is an example of colour mis-registration: you can see the mid-ground girl’s lipstick colour is ‘off’ – is this solved by moving the red channel, so the red registers correctly?

NB: The red of the lipstick does not coincide, on the image, with the girl’s lips/mouth. This is due to a mis-alignment of the K plate (holding the line art) and the M plate (holding the Magenta ink), known in printer’s jargon as mis-registration.
Harry Mendryk (2006/01/25) [#53]:

I goofed, and failed to convert the file from CMYK to RGB.

The reason for this image is so that anyone processing the original scan I posted, following my procedure themselves, could have something to compare their results with. If they use the same settings I did, they should get the same results.

But if they decided to use different settings (a valid thing to do, particularly when doing the CMYK adjustment) they could see if their version turned out better. I expect people will have different preferences on how the final image should look.

I purposely did not do any manual editing on this image. The image is posted as an attempt to allow members to understand my colour correction technique.

In Photoshop you can select the Magenta channel, then use the Move tool to shift it about (i.e. move the Magenta colour patterns to coincide more accurately with the line art: termed ‘registration’ correction). If shifting it up/down and left/right is not sufficient, you can also use Edit > Rotate to do rotation.

Unfortunately, fixing registration problems, particularly on interior pages, almost always ends with a lot of fixing and touching up. As you move the magenta into proper registration, areas which originally were under the black line art (i.e. were over-written by black) become exposed. These newly exposed areas will have to be re-touched.
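
NB: A rough scripted equivalent of the whole-pixel part of that registration fix (only a sketch, with hypothetical file names and offsets): split the CMYK channels, shift the Magenta plate, and re-merge. Rotation, and the re-touching described above, still have to be handled separately.

    import numpy as np
    from PIL import Image

    img = Image.open("page_scan.tif").convert("CMYK")
    c, m, y, k = img.split()

    # shift the Magenta plate 2 px up and 1 px left (hypothetical offsets);
    # note that np.roll wraps around the edges, so check the borders afterwards
    m = Image.fromarray(np.roll(np.asarray(m), shift=(-2, -1), axis=(0, 1)))

    Image.merge("CMYK", (c, m, y, k)).save("page_scan_registered.tif")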

 

 

Photoshop LAB Color
Harry Mendryk (2006/02/15) [#97]:

I am currently reading “Photoshop LAB Color” by Dan Margulis.

The subject of the book concerns adjusting photographs, but I am interested in adapting his ideas to digital comics restoration.

LAB colour is an alternate colour mode used by Photoshop. It provides some benefits as compared to RGB or CMYK, but is not as intuitive.

It consists of three channels: A, B, and Lightness. The Lightness channel is the easiest to understand. Its name pretty much covers what it shows: it is similar to the grayscale of the image. The A and B channels are colour channels. Both A and B show the range between two different colours, with the absence of either colour indicated by a midway point. The A channel is for green (negative numbers) and magenta (positive). The B channel is for blue (negative) and yellow (positive). For the following simple adjustment, it is not important to know which colours are part of A and which are part of B.

I will describe a way to do a quick colour correction for a comics scan using LAB colour. This could replace the RGB level adjustment I described previously for colour correction (also for digital bleaching). Like the RGB adjustment, the LAB adjustment only makes an initial rough correction, which can be further improved by secondary adjustments in CMYK mode. I described these other adjustments in previous posts.

1. Convert the scan to LAB colour:
Image > Mode > LAB Color

2. Open the Curves tool:
Image > Adjustment > Curves

3. I work with the Lightness channel first; it should be the default when the Curves Tool dialog comes up. I mouse click the cursor over an area that should be white, in this case inside the word balloon in the center. While the mouse button is held down, a little circle will appear on the Lightness curve. I note where it occurs, and then drag the nearby curve end horizontally to the right to match that location. If I now hold the mouse button over the same area, the small circle should be over the point where the Lightness curve starts rising from the axis. The Info box will show the L channel in this spot to be, in my case, 94/100. The first value (94) depends on the particular scan’s white value, but the second (100) is what we are aiming for.

4. Still using the Lightness channel in the Curves Tool dialog box, I hold the mouse button down over an area in the image that is black. In my example I used the lettering inside the yellow heart. Again a circle will appear on the curve to indicate where to adjust. This time I moved the nearby curve end to the left. When finished, the Info box shows the Lightness channel of these letters to be something like 18/2. Again the 18 value may differ for other scans, but the 2 (or 1 or 0) is my goal.

5. Having adjusted the whites and the blacks, I noticed that the image had become too dark overall. I click the mouse button on the Lightness curve someplace in the middle and drag the curve to the right. This dragging causes the curve to no longer be a straight line. You can tell if you are dragging the curve in the correct direction, because if you go the wrong way it has the opposite effect of what you want. I have attached an image of the Lightness curve after having made the three adjustments to it.

6. I now select the “A” channel in the Curves Tool dialog box. For this channel I will only be adjusting the white. I hold the mouse button down inside the same word balloon. In my case the little circle shows up right in the middle of the curve. The Info box indicates the area has 1/1 for the A channel. 0 is the ideal value for no colour cast, but 1 is good enough. So in my case I make no adjustments to the A channel. Had this not been the case, the adjustment would have been similar to what I describe below for the B channel.

7. I next select the “B” channel. When I hold the mouse button down with the cursor in the word balloon, the little circle appears on the lower half of the curve. The Info box shows values of something like 17/17. I drag the opposite end of the curve, in this case the top, to the left. I keep trying different settings until clicking the mouse in the word balloon shows the circle midway and the Info box shows something like 17/0 for the B channel.

8. Having done all the adjustments, I click the “OK” button in the Curves Tool dialog box. I would now convert the image out of LAB colour mode to RGB (if all I wanted was a rough correction) or CMYK (if I wanted to get even better adjustment). I have also attached a before and after image of the cover I tried this on (“Young Romance” #4). Note this example only shows the results of the LAB Curve adjustment, no other work has been done on it.

I have just started experimenting with using LAB color adjustments. I do not yet know whether it provides any benefits as compared to the RGB adjustment I described in an earlier post.

I have also experimented with improving the colours in general (not just the blacks and whites) using LAB. But those maneuvers are a bit more complicated.
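
NB: A rough scripted version of the same idea (only a sketch, and only loosely equivalent to the Curves procedure above, since it shifts the whole A and B channels rather than just the white end of the curve): convert to LAB with scikit-image, stretch Lightness between a sampled black point and white point, and neutralise the colour cast measured at the white point. The file name and sample coordinates are hypothetical.

    import numpy as np
    from skimage import color, io

    img = io.imread("young_romance_4.png")[..., :3] / 255.0   # RGB in [0, 1]
    lab = color.rgb2lab(img)
    L, A, B = lab[..., 0], lab[..., 1], lab[..., 2]

    # hypothetical sample points: inside a word balloon (white), solid lettering (black)
    wy, wx = 500, 400
    by, bx = 320, 610

    # stretch Lightness so the sampled black goes to ~0 and the sampled white to ~100
    lo, hi = L[by, bx], L[wy, wx]
    lab[..., 0] = np.clip((L - lo) / (hi - lo), 0.0, 1.0) * 100.0

    # shift A and B so the sampled white becomes neutral (0, 0)
    lab[..., 1] = A - A[wy, wx]
    lab[..., 2] = B - B[wy, wx]

    io.imsave("young_romance_4_lab.png", (color.lab2rgb(lab) * 255).astype(np.uint8))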

Digitally Colour the Lineart

 

Dario [vulcaniano99] (2006/02/17) [#100]:

In Italy they started to publish Marvel comics in 1971. The paper quality was much better than that used in the USA originally, so the pages are much better printed than in the American originals. However, due to the high cost of colour, they print only half the pages in colour, printing only the lineart of the others.

I would like to digitally colour the lineart. Do you have a process for that?
Harry Mendryk (2006/02/17) [#101]:

To digitally colour the pages originally printed as line art, I can make a few suggestions. I’ve done something similar, using line art that I digitally bleached from some Joe Simon covers.

The first step would of course be to scan a line art page. If the printing quality is pretty good, in Photoshop first use Filter > Noise > Median with a very low Radius setting (perhaps 1). Then use Image > Adjustment > Threshold to convert the line art to pure black and white. If the print quality is not good enough, you may have to just use Image > Levels or Image > Curves to improve it as much as possible. In either case, you now have the line art in grayscale.

Next open a new file that is the same size as your line art image. But this file should be in whatever colour mode you want to work in. I generally do my work in CMYK. Make a new Layer: Layer > New > Layer. Right now this Layer is blank, but eventually will hold the Line Art.

On the new file, create a new Channel, again for the Line Art. Now going back to the original Line Art file, Select > All and then Edit > Copy. Go to the Line Art Channel of the new file and Edit > Paste. Now Select > Load Selection, and in the Channel selection of the dialog box, choose the Line Art Channel. Also click on the Invert box. After clicking OK, go to the new Line Art Layer you created before. Make sure your foreground is pure black. Now use Edit > Fill with Foreground Color, 100% Opacity and Normal Mode.

You now have a Layer for the Line Art, and a Background Layer that you can use to do the colour work in. Working in the Background will not affect the Line Art. Use whatever tools you want: Pencil, Paintbrush, and Airbrush are commonly used. You may not need the Line Art channel any more. But I would keep it, in case you mess up your Line Art Layer by mistake.

You’ll want to match the colour to the Italian comic’s coloured pages. Take a scan of one of them, and use on it Filter > Blur > Gaussian Blur. Set the Radius high enough to remove the screen dots. You can then use the Eye Dropper Tool to select colours from this file.
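
NB: For anyone who wants to script parts of this, the sketch below does the equivalent of the median/threshold clean-up and the black line-art overlay with Pillow. File names and the threshold value are hypothetical, and the actual colouring work in between is still done by hand.

    from PIL import Image, ImageFilter

    # 1. Clean up the scanned line art: median filter, then threshold to pure black/white
    line = Image.open("lineart_scan.tif").convert("L")
    line = line.filter(ImageFilter.MedianFilter(size=3))
    line = line.point(lambda p: 0 if p < 128 else 255)        # hypothetical threshold

    # 2. Start a colour file of the same size to paint the flat colours into
    colour = Image.new("CMYK", line.size, (0, 0, 0, 0))
    # ... paint colours into `colour` here (Pencil / Paintbrush / Airbrush equivalent) ...

    # 3. Re-impose the line art: fill with solid black wherever the line art is black
    black_mask = line.point(lambda p: 255 if p == 0 else 0)   # mask of the line work
    colour.paste((0, 0, 0, 255), mask=black_mask)

    colour.save("coloured_page.tif")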

 

 

Modern Reprints : Colour Techniques

 

Davis Trell (2006/08/09) [#124]:

In some of the recent recreations, the colours are too lurid.

For the Rawhide Kid one (the “Two Gun Kid” cover), the colourist even coloured the Kid’s hat yellow! It was really hard on the eyes. Your cover was okay, Harry; it was the insides I disliked.

Some on the Kirby group argued that we are used to seeing old, faded yellows on Kirby’s pages, but when first printed they weren’t faded!

Also the colourists back then had a limited range of colours available, most noticeably in value, and couldn’t overwhelm the black line art. With modern colour the lines seem less important, with the oversaturated colours fighting for attention.
Gregory A Huneryager (“Greg”) (2006/08/09) [#125]:

I agree. The stories need to be coloured with modern paper stock in mind. I prefer the look of “Batman Chronicles” to the Archives, for that reason. The paper on the former is cheaper looking, but it doesn’t hurt to look at.

Another preferred variation was the recolouring in DC’s hardcover “Best of the ’40s” and “Best of the ’50s” books, which used a less white stock with some nice colouring, particularly by Greg Theakston on the Lou Fine stories.

I really think the best way to do it is to photograph the story. I don’t know if that’s more expensive or time consuming, but I like the way it looks in the Marvel “Five Decades” book and the recent Krigstein comics book.

 

Harry Mendryk (2006/08/09) [#126]:

There is always personal preference when it comes to colouring. Particularly when reprinting Golden or Silver Age comics. The printing technology just isn’t the same. IMHO a bright red on flat paper looks very different than when printed on high quality glossy paper.

But still, common sense should prevail. Rawhide Kid with a yellow hat? Everybody knows that good guys wear *white* hats! As they used to say in old Westerns, when someone would not come out to fight, “yer yellow”.

 

Craig Ede (2006/08/09) [#127]:

The Will Eisner “Spirit” Archives do the best job matching non-slick paper and colour, improving on the originals.

But, of course, the original “Spirit” didn’t have glossy covers.

 

Harry Mendryk (2006/08/10) [#131]:

I agree. The Spirit archives are amazing.

 

Craig Ede (2006/08/09) [#128]:

There was a lot more restoration involved in the Krigstein book than just “photographing” the stories, as the article in the book makes clear. That book is my top choice as an example of how comics reprinted in hardcover should be handled.
Gregory A Huneryager (2006/08/09) [#129]:

In the Krigstein volume, I’m sure the Marvel “Five Decades” stories — those in the back of the book — were just photographed, and they look great.

It’s amazing how sophisticated some of the old stories are in terms of their colour use, most of which gets lost in the reprint. The early Sub-Mariner stories in “Marvel Mystery” are sometimes quite exquisite, as are some of the Vision stories. “Marvel Mystery” #13, which has the first Vision, has very interesting colouring on both of them, especially on the clouds and smoke. I’m assuming that Marvel was so small that the individual artists did their own separations.

Marvel should have found some way to do a better reprint of Marvel #1-4. That may be the worst of the archival reprint books.
Matthew Moring (2006/08/09) [#130]:

The guides we were given on the “Captain America” Masterworks volume were pages sourced from the Microcolor microfiche sheets. The colour was way off on them.

On other books such as “The Rawhide Kid”, they want the colourists to follow the same colours as appeared in the original issues, albeit with proper trapping.

I agree. I’d like to see a wider range of colours and gradients used, rather than the flat colours of the first wave of “Masterworks” from the 1980s.
Harry Mendryk (2006/08/10) [#132]:

Not all of that “Captain America” Masterworks volume was sourced from microfiche, because I supplied them with good scans of “Captain America” #2.

 

 

Matthew Moring (2006/08/10) [#133]:

Some of the pages were fine, but most weren’t. I worked on a Tuk story for it (might have been the one in #2), and that was among the cleanest, easiest to restore stories I’ve encountered: good quality scans.
Harry Mendryk (2006/09/22) [#136]:

So far, all the comparisons I have made between the original comics and the Captain America “Masterworks” volume show that Marvel has done a great job of keeping to the correct colours.

The line art for “Captain America” #1 does not appear to be based on bleached comic pages like the rest of the volume. I have compared them to copies of the flats (a type of proof that uses line art and no colours) that Joe Simon has. They appear to be an almost perfect match, and do not show the type of blurring that occurs due to the original primitive printing techniques used.

The “Captain America” Masterworks volumes seem accurate and are great buys. I doubt many on this list could afford to buy the original comics: I do not have them! My only complaint is that I dislike the use of glossy paper for Golden and Silver Age comic art. I much prefer the flat paper used in DC’s “Spirit” volumes, which I consider the gold standard for reprints.

 

Greg [Theakston] (2006/09/23) [#137]:

That book was photographed from flats in the Jack Kirby Collection. Wish I’d been using the computer to retouch the rest of the volumes, but those were more primitive times.

 

 

Note on other Methods

 

Greg [Greg Theakston] (2006/02/01) [#86]:

1. Destructive methods (Painting Covers)

There are times when the chemical approach simply won’t do. Bill Black asked me to convert a Frazetta GHOST RIDER cover for him, and the black plate just floated off of the page. Whatta mess. Ditto on Harvey covers of the same period. And Atlas.

Those covers were printed on “clay-coated paper.” A low grade paper, coated with a fine layer of clay for a gloss finish. Cheaper I suppose, but a pain in the neck for me. Charlton used clay-coat as well. When the paper gets wet, the clay-coat lets go, and the result is a mess.

Maybe I should have sun-bleached them, as I did during the 1970s, but that’s so time consuming, unless it’s summer. I’ve been searching for the perfect process for 30 years.

These days I paint-bucket in white to get results on covers, but it takes forever!

Alex Toth Reader Vol.2 is on the newsstands this week. I took great care in reconstructing the Ben-Day patterns on CRIME AND PUNISHMENT #66. Hours and hours spent unclogging lines, and reconstructing patterns.

Destroy a comic? HAW! I’ve done that to $150,000 worth of comics. As Spock said, “One must die, so that many will live”. Or, as Walt Simonson said about Theakstonising, “You gotta break some eggs to make an omelette.” F.Y.I., I use the lowest-grade copies I can get.

 

2. Non-Destructive method (Tracing)

Next up, RAWHIDE KID #24 for Marvel. Seems the proofs and film are missing. Gawd those covers are a bitch. I’ll probably re-ink it on vellum. Short-cut method used on some of the “All-Winners” covers, and interiors. So much faster to just trace them off at 300% than scrub, and scrub, and scrub.

I believe that it’s important for retouchers to understand how the inker worked, and his intent.

 

< finis >

 

 


Posted in American Comic Books

Science – Curvature of Spacetime

Reasons for Curvature

Curvature of spacetime is an illusion.

Because gravity is a field which propagates spherically (i.e. it radiates outward from a central mass in all directions equally), points of equal gravitational strength (being at an equal distance from the central mass) necessarily lie on a curved surface, because that surface is a sphere.

Objects in motion within that field follow a path of equal field strength (unless acted upon by an outside force); and because that path is curved, the object follows a curved path through spacetime. It is the field strength which is curving, not the actual underlying fabric of spacetime, but the effect creates the illusion that the fabric is itself curved.
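
A quick numerical illustration of that point, using nothing more than the standard inverse-square law (the figures are ordinary textbook values for the Earth and are illustrative only): every point at the same distance from the central mass sees the same field strength, so a surface of equal strength is a sphere.

```python
# Field strength g = GM / r^2 is identical for any direction at the same radius r.
import numpy as np

G, M = 6.674e-11, 5.972e24          # gravitational constant; Earth's mass (kg)
r = 7.0e6                           # 7,000 km from the centre (m)

for direction in ([1, 0, 0], [0, 1, 0], [0.6, 0.8, 0]):   # three unit vectors
    p = r * np.array(direction, dtype=float)              # a point at distance r
    strength = G * M / np.dot(p, p)                        # GM / r^2
    print(f"{strength:.4f} m/s^2")                         # the same value each time
```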

 

What does Einsteinian curvature of spacetime imply?

General Relativity is usually interpreted (albeit misleadingly) as predicting that mass curves Einsteinian spacetime. Although this is inaccurate, it is a common form of shorthand (harmless so long as the true state of affairs is borne in mind: that what is curving is actually the field strength).

This curvature influences the path of light and other electromagnetic waves (and perhaps gravitational waves), as these waves propagate through spacetime along the (curved) path of least resistance, and hence follow that curvature.

As the universe is circular, theoretically such a wave could, by following a curved path – and if given sufficient time – arrive back at its starting point. In practice, this does not occur, because the journey time would exceed the current age of the universe.

However, this curvature of spacetime might imply that there is a shorter path between two widely separated points: in other words, if the shortest distance between two points is a straight line, but electromagnetic waves do not follow a straight line, this implies that there exists a shorter path – between, say, star A and star B (or galaxy A and galaxy B) – than the path taken by electromagnetic waves.

Electromagnetic waves follow a low-energy path. A more direct (i.e. shorter) path is of necessity a high-energy path: one which can only be followed by injecting energy, because instead of following a line of constant inertia (i.e. constant gravitational strength) it involves crossing the gravitational field, passing through points which have a greater inertial value than the starting point.

 

A Non-Curved Path

On a related point, is a particle (anything which has mass) constrained by the same principles which restrict an electromagnetic wave (which has no mass) into following only a curved path?

Does the inertia inherent in mass, which Newton theorised as tending to make it follow a straight path unless acted upon by an external force, distinguish massive from massless objects in this respect? The answer seems to be that it does.

The implication is that there are conditions under which it is possible for an object or particle having mass to follow a non-curved path (perhaps as a factor of mass plus acceleration), although impossible for a massless one (which cannot be accelerated).

Given that a shorter path is theoretically possible, an almost massless particle (such as a neutrino), which does not interact noticeably – or at all – with the fabric of spacetime, might be capable of making the trip between point A and point B in less time than light, because it is capable of following the shorter path.

Some theories of quantum entanglement imply that effects are occurring faster than it would be possible for a signal to pass between the two points concerned at the speed of light. If a neutrino is capable of following a shorter path than light must follow, that would explain how a signal might be transmitted in a shorter time.

A neutrino is not restricted to the curved path of electromagnetic effects, since particles are free to move in any direction, hence are free to move across the field lines. An ordinary, i.e. massive, particle would not do so, since its inertia would cause it to tunnel along the path of least resistance, the same path which (when encountering local gravitational pockets, such as a star) electromagnetic waves follow. However, since a neutrino is almost massless, having almost no inertia, it barely interacts with the field: it does not possess the inertia which restricts the movement of mass (hence can move at almost the speed of light). When it tunnels, it does not need to follow the line of least resistance: to it, all paths offer no resistance.

Accordingly, in theory, despite propagating at essentially the same rate as electromagnetic waves, a neutrino could take a more direct path, thereby moving from one point to another in less time than those waves.

Since a galaxy has a centre of mass which generates a gravitational field of broadly spherical shape, within a galaxy electromagnetic effects must be following a curved path between any two points. A particle which is capable of following a straight path must necessarily travel between those points in less time.

In theory, because a galaxy is circular in shape, its gravitational field is also circular. Light might curve around its rim, in a great circle, whereas neutrinos might pass through its centre. The distance from one edge to the opposite edge is less in a straight line through the galactic core than it is in a great circle that follows the circumference.
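
As a rough worked comparison of the two routes (the radius is illustrative only): edge to edge through the core is a diameter, 2r, while following the rim halfway round is half the circumference, πr, which is about 1.57 times further.

```python
# Straight path through the core versus the semicircular path around the rim.
import math

r = 50_000                           # light-years; an illustrative galactic radius

through_core = 2 * r                 # the diameter
around_rim = math.pi * r             # half the circumference

print(through_core, round(around_rim))       # 100000 versus ~157080 light-years
print(round(around_rim / through_core, 2))   # ~1.57, i.e. pi/2 times longer
```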

A neutrino must have a minimum mass, hence a small response to the inertial constant of the spacetime field.

That constant exists in an unmodified form where there is no gravitational field; the value of the constant falls as distance from a centre of mass reduces; and it attains a value of nil at the event horizon (because the tunnelling distance falls to zero).

A particle’s response to it is governed by additional factors: the mass of the particle; the velocity of the particle; the acceleration of the particle (if its velocity is not constant); and its angle of incidence to the gravitational field. These factors are usually summarised as its angular momentum.

The neutrino, having almost nil momentum (because it has too little mass to respond normally to inertia), has almost no connection to the spacetime field. So it does not follow the curvature when the field strength curves, because it does not respond to the field strength.

 

Space: The shortest distance is curved

It’s interesting how often people say “the Earth is pretty flat”.

On a very local scale there is some truth in that (just not very much, as the Earth is a sphere); but there is a nice analogy with space, since on a very local scale space, too, appears ‘flat’, and its curvature gradually emerges as the distance scale is increased.

The shortest distance isn’t really curved, but the lowest-energy transfer orbit always is, because resistance to movement (i.e. gravity) has a spherical pattern, since it radiates outward (spherically) from a central point, the Sun: what is curving is the field strength.
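
For a concrete worked example of a lowest-energy transfer orbit being a curved path, the standard textbook case is a Hohmann transfer: half of an ellipse around the Sun. A small sketch, treating both planetary orbits as circles and using rounded constants:

```python
# Hohmann transfer from Earth's orbit to Mars's orbit: the craft coasts along
# half of an ellipse whose semi-major axis is the average of the two radii.
import math

mu_sun = 1.327e20            # gravitational parameter of the Sun (m^3/s^2)
r_earth = 1.496e11           # ~1 AU (m)
r_mars = 2.279e11            # ~1.52 AU (m)

a = (r_earth + r_mars) / 2                       # semi-major axis of the transfer ellipse
period = 2 * math.pi * math.sqrt(a**3 / mu_sun)  # Kepler's third law
transfer_time = period / 2                       # only half the ellipse is flown

print(round(transfer_time / 86400), "days")      # ~259 days along the curved path
```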

 

Dark Matter

In theory, a particle which does not interact electromagnetically with ordinary matter, e.g. a particle of dark matter, would also be capable of moving from one point to another in less time than an electromagnetic wave. Dark matter is believed to generate gravity, a purely structural effect which modifies inertia (defined as the resistance to movement which spacetime offers to ordinary particles); but whether dark matter itself possesses inertia, i.e. whether it feels that resistance, is unclear.

Posted in Science

Science – Heisenberg’s uncertainty principle

Indeterminacy as an aspect of Heisenberg’s uncertainty principle.

One way of looking at Heisenberg’s theory is to consider the Apollo XI mission to the Moon, in 1969.

The spacecraft needed to be guided accurately: to such a degree of accuracy that miles per hour was considered too coarse a unit for plotting its trajectory, so all measurements of velocity were instead calculated in feet per second.

During the outward journey to the Moon, the spacecraft had a velocity of about 3,000 feet per second. This meant that it was impossible to state with accuracy the position of the spacecraft at any given moment: in the space of a second the vessel’s position altered by 3,000 feet (put another way, over any given second the spacecraft could be anywhere along a 3,000-foot stretch of its path).

Even if an interval of one-tenth of a second is used, there is still a margin for error (an *uncertainty*) of 300 feet, the amount by which the vessel’s position must alter during that period.

This error is not resolved even by employing an interval of one-hundredth of a second, or one-thousandth of a second, for in either case there remains an uncertainty (of 30 feet or 3 feet) in the position of the spacecraft.
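
The arithmetic behind these figures is simply the velocity multiplied by the chosen time interval; a two-line sketch:

```python
# Positional "uncertainty" = velocity x time interval, using the figure in the text.
velocity_ft_per_s = 3000

for interval_s in (1, 0.1, 0.01, 0.001):
    print(interval_s, "s ->", velocity_ft_per_s * interval_s, "ft")
# 1 s, 0.1 s, 0.01 s and 0.001 s give 3,000, 300, 30 and 3 feet respectively.
```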

This is a pretty good analogy for Heisenberg’s uncertainty principle, for it demonstrates that where an object is in motion (even at far lower speeds than the sub-atomic particles with which Heisenberg’s theory deals), it’s impossible to precisely identify the object’s position: that there is necessarily an uncertainty in any measurement of its position, caused by its motion.

This implies that a body’s motion, which causes a constant change in position, by continually varying its location within the (arbitrary) co-ordinate system being used, makes a nonsense of the notion that a moving body can have a precise position, at least in the sense that a stationary body has.

Since all objects within the universe are in motion at some level (on the Earth, objects share – simultaneously – the planet’s rotation about its own axis, the planet’s rotation about the Sun, and the Sun’s rotation about the galactic centre), this implies that no object can have a precise position (relative to an absolute frame of reference, if such a frame is even possible in a system lacking any fixed reference point).

An object can have an approximate position, relative to the chosen timeframe in use (i.e. dependent upon whether we – arbitrarily – choose a time interval of one second, 1/10th of a second, or 1/100th of a second); but not an absolute position.

What we are in fact measuring in a moving object is its change in position over time, so we should not expect to be able to measure, also, an absolute position for it: since an absolute and a relative position (i.e. relative to the chosen timeframe) are logical opposites.

Velocity is really a measurement of the extent to which position alters within a chosen time interval, and this, although yielding a speed for the motion, is also a measure of the uncertainty in the positional data.

Motion within a frame of reference that is itself in motion (such as motion relative to the surface of the Earth, a body which is continuously rotating) implies that such motion will not follow a straight line (if measured relative to an *external* reference frame, such as one centred on the Sun). Such motion, if viewed against any external frame of reference, will follow a curved path.

One consequence of this curvature is that the shortest distance between two points is not, in reality, a straight line. Following what appears (without the aid of a fixed, external point of reference) to be a straight line, the object in motion has in fact described an arc in space (or curve) instead.

Where the destination point is itself in motion, unless the journey time is instantaneous the destination point will be at a different location by the time of the journey’s end, compared with its position at the time of the journey’s commencement.

If we consider, once again, the 1969 mission of Apollo XI: on the outward stage of the mission, upon leaving the Earth, the spacecraft’s trajectory was aimed at a point in space where the Moon would be, 3 days later, when the spacecraft reached the Moon’s orbit (and its velocity was adjusted to rendezvous with the Moon at that location). It could not be aimed at the Moon’s position as it was on the launch date, because the Moon – being itself in motion – would no longer be there at the end of the 3 day journey time.
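
As a rough check on the figures involved in aiming ahead (approximate values only): the Moon completes one orbit in about 27.3 days, so over a 3-day coast the aim point has to lead the Moon’s launch-day position by roughly 40 degrees of its orbit.

```python
# Lead angle = (360 degrees / orbital period) x journey time.
sidereal_period_days = 27.3          # the Moon's orbital period, approximately
journey_days = 3

lead_angle = 360 / sidereal_period_days * journey_days
print(round(lead_angle, 1), "degrees")    # ~39.6 degrees ahead of the launch-day position
```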

In its initial phase, immediately following lift-off, the spacecraft entered Earth orbit. In orbiting the Earth, a spacecraft might be placed into a geosynchronous orbit (or, where that orbit is circular and lies above the equator, a geostationary orbit) – although this did not occur with Apollo XI. In such a case, however, the spacecraft maintains a fixed position relative to the surface of the Earth: to an observer on the ground, it remains stationary above a single point on the surface.

However, it is obviously wrong to suggest that the spacecraft is actually stationary. Both it and the Earth’s surface are in motion, and the appearance to the contrary is an illusion, created by the fact that both are rotating about the Earth’s axis with an identical angular velocity: one revolution per sidereal day.
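
The standard calculation behind such an orbit, included here only as a worked example with rounded constants, finds the radius at which the orbital period equals one sidereal day, so that the spacecraft and the ground rotate together.

```python
# Kepler's third law, solved for the radius whose period is one sidereal day:
# r = (mu * T^2 / (4 * pi^2)) ** (1/3)
import math

mu_earth = 3.986e14          # Earth's gravitational parameter (m^3/s^2)
sidereal_day = 86164         # seconds

r = (mu_earth * sidereal_day**2 / (4 * math.pi**2)) ** (1 / 3)
print(round(r / 1000), "km from Earth's centre")   # ~42,164 km
```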

The illusion occurs if, but only if, both the Earth and the spacecraft are viewed from a frame of reference which excludes all external reference points: if the spacecraft is viewed only with reference to the planet it is orbiting.

In a real sense, all motion shares this characteristic: by limiting the frame of reference, i.e. by omitting external points of reference, a false impression of any situation emerges.

For example, a Pacific Islander might sail East from Australia (we will, for the sake of this example, assume that it is possible to sail around the world without running aground). He sails always in a straight line due East: but less than a year later, although he has never turned aside from that straight path, he is astonished to find himself arrived back in Australia.

Because this hypothetical sailor has no external reference point, he has based his world-view on his straight-line course, employing a point-of-view which sees the world as flat. We, who have a point-of-view which recognises the Earth to be a sphere, can see that he must eventually return to his starting point. But only by expanding his frame of reference, from a 2-dimensional perspective to our 3-dimensional perspective, can he gain a true picture of his situation.

For us, only by expanding our 3-dimensional perspective to one which is genuinely 4-dimensional (i.e. which incorporates the concept of time, in addition to the 3-dimensions of space), can we gain a true perspective on the universe.

 

Posted in Science

Star Trek : The Enemy Within

Bloopers in “The Enemy Within” –

•  On the wall outside Yeoman Rand’s cabin, the nameplate specifies the cabin as “3C 46”, i.e. Deck 3 Section C, Cabin 46. But when Transporter technician Wilson passes the cabin later, in the scene where Janice Rand calls for help, Wilson shouts into the intercom that they are on “Deck 12”, when the plate had specified deck 3.

•  At one point early in the episode, Kirk says the surface temperature on the planet falls to 120 degrees below zero at night; but towards the end Sulu calls in to report that the surface temperature is now 170 below zero.

•  Mr Spock’s eyebrows in this episode are – consistently – very sharply tilted upwards, looking positively Satanic; but in most later episodes in season 1 – and thereafter – Spock’s eyebrows do not have this Satanic-looking tilt.

•  Spock, supposedly unemotional, expresses emotion openly in this episode: when he says he is annoyed at Kirk interrupting his work, he really does look annoyed.

•  The damaged transporter unit ioniser, which Scotty says can’t be repaired in less than a week, is actually quickly repaired, by Spock adding some leader circuits and bypass circuits.

•  The scratches on the evil Kirk’s face jump back and forth between his left cheek and his right cheek in different shots: Janice Rand is seen to scratch his left cheek, but in some shots – all of them closeups – the scratches are on the other cheek.

This looks like a blooper, but it is not due to a mistake in makeup (i.e. the make-up department has not made-up Shatner incorrectly, by putting the scratches on the wrong cheek).

What is happening is that, in order to enhance the effect (partly achieved by makeup) of the evil Kirk looking slightly different to the “real” Kirk, i.e. slightly nastier, in all closeups of the evil twin the image is deliberately reversed. This has the effect of making him look a bit unusual, because Shatner’s face (like anyone’s) is not completely symmetrical, so he looks slightly different when the image is reversed: we see him as he would look in a mirror. It’s only a slight difference, but noticeable.

This reversal can only be achieved in closeups: in wider shots, the technique would cause other actors’ faces, sets, and particularly any writing present, to also be reversed.

However, because the scratches on Shatner’s left cheek now appear on his other cheek, this gives a clue to what the film editor and director have done in order to achieve the effect. Because of the presence of the scratches, it looks like a blooper; but there is actually a valid reason for it.
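
For anyone curious, the reversal itself is trivial to reproduce; a tiny Pillow sketch (the file name is hypothetical):

```python
# Flip a frame left-to-right, as was done for the closeups of the evil Kirk.
from PIL import Image, ImageOps

frame = Image.open("closeup_frame.png")
ImageOps.mirror(frame).save("closeup_frame_reversed.png")
```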

 

 

Posted in Television

Blake’s 7 : Forty Years in Space

In January 2018, Terry Nation’s Blake’s 7 celebrated its 40th anniversary. The initial episode of the space opera was broadcast on BBC1 on January 2nd, 1978.

Just under four years and fifty-two episodes later, its final epic adventure hit our screens. And apart from two all-too-brief reunions in the 1990s, for the radio adventures, that was it for the BBC’s only successful television science fiction series outside of Doctor Who.

The original concept of Blake’s 7, as envisaged by Terry Nation, was ‘Robin Hood in Space’. Blake, the eponymous hero, took on the role of Robin Hood, and Space Commander Travis, representing the evil Federation, took the part of the Sheriff of Nottingham, with Blake’s crew as Robin Hood’s ‘Merry Men’ and Supreme Commander Servalan as Bad King John.

The original focus of the show was Blake’s struggle against the Federation. His role as Robin Hood called for him to be a traditional hero; yet, at the same time, Terry Nation made him obsessed with righting the wrongs done to him by the Federation, the evil and totalitarian regime embodied by the ruthless Travis, who cut a nicely sinister figure with his eye-patch and his artificial arm.

Being the hero, Blake was somewhat restricted by having to be heroic, which from a scripting perspective was a real drawback. Unlike his opponents he didn’t have the capacity to be ruthless, for in Robin Hood only the Sheriff of Nottingham is allowed to kill people. Therefore Travis could be ruthless, but Blake was hamstrung by being all-talk: he threatened, but he never followed through.

This distinguished the character from Travis – and from Servalan too. Blake could kill only in self-defence. He could be nasty to the Federation, as they were the ‘baddies’. But he could not be gratuitously nasty, unlike Travis or Servalan. This also distinguished him from Avon, who was notionally Blake’s ally, but who was permitted by Terry Nation to be everything Blake was not.

For romantic interest, Jenna Stannis was cast in the role of Maid Marion. Jenna’s romance with Blake never really got going, but because of it she always supported him when the crew was divided over one of his more reckless schemes. And you knew that if the crew ever split up, she would inevitably go with Blake. But her dependence on him cast her character firmly into the role of ‘the damsel in distress’: the traditional function of Maid Marion.

In subsequent seasons the ‘Robin Hood in Space‘ concept was refined by script editor Chris Boucher into something more akin to ‘The Dirty Dozen in Space‘, once Blake (and Terry Nation) had left the show and only those characters with criminal backgrounds remained. But Boucher did a good job of making them sympathetic even so, an idea that also derived from the film ‘The Dirty Dozen‘.

Blake and his crew were emulating Robin Hood’s creed of robbing the rich to give to the poor, in that they were making a stand on behalf of the huddled masses whom the Federation were oppressing.

The format of the show was ‘good versus evil’, with Travis as the evil, black-hatted villain. Yet Blake’s crew were hardly shining innocents themselves, save perhaps by comparison with Travis. Vila’s thieving and Avon’s banking swindles were, after all, merely criminal. Travis, by contrast, crossed the line to evil very early on, when he was identified as a mass murderer.

Travis was a man to whom an enemy did not cease to be an enemy merely because it had surrendered; and his character was defined by the charge laid against him at his eventual court martial: that he had continued an attack after a total surrender, murdering over one thousand unarmed civilians.

The format created significant problems with the character of Blake himself. To fulfill the ‘Robin Hood in Space‘ concept he had to be whiter-than-white: there was no scope for him to be less than perfect. So, in terms of character, he couldn’t develop, which imposed strict limits on what the writers could do with him.

He could not be ruthless, or acquisitive, or disbelieve in his own cause. And the criminal charges laid against him could not be true: he had to be a man wronged. In this essential respect he was unlike his ‘Merry Men’, all of whom were actually guilty of the crimes of which they had been convicted. So, in comparison to them, Blake was a very one-dimensional character. And particularly when contrasted with Avon, Blake’s character was bland.

The upshot was that Blake was too good to be altogether believable. He must always do the right thing – that’s to say, fight for truth and justice – regardless of the odds, and even at the cost of the lives of his own followers. Most notably Gan, who was killed in the episode Pressure Point; but also Nova and Arco who were killed in the episode Cygnus Alpha.

In order to make the series interesting and believable, there had to be a character who was less dull and less sanctimonious than Blake.

That character was Avon, who really had committed crimes, and who was genuinely motivated by self interest. Avon was also capable of not believing in Blake’s cause. And he was not willing to follow Blake blindly; although everyone else was, apparently. So there was frequently conflict between Avon and Blake, in their scenes together, creating tension and therefore drama.

There were significant differences between the characters of Avon and Blake. The most important of these emerged as early as the second episode, Spacefall, where Avon proposed that they refuse to surrender when the Federation threatened to execute the hostages. To surrender in those circumstances, which is what Blake ultimately does, is to be heroic. But Blake, as the hero, was expected to act heroically: giving up his own life for others. Avon was not the hero, so was allowed to be motivated by self interest – a more realistic motivation, one which made him less of a cartoon character than Blake.

That instinct for self-preservation in no way made Avon’s character evil. To be evil was to act like Bayban the Butcher, in the episode City at the Edge of the World, who killed people for pleasure; or to shoot unarmed civilians, as Travis admitted to in the episode Trial; or to destroy entire worlds, as Servalan did in Children of Auron. That distinction, which separated Blake from Travis, also separated Avon from Servalan later on.

Another significant difference between Avon and Blake emerged in the episode Orbit, where Avon went to great lengths to save Orac from the clutches of Egrorian, the villain-of-the-week, but didn’t hesitate to try to push Vila out of an airlock when he thought that this was the only way to save himself from Egrorian’s death-trap.

Blake was not the type to push a friend out of an airlock merely because it was the expedient thing to do. In the same situation, Blake would almost certainly have jumped himself (as the hero, he wouldn’t ask someone else to do what he wouldn’t do); or else Blake would not have allowed anyone to jump.

This, more clearly than anything in the whole show, defined the true difference between Avon and Blake. If there had been no other choice, Avon would have pushed Vila out of that airlock. In a matter of survival, Avon was capable of being ruthless. Blake, as demonstrated in Spacefall, was not.

Another difference was their sense of humour. There were humorous elements in Avon’s character, which allowed the audience to feel considerable warmth towards him; but Blake, by contrast, was a cold character, almost completely devoid of humour.

Being the anti-hero, Avon’s character was tailor-made for black humour. And at times, in his relationship with Vila, who was actually a semi-comedic character anyway, there was almost a Laurel and Hardy double-act going on between them; for instance in the Casino in the episode Gambit.

Blake’s character was too sanctimonious by far. As conceived by Terry Nation, Blake was driven by a need for revenge and by a need to be right; and a character that driven is difficult to inject humour into.

But right from the beginning Terry Nation subverted Blake’s character in the interests of the plot. Blake was a frequent victim of the need to inject action into the show, something Nation could seemingly only ever achieve by having Blake take some unjustified risk. So Blake routinely ignored obvious risks. At best, it made him appear naive. At worst, it made him appear plain stupid.

Avon was the one who stood up for common sense. Blake was forever treading a very thin line; and too often he crossed it, doing something rash in order to inject a crisis into the storyline. The outcome of this was that Blake’s character lacked credibility.

Although this was Terry Nation’s regular solution to the need to drive the plot forward, the fact that it undermined Blake’s credibility did not worry Nation, who was never unduly concerned about the credibility of the characters. His concern was to have an ‘action show’, with lots of things happening.

Blake thus became a less credible character than Avon, since the latter was not asked to sacrifice his believability to the needs of the plot each week. This was another major factor that distinguished them. Ultimately, it promoted Avon into the starring role in the series, when Gareth Thomas, unhappy with the way his character was developing, opted to jump ship after the second season, cutting short Blake’s tenure in ‘Sherwood Forest’.

In consequence, for the final two seasons, the conflict was between Avon and Servalan, as both of the previous lead characters — Blake and Travis — had been written out.

This change of direction, with a new male lead in Avon, and with a female lead replacing Travis, was a beneficial development that prevented the remaining seasons from being just a re-tread of what had gone before. Yet the show lost its focus when it lost Blake and his fight against the Federation.

Blake’s departure propelled Avon into the limelight, instead of his being only a supporting character, as he had been up until then, and he quickly became a more interesting character than Blake had ever been. Whereas Blake was very much the conventional hero, Avon was something quite different: the archetypal anti-hero, a much darker, more complex, and more interesting character.

Without Travis, Servalan became the main villain by default. Much more to the fore, she now had scope to develop as a character, where previously she’d had virtually no character development at all. Intriguingly, the change meant Avon was now confronting a woman. The dramatic possibilities were greatly enhanced by this change, as Avon could be given an emotional relationship with a woman that couldn’t have existed between a character like his and a man. Avon was very much a ladies’ man, as he demonstrated with Anna Grant in the episode Rumours of Death.

Since the show was losing not only Blake but also Jenna, the girl who provided the romantic interest, it became impossible for Avon to have with her the kind of relationship Blake had had, even if the writers could have contemplated such a development. This changed the whole dynamic of the show, as the romantic interest was henceforth to be with a woman who was on the opposite side in the ongoing conflict.

Moreover, Avon was far more likely than Blake to have romantic feelings for a woman like Servalan. There was a credibility to it. She and Avon had a similar outlook on life; one which was a million miles away from Blake’s. The result was that a personal conflict between Avon and Servalan became the mainstay of the show, replacing the original, much more impersonal, arm’s-length theme of Blake’s fight to overthrow the Federation.

And Avon’s relationship with Servalan gradually changed. It was very different initially, at the beginning of season 3, to what it became. At first she probably would have killed him without a second thought, but the relationship evolved. It surprised the audience when he saved her life in the episode Rumours of Death, towards the end of that season. For once, possibly, he had acted as Blake would have done. And there was a significant shift in her attitude to him, as a result of that.

Ultimately, Avon even got to kiss Servalan on screen, providing some evidence of a romantic entanglement. He was probably not seriously interested in her — even though she indicated, in the episode Assassin in the middle of season 4, that she was interested in him. She seemed to represent his ideal fantasy; but he was not going to go there, as he certainly did not trust her.

Nevertheless, there was a definite, and mutual, sexual attraction between them. And Servalan was clearly in love with him in series 4. However, it was unrequited. He was never in love with her, but he recognised that she had some feelings for him, and cynically played on that to his own advantage.

In terms of the overall storyline, the final two seasons drifted, because no one took on the responsibility of leadership as Blake had done. Avon never replaced Blake as leader, not even in the final season. He would sometimes allow the others to participate in what he was doing; and they sometimes asked for his help. But he never defined a mission and asked for volunteers, nor led from the front in any other way, as Blake had done.

The crew now normally reached a consensus. No one was really in charge; and the others actively resented Avon, and would often ask the Orac computer for a second opinion on his latest scheme. And Orac was the only one whom Avon took seriously: the only one he treated as an equal, or discussed his intentions with.

There was a degree of humour in the fact that the only one they all trusted, initially, had been Blake (who was reprogrammable, but only by the Federation, as seen in the episode Voice from the Past); but now the only one they all trusted was Orac (who was even more obviously reprogrammable, but only by Avon, their computer expert!). Always it came back to a matter of trust.

And in the fifty-second and final episode of the television series, their odyssey came to a fatal end when Blake turned up again. Because they no longer knew whether they could still trust him.

In the final scene, Avon is forced to choose between trusting Tarrant and trusting Blake. His choice brought the series to a spectacular conclusion. For when the crunch comes, Avon chooses to trust Tarrant instead, and to shoot Blake. Blake expected Avon would continue to be loyal to him, but fatally overlooked the fact that Avon might also have a loyalty to Tarrant, a misjudgement that arose from the fact that Tarrant was someone Blake had not met before and knew nothing about.

Forty years later, it’s still an ending no one can forget. And it’s still a cause for regret that the show came to an end back in 1981.

Posted in Television