- Jack Sarfatti On Feb 6, 2013, at 3:49 PM, nick herbert <email@example.com> wrote:
Again a very persuasive argument.
You are correct that the |0>|1> term is small.
But it is multiplied by a different |0>|1> term (to form the product state |0>|1>|0>|1>).
The coefficients of this different |0>|1> term are surprisingly large.
JS: Ah so, Holmes.
NH: As to your ability to make alpha*r as large as you please: do you think you can do this
and (1) preserve normalization of the input coherent state, and (2) preserve the truncation condition?
JS: This issue of the normalization of the input coherent state is non-trivial. In the literature, authors writing on entangled coherent Glauber states insert what looks like an observer-dependent normalization that forces the Born probability rule to be obeyed. This can always be done ad hoc, but it is not part of the rules of orthodox quantum theory, where unitary time evolution guarantees invariance of the initial normalization choice; that choice should not depend on what future choice is made by the measuring apparatus (for strong von Neumann projections).
For example, take a trapped ion whose internal qubit states |+>, |-> are entangled with coherent (Glauber) states |z>, |z'> of its phonon center-of-mass motion. The unitarily invariant choice is
|psi> = (1/2)^(1/2) [ |z>|+> + |z'>|-> ]
The Born rule trace over the non-orthogonal Glauber states gives the seemingly inconsistent result
P(+) = P(-) = (1/2)[1 + |<z|z'>|^2]
P(+) + P(-) = 1 + |<z|z'>|^2 > 1
which I say is a breakdown of the Born probability rule in the sense of Antony Valentini's papers.
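The inconsistent sum JS quotes can be checked numerically. A minimal sketch, assuming the standard Glauber-state overlap |<z|z'>|^2 = exp(-|z - z'|^2) and hypothetical illustrative amplitudes z = 1, z' = 1.5 (not taken from the thread):

```python
import numpy as np

def overlap_sq(z1, z2):
    # |<z1|z2>|^2 = exp(-|z1 - z2|^2) for Glauber coherent states
    return np.exp(-abs(z1 - z2) ** 2)

z, zp = 1.0 + 0.0j, 1.5 + 0.0j   # illustrative, non-orthogonal amplitudes
o2 = overlap_sq(z, zp)

# JS's quoted result of the Born-rule trace on the unitarily invariant state
p_plus = p_minus = 0.5 * (1 + o2)
total = p_plus + p_minus          # = 1 + |<z|z'>|^2 > 1 whenever z != z'
print(total)
```

Any distinct pair z, z' gives a total exceeding 1, which is the "seemingly inconsistent result" in the formula above.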
The dynamics of Glauber-state ground-state Higgs-Goldstone-Anderson condensates with ODLRO (Penrose-Onsager) is inherently nonlinear and non-unitary, governed by Landau-Ginzburg c-number equations coupled to q-number random noise. The bare part of the noise dynamics, sans coupling to the condensate, is of course orthodox quantum mechanical.
Now what the published paper's authors do is to use an ad hoc renormalized state
|psi>' = (1/(2[1 + |<z|z'>|^2]))^(1/2) [ |z>|+> + |z'>|-> ]
giving the usual no-signaling
P(+) = P(-) = 1/2
NH: And by the way, just what is the wavefunction for the input coherent state before the beam splitter?
You are never specific about what has to go into the beamsplitter to achieve the performance you describe.
- Jack Sarfatti On Feb 6, 2013, at 1:49 PM, Demetrios Kalamidas wrote:
Hi to all,
Concerning my scheme, as it appears in the paper, let's do a certain type of logical analysis of the purported result:
Let's say that the source S has produced 1000 pairs of entangled photons in some unit time interval. This means that we have 1000 left-going photons (in either a1 or b1) AND 1000 right-going photons (in either a2 or b2).
Let's say we have chosen 'r' to be so small that only 1 out of every 1000 right-going photons is actually reflected into modes a3' and b3'. So, 999 right-going photons have been transmitted into modes a2' and b2'.
In my Eq. 6, we observe that the 'quantum erasure' part is proportional to 'ra'. Let's say we choose 'ra' such that |ra|^2, which gives the probability of this outcome, is 10 percent.
This means that roughly 100 right-going photons have caused 'quantum erasure', for their 100 left-going partners, by mixing with the coherent states in a2' and b2'.
Thus, "fringes" on the left will be formed that show a variation of up to 100 photons, as phase 'phi' is varied, between the two outputs of beam splitter BS0.
Now, for this total batch of 1000 right-going photons, ONLY ONE PHOTON, roughly, has made it into a3' or b3' and mixed with the coherent states over there.
So, even if that ONE PHOTON contributes to "anti-fringes" on the left, it could only produce a variation of, roughly, up to 1 photon, as 'phi' is varied, between the two outputs of BS0. That is nowhere near canceling the "fringe" effect; at most it can cause a minute reduction in the "fringe" visibility.
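DK's counting argument can be tabulated directly. A minimal sketch using the hypothetical numbers from the text (1000 pairs, 1-in-1000 reflectivity, |ra|^2 = 10 percent):

```python
n_pairs = 1000              # entangled pairs produced per unit time interval
reflect_prob = 1 / 1000     # fraction of right-going photons reflected into a3'/b3'
p_erasure = 0.10            # |ra|^2: probability of the 'quantum erasure' outcome

n_reflected = n_pairs * reflect_prob   # ~1 photon mixes with the coherent states in a3'/b3'
n_erasure = n_pairs * p_erasure        # ~100 left-going partners contribute "fringes"

# The anti-fringe modulation is bounded by the ~1 reflected photon,
# far too small to cancel the ~100-photon fringe modulation at BS0.
print(n_reflected, n_erasure)
```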
JS: This seems to be a plausible, rational, intuitively understandable informal argument. Very nice. However, words alone, without the math, can be deceiving.
DK: Please note that we can choose 'r' to be as small as we desire, i.e. we can arrange so that one out of every billion right-going photons is reflected into a3' and b3' WHILE STILL MAINTAINING the |ra|^2 = 10 percent value (by just cranking up the initial coherent state amplitude accordingly).
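The scaling DK invokes is simple to check. A minimal sketch, assuming a reflection probability |r|^2 of one per billion and the 10 percent target from the text:

```python
r_sq = 1e-9                 # |r|^2: one of every billion right-going photons reflected
target = 0.10               # desired |ra|^2 for the erasure term

alpha_sq = target / r_sq    # required mean photon number |a|^2 of the coherent state
# |a|^2 = 1e8: cranking up the coherent amplitude keeps |ra|^2 at 10 percent
print(alpha_sq)
```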
I wrote this logical interpretation of my proposal in order to show that Nick's analysis goes wrong somewhere in predicting equal amplitudes for the "fringe" and "anti-fringe" terms.
JS: I do hope, of course, that Demetrios will prove correct. Even Nick Herbert desires that. Is young Demetrios the new Arthur? Has he pulled the Sword from the Stone?