Is the quantum world really as strange as we’ve been told?
In this episode, we dive deep into some of the most misunderstood aspects of quantum mechanics — superposition, collapse, and spin entanglement — and reveal how the standard narrative may be more about interpretation than observation. What if the “weirdness” of quantum mechanics is actually a misunderstanding of analog behavior forced into a digital framework?
We revisit the famous photon polarizer experiments, the logic behind Malus’ Law, and the foundational assumptions of Bell’s Theorem. Along the way, we explore how field interactions and geometric coupling can reproduce the same experimental outcomes — without invoking metaphysical collapse or non-local effects.
This video challenges the idea that spin must be undefined until measurement, and instead offers a grounded alternative: that spin, like polarization, may be a real, structured interaction with the field — not a binary mystery collapsing into being.
If you’ve ever wondered whether quantum mechanics tells the whole story — or if there might be a simpler explanation hiding in plain sight — this is for you.
💖 Support This Channel:
Your support is crucial for us to continue making quality content.
Patreon: https://www.patreon.com/seethepattern
PayPal: https://www.paypal.me/seethepattern
Merch: https://shop.spreadshirt.co.uk/see-the-pattern/
or CRYPTO Donations:
Bitcoin: bc1q5cctzkc9tt6hmqueddfk5dlvcpr6y45gx7td04
Ethereum: 0x2df869b96d4b42c461635B2955fAF72C79eA445D
Dogecoin: DRUEVXavwhbavuhgYJV2AXo8N6tC2zB5za
Monero (XMR) Address: 49UeNQmSd2TUxvKnrAYaU1ZcN3q1vbuYpbR9WuW21yn7ZSdyzE2GtGgdwoesTV2ewQCLbf7R7xjmgDDrQguNhr6o3gCrw66
🎥 Other Relevant Videos:
Space Has Structure: Rewriting the CMBR Story: https://youtu.be/cj-7vdYij6U
📚 References:
🔬 Foundational Experiments & Papers
Bell’s Theorem:
Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Физика, 1(3), 195–200.
https://cds.cern.ch/record/111654/files/vol1p195-200_001.pdf
CHSH Inequality (Most Common Bell Test Form):
Clauser, J. F., Horne, M. A., Shimony, A., & Holt, R. A. (1969). Proposed Experiment to Test Local Hidden-Variable Theories. Physical Review Letters, 23(15), 880.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.23.880
Aspect Experiment (First Strong Violation of Bell):
Aspect, A., Dalibard, J., & Roger, G. (1982). Experimental Test of Bell’s Inequalities Using Time-Varying Analyzers. Physical Review Letters, 49(25), 1804–1807.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.49.1804
📸 Polarization & Malus’ Law
Malus’ Law (Historical Reference):
Étienne-Louis Malus (1809). On the Law of Double Refraction by Reflection.
(Not available online easily — refer to optics textbooks or historical overviews)
Modern Demonstration:
Hecht, E. (2002). Optics (4th ed.), Chapter 8 — Polarization. Addison-Wesley.
🧪 Loophole-Free Bell Tests
Hensen et al. (2015) – Delft Experiment:
Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres. Nature, 526, 682–686.
https://www.nature.com/articles/nature15759
Giustina et al. (2015):
Significant-Loophole-Free Test of Bell’s Theorem with Entangled Photons. Physical Review Letters, 115(25), 250401.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.250401
Shalm et al. (2015):
Strong Loophole-Free Test of Local Realism. Physical Review Letters, 115(25), 250402.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.250402
#QuantumMechanics #Entanglement #Superposition #BellTest #MalusLaw #QuantumSpin #LocalRealism #Collapse #Photon #PhysicsExplained
00:00 Introduction
01:03 The Standard Quantum Explanation
03:37 Classical Optics Explanation
06:13 Beyond Simple Optics
12:39 Bell’s Theorem and the Binary Illusion
21:40 What about Spin tests?
25:15 An alternative interpretation
If we take two polarizing filters and place them at 90° to each other, we can see that they block almost all light. Intuitively, that makes sense. If I now take a third filter at 45° and insert it between those two, something strange happens: light is allowed to pass through all three filters. It’s often presented as a stunning example of quantum mechanics, an intuitive failure for classical optics but a triumph of superposition and collapse. But what if that hides something deeper? What if this simple experiment doesn’t just show how light behaves, but reveals something we’ve misunderstood about how light is created, how filters work, and how experiments are interpreted? Maybe quantum mechanics isn’t revealing something mysterious, but simply absorbing classical behavior into its formalism. And if we’ve misread something as fundamental as polarized light, then what does that say about entanglement, about Bell’s theorem, and about the claim that reality itself must be non-local?

According to quantum mechanics, when a photon encounters a polarizing filter, its polarization state is projected, or collapsed, onto the axis of that filter. So if the light entering the first filter is unpolarized, only the component aligned with the filter’s axis is allowed through. In this case, the first filter at 0° passes only the horizontal component. The photon is now said to be in a definite 0° state. Let’s say the photon is horizontally polarized at 0°. When it hits the filter rotated to 45°, we calculate the overlap, or projection, of its polarization state onto the new axis. This is done using the cosine of the angle between the two directions, and since we care about intensity or probability, we square it. That gives us the well-known equation P = cos²θ. So for 45° we get cos²45°, which is one half: a 50% chance the photon will pass. This same logic explains why two polarizers at 90° to each other block all light. The angle between the two polarization directions is 90°, and cos²90° is zero: there’s no component of the original polarization that can pass through the second filter.

So far we’ve done nothing more than project a vector onto an axis, a simple geometric idea. There’s no need for state vectors, superposition or quantum collapse to get this result. This is just basic trigonometry. This projection rule is why this experiment is often explained using the language of superposition. In quantum terms, a photon in a 0° polarization state can be expressed as a superposition of 45° and 135° polarization basis states. That’s not saying the photon is physically in those states at once, but mathematically it has a 50% probability of being found in the 45° basis. So when it passes through a 45° filter, that component is selected and the photon is reset to the new state. At each step, the photon is treated as entering a new basis, and its state is either projected onto the new axis or blocked entirely. This is why the quantum explanation is often described as a sequence of state collapses and repreparations, with superposition serving as the bridge between incompatible measurement bases.

But here’s where it gets interesting. What quantum mechanics describes through complex ideas like wave-function collapse or superposition, we’ve just captured with nothing more than classical geometry. In fact, the famous cos²θ rule, which governs the probability of a photon passing through a polarizer, didn’t come from quantum mechanics at all. It came from optics.
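To make that projection arithmetic concrete, here is a minimal numeric sketch (my illustration in Python; not part of the original video) of the intensity picture just described: each filter transmits a cos²θ fraction of the incoming intensity and re-centres the polarization on its own axis. The factor of one half at the first filter models unpolarized input.

```python
import math

def through_filters(filter_angles_deg):
    """Cascaded Malus-law transmission for initially unpolarized light.

    Each filter passes cos^2(theta) of the incoming intensity, where theta
    is the angle between the current polarization and the filter axis, and
    the transmitted light emerges polarized along the filter axis.
    """
    intensity = 0.5                     # unpolarized input: first filter passes half
    polarization = filter_angles_deg[0]
    for axis in filter_angles_deg[1:]:
        theta = math.radians(axis - polarization)
        intensity *= math.cos(theta) ** 2   # Malus' law
        polarization = axis                 # wave re-centred on the new axis
    return intensity

print(through_filters([0, 90]))      # ~0.0   -> crossed filters block everything
print(through_filters([0, 45, 90]))  # 0.125  -> inserting 45° lets 12.5% through
```

Two crossed filters give zero, but the 45° insert yields 12.5%: exactly the “strange” three-filter result, with no quantum machinery involved.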
Malus discovered that when you shine polarized light through a second polarizer, the transmitted intensity follows this exact cos²θ rule. Long before quantum theory existed, this law was already describing what we now claim is a quantum probability. Now, here’s the part that’s often misunderstood. Malus’ law tells us how much light gets through a polarizer when its axis is at some angle to the incoming polarization. But it also shows us something else: a polarizer doesn’t just pass light at one perfect angle. It passes a range of angles, with intensities falling off gradually as you rotate away from the filter’s axis. The response is smooth, not binary. It’s an analog profile, not a digital one. So even if your incoming light is centered at 0°, a 45° filter doesn’t simply block or pass photons; it attenuates them according to how much they align with its axis. That’s what Malus’ law actually shows us. And that’s where the apparent quantum jump in the three-filter setup starts to disappear.

So what happens when light reaches a second filter? If it’s at 90°, it mostly blocks the wave because the overlap is minimal, and Malus’ law gives you near-zero transmission. That makes sense. But now insert a third filter at 45° between the two. Suddenly some of the light makes it all the way through. Why? First, the filter selectively blocks components that are far off axis: only the portions of the field aligned with 45° have a chance of making it through. But that’s not all. The act of passing through the filter doesn’t just cut away the rest; it reshapes the wave that emerges on the other side. Following Malus’ law, the electric field is attenuated in a way that reflects the angular relationship between the incoming wave and the filter axis. The result is a new wave that isn’t perfectly aligned but is now centered around 45°, with a spread of orientations inherited from the interaction between the field and the filter structure. That’s why the final filter at 90° can pass a portion of this wave: there’s now a real component aligned with it.

Now, this might sound like we’re reverting back to classical wave theory, but we’re not. In classical electromagnetism, waves are continuous and spread out across space. What we’re describing is something different: a photon as a localized wavelet, a discrete packet of energy traveling as a disturbance in an underlying medium. And that distinction is crucial, because in classical terms, when a wave passes through a polarizing filter, we imagine that the off-axis components are suppressed; the wave is literally reshaped, its orientation modified. This is exactly what we observe in light experiments: the output is attenuated, aligned, and smoothly filtered. But if we now try to interpret that within standard quantum mechanics, we hit a wall. In the quantum view, a photon is a fundamental, indivisible quantum of energy. It can’t be split. It can’t be partially modified. It either gets through the filter or it doesn’t. There’s no in-between. And so quantum theory sidesteps this by invoking probability. The photon enters in a superposition of polarization states; the filter acts as a measurement device, and the photon’s polarization collapses, either aligning perfectly and passing through, or vanishing entirely. But that’s not a physical description. It’s a bookkeeping trick, a way to preserve the illusion of indivisibility by treating the wave behavior before the filter and the binary outcome after it as disconnected realms.
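That bookkeeping is easy to simulate. In this hypothetical single-photon model (my sketch, not the video’s), each photon carries a definite polarization; at each filter it passes with probability cos²θ and, if it passes, its polarization is taken to be re-aligned with the filter axis. Whether you read that re-alignment as “collapse” or as the physical reshaping described above, the statistics come out the same.

```python
import math, random

def send_photon(filter_angles_deg):
    """One photon through a sequence of ideal polarizers (probabilistic model)."""
    polarization = None                      # unpolarized source
    for axis in filter_angles_deg:
        if polarization is None:
            p_pass = 0.5                     # unpolarized -> 50% at the first filter
        else:
            theta = math.radians(axis - polarization)
            p_pass = math.cos(theta) ** 2    # Malus / Born rule, same cos^2 either way
        if random.random() > p_pass:
            return False                     # absorbed by this filter
        polarization = axis                  # re-aligned with the filter axis
    return True

trials = 100_000
for setup in ([0, 90], [0, 45, 90]):
    passed = sum(send_photon(setup) for _ in range(trials))
    print(setup, passed / trials)   # ~0.0 and ~0.125: same curve, one photon at a time
```

Individual yes/no detections rebuild the same 12.5% figure the intensity calculation gave, which is exactly the point the next passage makes: the binary record and the analog law trace the same curve.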
And this is where the wavelet model becomes powerful. It allows us to describe the photon as a real wave packet, a localized disturbance in a medium, not just a probabilistic abstraction. And crucially, it lets us describe how the wavelet can be gently reshaped during its interaction with a filter. Not collapsed, not reset, but physically filtered. In that sense, we’re proposing a photon that can be modified, but only under specific conditions, when interacting with structured matter that couples to a field. It’s still a single quantum of energy, but it’s extended, dynamic, and tied to a real medium. This resolves the paradox. No collapse, no magic, just a wave passing through a filter. It behaves in an analog way when interacting with filters, but is detected as a single event. It moves like a wave but couples like a particle. The wave is continuous in structure but quantized in delivery, not because the universe is inherently probabilistic, but because the structures that create and absorb light are themselves discrete. You might think of it like a soliton, or even like the bouncing oil-drop experiments, where a real particle follows a stable wave it generates as it moves. The photon in this view is a coherent ripple, not collapsing but interacting; its outcome is shaped by how it couples to what it meets. We explored this idea more deeply in the previous video, where we looked at how quantized energy levels in atoms arise from electron shell structure, not from photons being born quantized. The photon inherits its discreteness from the electron transition that created it, not from some intrinsic granularity of space itself.

The continuous behavior matches what we expect from Malus’ law and from classical optics. But something strange happens when we slow things down. Fire one photon at a time and we still get the same curve: not smooth attenuation, but individual yes-or-no events that build the same statistical shape. That’s where quantum mechanics says, ah, the photon must be in a superposition until it’s measured. But there’s another way to explain it. Maybe a photon has real structure, and when it meets the filter, the outcome depends on how well that structure couples to the filter’s alignment. Some align easily and pass straight through. Some are blocked. And others twist, strain, almost fail, but don’t. And what looks like collapse might just be the physics of interaction. If the filter just selected photons by chance, then we’d expect some 90° photons to make it through a 0° filter. But that doesn’t happen, not even for a single photon. So something more must be going on. The only way a 90° photon ever makes it to the end of a three-filter setup is if it wasn’t 90° anymore by the time it got there. And that means the filter didn’t just select the photon, it modified it. It rotated its polarization, not by collapsing a wave function, but by coupling with a real wave and reshaping it.

This might sound speculative, but real polarizers work through exactly this kind of field interaction. Most polarizing films are made from long-chain molecules all aligned in the same direction. When light hits the film, the electric field of the wave interacts with the electrons in those chains. If the field lines up with the molecules, the photon is absorbed. If it’s perpendicular, it slips through. But in between, things aren’t so clear-cut. There’s partial coupling, partial absorption, and in some cases, actual modification.
Other types of polarizers, like birefringent crystals or wire-grid arrays, show the same behavior. They don’t just block photons; they split, redirect, delay, or even rotate the polarization of the incoming wave. So this isn’t just a hypothetical mechanism. It’s observed physics. Filters can alter a photon’s polarization through real analog field interactions, which means the photon doesn’t need to be in a superposition. It just needs to interact. And that interaction can reshape it. If filters can reshape a photon, not just test it, then what we see in the three-filter experiment isn’t a quantum paradox. It’s just analog physics playing out one photon at a time.

This might seem like a curiosity about filters and light, but the implications go far deeper, because the three-filter setup, with its strange-seeming results, is often held up as a visual metaphor for something far more profound in quantum theory: superposition, collapse, and most importantly, entanglement. And if light’s behavior in this simple system doesn’t require a wave-function collapse to explain, then maybe we’ve misunderstood what these quantum concepts are actually telling us. That brings us to one of the most famous arguments in all of quantum foundations, Bell’s theorem: the idea that no hidden-variable theory, nothing based on local realism, can reproduce the predictions of quantum mechanics. But what if the reasoning behind that argument, and the experiments that claimed to confirm it, also made a fundamental assumption, the same one we just exposed: that the world is digital when in fact it’s analog?

We’re told Bell’s theorem proves something fundamental about reality: that no model based on local hidden variables can match the predictions of quantum mechanics. But here’s what that really means. It means a particular kind of model, one that assumes particles carry preassigned binary answers, fails to match a certain statistical pattern. So let’s look more closely at what Bell actually did. He imagined two observers, Alice and Bob, each receiving one particle from a shared source. Each has two possible settings for the measurement device, and each measurement produces one of two outcomes, +1 or −1. Bell assumes that each particle carries a hidden variable, called lambda, which predetermines how it will respond to each of the possible settings. Alice’s result is a function A, and Bob’s is a function B. Now, Bell combines the average correlations between the various settings into a single value S, which we can see on screen now, S = E(a, b) + E(a, b′) + E(a′, b) − E(a′, b′), where E(a, b) is the average product of Alice’s and Bob’s results for those settings. Using only the fact that A and B can each be +1 or −1, Bell proves that S must be less than or equal to 2. This is the famous Bell inequality, and it emerges purely from binary arithmetic. It doesn’t rely on quantum theory. It doesn’t rely on fields. It doesn’t even rely on real-world measurements. It’s just logic: if you multiply ±1 numbers and add the results, the total is bounded.

But here’s the problem. The model assumes that the detectors behave like perfect switches, that each measurement is a yes or no, pass or block, with nothing in between. But real detectors don’t work that way. In optics, polarizers attenuate light based on the angle between the wave and the filter. The result isn’t binary; it’s analog. Malus’ law describes this perfectly: I = I₀ cos²θ, where θ is the angle between the light’s polarization and the filter axis.
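Both halves of that claim can be checked numerically. The sketch below (my addition, in Python; not from the video) first scans every deterministic ±1 assignment, confirming the binary-arithmetic bound S ≤ 2, and then evaluates the quantum correlation E(a, b) = −cos 2(a − b), the curve the transcript quotes later, at one choice of settings that maximizes S under the sign convention written above; these settings are my assumption, not something stated in the video.

```python
import math
from itertools import product

# 1) Bell's bound from pure binary arithmetic: scan all deterministic +/-1
#    answers A(a), A(a'), B(b), B(b') and take the largest S that results.
best = max(A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2
           for A1, A2, B1, B2 in product([+1, -1], repeat=4))
print(best)   # 2 -> no binary assignment can exceed Bell's limit

# 2) The quantum prediction for anti-correlated photon pairs,
#    E(a, b) = -cos 2(a - b), at settings a=0°, a'=45°, b=112.5°, b'=67.5°.
E = lambda a, b: -math.cos(2 * math.radians(a - b))
S = E(0, 112.5) + E(0, 67.5) + E(45, 112.5) - E(45, 67.5)
print(S)      # 2.828... = 2*sqrt(2), well above 2
```

The first loop is Bell’s “just logic” step made explicit; the second is the curve that the experiments discussed below are compared against.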
So, what happens if Alice and Bob are each measuring an incoming photon with a definite polarization, but don’t know the angle? The results aren’t yes or no. They’re part of a continuous attenuation curve dependent on the geometry of projection. But Bell’s theorem doesn’t allow for that. To match Bell’s assumption, the filter would have to block everything except waves perfectly aligned with its axis and let through only those. But that would mean that at most angles the filter would pass nothing. We wouldn’t get smooth curves; we’d get null results. And if we extend this to more possible measurement angles, the problem only gets worse. The more resolution we bring, the more obvious it becomes that the binary model breaks down.

And here’s the deeper problem. Bell’s entire argument is framed around just four measurement combinations: two angles for Alice, two for Bob. That’s it. That’s all his inequality accounts for. But when we move to actual experiments, we routinely include intermediate angles like 45°, 22.5°, 67.5°, angles that Bell’s framework has no prediction for. So the moment we rotate a polarizer to a non-cardinal angle, we’ve already left Bell’s model behind. If Bell’s binary logic doesn’t define what happens at 45°, then any violation we observe there isn’t a violation of Bell’s theorem. It’s a demonstration that we’re using a different model altogether. And here’s the final twist. The value 2 in Bell’s inequality, the supposedly inviolable limit for local theories, is based on these unrealistic assumptions. If filters behave in an analog way, if photons have real but varying polarizations, and if detectors measure real projections, then the true classical correlation would not be capped at two. It would smoothly follow the projection geometry. So Bell’s inequality doesn’t rule out local realism. It just rules out a toy version of it.

And yet, when the actual experiments were done, from early measurements to modern loophole-free tests, something unexpected happened. The correlation exceeded the limit of 2, again and again. And because the results matched the quantum prediction of E(θ) = −cos 2θ, this was hailed as proof that the world must be non-local. But here’s the question almost no one stopped to ask: were we really testing Bell’s model at all? Because the moment the experiment allowed flexible angles, and the detectors operated as real filters that attenuated based on orientation, we were no longer inside Bell’s binary world. We were testing something else. What’s more, the very equation that quantum mechanics used to predict the correlation, the cosine of twice the angle, wasn’t unique to quantum theory. It’s the same curve you get from classical optics, just flipped in sign. In fact, it emerges directly from Malus’ law when you project one polarization onto another and compare alignments. So the great match between experiment and quantum prediction wasn’t magic. It was geometry.

But once those experiments aligned with quantum theory, the debate was declared over. The world was non-local. Hidden variables were dead. End of story. Yet if we re-express those same results in terms of analog attenuation, we find a consistent explanation, one that involves no collapse, no spooky action, and no information jumping across space, just light waves, filters, angles, and statistical interpretations forced into a binary mold. Let’s be precise now about how this all connects. We’ve said the correlation curve predicted by quantum mechanics is E(θ) = −cos 2θ.
But where does that come from? It doesn’t arise from probability alone. It comes from projection geometry. Start with Malus’ law: I = I₀ cos²θ. This is a smooth analog rule that comes from basic geometry, the projection of the wave onto the axis of the filter. Now imagine we have two polarizers, one for Alice and one for Bob. If each photon has a definite but opposite polarization at creation, then the probability that the two outcomes agree, both pass or both block, is P_same = cos²θ, while the probability that they disagree, one passes, one blocks, is the complement, P_opposite = sin²θ. So the correlation is E(θ) = P_same − P_opposite = cos²θ − sin²θ = cos 2θ. That’s the same shape as the quantum prediction, just flipped. Why flipped? Because in the quantum formalism, entangled particles are created in an anti-correlated state, so a match is treated as −1 and a mismatch as +1. The difference between classical realism and quantum entanglement boils down to a sign and an interpretation. So this isn’t some mysterious quantum interference. It’s a geometric identity derived from analog wave interaction.

Now here’s the twist. The actual Bell experiments don’t measure intensity. They binarize the output: if a detector clicks, it’s +1; if not, −1. The analog reality gets chopped into digital events, and then, using that digital record, they retroactively calculate a correlation curve. But by doing this we obscure what’s really happening. The curve we’re measuring, the elegant cosine, isn’t emerging from a digital process. It’s emerging from the analog behavior of the filter and the waves. So when quantum theory says, look, the photons were in a superposition, and when we measured them they collapsed, we can now offer a different view. Nothing collapsed. The wave passed through the filter. It attenuated. It partially realigned. And the detection statistics we record are simply a consequence of the analog process, not a metaphysical change of state. This also reshapes how we think about entanglement. We’re told that measuring one photon affects the other, even across vast distances. But if the correlation arises from a shared initial condition and projection geometry, then what looks like spooky action at a distance is just a shared analog process measured in two places. The mystery dissolves. The wave never had to collapse. It just had to interact.
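The sign-flip claim is a one-line trigonometric identity, cos²θ − sin²θ = cos 2θ, and is easy to verify numerically. A minimal sketch (my addition, in Python):

```python
import numpy as np

theta = np.linspace(0, np.pi, 181)                   # relative analyzer angle
E_projection = np.cos(theta)**2 - np.sin(theta)**2   # P_same - P_opposite
E_quantum = -np.cos(2 * theta)                       # QM prediction, anti-correlated pairs

# The two curves are identical apart from the sign convention for a "match":
assert np.allclose(E_projection, -E_quantum)
print(E_projection[::45])   # ~[ 1, 0, -1, 0, 1 ] at 0°, 45°, 90°, 135°, 180°
```

Nothing here invokes the quantum formalism; the cosine shape falls out of the projection arithmetic alone.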
If you ask most physicists what really proves quantum non-locality, they’ll usually point here: to entangled spin-half particles. These are the bedrock of Bell test experiments. According to quantum theory, when two particles are created in a shared event, like an electron-positron pair, their total spin must sum to zero. One is spin-up, the other spin-down, relative to some axis. And crucially, we’re told you can’t know which is which until one is measured. But what exactly does that measurement mean? Let’s say Bob measures his particle using a detector aligned at 45° and finds it deflects upwards. In the quantum framework, this means that his particle is now in an up-spin state along 45°, and so Alice’s particle is instantly in a down state along 45°. This is the entanglement-collapse idea. But things quickly get fuzzy. Alice doesn’t measure at 45°. She chooses 0° instead. And here’s where the quantum formalism invokes a kind of sleight of hand.

Even though Alice’s particle is now assumed to be in a definite state, down at 45°, her measurement is interpreted as if the particle is projected onto the new measurement axis. So quantum mechanics gives us a rule: the probability that Alice detects up or down depends on the angle between the spin state and her detector. But instead of using a classical projection like Malus’ law in optics, quantum mechanics uses P(down for Alice) = cos²(θ/2). This strange-looking formula ensures that even at a 90° offset, the result isn’t zero. It’s 50/50. And that’s taken to mean that even a spin aligned perpendicular to the detector has a 50% chance of flipping up or down. But where does this cos²(θ/2) come from? It comes from the mathematical structure of spinors, which live not in our three-dimensional space but in a 720-degree space, SU(2). It’s an abstract rule that works, but it doesn’t describe the mechanism. It simply wraps probability around a higher-dimensional geometry and uses that to justify the outcome.
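To see how different the two projection rules are, here is a small side-by-side comparison (my sketch, not the video’s) of the Malus rule for photon polarization against the half-angle rule quoted above for spin-half particles:

```python
import math

for deg in (0, 30, 45, 60, 90, 120, 180):
    theta = math.radians(deg)
    malus = math.cos(theta) ** 2      # photon:    P = cos^2(theta)
    spin  = math.cos(theta / 2) ** 2  # spin-half: P = cos^2(theta / 2)
    print(f"{deg:3d}°   photon {malus:.2f}   spin {spin:.2f}")
```

At 90° the Malus rule gives zero transmission while the half-angle rule gives 50/50, and the spin probability only reaches zero at 180°: the signature of the double-cover, 720-degree geometry just mentioned.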
This raises a serious conceptual problem. The maths is justifying the outcome by appealing to its own structure. That’s circular, and it doesn’t explain why spin behaves this way physically. More troubling, this means Alice’s particle, which had a definite spin-down at 45°, might now be measured as either up or down at 0°, with probabilities defined not by its intrinsic state, but by projection. Yet quantum mechanics also insists that spin is intrinsic, a fundamental property. So how can it be so easily flipped, even probabilistically, just by choosing a different axis? This suggests spin might not be so intrinsic after all, or at least not as fixed and absolute as often claimed. And there’s one more wrinkle. In most actual spin experiments, the full collapse idea is quietly abandoned. Physicists don’t model one particle collapsing the other anymore. They just treat both particles as statistically independent and use projection formulas to get the correct correlation curve. So we’re no longer even pretending that one particle determines the other. We’ve let go of the core idea of entanglement while still claiming to prove it.

But what if we step back and reinterpret this behavior? What if spin is a real, intrinsic property, not a fuzzy cloud awaiting collapse, but a stable, structured alignment? And what if so-called measurement is really just an interaction with a field, like the Stern-Gerlach experiment, that couples to that alignment? In this view, the particles are not created in some abstract entangled blur. They are created with equal and opposite spin orientations, real physical states that persist until disturbed. And the role of the field isn’t to reveal a hidden value or to force a binary decision. It’s to interact, to couple, to exert torque based on the spin’s alignment with the measurement axis. So instead of invoking probabilistic collapse, we consider field coupling. Spin that’s closely aligned with the field axis interacts strongly: it deflects clearly; it’s measured with high certainty. Spin off-axis interacts more weakly: it might resist, it might wobble or take longer to respond. And when the spin is beyond a certain threshold, say 60° or more, the torque may actually destabilize it, flipping its direction. This kind of flipping behavior isn’t just abstract. It happens in the classical world too, like when a spinning tennis racket flips around its middle axis. It’s called intermediate-axis instability, and it emerges purely from the geometry of rotation. No external force, no randomness, just nonlinear dynamics.

So maybe spin flips aren’t quantum magic. Maybe they’re a natural result of how internal rotation structures behave in a field once they’re pushed too far off axis. This would explain why at small angles the results are highly correlated, at intermediate angles you get a smooth statistical curve, and at 90° you get near-maximal randomness, not because of true randomness, but because the field interaction is weakest or most ambiguous. This preserves local realism and explains the same cosine correlation curve quantum mechanics predicts, but without the metaphysical baggage of collapse, non-locality or spinor abstraction. So instead of a particle that snaps to a measurement and tells its twin how to behave, you have a particle with a stable internal structure, interacting locally with a measurement device through a field, producing outcomes based on geometric coupling, not metaphysical collapse.

That’s not to say that this is the only model. There are deeper, more speculative interpretations, some involving spin as a dynamic resonance, others treating it as a geometric rotation in a higher-order medium. We’ll explore those in future episodes. But for now, the key point is this: spin behavior, and the correlations it produces, doesn’t require non-locality or indeterminism. It just requires us to stop pretending that projection and collapse are neutral tools. They’re assumptions, and those assumptions define the outcome.

It’s easy to get lost in the maths, to assume that if the equations fit, the story must be complete. But sometimes, when the maths keeps working, we stop asking whether the picture still makes sense. Quantum mechanics gives us predictions, and they hold. But when we ask what’s actually happening underneath, what’s inside the photon, how spin really works, the theory offers almost nothing. Maybe there’s another way to understand what we’re seeing: not by throwing away the data, but by questioning the narrative we built around it. Because there are simpler physical models, ones based on interaction, structure, real geometry, that can explain the same results without invoking non-locality or metaphysical collapse. And if those models are even partially right, then maybe the mystery isn’t in the experiment. Maybe it’s in how we’ve been taught to interpret it.

I’m incredibly grateful to those who continue to support this channel, especially when the videos take time and the questions become uncomfortable. If you’d like to help support the channel, links are down below in the description.
30 Comments
Thanks for watching!
I’d love to hear your thoughts — whether you agree, disagree, or have your own ideas about the topics raised. If you’d like to support the channel and help me keep making videos like this, you’ll find links in the description. Every bit genuinely helps.
This episode is rough around the edges and maybe needs a bit of rework. Bell’s Inequality has several interpretations and most of them aren’t amenable to what you’re proposing here.
You need to scale this experiment down in frequency and measure the polarizers; you will see several effects that influence this. Treat this as a wave and forget about photons; they are an oversimplification, like your polarizers.
1. The thickness of the polarizer creates an overmoded waveguide, so it blocks some signals but distorts polarization in others.
2. There are other effects that complicate this, but ignore those for now; the point is that polarizers are complicated.
Scale this and put it in a 3-dimensional electromagnetic simulator and it becomes apparent. Teaching people about photons without teaching them about electromagnetics is a recipe for disaster. Photons are for lazy minds; the concept was created to sensationalize wave-particle duality, but what it did was vastly oversimplify, so photons became a layman's way to understand light. It makes me a bit mad every time I hear the word photon; I think the person speaking knows almost nothing and only pretends to understand anything about the subject.
Is this one of the best examples of twisting (😄) reality to fit the theory? Possibly. Attention "popesplainers"!
Also, analog music is SO MUCH BETTER than digital. 🎉🎉🎉🎉🎉
Total mishmash and misunderstanding of the matter. Just one of many misconceptions: wave collapse is not a feature of QM; it's simply a pragmatic interpretation, and many physicists do not agree with this interpretation.
Thank you for your clarity. It was very helpful. Ever since I learned about fuzzy logic, I have believed it is closer to physical truth than binary logic.
Nope.
270 degrees, 270 degrees, 270 degrees, 270 degrees, 270 degrees, 270 degrees, 270 degrees, 270 degrees, 270 degrees, 270 degrees!!!!!!!!!
Wow, Gareth — I’m honestly stunned.
This video dropped on the exact same day I finalized my own Medium article where I came to nearly the same conclusion: that the so-called wavefunction collapse is a projection artifact, and the spooky action is just misinterpreted analog correlation.
But the way you’ve laid it out — visually, conceptually, and philosophically — is simply masterful. I humbly acknowledge your superior clarity.
Even more incredible: your analysis turned out to be the missing puzzle piece in my own emergent universe framework, which explores a physical substructure for fields based on real energy knots (vacuons). I can now say with confidence that this framework stands as a legitimate candidate for a Theory of Everything — and your video helped complete it.
Thank you for the inspiration, the rigor, and the clarity. I really (really, really) would like to get in contact with you and work out my framework together, polish and refine it, make it shine. Your channel is the perfect place to showcase it to the world.
even worse, money distorts the truth the most.
You ask the right questions and you got the answers almost right!
It's the measurement that forces the particles into specific states.
Energy and information transfer "collapse" in order to complete a transfer.
Particles that don't collapse based on the measurement conditions don't become localized.
It's like Bell assumed a digital circuitry model of reality and thinks only in binary. Then ignores the overwhelming analog signal of reality that interferes with that logic.
Bell checks for local realism. Locality and realism – the latter means no many-worlds. What is described in this video is more or less a local, non-collapse many-worlds. In my view, the best and easiest way to interpret things.
23:25 – If the measuring angle is at 90 deg, the probability of the electron flipping up or down is 50%. Basically, the electron should move unaffected/undeflected if the incidence angle is a perfect 90 deg. But we have infinite real numbers, and hence possibilities, around 90 (e.g. 90 +/- 0.000…1), and the slightest difference from 90 would result in the electron spin getting influenced. And based on the equal probability on either side, flipping up or down is 50%. It's about the fidelity of the first system to produce a perfectly aligned electron beam. The unaffected/undeflected probability is 1 (perfect 90° alignment) / (1 + the infinity of real numbers on either side close to 90) = 0.
This sounds very plausible. It would completely change quantum physics, and it's a very simple explanation. The world is analog. The only thing yet to explain is what happens when a wave interacts with a particle.
🤣🤣🤣 if you only understood what i understand 🤣🤣🤣🤣
I found where to see the pattern which explains the superposition phenomenon in both zone 1 and zone 2 of the 3 main zones of the Universe.
However, understanding my theory and my equation is the best way to understand what is actually happening that y'all refer to as superposition.
Pray and help me to publish, or continue to wallow in your misunderstandings. You won't find it playing about with light filters. Don't get me wrong, you have a very nice and interesting video, and it is presented with an openness to learn. Good enough that I almost want to give my theory and equation away just to help.
But I won't. Society has huge debts to me that can never be repaid, so it's not gonna bother me if anyone ever catches up or not. Regardless of that,
I love being me.
✨️🙂✨️
I would like to be different from the typical commenter, whose comment sounds like it came from an AI, they all sound the same, and wouldn't be out of place copy-pasted on a completely unrelated video. I like the reference to Malus's law and the challenge to Bell's inequality. I have also questioned quantum orthodoxy and am currently leaning towards Quantum Bayesianism as my preferred interpretation. I think I need to watch your videos again to get the full understanding of what you are saying, and hopefully I can get to the level where I can become a Patreon member.
Newton was right about everything… we have to retrvn
So what you are saying is that true quantum computers are impossible?
SOMEONE WATCHED RON GARRET'S VIDEO/THEORY
Light is the wave created by the particle… the particle passes the first filter with no problem, re-creating the wave to the second and third, each time according to the filters' particularities. I see nothing weird teleporting.
We humans like to see things in binary terms. Degrees of difference outside the binary make us uncomfortable because they aren’t easily quantified and categorized. Forcing an analog system into a binary one doesn’t simplify things. It complicates them by inviting people to invent mystifying explanations.
I don't believe in particles or physics
Are you saying that the Schroedinger equation is false? I agree that the idea of collapse does not make sense. The collapse is only what physical reality looks like from the observer's perspective when described by the Schroedinger equation (or Dirac's, …). Measurement is explained by self-entanglement, leading to a "many-histories" conception of physical reality. My contribution is to show that the Mechanist hypothesis (in cognitive science) imposes a many-computations interpretation of elementary arithmetic, when seen from inside by universal numbers. I still agree that there is no "non-locality" in the big picture, but simply because there is no collapse at all.
Despite science being a prominent subject, seeing the channel's subscriber count I feel angry and exhausted… how could anyone putting in this much effort not get at least 2-3 million subscribers?
❤ A N A L O G U E – W I N S ❤
Gareth, I have a question: If the wavelet model of the photon holds up – i.e. the conjecture that the polarizer (positioned at 45 deg. relative to the incoming polarized wave) instead of simply blocking or transmitting the wave packet, modifies its intensity – then can't we readily test this idea in a (highly delicate?) experiment wherein one photon at a time is allowed to pass through such a polarizer before hitting a photographic plate that would reveal its intensity – and definitively prove/disprove the wavelet conjecture?
This is why quantum computers are doomed to fail. They are the modern equivalent of snake oil 😅
Thank you for sharing this intelligent and scientific video !! Insightful and well visualized !! Outstanding !!
Greetings from California … wishing you and folks good health , success and happiness !! Much Love ✌️😎💕
There's no superposition, non-locality nor probability in reality. QM misses the woods for the trees.