Mirages, Frame Rates, and the Observer
The Wall of Lights on the Plains
I was driving across the Saskatchewan plains when reality did something strange.
Out ahead of me, where there should have been nothing but horizon and sky, a wall of lights appeared.
Not a town. Not a line of farmhouses. It looked like a solid glowing band stretching across the distant line where earth and sky meet. It reminded me of the way people in deserts describe mirages that look like cities or fortresses—except this wasn’t some far-off dune sea. This was flat, open prairie that I know well.
In the moment, it felt like I was heading straight toward a glowing barrier at the edge of the world.
Later, when I checked my dash cam footage to prove to myself I wasn’t imagining it, the “wall” was either faint or completely missing. The camera had been pointing the same way as my eyes. It was recording at a solid frame rate. The scene should have been there.
But what I saw and what the camera recorded did not match.
That gap—between my raw experience and the digital record—is what pushed me into a deeper question:
If a mirage is “just” physics, why did my eyes see a vivid wall of light that my camera basically ignored?
And if our brains process visual information in rhythmic bursts—like a kind of internal frame rate—how much of what we call “reality” is actually a rendering created by the observer?
This article is my attempt to answer that, from both a scientific and occult point of view.
The Morning the Plains Built a Wall
It started as a subtle shimmer.
I was driving a long, straight stretch of highway. The sky was clear, the land was flat, and visibility should’ve been almost infinite. In the distance, I noticed something unusual along the horizon—not the classic “wet road” mirage where the asphalt looks like water, but a horizontal band that didn’t match the landscape I knew.
At first, it was just a faint blur.
As I got closer, it sharpened into what looked like a continuous wall with lights sprinkled along it. Not bright like stadium lights, more like the glow of distant street lamps or industrial lights, compressed together until they formed a bar of light.
It had the same “mirage” feeling as desert illusions I’ve read about—where travelers swear they see buildings, towers, or cities floating in the distance—but the environment was wrong. No dunes. No heat waves dancing off the sand. Just cold, flat prairie.
Still, my eyes insisted: There’s something there. A barrier. A wall of lights.
Then, like many people do in strange situations now, I thought: Good thing I have a dash cam. I’ll check it later.
When I finally did, that’s where things got weird.
On the footage, the horizon looked mostly normal. At best, I could convince myself there was a slight, blurry band, but not the solid luminous structure I’d seen with my own eyes. No clear wall. No obvious lights.
Same place. Same time. Two very different realities.
Mirages Beyond the “Water on the Road” Trick
Most people think of mirages as the “water on the road” illusion you see on a hot day: the kind that makes the asphalt look like a reflective pool that vanishes as you get closer.
Those are called inferior mirages. They happen when:
- The ground is very hot.
- The air right above it is hotter and less dense than the air further up.
- Light from the sky bends upward through that temperature gradient.
- Your brain traces those rays back in a straight line and concludes: “there must be a reflective surface there.”
What I saw wasn’t that.
The “wall of lights” I observed fits better with another type of mirage: a superior mirage, and more specifically, a complex version often called a Fata Morgana.
In a superior mirage, the temperature profile is flipped:
- The air near the ground is colder and denser.
- The air above it is warmer and less dense.
- Light bends downward toward the denser layer instead of up.
Because of this bending, objects that are physically below the geometric horizon can appear lifted up, stretched, stacked, or turned into bars and blocks along the horizon line. A distant town, a row of farm buildings, or industrial lights that would normally be too low or too far to see can be pulled into view, warped into a continuous glowing band that looks like a wall at the edge of the world.
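The bending described above can be sketched numerically. The toy model below marches a near-horizontal ray through flat 1 m slabs of air using Snell's law, with an invented refractive-index profile in which n decreases with height (warm air over cold, i.e. an inversion). Every number here is illustrative, not a measurement from that night:

```python
import math

def n_at(height_m):
    """Hypothetical refractive-index profile for a temperature inversion:
    warmer, less dense air aloft, so n decreases with height."""
    return 1.000300 - 1e-7 * height_m

def trace_ray(start_height_m, climb_angle_rad, max_steps=50_000, dx=1.0):
    """March a near-horizontal ray through horizontal slabs of air.

    Across each slab boundary Snell's law holds: n1*sin(t1) = n2*sin(t2),
    with the angle t measured from the vertical (the slab normal).
    Returns the ray's height at each step until it returns to the ground.
    """
    t = math.pi / 2 - climb_angle_rad   # angle from vertical
    going_up = climb_angle_rad > 0
    h = start_height_m
    path = [h]
    for _ in range(max_steps):
        step = dx / math.tan(t)         # vertical change per dx of travel
        h_new = h + step if going_up else h - step
        s = n_at(h) / n_at(h_new) * math.sin(t)
        if s >= 1.0:
            going_up = False            # the inversion turns the ray back down
        else:
            t = math.asin(s)
        h = h_new
        path.append(h)
        if h <= 0.0:
            break
    return path

# A ray leaving eye height (2 m), climbing at a few thousandths of a degree,
# is bent back down by the inversion instead of escaping to the sky:
path = trace_ray(2.0, math.radians(0.005))
peak_m = max(path)
```

In this toy profile the ray climbs only a few centimeters before the inversion folds it back to the ground kilometers away. Run in reverse, the same geometry means an eye at 2 m can receive light from sources below its geometric horizon, which is exactly how distant lights get lifted into view.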
When the inversion is complex—several layers of different temperatures stacked together—the light doesn’t just bend smoothly. It refracts in a more complicated way, producing multiple images of the same distant objects:
- Some upright,
- Some upside down,
- Some stretched,
- Some compressed.
To a human observer, these can combine into illusions that look like:
- Floating cities,
- Walls of light,
- Large, distant structures.
That’s exactly the kind of thing I felt I was seeing in that moment on the plains.
So from a pure physics standpoint, there’s a very reasonable explanation:
A temperature inversion over flat land turned distant lights—maybe from a town, industrial site, or clusters of farms—into a distorted, band-like image along the horizon. The atmosphere acted like a stack of lenses, taking scattered points and compressing them into a continuous luminous “wall.”
But that still doesn’t answer why my eyes saw it so clearly and my dash cam almost didn’t.
To understand that, I have to talk about frame rates, brains, and how a human observer is very different from a camera.
We Don’t See Like Cameras
Both my eyes and the dash cam were pointed at roughly the same place. Yet they reported different worlds.
The simple explanation is: “My eyes are better than a cheap camera.” But that’s only half the story.
A camera:
- Has a fixed position and angle.
- Captures light in discrete frames at a fixed frame rate.
- Stores pixel values that get compressed to save space.
My eyes and brain:
- Move constantly—tiny micro-movements of the eyes, subtle shifts in my head and body.
- Continuously adjust sensitivity, contrast, and focus.
- Combine raw input with prediction, memory, and meaning.
Those differences are huge.
Geometry matters
A superior mirage sits right on the horizon and often occupies a tiny angular slice of space. A small change in vantage point—just a few centimeters up or down—can change:
- Which bent light paths actually reach your eye or the lens.
- Where the apparent image sits relative to the horizon.
- Whether the stacked images overlap into a “wall” or spread out into scattered fuzz.
If my eyes are slightly higher than the dash cam, or if I unconsciously adjusted my posture to get a better view, I might have lined up with the most dramatic part of the mirage. The dash cam, fixed at its mount position, might have sampled a slightly different set of rays where the effect was weaker.
Same phenomenon. Different window into it.
Dynamic range and subtlety
Human vision can handle an impressive dynamic range. I can see dim objects near bright ones, notice tiny contrasts, and track delicate changes over time.
A dash cam, especially a cheap consumer model like mine:
- Overexposes bright points, turning lights into blown-out blobs.
- Underexposes dark regions, turning subtle bands into flat black.
- Applies automatic adjustments and heavy compression that smooth out small changes frame to frame.
The mirage I saw wasn’t a cartoon-level special effect. It was subtle—delicate shifts of brightness and shape that my brain stitched into a clear structure. The camera might have recorded those variations, but compression could have smeared them into a boring, low-contrast strip that doesn’t remotely match my lived experience.
The result is a kind of visual mismatch:
- To my eyes, the wall was obvious, textured, and full of internal pattern.
- On video, the same band looks like “maybe some haze at the horizon.”
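That mismatch is easy to reproduce in miniature. The sketch below pushes a hypothetical scene, a dim sky, a mirage band a few percent brighter, and one bright lamp, through the kind of 8-bit quantization step that sits inside a cheap sensor pipeline. Every luminance value here is invented for illustration:

```python
def quantize_8bit(luminance, gain=1.0):
    """Map a scene luminance (0.0 to 1.0) to an 8-bit pixel value,
    clipping anything the exposure pushes past full scale."""
    v = min(max(luminance * gain, 0.0), 1.0)
    return round(v * 255)

# Invented scene values, not measurements:
sky = 0.0200    # dim twilight sky near the horizon
band = 0.0210   # mirage band, 5% brighter: an easy contrast for an eye
lamp = 0.9500   # a direct light source in the same frame

sky_px, band_px, lamp_px = (quantize_8bit(l) for l in (sky, band, lamp))

# Boost the exposure to chase the band, and the lamp blows out instead:
lamp_boosted_px = quantize_8bit(lamp, gain=2.0)
```

The 5% brightness step the eye can track lands inside a single 8-bit code, so sky and band come out as identical pixel values before compression even touches the frame, while any gain that might rescue them clips the lamp to a blown-out blob.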
So far, this is still just optics and engineering.
But there’s another layer: how my brain turns that incoming data into a world—and how that depends on something like an internal frame rate.
Do We See at 30 FPS? Not Exactly, But…
People often talk about human vision as if we “see at 24 fps” or “30 fps,” like a camera or a movie. It’s a convenient analogy, but it’s wrong in a literal sense.
My eyes aren’t taking separate still images and playing them back like a flipbook. Instead, photons hit my retina continuously, and my brain organizes that stream into something coherent.
That said, the brain does show rhythmic activity in certain frequency bands. Some of the rhythms related to perception are in the range of about 30 to 70 cycles per second. You could loosely think of that as a kind of "sampling" or "timing" grid that the brain uses to coordinate how it binds features together into objects.
So even if I’m not seeing in discrete frames, my brain is processing visual information in bursts and waves. These cycles help determine:
- What I notice.
- What I group together.
- What I ignore.
In other words, perception has a temporal structure. It’s not just “light goes in, picture comes out.” It’s more like:
- Sample the input.
- Compare it to expectations.
- Update the internal model.
- Repeat—over and over, many times per second.
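That loop can be sketched in a few lines. This is a minimal model with invented parameters (the 0.2 learning rate, the noise level), not a claim about actual neural code: the internal model is nudged toward each new sample by a fraction of the prediction error, and that alone is enough to pull a steady "band" out of noisy input:

```python
import random

random.seed(7)

def perceive(samples, learning_rate=0.2):
    """Run the sample/compare/update loop over a stream of inputs,
    returning the internal model's value after each cycle."""
    model = 0.0
    history = []
    for sample in samples:
        error = sample - model            # compare input to expectation
        model += learning_rate * error    # update the internal model
        history.append(model)
    return history

# A faint, steady band of brightness 1.0, buried in sensory noise:
noisy_band = [1.0 + random.gauss(0.0, 0.3) for _ in range(200)]
trajectory = perceive(noisy_band)
settled = sum(trajectory[-50:]) / 50      # where the model "locks in"
```

After a couple hundred cycles the model has settled near the true value even though no single sample was trustworthy, which is the sense in which perception "locks in" a best guess from ambiguous input.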
So when I stare at a strange band on the horizon, my brain is not just passively recording it. It’s actively trying to make sense of it based on all the patterns I’ve ever seen before.
Is it a town? A tree line? Hills? A storm? A fleet of vehicles? A reflection? A wall?
The more ambiguous the input, the more my internal model has to work to “lock in” a best guess.
The Brain as a Rendering Engine
This is where the idea of “rendering” becomes more than just a metaphor.
If I think of my brain as a kind of rendering engine, then:
- The raw sensory input is like noisy, incomplete data.
- The internal model is like the geometry and physics engine of a game.
- The final image in consciousness is the rendered scene my mind presents as “reality.”
My brain doesn’t just receive the world. It constructs it.
When I saw that wall of lights, the physical situation was something like:
- A distant set of light sources.
- Uneven air layers bending the light into stacked, compressed images.
- A very small region near the horizon where all of this converged.
From there, my internal system did the rest:
- It detected a strange horizontal bright band.
- It filled in missing details.
- It stabilized this into a single, coherent object: “a wall with lights.”
The dash cam had no such inner model. It captured pixels, nothing more. No predictions, no meaning, no memories of what a “wall of lights” might be. When I later watched the footage, I was watching that poor, reduced version of events. My brain couldn’t extract the same structure it had in the moment because the data had been stripped down too much.
So I ended up with two realities:
- Live reality: my brain rendered a strong mirage-object.
- Recorded reality: the camera captured faint hints that were functionally useless to my renderer in replay.
That gap between what “really happened” and what gets logged on video is where my idea of the observer effect comes in—not in the strict quantum physics sense, but in a perceptual and occult sense.
The Observer Effect, Occult-Style
In quantum mechanics, the “observer effect” has a precise meaning tied to measurement and interaction. But that’s not what I’m claiming here. I’m not saying my consciousness changed the air itself or bent light differently than the dash cam did.
What I am saying is this:
The nature of my experience of reality depends on the observer—on me.
- The atmosphere created an optical situation that allowed for a mirage.
- My nervous system, with its rhythms, frame-like sampling, and predictive models, turned that into a vivid wall of lights.
- The dash cam, lacking that inner world-building engine, did not.
From an occult or magickal perspective, this gets interesting.
Many esoteric traditions teach that the world we live in is not purely objective. They speak of:
- Maya – the world as illusion or dream.
- A subtle “astral” medium that reflects and refracts images and thoughts.
- Magick – the shaping of perception and probability through focused will.
In that framework, what I saw on the horizon was a literal and symbolic lesson:
- The outer atmosphere bent physical light into a strange configuration.
- The inner “atmosphere” of my mind bent that configuration into a meaningful image.
The mirage wasn’t just a trick of hot-and-cold air. It was a mirror showing me how reality is always co-created:
- Part external conditions.
- Part internal expectation and attention.
My dash cam participated only in the external. I participated in both.
Magick, Mirages, and the Art of Seeing
If magick is, at least in part, the art of consciously working with how reality shows up for us, then mirages are like free lessons from the universe.
They show me that:
- The world I experience is not a perfect one-to-one copy of what’s physically “out there.”
- My mind continuously fills in gaps, smooths over noise, and locks in patterns, turning light and shadow into meaningful objects.
- Tools like cameras, while “objective” in one sense, are missing the entire layer of meaning-making that I actually live in.
So when I talk about the observer effect in this context, I mean something like:
The observer doesn’t just look at reality; the observer helps decide which version of reality becomes solid in their awareness.
The Saskatchewan wall of lights then becomes:
- A meteorological event.
- A neurological event.
- And, if I choose to see it that way, a magical event.
The magic isn’t that I broke physics or rewrote the air. The magic is that consciousness took something ambiguous and rendered it into a portal-like scene that felt charged with significance.
For a moment, the plains looked like they ended in a luminous gate.
And then, as I drove on and the conditions shifted, the gate dissolved.
What I Took from That Drive
It used to bother me that the dash cam didn't see what my eyes saw. It felt like the camera was contradicting my experience, like it was saying:
“You imagined it. It wasn’t there.”
But the more I thought about the physics and psychology of the situation, the more I realized:
- The mirage was there, in the only place it could be: in the interaction between atmosphere and observer.
- The camera was not lying. It was simply blind to the kind of pattern my brain could extract.
- My perception was not lying either. It was performing the high-level work of building a world from incomplete inputs.
Standing back from it all, the event becomes a useful symbol for how I think about reality now:
- The world is not a fixed, finished photograph.
- It’s something like a constantly updated render, driven by sensory input, nervous system rhythms, and deeper layers of mind and meaning.
- Sometimes, under certain conditions, the render glitches—and in that glitch, we can see the engine behind it.
The wall of lights on the Saskatchewan horizon was one of those glitches. A Fata Morgana on the plains. A lesson in optics. A reminder that my eyes and my instruments don’t always agree. And an invitation to treat perception itself as an occult art: a place where physics and magick overlap, not in contradiction, but in collaboration.