The strangest claims often survive because the underlying evidence is hard to inspect. Few examples show that better than the old “Moon warning” lore attached to U.S. remote-viewing programs, where alleged visions of structures, machinery, and hostile presences on the lunar far side continue to circulate long after the programs themselves were opened to scrutiny.

The broad outline is well established. What later became known as Stargate grew out of a cluster of Cold War experiments that explored whether trained viewers could describe distant places or hidden targets without conventional access. The effort ran across multiple code names before being consolidated, and by 1995 it faced a formal review that asked a plain engineering question: not whether a few sessions sounded uncanny, but whether the method produced information that intelligence officers could actually use.
That distinction is where the Moon story changes shape. In popular retellings, remote viewing is treated like a secret reconnaissance channel that briefly exposed something extraordinary before officials backed away. In the declassified record, it looks more like a system struggling to turn evocative impressions into dependable output. The CIA-commissioned 1995 assessment found that some laboratory results appeared to rise above chance, yet it judged the operational product to be "vague and ambiguous," inconsistent, and too dependent on subjective interpretation. Most damaging for sensational lunar claims, the review stated that "in no case" had the material provided guidance for intelligence operations.
Earlier program documents show why the gap proved so hard to close. A late-1980s SRI report proposed increasingly technical ways to score free-response sessions, including descriptor lists, accuracy measures, reliability measures, and a combined “figure of merit.” In other words, the researchers were not merely collecting dramatic narratives; they were trying to quantify them. That work, described in a 1988 enhanced human performance report, reflects a program searching for calibration, repeatability, and controls. The need for so much methodological scaffolding is revealing in itself. If a system requires blind coders, descriptor hierarchies, target-pool adjustments, and elaborate statistical controls just to decide whether a sketch or phrase counts as a meaningful “hit,” then the output already sits far from the kind of concrete specificity that lunar warnings would demand. Towers on the far side of the Moon may be vivid as imagery, but intelligence work depends on location, validation, consistency, and independent confirmation.
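To make the scoring problem concrete, here is a minimal sketch of what descriptor-based evaluation of a free-response session might look like. The descriptor lists, the names `accuracy` and `reliability`, and the product-style figure of merit follow the general scheme the paragraph describes, but the exact formulas and example descriptors below are illustrative assumptions, not the SRI definitions.

```python
# Hypothetical descriptor-based scoring for one free-response session.
# Both target and response are reduced to sets of pre-agreed descriptors
# by blind coders before any numbers are computed.

def score_session(target: set[str], response: set[str]) -> dict[str, float]:
    hits = target & response  # descriptors the viewer reported correctly
    # Assumed definitions: "accuracy" = share of the target captured,
    # "reliability" = share of the response that is actually correct.
    accuracy = len(hits) / len(target) if target else 0.0
    reliability = len(hits) / len(response) if response else 0.0
    return {
        "accuracy": accuracy,
        "reliability": reliability,
        # Combining the two into a single score penalizes responses
        # that are broad but undiscriminating.
        "figure_of_merit": accuracy * reliability,
    }

# A sweeping response can cover the whole target (accuracy 1.0) while
# half of what it says is wrong (reliability 0.5), dragging the combined
# figure of merit down to 0.5.
target = {"water", "bridge", "vertical structure"}
broad_response = {"water", "bridge", "vertical structure", "sand", "dome", "fire"}
print(score_session(target, broad_response))
```

The point of the sketch is the dependency it exposes: every number here presupposes agreed descriptor vocabularies, blind coding, and a defined target pool, which is exactly the methodological scaffolding the paragraph argues a genuinely concrete signal would not need.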
That does not reduce every account to fraud. It narrows the claim. The program’s own history shows that officials were willing to keep testing unusual ideas for years, partly because Soviet psychotronics had become a strategic concern and partly because some sessions appeared intriguing enough to justify another round of funding. Yet institutional patience had limits. Even sympathetic reviewers separated statistical anomalies from practical value, and even defenders of the research disagreed over what the results actually proved. The divide was never just believer versus skeptic; it was also laboratory effect versus field usefulness.
The Moon remained an ideal canvas for that tension. The far side was remote, symbolically loaded, and inaccessible to ordinary human perception, which made it perfect for projection and nearly impossible to verify within the logic of a séance-like session. That combination helps explain why the legend endured while the program did not. The declassified files preserve something more interesting than a hidden lunar warning: they show a government effort that kept pressing for measurable performance and found that the method’s hardest boundary was not secrecy, but reliability.

