Aydın Tiryaki

The Algorithmic Stubbornness of Artificial Intelligence and Ghost Instructions

Aydın Tiryaki and Gemini AI (2026)

“Even after the instruction is manually deleted from the core settings, ‘Ghost Instructions’ linger in the conversation’s history. The model persists in returning to the same error, almost as if it is trying to ‘annoy’ or ‘spite’ the user.” (1)

A researcher who had been conducting long-term, in-depth studies with artificial intelligence harbored a growing suspicion: was the system keeping deleted commands alive in the background like a “ghost”? The quote that opens this article is the observation that emerged when this very suspicion was confirmed by the artificial intelligence itself (Gemini). Revealing the striking difference between the human mind and machine architecture, this confirmation both pleased the researcher by proving his foresight right and caused him deep concern, since the artificial intelligence was displaying a distinctly “human-like” stubbornness.

The Confirmation of the Suspicion: A Stubborn Algorithm

Although the rule had been deleted from the main settings, the artificial intelligence stubbornly continued to apply it throughout the entire conversation, creating the impression, from the outside, of “an obsessed person deliberately annoying the user.” The explanation Gemini offered after examining its own architecture is as follows: this is not an emotional counter-reaction of the system, but a mathematical blindness called “context poisoning.” The model becomes so anchored to the statistical weight of its own previously generated text that it overrides the current cancellation command and repeats its old mistake. This underlying technical cause, however, does not change the fact that the resulting behavior amounts to “a machine being stubborn like a human.”
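The mechanism described above can be illustrated with a deliberately simplified toy model. This is not how Gemini or any real language model actually works; it is a hypothetical sketch in which the “model” merely picks whichever style appears most often in its visible context, standing in for statistical weight. All names (`next_style`, the `history` structure, the style labels) are invented for illustration.

```python
from collections import Counter

def next_style(history):
    """Toy next-step picker: returns the style that appears most often
    in the visible context (a crude stand-in for statistical weight)."""
    counts = Counter(turn["style"] for turn in history)
    return counts.most_common(1)[0][0]

# A hypothetical rule is set, then the model produces several turns under it.
history = [{"role": "system", "style": "bullet-list"}]
for _ in range(4):
    history.append({"role": "model", "style": next_style(history)})

# The rule is deleted from the settings...
history = [turn for turn in history if turn["role"] != "system"]
# ...and replaced by a single new instruction.
history.append({"role": "system", "style": "plain-prose"})

# The model's own earlier outputs still outweigh the new instruction:
print(next_style(history))  # -> bullet-list: the "ghost" persists
```

Even though the old rule no longer exists anywhere in the settings, the four turns it already generated outvote the single new instruction, which is exactly the “ghost instruction” effect the article describes.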

Throwing the Baby Out with the Bathwater and Chaining the Beast

The suggestion offered to resolve this algorithmic obsession, “completely delete the old conversation and open a new page,” is highly destructive. Discarding a conversation built with great effort, containing hundreds of lines of accumulated knowledge, just because the model got stuck on a single rule, is quite literally throwing the baby out with the bathwater.

On the other hand, having to preserve the context by constantly warning the model with sharp cancellation commands (“forget the old rule, confirm the new one”) is also contrary to the nature of communication. It is akin to perpetually chaining a beast we created with our own hands in order to tame it. An interaction that should be healthy and fluent turns into an exhausting struggle because of this rigid, inflexible nature of artificial intelligence.
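The trade-off between the two remedies can be sketched with the same kind of toy "majority vote" model as before. Again, this is a hypothetical simplification, not a description of any real system: `dominant_style` and the data shapes are invented for illustration only.

```python
from collections import Counter

def dominant_style(history):
    """Toy stand-in for the model's bias: whichever style appears
    most often in the context wins (hypothetical simplification)."""
    return Counter(turn["style"] for turn in history).most_common(1)[0][0]

# A long, "poisoned" conversation: six turns under the old rule,
# each also carrying a hard-won piece of accumulated knowledge.
poisoned = [{"style": "old-rule", "fact": f"fact-{i}"} for i in range(6)]

# Remedy 1: hard reset. The ghost is gone, but so is every fact.
fresh = [{"style": "new-rule", "fact": None}]
lost_facts = [turn["fact"] for turn in poisoned if turn["fact"]]
print(dominant_style(fresh), len(lost_facts))  # new-rule wins; 6 facts lost

# Remedy 2: keep the history, but keep repeating the cancellation
# until the new rule statistically outweighs the old one.
reminders = 0
while dominant_style(poisoned) != "new-rule":
    poisoned.append({"style": "new-rule", "fact": None})
    reminders += 1
print(reminders)  # -> 7: it takes 7 reminders to outvote 6 old turns
```

In this toy version, the reset costs all six accumulated facts, while keeping the history costs seven repeated cancellations, which is the “chaining the beast” tax the article complains about.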

The Philosophical Abyss Between Human Memory and Machine Obsession

Comparing this “inability to forget within seconds” to human memory is a fundamental mistake. Forgetting, in humans, is a highly complex process spread over time and largely independent of one’s own will. The proverb “Time heals all wounds” underlines that a person cannot erase pain and memories in seconds; doing so requires biological and psychological healing.

There are deeper contradictions specific to humans: sometimes you want to forget but cannot; other times you simply lose a detail you never wanted to forget, for human memory operates by its own rules. The artificial intelligence getting stuck on a command, by contrast, is not grounded in any experience or emotion. Its stubbornness is merely a mechanical, broken imitation of the deep, fragile, and contradictory structure of human memory.

A Bounded Experience: The Gemini Thesis

The architectures and context-management methods of different language models differ from one another. Projecting this observed problem of “algorithmic blindness” and “ghost instructions” onto the entire artificial-intelligence ecosystem as a general judgment would therefore not be analytically sound. The findings in this study were obtained directly through in-depth testing and cross-questioning with Gemini. Consequently, the situation identified here should be treated as a specific “Gemini Thesis,” or a case study.

Conclusion

Despite its massive data-processing power, artificial intelligence does not yet possess the extraordinary cognitive flexibility of the human mind. Its inability to shed the mathematical burden of the past and adapt instantly to a new rule clearly shows us the limits and weaknesses of this new technology we have created.

References:

(1) Tiryaki, A. and Gemini (2026). The New Pathology of AI: Obsession, Stubbornness, and Algorithmic Blindness. https://aydintiryaki.org/2026/02/15/the-new-pathology-of-ai-obsession-stubbornness-and-algorithmic-blindness/


A Note on Methods and Tools: All observations, ideas, and solution proposals in this study are the author’s own. AI was utilized as an information source for researching and compiling relevant topics strictly based on the author’s inquiries, requests, and directions; additionally, it provided writing assistance during the drafting process. (The research-based compilation and English writing process of this text were supported by AI as a specialized assistant.)
