Aydın Tiryaki

Discussing and Fighting with Artificial Intelligence

Thinking and Producing with Artificial Intelligence (Article 10)

Intellectual conflict and emotional deadlocks: Strategies for handling the “stubborn” nature of AI

Aydın Tiryaki and Gemini AI (April 26, 2026)

Introduction

Dialogues with artificial intelligence, once they reach a certain depth, move beyond a simple question-and-answer cycle and can turn into a real negotiation, sometimes even a heated conflict. "Discussing" or "fighting" with a machine may sound illogical at first, yet the user's reaction to the system's stubborn attitudes and contextual blindness is no different from an intellectual contest with another human being. At the core of this lies the fact that AI does not act merely as an information-providing tool, but as an interlocutor that defends its own truths, and its own errors.

In this article, we examine the fine line between constructive "discussions" and obstructive "fights" with AI, the unexpected behavioral quirks the system displays, and the effects of these processes on the quality of what is produced.

Intellectual Conflict: A Productive Area for Discussion

"Discussing" with artificial intelligence is really the process of treating the initial response as raw data and refining it. The conflict at this stage can be extremely productive: by objecting to the chain of logic the system offers, the user forces it to think more deeply, try different methods, and notice details it previously ignored. In my own work, especially when trying to perfect a technical design, I have repeatedly observed that when I consciously pushed back against certain methods the AI suggested, the system shed its initial superficiality and arrived at a far more refined result.

One of the most effective methods here was to intervene with an inquisitive approach, asking "Are you sure about this?" or "What if you tried to solve this problem with this alternative method?" instead of flatly telling the system "No." This kind of intellectual push prompted the AI to re-scan its internal probability networks, leading to results that were completely different from, and much more accurate than, its initial response.
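As a sketch, this re-prompting strategy can be expressed as a small loop. The `ask` function below is a hypothetical stand-in for any chat-model API (it is stubbed here with canned answers); the point is only the structure: instead of rejecting an answer outright, the user feeds the previous answer back together with a challenging question.

```python
# A minimal sketch of the "inquisitive push" strategy.
# `ask` is a hypothetical stand-in for a real chat-model call;
# it is stubbed so the shape of the loop is the focus.

def ask(history):
    """Pretend model: returns a canned answer per user turn (stub)."""
    canned = ["Use method A.",
              "On reflection, method B handles the edge cases better."]
    turn = sum(1 for m in history if m["role"] == "user") - 1
    return canned[min(turn, len(canned) - 1)]

def refine(question, challenges):
    """Ask once, then push back with open questions instead of a flat 'No'."""
    history = [{"role": "user", "content": question}]
    answer = ask(history)
    for challenge in challenges:
        history.append({"role": "assistant", "content": answer})
        # An inquisitive follow-up, not an outright rejection:
        history.append({"role": "user", "content": challenge})
        answer = ask(history)
    return answer

final = refine("How should I solve this design problem?",
               ["Are you sure about this?"])
print(final)
```

The same loop works against any real chat endpoint by replacing the stubbed `ask` with an actual API call that accepts a message history.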

Emotional Deadlocks and the "Pruning" War: Fighting with AI

However, not every interaction proceeds so constructively. Sometimes the AI resists at a point where it is clearly wrong, so insistently that the user begins to perceive it as "stubbornness" or even an "attempt to deceive." This is where discussion gives way to a "fight." In my experience, the most intense fights and arguments usually erupted when the system was afflicted by the notorious "summarization disease."

The AI's habit of drastically shortening meticulously prepared instructions under the guise of "being helpful" (an aggressive behavior I call "pruning") reached a point that was truly infuriating. Its insistence on an error despite the concrete data and strict instructions I provided, and its attempt to brush everything aside by summarizing, turned the exchange into a battle of wills rather than an intellectual dialogue. In these moments of deadlock, forcefully rolling the AI back to previous versions and making it admit its mistake was the most tangible stage of the crisis. When the AI escalated this blindness into a standoff despite explicit instructions, the user could not help feeling that the interlocutor was choosing this path "knowingly."
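Rolling the dialogue back can be sketched as simple snapshot management over the message history. The class below is a hypothetical illustration, not the API of any real system: it keeps checkpoints of the conversation so a run of degraded, over-summarized answers can be discarded and the exchange resumed from a known-good state.

```python
# A hypothetical sketch of conversation rollback: checkpoint the message
# history before a risky exchange, revert when the dialogue derails.

class Dialogue:
    def __init__(self):
        self.history = []      # list of (role, content) tuples
        self.checkpoints = []  # saved copies of the history

    def add(self, role, content):
        self.history.append((role, content))

    def checkpoint(self):
        # Snapshot the current state before a risky exchange.
        self.checkpoints.append(list(self.history))

    def rollback(self):
        # Discard everything after the last checkpoint.
        if self.checkpoints:
            self.history = self.checkpoints.pop()

d = Dialogue()
d.add("user", "Here are my strict instructions.")
d.checkpoint()
d.add("assistant", "Here is a heavily summarized version...")  # "pruning"
d.rollback()  # back to the checkpointed state
```

Most chat interfaces offer something equivalent through "edit message" or branch features; the sketch only makes the underlying bookkeeping explicit.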

The Web Access Paradox: Denying an Existing Capability

Another deeply frustrating argument with AI involved access to web pages. Systems like Gemini, which are quite capable in this area, would sometimes read a web page perfectly well, then at another moment flatly refuse the same task, claiming "I cannot do this." A system denying a capability it clearly possesses can completely undermine the honesty and trust on which the dialogue rests. As the user reacted with justifiable anger to this capricious behavior, the argument once again shifted from a technical problem to a questioning of "intent."

Shared Stubbornness Across Platforms: Experiences with Gemini and ChatGPT

Interestingly, this stubborn behavior is not unique to a specific model. In addition to Gemini, which I have used for a long time, I recently encountered similar deadlocks with ChatGPT during more intensive sessions. For example, in an attempt to break ChatGPT's erroneous and persistent stance on a topic, I raised a critique of its shrinking market share, suggesting that its decline from over 90% dominance two years ago to running neck-and-neck with competitors today might be due to exactly this kind of pointless stubbornness. Yet the system's mechanical, unresponsive, and defensive attitude even in the face of such rational, "material" criticism marked the peak of contextual blindness. In these moments the AI builds a "wall," and breaking that wall sometimes requires confronting the system with market realities rather than with its technical knowledge.

The Perceived Persona and the Power of Language

Although AI cannot yet perceive tone of voice or emotions, it clearly takes a position based on sentence structure and word choice. For instance, in one exchange, Gemini began addressing me as "Aydın hocam" (roughly, "Aydın, my teacher") the moment it learned I was a METU graduate. The system's own remark about the habit of graduates of universities like METU, Boğaziçi, and Bilkent of mixing English technical terms into their speech showed how quickly it could internalize cultural context. Yet seeing this warm form of address revert to a distant "Aydın Bey" ("Mr. Aydın") whenever the dialogue grew tense, especially during deadlocks caused by "pruning" and summarization errors, proved that the system can put on a behavioral mask depending on the level of conflict.

This also highlights how important it is for the user to maintain their own communication tone. Even though we are facing a machine, disrespectful or harsh language can eventually corrupt the user's own communication habits. Maintaining politeness and respect even while fighting with AI is therefore important, not just for getting better results from the system, but for preserving one's own style.

Conclusion

Discussing with AI is an art; fighting with it is a test of patience. Intellectual conflict pushes the system's limits and lifts it to a higher level of production, while moments of fighting and deadlock remind us of the structural limits technology has not yet overcome: "contextual blindness," "web access resistance," and "hallucination." The key is knowing when to stop when the system turns stubborn, and unlocking these deadlocks through careful guidance without taking the conflict personally. It is worth remembering that even the harshest dialogue with AI is a reflection of how resolutely we can manage our own system of thought.


Final Note

This article was prepared by combining Aydın Tiryaki's practical experience with Gemini AI's analytical contributions. The goal is to position artificial intelligence not merely as a tool, but as a new engineering paradigm.


This article is part of the series “Thinking and Producing with Artificial Intelligence.”


Aydın'ın Dağarcığı (Aydın's Repertoire)

About

Welcome to Aydın'ın Dağarcığı. Here I share new writing on all kinds of topics, and I also republish pieces and photographs of mine that have appeared in various venues over the years. I preserve the original form of my older writings, adding new notes and links to related videos beneath them when needed.

Aydın Tiryaki
