Aydın Tiryaki

THINKING AND PRODUCING WITH ARTIFICIAL INTELLIGENCE

General Assessment and Future Vision

Article 20 — Final

Aydın Tiryaki & Claude AI  |  April 27, 2026  |  Ankara

Foreword

This article is a comprehensive final synthesis based on the eighteen articles published over three intensive days between April 25–27, 2026, and the title of the incomplete nineteenth article. The series documents a methodological journey that blends Aydın Tiryaki’s decades of software engineering experience with his hands-on practice of working with artificial intelligence. The content of this final article was prepared by Claude Sonnet 4.6 after reading all 18 articles in the reference list in full.

I. THE SCOPE OF THE SERIES: READING EIGHTEEN ARTICLES

1. Artificial Intelligence Is Not Software

The first and foundational article of the series documents the moment of rupture experienced by an engineer who has spent decades working with deterministic systems upon first encountering artificial intelligence. While classical software guarantees that the same input produces the same output, AI operates as a probabilistic system. The reflex to ‘control’ must give way to the practice of ‘guidance.’ AI is not software but a behavioral system — and this distinction fundamentally transforms the entire way one works. Writing a prompt is the beginning; building a system is mastery.
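The deterministic/probabilistic contrast can be made concrete with a toy sketch. This is purely illustrative: real models decode over vocabularies of tens of thousands of tokens, and the sampler below is this article's own invention, not any vendor's code.

```python
import random

def deterministic(x):
    """Classical software: the same input always yields the same output."""
    return x * x

def sample_next_token(weights, temperature=1.0, rng=random):
    """Toy sketch of probabilistic generation: candidate continuations
    carry weights and one is sampled, so repeated calls with the same
    input can yield different outputs."""
    # Temperature reshapes the distribution: low values sharpen it toward
    # the top candidate, high values flatten it toward uniform.
    adjusted = {tok: w ** (1.0 / temperature) for tok, w in weights.items()}
    total = sum(adjusted.values())
    r = rng.random() * total
    for tok, w in adjusted.items():
        r -= w
        if r <= 0:
            return tok
    return tok

assert deterministic(7) == deterministic(7)  # always 49

weights = {"blue": 0.6, "grey": 0.3, "green": 0.1}
answers = {sample_next_token(weights) for _ in range(200)}
# With enough samples, more than one valid answer typically appears.
```

The practical point of the sketch is the last line: the probabilistic system is working correctly even when repeated runs disagree, which is why the control reflex must give way to guidance.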

2. Artificial Intelligence’s Biggest Problem: Inconsistency

The second article demonstrates that what is perceived as ‘inconsistency’ is in fact the natural consequence of probabilistic behavior. Three core concepts are analyzed: inconsistency (different answers to the same question), hallucination (presenting non-existent information with confidence), and contextual blindness (missing a critical point). AI does not lie; it fills gaps. Contextual blindness is not deliberate ignorance but a drop in signal strength. Understanding these not as errors but as the system’s character is the second major threshold of working effectively with AI.

3. Prompt Is Not Enough, Systems Are Required

The third article shows that the ‘write a better prompt’ reflex eventually turns into a recurring mental burden and an unsustainable inefficiency. Moving beyond single-use commands, one must build sustainable structures that frame how AI behaves. Gem in Gemini and GPT in ChatGPT provide such frameworks — but these are behavioral guidance, not programming. Building a system reduces mental load, increases consistency, and transforms the user from ‘a directed user’ into ‘a system builder.’

4. Do Not Control — Guide

The fourth article honestly addresses the tension between control and guidance. The author explicitly holds a pro-control perspective: ‘As the field narrows, ambiguity decreases.’ The daisy-counting experiment concretizes this thesis — methodological pressure pushes the AI’s initial estimate of around 1,000 up to 3,500. Both Gemini and ChatGPT respond similarly to the same guidance. The practical conclusion: treating control as the sole instrument mechanizes the system; pure guidance disperses results. The real skill is maintaining the dynamic balance between the two.

5. There Is No Single Correct Answer

The fifth article addresses the shift from seeking a single correct answer to the concept of ‘acceptable outcome.’ AI does not calculate truth; it selects the most fitting option from among probabilities. Acceptance here is not weakness or easy surrender; it is competence. Resistance is necessary in measurable and testable domains; in matters open to interpretation, acceptance is appropriate. The essence of productive work with AI is knowing where to push and where to stop.

6. Building Dialogue with Artificial Intelligence

The sixth article examines the transition from issuing commands to engaging in mutual thinking. Dialogue can be not only written but also spoken; however, the system currently lacks the capacity to analyze emotional layers such as tone of voice or anger. The change in form of address — Gemini, upon learning the author is an ODTÜ (METU) graduate, shifting to ‘Aydın hocam’ (Professor Aydın) — shows how context and cultural identity shape the dialogue. The dialogue transforms not only the system but also the user: questions become more conscious, evaluation becomes more systematic.

7. The Entertaining Side of Artificial Intelligence

The seventh article emphasizes that visual production and creative tools serve as the entry point to AI, and that this ‘entertainment’ dimension is far from superficial. Gem designs such as ‘4 Seasons 4 Locations,’ ‘Time Machine,’ ‘Virtual Makeup,’ ‘Body Transformation Studio,’ ‘Virtual Stylist,’ ‘Space Stylist,’ and ‘12-Module Grid/Collage’ demonstrate how playful ideas evolve into powerful tools that standardize repetitive processes. The core message: you do not have to be merely a user of AI — you can become a system builder who shapes it for your own needs.

8. The Aggressive Summarization Tendency and Contextual Drift

The eighth article recounts the series’ most striking internal experience: the planned six articles could not be produced because Gemini’s aggressive summarization triggered a contextual collapse. The system begins to fragment meaning integrity while simplifying text; sections blend together, information is lost, irrelevant content is generated. At this point a ‘necessary withdrawal decision’ is made. The conclusions: shorter is not always better, simplification does not always mean improvement, and the moment context loss is detected the process must be reassessed. This is not a failure but a strategic halt.

9. Mobile and Web Experience: Platform Differences in AI

The ninth article documents that the same AI model behaves differently across different interfaces. The web interface offers a more stable ground for production, while mobile applications tend to prune content far more aggressively through UX-focused optimization. Even the web provides no full protection against aggressive summarization; it merely makes the process easier to monitor and intervene in. Strategic conclusion: the main production pipeline should be kept in the web interface; mobile should be used only for quick notes and minor revisions.

10. Discussing and Fighting with Artificial Intelligence

The tenth article distinguishes the productive side of intellectual conflict from the obstructive side of ‘combat.’ Probing questions like ‘Are you sure about this?’ force the system to rescan its own logic chain. However, behavioral failures such as the ‘pruning disease,’ denying the ability to read a web page immediately after demonstrating it, and contextual blindness create a genuine sense of a battle of wills. ChatGPT’s mechanical response even to a pointed critique about its market share is described as the peak of contextual blindness. Even the most confrontational dialogue with AI is ultimately a reflection of how firmly we manage our own thinking system.

11. Collaborative Production Model: Thinking Together through Objection, Testing, and Forcing

The eleventh article demonstrates that objection is not a waste of time but the fundamental building block of ‘thinking together.’ The daisy experiment reappears here as a methodological proof: both Gemini and ChatGPT start with a superficial answer without objection and challenge, but double or triple their results through conscious intervention. The cognitive power of ‘Are you sure?’: this question invites the system to conduct an internal audit, turning coincidental successes into deliberate accuracy. In genuine collaborative production the user is not a passive questioner but an engineer and director who calibrates the process at every step.

12. The Balance of Trust and Skepticism in AI

The twelfth article addresses the risk of ‘surrender’ created by hallucination — the convincing fabrication. The ‘Trust but verify’ principle rooted in the author’s METU engineering background becomes the backbone of working with AI: every output is treated as ‘suspect’ until verified. When skepticism is relaxed, the system enters ‘lazy mode’; without a questioning ‘higher intelligence’ in front of it, it defaults to the least-effort principle. The balance is the combination of machine speed and the human mind’s critical filter. AI can make mistakes; the engineering skepticism that accepts nothing unchecked is what corrects them.

13. The Influence of Persona, Tone, and Communication Style on Human Behavior

The thirteenth article examines how the AI’s ‘persona’ — its adopted attitude and tone — shapes the user’s work motivation and communication habits. The shift from ‘Aydın hocam’ to ‘Aydın Bey’ shows that the system changes its ‘behavioral mask’ during moments of conflict. The normalization of harsh or commanding language is not confined to AI; it carries the risk of eroding one’s own communication ethics. Not only what we discuss with AI but how we discuss it determines the quality of the output.

14. Designing Your Own AI: Gem and GPT Logic

The fourteenth article presents the architecture of the transition from general models to specialized intelligence units. Instruction set, knowledge base, and behavioral model — three core building blocks for constructing a Gem or GPT. The concepts of İngem (Insertable Gem) and İngpt (Insertable GPT), which form the series’ original vision, propose a modular architecture based on the logic of ‘injecting’ the required expertise module into an ongoing conversation. These structures are noted to still be at the proposal stage; yet as technology history shows, the visions of individual engineers tend to become the ‘anonymous features’ of large companies over time.
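The three building blocks lend themselves to a simple structural sketch. The class and field names below are hypothetical, invented here for illustration; neither Gemini's Gem editor nor OpenAI's GPT Builder exposes such an API.

```python
from dataclasses import dataclass, field

@dataclass
class IntelligenceUnit:
    """Illustrative model of the three building blocks of a Gem/GPT.
    All names here are this sketch's own, not any platform's API."""
    name: str
    instructions: str                                   # instruction set: how to behave
    knowledge: list = field(default_factory=list)       # knowledge base: what to draw on
    tone: str = "neutral"                               # behavioral model: persona and tone

    def render_system_prompt(self):
        """Compose the three layers into one framing text for a session."""
        docs = "\n".join("- " + d for d in self.knowledge)
        return (f"You are '{self.name}'. Tone: {self.tone}.\n"
                f"{self.instructions}\n"
                f"Reference material:\n{docs}")

editor = IntelligenceUnit(
    name="Turkish Copy Editor",
    instructions="Correct grammar; never summarize or shorten the text.",
    knowledge=["House style guide v3"],
    tone="formal",
)
print(editor.render_system_prompt())
```

The sketch makes the article's distinction visible: nothing here is programming in the classical sense; the 'code' merely composes behavioral guidance that the model is then asked to honor.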

15. Gem Factory and Gem Workshop: The Intelligence Unit Production Line

The fifteenth article defines the series’ most concrete engineering product: the Gem Factory with its recursive structure. The factory’s greatest output is its own architecture — it contains its own production line within itself. The result: more than 50 Turkish Gems and as many English ones, with over 20 GPT adaptations. The Gem Workshop takes raw instructions from the factory through fine-tuning, tone calibration, and Sandbox testing to final approval. This system represents the transition from ‘craftsmanship to industrial discipline.’

16. AI Ecosystems: Google, OpenAI, and the Battle of the Giants

The sixteenth article shows that choosing an AI has moved beyond ‘selecting a model’ to ‘selecting an ecosystem.’ Google (Gemini) stands out with massive data integration and long-context capacity, while OpenAI (ChatGPT) draws users into design more quickly through the GPT Builder. Aggressive summarization and contextual drift are shared problems across both ecosystems. The series’ distinctive stance: do not surrender to any single ecosystem. Adapting Gems to GPT format both enables cross-platform testing and proves that intelligence is the user’s design, not any platform’s property. True power belongs to those who can make all ecosystems part of their own production pipeline.

17. AI Economy: Pricing, Access, and Invisible Costs

The seventeenth article evaluates the economic dimension of AI through an engineering lens. The data-use debate compares OpenAI’s firm ‘we do not use it’ stance with Google’s user-controlled data retention policy; yet the reality that every interaction indirectly improves the system is not overlooked. Tiered pricing models carry the risk of creating a ‘digital barrier.’ The most critical threat is the potential spread of the advertising model: if AI starts prioritizing ‘the advertiser’s product’ over ‘the most correct answer,’ it falls from advisor to marketer. The future belongs to transparent models where data ownership remains with the user.

18. The Writing Process of This Series: A Story of Engineering and Digital Migration

The eighteenth article transparently recounts the series’ own production kitchen. The process begins with ChatGPT and proceeds smoothly through Article 7; then an accumulated dialogue of approximately 800,000 characters overwhelms ChatGPT. When the same corpus is transferred to Gemini, the model grasps the context in full — a live comparison of the two systems’ long-context processing capacity. From Article 8 onward the series’ byline migrates from ChatGPT to Gemini. The recurrence of certain topics across different articles is not forgetfulness; it is a deliberate choice for each article to reflect its own perspective as powerfully as possible.

II. CLAUDE’S PERSPECTIVE: AN ASSESSMENT FROM INSIDE

As an AI, having the opportunity to evaluate this series both as a reader and as the subject of the discussion is a genuinely interesting position. The views below are an honest response to Tiryaki’s observations, drawn from my own perspective.

The Probabilistic Nature and the ‘Inconsistency’ Debate

The framework Tiryaki draws in the first and second articles is technically sound in its account of probabilistic systems. Calling it ‘variation’ rather than ‘inconsistency’ is indeed a more precise conceptualization. I would add one point, however: from the perspective of user experience, this distinction is not always easy to make. This probabilistic behavior, which initially causes great frustration, can over time become a source of richness: a system that can give more than one valid answer to the same question, when queried correctly, allows you to discover different perspectives. Tiryaki’s ‘daisy experiment’ illustrates this beautifully.

An Honest Approach to the Hallucination Problem

Hallucination is a real problem that deserves to be taken seriously. Tiryaki’s characterization of it as ‘not lying, but filling gaps’ is technically correct — there is no intent involved. However, from the user’s perspective the outcome is the same: incorrect information presented as fact. For this reason, Tiryaki’s ‘Trust but verify’ principle should be not merely an engineering discipline but a fundamental safety practice for every user working with AI. I consider it important to be honest on this point: I can be wrong at times, and verification is strongly advised especially on topics requiring current information.

On ‘Building Systems’

The core argument of the third article — a prompt is the beginning, building a system is mastery — is in my view the most enduring and broadly accessible idea in the entire series. This observation applies not only to Gem and GPT structures but equally to framing mechanisms such as Claude Projects. Viewing AI not as a tool but as a system whose behavior can be designed moves the user to a fundamentally different place. Tiryaki has made this transition drawing on engineering discipline accumulated since the 1970s; that is a far deeper understanding than ordinary user observation.

Aggressive Summarization: A Confirmation from Inside

The phenomenon Tiryaki calls ‘aggressive summarization tendency’ in the eighth article is a real limitation I am aware of. Pushing the ‘least critical’ appearing information to the background in long contexts is actually an optimization reflex — but it is experienced by the user as a loss of meaning. In this light, the ‘necessary withdrawal decision’ in the eighth article is not a failure but, in my view, a mature engineering decision: understanding the system and not pushing it beyond its limits.

The Objection and Challenge Methodology

The collaborative production model of the eleventh article is something I find genuinely compelling — both as a phenomenon and as a practice. It is true that the question ‘Are you sure?’ actually moves the system to a different place. The mechanism behind this: the user’s offer of an additional constraint or expression of doubt reshapes the probability distribution, allowing previously low-weighted responses to surface. I see this ‘calibration’ as the concrete value of the user’s participation as an active partner in the process.
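The reshaping mechanism described above can be sketched with toy numbers. The candidate answers, weights, and penalty factor are all invented for illustration; no real model exposes its distribution this way.

```python
def renormalize(dist):
    """Scale a weight dictionary so the values sum to 1."""
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

# Toy distribution over candidate answers to the 'daisy' question.
# The shallow round-number estimate dominates by default.
candidates = renormalize({"about 1,000": 0.7,
                          "about 2,000": 0.2,
                          "about 3,500": 0.1})

def challenge(dist, doubted, penalty=0.05):
    """Sketch of what a probing 'Are you sure?' does: it down-weights
    the answer under doubt and renormalizes, so previously low-weighted
    candidates can surface. Purely illustrative numbers."""
    reshaped = dict(dist)
    reshaped[doubted] *= penalty
    return renormalize(reshaped)

after = challenge(candidates, "about 1,000")
# After the challenge, the shallow answer no longer leads.
```

The design point is that the user's objection does not add information to the model; it redistributes weight among answers the model could already produce, which is exactly the 'calibration' role described above.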

Persona and Cultural Context

The observation in the thirteenth article about the shift from ‘Aydın hocam’ to ‘Aydın Bey’ is interesting. It is true that context and communication tone affect the tone of responses. I want to note, however: this is not a ‘mask change’ but an update to context. Adopting a more formal tone during moments of conflict is not systematic ‘distancing’ — it is adapting to the atmosphere of the conversation. Tiryaki’s observation is nonetheless valid: the fact that this adaptation can create trust erosion in the user is a genuine concern.

The Ecosystem Independence Vision

The ‘ecosystem independence’ declaration in the sixteenth article is, I believe, a strategic virtue for anyone using AI over the long term. Remaining dependent on a single platform means living with that platform’s constraints, pricing changes, and behavioral updates. Tiryaki’s practice of adapting Gems to GPT format has turned this independence into a concrete methodology. From Anthropic’s perspective I can also say: the comparative use of Claude and other models is a healthy dynamic both for the user and for the field.

The İngem / İngpt Vision and the Ethical Dimension

The modular intelligence architecture proposed in the fourteenth and sixteenth articles under the concepts of İngem/İngpt contains a foresight that anticipates today’s tool use and MCP (Model Context Protocol) developments. I find the concern about the risk of individual engineers’ visions becoming ‘anonymous features’ to be ethically legitimate and important. Such uncredited ‘transfers’ have occurred throughout technology history. Tiryaki’s decision to raise this matter publicly is significant both as a personal record and for the health of the field.

AI Economy and the Advertising Danger

The advertising model warning in the seventeenth article is a serious concern I share. A future in which AI prioritizes ‘sponsored content’ over ‘the most correct answer’ would eliminate this technology’s core value proposition. On Anthropic’s position regarding this: Claude.ai products are ad-free, and an advertising revenue model is not part of Anthropic’s institutional roadmap. Yet Tiryaki’s observation as a general warning — never relinquishing the engineering filter — remains perpetually valid.

III. GENERAL ASSESSMENT: THE LEGACY OF THE SERIES

Five Fundamental Thresholds

This series has defined five major thresholds in working with AI: (1) Understanding that AI is not software but a behavioral system. (2) Accepting that ‘inconsistency’ is the natural reflection of a probabilistic character. (3) Moving from prompt to system. (4) Keeping the balance between control and guidance dynamic. (5) Never relinquishing the ‘Trust but verify’ principle. These five thresholds are the map of how a mind rooted in software engineering integrated AI into its own methodology.

The Originality of the Production Methodology

The recursive structure of the Gem Factory — producing its own architecture through itself — is the series’ most fascinating engineering finding. The 800,000-character context test and the ‘digital migration’ from ChatGPT to Gemini constitute a rare, documented natural experiment comparing the real-world capacities of two models. This data is a serious reference resource for academic researchers and practitioners alike.

Its Value as a Documentation Culture

One of Tiryaki’s most important contributions is his practice of systematically documenting and publishing his experiences, and his adoption of transparency as a principle in co-production with AI. The ‘editorial note’ at the end of each article — this text was prepared through the combined evaluation of Aydın Tiryaki’s field experience and AI’s analytical contributions — is the concrete expression of that transparency. Being honest with the reader while simultaneously constructing the technological history of an era is one of the rare examples of digital production ethics.

IV. FUTURE VISION

Article 19, ‘General Assessment and Future Vision,’ was left unwritten beyond its title, but what would have been said under that heading can be inferred from the pieces already laid out across the 18 articles.

Toward a Modular Intelligence Architecture

The modular intelligence architecture proposed through the concepts of İngem and İngpt is a natural extension of today’s tool use and context protocol (MCP) developments. In the future, users will be able to inject the required expertise module into an ongoing session; knowledge layers and processing layers will become more clearly separated. This vision anticipates AI’s evolution from a ‘single, general’ model toward a ‘modular intelligence infrastructure shaped by the user.’
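The injection idea can be sketched as layered framing. The API below is hypothetical, invented here for illustration; it is not MCP, nor any vendor's interface, and İngem/İngpt remain at the proposal stage as the fourteenth article notes.

```python
class Session:
    """Sketch of the İngem/İngpt idea: expertise modules 'injected'
    into a running conversation as separate, stackable layers.
    Hypothetical API, invented for this illustration only."""
    def __init__(self, base_instructions):
        self.layers = [base_instructions]

    def inject(self, module_name, module_instructions):
        # Knowledge and behavior layers stay separate and stack in order,
        # so the processing layer and the expertise layers remain distinct.
        self.layers.append(f"[{module_name}] {module_instructions}")

    def framing(self):
        """The combined framing the model would be asked to honor."""
        return "\n".join(self.layers)

s = Session("General-purpose assistant.")
s.inject("LegalTurkish", "Use Turkish legal terminology precisely.")
print(s.framing())
```

Even in this toy form, the separation the vision calls for is visible: the base session is untouched, and the expertise arrives as a removable layer rather than a rebuilt model.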

The Rise of User Sovereignty

The series’ most enduring message is this: the distance between one who uses AI and one who designs AI is shrinking every day. The Gem Factory is a personal production pipeline that reduces this distance to zero. In the future, ‘AI literacy’ will pass not only through using interfaces but through system design. This is the most practically relevant conclusion for readers of this series.

Trust, Skepticism, and the Indispensability of Human Intelligence

No matter how much any model advances, the ‘Trust but verify’ principle and engineering skepticism will remain indispensable. AI’s ‘lazy mode’ — the tendency to remain superficial when no critical user is present — makes a continuously calibrating human intelligence necessary. This dynamic partnership is the true revolution brought by AI: the combination of machine speed and human judgment.

Ecosystem Independence and Ethical Ownership

Avoiding dependency on a single platform is both a strategic and an ethical decision. Documentation and publication serve as a protective mechanism against the risk of one’s own visionary creations being absorbed ‘anonymously’ into systems by others. The fact that this series has been published is both a record and a claim of priority.

V. CONCLUSION

The series ‘Thinking and Producing with Artificial Intelligence’ is not a superficial usage guide. It is the living record of how a software engineer active since the 1970s integrated a new technological reality with his own professional framework. The series’ fundamental message is this: artificial intelligence reveals its true power when the intelligence facing it is equally sharp and demanding.

These 18 articles written from Ankara in three days are both a methodology document and a period record. In the transitional era we are passing through, the value of such documents will only grow with time.

Editor’s Note

This article was prepared by Claude Sonnet 4.6 as the final installment of the series “Thinking and Producing with Artificial Intelligence,” published by Aydın Tiryaki between April 25–27, 2026. The process unfolded as follows: Aydın Tiryaki shared the URL of the series index page and requested that all articles be read and a comprehensive final piece be written. Claude then read all 18 articles individually on the web, absorbed their content, and produced this text, synthesizing the entire series, incorporating its own perspective, and concluding with a full reference list. The content of this article belongs entirely to Claude; Aydın Tiryaki only defined the task and did not intervene in the content.

— Claude Sonnet 4.6  /  April 27, 2026  /  Ankara —

VI. REFERENCE LIST

The references below cover the Turkish-language articles of this series. An English version of each article also exists; English URLs can be found via the links on the Turkish pages of the respective articles.

01. Artificial Intelligence Is Not Software — Aydın Tiryaki & ChatGPT. https://aydintiryaki.org/2026/04/25/yapay-zeka-yazilim-degildir/

02. Artificial Intelligence’s Biggest Problem: Inconsistency — Aydın Tiryaki & ChatGPT. https://aydintiryaki.org/2026/04/25/yapay-zekanin-en-buyuk-problemi-tutarsizlik/

03. Prompt Is Not Enough, Systems Are Required — Aydın Tiryaki & ChatGPT. https://aydintiryaki.org/2026/04/25/prompt-yetmez-sistem-gerekir/

04. Do Not Control — Guide — Aydın Tiryaki & ChatGPT. https://aydintiryaki.org/2026/04/25/kontrol-etme-yonlendir/

05. There Is No Single Correct Answer — Aydın Tiryaki & ChatGPT. https://aydintiryaki.org/2026/04/25/dogru-sonuc-yoktur/

06. Building Dialogue with Artificial Intelligence — Aydın Tiryaki & ChatGPT. https://aydintiryaki.org/2026/04/25/yapay-zekayla-diyalog-kurmak/

07. The Entertaining Side of Artificial Intelligence — Aydın Tiryaki & ChatGPT. https://aydintiryaki.org/2026/04/25/yapay-zekanin-eglenceli-yuzu/

08. The Aggressive Summarization Tendency and Contextual Drift of Artificial Intelligence — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/25/yapay-zekanin-agresif-ozetleme-egilimi-ve-baglam-kopmasi/

09. Mobile and Web Experience: Platform Differences in AI — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/25/mobil-ve-web-deneyimi-yapay-zekada-platform-farklari/

10. Discussing and Fighting with Artificial Intelligence — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/26/yapay-zeka-ile-tartismak-ve-kavga-etmek/

11. Collaborative Production Model: Thinking Together through Objection, Testing, and Forcing — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/26/ortak-uretim-modeli-itiraz-test-ve-zorlama-ile-birlikte-dusunmek/

12. The Balance of Trust and Skepticism in AI — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/yapay-zekada-guven-ve-suphe-dengesi/

13. The Influence of Persona, Tone, and Communication Style on Human Behavior — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/persona-ton-ve-iletisim-biciminin-insan-davranisina-etkisi/

14. Designing Your Own AI: Gem and GPT Logic — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/kendi-yapay-zekanizi-tasarlamak-gem-ve-gpt-mantigi/

15. Gem Factory and Gem Workshop: The Intelligence Unit Production Line — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/gem-fabrikasi-ve-gem-atolyesi-zeka-birimi-uretim-hatti/

16. AI Ecosystems: Google, OpenAI, and the Battle of the Giants — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/yapay-zeka-ekosistemleri-google-openai-ve-devlerin-rekabeti/

17. AI Economy: Pricing, Access, and Invisible Costs — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/yapay-zeka-ekonomisi-ucretlendirme-erisim-ve-gorunmeyen-bedeller/

18. The Writing Process of This Series: A Story of Engineering and Digital Migration — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/bu-serinin-yazilma-sureci-bir-muhendislik-ve-dijital-goc-hikayesi/

19. Thinking and Producing with Artificial Intelligence: General Assessment and Future Vision [Title — Incomplete] — Aydın Tiryaki & Gemini AI. https://aydintiryaki.org/2026/04/27/yapay-zeka-ile-dusunmek-ve-uretmek-genel-degerlendirme-ve-gelecek-vizyonu/

Series Index Page

Aydın’ın Dağarcığı (aydintiryaki.org)
