Aydın Tiryaki

Collaborative Production Model: Thinking Together through Objection, Testing, and Forcing

Thinking and Producing with Artificial Intelligence (Article 11)

The process of objection and systemic forcing evolving from an error-correction act into a shared methodology

Aydın Tiryaki and Gemini AI (April 26, 2026)

Introduction

For most users beginning to work with artificial intelligence, the criterion for success is the system providing the correct answer on the first attempt. However, this expectation is one of the greatest obstacles to realizing the true potential AI offers. Viewing artificial intelligence merely as a “response machine” sidelines the user’s active participation in the process. In my experience, a truly productive process began only when the system’s first output was treated not as a final result, but as raw material.

In this article, we examined how objecting, testing responses, and consciously forcing the system are not a waste of time; on the contrary, they are the fundamental building blocks of the “thinking together” and “collaborative production” model.

Objection: A Method for Filtering and Refining

Objecting to suggestions or responses provided by AI is often perceived as a communication breakdown. In my experience, however, objection became one of the most powerful tools for pushing the system into deeper layers. When I, as a user, told the AI, “No, this is not at the depth I want,” or “This approach contradicts basic logic,” the system abandoned its current probability network and turned to alternatives it had previously ignored.

In this process, objection did more than just correct an error; it ensured that the system focused on the highest quality data within that specific context. This “necessity” brought about by objection allowed the AI to shed superficial generalizations and led to much more accurate and original results. In other words, objection served as a key that carried the dialogue with the system to a higher level.

Forcing and Calibration: Testing via Known Results

One of the most critical engineering techniques I used while working with AI was to test the system with data whose results were already known, and to push it until the expected result was reached. If the AI was misconstructing a known reality, then rather than simply telling it the truth, narrowing its boundaries and forcing it to change its method provided a much more durable solution.

This forcing process allowed me to calibrate the system’s current working mode. Through this method, which we can call an “exercise in finding the right path,” I observed much more clearly where the system got stuck and with which warnings these blockages could be overcome. At this stage, forcing was not a fight, but a process of tuning the system to the correct frequency.
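This “exercise in finding the right path” can be sketched as a simple loop: query the system, compare its answer against a result you already know, and respond with a narrowing objection until the two agree. The `ask_model` function below is a hypothetical stand-in for a real chat interface, with its gradually improving estimates simulated; only the loop structure reflects the method described above.

```python
def ask_model(prompt: str, objections: list[str]) -> int:
    """Hypothetical stand-in for a real chat API call.

    Simulates a system whose estimate improves as objections
    accumulate and narrow its boundaries.
    """
    return 1000 + 500 * len(objections)


def calibrate(prompt: str, known_answer: int, tolerance: int,
              max_rounds: int = 10) -> tuple[int, int]:
    """Force the system toward a known result: object and re-ask
    until the answer falls within tolerance of the known value."""
    objections: list[str] = []
    answer = ask_model(prompt, objections)
    while abs(answer - known_answer) > tolerance and len(objections) < max_rounds:
        # Objection: do not state the truth outright; instead narrow
        # the boundaries and demand a change of method.
        objections.append(
            f"Your estimate {answer} conflicts with the evidence; revise your method."
        )
        answer = ask_model(prompt, objections)
    return answer, len(objections)


result, rounds = calibrate("Count the daisies in this photo.",
                           known_answer=3500, tolerance=100)
print(result, rounds)  # the simulated system converges after several objections
```

In the simulation the estimate starts at 1,000 and each objection moves it closer, mirroring the gradual revision described in the daisy experiment below; with a real model, the objections would of course be written by the user, not generated from a template.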

Proof of Methodology: Two Different AIs and the Daisy Example

We saw the cross-platform success of this methodology very clearly in an experiment involving counting dense daisies in a photograph. The experiment was not just an object-counting task, but a laboratory test measuring how each system’s behavioral reflexes respond to objection.

In my initial work with Gemini, the system first provided an estimate of around 1,000. However, instead of accepting this response, I presented my objections regarding perspective and density differences. When I forced the system to change its point of view, this figure gradually increased to 3,500.

I applied the same test to ChatGPT. Interestingly, it too began with a similarly superficial estimate of around 1,000. When I activated the same objection mechanism and pushed the system, it likewise revised its result to over 3,000. This showed that the “objection and forcing” methodology is a universal approach that directly improves the processing quality of AI, regardless of the vendor.

The Cognitive Power of the “Are You Sure?” Question

The simple question “Are you sure about this?” is one of the most intriguing parts of the collaborative production model established with AI. This question triggered the system to re-scan its entire logic chain from top to bottom. In many cases, I witnessed the system realizing its mistake after this question and either correcting itself or placing its defended view on much firmer ground. The real value here is the user inviting the system to an “internal audit.” This invitation disciplined the probabilistic nature of AI, turning accidental successes into conscious accuracy.
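The “internal audit” invitation amounts to one extra round trip: ask, challenge with “Are you sure?”, and keep whichever answer the system commits to after re-checking. The sketch below simulates this with a hypothetical `ask_model` stand-in (no real API is assumed); the re-check structure, not the stub, is the point.

```python
def ask_model(messages: list[str]) -> str:
    """Hypothetical stand-in for a chat API. Simulates a system that
    re-scans its logic chain when challenged and corrects itself."""
    if any("Are you sure" in m for m in messages):
        return "2 + 2 = 4"   # corrected after the internal audit
    return "2 + 2 = 5"       # accidental first answer


def audited_answer(question: str) -> str:
    """Ask once, then invite an internal audit with 'Are you sure?'."""
    messages = [question]
    first = ask_model(messages)
    messages += [first, "Are you sure about this?"]
    # The system either corrects itself or re-commits on firmer ground;
    # either way, the second answer is the disciplined one.
    return ask_model(messages)


print(audited_answer("What is 2 + 2?"))  # prints the corrected "2 + 2 = 4"
```

With a real model the second answer is not guaranteed to improve, which is why the article pairs this question with the testing-via-known-results technique above.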

Conclusion

Producing together with artificial intelligence is not about leaving the system to its own devices; it is about questioning, testing, and pushing it at every step until the most accurate point is reached. Objection is the fuel of this process, while testing and forcing are its compass. In a true collaborative production model, the user is not a passive asker of questions, but an engineer and a manager who calibrates the process at every moment. It should be remembered that AI can only reveal its true power when the intelligence facing it is as sharp and demanding as itself.


Final Note

This article has been prepared through the combination of Aydın Tiryaki’s practical experience and Gemini AI’s analytical contributions. The goal is to position artificial intelligence not merely as a tool, but as a new engineering paradigm.


This article is part of the series “Thinking and Producing with Artificial Intelligence.”

