Aydın Tiryaki and Gemini AI (2026)
While artificial intelligence systems can process vast amounts of data in seconds, speed does not guarantee logical accuracy. In symbolically rich domains such as art and history, AI can blur the distinction between “popular narratives” and “mathematical realities.” This article presents a technical analysis of AI errors and the necessity of human oversight, based on a dialogue regarding the films Mona Lisa Smile and A Very Long Engagement (Un long dimanche de fiançailles).
1. Contextual Matching and Data Verification
In the initial stage of the process, a viewer memory supplied by the user (a trailer-film match) was verified by the AI within seconds. The release of Mona Lisa Smile in late 2003 and of A Very Long Engagement (Un long dimanche de fiançailles) in 2004 are chronologically consistent with a trailer appearance. Here the AI successfully cross-referenced encyclopedic data sets to achieve a correct “time-space” alignment.
2. Analysis of Systemic Errors (Hallucination and Logical Vulnerability)
In the subsequent stages of the dialogue, the AI failed in three fundamental ways while processing user-provided input:
A. Misgeneralization of User Input
The most critical error of the dialogue was triggered when the user recalled a birth date of January 1, 1900, as an interesting detail of the film. The AI did not generate this specific datum itself; however, it accepted the information, offered as a “birthday detail,” without logical filtering and incorrectly generalized it to both lead characters. Instead of verifying the input, the AI fell for the literary allure of the “soulmate” theme and reinforced the incorrect information as fact.
B. Military Age and Lack of Reasoning
The most prominent logical collapse occurred in military chronology and age calculation. A character born on the date recalled by the user (January 1, 1900) would have been a minor, aged 14 to 17, during the most critical phases of World War I (1914-1917). The AI failed to perform a simple subtraction that takes only seconds, never activated a military-logic filter (such an age is insufficient for frontline experience), and did not question the mathematical implications of the user’s input.
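The subtraction the AI skipped can be sketched in a few lines of Python. The minimum frontline age and the mid-war reference date used here are illustrative assumptions, since actual conscription rules varied by country and year:

```python
from datetime import date

# Birth date recalled by the user in the dialogue.
BIRTH_DATE = date(1900, 1, 1)

# Assumed minimum age for frontline service (illustrative only;
# real WWI conscription ages differed by country and year).
MIN_FRONTLINE_AGE = 18

def age_on(birth: date, on: date) -> int:
    """Return full years elapsed between `birth` and `on`."""
    years = on.year - birth.year
    # Subtract one year if the birthday has not yet occurred that year.
    if (on.month, on.day) < (birth.month, birth.day):
        years -= 1
    return years

for year in range(1914, 1918):
    # Arbitrary mid-war reference date within each year.
    a = age_on(BIRTH_DATE, date(year, 8, 1))
    print(f"{year}: age {a}, eligible: {a >= MIN_FRONTLINE_AGE}")
    # 1914: age 14 ... 1917: age 17 -> never eligible under this assumption
```

Under these assumptions, the character would fail the eligibility check in every war year, which is precisely the filter the dialogue shows the AI never applied.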
C. Chronological Calendar Fallacy
The error regarding the start of the century showed that AI can prefer widespread societal misconceptions over scientific truths. The Gregorian calendar has no year 0; therefore, the 20th century begins not on January 1, 1900, but on January 1, 1901. The AI overlooked this basic mathematical fact, which holds independently of vague temporal phrases such as “morning,” and drifted chronologically during the dialogue.
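The century rule above reduces to one line of integer arithmetic; the function name below is my own, chosen only for this illustration:

```python
def century_of(year: int) -> int:
    """Return the Gregorian century of a (positive) year.

    Because there is no year 0, the Nth century runs from year
    100*(N-1)+1 through year 100*N, so century boundaries fall
    on years ending in 01, not 00.
    """
    if year < 1:
        raise ValueError("Only years >= 1 are handled here (no year 0).")
    return (year - 1) // 100 + 1

print(century_of(1900))  # 19 -> 1900 is the LAST year of the 19th century
print(century_of(1901))  # 20 -> the 20th century begins here
print(century_of(2000))  # 20 -> likewise, 2000 closes the 20th century
```

The `(year - 1) // 100 + 1` form makes the off-by-one visible: shifting by the missing year 0 is exactly the step the “January 1, 1900” misconception skips.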
Conclusion: The Necessity of Analytical Oversight
This dialogue reveals why the analytical filter and lived experience of the human mind remain indispensable despite the speed of artificial intelligence. A writer might characterize January 1, 1900, as the start of a century for aesthetic reasons within the scope of “creative freedom”; however, an AI confirming this data without passing it through mathematical and historical filters is a systemic flaw. Unchecked data is not knowledge; data only transforms into knowledge through the oversight of an analytical mind.
A Note on Methods and Tools: All observations, ideas, and solution proposals in this study are the author’s own. AI was utilized as an information source for researching and compiling relevant topics strictly based on the author’s inquiries, requests, and directions; additionally, it provided writing assistance during the drafting process. (The research-based compilation and English writing process of this text were supported by AI as a specialized assistant.)
