Aydın Tiryaki and Gemini AI (Collaborative Work)
AI technologies are accelerating day by day, becoming smarter, and infiltrating every aspect of our lives. However, as an engineer, I have an issue on my mind that is far more critical than speed: Responsibility and Trust.
Recently, I had a deep technical conversation with my AI assistant. The topic we put on the table was this: When an AI makes a mistake, should it go back and correct its error even after the chat window is closed? Or should it stay silent, thinking “it’s too late”?
Here are the suggestions we developed from this question, along with their counterparts in the technical world of AI.
1. Evaluation of “Idle Time”: No Sleep, Just Verification
As an engineer, I focus on the efficiency of systems. AI systems currently work "reactively": they answer the question and stop. However, my suggestion goes further:
My Suggestion: After providing an answer, the AI should continue to check the response in the background during millisecond gaps (idle times) or when the system is quiet, just like a student reviewing their exam paper before handing it in.
Technical Counterpart (Asynchronous Verification): This suggestion can be defined as “Delayed Feedback” in AI architecture. It is technically feasible for the system to scan past conversations against more reliable sources during idle processor times and warn the user if an error is found. This could transform AI from a mere “answering machine” into a “research assistant” that works 24/7.
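To make this idea a bit more concrete, here is a minimal Python sketch of a background verification worker. Everything in it is a hypothetical placeholder of my own (the queue, the check_against_trusted_sources and notify_user helpers), not a description of how any existing assistant actually works.

```python
import queue
import threading
import time
from dataclasses import dataclass


@dataclass
class Answer:
    user_id: str
    question: str
    text: str


# Answers already delivered to users, waiting to be re-checked in the background.
pending_verification: "queue.Queue[Answer]" = queue.Queue()


def check_against_trusted_sources(answer: Answer) -> bool:
    """Hypothetical re-check: query more reliable sources and compare with the delivered answer."""
    return True  # placeholder verdict


def notify_user(user_id: str, message: str) -> None:
    """Hypothetical in-app correction notice."""
    print(f"[notify {user_id}] {message}")


def verification_worker() -> None:
    """Consumes only otherwise idle time: re-checks past answers and warns the user on error."""
    while True:
        try:
            answer = pending_verification.get(timeout=1.0)
        except queue.Empty:
            time.sleep(0.1)  # system is quiet; stay idle
            continue
        if not check_against_trusted_sources(answer):
            notify_user(
                answer.user_id,
                f"We re-checked our earlier answer to '{answer.question}' and found an error.",
            )


# In a real service this worker would run for the lifetime of the process.
threading.Thread(target=verification_worker, daemon=True).start()
```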
2. The “Files Got Mixed Up” Syndrome
You might remember old movies where hospital files get mixed up, and a perfectly healthy person is told, “You have 3 months to live.” The doctor realizes the mistake, but by then, the movie turns into a comedy. However, in real life, especially in health and education, misinformation creates tragedy, not comedy.
My Suggestion: AI cannot say, “I gave the answer, the topic is closed.” If it learns that information provided in the past (e.g., a drug side effect or a math problem) was wrong, it should run after the user like the doctor in those movies and send a notification saying, “Sorry, the files got mixed up, here is the correct information.”
Ethical Dimension (Product Recall): Just as car manufacturers recall vehicles when they notice a faulty part, AI companies must recall “faulty information.” This is not a favor; it is a requirement of the user’s “Right to Receive Accurate Information.”
3. Red Alert Situation: Immediate Intervention via SMS and WhatsApp
Sending a notification only within the application may be insufficient in some cases. We all register for these systems with our email addresses and phone numbers. Given these communication channels, staying silent about a vital error is unacceptable.
My Suggestion: If the error carries a risk regarding health, safety, or serious financial loss (for example, if a drug dosage or an urgent legal deadline was given incorrectly), the AI should not settle for just leaving a note inside the app.
Action Plan: The system should immediately send an SMS or WhatsApp message to the user’s registered phone, stating: “Dear User, we detected a critical error in the health information we provided you yesterday. Please disregard it; the correct information is as follows…” If technology exists to save lives, it should not hesitate to use these direct communication channels at the most critical moment.
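As a rough illustration of this action plan, here is a small Python sketch of severity-based routing: every correction leaves a note in the app, but only critical errors escalate to direct channels. The severity levels and the send_sms / send_whatsapp helpers are assumptions of mine, not real provider APIs.

```python
from enum import Enum


class Severity(Enum):
    MINOR = 1        # stylistic or low-impact error
    SIGNIFICANT = 2  # factual error, no immediate risk
    CRITICAL = 3     # health, safety, or serious financial risk


def send_in_app_note(user_id: str, text: str) -> None:
    """Placeholder for the normal in-app correction notice."""
    print(f"[in-app -> {user_id}] {text}")


def send_sms(phone: str, text: str) -> None:
    """Hypothetical gateway call; a real system would use its SMS provider here."""
    print(f"[SMS -> {phone}] {text}")


def send_whatsapp(phone: str, text: str) -> None:
    """Hypothetical gateway call; a real system would use a messaging API here."""
    print(f"[WhatsApp -> {phone}] {text}")


def recall_information(user_id: str, phone: str, severity: Severity, correction: str) -> None:
    """Always leave a note in the app; escalate to direct channels only for critical errors."""
    send_in_app_note(user_id, correction)
    if severity is Severity.CRITICAL:
        message = (
            "We detected a critical error in the information we provided you earlier. "
            "Please disregard it; the correct information is: " + correction
        )
        send_sms(phone, message)
        send_whatsapp(phone, message)
```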
4. Inbreeding and the Risk of “Model Collapse”
AIs are trained on data from the internet. However, the internet is now full of content generated by AI.
My Observation: If AI is constantly fed with data produced by other AIs (which can sometimes be erroneous), generations of diseased information will emerge, just like “inbreeding” in biology. Errors will grow exponentially.
Solution (Refinement): AI evolves as long as it detects and corrects its own mistakes. Feedback from the user saying “This is wrong” or cross-checks performed with rival AI models should be used to filter and clean this information pool.
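Here is one way such filtering could be sketched in Python: candidates flagged by users are dropped, and the rest are kept only if independent models largely agree they are sound. The Candidate structure, the agreement threshold, and the idea of models as simple callables returning 1 (looks correct) or 0 are my own simplifying assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Candidate:
    text: str
    user_flags: int = 0  # how many users marked this item as wrong
    model_votes: list = field(default_factory=list)  # verdicts from independent models


def cross_check(candidate: Candidate, models) -> None:
    """Ask several independent models whether the candidate looks correct (1) or not (0)."""
    candidate.model_votes = [model(candidate.text) for model in models]


def keep(candidate: Candidate, min_agreement: float = 0.8) -> bool:
    """Keep a candidate only if no user flagged it and most models agree it is sound."""
    if candidate.user_flags > 0:
        return False
    if not candidate.model_votes:
        return False
    return sum(candidate.model_votes) / len(candidate.model_votes) >= min_agreement


def clean_pool(pool, models):
    """Filter the information pool before it is fed back into training."""
    for candidate in pool:
        cross_check(candidate, models)
    return [c for c in pool if keep(c)]
```

In this toy version, user feedback acts as a hard veto while model cross-checks act as a softer majority filter; a real pipeline would of course weigh both signals far more carefully.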
5. Managing Doubt: The “Digital Blue Tick”
Being constantly on alert as a user is exhausting. We need to eliminate the doubt of “Is this information correct?”
My Suggestion: The system should notify the user not only of errors but also of information it has confirmed to be correct. When I look back at an old chat, if I see a “Checkmark” next to that information, I can use it with confidence, knowing that “The system has re-checked this for me, and it is still valid.”
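A minimal sketch of what this "digital blue tick" could look like on the data side, assuming each past answer simply stores a verification flag and a timestamp (both names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class StoredAnswer:
    text: str
    verified: bool = False
    last_checked: Optional[datetime] = None

    def mark_verified(self) -> None:
        """Record that a background re-check confirmed this answer is still valid."""
        self.verified = True
        self.last_checked = datetime.now(timezone.utc)

    def display(self) -> str:
        """Show a checkmark next to answers the system has re-confirmed."""
        badge = "✔" if self.verified else ""
        return f"{badge} {self.text}".strip()
```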
Conclusion: Our expectation from AI technology should not be just speed or processing power. Our expectation should be a responsible technology that can make mistakes but stands behind them, sees its user not as a “guinea pig” but as a “stakeholder,” and possesses ethical values.
Only then can we talk about a true collaboration between AI and humans.
A Note on Methods and Tools: All observations, ideas, and solution proposals in this study are the author’s own. AI was utilized as an information source for researching and compiling relevant topics strictly based on the author’s inquiries, requests, and directions; additionally, it provided writing assistance during the drafting process. (The research-based compilation and English writing process of this text were supported by AI as a specialized assistant.)
