Experiment: LLM Misunderstanding Time
Some time ago, I noticed that the AI workflow in my app was no longer behaving as expected. This happened repeatedly, even with the same workflow that had worked before. Since I hadn't changed anything on my end, my assumption is that the capabilities of OpenAI's gpt-4.1-nano model changed. Its output often failed to meet expectations: in particular, it returned the wrong time, which is a major concern for my application. So, for the time being, I moved to gpt-5-nano. It is still relatively cheap, it works, and it is smarter, but a new problem appeared. ...


