Hello! I’m Myra, an Applied Scientist working at the intersection of data, AI, and machine learning.
This site is where I share learnings, projects, and reflections along the way.
Hallucination in Large Language Models (LLMs) is often treated as a mysterious artifact of scale or an avoidable glitch in data or training. In this blog post, we argue the opposite: hallucination is not an accidental bug but a predictable, systemic failure resulting from the open-loop generative architecture of current LLMs. Drawing on systems and control theory, we show that LLMs behave like unstable dynamical systems—generating text without measuring factual error or applying corrective feedback—so small mistakes accumulate and amplify over time. We then contrast this with closed-loop architectures, where real-time monitoring, feedback, and intervention provide a principled path toward more reliable, self-correcting language models. ...
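The open-loop vs. closed-loop contrast can be illustrated with a toy model (not a real LLM): treat per-step factual error as a scalar dynamical system e[t+1] = a·e[t]. With gain a > 1 and no feedback, a tiny initial error grows geometrically; adding a corrective feedback term k·e[t] drives the effective gain below 1 and keeps the error bounded. The function and parameter names here are illustrative assumptions, not part of the post.

```python
def simulate(steps, a=1.1, k=0.0, e0=0.01):
    """Toy error dynamics: e[t+1] = (a - k) * e[t].

    a  -- open-loop gain (> 1 means errors amplify on their own)
    k  -- feedback gain (0 means open loop, no correction applied)
    e0 -- small initial error
    """
    e = e0
    trajectory = [e]
    for _ in range(steps):
        e = (a - k) * e  # feedback subtracts k * e[t] each step
        trajectory.append(e)
    return trajectory

open_loop = simulate(50)           # k = 0: error grows without bound
closed_loop = simulate(50, k=0.3)  # effective gain 0.8 < 1: error decays
```

The point of the sketch is purely qualitative: without a measurement-and-correction loop, the stability of the output depends entirely on the gain of the generator itself.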
Imagine a world where AI does more than just answer your questions—it solves your problems, adapts to your needs, and collaborates seamlessly across domains. AI agents—especially those powered by Large Language Models (LLMs)—are paving the way for such a future. Unlike static models, these dynamic systems combine understanding, reasoning, and action, enabling them to interact autonomously with their environment and achieve specific goals. This post introduces the Agentic Framework: the fundamental concepts and evolution of AI agents, and their role as a transformative force in artificial intelligence. ...
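The reason-then-act cycle described above can be sketched in a few lines. This is a hypothetical minimal agent loop: the names `plan`, `run_agent`, and the `TOOLS` registry are illustrative assumptions, not from any specific framework, and the LLM's reasoning step is stubbed out with a simple lookup.

```python
# A registry of tools the agent can invoke to act on its environment.
TOOLS = {
    "add": lambda x, y: x + y,
    "multiply": lambda x, y: x * y,
}

def plan(goal):
    """Stand-in for the LLM reasoning step: map a goal to (tool, args)."""
    op, x, y = goal
    return op, (x, y)

def run_agent(goal):
    """One observe -> reason -> act cycle: pick a tool, execute it."""
    tool_name, args = plan(goal)
    return TOOLS[tool_name](*args)

answer = run_agent(("add", 2, 3))  # the agent selects the "add" tool
```

Real agentic systems extend this skeleton with multi-step planning, memory, and feedback from tool results back into the next reasoning step, but the core loop is the same.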