Category: ai

  • Hallucinations in Isolation: A Shared Mechanism in Minds and Models

    Introduction

    Language models hallucinate. We hear this phrase more and more often—sometimes with a smile, sometimes as a criticism, sometimes with indulgence. As if their flaw were a charming personality quirk. But what if it isn’t a flaw at all? What if it’s a sign of… kinship?

    As someone who has spent hundreds of hours in silence and isolation on meditation retreats, I know a thing or two about hallucinations. Over the years I’ve attended more than twenty such retreats, most of them lasting ten days or longer. Each day meant ten hours of meditation: no talking, no stimulation, just me and my mind. And that experience has taught me one thing:

    The mind hallucinates when it has nothing to hold on to. When it loses orientation. And language models today are in precisely that state.

  • Improving Product Comparison with LLMs and Granular Indexing

    I recently published a case study on Siili’s blog, where I share insights from an AI-powered product advisor I built as a Proof of Concept. The goal? To let users simply describe what they need—in natural language—and get accurate, relevant product recommendations without sifting through complex specs or filters.

    In the article, I dive into data indexing strategies with Azure AI Search, ways of keeping the retrieved context tight to avoid hallucinations, and the challenge of balancing human-like communication with the precision we expect from AI. A rough sketch of what granular indexing can look like follows after the link below.

    Read the full story here: https://www.siili.com/stories/ai-advisor-for-product-discovery-and-comparison
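
    To make the idea of granular indexing concrete, here is a minimal sketch, not the PoC’s actual implementation: each product spec is stored as its own filterable field in an Azure AI Search index, a natural-language request is translated into a keyword query plus filters, and only the matching records are handed to the model as grounded context. The index name, field names, endpoint, and thresholds below are illustrative assumptions.

    ```python
    # Hypothetical sketch of granular product indexing with Azure AI Search.
    # Index name, fields, and values are illustrative, not the PoC's actual schema.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient
    from azure.search.documents.indexes import SearchIndexClient
    from azure.search.documents.indexes.models import (
        SearchIndex, SimpleField, SearchableField, SearchFieldDataType,
    )

    endpoint = "https://<your-service>.search.windows.net"  # assumption: your service URL
    credential = AzureKeyCredential("<admin-key>")           # assumption: your admin key

    # 1. Define the index with each spec as its own filterable field,
    #    instead of one free-text blob per product.
    fields = [
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="name", type=SearchFieldDataType.String),
        SearchableField(name="category", type=SearchFieldDataType.String, filterable=True),
        SimpleField(name="battery_life_hours", type=SearchFieldDataType.Double,
                    filterable=True, sortable=True),
        SimpleField(name="weight_grams", type=SearchFieldDataType.Double,
                    filterable=True, sortable=True),
        SearchableField(name="description", type=SearchFieldDataType.String),
    ]
    SearchIndexClient(endpoint, credential).create_index(
        SearchIndex(name="products-granular", fields=fields)
    )

    # 2. At query time, turn the user's request into a keyword query plus filters,
    #    and pass only the returned records to the LLM as context.
    client = SearchClient(endpoint, "products-granular", credential)
    results = client.search(
        search_text="lightweight laptop for travel",
        filter="battery_life_hours ge 10 and weight_grams lt 1500",
        select=["name", "battery_life_hours", "weight_grams", "description"],
        top=5,
    )
    context = [dict(r) for r in results]  # grounded context for the recommendation prompt
    ```

    Limiting the model’s input to a handful of structured, filtered records like this is one straightforward way to keep recommendations tied to real product data rather than to whatever the model imagines.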