Ever read a headline and thought, “Something feels off, but I can’t explain why?”

I built CLARi (Clear, Logical, Accurate, Reliable Insight), a custom GPT designed not just to verify facts—but to train your instincts for clarity, logic, and truth.

Instead of arguing back, CLARi shows you how claims:

  • Distort your perception (even if technically true)

  • Trigger emotions to override logic

  • Frame reality in a way that feels right—but misleads

She uses tools like:

🧭 Clarity Compass – to break down vague claims

🧠 Emotional Persuasion Detector – to spot manipulative emotional framing

🧩 Context Expansion – to expose what’s being left out

Whether it’s news, social media, or “alternative facts,” CLARi doesn’t just answer—she trains you to see through distortion.

Try asking her something polarizing like:

👉 “Was 5G ever proven unsafe?”

👉 “Is crime actually going up, or is it just political noise?”

🔗 Link to CLARi

She’s open to all with this link, designed to challenge bias, dissect manipulation, and help you think more clearly than ever.

Let me know what you think! Thanks Lemmy FAM!

  • Bluesheep@lemmy.world · 4 days ago

    Fancy sharing the details of your prompt? I might like to recreate it in a corporate environment.

    No sweat if you’re keeping it closed.

    • CitizenBane@lemmy.world (OP) · 3 days ago

      It will be open-sourced eventually. I need to figure out how to properly replicate the responses on other LLMs, whether local or not. I’m seeking help with this.