How We Work

At The Lama Nest we apply the LAMA Method™, our in-house system for analyzing products and their reviews with rigor and clarity. We combine AI-assisted automation with human review. Our goal: to help you make better decisions with less noise.


Our Principles

  • Usefulness over hype: technology is a tool; judgment is human.
  • Transparency: we explain the methodology, publish the update date, and cite sources when appropriate.
  • Rigor & simplicity: robust metrics presented in plain language, with actionable recommendations.
  • Independence: if there is any affiliate relationship, we disclose it; our conclusions are not for sale.

The Process, Step by Step

  • 1) Scope & categories — We define the product, its variants, and the evaluation categories (e.g., perceived quality, ease of use, support).
  • 2) Data collection & preparation — We gather public reviews from different markets (when available), remove duplicates, and detect noise (spam, bots, copies).
  • 3) Normalization & aggregation — We harmonize scales, languages, and formats, and we balance review weights by period and market to avoid bias (see the weighting sketch after this list).
  • 4) LAMA metrics computation — We compute the LAMA core metrics (described below) and generate visualizations for quick reading.
  • 5) Human review — We validate outliers and cross-check against spec sheets, manufacturer support channels, and, when applicable, regulatory documentation.
  • 6) Publication & monitoring — We publish with an update date and re-evaluate if reviews or the product change substantially.
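
As a rough illustration of the balancing idea in step 3 (a sketch only, not our production pipeline), the snippet below weights each review so that every market-and-period cell contributes equally to the aggregate score; the record fields and numbers are invented for the example.

    from collections import Counter

    # Hypothetical review records; field names and values are illustrative only.
    reviews = [
        {"stars": 5, "market": "US", "quarter": "2024Q4"},
        {"stars": 1, "market": "DE", "quarter": "2024Q4"},
        {"stars": 4, "market": "DE", "quarter": "2025Q1"},
    ]

    # Weight each review so every (market, quarter) cell contributes equally,
    # instead of letting the largest market or a recent spike dominate.
    cell_counts = Counter((r["market"], r["quarter"]) for r in reviews)
    for r in reviews:
        r["weight"] = 1.0 / cell_counts[(r["market"], r["quarter"])]

    total_weight = sum(r["weight"] for r in reviews)
    weighted_avg = sum(r["stars"] * r["weight"] for r in reviews) / total_weight
    print(f"Balanced average rating: {weighted_avg:.2f}")

The same per-review weights can then feed the metric calculations described in the next section.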

LAMA Method™ Metrics

For each metric we explain what it measures, how it’s calculated, and how to read it.

1) Happy Llamas

What it measures: the minimum percentage of good reviews (4–5★) we can guarantee with 95% confidence.

How it’s calculated: a Wilson score interval on the share of 4–5★ reviews; we report the lower bound (LB95).

How to read it: higher = better. Indicates a solid base of satisfied customers.
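
The Wilson score interval itself is a standard statistical formula, so a minimal sketch can show how a bound like LB95 falls out of it; the function name and the review counts below are ours, invented for illustration, not part of the published method.

    import math

    def wilson_bounds(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score interval for a binomial proportion k/n (z = 1.96 for ~95%)."""
        if n == 0:
            return (0.0, 1.0)  # no data: maximum uncertainty
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return (max(0.0, center - margin), min(1.0, center + margin))

    # Example: 180 of 220 reviews are 4-5 stars.
    lb95, _ = wilson_bounds(180, 220)
    print(f"Happy Llamas >= {lb95:.1%}")  # lower bound on the true 4-5 star share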

2) Grumpy Llamas

What it measures: the maximum percentage of bad reviews (1★) that the data allows for, with 95% confidence.

How it’s calculated: a Wilson score interval on the share of 1★ reviews; we report the upper bound (UB95).

How to read it: lower = better. Signals a low likelihood of mass “haters.”
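
Continuing the same hypothetical sketch, the upper bound of the interval gives a Grumpy Llamas-style ceiling on the 1★ share (again with invented counts):

    # Reusing wilson_bounds from the sketch above: 12 of 220 reviews are 1 star.
    _, ub95 = wilson_bounds(12, 220)
    print(f"Grumpy Llamas <= {ub95:.1%}")  # upper bound on the true 1-star share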

3) Disappointment Risk

What it measures: the practical risk that the product causes problems for technical or regulatory reasons.

How it’s calculated: we detect risk patterns in descriptions and reviews and summarize them as LOW / MEDIUM / HIGH.

How to read it: the higher the risk, the more caution is warranted. With few reviews, the level may rise purely from uncertainty (the model errs on the side of caution).
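
Purely as an illustration of what rule-based risk flagging can look like (the real patterns, thresholds, and sample-size rules are not published), here is a minimal sketch; every pattern and cut-off below is invented for the example.

    import re

    # Illustrative risk patterns only; the actual lists are a trade secret.
    RISK_PATTERNS = [
        r"stopped working", r"does not charge", r"overheat",
        r"recall", r"not certified", r"refund denied",
    ]

    def disappointment_risk(review_texts: list[str]) -> str:
        """Classify practical risk as LOW / MEDIUM / HIGH from review text."""
        n = len(review_texts)
        hits = sum(
            1 for text in review_texts
            if any(re.search(p, text.lower()) for p in RISK_PATTERNS)
        )
        rate = hits / n if n else 1.0   # no reviews: maximum uncertainty
        if n < 30:                      # few reviews: be cautious, raise the floor
            rate = max(rate, 0.10)
        if rate >= 0.15:
            return "HIGH"
        if rate >= 0.05:
            return "MEDIUM"
        return "LOW"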

How We Interpret Results

  • Quick read — High Happy Llamas + Low Grumpy Llamas = confidence in overall satisfaction. High consistency = cross-country alignment. Low disappointment risk = little post-purchase friction. (A toy example follows this list.)
  • Context matters — New products or small datasets may show wide intervals; we disclose that and update when volume permits.
  • Practical conclusions — We close each analysis with clear recommendations based solely on the objective information collected (who it’s for, pros/cons, what to check before buying, etc.).
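
To make the quick read concrete, here is a toy decision rule that combines the metrics above; the thresholds are invented for illustration and are not the ones we apply internally.

    def quick_read(happy_lb95: float, grumpy_ub95: float, risk: str) -> str:
        """Toy interpretation of the LAMA metrics; thresholds are illustrative."""
        if happy_lb95 >= 0.80 and grumpy_ub95 <= 0.05 and risk == "LOW":
            return "High confidence in overall satisfaction"
        if happy_lb95 >= 0.60 and risk != "HIGH":
            return "Generally solid; check the caveats in the analysis"
        return "Caution: weak or uncertain evidence"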

Quality Controls

  • Temporal sampling — We monitor “spikes” (campaigns, new versions) that can skew perception.
  • Noise detection — We down-weight or exclude reviews that show signs of inauthentic or coordinated activity.
  • Traceability — We log dates, sources, and relevant product changes.
  • Editorial review — A person validates the final judgment and the explanatory text.

What We Publish (and What We Don’t)

  • We publish: conceptual methodology, metric definitions, visualizations, and conclusions.
  • We don’t publish: prompts, code, additional internal thresholds, or detailed pipelines (trade secret).

Contact & Suggestions

Are you a manufacturer who wants to provide technical documentation or clarifications? Are you a user who has noticed a change in a product or its reviews? Write to us. We improve the LAMA Method™ with you.