jank.ai
UX perspective · simulated user feedback 0.15 credits × persona count

Persona Feedback.

Four to six personas — diverse in tech proficiency, demographic, and goal — independently rate the page across five dimensions and write their reactions in first-person. Surfaces friction a single reviewer can't see: a power user spots inefficiency, a cautious first-timer spots missing trust signals, a thumb-driven mobile user spots layout problems.

What it does · runPersonaFeedback()

Step one: the LLM generates N personas (default 4, tunable to 8) appropriate for the page. Each persona has a name, role, background, list of goals, and list of concerns. Personas are written once per audit so they're consistent across the report.
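The persona object described above can be sketched as a type. This is illustrative only: the field names follow this page's wording (name, role, background, goals, concerns), not a published schema, and the clamping helper is an assumption based on the stated 4–8 range.

```typescript
// Hypothetical shape of one generated persona — field names are taken
// from this page's description; the actual report schema may differ.
interface Persona {
  name: string;        // e.g. "Kelly — Price-conscious shopper"
  role: string;        // who they are and why they're on the page
  background: string;  // short bio, written once per audit
  goals: string[];     // what they're trying to accomplish
  concerns: string[];  // what makes them hesitate
}

// Assumption: requested persona counts are clamped into the 4–8 range
// this page describes (default 4, tunable to 8).
function validatePersonaCount(count: number): number {
  return Math.min(8, Math.max(4, count));
}
```

Because personas are generated once per audit, the same `Persona` objects would be reused for every page in the report, which is what keeps the feedback consistent.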

Step two: for each persona, the LLM looks at the screenshot from that persona's viewpoint and returns a structured rating:

Each of the five dimensions — usability, visual design, content, trust, and accessibility — gets a 5-star score. Alongside the scores, each persona writes a 2–3 sentence comment, a specific-issues list (what bugged them), and a specific-praise list (what worked for them). The overall persona rating is the average of the five dimension scores.
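The averaging step above can be sketched directly. The `PersonaRating` shape mirrors the sample report entry further down this page; the rounding to one decimal is an assumption to match the `2.7` shown there.

```typescript
// The five rated dimensions, as they appear in the sample report entry.
type Dimension = "usability" | "visualDesign" | "content" | "trust" | "accessibility";

interface PersonaRating {
  ratings: Record<Dimension, number>; // 0–5, half-star steps
  comments: string;                   // 2–3 sentence first-person reaction
  specificIssues: string[];           // what bugged them
  specificPraise: string[];           // what worked for them
}

// Overall persona rating = arithmetic mean of the five dimension scores,
// kept to one decimal (assumption, matching the 2.7 in the sample below).
function overallStars(r: PersonaRating): number {
  const scores = Object.values(r.ratings);
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
  return Math.round(mean * 10) / 10;
}
```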

Who shows up in a typical run

Personas are generated to match the page. Common archetypes carry example names like Maria, Sam, Kelly, Greg, Diana, and Robin; the LLM picks fresh names for each audit.

The mix adapts to the site. A B2C consumer page tends to draw Maria + Sam + Kelly; a B2B SaaS dashboard draws Greg + Diana + Robin. Add a customPrompt to steer ("focus on enterprise procurement personas") or to swap the persona set for your audience.

What it surfaces

Coverage

Breadth: 5 dimensions × 4–8 personas = 20–40 independent UX signals per page
Depth: per-persona ratings + comments + issue lists + praise lists, in first-person voice
Adapts to: the page itself. Pricing pages get a price-sensitive persona; docs sites get a developer persona; consumer landing pages get a casual visitor.
Customisable: pass personas.customPrompt to steer ("test as enterprise IT buyer") or set count: 8 for broader coverage
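The two knobs mentioned above could be passed together as an options object — a sketch only, with option names taken from this page's wording (`count`, `customPrompt`) rather than a published API:

```typescript
// Hypothetical options shape — `count` and `customPrompt` are named
// after this page's wording; the real API surface may differ.
interface PersonaOptions {
  count?: number;        // default 4, up to 8
  customPrompt?: string; // steer persona generation toward an audience
}

// Example: broader coverage, steered toward one audience.
const options: { personas: PersonaOptions } = {
  personas: {
    count: 8,
    customPrompt: "test as enterprise IT buyer",
  },
};
```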

Sample finding

// One persona entry from report.personaFeedback.personas[]
{
  "personaName":    "Kelly — Price-conscious shopper",
  // Star ratings on a 0–5 scale.  Dimension scores move in half-star
  // steps (e.g. 2.5 = "★★½☆☆"); the overall can land between steps
  // because it is the average of the five dimensions.
  "overallStars":  2.7,    // (3 + 3.5 + 2.5 + 2 + 2.5) / 5
  "ratings": {
    "usability":      3,    // ★★★☆☆
    "visualDesign":   3.5,  // ★★★½☆
    "content":        2.5,  // ★★½☆☆
    "trust":          2,    // ★★☆☆☆
    "accessibility":  2.5   // ★★½☆☆
  },
  "comments": "The pricing page promises 'starting at $29' but when I clicked through\nto checkout the total was $42.  The $13 I didn't expect was a 'platform fee'\nin small text below the line items. I'd close the tab and go look at\ncompetitors. Make the all-in price obvious on the pricing card.",
  "specificIssues": [
    "Hidden platform fee at checkout",
    "No annual-vs-monthly toggle on the pricing card",
    "Compare-plans button is below the fold on a 13-inch laptop"
  ],
  "specificPraise": [
    "Free tier is clearly labeled, no credit card required to try"
  ]
}
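Once a report is in hand, the `report.personaFeedback.personas[]` array shown above can be triaged programmatically. A minimal sketch — the entry shape mirrors the sample, but the threshold and the helper itself are illustrative, not part of the product:

```typescript
// Subset of the sample entry above, just the fields this sketch needs.
interface PersonaEntry {
  personaName: string;
  overallStars: number; // 0–5
  specificIssues: string[];
}

// Collect issues flagged by personas who rated the page below a threshold —
// a reasonable (assumed) heuristic for what to triage first.
function issuesFromUnhappyPersonas(personas: PersonaEntry[], threshold = 3): string[] {
  return personas
    .filter((p) => p.overallStars < threshold)
    .flatMap((p) => p.specificIssues);
}
```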

See also

Compliance: Accessibility / WCAG 2.2 → Robin's persona is heuristic; the dedicated audit gives per-criterion pass/fail mapped to WCAG 2.2.
Agentic: Exploratory Agent → Personas observe a screenshot; the agent actually clicks around. Different signal types.
Run a free persona audit → All seven test types