Persona Feedback
Four to six personas — diverse in tech proficiency, demographic, and goal — independently rate the page across five dimensions and write their reactions in first-person. Surfaces friction a single reviewer can't see: a power user spots inefficiency, a cautious first-timer spots missing trust signals, a thumb-driven mobile user spots layout problems.
What it does: runPersonaFeedback()
Step one: the LLM generates N personas (default 4, tunable to 8) appropriate for the page. Each persona has a name, role, background, list of goals, and list of concerns. Personas are written once per audit so they're consistent across the report.
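A minimal sketch of the persona shape step one describes. The field names mirror the description above (name, role, background, goals, concerns) but are assumptions, not the tool's confirmed schema:

```typescript
// Hypothetical persona shape; field names follow the doc's description.
interface Persona {
  name: string;        // e.g. "Maria"
  role: string;        // e.g. "Cautious first-time visitor"
  background: string;
  goals: string[];
  concerns: string[];
}

// Personas are generated once per audit, so a stable label keeps them
// consistent across the report.
function personaLabel(p: Persona): string {
  return `${p.name} — ${p.role}`;
}
```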
Step two: for each persona, the LLM looks at the screenshot from that persona's viewpoint and returns a structured rating:
- Usability ★★★★★ — how easy is it to do what I came here to do?
- Visual design ★★★★★ — does this look credible and current?
- Content ★★★★★ — is the copy clear, scannable, accurate?
- Trust ★★★★★ — do I feel safe entering my data / making a payment?
- Accessibility ★★★★★ — can I perceive and operate this with my abilities + setup?
Each rating is a 5-star score with a 2–3 sentence comment, a specific-issues list (what bugged them), and a specific-praise list (what worked for them). The overall persona rating is the average of the five.
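The averaging described above can be sketched as follows. The `Ratings` field names are assumptions that mirror the sample finding at the end of this page, not a confirmed schema:

```typescript
// Hypothetical per-persona ratings on the five documented dimensions (0–5,
// half-star precision).
interface Ratings {
  usability: number;
  visualDesign: number;
  content: number;
  trust: number;
  accessibility: number;
}

// Overall persona rating = average of the five dimensions, kept to one
// decimal so half-star averages (e.g. 2.7) survive.
function overallStars(r: Ratings): number {
  const dims = [r.usability, r.visualDesign, r.content, r.trust, r.accessibility];
  const avg = dims.reduce((sum, v) => sum + v, 0) / dims.length;
  return Math.round(avg * 10) / 10;
}
```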
Who shows up in a typical run
Personas are generated to match the page. Common archetypes (with example names — the LLM picks fresh ones):
- Maria — Cautious first-time visitor. Wants reassurance, clarity, trust signals. Hates jargon. Notices when the privacy policy is hidden.
- Greg — Power user. Wants efficiency, keyboard shortcuts, advanced filters. Hates extra clicks.
- Sam — Mobile-thumb user. One-handed iPhone navigation. Wants obvious taps, no horizontal scroll, readable text.
- Robin — Keyboard + screen-reader user. Tab-orders the page, listens for landmarks. Notices missing alt text and focus traps.
- Kelly — Price-conscious shopper. Compares plans side-by-side, hunts for hidden fees, reads return policies first.
- Diana — Skeptical evaluator. Procurement / IT-buyer voice. Wants compliance docs, security pages, SOC 2 status, references.
The mix adapts to the site. A B2C consumer page tends to draw Maria + Sam + Kelly; a B2B SaaS dashboard draws Greg + Diana + Robin. Add a customPrompt to steer ("focus on enterprise procurement personas") or to swap the persona set for your audience.
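The steering knobs mentioned above might look like this in code. `count` and `customPrompt` follow the text; the exact option names and the `clampPersonaCount` helper are assumptions, not the real API:

```typescript
// Hypothetical options for a persona-feedback run.
const personaOptions = {
  count: 8, // default 4; higher values widen the persona mix
  customPrompt: "focus on enterprise procurement personas",
};

// Keeps a requested persona count inside the documented 4–8 range.
function clampPersonaCount(requested: number): number {
  return Math.min(8, Math.max(4, requested));
}
```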
What it surfaces
- "Maria — 4/10 usability: the hero says 'Reimagine your workflow' but I have no idea what this product is. I'd bounce in 3 seconds."
- "Greg — 6/10 usability: I want to bulk-action 50 rows but the only option is to click each one individually."
- "Sam — 3/10 mobile: the sticky cookie banner covers the 'Sign in' button on iPhone 13. Can't dismiss it without first tapping Accept."
- "Robin — 2/10 accessibility: I tabbed to the menu but the dropdowns don't open with Enter. They only respond to mouse hover."
- "Kelly — 5/10 trust: the pricing table says 'starting at $29' but the actual checkout shows $42. Where did the extra come from?"
- "Diana — 6/10 trust: there's a SOC 2 badge in the footer but it links to a 404. Real auditors will check that."
- Praise lists, too: "Greg — visual design 9/10: keyboard shortcuts are documented in a tooltip on every button. Excellent."
Coverage
- Use personas.customPrompt to steer ("test as enterprise IT buyer") or set count: 8 for broader coverage.
- These are simulated personas, not real users. They surface high-signal hypotheses, not statistical confidence.
- No real-user metrics — no analytics, no conversion data, no session recordings. Pair with your own RUM tools.
- The screen-reader persona (Robin) is heuristic — for legal/conformance review use the dedicated WCAG audit plus a real screen-reader pass.
- Personas don't navigate the site — they react to a single screenshot. For interactive UX issues, see Exploratory.
- Doesn't substitute for usability testing on sensitive launches (pricing changes, major redesigns) — use it for triage, not final sign-off.
Sample finding
```jsonc
// One persona entry from report.personaFeedback.personas[]
{
  "personaName": "Kelly — Price-conscious shopper",

  // Star ratings shown to the viewer. Stored in half-star increments on a
  // 0–5 scale so averages across dimensions (e.g. "★★½☆☆" = 2.5) stay
  // representable; some views also show the 0–10 equivalent.
  "overallStars": 2.7,        // average of the five ratings; 2.7/5 = 5.4/10
  "ratings": {
    "usability": 3,           // ★★★☆☆
    "visualDesign": 3.5,      // ★★★½☆
    "content": 2.5,           // ★★½☆☆
    "trust": 2,               // ★★☆☆☆
    "accessibility": 2.5      // ★★½☆☆
  },
  "comments": "The pricing page promises 'starting at $29' but when I clicked through\nto checkout the total was $42. The $13 I didn't expect was a 'platform fee'\nin small text below the line items. I'd close the tab and go look at\ncompetitors. Make the all-in price obvious on the pricing card.",
  "specificIssues": [
    "Hidden platform fee at checkout",
    "No annual-vs-monthly toggle on the pricing card",
    "Compare-plans button is below the fold on a 13-inch laptop"
  ],
  "specificPraise": [
    "Free tier is clearly labeled, no credit card required to try"
  ]
}
```
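A minimal sketch of how a half-star score could be rendered as the star strings shown in the sample (an assumed rendering, not the tool's actual formatter):

```typescript
// Renders a 0–5 score as full stars, an optional half, and empty stars,
// e.g. 3.5 → "★★★½☆". Fractions below .5 are dropped.
function starString(score: number): string {
  const full = Math.floor(score);
  const half = score - full >= 0.5 ? 1 : 0;
  return "★".repeat(full) + (half ? "½" : "") + "☆".repeat(5 - full - half);
}
```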