Alexandria Eden
Making the web work for everyone, and building the guardrails AI agents need to be trusted.
I'm a builder at the crossroads of cognitive science, accessibility, and AI safety. After 17 years in international marketing at the University of Colorado, I've seen how diverse users navigate digital experiences, and how testing tools often fail the people who need them most.
CBrowser started as a question: What if browser automation could think like a real user, not just click buttons? That led to 26 cognitive traits, 21 personas from first-timers to users with tremors, and a safety framework that keeps autonomous agents from doing harm.
The Question That Started It All
Traditional testing asks: "Does this button work?"
I wanted to ask: "Will a confused first-timer on a slow connection find this button, and will they give up before they do?"
The answer meant bringing cognitive science into test automation. It meant modeling patience, frustration, and decision fatigue. And as AI agents go mainstream, it meant building safety guardrails.
Passionate about developer experience, AI safety, and making hard technology easy to use. Interested in DevRel, developer advocacy, and technical evangelism roles.
Remote-first
Beyond Code
When I'm not building personas, I'm a singer and pianist, proof that some patterns can't be automated.
What Drives This Work
Accessibility First
The same design rules that help users with disabilities also help everyone else, including AI agents.
Responsible Automation
Safety zones ensure automation never causes harm. AI agents need guardrails to be trusted.
Open Source
CBrowser is MIT licensed. Good testing tools should be free, not locked behind enterprise pricing.
Natural Language
QA should not require coding skills. Write tests in plain English.
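As a rough illustration of how a plain-English step could drive automation, here is a minimal sketch in Python. The function name, patterns, and action vocabulary are invented for this example and are not CBrowser's actual API.

```python
# Hypothetical sketch: mapping one plain-English test step to a
# structured browser action. Names and patterns are illustrative only.
import re

def parse_step(step: str) -> dict:
    """Translate a plain-English instruction into an action dict."""
    patterns = [
        (r'click (?:the )?"?([^"]+?)"? button', "click"),
        (r'type "([^"]+)" into (?:the )?(.+)', "type"),
        (r'go to (\S+)', "navigate"),
    ]
    for pattern, action in patterns:
        match = re.fullmatch(pattern, step.strip(), re.IGNORECASE)
        if match:
            return {"action": action, "args": list(match.groups())}
    # Anything unrecognized is surfaced rather than silently dropped.
    return {"action": "unknown", "args": [step]}

print(parse_step('click the "Sign up" button'))
# {'action': 'click', 'args': ['Sign up']}
```

A real implementation would resolve the parsed action against the live page, but the shape of the idea is the same: the tester writes English, the tool does the translation.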
The CBrowser Story
CBrowser came from a simple insight: traditional browser automation tools are great at checking if features work. They are terrible at knowing if users can actually use them.
In international marketing, I watched diverse users struggle with interfaces that passed every automated test. Tests said "success." Users said "frustrating."
We needed automation that could simulate real cognitive load. The impatience of a first-time visitor. The confusion of unfamiliar UI. The frustration that builds when things don't work.
The Cognitive Difference
- Traditional: "Did the button click?"
- CBrowser: "Did the user find the button? Were they frustrated?"
- Traditional: Pass/Fail
- CBrowser: Patience level, confusion score, abandonment risk
- Traditional: One-size-fits-all
- CBrowser: Test as elderly users, first-timers, users with disabilities
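The persona idea above can be sketched in a few lines of Python. The trait names, thresholds, and formula here are illustrative assumptions, not CBrowser's actual cognitive model; the point is that an attempt yields a behavioral outcome, not a bare pass/fail.

```python
# Hypothetical sketch of a persona-driven check: track patience and
# frustration while a simulated user searches for an element.
# Trait names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    patience: float            # seconds the user will tolerate searching
    frustration: float = 0.0   # accumulates across attempts
    abandon_threshold: float = 1.0

    def attempt(self, seconds_searching: float) -> str:
        """Simulate one interaction attempt and return its outcome."""
        self.frustration += seconds_searching / self.patience
        if self.frustration >= self.abandon_threshold:
            return "abandoned"
        return "found"

first_timer = Persona(name="first-timer", patience=8.0)
print(first_timer.attempt(3.0))  # "found": frustration is now 0.375
print(first_timer.attempt(6.0))  # "abandoned": frustration reaches 1.125
```

Because frustration carries over between attempts, the same delay that a patient persona shrugs off can push a first-timer past the abandonment threshold, which is exactly the signal a pass/fail tool never reports.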
Join the Community
CBrowser is open source and MIT licensed. Star us on GitHub, contribute, or reach out.