
Hello, I'm

Lana

Research Scientist

confidence intervals
& confident decisions


About Me

Hello, I’m Lana, a research scientist who has navigated from cultural psychology to data science, finding a way to merge the two in user research. I'm a quantitative user researcher with experience spanning FAANG companies, fast-paced start-ups, and my own business ventures. All of this came after my career in healthcare systems and research.

What I do

Throughout my career, I've not only delivered high-impact research that shaped product strategy and user experience, but also played a key role in scaling research teams and operations to meet growing organizational needs. At major tech firms, I've led end-to-end studies that influenced global initiatives, while at start-ups, I've built research infrastructure from scratch, mentoring researchers and defining best practices. As a founder, I brought products to life using a data-driven, user-first approach, giving me a unique lens on innovation. I'm especially passionate about pushing the boundaries of quantitative research - developing new methods, integrating advanced analytics, and helping teams unlock deeper user insights at scale.


My Experience

After SimplePractice’s acquisition by Vista, I launched my own research studio to meet growing demand for rigorous, scalable insights - leading end-to-end projects across AI, healthcare, energy, and autonomous vehicles for clients such as Waymo, and blending strategy, survey science, and cross-functional execution into one seamless practice.

Objective Tendency
 

As the company’s first dedicated UX hire, I built foundational research systems, led a Webby-nominated marketing site redesign, and unified product and marketing through shared mental models, UX metrics, and conversion strategy - expanding market opportunity by over 50% and driving higher-value sales across teams.

SimplePractice

I scaled and unified a cross-functional quantitative research team, standardizing survey practices and telemetry access, embedding research into product, design, and analytics, and aligning roadmaps to business metrics. I worked closely with creative professionals across multiple disciplines to help shape Adobe's creative products to better suit their needs. Highlights include improving satisfaction with AI-generated images by 16%, defining ROI on design decisions, and establishing beta program measures critical for feature launches.

Adobe

I led research and design strategy for developer support tools - leveraging AI, redefining the B2B user journey, and surfacing backend-driven customer issues - while also guiding international workshops for engineering leadership, mentoring researchers company-wide, and presenting externally on survey strategy and response optimization.

Amazon Web Services

As the primary quant researcher for engineering systems at Windows, I led a small team and managed vendor partnerships to deliver 5–10 research reports weekly, blending telemetry, qualitative sentiment, and usability methods to monitor and improve system health. I spearheaded over 15 research projects annually across engineering tools, accessibility, and cultural usability - including leading Windows 10 research for Satya Nadella (Microsoft) in Japan - and built scalable benchmarking programs to prioritize engineering investments. My work directly increased internal engineer satisfaction by 10% and boosted response rates from 12% to 100%, while using big data and market research techniques to deliver actionable insights to stakeholders.

Microsoft

As managing statistician, I led over 100 large-scale projects for Fortune 500 companies, political campaigns, and entertainment clients - developing flexible statistical frameworks for both qualitative and quantitative research while translating complex analyses into actionable strategies for segmentation, pricing, and brand reputation. I managed a remote team of four, standardized workflows to increase output from 3 to 48 projects per day, and coordinated global vendor partnerships and cross-functional communication to ensure seamless project delivery.

Penn, Schoen, and Berland

At a marketing startup, I led a small team and drove branding and SEO strategy, conducting in-house workshops for major clients to align brand identity and market positioning. I also redesigned and unified 50+ websites using generative research and concept testing to improve usability and consistency across digital touchpoints.

Kaizo Marketing

I led a scalable 3D research and data collection lab focused on AR/VR ergonomics, conducting high-profile hardware and software UX studies - including head-mounted display and eye-tracking research - for clients like Microsoft, Intel, and LG across the U.S. and South Korea. I also managed and mentored a team of five graduate optometry students, guiding research on vision ergonomics, safety, and user experience prior to product launch.

Vision Performance Institute

Before transitioning into technology research, I spent the early part of my career deeply embedded in the psychiatry and medical fields—working in both adult and pediatric psychiatric intake, developing triage protocols for crisis lines, and conducting research in clinical settings. These experiences gave me a foundational understanding of human behavior, empathy in high-stakes environments, and the rigor of evidence-based decision-making—all of which continue to shape how I approach research, systems, and user-centered design in tech today.

Previous Life
Medical Systems and Research

Portfolio

Here's a sampling of projects I've led at different companies in various disciplines. To view these in more detail, please visit my full portfolio section.

"We measure the things we care about."


 Locations

Seattle, WA and Dallas, TX


Good Time to Call

Mon to Fri: 7am to 12pm PST


Contact Details

+1-206-420-5666

lanen.vaughn@gmail.com

Contact Me

If you have any questions or would like to find out more, please contact me via email or phone, or fill out the contact form below. Let me know how I can help, and I’ll be more than happy to assist you.

How do you decide when to do Quant or Qual?

The choice between quantitative and qualitative methods is guided first by the nature of the research question, and second by practical constraints such as the available resources, access to participants, and timeline.


In general, quantitative methods are best suited for questions related to scale, magnitude, frequency, and correlation - such as "How many users drop off at this step?" or "What’s the impact of X on Y?" These methods allow for statistical inference and are ideal when generalizability or measurement precision is key.
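To make that concrete, here's a minimal sketch (made-up numbers, in Python with the statsmodels library) of answering "How many users drop off at this step?" with a point estimate and a confidence interval:

```python
# Hypothetical example: estimate a funnel step's drop-off rate with a
# 95% confidence interval. All numbers are illustrative only.
from statsmodels.stats.proportion import proportion_confint

users_reaching_step = 1200  # assumed number of users who reached the step
users_dropping_off = 312    # assumed number who dropped off there

rate = users_dropping_off / users_reaching_step
low, high = proportion_confint(
    count=users_dropping_off,
    nobs=users_reaching_step,
    alpha=0.05,       # 95% confidence level
    method="wilson",  # Wilson score interval, well-behaved for proportions
)
print(f"Drop-off rate: {rate:.1%} (95% CI: {low:.1%} to {high:.1%})")
```

The interval, not just the point estimate, is what lets a team judge whether an observed difference is signal or noise.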


Conversely, qualitative methods are optimal for exploring context, motivation, behavior, and meaning—the "why" and "how" behind user actions. They offer depth, flexibility, and richness, making them especially useful in early-stage discovery, concept validation, and usability research.


That said, in situations where direct observation or interviews aren't feasible - due to constraints like participant availability, geographic reach, or confidentiality - quantitative methods may be used exploratorily. For example, survey data, telemetry logs, or behavioral analytics can serve as proxies to uncover patterns or form hypotheses about user intent and experience.

How do you decide how many anchors to have in a survey question?

When designing research - particularly surveys or measurement frameworks - it is essential to first ground the work in the business problem you’re trying to solve and the type of decision you intend to make. This foundational clarity informs everything from question design to scale construction and how the results are interpreted. 


For example, if the ultimate decision is binary - a go/no-go, yes/no, or launch/hold type of call - then using overly nuanced response scales (e.g., 7-point Likert scales) can introduce unnecessary complexity, often to the point of conflating or distorting respondents’ choices. In such cases, organizations often end up top-boxing or bottom-boxing the results anyway (e.g., treating a 6 or 7 as “yes” and everything else as “no”), effectively reducing the data to a dichotomy that could have been measured more cleanly, intuitively, and accurately by leaving that decision to the respondent. Many people assume that adding more anchor points inherently adds “nuance”; in truth, if those anchors are later combined or “boxed” in the analysis, the analyst is effectively projecting personal assumptions onto the respondents post hoc. There are ways to test the validity of these kinds of changes, but they often require more complex analysis, such as Rasch analysis, to understand the relationship of the respondents to the instrument.
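To make the top-boxing point concrete, here's a minimal sketch in Python (hypothetical responses and an assumed cut-off) of how a 7-point scale often gets collapsed to a binary outcome during analysis anyway:

```python
# Hypothetical illustration of "top-boxing": collapsing 7-point Likert
# responses into a binary yes/no during analysis. The cut-off is the
# analyst's post-hoc choice, not the respondent's.
responses = [7, 3, 6, 5, 2, 7, 4, 6, 1, 5]  # illustrative ratings

TOP_BOX_CUTOFF = 6  # treat 6 or 7 as "yes", everything else as "no"

binary = ["yes" if r >= TOP_BOX_CUTOFF else "no" for r in responses]
top_box_rate = binary.count("yes") / len(binary)

print(binary)                # seven anchors reduced to two categories
print(f"Top-box rate: {top_box_rate:.0%}")
# A respondent who chose 5 is now counted the same as one who chose 1 -
# a distinction a plain yes/no question would have left to them.
```

The dichotomy ends up being the analyst's decision rather than the respondent's, which is exactly what a simpler question would have avoided.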


By starting with the decision logic in mind, you can design a scale or framework that mirrors the granularity of the decision, avoids false precision, and increases clarity for stakeholders. This not only ensures better alignment between research outputs and business actions but also respects the time and cognitive load of respondents.


In short: match your measurement strategy to the decision you're trying to inform—not the other way around.

How do you know when research is “done”?

“Who’s paying for it?” - mentioned here a little in jest, but this question gets to the heart of research prioritization and sustainability.

 

Budget ownership is rarely just about funding; it often signals who has a true stake in the outcomes, and whose goals the research is ultimately meant to serve.


In practice, the continuation or expansion of a research program is heavily influenced by resourcing, visibility, and alignment with business priorities. Even the most rigorous research can stall if it's not actively supported - financially, socially, or operationally. Likewise, whether a project is considered “done” is less about methodological completeness and more about timing, stakeholder bandwidth, and competing initiatives.


The value of research is demonstrated not just in the findings themselves, but in how well those findings are activated - used to make decisions, shape strategy, or drive product changes. Sustained impact often depends on whether the research is socialized, championed, and continuously funded or embedded into decision-making frameworks.


When those conditions - budget, buy-in, and activation - aren’t in place, it's a signal to reassess. In such cases, it's often more strategic to pivot your time and resources to projects where there is clearer traction and organizational support. Research, after all, is only as powerful as its ability to influence real outcomes.

What is one of the biggest mistakes you see in research?

Too often, researchers default to “rules of thumb” rather than designing around the actual research question. A common example is the automatic use of 5-point Likert scales. In reality, Rensis Likert never intended for one universal scale to fit all contexts - his original work involved a highly detailed, time-intensive process to determine the appropriate number and wording of anchors based on the specific topic being measured.

I had the unique opportunity to study under a graduate advisor who had trained alongside Likert himself, which gave me a deep understanding of the methodological rigor behind these scales. That perspective has helped me recognize when modern survey design drifts too far from its foundations.

In industry settings, I often see researchers apply 5-point scales without considering whether they align with business goals, decision logic, or the statistical analysis plan. While the theoretical frameworks from academia provide an essential foundation, research in a product or business environment requires a different lens - one that prioritizes actionability, stakeholder relevance, and contextual fit.
 

What is your process for generating a research report?

A strategic way to design impactful research is to work backwards from the report or decision outcome. This means starting with a clear hypothesis that maps to a KPI, product goal, or user need. For some teams, I find it especially useful to create a draft report early in the process - a placeholder version based on the research questions.


This draft helps clarify what insights we’re aiming to produce, reveals potential secondary questions, and exposes any missing data sources. It also gives an early indication of whether the research will drive actionable value or needs to be reframed. Working this way ensures alignment, sets shared expectations, and keeps the research tightly connected to business impact.

How do you build a team from scratch?

There’s an important distinction between building a team and building a discipline - and the latter is significantly more complex. Building a team often means hiring, structuring, and supporting individuals within a pre-established function. The foundation has already been laid; you're working within existing expectations, processes, and cultural understanding of the role.


Building a discipline, however, is closer to flipping a house. You may inherit a few structural elements - like a vague mandate for “research” - but much of the work involves reimagining what that function could and should be in the context of the company’s current maturity, business goals, and cultural readiness. It requires not only carving out operational space for research to live and grow, but also educating stakeholders, aligning with strategic partners, and continuously demonstrating the value of research in ways that resonate at the executive level.


It’s a blend of vision and advocacy - establishing norms, frameworks, and practices where none exist, while adapting them to the evolving context of the organization. Success means not just shipping studies, but fundamentally shifting how decisions are made and how evidence is integrated into the product development process.

How do you prioritize projects for your team?

I balance the professional goals of my team members with business needs and the associated risks. In practice, it often looks like a whiteboard full of color-coded Post-its mapping out personal growth goals, stakeholder demands, technical complexity, and strategic timing. I take a systems-level view, mapping individual ambitions against organizational objectives, then weighing initiatives based on factors like potential impact, resource availability, cross-functional dependencies, and timing within the fiscal or product roadmap. This helps me identify opportunities where team members can stretch meaningfully without jeopardizing delivery, and ensures that growth isn’t an afterthought but embedded in the flow of the work.

What is your leadership style?

I lead with strategic empathy, systems thinking, and a deep respect for complexity. My approach blends analytical rigor with human-centered insight, grounded in the belief that meaningful outcomes come from thoughtful structures, clear intent, and authentic relationships.


Whether mentoring early-career researchers or reframing high-stakes product decisions, I aim to surface what matters most - both in the data and in people. I create environments where curiosity is protected, imposter syndrome is disarmed, and potential becomes visible. My leadership is quiet but bold: I challenge assumptions, ask better questions, and guide with clarity - often before the roadmap even exists.


I believe the true value of research is not just in what it reveals, but in how it's activated, championed, and built into the decisions that shape products, teams, and culture.

What’s for dinner?

On a personal note - Oh golly. Give me a good soup any day! Here are some of my favorite dishes:


Carrot Coconut Bisque - the show-stopper! Carrot, coconut milk, sweet potato (not yam), ginger. Best paired with green onion pancake.


Mango salsa - semi-ripe mango, cilantro, fresh ginger, lime, red onion, salt and… ginger ale! This combination imparts a sweet and tangy base for an earthy and spicy fish. I prefer it with a blackened mahi-mahi and some coconut rice.


Blueberry sauce - great for pancakes or ice cream: blueberries stewed with orange juice, orange zest, white balsamic, raspberry, honey.


The “everything sauce” - cilantro, garlic, olive oil, salt, lime. This works for 78-95% of people worldwide. Accessible and easy. I’ve even seen some people put it on eggs, which is outside my palate.

Chopped: works on chicken, fish, and certain starches; the flavor combinations pair well with rice.
Blended: great on starchy chips like plantains.

I’ve owned a few restaurants, and it’s always the sauce or the soup that makes the dish. Taking humble ingredients and making them into something delicious that gets better over time is a win-win!

If you’ve made it this far and want to go on a food adventure or experiment with some flavors, then let’s connect!

Why didn’t you stay in hardware research?

The work in hardware was very fulfilling, but I had reached a ceiling in that particular field due to the prevalence of PhDs and MDs. I was actively building and experimenting with tools to measure vision, but my designs wouldn’t get very far without an MD attached to my name. I did complete some graduate/MD-level optometry classes, but getting a medical degree wasn’t going to open the doors to research and measurement. I went into UX research at a time when it wasn’t as defined; we were borrowing from ergonomics, marketing, and some military studies and tools. I saw an opportunity to step into this field and carve out a space adapting those tools for UX research.

Two truths and a lie:

I have a bread company that I started with my sister when I was 7 years old


I lived with the Dalai Lama


I was a corrections officer in a co-ed facility
 
