A labor-market interpretation project examining how job quality is inferred, signaled, and misread.
Purpose
When someone decides whether a job will work for them, they are making an interpretive leap — from a set of visible signals to an assumption about what the work will actually be like.
This project looks at that leap.
- What information is available when it happens?
- What information is not?
This project does not arrive at concrete answers to those questions. It creates the conditions to examine them.
Scope
Timeframe: 2022–2024.
This window captures a labor market in transition rather than equilibrium. Remote work commitments were being tested. Benefits that had been inflated during a tight labor market were normalizing. Quit patterns were shifting. Whether those changes produced the kind of gaps between promise and experience this project is looking at is part of what the data may reveal.
Population: Workers aged 25–40, household income $45,000–$95,000.
This group tends to be at a point where job decisions carry compounding consequences — where acting on incomplete information is less insulated from error than it might be at other career stages. The available data allows for interrogation within this scope. It does not allow for claims beyond it.
Industries: Technology, Healthcare, Professional Services.
These three are not chosen as a representative sample. They are chosen because each one may surface a different version of the tension between how work gets presented and how it tends to be experienced. Whether they do, and whether they surface it differently from each other, is part of what the project will examine.
Geography: U.S. metropolitan areas with populations above 500,000, excluding New York, Los Angeles, Chicago, Houston, and Phoenix.
Excluding the top five metros keeps the scope away from markets that tend to distort labor dynamics in ways that would make patterns harder to observe in the rest of the data. This is a scoping decision, not a claim about where the tensions this project examines are most present.
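Read as a filter, the scope above is compact enough to state in one function. The sketch below is illustrative only: it assumes a record-level pandas DataFrame and invents column names (year, age, household_income, industry, metro_population, metro_name) that stand in for whatever the Data Appendix actually defines.

```python
import pandas as pd

# Hypothetical column names; the real field names depend on the
# assembled dataset and are documented in the Data Appendix.
EXCLUDED_METROS = {"New York", "Los Angeles", "Chicago", "Houston", "Phoenix"}
INDUSTRIES = {"Technology", "Healthcare", "Professional Services"}

def apply_scope(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the project's declared scope filters to a record-level frame."""
    return df[
        df["year"].between(2022, 2024)
        & df["age"].between(25, 40)
        & df["household_income"].between(45_000, 95_000)
        & df["industry"].isin(INDUSTRIES)
        & (df["metro_population"] > 500_000)
        & ~df["metro_name"].isin(EXCLUDED_METROS)
    ]
```

Everything outside this filter is out of scope by construction, which is the point: the boundaries are declared once, up front, rather than negotiated per analysis.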
Structure
The project is composed of three analytical modules, each corresponding to a defined decision phase.
Module 1: Inference Before Application — How job quality is inferred before any interaction occurs.
What happens in the gap between encountering a job and deciding it might be worth pursuing. What signals are available at that stage, which ones tend to get weighted most heavily, and what the data may reveal about why.
Module 2: Experience vs. Signal — How attraction signals diverge from sustainability and lived conditions.
Whether the signals that make a job look attractive tend to align with how it feels to actually work there. What the data on turnover, workload, and lived conditions may show when placed alongside the signals that dominated the initial evaluation. Where those accounts diverge is documented, not reconciled.
Module 3: Decision Under Uncertainty — What information exists, what is omitted, and why confidence persists anyway.
What information exists in public datasets that does not appear to enter the job search process. What that gap may mean for how people form confidence when choosing work. Why that gap appears to persist.
N.B. Each module contains one framing question, one constellation of data, one set of artifacts, and one declared boundary of what the analysis can and cannot support. No module synthesizes the others. No module resolves what came before it. Where datasets produce contradictory accounts, those contradictions are left intact.
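The per-module contract in the note above is regular enough to write down as a schema. A minimal sketch, in Python for concreteness; the Module class, its field names, and the single example entry are assumptions invented here to restate the sentence, not project code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    """One analytical module: exactly one of each component, by design."""
    framing_question: str
    data_constellation: tuple[str, ...]  # dataset classes the module draws on
    artifacts: tuple[str, ...]           # memos and other outputs it produces
    declared_boundary: str               # what the analysis cannot support

# Modules are deliberately independent: no step synthesizes or resolves them.
modules = (
    Module(
        "How is job quality inferred before any interaction occurs?",
        ("Signal Construction",),
        ("Interpretive Memo 1",),
        "Maps how job information is presented, not what a job is like.",
    ),
)
```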
Data Modeling Strategy
Datasets are organized by the role they play in the project, not by where they come from. No dataset class is positioned as more authoritative than another. Where different sources point in different directions, that divergence is observed and documented rather than smoothed over.
Signal Construction — The data that tends to shape how jobs get evaluated before anyone applies. Occupational Outlook Handbook profiles. Job listing conventions. These map what is typically visible during a job search and what tends to get buried. They are used to examine how job information is presented, not what a job is actually like.
Revealed Behavior — The data that reflects what people actually do over time, rather than what they say they want. JOLTS quit and hire rates at the national level. This may show whether the jobs people are drawn to and the jobs people stay in are the same jobs — or not. It does not show why any particular person left any particular role.
Compensation Reality — The data that shows what benefits and pay actually look like across industries and employer sizes. National Compensation Survey prevalence data. This may surface whether what gets highlighted in job listings reflects what is actually common or what is actually rare. It does not evaluate whether those benefits are sufficient or adequate for the people who receive them.
Lived Outcomes — The data that reflects what tends to happen to people over time in specific roles and industries. American Community Survey household-level indicators. This may show whether salary, on its own, predicts the kind of financial stability most people are looking for when they choose work. It does not attribute outcomes to specific job choices or predict what any individual's experience will be.
Experience Texture — Aggregated review themes from Glassdoor and Indeed at the pattern level. Not individual opinions, but friction points that appear repeatedly across large samples within specific industries and role types. These are treated as contextual texture, not as definitive evidence. The threshold at which a theme is treated as recurring is an interpretive choice that is documented, not justified. Different thresholds would surface different patterns.
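The threshold choice flagged above can be made concrete rather than abstract. A minimal sketch, assuming themes have already been extracted and counted; the theme labels and counts are placeholders, and the three thresholds exist only to show that each one surfaces a different set of "recurring" themes.

```python
from collections import Counter

# Placeholder labels standing in for themes extracted from review text.
themes = ["on-call burden"] * 120 + ["schedule churn"] * 45 + ["tooling debt"] * 8

def recurring(counts: Counter, threshold: float, total: int) -> list[str]:
    """Themes whose share of reviews meets the (interpretive) threshold."""
    return [t for t, n in counts.items() if n / total >= threshold]

counts = Counter(themes)
total = sum(counts.values())
for threshold in (0.02, 0.05, 0.30):
    # Different thresholds surface different patterns, as the text notes:
    # 0.02 keeps all three themes, 0.05 keeps two, 0.30 keeps one.
    print(threshold, recurring(counts, threshold, total))
```

Nothing in the sketch decides which threshold is right; it only makes the sensitivity visible, which is what documenting an interpretive choice amounts to.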
What This Project Produces
Interpretive Memos — One per module. Short write-ups that describe what the artifacts surface and where interpretation enters the picture. They stay with the tension they find. Where datasets contradict each other, those contradictions appear as interruptions, not as points to be resolved.
Data Appendix — A full record of field definitions, exclusions, transformations, and gaps. Gaps are documented as gaps: not filled, not explained away. This appendix exists so that readers can evaluate for themselves what the data can and cannot support.
Omission Index — A cross-module artifact that maps what job search systems tend not to surface, which public datasets contain that information, and where the disconnect between available data and actual job search behavior appears to exist. Whether that disconnect reveals something meaningful is not decided by the project.
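Structurally, the Omission Index is a mapping from an omitted signal to the public dataset that carries it. A sketch of one plausible record shape follows; the OmissionRecord class and the example entry are illustrative assumptions, with the pairing drawn loosely from the Compensation Reality class above rather than from any project finding.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OmissionRecord:
    """One row of the Omission Index."""
    omitted_signal: str   # what job search systems tend not to surface
    public_source: str    # which public dataset contains that information
    disconnect_note: str  # where the gap appears; its meaning is left undecided

example = OmissionRecord(
    omitted_signal="benefit prevalence by employer size",
    public_source="National Compensation Survey",
    disconnect_note="present in public data; absent from typical listings",
)
```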
What This Project Looked At, and Why
This project examined what happens in the gap between encountering a job and deciding whether it will work — specifically, what information that decision is actually built on versus what it feels like it's built on. Across three modules, it looked at how signals get weighted before anyone applies, where those signals diverge from lived experience, and what publicly available data exists that never enters the job search process. The datasets used were capable of surfacing patterns and tensions. They were not capable of resolving them. Where contradictions appeared between sources, they were documented rather than reconciled.
The project was not built to answer how someone finds work that works for them. It was built to examine why that question is harder to answer than the systems most people use to answer it were designed to make it feel.